title | labels | bodyText
---|---|---
Type error: 'int' object is not callable when pytorch_lightning is upgraded to version 0.8.1 or above. However, in version 0.7.1, it works normally
|
[
"question",
"won't fix"
] |
Type error: 'int' object is not callable when pytorch_lightning is upgraded to version 0.8.1 or above. However, in version 0.7.1, it works normally
Traceback (most recent call last):
File "/home/zwx/pointNet_family/pointnet.pytorch-master/utils/mytrain.py", line 210, in
triner.fit(model)
File "/home/zwx/anaconda3/envs/PCReg/lib/python3.6/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/home/zwx/anaconda3/envs/PCReg/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1084, in fit
results = self.accelerator_backend.train(model)
File "/home/zwx/anaconda3/envs/PCReg/lib/python3.6/site-packages/pytorch_lightning/accelerators/cpu_backend.py", line 39, in train
results = self.trainer.run_pretrain_routine(model)
File "/home/zwx/anaconda3/envs/PCReg/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1239, in run_pretrain_routine
self.train()
File "/home/zwx/anaconda3/envs/PCReg/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 347, in train
model.train()
TypeError: 'int' object is not callable
What's the reason? Any help will be appreciated.
|
LightningModule and PyTorch models should really be decoupled.
|
[
"feature",
"help wanted",
"discussion"
] |
Not a feature per se, but a design suggestion for the LightningModule to discuss.
I think a lot of people already use PL in such a way that they build the model in other classes, for example defining a Generator and a Discriminator class and only instantiating them inside the LightningModule's init method. The motivation is a modular design and a more plug-and-play nature.
I am actually wondering whether a specific feature requires this coupling (like DDP support or some logging), or whether it was one of the initial simple paths that is now difficult to refactor since the project got larger. By the way, I was thinking something similar about the usage of datasets, and the DataModule update really solved my (hypothetical) problem there. I appreciate the update and wonder: can a similar thing be done with "nn.Module"s?
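For reference, a minimal sketch of the pattern described above (plain nn.Modules composed inside a LightningModule); the class names and dimensions here are made up for illustration:
import torch
from torch import nn
import pytorch_lightning as pl


class Generator(nn.Module):
    def __init__(self, latent_dim: int = 64, out_dim: int = 784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    def __init__(self, in_dim: int = 784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)


class GANLitModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # the Lightning class only wires the plain PyTorch modules together
        self.generator = Generator()
        self.discriminator = Discriminator()

    def forward(self, z):
        return self.generator(z)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)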
|
Incorrect "Saving latest checkpoint" warning
|
[
"bug",
"help wanted",
"checkpointing"
] |
🐛 Bug
"Saving latest checkpoint..." warning appears regardless of whether a ModelCheckpoint exists or save_last is set to True
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Lines 167 to 169
in
a71d62d
# Save latest checkpoint
rank_zero_warn('Saving latest checkpoint..')
self.check_checkpoint_callback(should_check_val=False, force_save=True)
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Lines 196 to 204
in
a71d62d
def check_checkpoint_callback(self, should_check_val, force_save=False):
    model = self.trainer.get_model()
    # when no val loop is present or fast-dev-run still need to call checkpoints
    # TODO bake this logic into the checkpoint callback
    should_activate = not is_overridden('validation_step', model) and not should_check_val
    if should_activate or force_save:
        checkpoint_callbacks = [c for c in self.trainer.callbacks if isinstance(c, ModelCheckpoint)]
        [c.on_validation_end(self.trainer, model) for c in checkpoint_callbacks]
This might confuse a user into thinking the last checkpoint was saved when it was not.
Proposed change:
def check_checkpoint_callback(self, should_check_val, force_save=False):
    model = self.trainer.get_model()
    # when no val loop is present or fast-dev-run still need to call checkpoints
    # TODO bake this logic into the checkpoint callback
    should_activate = not is_overridden('validation_step', model) and not should_check_val
    if should_activate or force_save:
        checkpoint_callbacks = [c for c in self.trainer.callbacks if isinstance(c, ModelCheckpoint)]
        if any(c.save_last for c in checkpoint_callbacks):
            rank_zero_warn('Saving latest checkpoint..')
        [c.on_validation_end(self.trainer, model) for c in checkpoint_callbacks]
|
Out of memory when trainer.save_checkpoint("example.ckpt")
|
[
"bug",
"help wanted",
"checkpointing"
] |
The sbatch session crashes and I get the following error when I include trainer.save_checkpoint("example.ckpt") in my code.
/var/spool/slurmd/job220424/slurm_script: line 15: 39865 Killed python roco_train_mlm_lightning.py --run_name debug --precision 16 --mlm_prob 0.15
slurmstepd: error: Detected 3 oom-kill event(s) in step 220424.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
The same thing happens when I run the code in a notebook on the remote server: the kernel dies. Please help. Thank you.
|
Trainer: Separate framework options from backend options
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
Stop mixing framework and backend options in the Trainer's constructor.
Motivation
I find it confusing both as a user and a backend implementer because it's not obvious which options affect which backend.
Pitch
The backend options could be passed as a separate, specific object:
trainer = Trainer(pl.accelerators.GPU(gpus=8), default_root_dir=".")
trainer = Trainer(pl.accelerators.TPU(tpu_cores=8), default_root_dir=".")
This would remove the need for select_accelerator or at least simplify it by making it a simple
if isinstance(opts, pl.accelerators.GPU):
...
Thoughts?
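A rough sketch of what such option objects might look like; everything here (the GPU/TPU classes, their fields, and the dispatch helper) is hypothetical and not an existing Lightning API:
from dataclasses import dataclass


@dataclass
class GPU:
    # hypothetical backend-option object, not part of pytorch-lightning
    gpus: int = 1
    distributed_backend: str = "ddp"


@dataclass
class TPU:
    tpu_cores: int = 8


def select_accelerator(opts):
    # dispatch on the option object instead of inspecting many Trainer flags
    if isinstance(opts, GPU):
        return f"gpu backend: {opts.gpus} device(s), {opts.distributed_backend}"
    if isinstance(opts, TPU):
        return f"tpu backend: {opts.tpu_cores} cores"
    return "cpu backend"


print(select_accelerator(GPU(gpus=8)))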
|
distributed training: ModelCheckpoint is receiving bad data
|
[
"bug",
"help wanted",
"checkpointing"
] |
You can reproduce in 4 minutes on 0.9.0.
I tried master and got an unrelated wandb error and gave up trying to reproduce there.
you must be on a machine with multiple gpus
git clone git@github.com:huggingface/transformers.git
cd transformers
pip install -e .
pip install -e .[examples] # installs pytorch-lightning==0.8.5
git checkout pl-checkpoint-bug
cd examples/seq2seq
wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
export MAX_LEN=128
export m=sshleifer/student_marian_en_ro_6_3
python finetune.py \
--learning_rate=3e-4 \
--do_train \
--do_predict \
--fp16 \
--val_check_interval 0.25 \
--data_dir wmt_en_ro \
--max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \
--freeze_encoder --freeze_embeds \
--train_batch_size=64 --eval_batch_size=64 \
--tokenizer_name $m --model_name_or_path $m \
--warmup_steps 500 --sortish_sampler --logger_name wandb \
--fp16_opt_level=O1 --task translation --num_sanity_val_steps=0 \
--model_name_or_path $m --gpus 8 --num_train_epochs=1 \
--data_dir wmt_mar_pl --output_dir dmar_pl_only_v3 --save_top_k=10
Results
ls dmar_pl_only_v3/*.ckpt
-rw-r--r-- 1 shleifer shleifer 351351790 Sep 21 23:58 dmar_pl_only_v3/val_avg_bleu=23.3951-step_count=5.ckpt
-rw-r--r-- 1 shleifer shleifer 351351790 Sep 21 23:57 dmar_pl_only_v3/val_avg_bleu=23.2619-step_count=4.ckpt
-rw-r--r-- 1 shleifer shleifer 351351790 Sep 21 23:56 dmar_pl_only_v3/val_avg_bleu=22.6724-step_count=3.ckpt
-rw-r--r-- 1 shleifer shleifer 351351790 Sep 21 23:56 dmar_pl_only_v3/val_avg_bleu=22.2664-step_count=2.ckpt
-rw-r--r-- 1 shleifer shleifer 351351790 Sep 21 23:55 dmar_pl_only_v3/val_avg_bleu=23.2263-step_count=1.ckpt
There are 5 checkpoints with much lower scores. PL thinks the best checkpoint is from step 5, but
cat dmar_pl_only_v3/metrics.json | grep bleu
"val_avg_bleu": 26.4513,
"val_avg_bleu": 25.5289,
"val_avg_bleu": 25.6942,
"val_avg_bleu": 26.2227,
"val_avg_bleu": 25.8546,
(the best checkpoint is step 1)
When I evaluate offline on the best checkpoint without truncation, I get val_bleu = 27+, which makes me nearly certain that the numbers in metrics.json (which I create and save in finetune.py) are correct and the numbers in the saved paths are incorrect.
Is this a known issue with a workaround? How can I fix it? This should be high priority because suboptimal checkpoint saving is a huge productivity drain.
Additional Notes:
The numbers logged to wandb are also the low/wrong ones.
On 1 or 2 GPUs the numbers are identical!
|
Infinite hang when running `Trainer.test` after `Trainer.fit` with DDP
|
[
"bug",
"duplicate",
"help wanted",
"working as intended"
] |
🐛 Bug
If I run Trainer.test after running Trainer.fit with distributed_backend='ddp' then the system hangs.
To Reproduce
Steps to reproduce the behavior:
Run the following script
# main.py
import os
from argparse import ArgumentParser

from pl_examples.models.lightning_template import LightningTemplateModel
from pytorch_lightning import Trainer, seed_everything

seed_everything(234)


def main(args):
    model = LightningTemplateModel(**vars(args))
    trainer = Trainer.from_argparse_args(args)
    trainer.fit(model)  # if this is commented out then test will complete, otherwise it hangs
    trainer.test(model)


def run_cli():
    root_dir = os.path.dirname(os.path.realpath(__file__))
    parent_parser = ArgumentParser(add_help=False)
    parser = LightningTemplateModel.add_model_specific_args(parent_parser, root_dir)
    parser = Trainer.add_argparse_args(parser)
    parser.set_defaults(gpus=2)
    args = parser.parse_args()
    main(args)


if __name__ == '__main__':
    run_cli()
with command line arguments (assuming >= 2 GPUs)
python main.py --gpus 2 --hidden_dim 500 --max_epochs 1 --distributed_backend ddp
Running this script causes the program to hang during test phase.
Expected behavior
I would expect Trainer.test to complete rather than hanging.
Environment
Output of collect_env_details.py:
* CUDA:
- GPU:
- GeForce RTX 2080 Ti
- GeForce RTX 2080 Ti
- available: True
- version: 10.2
* Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.1rc3
- tqdm: 4.49.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.7.5
- version: #51~18.04.1-Ubuntu SMP Sat Sep 5 14:35:50 UTC 2020
PyTorch Version: 1.6.0
OS: Ubuntu 20.04
How you installed PyTorch: pip
Build command you used (if compiling from source):
Python version: 3.7.5
CUDA/cuDNN version: 7.6.5
GPU models and configuration: GeForce RTX 2080 Ti (x2)
List of all installed packages (output of pip freeze):
absl-py==0.10.0
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
decorator==4.4.2
fsspec==0.8.2
future==0.18.2
google-auth==1.21.2
google-auth-oauthlib==0.4.1
grpcio==1.32.0
idna==2.10
importlib-metadata==1.7.0
Markdown==3.2.2
networkx==2.5
numpy==1.19.1
oauthlib==3.1.0
packaging==20.4
Pillow==7.2.0
pkg-resources==0.0.0
protobuf==3.13.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==2.4.7
pytorch-lightning==0.9.1rc3
PyYAML==5.3.1
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
six==1.15.0
tensorboard==2.2.0
tensorboard-plugin-wit==1.7.0
torch==1.6.0
torchvision==0.7.0
tqdm==4.49.0
urllib3==1.25.10
Werkzeug==1.0.1
zipp==3.1.0
Additional context
If I comment out trainer.fit then everything works as expected.
I was able to pause the execution during hang while running in PyCharm. The following are the stack frames for the main thread, which is the only thread I could get to pause.
select, selectors.py:418
wait, connection.py:920
_poll, connection.py:414
poll, connection.py:257
get, queues.py:104
_worker_loop, worker.py:167
run, process.py:99
_bootstrap, process.py:297
_launch, popen_fork.py:74
__init__, popen_fork.py:20
_Popen, context.py:277
_Popen, context.py:223
start, process.py:112
__init__, dataloader.py:737
__iter__, dataloader.py:291
run_evaluation, trainer.py:437
run_test, trainer.py:489
train_or_test, base_backend.py:34
ddp_train, ddp_backend.py:243
train, ddp_backend.py:138
fit, trainer.py:324
wrapped_fn, states.py:48
__test_given_model, trainer.py:627
test, trainer.py:564
wrapped_fn, states.py:48
main, main.py:13
run_cli, main.py:24
<module>, main.py:28
|
LightningDataModule seems to do some dataloader operations on CPU, which was not the case with LightningModule loader methods
|
[
"bug",
"help wanted",
"won't fix",
"data handling",
"priority: 2"
] |
🐛 Bug
While using a LightningDataModule as lit_model(datamodule=datamodule), the model waits for some time using 1 CPU core before beginning training, and periodically stops training (every 50 train steps): GPU utilization goes to 0% and 1 CPU core is in use. This behaviour continues until training finishes.
To Reproduce
Steps to reproduce the behavior:
Create a lightning model which takes a datamodule as input; its __init__ contains self.datamodule = datamodule
In the LightningDataModule I'm using PyTorch's UCF101 dataset
Code sample
class UCF101DataModule(LightningDataModule):
    def setup(self, stage=None):
        if stage == "fit" or stage is None:
            self.train_dataset = datasets.UCF101(
                UCF101_ROOT_PATH,
                UCF101_ANNO_PATH,
                frames_per_clip=5,
                step_between_clips=30,
                num_workers=UCF101_WORKERS,
                train=True,
                fold=self.fold,
            )

    def train_dataloader(self):
        print("Train Dataloader Called")
        return DataLoader(
            self.train_dataset,
            batch_size=self.batch_size,
            num_workers=DATALOADER_WORKERS,
            collate_fn=custom_collate,
            shuffle=True,
        )


class LitModel(LightningModule):
    def __init__(self, datamodule):
        super().__init__()
        self.datamodule = datamodule
Expected behavior
Training loop should start immediately
Environment
CUDA:
GPU:
GeForce RTX 2080 Ti
available: True
version: 10.2
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.0
tqdm: 4.46.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.3
version: 113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020
|
When training on GPU the model does not decrease the loss, on CPU it does
|
[
"bug",
"help wanted"
] |
🐛 Bug
When a toy model is trained on the GPU, the error rate does not seem to go down, but if I use the CPU it does. I have just used the 0.9 version.
To Reproduce
Steps to reproduce the behavior:
Based on the following model
mean_1 = [0, 0]
cov_1 = [[1, 0], [0, 100]]
mean_2 = [5,-7]
cov_2 = [[16, 70], [1000, 0.1]]
class ToyDataset(Dataset):
    def __init__(self, param1, param2):
        mean_1, cov_1 = param1
        mean_2, cov_2 = param2
        data_1 = np.random.multivariate_normal(mean_1, cov_1, 50000)
        y_1 = np.zeros(50000)
        data_2 = np.random.multivariate_normal(mean_2, cov_2, 50000)
        y_2 = np.ones(50000)
        data_all_x = np.concatenate((data_1, data_2), axis=0)
        data_all_y = np.concatenate((y_1, y_2), axis=0)
        idx = list(range(100000))
        random.shuffle(idx)
        self.data_all_x = data_all_x[idx]
        self.data_all_y = data_all_y[idx]

    def __getitem__(self, sample_index):
        return self.data_all_x[sample_index], self.data_all_y[sample_index]

    def __len__(self):
        return len(self.data_all_y)


class PocLightning(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.sigmoid(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x.float())
        criterion = nn.BCELoss()
        loss = criterion(y_hat, y.float())
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x.float())
        criterion = nn.BCELoss()
        loss = criterion(y_hat, y.float())
        return loss

    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x.float())
        criterion = nn.BCELoss()
        loss = criterion(y_hat, y.float())
        return loss

    def configure_optimizers(self):
        opt_SGD = torch.optim.SGD(self.parameters(), lr=0.001, momentum=0.9)
        return opt_SGD

    def prepare_data(self):
        self.train_data = ToyDataset((mean_1, cov_1), (mean_2, cov_2))
        self.test_data = ToyDataset((mean_1, cov_1), (mean_2, cov_2))
        self.val_data = ToyDataset((mean_1, cov_1), (mean_2, cov_2))

    def train_dataloader(self):
        return DataLoader(self.train_data, batch_size=32)

    def val_dataloader(self):
        return DataLoader(self.val_data, batch_size=32)

    def test_dataloader(self):
        return DataLoader(self.test_data, batch_size=32)
When the model is trained on the GPU
#train
trainer = pl.Trainer(gpus=1, max_epochs=2)
model = PocLightning()
trainer.fit(model)
result
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0]
:8: RuntimeWarning: covariance is not positive-semidefinite.
data_2 = np.random.multivariate_normal(mean_2, cov_2, 50000)
| Name | Type | Params
0 | fc1 | Linear | 360
1 | fc2 | Linear | 10 K
2 | fc3 | Linear | 85
Epoch 1: 100%
6250/6250 [00:14<00:00, 417.76it/s, loss=0.690, v_num=85, val_loss=0.691]
Saving latest checkpoint..
But when training without the GPU
trainer = pl.Trainer(gpus=0, max_epochs=2)
model = PocLightning()
trainer.fit(model)
result
GPU available: True, used: False
TPU available: False, using: 0 TPU cores
:8: RuntimeWarning: covariance is not positive-semidefinite.
data_2 = np.random.multivariate_normal(mean_2, cov_2, 50000)
| Name | Type | Params
0 | fc1 | Linear | 360
1 | fc2 | Linear | 10 K
2 | fc3 | Linear | 85
Epoch 1: 100%
6250/6250 [00:10<00:00, 580.56it/s, loss=0.111, v_num=86, val_loss=0.115]
Saving latest checkpoint..
Environment
CUDA:
GPU:
GeForce RTX 2080 Ti
available: True
version: 10.1
Packages:
numpy: 1.19.1
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.0
tqdm: 4.47.0
System:
OS: Linux
architecture:
64bit
ELF
processor:
python: 3.8.5
version: #1 SMP Debian 4.19.98-1 (2020-01-26)
|
Log validation metrics before training
|
[
"question",
"won't fix"
] |
❓ Questions and Help
Is there an easy way to run a full evaluation on the validation set before starting training? I would like this as a kind of benchmark to see where I'm starting from and whether the network learns anything at all.
While #1715 allows running the sanity check on the complete validation set, this does not log any metrics.
I tried the code as recommended:
class run_validation_on_start(Callback):
    def __init__(self):
        pass

    def on_train_start(self, trainer: Trainer, pl_module):
        return trainer.run_evaluation(test_mode=False)
Originally posted by @dvirginz in #1715 (comment)
but this gives me the following error:
Traceback (most recent call last):██████████████████████████████████████████| 12/13 [00:00<00:00, 10.56it/s]
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/scratch/bartonp/avamap/trainer/train.py", line 95, in <module>
main(hparams)
File "/scratch/bartonp/avamap/trainer/train.py", line 52, in main
trainer.fit(model, train_loader, val_loader)
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1073, in fit
results = self.accelerator_backend.train(model)
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py", line 51, in train
results = self.trainer.run_pretrain_routine(model)
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1239, in run_pretrain_routine
self.train()
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 363, in train
self.on_train_start()
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py", line 111, in on_train_start
callback.on_train_start(self, self.get_model())
File "/scratch/bartonp/avamap/trainer/train.py", line 17, in on_train_start
return trainer.run_evaluation(test_mode=False)
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 603, in run_evaluation
self.on_validation_end()
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py", line 176, in on_validation_end
callback.on_validation_end(self, self.get_model())
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 27, in wrapped_fn
return fn(*args, **kwargs)
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 357, in on_validation_end
filepath = self.format_checkpoint_name(epoch, ckpt_name_metrics)
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/site-packages/pytorch_lightning/callbacks/model_checkpoint.py", line 253, in format_checkpoint_name
groups = re.findall(r'(\{.*?)[:\}]', self.filename)
File "/scratch/bartonp/miniconda/envs/eco/lib/python3.7/re.py", line 223, in findall
return _compile(pattern, flags).findall(string)
TypeError: expected string or bytes-like object
Is there no simple way to run and log the validation set before training?
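One workaround, as a minimal sketch in plain PyTorch rather than a Lightning API: run the validation loop manually once before calling trainer.fit and log the aggregate yourself. The names val_loader and the 'val_loss' key are assumptions from this example, not something Lightning provides.
import torch

model.eval()
losses = []
with torch.no_grad():
    for batch_idx, batch in enumerate(val_loader):
        out = model.validation_step(batch, batch_idx)
        # assumes validation_step returns a dict containing a 'val_loss' tensor
        losses.append(out['val_loss'])
initial_val_loss = torch.stack(losses).mean()
print(f"validation loss before training: {initial_val_loss:.4f}")
model.train()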
|
#3598 does not allow monitoring tensors logged via `TrainResult`
|
[
"bug",
"help wanted"
] |
🐛 Bug
Code sample
@pytest.mark.parametrize("monitor", ["tr_foo", "tr_bar", "va_foo", "va_bar"])
def test(tmpdir, monitor):
    model = DeterministicModel()

    def training_step(batch, batch_idx):
        acc = model.step(batch, batch_idx)
        result = TrainResult(minimize=acc)
        result.log("tr_foo", torch.randn(1), on_step=False, on_epoch=True)
        result.log("tr_bar", torch.randn(1), on_step=False, on_epoch=True)
        return result

    def validation_step(*args, **kwargs):
        result = EvalResult()
        result.log("va_foo", torch.randn(1), on_step=False, on_epoch=True)
        result.log("va_bar", torch.randn(1), on_step=False, on_epoch=True)
        return result

    model.training_step = training_step
    model.validation_step = validation_step
    model.validation_step_end = None
    model.validation_epoch_end = None

    trainer = Trainer(
        default_root_dir=tmpdir,
        early_stop_callback=EarlyStopping(monitor=monitor),
        checkpoint_callback=ModelCheckpoint(monitor=monitor),
        limit_train_batches=3,
        limit_val_batches=3,
        max_epochs=2,
        weights_summary=None,
    )
    trainer.fit(model)
tr_foo and tr_bar fail. va_foo and va_bar work.
Expected behavior
No failure; the callbacks correctly track their configured monitor.
Environment
Current master
@williamFalcon
|
automatically copy state-dict when using ddp
|
[
"feature",
"help wanted"
] |
🚀 Feature
Copy model state-dict from rank 0 process to other processes when using ddp
Motivation
This would mean that the user does not need to worry about initializing models with the same weights
Alternatives
Alternatively, Lightning could at least check whether the weights are the same and, if not, warn the user or throw an exception.
I'm not sure whether this is possible or how easily it can be accomplished, but I could imagine it being a source of errors.
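For reference, a rough sketch of the idea in plain torch.distributed (not an existing Lightning hook); broadcast_state_dict is a made-up helper name and the sketch assumes the process group is already initialized, as it is under ddp:
import torch
import torch.distributed as dist


def broadcast_state_dict(model, src: int = 0):
    # Sync model weights and buffers from rank `src` to all other ranks in place.
    # Assumes each rank's model already lives on the correct device.
    for tensor in model.state_dict().values():
        if torch.is_tensor(tensor):
            dist.broadcast(tensor, src=src)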
|
Unexpected key(s) in state_dict Error when calling `load_from_checkpoint`
|
[
"question"
] |
❓ Questions and Help
What is your question?
Unexpected key(s) in state_dict Error when calling load_from_checkpoint
Code
class smallCNN(pl.LightningModule):
    def __init__(self, out_class) -> None:
        super().__init__()
        self.net_1 = nn.Sequential(
            nn.Conv2d(3, out_channels=8, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(8),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(8, out_channels=16, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(16),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(16, out_channels=32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(32),
            nn.MaxPool2d(2, 2),
        )
        self.net_2 = nn.Sequential(nn.Linear(512, out_class))

    def forward(self, x):
        out1 = self.net_1(x)
        x = out1.view(x.shape[0], -1)
        out2 = self.net_2(x)
        return out1, out2


class Model(pl.LightningModule):
    def __init__(self, load_model=True, **kwargs) -> None:
        super().__init__()
        self.save_hyperparameters()
        if load_model:
            self.pre_CNN_here = smallCNN.load_from_checkpoint(
                './model_file/pretrain/lightning_logs/version_0/checkpoints/epoch=4.ckpt', out_class=256)
        else:
            self.pre_CNN_here = smallCNN(256)
        self.criterion = Loss.SparseCircleLoss(m=0.25, emdsize=256, class_num=self.hparams.class_num, gamma=128)

    def configure_optimizers(self):
        ....

    def training_step(self, batch, batch_idx):
        ....
        result = TrainResult(minimize=loss, checkpoint_on=loss)
        result.log('Train/Loss', loss, on_step=False, on_epoch=True)
        return result

    def forward(self, batch):
        ....


if __name__ == "__main__":
    PreTrain_flag = True
    parser = ArgumentParser(add_help=False)
    if PreTrain_flag:
        #! PreTrain Para
        parser.add_argument('--max_epochs', type=int, default=500)
    else:
        #! Train Para
        parser.add_argument('--load_pretrain', type=bool, default=True)
        parser.add_argument('--max_epochs', type=int, default=500)
    parser.add_argument('--check_val_every_n_epoch', type=int, default=5)
    parser.add_argument('--gpus', type=int, default=1)
    parser.add_argument('--fast_dev_run', type=bool, default=False)  # quick experiment
    args = parser.parse_args()
    if PreTrain_flag:
        pre_model = Model(**vars(args), load_model=False)
        trainer = pl.Trainer.from_argparse_args(args, default_root_dir='./model_file/pretrain/')
        trainer.fit(pre_model)
    else:
        model = Model(**vars(args), load_model=True)
        trainer = pl.Trainer.from_argparse_args(args, default_root_dir='./model_file/normal/')
        trainer.fit(model)
First, I pretrained this model and saved a checkpoint file. Then, when I wanted to load this checkpoint file with .load_from_checkpoint(), an error was raised:
RuntimeError
Error(s) in loading state_dict for smallCNN:
Missing key(s) in state_dict: "net_1.0.weight", "net_1.0.bias", "net_1.2.weight", "net_1.2.bias", "net_1.2.running_mean", "net_1.2.running_var", "net_1.4.weight", "net_1.4.bias", "net_1.6.weight", "net_1.6.bias", "net_1.6.running_mean", "net_1.6.running_var", "net_1.8.weight", "net_1.8.bias", "net_1.10.weight", "net_1.10.bias", "net_1.10.running_mean", "net_1.10.running_var", "net_2.0.weight", "net_2.0.bias".
Unexpected key(s) in state_dict: "pre_CNN_here.net_1.0.weight", "pre_CNN_here.net_1.0.bias", "pre_CNN_here.net_1.2.weight", "pre_CNN_here.net_1.2.bias", "pre_CNN_here.net_1.2.running_mean", "pre_CNN_here.net_1.2.running_var", "pre_CNN_here.net_1.2.num_batches_tracked", "pre_CNN_here.net_1.4.weight", "pre_CNN_here.net_1.4.bias", "pre_CNN_here.net_1.6.weight", "pre_CNN_here.net_1.6.bias", "pre_CNN_here.net_1.6.running_mean", "pre_CNN_here.net_1.6.running_var", "pre_CNN_here.net_1.6.num_batches_tracked", "pre_CNN_here.net_1.8.weight", "pre_CNN_here.net_1.8.bias", "pre_CNN_here.net_1.10.weight", "pre_CNN_here.net_1.10.bias", "pre_CNN_here.net_1.10.running_mean", "pre_CNN_here.net_1.10.running_var", "pre_CNN_here.net_1.10.num_batches_tracked", "pre_CNN_here.net_2.0.weight", "pre_CNN_here.net_2.0.bias".
File "/home/zpy/MyProject/3d-lstm/mainOrg.py", line 40, in __init__
self.pre_CNN_here = Model.lstmSmall.smallCNN.load_from_checkpoint('./model_file/pretrain/lightning_logs/version_0/checkpoints/epoch=4.ckpt', out_class=256)
File "/home/zpy/MyProject/3d-lstm/mainOrg.py", line 221, in <module>
pre_model = PretrainModel(**vars(args))
What have you tried?
I followed the tutorial to write my code.
What's your environment?
OS: Linux
Version: 0.9.0
Additional Info
These are my hyperparameters:
check_val_every_n_epoch: 5
fast_dev_run: false
gpus: 1
max_epochs: 500
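The checkpoint was saved from the wrapper Model, so every weight key carries the pre_CNN_here. prefix. A minimal sketch of a manual fix (the checkpoint path comes from the snippet above; the dictionary handling assumes Lightning's usual 'state_dict' checkpoint key) is to strip that prefix before loading the weights into a bare smallCNN:
import torch

# Sketch: load the Lightning checkpoint and drop the "pre_CNN_here." prefix
# so the weights fit a standalone smallCNN instance.
ckpt_path = './model_file/pretrain/lightning_logs/version_0/checkpoints/epoch=4.ckpt'
checkpoint = torch.load(ckpt_path, map_location='cpu')

prefix = 'pre_CNN_here.'
stripped = {
    key[len(prefix):]: value
    for key, value in checkpoint['state_dict'].items()
    if key.startswith(prefix)
}

cnn = smallCNN(out_class=256)
cnn.load_state_dict(stripped)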
|
DDP is not working for me...
|
[
"help wanted",
"working as intended"
] |
🐛 Bug
I ran the mnist example code in this repo.
When I executed mnist.py, I could see one GPU's usage at 100% and the process never finished.
After changing ddp to ddp_spawn, it works.
But I want to use ddp!
To Reproduce
Steps to reproduce the behavior:
Run mnist.py.
Environment
script : mnist.py.
You can get the script and run it with:
python mnist.py --batch_size 128 --max_epochs 2 --gpus '0,1' --distributed_backend 'ddp'
PyTorch Version (e.g., 1.0): 1.6.0+cu101
OS (e.g., Linux): Ubuntu 18.04 (Linux)
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source): No
Python version: 3.6.9
CUDA/cuDNN version: 10.1/7.6.4
GPU models and configuration: Geforce 1080
Any other relevant information:
Additional context
|
Support checkpointing for Sub-Epoch period
|
[
"feature",
"help wanted"
] |
Question
When setting period to a fractional value, checkpointing doesn't trigger correctly. Additionally, I think period should default to val_check_interval, if it doesn't already.
To Reproduce
Steps to reproduce the behavior:
Run any model and set the checkpoint period to a fractional value. Only the first checkpoint will be saved.
Expected behavior
A checkpoint should be saved every specified period
Environment
Lightning Version: 0.9.0
PyTorch Version (e.g., 1.0): 1.6
OS (e.g., Linux): Ubuntu 16.04
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version: 10.1
|
Incorrect progress bar during validation
|
[
"help wanted",
"working as intended"
] |
🐛 Bug
When running for multiple epochs, the progress bar doesn't look right.
During training the progress bar looks fine (the percentage increases and rewrites over itself); in the example below this goes on fine until 67% of the epoch. However, during validation, instead of switching to "Validating", the "training" progress bar is printed over it, and again on a new line for every validation iteration.
Here's where i think the "validation" message is printed: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/callbacks/progress.py#L349
and immediately after that, it's overwritten: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/callbacks/progress.py#L350
Epoch 0: 67%|███████ | 1000/1500 [00:03<00:01, 284.98it/s, loss=0.054, v_num=14]
Epoch 0: 68%|███████ | 1014/1500 [00:03<00:01, 288.05it/s, loss=0.054, v_num=14]
Epoch 0: 78%|████████ | 1166/1500 [00:03<00:01, 322.01it/s, loss=0.054, v_num=14]
Epoch 0: 89%|█████████ | 1332/1500 [00:03<00:00, 357.92it/s, loss=0.054, v_num=14]
Epoch 0: 100%|██████████| 1500/1500 [00:03<00:00, 391.67it/s, loss=0.054, v_num=14]
Epoch 1: 67%|███████ | 1000/1500 [00:03<00:01, 270.00it/s, loss=0.051, v_num=14]
Epoch 1: 78%|████████ | 1174/1500 [00:03<00:01, 309.87it/s, loss=0.051, v_num=14]
Epoch 1: 91%|█████████ | 1358/1500 [00:03<00:00, 349.22it/s, loss=0.051, v_num=14]
Epoch 1: 100%|██████████| 1500/1500 [00:03<00:00, 377.86it/s, loss=0.051, v_num=14]
Epoch 2: 67%|███████ | 1000/1500 [00:04<00:02, 238.99it/s, loss=0.049, v_num=14]
Epoch 2: 74%|████████ | 1104/1500 [00:04<00:01, 260.11it/s, loss=0.049, v_num=14]
Epoch 2: 88%|█████████ | 1314/1500 [00:04<00:00, 302.46it/s, loss=0.049, v_num=14]
Epoch 2: 100%|██████████| 1500/1500 [00:04<00:00, 337.50it/s, loss=0.049, v_num=14]
Epoch 3: 67%|███████ | 1000/1500 [00:03<00:01, 251.44it/s, loss=0.047, v_num=14]
Epoch 3: 70%|███████ | 1050/1500 [00:04<00:01, 262.49it/s, loss=0.047, v_num=14]
Epoch 3: 84%|█████████ | 1260/1500 [00:04<00:00, 306.74it/s, loss=0.047, v_num=14]
Epoch 3: 100%|██████████| 1500/1500 [00:04<00:00, 354.20it/s, loss=0.047, v_num=14]
Epoch 4: 67%|███████ | 1000/1500 [00:04<00:02, 233.42it/s, loss=0.047, v_num=14]
Epoch 4: 70%|███████ | 1050/1500 [00:04<00:01, 243.11it/s, loss=0.047, v_num=14]
Epoch 4: 84%|█████████ | 1260/1500 [00:04<00:00, 284.35it/s, loss=0.047, v_num=14]
Epoch 4: 100%|██████████| 1500/1500 [00:04<00:00, 328.15it/s, loss=0.047, v_num=14]
Epoch 5: 67%|███████ | 1000/1500 [00:04<00:02, 210.54it/s, loss=0.046, v_num=14]
Epoch 5: 70%|███████ | 1050/1500 [00:04<00:02, 219.27it/s, loss=0.046, v_num=14]
Epoch 5: 84%|█████████ | 1260/1500 [00:04<00:00, 255.91it/s, loss=0.046, v_num=14]
Validating: 61%|███████ | 307/500 [00:00<00:00, 1482.33it/s]
Epoch 5: 100%|██████████| 1500/1500 [00:05<00:00, 295.12it/s, loss=0.046, v_num=14]
Epoch 6: 67%|███████ | 1000/1500 [00:04<00:02, 211.04it/s, loss=0.046, v_num=14]
To Reproduce
Take the sample code from the project page and run it for multiple epochs (see the amended code below)
Code sample
import os
import torch
from torch import nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
import pytorch_lightning as pl


class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))

    def forward(self, x):
        # in lightning, forward defines the prediction/inference actions
        embedding = self.encoder(x)
        return embedding

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop. It is independent of forward
        x, y = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        result = pl.TrainResult(loss)
        return result

    def validation_step(self, batch, batch_idx):
        x, y = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        result = pl.EvalResult(loss)
        return result

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer


dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
train, val, unused = random_split(dataset, [1000, 500, len(dataset) - 1000 - 500])

autoencoder = LitAutoEncoder()
trainer = pl.Trainer(max_epochs=10)
trainer.fit(autoencoder, DataLoader(train), DataLoader(val))
The expected behaviour is:
The correct progress bar for "validation" is printed in the console
It is not overwritten by the "training" progress bar
No new line is printed for every iteration
Environment:
Windows with pytorch 1.6.0 (or nightly) from conda
pytorch-lightning from master
|
How to use a custom Callback with Trainer.from_argparse_args
|
[
"question"
] |
❓ Questions and Help
Before asking:
search the issues.
search the docs.
What is your question?
I'd like to specify a custom Callback while passing argparse parameters using Trainer.from_argparse_args
Code
For example, I've tried something like this with no success:
trainer = Trainer(callbacks=[CustomCallback()]).from_argparse_args(args)
which doesn't seem to properly apply the callback.
What is the proper way to define a custom callback within a trainer when using from_argparse_args?
What have you tried?
However, this DOES work as expected when resuming from a checkpoint:
trainer = Trainer(
resume_from_checkpoint=ckpt_fname,
callbacks=[CustomCallback()])
What's your environment?
OS: Linux
Packaging: conda
Version: 0.9.0
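One approach that should work, hedged on from_argparse_args accepting extra keyword arguments that are merged with the parsed ones, is to pass the callback directly to the classmethod. The snippet in the question first builds a Trainer (whose callbacks are then discarded) and calls from_argparse_args on it, which returns a brand-new Trainer constructed only from args.
# Sketch: build the Trainer once, passing both the argparse namespace and the callback.
trainer = Trainer.from_argparse_args(args, callbacks=[CustomCallback()])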
|
Creation of many data module instances incurs RecursionError
|
[
"bug",
"help wanted"
] |
🐛 Bug
Thank you for a nice framework!
When I repeated hundreds of experiments, each time with a new instance of a single LightningDataModule class, a RecursionError was raised. I also found that creating data modules and calling setup() was enough to reproduce the issue.
To Reproduce
Please look at the following code sample and error messages:
Code sample
import pytorch_lightning as pl


class DummyDM(pl.LightningDataModule):
    def setup(self, stage=None):
        pass


if __name__ == "__main__":
    MAX_ITERS = 1000
    for i in range(MAX_ITERS):
        try:
            dm = DummyDM()
            dm.setup()
        except RecursionError:
            print(f"RecursionError occured in the {i}-th iteration!")
            raise
Error messages
RecursionError occured in the 998-th iteration!
Traceback (most recent call last):
File "test_dm.py", line 18, in <module>
dm.setup()
File "/workspace/src/.venv/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 85, in wrapped_fn
return fn(*args, **kwargs)
File "/workspace/src/.venv/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 85, in wrapped_fn
return fn(*args, **kwargs)
File "/workspace/src/.venv/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 85, in wrapped_fn
return fn(*args, **kwargs)
[Previous line repeated 995 more times]
File "/workspace/src/.venv/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 69, in wrapped_fn
if fn.__name__ == 'setup':
RecursionError: maximum recursion depth exceeded in comparison
Expected behavior
The above code sample is expected to exit without any outputs.
Environment
PyTorch Version (e.g., 1.0): 1.6.0
PytorchLightning Version: 0.9.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source): n/a
Python version: 3.8.2
CUDA/cuDNN version: 10.2
GPU models and configuration: 1080Ti
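Judging from the traceback, setup appears to be re-wrapped on every instantiation, so the wrappers stack up until the recursion limit is hit. A generic sketch of that failure mode (not the actual Lightning code; the class and decorator names are made up):
import functools


class DataModuleLike:
    def setup(self, stage=None):
        pass


def track_calls(fn):
    @functools.wraps(fn)
    def wrapped_fn(*args, **kwargs):
        # bookkeeping would happen here
        return fn(*args, **kwargs)
    return wrapped_fn


for i in range(2000):
    # re-wrapping the *class* attribute on every instantiation keeps nesting wrappers,
    # so each call to setup() recurses one level deeper per created instance
    DataModuleLike.setup = track_calls(DataModuleLike.setup)
    dm = DataModuleLike()
    try:
        dm.setup()
    except RecursionError:
        print(f"RecursionError after {i} re-wraps")
        break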
|
TrainResult.log doesn't work like log_dict
|
[
"bug",
"help wanted"
] |
🐛 Bug
When using TrainResult as the return value of LightningModule.training_step(), TrainResult.log() does not add metrics to Trainer.logged_metrics, which makes ModelCheckpoint.format_checkpoint_name() not work properly.
To Reproduce
class Resnet18(pl.LightningModule):
    def __init__(self, input_dim=40, numclass=1211, learning_rate=0.1, batch_size=128, num_workers=3,
                 **kwargs):
        super(Resnet18, self).__init__()
        self.save_hyperparameters()
        self.example_input_array = torch.rand((1, 200, input_dim))
        self.net = resnet18(num_classes=numclass)

    def forward(self, x):
        """
        input: size (batch, seq_len, input_features)
        output: size (batch, new_seq_len, output_features)
        """
        x = torch.unsqueeze(x, 1)
        x = torch.cat([x, x, x], 1)
        x = self.net(x)
        return x

    def train_dataloader(self):
        transform = transforms.Compose([transforms.ToTensor()])
        trainloader = Dataset(path,
                              transform=transform)
        trainloader = DataLoader(trainloader,
                                 batch_size=self.hparams.batch_size,
                                 shuffle=True,
                                 num_workers=self.hparams.num_workers,
                                 pin_memory=True)
        return trainloader

    def loss(self, input, target):
        return F.cross_entropy(input, target)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        result = pl.TrainResult(minimize=loss, checkpoint_on=loss)
        result.log('train_loss', loss)  # does not work well
        print(result.get_epoch_log_metrics())  # prints None and the checkpoint file can't get the metric
        result.log_dict({'train_loss': loss})  # works
        print(result.get_epoch_log_metrics())  # prints train_loss and the checkpoint file correctly gets the metric
        return result


if __name__ == '__main__':
    ckpt = ModelCheckpoint(filepath=osp.join("save/ckpt", "{epoch:03d}-{train_loss:.2f}"),
                           monitor='checkpoint_on',
                           mode='min',
                           save_top_k=-1,
                           verbose=True,
                           save_weights_only=True)
    callbacks = [ckpt]
    # model
    model = Resnet18()
    # training
    trainer = pl.Trainer(gpus=args.gpus,
                         max_epochs=2,
                         profiler=True,
                         checkpoint_callback=ckpt,
                         early_stop_callback=False,
                         # callbacks=callbacks,
                         limit_train_batches=0.01, )
    trainer.fit(model)
Environment
How you installed PyTorch (conda):
CUDA:
- GPU:
- TITAN Xp
- available: True
- version: 9.2
Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.0
- tqdm: 4.48.2
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.9
- version: #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018
|
EvalResult.write() should use self.logger
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
I quite like using EvalResult.write() to generate a report from the test_step. However, I feel this should be integrated with self.logger
Motivation
By using self.logger for all logging, consistency is maintained - files end up in the same location and it's easy to enable / disable logging. In particular, I'm using mlflow. I'd like my predictions.pt to end up as a mlflow artifact. At the moment I'm using the code below (in test_step) - this works fine but I'm using file:./mlflow as the tracking url - not sure this would work with an http uri.
filename = os.path.join(self.logger.save_dir,self.logger.experiment_id,self.logger.run_id,'artifacts','predictions.pt')
result.write('src', [';'.join(src)], filename=filename)
result.write('tgt', [';'.join(tgt)], filename=filename)
result.write('preds', [';'.join(preds)], filename=filename)
Also, it would be nice to be able to control the logging format - I'm doing NLP, so I'm logging sentences. It would be nicer if the file were txt / html rather than pt (i.e. one of the supported mlflow artifact formats). Also, result.write('src', 'the quick brown fox', filename=filename) fails - it needs to be wrapped as a singleton array, i.e. result.write('src', ['the quick brown fox'], filename=filename) - it might be nice if strings were handled as a special case.
|
switch from LBFGS to ADAM optimizer during the training loop
|
[
"question"
] |
Is it possible to show how we should write the configure_optimizers and training_step functions for the following code?
The purpose of the code is to switch the optimizer from LBFGS to Adam when loss_SUM < 0.3.
optimizer = optim.LBFGS(model.parameters(), lr=0.003)
Use_Adam_optim_FirstTime = True
Use_LBFGS_optim = True

for epoch in range(30000):
    loss_SUM = 0
    for i, (x, t) in enumerate(GridLoader):
        x = x.to(device)
        t = t.to(device)
        if Use_LBFGS_optim:
            def closure():
                optimizer.zero_grad()
                lg, lb, li = problem_formulation(x, t, x_Array, t_Array, bndry, pi)
                loss_total = lg + lb + li
                loss_total.backward(retain_graph=True)
                return loss_total
            loss_out = optimizer.step(closure)
            loss_SUM += loss_out.item()
        elif Use_Adam_optim_FirstTime:
            Use_Adam_optim_FirstTime = False
            optimizerAdam = optim.Adam(model.parameters(), lr=0.0003)
            model.load_state_dict(checkpoint['model'])
            optimizerAdam.zero_grad()
            lg, lb, li = problem_formulation(x, t, x_Array, t_Array, bndry, pi)
            lg.backward()
            lb.backward()
            li.backward()
            optimizerAdam.step()
            loss_SUM += lg.item() + lb.item() + li.item()
        else:
            optimizerAdam.zero_grad()
            lg, lb, li = problem_formulation(x, t, x_Array, t_Array, bndry, pi)
            lg.backward()
            lb.backward()
            li.backward()
            optimizerAdam.step()
            loss_SUM += lg.item() + lb.item() + li.item()
    if loss_SUM < .3 and Use_LBFGS_optim:
        Use_LBFGS_optim = False
        checkpoint = {'model': model.state_dict(),
                      'optimizer': optimizer.state_dict()}
|
How to use LBFGS in PyTorch Lightning
|
[
"question"
] |
How can LBFGS be used in Lightning? The loss in the following code does not change, and it seems that LBFGS is not being used correctly.
def configure_optimizers(self):
    optimizer = optim.LBFGS(self.parameters(), lr=self.hparams.lr_LBFGS)
    return optimizer

def training_step(self, train_batch, batch_idx):
    x, t = train_batch
    lg, lb, li = self.problem_formulation(x, t, self.x_Array, self.t_Array, self.bndry, self.pi)
    loss = lg + lb + li
    self.lg = lg
    self.lb = lb
    self.li = li
    return {'loss': loss, 'lg': lg, 'lb': lb, 'li': li}

def backward(self, trainer, loss, optimizer, optimizer_idx):
    loss.backward(retain_graph=True)

def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_idx, second_order_closure,
                   on_tpu=True, using_native_amp=False, using_lbfgs=True):
    optimizer.step(second_order_closure)
|
Accuracy metric returns tuple of length num_gpus on ddp in 0.9.1rc4
|
[
"bug",
"help wanted"
] |
code:
vid_acc = self.accuracy_video(video_labels_hat, video_labels)
print(len(vid_acc), vid_acc)
monitor = 0-vid_acc
In 0.9.1rc3, vid_acc is a tensor, but in rc4 it changes to a tuple. I want to use -vid_acc as the monitor, and I think it should be a tensor.
Using rc4, in macOS CPU mode it's a tensor, but in Linux ddp mode it's a tuple of length num_gpus.
|
remove value field from Result objects 'meta' key
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature
remove value field from Result objects 'meta' key
Motivation
The value field in the Result obj's meta data is
a duplicate of the raw data in the Result obj,
not being used,
not updated in the gathered results presented to the user in [training,validation,test]_epoch_end.
Especially this last point can lead to confusion.
Pitch
The value field should no longer be written to the meta data in the Result obj.
I can submit a PR if this change is approved.
|
Fix exception chaining
|
[
"feature",
"help wanted"
] |
I recently went over PyTorch and Detectron2, suggesting a fix in the way that Python 3's exception chaining is used.
As described in detail in this article, exception chaining (PEP 3134) can be used to make exceptions more user-friendly, and in that case, the syntax raise new_exception from old_exception needs to be used.
When raise .. from .. is used, there will be a line saying The above exception was the direct cause of the following exception between tracebacks. However, when implicitly chaining exceptions (meaning when raise .. from .. is not used), the message will be During handling of the above exception, another exception occurred which can confuse users.
Specifically, the following should be used in order to chain exceptions:
try:
    something_which_raises_OldError
except OldError as e:
    raise NewError("A more user-friendly exception message.") from e
instead of:
try:
    something_which_raises_OldError
except OldError:
    raise NewError("A more user-friendly exception message.")
One example which needs to be fixed is:
pytorch-lightning/pytorch_lightning/utilities/parsing.py
Lines 159 to 162
in
3d76f60
try:
    return self[key]
except KeyError:
    raise AttributeError(f'Missing attribute "{key}"')
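Applied to that snippet, the chained version suggested above would look like this (a sketch of the proposed change, not yet in the codebase):
try:
    return self[key]
except KeyError as exc:
    raise AttributeError(f'Missing attribute "{key}"') from exc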
If this suggestion sounds good and reasonable, I'd be happy to create a PR! Let me know your thoughts on this!
|
Support launching ddp job as module python -m ...
|
[
"feature",
"help wanted",
"good first issue",
"distributed"
] |
🚀 Feature
Motivation
Some users wish to launch their training program as a module with python -m some.module
Pitch
We should evaluate whether this is possible for ddp and support this option when possible.
We need to strip the -m argument and append it to the command with which we launch the child processes.
Alternatives
Additional context
This feature was originally reported as a bug: #3600
|
Rename row_log_interval and log_save_interval
|
[
"feature",
"priority: 0"
] |
row_log_interval -> log_every_n_steps
log_save_interval -> flush_logs_every_n_steps
|
Missing attribute "training_step_output_for_epoch_end"
|
[
"bug",
"help wanted"
] |
I used the documented way of stopping training (https://pytorch-lightning.readthedocs.io/en/latest/early_stopping.html#enable-early-stopping-using-callbacks-on-epoch-end).
If the on_batch_start method returns -1 at the very beginning of an epoch, the AttributeError in the title is raised.
The problem is in training_loop.py line 496 (batch_output.training_step_output_for_epoch_end).
Code sample
Use the method and run your code:
def on_batch_start(self, batch):
    return -1
Expected behavior
Check whether batch_output equals -1 before running training_loop.py line 495.
Early stopping achieved the way the documentation specifies should not throw an exception, but rather simply stop the training.
Environment
CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.19.1
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.0
tqdm: 4.49.0
System:
OS: Windows
architecture:
64bit
WindowsPE
processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
python: 3.8.5
version: 10.0.18362
|
How to set default EarlyStopping patience?
|
[
"question"
] |
Is it possible to set the default EarlyStopping patience without creating a custom early stopping callback?
Instead of writing:
trainer = pl.Trainer(early_stop_callback=EarlyStopping(patience=XXX))
I'd like to overwrite the default patience directly and then use EvalResult(early_stop_on=...).
|
Load args from .env files or some other type of config file
|
[
"feature",
"help wanted",
"won't fix"
] |
🚀 Feature / Motivation
I often run my training scripts on different systems with different setups. It would be nice to be able to read args from a configuration file. For example, if one node has a default directory somewhere other than root, I just put it in the config file and the trainer loads the args from that file. All I have to do is run python train.py and not python train.py --every-arg-needed etc.
Pitch
Read arguments from a configuration file at startup, possibly a .env file?
Alternatives
Make our own parser with the same arguments and set default=os.getenv("DEFAULT_ROOT") or something. This is what I am doing now.
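A small sketch of the workaround described above; the environment variable name and the optional python-dotenv call are assumptions:
import os
from argparse import ArgumentParser

from pytorch_lightning import Trainer

# Optionally load a .env file into os.environ first, e.g. with python-dotenv:
#   from dotenv import load_dotenv; load_dotenv()

parser = ArgumentParser()
parser = Trainer.add_argparse_args(parser)
# environment (or .env) values override the library defaults but can still be
# overridden on the command line
parser.set_defaults(default_root_dir=os.getenv("DEFAULT_ROOT_DIR", "."))
args = parser.parse_args()
trainer = Trainer.from_argparse_args(args)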
|
Checkpoints based on validation_step or validation_epoch_end
|
[
"question",
"won't fix"
] |
Somewhere I found an example for
def validation_step(self, batch, batch_idx):
    ....
    return {'val_loss': loss, ....}

def validation_epoch_end(self, batch):
    avg_val_loss = torch.tensor([x['val_loss'] for x in batch]).mean()
    .....
    return {'val_loss': avg_val_loss, ....}
What does the automatic checkpoint use to decide whether it got a better checkpoint?
My average val loss is getting better, but I do not have a checkpoint (the green line is run 292).
To avoid ambiguity, it would be nice to change the name. Where are all the places I would have to change the name 'val_loss' if I were to make it 'avg_val_loss'?
By the way, Lightning is amazing! I made excellent progress on a monster transformer model and I never worried about figuring out checkpointing, ddp, multi-GPU, etc.
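A hedged sketch of making the monitored key explicit rather than relying on the default; the 'avg_val_loss' key matches the rename discussed above, and the exact ModelCheckpoint arguments vary between Lightning versions:
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Monitor the renamed key explicitly; validation_epoch_end must then return
# 'avg_val_loss' instead of 'val_loss'.
checkpoint_callback = ModelCheckpoint(
    filepath='checkpoints/{epoch:02d}-{avg_val_loss:.3f}',
    monitor='avg_val_loss',
    mode='min',
)
trainer = Trainer(checkpoint_callback=checkpoint_callback)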
|
How to store test_step outputs to file?
|
[
"question",
"won't fix"
] |
Is there an approach to save all the outputs produced during test_step to one file?
def test_step(self, batch, batch_idx):
    x1, x2 = batch["x1"], batch["x2"]
    r1, r2 = self(x1, x2)
    test_loss = self.loss_fn(predict)
    test_mrr = self.mrr(r1, r2)
    return {'test_loss': test_loss, 'test_mrr': test_mrr}
My LightningModule outputs two dense vectors for representation r1 and r2. They must be saved to be used in a downstream task and for the measurement of some metrics.
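One possible pattern, as a sketch: return the representations from test_step and concatenate and save them in test_epoch_end. The output filename, the detach/cpu handling, and dropping the loss detail are assumptions; torch is assumed to be imported as in the rest of the module.
def test_step(self, batch, batch_idx):
    x1, x2 = batch["x1"], batch["x2"]
    r1, r2 = self(x1, x2)
    test_mrr = self.mrr(r1, r2)
    # move the dense vectors off the GPU so they can be kept for the whole epoch
    return {'test_mrr': test_mrr, 'r1': r1.detach().cpu(), 'r2': r2.detach().cpu()}

def test_epoch_end(self, outputs):
    r1 = torch.cat([out['r1'] for out in outputs])
    r2 = torch.cat([out['r2'] for out in outputs])
    torch.save({'r1': r1, 'r2': r2}, 'test_representations.pt')  # hypothetical output path
    avg_mrr = torch.stack([out['test_mrr'] for out in outputs]).mean()
    return {'test_mrr': avg_mrr}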
|
Change the tensorboard run-names to
|
[
"question"
] |
What is your question?
What is the easiest and most pythonic way to change the TensorBoard run names from "version_{n}" to something like "{hostname}_{time}_{lr}_{batch_size}", etc.? Do I have to manually create a TensorBoard logger and pass it to the trainer? What happens with the checkpoint folder and such?
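A minimal sketch of the manual-logger route; the version string format is just an example. By default the checkpoints are written under the logger's directory, so they should follow the same run name, though that behaviour is version-dependent.
import socket
import time

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# Build a custom run name and use it as the logger "version" instead of version_{n}.
run_name = f"{socket.gethostname()}_{time.strftime('%Y%m%d-%H%M%S')}_lr0.001_bs32"
logger = TensorBoardLogger(save_dir="lightning_logs", name="my_experiment", version=run_name)
trainer = Trainer(logger=logger)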
What's your environment?
CUDA:
GPU:
GeForce RTX 2070 SUPER
available: True
version: 10.2
Packages:
numpy: 1.19.2
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.0
tqdm: 4.49.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.6.9
version: #52~18.04.1-Ubuntu SMP Thu Sep 10 12:50:22 UTC 2020
|
AccuracyMetric automatically does ReduceOp.SUM in test_epoch_end
|
[
"bug"
] |
my code:
def test_step(self, batch, batch_idx):
    ...
    # self.accuracy_video = Accuracy()
    vid_acc = self.accuracy_video(video_labels_hat, video_labels)
    print("test_step, ", vid_acc)
    return {'test_loss': loss, "test_pacc": part_acc, "test_vacc": vid_acc}

def test_epoch_end(self, outputs):
    avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
    avg_acc = torch.stack([x['test_vacc'] for x in outputs]).mean()
    print("avg_acc_1:", avg_acc)
    dist.all_reduce(avg_loss, op=dist.ReduceOp.SUM)
    dist.all_reduce(avg_acc, op=dist.ReduceOp.SUM)
    print("avg_acc_2:", avg_acc)
    avg_acc = avg_acc / self.trainer.world_size
    return {'test_loss': avg_loss, 'test_acc': avg_acc}
My test_step vid_acc is 0.6~0.7, which is normal, but in test_epoch_end avg_acc_1 = 22.3333 and avg_acc_2 = 714.6666. Is test_vacc already synced with dist.ReduceOp.SUM by default, so that I only need to do avg_acc = avg_acc / self.trainer.world_size?
|
Argparse usability issues
|
[
"won't fix",
"discussion"
] |
🚀 Feature
Improve the usability of the argparse functionality by allowing to use extensions of argparse.
Motivation
Currently in pytorch-lightning, argparse is used as parser = ArgumentParser(parents=[parent_parser]), which prevents many behaviors that users might want. The most basic example: someone implements a trainer.py tool and thus writes:
from argparse import ArgumentParser
from pytorch_lightning import Trainer
parser = ArgumentParser(description='My cool tool that uses pytorch-lightning')
parser = Trainer.add_argparse_args(parser)
parser.parse_args()
If you then run trainer.py --help the description of the parser is not shown. This is because the parser that add_argparse_args returns is not the one that the user created. It is a new one that just copied the defined arguments, the rest being lost. Another feature lost is specifying a formatter_class for example doing parser = ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter).
Another consequence of the way argparse is currently used is that it is not possible to use argparse-like parsers. The argparse module is rather limited, so it would be nice to be able to use extensions of argparse. More specifically, I would really like to use jsonargparse with pytorch-lightning. This would for example make it trivial to parse arguments from a json or yaml configuration file. When you have so many options that can be configured such as with the Trainer class, a config file is simply a must. Reinventing the wheel by adding config file support to pytorch-lightning would not make much sense. Much better to use an existing package that already provides this.
I tried changing the pytorch-lightning code from parser = ArgumentParser(parents=[parent_parser]) to parser = parent_parser and use a jsonargparse parser. Unfortunately it failed with an AttributeError: 'bool' object has no attribute 'lower'. Without looking at the source code I imagine that a bool argument support has been added since argparse does not support it. However, jsonargparse already supports defining arguments with type=bool.
Honestly I can't imagine any benefit or good reason for using argparse's parent option here. If there is, please comment.
Pitch
I propose the following:
Internally use jsonargparse including its bool support.
Add arguments directly to the parser provided to add_argparse_args if the provided parser is a jsonargparse parser.
To be backwards compatible, the add_argparse_args would return the parser and use the parent option if the provided parser is not a jsonargparse parser.
Change the documentation to recommend using jsonargparse and describing its support for config files.
To pitch further, I would be more than willing to implement and create a pull request for this. I am the developer of jsonargparse and continuously work on machine learning. Also I think pytorch-lightning is an awesome project and plan to use it a lot and hopefully contribute to it.
|
Hydra Hyperparameter Optimization
|
[
"bug",
"help wanted"
] |
🐛 Bug
After updating PL from 0.8.x to 0.9.0, I started to face the following error when passing a configuration file via hydra:
Running in fast_dev_run mode: will run a full train, val and test loop using a single batch
[2020-09-29 17:24:44,547][lightning][INFO] - Running in fast_dev_run mode: will run a full train, val and test loop using a single batch
GPU available: True, used: True
[2020-09-29 17:24:44,589][lightning][INFO] - GPU available: True, used: True
TPU available: False, using: 0 TPU cores
[2020-09-29 17:24:44,589][lightning][INFO] - TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0]
[2020-09-29 17:24:44,589][lightning][INFO] - CUDA_VISIBLE_DEVICES: [0]
/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Could not log computational graph since the `model.example_input_array` attribute is not set or `input_array` was not given
warnings.warn(*args, **kwargs)
Traceback (most recent call last):
File "/home/celso/projects/semantic_code_search/source/semantic_code_search.py", line 236, in <module>
dev_run()
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/hydra/main.py", line 24, in decorated_main
strict=strict,
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/hydra/_internal/utils.py", line 174, in run_hydra
overrides=args.overrides,
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 86, in run
job_subdir_key=None,
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/hydra/plugins/common/utils.py", line 109, in run_job
ret.return_value = task_function(task_cfg)
File "/home/celso/projects/semantic_code_search/source/semantic_code_search.py", line 45, in dev_run
trainer.fit(model)
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1073, in fit
results = self.accelerator_backend.train(model)
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py", line 51, in train
results = self.trainer.run_pretrain_routine(model)
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1169, in run_pretrain_routine
self.logger.save()
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py", line 27, in wrapped_fn
return fn(*args, **kwargs)
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/pytorch_lightning/loggers/tensorboard.py", line 212, in save
save_hparams_to_yaml(hparams_file, self.hparams)
File "/home/celso/projects/venvs/semantic_code_search/lib/python3.7/site-packages/pytorch_lightning/core/saving.py", line 357, in save_hparams_to_yaml
if OmegaConf.is_config(hparams):
AttributeError: type object 'OmegaConf' has no attribute 'is_config'
To Reproduce
I'm starting the training as follows:
@hydra.main(config_path="configs/config.yaml")
def run(cfg):
# logger
tb_logger = pl_loggers.TensorBoardLogger(cfg.logs.path, name='funcom_exp')
# checkpoint callback
checkpoint_callback = ModelCheckpoint(
filepath=cfg.checkpoint.path + "joint_encoder-java-{epoch:02d}",
monitor='avg_val_loss')
model = JointEncoder(config=cfg)
trainer = Trainer(
fast_dev_run=True,
max_epochs=cfg.train.max_epochs,
gpus=1,
logger=tb_logger,
checkpoint_callback=checkpoint_callback
)
# training
trainer.fit(model)
# testing
trainer.test()
and my PL model is created as in the code snippet below:
class JointEncoder(LightningModule):
def __init__(self, config):
super(JointEncoder, self).__init__()
self.config = config
...
Any direction on that?
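One direction worth checking, offered as a guess rather than a confirmed fix: OmegaConf.is_config only exists in newer omegaconf releases (an assumption here), so a mismatch between the omegaconf version pulled in by hydra and the one PL 0.9.0 expects would produce exactly this AttributeError. A quick probe:
import omegaconf
from omegaconf import OmegaConf

print(getattr(omegaconf, "__version__", "unknown"))  # installed omegaconf version
print(hasattr(OmegaConf, "is_config"))               # False reproduces the AttributeError above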
|
Avoid storing a list of outputs to compute aggregated metrics at the end of the epoch.
|
[
"question"
] |
Hi,
to my understanding, the current way of logging an aggregated metric at the end of an epoch implicitly requires storing the outputs of all steps in the epoch. There are tasks, like semantic segmentation, that require accumulating a confusion matrix over steps, e.g. in order to compute the mIoU metric. However, storing all those confusion matrices for the sole purpose of aggregating them at the end is a waste of resources (it might occupy gigabytes of memory). Do you have a preferred solution to sidestep the issue? It would be better to also offer the possibility of aggregating information while traversing the steps (without storing all outputs) and to make the aggregated data accessible at the end of the epoch, e.g. for logging.
Thank you.
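For reference, the kind of pattern being asked about would look roughly like this: accumulate a running confusion matrix as module state during the steps and only log the derived metric at epoch end. This is a minimal sketch (forward, training, optimizers and dataloaders omitted; the self.log call assumes a recent PL version):
import torch
import pytorch_lightning as pl

class SegModule(pl.LightningModule):
    def __init__(self, num_classes):
        super().__init__()
        self.num_classes = num_classes
        # running confusion matrix kept as module state instead of per-step outputs
        self.register_buffer("confmat", torch.zeros(num_classes, num_classes))

    def validation_step(self, batch, batch_idx):
        images, targets = batch
        preds = self(images).argmax(dim=1)
        idx = targets.view(-1) * self.num_classes + preds.view(-1)
        counts = torch.bincount(idx, minlength=self.num_classes ** 2)
        self.confmat += counts.reshape(self.num_classes, self.num_classes).float()
        # nothing is returned, so no list of per-step outputs is kept around

    def validation_epoch_end(self, outputs):
        tp = torch.diag(self.confmat)
        union = self.confmat.sum(0) + self.confmat.sum(1) - tp
        miou = (tp / union.clamp(min=1)).mean()
        self.log("val_miou", miou)  # on 0.9.0: return {"log": {"val_miou": miou}} instead
        self.confmat.zero_()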
|
test_step hangs after one iteration when on multiple GPUs
|
[
"bug",
"help wanted",
"distributed"
] |
๐ Bug
When running the same code on a computer with 1 GPU, test_step runs as normal and logs what it should.
However, on a node with 4 GPUs, it hangs after one iteration!
Code sample
images, masks = batch["image"], batch["mask"]
if images.shape[1] != self.hparams.n_channels:
raise AssertionError(
f"Network has been defined with {self.n_channels} input channels, "
f"but loaded images have {images.shape[1]} channels. Please check that "
"the images are loaded correctly."
)
masks = (
masks.type(torch.float32)
if self.hparams.n_classes == 1
else masks.type(torch.long)
)
masks_pred = self(images) # Forward pass
loss = self.loss_function(masks_pred, masks)
result = pl.EvalResult(loss, checkpoint_on=loss)
result.log("test_loss", loss, on_step=True, on_epoch=True, sync_dist=True)
rand_idx = randint(0, self.hparams.batch_size - 1)
onehot = torch.sigmoid(masks_pred[rand_idx]) > 0.5
for tag, value in self.named_parameters():
tag = tag.replace(".", "/")
self.logger.experiment.add_histogram(tag, value, self.current_epoch)
mask_grid = torchvision.utils.make_grid([masks[rand_idx], onehot], nrow=2)
self.logger.experiment.add_image(
"TEST - Target vs Predicted", mask_grid, self.current_epoch
)
alpha = 0.5
image_grid = torchvision.utils.make_grid(
[
images[rand_idx],
torch.clamp(
kornia.enhance.add_weighted(
src1=images[rand_idx],
alpha=1.0,
src2=onehot,
beta=alpha,
gamma=0.0,
),
max=1.0,
),
]
)
self.logger.experiment.add_image(
"TEST - Image vs Predicted", image_grid, self.current_epoch
)
pred = (torch.sigmoid(masks_pred) > 0.5).float()
f1 = f1_score(pred, masks, self.hparams.n_classes + 1)
rec = recall(pred, masks, self.hparams.n_classes + 1)
pres = precision(pred, masks, self.hparams.n_classes + 1)
result.log("test_f1", f1, on_epoch=True)
result.log("test_recall", rec, on_epoch=True)
result.log("test_precision", pres, on_epoch=True)
return result
Expected behavior
I expect it to finish the testing-epoch.
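Not a confirmed diagnosis, but one thing worth ruling out: every rank calls self.logger.experiment.add_histogram / add_image on every test step, and combining that with sync_dist reductions across 4 processes is a plausible place for a stall. A sketch of restricting those direct logger calls to rank zero (the guard uses the existing trainer.global_rank attribute; whether it resolves the hang is an assumption):
# inside test_step, around the direct TensorBoard calls
if self.trainer.global_rank == 0:
    for tag, value in self.named_parameters():
        self.logger.experiment.add_histogram(tag.replace(".", "/"), value, self.current_epoch)
    self.logger.experiment.add_image(
        "TEST - Target vs Predicted", mask_grid, self.current_epoch
    )
    self.logger.experiment.add_image(
        "TEST - Image vs Predicted", image_grid, self.current_epoch
    )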
Environment
Environment 1
CUDA:
GPU:
GeForce RTX 2070 SUPER
available: True
version: 10.2
Packages:
numpy: 1.19.2
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.0
tqdm: 4.49.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.6.9
version: #52~18.04.1-Ubuntu SMP Thu Sep 10 12:50:22 UTC 2020
Environment 2
CUDA:
GPU:
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
GeForce RTX 2080 Ti
available: True
version: 10.2
Packages:
numpy: 1.19.1
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.0
tqdm: 4.49.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.0
version: #208-Ubuntu SMP Sun Apr 5 23:45:10 UTC 2020
|
Support best model checkpoint path even if save_top_k=-1
|
[
"feature",
"help wanted"
] |
๐ Feature
Support best model checkpoint path even if save_top_k=-1
Motivation
For the model checkpoint callback, the callback could still track the best checkpoint path even if save_top_k=-1. The only case where we couldn't track the best checkpoint is if the monitor metric isn't specified. What do you think?
Pitch
Update the model checkpoint callback to only skip tracking the best checkpoint if monitor is None
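For concreteness, the requested behaviour would look roughly like this (a usage sketch of what the feature should enable, not of current behaviour):
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# keep every checkpoint, but still track the best-scoring one via the monitor
checkpoint_callback = ModelCheckpoint(save_top_k=-1, monitor="val_loss")
trainer = Trainer(checkpoint_callback=checkpoint_callback)
trainer.fit(model)  # model: any LightningModule that logs val_loss

# desired: populated even with save_top_k=-1, as long as monitor is not None
print(checkpoint_callback.best_model_path)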
|
RuntimeError: Input and hidden tensors are not at the same device, found
|
[
"bug",
"help wanted"
] |
๐ Bug
I train an LSTM for character-level text generation. At first I initialize hidden and cell with zeros using torch.zeros. Unfortunately, these tensors are assigned to the CPU by default, so I get the following error while training:
RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
To Reproduce
Model
class RNN(pl.LightningModule):
lr = 0.0005
def __init__(self, input_size, hidden_size, embeding_size, n_categories, n_layers, output_size, p):
super().__init__()
self.criterion = nn.CrossEntropyLoss()
self.n_layers = n_layers
self.hidden_size = hidden_size
self.embeding = nn.Embedding(input_size+n_categories, embeding_size)
self.lstm = nn.LSTM(embeding_size+n_categories, hidden_size, n_layers, dropout=p)
self.out_fc = nn.Linear(hidden_size, output_size)
self.dropout = nn.Dropout(p)
def forward(self, batch_of_category, batch_of_letter, hidden, cell):
## letter level operations
embeding = self.dropout(self.embeding(batch_of_letter))
category_plus_letter = torch.cat((batch_of_category, embeding), 1)
#sequence_length = 1
category_plus_letter = category_plus_letter.unsqueeze(1)
out, (hidden, cell) = self.lstm(category_plus_letter, (hidden, cell))
out = self.out_fc(out)
out = out.squeeze(1)
return out, (hidden, cell)
def configure_optimizers(self):
optimizer = Adam(self.parameters(), self.lr)
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
return [optimizer], [scheduler]
def training_step(self, batch, batch_idx):
item_dict = batch
loss = 0
batch_of_category = item_dict["category_tensors"]
#we loop over letters, single batch at the time
hidden = torch.zeros(self.n_layers, 1, self.hidden_size).cuda()
cell = torch.zeros(self.n_layers, 1, self.hidden_size).cuda()
for t in range(item_dict["input_tensors"].size(1)):
batch_of_letter = item_dict["input_tensors"][:, t]
output, (hidden, cell) = self(batch_of_category, batch_of_letter, hidden, cell)
loss += self.criterion(output, item_dict["target_tensors"][:, t])
loss = loss/(t+1)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def init_hidden(self, batch_size):
hidden = torch.zeros(self.n_layers, batch_size, self.hidden_size)
cell = torch.zeros(self.n_layers, batch_size, self.hidden_size)
return hidden, cell
Batch
(['Russian', 'English', 'Russian', 'English'],
['Piskarenkov', 'Clarkson', 'Pochkaev', 'Woods'],
tensor([[0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0.]]),
tensor([[42, 9, 19, 11, 1, 18, 5, 14, 11, 15, 22],
[29, 12, 1, 18, 11, 19, 15, 14, 0, 0, 0],
[42, 15, 3, 8, 11, 1, 5, 22, 0, 0, 0],
[49, 15, 15, 4, 19, 0, 0, 0, 0, 0, 0]]),
tensor([[ 9, 19, 11, 1, 18, 5, 14, 11, 15, 22, 59],
[12, 1, 18, 11, 19, 15, 14, 59, 0, 0, 0],
[15, 3, 8, 11, 1, 5, 22, 59, 0, 0, 0],
[15, 15, 4, 19, 59, 0, 0, 0, 0, 0, 0]]))
Trainer
dm = NamesDatamodule(1)
rnn_model = RNN(input_size=ds.n_tokens,
hidden_size=256,
embeding_size = 128,
n_layers=2,
n_categories=ds.n_categories,
output_size=ds.n_tokens,
p=0.3)
trainer = Trainer(max_epochs=3,
logger=None,
gpus=1,
early_stop_callback=False,
checkpoint_callback=False,
)
trainer.fit(rnn_model, dm)
Expected behavior
Hidden values should automatically be assigned to the device
Environment
Google Colab
Pytroch 1.6.0+cu101
Lightning 0.9.1rc3
Python version:
GPU models and configuration: single colab GPU
Additional context
The problem can be solved by adding .cuda() to the variables, but I don't think that solution should be necessary.
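For reference, a device-agnostic alternative to hard-coding .cuda() is to allocate the state tensors on the same device as an incoming batch tensor (a sketch showing only the relevant lines of training_step):
# inside training_step: put hidden/cell wherever the batch already lives
device = item_dict["input_tensors"].device
hidden = torch.zeros(self.n_layers, 1, self.hidden_size, device=device)
cell = torch.zeros(self.n_layers, 1, self.hidden_size, device=device)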
|
How to use more than one optimizer at each step (jointly train multiple modules within one model)?
|
[
"question",
"won't fix"
] |
โ Questions and Help
What is your question?
I have a model which consists of two blocks, let's call them first_module and second_module.
Code (simplified)
Training Step
def training_step(self, batch, batch_idx, optimizer_idx):
out = self.first_module(batch)
out = self.second_module(out)
loss = criterion(out, batch['target'])
metrics = {'train_loss': loss}
output = {'loss': loss,
'log': metrics,
'progress_bar': metrics}
return output
Optimizers
def configure_optimizers(self):
train_params = self.train_params
optimizer_first_module = torch.optim.Adam(
self.first_module.parameters(),
lr=train_params['lr_first_module'], betas=(0.5, 0.999))
optimizer_second_module = torch.optim.Adam(
self.second_module.parameters(),
lr=train_params['lr_second_module'], betas=(0.5, 0.999))
return [optimizer_first_module, optimizer_second_module]
Question
How to do optimizer_first_module.step() and optimizer_second_module.step() at each batch and ignore batch_idx?
It may be seen that optimizer_step always passes only one optimizer per step
def optimizer_step(
...
optimizer: Optimizer,
optimizer_idx: int,
...
) -> None:
Possible solution (?)
Blending two optimizers into one (hacky way, not sure if pl would see the result as a correct optimizer class)
Modify training loop in PL (this option is even worse)
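If the two modules really should be updated together on every batch, one way to sidestep per-optimizer stepping entirely is a single optimizer with two parameter groups (a sketch; whether this fits the use case is an assumption):
def configure_optimizers(self):
    train_params = self.train_params
    optimizer = torch.optim.Adam(
        [
            {"params": self.first_module.parameters(),
             "lr": train_params['lr_first_module']},
            {"params": self.second_module.parameters(),
             "lr": train_params['lr_second_module']},
        ],
        betas=(0.5, 0.999),
    )
    # a single optimizer means training_step no longer receives optimizer_idx
    return optimizer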
|
Current batch loss and mean reduced loss
|
[
"question"
] |
In training_step and validation_step I am logging the losses (train_loss and val_loss) and metrics (train_mrr and val_mrr), both to the logger and to the progress bar:
def training_step(self, batch, batch_idx):
x1, x2 = batch["x1"], batch["x2"]
r1, r2 = self(x1, x2)
train_loss = self.loss_fn(r1, r2)
train_mrr = self.mrr(r1, r2)
result = TrainResult(minimize=train_loss)
result.log('train_loss', train_loss, prog_bar=True)
result.log('train_mrr', train_mrr, prog_bar=True)
return result
def validation_step(self, batch, batch_idx):
x1, x2 = batch["x1"], batch["x2"]
r1, r2 = self(x1, x2)
val_loss = self.loss_fn(r1, r2)
val_mrr = self.mrr(r1, r2)
result = EvalResult(checkpoint_on=val_loss)
# logging
result.log('val_loss', val_loss, prog_bar=True)
result.log('val_mrr', val_mrr, prog_bar=True)
return result
However, the progress bar also shows a loss with a value different from the losses mentioned above.
Epoch 1: 69%|โโโโโโโ | 49804/72642 [3:55:49<1:48:08, 3.52it/s, loss=0.532, v_num=1, train_loss=0.255, train_mrr=0.927, val_loss=0.518, val_mrr=0.891]
So, is the loss printed on the progress bar the current batch loss, while train_loss is actually the mean reduced over the past train_loss values?
|
type object got multiple values for keyword argument 'loss'
|
[
"bug",
"help wanted"
] |
๐ Bug
The error appears when TrainResult has the minimize param set and a loss log is added at the same time with prog_bar=True.
Code sample
def training_step(self, batch, batch_idx):
loss = self(batch)
result = pl.TrainResult(minimize=loss)
result.log("loss", loss, prog_bar=True)
return result
Where the problem is
I followed the code and traced the problem to the ProgressBar callback, inside progress.py line 339 -> trainer.py line 884 (return dict(**ref_model.get_progress_bar_dict(), **self.progress_bar_metrics)), which returns
ref_model.get_progress_bar_dict()
Out[4]: {'loss': '0.692', 'v_num': 9}
self.progress_bar_metrics
Out[5]: {'loss': 0.6924866437911987}
Expected behavior
Not sure. At the very least, the error message should be clearer, since the user does not create two loss logs but just one.
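A workaround, assuming the collision is between the user-logged 'loss' key and the progress bar's built-in running loss entry, is to log under a different name:
def training_step(self, batch, batch_idx):
    loss = self(batch)
    result = pl.TrainResult(minimize=loss)
    # avoid the 'loss' key already used by the progress bar
    result.log("train_loss", loss, prog_bar=True)
    return result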
Environment
CUDA:
GPU:
available: False
version: None
Packages:
numpy: 1.19.1
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.0
tqdm: 4.49.0
System:
OS: Windows
architecture:
64bit
WindowsPE
processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
python: 3.8.5
version: 10.0.18362
|
Tensorboard: logs either don't appear or have 'epoch_' prepended to their names
|
[
"question"
] |
I have two kinds of problems with Tensorboard.
1. Either the logs don't appear when I create them inside training_step.
Code:
def training_step(self, batch, batch_idx):
type = "train"
loss, acc, y_true, y_pred, name = self.step(batch)
result = pl.TrainResult(minimize=loss)
result.log(type + "_loss", loss, prog_bar=True, on_step=True, on_epoch=False)
result.log(type + "_acc", acc, prog_bar=True, on_step=True, on_epoch=False)
return result
Screenshot from Tensorboard:
2. Or the 'epoch_' text is prepended to the log names for an unknown reason, although the same code is used for validation.
Code:
def training_step(self, batch, batch_idx):
type = "train"
loss, acc, y_true, y_pred, name = self.step(batch)
result = pl.TrainResult(minimize=loss)
result.log(type + "_loss", loss, logger=False, prog_bar=True, on_step=True, on_epoch=False)
result.log(type + "_acc", acc, logger=False, prog_bar=True, on_step=True, on_epoch=False)
return result
def training_epoch_end(self, outputs):
type = "train"
avg_loss = torch.stack([x for x in outputs[type + "_loss"]]).mean()
avg_acc = torch.stack([x for x in outputs[type + "_acc"]]).mean()
result = pl.TrainResult()
result.log(type + "_loss", avg_loss, prog_bar=False, on_epoch=True)
result.log(type + "_acc", avg_acc, prog_bar=False, on_epoch=True)
return result
def validation_step(self, batch, batch_idx):
type = "val"
loss, acc, y_true, y_pred, name = self.step(batch)
result = pl.EvalResult()
result.log(type + "_loss", loss, logger=False, prog_bar=True, on_step=True, on_epoch=False)
result.log(type + "_acc", acc, logger=False, prog_bar=True, on_step=True, on_epoch=False)
return result
def validation_epoch_end(self, outputs):
type = "val"
avg_loss = torch.stack([x for x in outputs[type + "_loss"]]).mean()
avg_acc = torch.stack([x for x in outputs[type + "_acc"]]).mean()
result = pl.EvalResult(checkpoint_on=avg_loss, early_stop_on=avg_loss)
result.log(type + "_loss", avg_loss, prog_bar=False, on_epoch=True)
result.log(type + "_acc", avg_acc, prog_bar=False, on_epoch=True)
return result
Screenshot from Tensorboard:
Why is that happening? Why do the validation logs not have the epoch_ prefix? The only difference is using TrainResult vs EvalResult.
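For what it's worth, the manual averaging in training_epoch_end can be dropped by letting Lightning aggregate directly from training_step with on_epoch=True; whether this also avoids the epoch_ prefix is an assumption, since that prefix appears to come from TrainResult's epoch-level auto-naming:
def training_step(self, batch, batch_idx):
    loss, acc, y_true, y_pred, name = self.step(batch)
    result = pl.TrainResult(minimize=loss)
    # log once per step, aggregated over the epoch by Lightning itself
    result.log("train_loss", loss, prog_bar=True, on_step=False, on_epoch=True)
    result.log("train_acc", acc, prog_bar=True, on_step=False, on_epoch=True)
    return result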
|
on_step logging not working as expected/described
|
[
"docs"
] |
๐ Bug
When training a model with the MLFlowLogger, on_step logging in training_step() does not appear to log metrics as frequently as expected. See complete example below.
To Reproduce
Code sample
Here is a complete working example that generates the described behavior. This example is derived from the code in Lightning in 2 steps. The only changes are adding the logging details, setting a mini-batch size, and subsetting the MNIST dataset to decrease running time.
import os
import torch
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torchvision import transforms
from torch.utils.data import DataLoader, Subset
import pytorch_lightning as pl
from torch.utils.data import random_split
import numpy as np
class LitModel(pl.LightningModule):
def __init__(self):
super().__init__()
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 10)
def forward(self, x):
x = x.view(x.size(0), -1)
x = self.layer_1(x)
x = F.relu(x)
x = self.layer_2(x)
return x
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
result = pl.TrainResult(loss)
result.log('train_loss', loss, on_step=True)
return result
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y)
result = pl.EvalResult(checkpoint_on=loss)
result.log('val_loss', loss)
return result
# dataloaders
dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
data_subset = Subset(dataset, np.random.choice(len(dataset), 1000, replace=False))
train, val = random_split(data_subset, [700, 300])
train_loader = DataLoader(train, batch_size=4)
val_loader = DataLoader(val)
# init model
model = LitModel()
logger = pl.loggers.mlflow.MLFlowLogger()
trainer = pl.Trainer(logger=logger, max_epochs=20)
trainer.fit(model, train_loader, val_loader)
Expected behavior
Training for 20 epochs with a training dataset of 700 samples and a mini-batch size of 4, we would expect training_step() to run 3,500 times ((700 / 4) * 20). Assuming I correctly understand the semantics of logging on_step, I would also expect the training loss to be logged 3,500 times. However, when I inspect the results in the MLflow UI or directly examine the logged values in the metrics/ folder within mlruns/, only 60 loss values are logged for the entirety of the run. I've confirmed that training_step() does, in fact, get called 3,500 times over the course of the run.
Environment
CUDA:
GPU:
available: False
version: 10.2
Packages:
numpy: 1.19.2
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.0
tqdm: 4.50.0
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.6.9
Additional context
I am still learning pytorch-lightning, so I will readily admit that I might not understand how this is actually supposed to work. However, I'll note that the documentation describes the semantics of on_step as "logs the metric at that step in training", which suggests the expected behavior described above.
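One thing worth checking, offered as an assumption rather than a confirmed explanation: in 0.9.x the Trainer only flushes metrics to the logger every row_log_interval steps (default 50), which would throttle the 3,500 per-step logs down to a few dozen entries. Lowering it should make the test conclusive:
# log to MLflow on every training step instead of every 50th global step
trainer = pl.Trainer(logger=logger, max_epochs=20, row_log_interval=1)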
|
Handling AttributeErrors while cleaning params namespace when setting up fit
|
[
"bug",
"help wanted"
] |
๐ Bug
is_picklable in parsing.py does not handle the AttributeError thrown by pickle.dumps() - specifically, the following:
AttributeError: Can't pickle local object 'ArgumentParser.__init__.<locals>.identity'
To Reproduce
Here's a stack trace:
Traceback (most recent call last):
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/chirag/Projects/mingle/social-processes/run/run_synthetic_social.py", line 120, in <module>
main()
File "/home/chirag/Projects/mingle/social-processes/run/run_synthetic_social.py", line 116, in main
trainer.fit(process, datamodule=dm)
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 425, in fit
self.train_loop.setup_fit(model, train_dataloader, val_dataloaders, datamodule)
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 90, in setup_fit
parsing.clean_namespace(model.hparams)
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/parsing.py", line 75, in clean_namespace
del_attrs = [k for k, v in hparams_dict.items() if not is_picklable(v)]
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/parsing.py", line 75, in <listcomp>
del_attrs = [k for k, v in hparams_dict.items() if not is_picklable(v)]
File "/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/parsing.py", line 62, in is_picklable
pickle.dumps(obj)
AttributeError: Can't pickle local object 'ArgumentParser.__init__.<locals>.identity'
I forked the repo, found the file, and made the following change to is_picklable:
def is_picklable(obj: object) -> bool:
"""Tests if an object can be pickled"""
try:
pickle.dumps(obj)
return True
except (pickle.PicklingError, AttributeError):
return False
I then installed the package from my local repo, ran the same code and got the following warnings:
/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: attribute 'trials' removed from hparams because it cannot be pickled
warnings.warn(*args, **kwargs)
/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: attribute 'optimize_parallel' removed from hparams because it cannot be pickled
warnings.warn(*args, **kwargs)
/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: attribute 'optimize_parallel_gpu' removed from hparams because it cannot be pickled
warnings.warn(*args, **kwargs)
/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: attribute 'optimize_parallel_cpu' removed from hparams because it cannot be pickled
warnings.warn(*args, **kwargs)
/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: attribute 'generate_trials' removed from hparams because it cannot be pickled
warnings.warn(*args, **kwargs)
/home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: attribute 'optimize_trials_parallel_gpu' removed from hparams because it cannot be pickled
warnings.warn(*args, **kwargs)
I believe these aren't params added by the end user, and for some reason pickle raises an AttributeError rather than pickle.PicklingError for these.
Environment
CUDA:
- GPU:
- Quadro P4000
- available: True
- version: 10.2
Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.1rc4
- tqdm: 4.48.2
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.5
- version: #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020
Additional context
Not sure if the solution is to also handle AttributeError, or if a more elegant alternative is needed.
|
auto_scale_batch_size doesn't use 'binsearch'
|
[
"bug",
"docs"
] |
I tried the following and it's still using 'power':
#####################
# 1. Init Model
#####################
model = LitAutoEncoder()
#####################
# 2. Init Trainer
#####################
trainer = pl.Trainer(auto_scale_batch_size='binsearch')
#####################
# 3. Tune
#####################
trainer.fit(model)
Did we remove support, or is that a bug?
|
Checkpointing and Early Stopping fail to work correctly when increasing number of train batches (in some cases)
|
[
"bug",
"help wanted",
"priority: 0"
] |
๐ Bug
(Preface: I created a complete minimal example for this bug report that unfortunately didn't end up reproducing the behavior, but I still think it is useful to mention nevertheless.)
The symptom is that when I leave everything else the same but increase the number of my training batches from 1000 to 5000, both checkpointing and early stopping completely fail to work correctly. As verified by creating a minimal example with a different, simpler model, it's not so much the number of batches as perhaps something related to the time it takes for an epoch to run. Here is a more detailed description:
Setup
In the LightningModule:
def training_step(self, batch, _) -> Tensor:
""" Perform a single step in the training loop """
loss, nll = self.shared_step(batch, self.hparams.teacher_forcing)
loss_with_reg = loss + self.reg(self.process)
logs = {"loss_no_reg": loss, "loss_with_reg": loss_with_reg,
"nll": nll}
self.log_dict(logs, on_epoch=True)
return loss_with_reg
def validation_step(self, batch: types.DataSplit, batch_idx) -> None:
""" Perform an evaluation step """
nll = torch.tensor(float(0)).to(batch.context.device)
losses = [40, 20, 30, 10, 1, 0.9, 1, 1, 90, 100]
loss = torch.tensor(float(losses[self.current_epoch])).to(batch.context.device)
logs = {"val_loss": loss, "val_nll": nll}
self.log_dict(logs)
By construction, epoch 5 should be the best model, and early stopping should trigger on epoch 8.
Experiment setup:
outroot = Path(args.out_dir)
logger = TestTubeLogger(save_dir=str(outroot / "logs"))
ckpt_filepath = "{}/{{epoch}}-{{val_loss:.2f}}".format(
str(outroot / "logs" / "checkpoints"))
checkpoint_callback = ModelCheckpoint(
filepath=ckpt_filepath, save_top_k=1, monitor="val_loss",
verbose=True
)
early_stop = EarlyStopping(monitor="val_loss", verbose=True)
trainer = Trainer.from_argparse_args(
args, logger=logger, checkpoint_callback=checkpoint_callback,
early_stop_callback=early_stop
)
trainer.fit(model, datamodule=dm)
Behavior
Run 1 - All correct.
Okay, so with that, if I run with anywhere between 10 and 1000 training batches, things work perfectly:
โฏ python -m run.run_synthetic_social --gpus 1 ... --max_epochs 10 --limit_train_batches 10 --limit_val_batches 5
...
home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: The validation_epoch_end should not return anything as of 9.1.to log, use self.log(...) or self.write(...) directly in the LightningModule
warnings.warn(*args, **kwargs)
Epoch 0: 73%|โโโโโโโโโโโโโโโโโโโโโโ | 11/15 [00:00<00:00, 12.14it/s, loss=443490368.000, v_num=0]Epoch 0: val_loss reached 40.00000 (best 40.00000), saving model to /home/chirag/Projects/mingle/social-processes/artefacts/exp/dev_run/logs/checkpoints/epoch=0-val_loss=40.00.ckpt as top 1
Epoch 1: 80%|โโโโโโโโโโโโโโโโโโโโโโโโ | 12/15 [00:01<00:00, 11.90it/s, loss=372233728.000, v_num=0]Epoch 1: val_loss reached 20.00000 (best 20.00000), saving model to /home/chirag/Projects/mingle/social-processes/artefacts/exp/dev_run/logs/checkpoints/epoch=1-val_loss=20.00.ckpt as top 1
Epoch 2: 80%|โโโโโโโโโโโโโโโโโโโโโโโโ | 12/15 [00:01<00:00, 10.89it/s, loss=197302992.000, v_num=0]Epoch 2: val_loss was not in top 1 | 1/5 [00:00<00:00, 7.89it/s]
Epoch 3: 80%|โโโโโโโโโโโโโโโโโโโโโโโโ | 12/15 [00:01<00:00, 11.46it/s, loss=108065304.000, v_num=0]Epoch 3: val_loss reached 10.00000 (best 10.00000), saving model to /home/chirag/Projects/mingle/social-processes/artefacts/exp/dev_run/logs/checkpoints/epoch=3-val_loss=10.00.ckpt as top 1
Epoch 4: 80%|โโโโโโโโโโโโโโโโโโโโโโโโ | 12/15 [00:00<00:00, 12.58it/s, loss=104392560.000, v_num=0]Epoch 4: val_loss reached 1.00000 (best 1.00000), saving model to /home/chirag/Projects/mingle/social-processes/artefacts/exp/dev_run/logs/checkpoints/epoch=4-val_loss=1.00.ckpt as top 1
Epoch 5: 80%|โโโโโโโโโโโโโโโโโโโโโโโโโ | 12/15 [00:00<00:00, 13.81it/s, loss=64182412.000, v_num=0]Epoch 5: val_loss reached 0.90000 (best 0.90000), saving model to /home/chirag/Projects/mingle/social-processes/artefacts/exp/dev_run/logs/checkpoints/epoch=5-val_loss=0.90.ckpt as top 1
Epoch 6: 80%|โโโโโโโโโโโโโโโโโโโโโโโโโ | 12/15 [00:01<00:00, 11.82it/s, loss=65260504.000, v_num=0]Epoch 6: val_loss was not in top 1 | 1/5 [00:00<00:00, 6.81it/s]
Epoch 7: 80%|โโโโโโโโโโโโโโโโโโโโโโโโ | 12/15 [00:01<00:00, 11.47it/s, loss=105555992.000, v_num=0]Epoch 7: val_loss was not in top 1 | 1/5 [00:00<00:00, 7.17it/s]
Epoch 8: 80%|โโโโโโโโโโโโโโโโโโโโโโโโ | 12/15 [00:01<00:00, 11.64it/s, loss=113607824.000, v_num=0]Epoch 8: val_loss was not in top 1 | 1/5 [00:00<00:00, 6.88it/s]
Epoch 8: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 15/15 [00:01<00:00, 13.69it/s, loss=113607824.000, v_num=0Epoch 00009: early stopping triggered.
Epoch 8: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 15/15 [00:01<00:00, 13.17it/s, loss=113607824.000, v_num=0]
The checkpoint updates correctly on disk during training, and the last one is correctly named epoch=5-val_loss=0.90.ckpt.
Run 2 - Problematic.
If I increase the number of training batches to 5000, early stopping is triggered after the first 3 epochs, the checkpoints are not created live (only after early stopping has ended), and the saved checkpoint has the completely incorrect name epoch=3-val_loss=40.00.ckpt. Note that the epoch 3 val loss should by construction be 10, while it has picked up the epoch 0 loss.
โฏ python -m run.run_synthetic_social --gpus 1 ... --max_epochs 10 --limit_train_batches 5000 --limit_val_batches 5
...
home/chirag/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: The validation_epoch_end should not return anything as of 9.1.to log, use self.log(...) or self.write(...) directly in the LightningModule
warnings.warn(*args, **kwargs)
Epoch 3: 20%|โโโโโโ | 1026/5005 [01:22<05:18, 12.49it/s, loss=341889.875, v_num=0]Epoch 3: val_loss reached 40.00000 (best 40.00000), saving model to /home/chirag/Projects/mingle/social-processes/artefacts/exp/dev_run/logs/checkpoints/epoch=3-val_loss=40.00.ckpt as top 1
Epoch 00004: early stopping triggered.
Epoch 3: 20%|โโโโโโ | 1026/5005 [01:22<05:19, 12.44it/s, loss=341889.875, v_num=0]
To Reproduce / Code Sample
I do have an isolated minimal code sample but unfortunately it works okay as expected, even with 15000 training batches. It's a much simpler model, since I'm using a variation of the LitModel, and I've tried to keep dimensions of tensors similar to my actual problem, so I don't know where the problem is right now. Here is the minimal code sample nevertheless:
https://gist.github.com/chiragraman/16b1a89787df0c517b8dfffae5c3d591
Expected behavior
The expected behavior in Run 2 above is to match the behavior in Run 1.
Environment
CUDA:
- GPU:
- Quadro P4000
- available: True
- version: 10.2
Packages:
- numpy: 1.19.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.9.1rc4
- tqdm: 4.48.2
System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.8.5
- version: #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020
Additional context
Also, minor side note: I'm getting a warning about UserWarning: The validation_epoch_end should not return anything as of 9.1.to log, use self.log(...) or self.write(...) directly in the LightningModule when I haven't implemented validation_epoch_end at all, and am not returning anything from validation_step
|
User Deprecation Warning thrown even if user does not override `validation_epoch_end`
|
[
"bug",
"help wanted"
] |
๐ Bug
From #3789 additional context. If the user does not override validation_epoch_end a warning is still thrown reading: UserWarning: The validation_epoch_end should not return anything as of 9.1.to log, use self.log(...) or self.write(...) directly in the LightningModule
I tracked this down to this snippet:
pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py
Lines 197 to 222
in
0c12065
eval_results = outputs
if num_dataloaders == 1:
eval_results = outputs[0]
user_reduced = False
if self.testing:
if is_overridden('test_epoch_end', model=model):
model._current_fx_name = 'test_epoch_end'
if using_eval_result:
eval_results = self.__gather_epoch_end_eval_results(outputs)
eval_results = model.test_epoch_end(eval_results)
user_reduced = True
else:
if is_overridden('validation_epoch_end', model=model):
model._current_fx_name = 'validation_epoch_end'
if using_eval_result:
eval_results = self.__gather_epoch_end_eval_results(outputs)
eval_results = model.validation_epoch_end(eval_results)
user_reduced = True
# depre warning
if eval_results is not None:
The problem is that eval_results contains the outputs if validation_epoch_end is not overridden by the user. I believe the test in line 222 should be updated to
if eval_results is not None and user_reduced is True:
To Reproduce
Run a model without overriding validation_epoch_end and have a single eval data loader.
Expected behavior
No user warning should be thrown if the user hasn't overridden validation_epoch_end.
Environment
CUDA:
GPU:
Quadro P4000
available: True
version: 10.2
Packages:
numpy: 1.19.1
pyTorch_debug: False
pyTorch_version: 1.6.0
pytorch-lightning: 0.9.1rc4
tqdm: 4.48.2
System:
OS: Linux
architecture:
64bit
ELF
processor: x86_64
python: 3.8.5
version: #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020
Additional context
Could have issued a PR if this seems okay but got the message there is a freeze on PRs.
|
Fix docs for auto_lr_find
|
[
"docs",
"priority: 0"
] |
This is the correct way to run it:
trainer = pl.Trainer(auto_lr_find=True)
lr_finder = trainer.tuner.lr_find(model) # Run learning rate finder
fig = lr_finder.plot(suggest=True) # Plot
fig.show()
model.hparams.learning_rate = lr_finder.suggestion()
|
Access metrics in custom callbacks
|
[
"question"
] |
โ Questions and Help
I have found it useful/helpful to sometimes access metrics in custom callbacks. In v0.9.0 this works using something like this:
def training_step(self, batch, batch_idx):
return {"loss": self._step(batch)}
def validation_step(self, batch, batch_idx):
return {"val_loss": self._step(batch)}
def training_epoch_end(self, outputs):
# ...
return {"interesting_key_train": interesting_value}
def validation_epoch_end(self, outputs):
# ...
return {"interesting_key_val": interesting_value}
The setup allows for the values returned in the _epoch_end methods to be accessed via trainer.callback_metrics. As such, a callback could use these values, e.g.
class CustomCallback(Callback):
def on_validation_end(self, trainer, pl_module):
metrics = trainer.callback_metrics
interesting_value = metrics["interesting_key_train"]
When using the current master branch, the above approach is possible for values returned in validation_epoch_end but no longer possible for training_epoch_end as setting a return value in training_epoch_end raises the exception,
MisconfigurationException: training_epoch_end expects a return of None. HINT: remove the return statement in training_epoch_end
Additionally the values stored in trainer.callback_metrics have changed. Using the example above, in v0.9.0, it is {"loss": ..., "interesting_key_train": ..., "interesting_key_val": ...} and on master it is simply {"interesting_key_val": ...}.
What is the intended way to access metrics (in particular from the training loop) in callbacks?
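Based on the exception message, the pattern that seems intended on master is to call self.log inside training_epoch_end instead of returning a dict; anything logged that way should then show up in trainer.callback_metrics (a sketch, under the assumption that this is how callback_metrics is now populated):
# in the LightningModule: log instead of returning (mirroring the snippet above)
def training_epoch_end(self, outputs):
    interesting_value = torch.stack([o["loss"] for o in outputs]).mean()
    self.log("interesting_key_train", interesting_value)  # note: no return statement

# in the callback: read it back as before
class CustomCallback(Callback):
    def on_validation_end(self, trainer, pl_module):
        interesting_value = trainer.callback_metrics.get("interesting_key_train")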
|
ModelCheckpoint not picking up metrics logged from lightning module
|
[
"bug",
"help wanted"
] |
๐ Bug
The ModelCheckpoint callback raises a MisconfigurationException because metrics logged from validation_epoch_end are mysteriously unavailable to the callback.
To Reproduce
from typing import Optional
import torch
from pytorch_lightning import Trainer, LightningModule
from pytorch_lightning.callbacks import ModelCheckpoint
from torch.utils.data.dataset import Dataset
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class TestModule(LightningModule):
def __init__(self, epoch_min_loss_override: Optional[int] = None):
"""LightningModule for testing purposes
Args:
epoch_min_loss_override (int, optional): Pass in an epoch that will be set to the minimum
validation loss for testing purposes (zero based). If None this is ignored. Defaults to None.
"""
super().__init__()
self.layer = torch.nn.Linear(32, 2)
self.epoch_min_loss_override = epoch_min_loss_override
def forward(self, x):
return self.layer(x)
def loss(self, batch, prediction):
# An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def training_step(self, batch, batch_idx):
output = self.forward(batch)
loss = self.loss(batch, output)
return {"output": output, "loss": loss, "checkpoint_on": loss}
def validation_step(self, batch, batch_idx):
output = self.forward(batch)
loss = self.loss(batch, output)
return {"output": output, "loss": loss, "checkpoint_on": loss}
def test_step(self, batch, batch_idx):
output = self.forward(batch)
loss = self.loss(batch, output)
return {"output": output, "loss": loss}
def training_epoch_end(self, outputs) -> None:
avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
self.log("avg_loss", avg_loss)
def validation_epoch_end(self, outputs) -> None:
avg_val_loss = torch.stack(
[torch.randn(1, requires_grad=True) for _ in outputs]
).mean()
# For testing purposes allow a nominated epoch to have a low loss
if self.current_epoch == self.epoch_min_loss_override:
avg_val_loss -= 1e10
self.log("avg_val_loss", avg_val_loss)
self.log("checkpoint_on", avg_val_loss)
def test_epoch_end(self, outputs) -> None:
avg_loss = torch.stack(
[torch.randn(1, requires_grad=True) for _ in outputs]
).mean()
self.log("val_loss", avg_loss)
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
return [optimizer], [lr_scheduler]
def train_dataloader(self):
return torch.utils.data.DataLoader(RandomDataset(32, 64))
def val_dataloader(self):
return torch.utils.data.DataLoader(RandomDataset(32, 64))
def test_dataloader(self):
return torch.utils.data.DataLoader(RandomDataset(32, 64))
def train():
    epoch_min_loss_override = 2
    checkpoint_callback = ModelCheckpoint(save_top_k=1, monitor="avg_val_loss")
    lightning_trainer = Trainer(
        max_epochs=epoch_min_loss_override + 2,
        logger=False,
        checkpoint_callback=checkpoint_callback,
    )
    model = TestModule(epoch_min_loss_override=epoch_min_loss_override)
    lightning_trainer.fit(model)
This is the error I see:
raise MisconfigurationException(m)
pytorch_lightning.utilities.exceptions.MisconfigurationException: ModelCheckpoint(monitor='avg_val_loss') not found in the returned metrics: ['avg_loss']. HINT: Did you call self.log('avg_val_loss', tensor) in the LightningModule?
Full stacktrace:
lightning_trainer.fit(model)
File "pytorch_lightning/trainer/trainer.py", line 442, in fit
results = self.accelerator_backend.train()
File "pytorch_lightning/accelerators/cpu_backend.py", line 47, in train
results = self.train_or_test()
File "pytorch_lightning/accelerators/base_backend.py", line 43, in train_or_test
results = self.trainer.train()
File "pytorch_lightning/trainer/trainer.py", line 489, in train
self.train_loop.run_training_epoch()
File "pytorch_lightning/trainer/training_loop.py", line 538, in run_training_epoch
self.trainer.run_evaluation(test_mode=False)
File "pytorch_lightning/trainer/trainer.py", line 611, in run_evaluation
self.evaluation_loop.on_evaluation_end()
File "pytorch_lightning/trainer/evaluation_loop.py", line 95, in on_evaluation_end
self.trainer.call_hook('on_validation_end', *args, **kwargs)
File "pytorch_lightning/trainer/trainer.py", line 800, in call_hook
trainer_hook(*args, **kwargs)
File "pytorch_lightning/trainer/callback_hook.py", line 177, in on_validation_end
callback.on_validation_end(self, self.get_model())
File "pytorch_lightning/callbacks/model_checkpoint.py", line 167, in on_validation_end
self.save_checkpoint(trainer, pl_module)
File "pytorch_lightning/callbacks/model_checkpoint.py", line 197, in save_checkpoint
self._validate_monitor_key(trainer)
File "pytorch_lightning/callbacks/model_checkpoint.py", line 440, in _validate_monitor_key
raise MisconfigurationException(m)
pytorch_lightning.utilities.exceptions.MisconfigurationException: ModelCheckpoint(monitor='avg_val_loss') not found in the returned metrics: ['avg_loss']. HINT: Did you call self.log('avg_val_loss', tensor) in the LightningModule?
Expected behavior
We can save the top-1 checkpoint with the monitor based on "avg_val_loss"
Environment
This is based on Lightning git revision 0c12065
Additional context
|
Calling module.log(...) within a callback fails
|
[
"bug",
"feature"
] |
๐ Bug
Calling pl_module.log(...) within a Callback fails, even though this is recommended by the documentation here: https://pytorch-lightning.readthedocs.io/en/latest/loggers.html#logging-from-a-callback
Error
File "my_callback_file.py", line XX, in on_validation_epoch_end
pl_module.log_dict(my_metrics_dict)
File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 287, in log_dict
self.log(
File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 233, in log
self._results.log(
File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 171, in log
self.__set_meta(
File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 217, in __set_meta
_internal = self['meta']['_internal']
KeyError: '_internal'
python-BaseException
cc @nathanpainchaud
This is happening on master
Expected behavior
We can log from callbacks using the lightning module
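Until that works, a possible stopgap is to bypass the Results machinery and write to the logger directly from the callback (a workaround sketch, not the recommended API; the metric names are hypothetical):
from pytorch_lightning.callbacks import Callback

class MyMetricsCallback(Callback):
    def on_validation_epoch_end(self, trainer, pl_module):
        my_metrics_dict = {"my_metric": 0.0}  # hypothetical metrics computed here
        # LightningLoggerBase.log_metrics writes straight to the underlying logger
        trainer.logger.log_metrics(my_metrics_dict, step=trainer.global_step)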
Environment
Happening on PyTorch Lightning master
|
PyTorch Lightning throws error when used on TPU
|
[
"help wanted",
"waiting on author"
] |
I'm having this error just after the validation sanity check:
GPU available: False, used: False
TPU available: True, using: 8 TPU cores
training on 8 TPU cores
INIT TPU local core: 0, global rank: 0 with XLA_USE_BF16=None
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Could not log computational graph since the `model.example_input_array` attribute is not set or `input_array` was not given
warnings.warn(*args, **kwargs)
INIT TPU local core: 3, global rank: 3 with XLA_USE_BF16=None
INIT TPU local core: 2, global rank: 2 with XLA_USE_BF16=None
INIT TPU local core: 1, global rank: 1 with XLA_USE_BF16=None
INIT TPU local core: 6, global rank: 6 with XLA_USE_BF16=None
INIT TPU local core: 4, global rank: 4 with XLA_USE_BF16=None
INIT TPU local core: 7, global rank: 7 with XLA_USE_BF16=None
INIT TPU local core: 5, global rank: 5 with XLA_USE_BF16=None
| Name | Type | Params
----------------------------------------
0 | model | Predictor | 44 M
1 | criterion | MSELoss | 0
Validation sanity check: 100% 1/1.0 [01:06<00:00, 66.35s/it]Exception in device=TPU:2: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Exception in device=TPU:1: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 112, in tpu_train_in_process
results = trainer.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 396, in _evaluate
eval_results = self.__run_eval_epoch_end(test_mode, outputs, dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 494, in __run_eval_epoch_end
eval_results = self.__auto_reduce_result_objs(outputs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 539, in __auto_reduce_result_objs
result = result.__class__.reduce_on_epoch_end(dl_output)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 366, in reduce_on_epoch_end
reduced_val = weighted_mean(result[k], batch_sizes)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 845, in weighted_mean
numerator = torch.dot(result.float(), weights.t().float())
RuntimeError: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Exception in device=TPU:0: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Exception in device=TPU:3: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Exception in device=TPU:4: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Exception in device=TPU:6: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Exception in device=TPU:7: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Exception in device=TPU:5: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 112, in tpu_train_in_process
results = trainer.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 396, in _evaluate
eval_results = self.__run_eval_epoch_end(test_mode, outputs, dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 494, in __run_eval_epoch_end
eval_results = self.__auto_reduce_result_objs(outputs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 539, in __auto_reduce_result_objs
result = result.__class__.reduce_on_epoch_end(dl_output)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 366, in reduce_on_epoch_end
reduced_val = weighted_mean(result[k], batch_sizes)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 845, in weighted_mean
numerator = torch.dot(result.float(), weights.t().float())
RuntimeError: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 112, in tpu_train_in_process
results = trainer.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 396, in _evaluate
eval_results = self.__run_eval_epoch_end(test_mode, outputs, dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 494, in __run_eval_epoch_end
eval_results = self.__auto_reduce_result_objs(outputs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 539, in __auto_reduce_result_objs
result = result.__class__.reduce_on_epoch_end(dl_output)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 366, in reduce_on_epoch_end
reduced_val = weighted_mean(result[k], batch_sizes)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 845, in weighted_mean
numerator = torch.dot(result.float(), weights.t().float())
RuntimeError: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 112, in tpu_train_in_process
results = trainer.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 396, in _evaluate
eval_results = self.__run_eval_epoch_end(test_mode, outputs, dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 494, in __run_eval_epoch_end
eval_results = self.__auto_reduce_result_objs(outputs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 539, in __auto_reduce_result_objs
result = result.__class__.reduce_on_epoch_end(dl_output)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 366, in reduce_on_epoch_end
reduced_val = weighted_mean(result[k], batch_sizes)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 845, in weighted_mean
numerator = torch.dot(result.float(), weights.t().float())
RuntimeError: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 112, in tpu_train_in_process
results = trainer.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 396, in _evaluate
eval_results = self.__run_eval_epoch_end(test_mode, outputs, dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 494, in __run_eval_epoch_end
eval_results = self.__auto_reduce_result_objs(outputs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 539, in __auto_reduce_result_objs
result = result.__class__.reduce_on_epoch_end(dl_output)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 366, in reduce_on_epoch_end
reduced_val = weighted_mean(result[k], batch_sizes)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 845, in weighted_mean
numerator = torch.dot(result.float(), weights.t().float())
RuntimeError: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 112, in tpu_train_in_process
results = trainer.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 396, in _evaluate
eval_results = self.__run_eval_epoch_end(test_mode, outputs, dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 494, in __run_eval_epoch_end
eval_results = self.__auto_reduce_result_objs(outputs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 539, in __auto_reduce_result_objs
result = result.__class__.reduce_on_epoch_end(dl_output)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 366, in reduce_on_epoch_end
reduced_val = weighted_mean(result[k], batch_sizes)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 845, in weighted_mean
numerator = torch.dot(result.float(), weights.t().float())
RuntimeError: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 112, in tpu_train_in_process
results = trainer.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 396, in _evaluate
eval_results = self.__run_eval_epoch_end(test_mode, outputs, dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 494, in __run_eval_epoch_end
eval_results = self.__auto_reduce_result_objs(outputs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 539, in __auto_reduce_result_objs
result = result.__class__.reduce_on_epoch_end(dl_output)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 366, in reduce_on_epoch_end
reduced_val = weighted_mean(result[k], batch_sizes)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 845, in weighted_mean
numerator = torch.dot(result.float(), weights.t().float())
RuntimeError: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 112, in tpu_train_in_process
results = trainer.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
self._run_sanity_check(ref_model, model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 396, in _evaluate
eval_results = self.__run_eval_epoch_end(test_mode, outputs, dataloaders, using_eval_result)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 494, in __run_eval_epoch_end
eval_results = self.__auto_reduce_result_objs(outputs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 539, in __auto_reduce_result_objs
result = result.__class__.reduce_on_epoch_end(dl_output)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 366, in reduce_on_epoch_end
reduced_val = weighted_mean(result[k], batch_sizes)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/step_result.py", line 845, in weighted_mean
numerator = torch.dot(result.float(), weights.t().float())
RuntimeError: torch_xla/csrc/helpers.cpp:97 : Check failed: min_shape_dim <= dim && dim <= max_shape_dim
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
torch_xla::XlaHelpers::GetCanonicalDimensionIndex(long long, long long)
torch_xla::XlaHelpers::MakeTransposePermutation(long long, long long, long long)
torch_xla::XLATensor::transpose(torch_xla::XLATensor const&, long long, long long)
torch_xla::AtenXlaType::t(at::Tensor const&)
c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(at::Tensor const&), at::Tensor, c10::guts::typelist::typelist<at::Tensor const&> >, at::Tensor (at::Tensor const&)>::call(c10::OperatorKernel*, at::Tensor const&)
at::t(at::Tensor const&)
at::Tensor::t() const
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyFunction_FastCallDict
_PyObject_FastCallKeywords
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyEval_EvalCode
PyRun_FileExFlags
PyRun_SimpleFileExFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Value out of range (expected to be in range of [-1, 0], but got 1)
Traceback (most recent call last):
File "main.py", line 22, in <module>
train(config)
File "main.py", line 17, in train
trainer.fit(model, data)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1078, in fit
self.accelerator_backend.train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/tpu_backend.py", line 87, in train
start_method=self.start_method
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn
start_method=start_method)
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 157, in start_processes
while not context.join():
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 112, in join
(error_index, exitcode)
Exception: process 1 terminated with exit code 17
And here's a quick glance at my implementation
model = Model(config)
data = Data(config)
trainer = pl.Trainer(tpu_cores=8, max_epochs=10)
trainer.fit(model, data)
This works completely fine on GPU, but running on TPUs gives me the error above.
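In case it helps to narrow this down, here is my guess at a minimal reproduction of the failing line from the traceback (weighted_mean in step_result.py). This is only an assumption on my part, and the tensor values are made up:
import torch

# Assumption: `weights`, built from the tracked batch sizes, is 1-D. On CPU/GPU
# `.t()` on a 1-D tensor is a no-op, but torch_xla rejects the transpose with
# "Value out of range (expected to be in range of [-1, 0], but got 1)".
result = torch.tensor([0.162, 0.158])   # per-batch validation losses
weights = torch.tensor([32.0, 32.0])    # tracked batch sizes
numerator = torch.dot(result.float(), weights.t().float())  # fails on XLA devices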
|
Deprecate EvalModelTemplate in favor of BoringModel and another simple model that does actually learn
|
[
"feature",
"help wanted",
"good first issue",
"ci",
"design"
] |
๐ Feature
Correct the current EvalModelTemplate to use the new API, unless a test specifically targets other purposes or the deprecated API.
Motivation
better testing of the actual API
|
use docker image for GH action testing
|
[
"feature",
"help wanted",
"good first issue",
"ci"
] |
๐ Feature
Check options to use a docker image to run Conda testing with our base images
https://stackoverflow.com/questions/57549439/how-do-i-use-docker-with-github-actions
Motivation
setting up Conda for each run takes about 8 minutes
|
Incorrect batch size tracking in training and validation steps
|
[
"bug",
"help wanted"
] |
๐ Bug
Batch sizes are tracked both in training and evaluation loops to reduce the Train/Eval results on epoch end.
In both cases len(batch) is used to find the current batch_size, which is incorrect; for example, an MNIST loader will return 2, since batch = (batch_data, batch_target) (see the sketch at the end of this report).
Training loop:
pytorch-lightning/pytorch_lightning/trainer/training_loop.py
Line 1026
in
b40de54
training_step_output.track_batch_size(len(split_batch))
Evaluation loop:
pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py
Line 339
in
b40de54
output.track_batch_size(len(batch))
Expected behavior
Match the actual batch_size
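A minimal sketch of the mismatch, assuming a standard (data, target) batch such as the one an MNIST DataLoader yields:
import torch

batch_size = 64
batch_data = torch.randn(batch_size, 1, 28, 28)      # images
batch_target = torch.randint(0, 10, (batch_size,))   # labels
batch = (batch_data, batch_target)

print(len(batch))          # 2  -> what track_batch_size currently records
print(batch_data.size(0))  # 64 -> the actual batch size
Using something like the leading dimension of the first tensor in the batch would match the real batch size.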
|
NCCL error when using ddp with 2 gpus
|
[
"bug",
"priority: 0",
"distributed"
] |
๐ Bug
I'm trying to run PyTorch Lightning using DDP with 2 GPUs. Running with one GPU works fine. Using fp16 or not results in the same error; see the stack trace at the end of the post. I also tried ddp2 and dp, but both of those fail with a different error.
To Reproduce
Not sure. Let me know what I can do to diagnose.
I'm running my code on a cluster where each GPU is locked to one process. I'm using NCCL version 2.4.8.
I tried pytorch-lightning versions 0.9.0, 0.9.1rc4, and 0.10.0rc1. All of them result in the same error. I'm running PyTorch version 1.6.
Expected behavior
I expected training to start running smoothly using both GPUs.
Environment
CUDA:
- GPU:
- GeForce GTX 1080
- GeForce GTX 1080
- available: True
- version: 10.1
Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.10.0rc1
- tqdm: 4.46.1
System:
- OS: Linux
- architecture:
- 64bit
-
- processor:
- python: 3.7.7
- version: #1 SMP Tue May 12 16:57:42 UTC 2020
Additional context
Stacktrace and error.
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
INFO:lightning:initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
INFO:lightning:LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
Using native 16bit precision.
INFO:lightning:Using native 16bit precision.
initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2
INFO:lightning:initializing ddp: GLOBAL_RANK: 1, MEMBER: 2/2
lo-s4-039:21587:21587 [0] NCCL INFO Bootstrap : Using [0]fabric:10.204.67.89<0> [1]enp129s0f0:10.204.3.89<0>
lo-s4-039:21587:21587 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs1
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
lo-s4-039:21587:21587 [0] NCCL INFO NET/IB : No device found.
lo-s4-039:21587:21587 [0] NCCL INFO NET/Socket : Using [0]fabric:10.204.67.89<0> [1]enp129s0f0:10.204.3.89<0>
NCCL version 2.4.8+cuda10.1
lo-s4-039:21614:21614 [1] NCCL INFO Bootstrap : Using [0]fabric:10.204.67.89<0> [1]enp129s0f0:10.204.3.89<0>
lo-s4-039:21614:21614 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
libibverbs: Warning: couldn't open config directory '/etc/libibverbs.d'.
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs1
libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs0
lo-s4-039:21614:21614 [1] NCCL INFO NET/IB : No device found.
lo-s4-039:21614:21614 [1] NCCL INFO NET/Socket : Using [0]fabric:10.204.67.89<0> [1]enp129s0f0:10.204.3.89<0>
lo-s4-039:21587:21646 [0] NCCL INFO Setting affinity for GPU 0 to 1fd001fd
lo-s4-039:21614:21647 [1] NCCL INFO Setting affinity for GPU 1 to 1fd001fd
lo-s4-039:21587:21646 [0] NCCL INFO Channel 00 : 0 1
lo-s4-039:21587:21646 [0] NCCL INFO Ring 00 : 0[1] -> 1[2] via P2P/IPC
lo-s4-039:21614:21647 [1] NCCL INFO Ring 00 : 1[2] -> 0[1] via P2P/IPC
lo-s4-039:21587:21646 [0] transport/p2p.cc:574 NCCL WARN failed to open CUDA IPC handle : 711 peer mapping resources exhausted
lo-s4-039:21587:21646 [0] NCCL INFO init.cc:669 -> 1
lo-s4-039:21587:21646 [0] NCCL INFO init.cc:815 -> 1
lo-s4-039:21587:21646 [0] NCCL INFO init.cc:951 -> 1
lo-s4-039:21587:21646 [0] NCCL INFO misc/group.cc:69 -> 1 [Async thread]
lo-s4-039:21614:21647 [1] transport/p2p.cc:574 NCCL WARN failed to open CUDA IPC handle : 711 peer mapping resources exhausted
lo-s4-039:21614:21647 [1] NCCL INFO init.cc:669 -> 1
lo-s4-039:21614:21647 [1] NCCL INFO init.cc:815 -> 1
lo-s4-039:21614:21647 [1] NCCL INFO init.cc:951 -> 1
lo-s4-039:21614:21647 [1] NCCL INFO misc/group.cc:69 -> 1 [Async thread]
Traceback (most recent call last):
File "tools/lightning.py", line 514, in <module>
Traceback (most recent call last):
File "/cluster/home/user/tracking/tools/lightning.py", line 514, in <module>
trainer.fit(model)
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 451, in fit
results = self.accelerator_backend.train()
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 140, in train
trainer.fit(model)
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 451, in fit
results = self.ddp_train(process_idx=self.task_idx, model=model)
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 266, in ddp_train
model = model.configure_ddp(model, device_ids)
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 954, in configure_ddp
results = self.accelerator_backend.train()
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 140, in train
results = self.ddp_train(process_idx=self.task_idx, model=model)
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 266, in ddp_train
model, device_ids=device_ids, find_unused_parameters=True
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 333, in __init__
model = model.configure_ddp(model, device_ids)
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 954, in configure_ddp
self.broadcast_bucket_size)
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 549, in _distributed_broadcast_coalesced
model, device_ids=device_ids, find_unused_parameters=True
dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 333, in __init__
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1595629403081/work/torch/lib/c10d/ProcessGroupNCCL.cpp:518, unhandled cuda error, NCCL version 2.4.8
self.broadcast_bucket_size)
File "/cluster/home/user/miniconda3/envs/track/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 549, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1595629403081/work/torch/lib/c10d/ProcessGroupNCCL.cpp:518, unhandled cuda error, NCCL version 2.4.8
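One extra observation from my side: since each GPU on this cluster is locked to a single process, my guess (just an assumption, not a confirmed diagnosis) is that the CUDA IPC handles NCCL wants for P2P cannot be opened, which would explain the "peer mapping resources exhausted" warning. A quick way to test that theory would be to disable NCCL peer-to-peer before the Trainer spawns the DDP processes:
import os

# Set before trainer.fit(); the variable is inherited by the spawned DDP
# workers and makes NCCL fall back to a non-P2P transport.
os.environ["NCCL_P2P_DISABLE"] = "1"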
|
Unusual printing statements after 90% epoch completition
|
[
"bug",
"help wanted"
] |
I've encountered these unusual print statements while training.
It seems that this printing starts when the epoch is 90% complete, and both loss and train_loss stay the same until 100% completion.
This behaviour is the same on TPUs as well as on GPUs.
2020-10-05 11:32:55.426605: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0]
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Could not log computational graph since the `model.example_input_array` attribute is not set or `input_array` was not given
warnings.warn(*args, **kwargs)
| Name | Type | Params
----------------------------------------
0 | model | Predictor | 44 M
1 | criterion | MSELoss | 0
Epoch 0: 90% 1531/1702 [12:41<01:25, 2.01it/s, loss=0.158, v_num=0, train_loss=0.149]
Validating: 0it [00:00, ?it/s]
Epoch 0: 90% 1532/1702 [12:41<01:24, 2.01it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 90% 1533/1702 [12:41<01:23, 2.01it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 90% 1534/1702 [12:41<01:23, 2.01it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 90% 1535/1702 [12:41<01:22, 2.01it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 90% 1536/1702 [12:42<01:22, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 90% 1537/1702 [12:42<01:21, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 90% 1538/1702 [12:42<01:21, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 90% 1539/1702 [12:42<01:20, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 90% 1540/1702 [12:42<01:20, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1541/1702 [12:42<01:19, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1542/1702 [12:43<01:19, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1543/1702 [12:43<01:18, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1544/1702 [12:43<01:18, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1545/1702 [12:43<01:17, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1546/1702 [12:43<01:17, 2.02it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1547/1702 [12:43<01:16, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1548/1702 [12:44<01:16, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1549/1702 [12:44<01:15, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1550/1702 [12:44<01:14, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1551/1702 [12:44<01:14, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1552/1702 [12:44<01:13, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1553/1702 [12:44<01:13, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1554/1702 [12:45<01:12, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1555/1702 [12:45<01:12, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1556/1702 [12:45<01:11, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 91% 1557/1702 [12:45<01:11, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1558/1702 [12:45<01:10, 2.03it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1559/1702 [12:45<01:10, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1560/1702 [12:46<01:09, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1561/1702 [12:46<01:09, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1562/1702 [12:46<01:08, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1563/1702 [12:46<01:08, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1564/1702 [12:46<01:07, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1565/1702 [12:46<01:07, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1566/1702 [12:47<01:06, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1567/1702 [12:47<01:06, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1568/1702 [12:47<01:05, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1569/1702 [12:47<01:05, 2.04it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1570/1702 [12:47<01:04, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1571/1702 [12:47<01:04, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1572/1702 [12:48<01:03, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1573/1702 [12:48<01:02, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 92% 1574/1702 [12:48<01:02, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1575/1702 [12:48<01:01, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1576/1702 [12:48<01:01, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1577/1702 [12:48<01:00, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1578/1702 [12:49<01:00, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1579/1702 [12:49<00:59, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1580/1702 [12:49<00:59, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1581/1702 [12:49<00:58, 2.05it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1582/1702 [12:49<00:58, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1583/1702 [12:49<00:57, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1584/1702 [12:49<00:57, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1585/1702 [12:50<00:56, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1586/1702 [12:50<00:56, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1587/1702 [12:50<00:55, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1588/1702 [12:50<00:55, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1589/1702 [12:50<00:54, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1590/1702 [12:50<00:54, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 93% 1591/1702 [12:51<00:53, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1592/1702 [12:51<00:53, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1593/1702 [12:51<00:52, 2.06it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1594/1702 [12:51<00:52, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1595/1702 [12:51<00:51, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1596/1702 [12:51<00:51, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1597/1702 [12:52<00:50, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1598/1702 [12:52<00:50, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1599/1702 [12:52<00:49, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1600/1702 [12:52<00:49, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1601/1702 [12:52<00:48, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1602/1702 [12:52<00:48, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1603/1702 [12:53<00:47, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1604/1702 [12:53<00:47, 2.07it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1605/1702 [12:53<00:46, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1606/1702 [12:53<00:46, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1607/1702 [12:53<00:45, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 94% 1608/1702 [12:53<00:45, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1609/1702 [12:54<00:44, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1610/1702 [12:54<00:44, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1611/1702 [12:54<00:43, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1612/1702 [12:54<00:43, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1613/1702 [12:54<00:42, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1614/1702 [12:54<00:42, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1615/1702 [12:55<00:41, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1616/1702 [12:55<00:41, 2.08it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1617/1702 [12:55<00:40, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1618/1702 [12:55<00:40, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1619/1702 [12:55<00:39, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1620/1702 [12:55<00:39, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1621/1702 [12:56<00:38, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1622/1702 [12:56<00:38, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1623/1702 [12:56<00:37, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1624/1702 [12:56<00:37, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 95% 1625/1702 [12:56<00:36, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1626/1702 [12:56<00:36, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1627/1702 [12:57<00:35, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1628/1702 [12:57<00:35, 2.09it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1629/1702 [12:57<00:34, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1630/1702 [12:57<00:34, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1631/1702 [12:57<00:33, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1632/1702 [12:57<00:33, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1633/1702 [12:58<00:32, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1634/1702 [12:58<00:32, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1635/1702 [12:58<00:31, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1636/1702 [12:58<00:31, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1637/1702 [12:58<00:30, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1638/1702 [12:58<00:30, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1639/1702 [12:59<00:29, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1640/1702 [12:59<00:29, 2.10it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1641/1702 [12:59<00:28, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 96% 1642/1702 [12:59<00:28, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1643/1702 [12:59<00:27, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1644/1702 [12:59<00:27, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1645/1702 [13:00<00:27, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1646/1702 [13:00<00:26, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1647/1702 [13:00<00:26, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1648/1702 [13:00<00:25, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1649/1702 [13:00<00:25, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1650/1702 [13:00<00:24, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1651/1702 [13:00<00:24, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1652/1702 [13:01<00:23, 2.11it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1653/1702 [13:01<00:23, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1654/1702 [13:01<00:22, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1655/1702 [13:01<00:22, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1656/1702 [13:01<00:21, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1657/1702 [13:01<00:21, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1658/1702 [13:02<00:20, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 97% 1659/1702 [13:02<00:20, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1660/1702 [13:02<00:19, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1661/1702 [13:02<00:19, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1662/1702 [13:02<00:18, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1663/1702 [13:02<00:18, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1664/1702 [13:03<00:17, 2.12it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1665/1702 [13:03<00:17, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1666/1702 [13:03<00:16, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1667/1702 [13:03<00:16, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1668/1702 [13:03<00:15, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1669/1702 [13:03<00:15, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1670/1702 [13:04<00:15, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1671/1702 [13:04<00:14, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1672/1702 [13:04<00:14, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1673/1702 [13:04<00:13, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1674/1702 [13:04<00:13, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1675/1702 [13:04<00:12, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 98% 1676/1702 [13:05<00:12, 2.13it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1677/1702 [13:05<00:11, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1678/1702 [13:05<00:11, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1679/1702 [13:05<00:10, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1680/1702 [13:05<00:10, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1681/1702 [13:05<00:09, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1682/1702 [13:06<00:09, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1683/1702 [13:06<00:08, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1684/1702 [13:06<00:08, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1685/1702 [13:06<00:07, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1686/1702 [13:06<00:07, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1687/1702 [13:06<00:06, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1688/1702 [13:07<00:06, 2.14it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1689/1702 [13:07<00:06, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1690/1702 [13:07<00:05, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1691/1702 [13:07<00:05, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1692/1702 [13:07<00:04, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 99% 1693/1702 [13:07<00:04, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 100% 1694/1702 [13:08<00:03, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 100% 1695/1702 [13:08<00:03, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 100% 1696/1702 [13:08<00:02, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 100% 1697/1702 [13:08<00:02, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 100% 1698/1702 [13:08<00:01, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 100% 1699/1702 [13:08<00:01, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 100% 1700/1702 [13:09<00:00, 2.15it/s, loss=0.158, v_num=0, train_loss=0.149]
Epoch 0: 100% 1701/1702 [13:09<00:00, 2.16it/s, loss=0.158, v_num=0, train_loss=0.149]/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:37: RuntimeWarning: The metric you returned None must be a `torch.Tensor` instance, checkpoint not saved HINT: what is the value of loss in validation_epoch_end()?
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:37: RuntimeWarning: Can save best model only with loss available, skipping.
warnings.warn(*args, **kwargs)
Epoch 0: 100% 1702/1702 [13:09<00:00, 2.16it/s, loss=0.158, v_num=0, train_loss=0.149, valid_loss=0.162]
Epoch 1: 73% 1246/1702 [10:19<03:46, 2.01it/s, loss=0.161, v_num=0, train_loss=0.191, valid_loss=0.162]/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Detected KeyboardInterrupt, attempting graceful shutdown...
warnings.warn(*args, **kwargs)
Saving latest checkpoint..
Epoch 1: 73% 1246/1702 [10:19<03:46, 2.01it/s, loss=0.161, v_num=0, train_loss=0.191, valid_loss=0.162]
This issue can be reproduced in the current stable release and also in 0.10.1rc1.
|
Consider making the docs default to the latest stable version instead of the latest
|
[
"docs"
] |
๐ Documentation
Hi,
I just started using PyTorch Lightning and got a bit confused by the fact that pytorch-lightning.readthedocs.io defaults to the latest version (including release candidates), while running pip install pytorch-lightning (without specifying a version) will (correctly) default to the latest stable version.
This is quite confusing for a new user trying to go through a tutorial like this one, as it instructs them to just run pip install pytorch-lightning and follow along. However, this will (currently) install v0.9.0, while the tutorial uses features which only work in newer RC versions, like using self.log(...).
I guess this could easily be solved by defaulting pytorch-lightning.readthedocs.io to the latest stable version, or changing all pip install instructions to pip install pytorch-lightning==x.x.x.
|
Enable .write and .write_dict from LM
|
[
"feature"
] |
Enable .write and .write_dict from LM
|
Convert step_ and epoch_ prefixes to postfix
|
[
"feature"
] |
Convert step_ and epoch_ prefixes to postfix
|
enable passing in connectors
|
[
"feature",
"won't fix"
] |
Apex, SLURM, etc. can all be configured via connectors:
Trainer(connectors=[...])
Alternatively, call them plugins:
Trainer(plugins=[...])
|
enable test loop in fast_dev_run
|
[
"feature",
"won't fix"
] |
check the test step during fast_dev_run
|
merge new metrics API
|
[
"feature",
"priority: 0"
] | |
Lightning Module's to_disk should use fsspec to write reusults
|
[
"feature",
"help wanted"
] |
๐ Feature
use fsspec here to support more storage backends besides local disk:
pytorch-lightning/pytorch_lightning/trainer/supporters.py
Lines 138 to 165
in
cea5f1f
def to_disk(self):
    """Write predictions to file(s).
    """
    for filename, predictions in self.predictions.items():
        # Absolute path to defined prediction file. rank added to name if in multi-gpu environment
        outfile = Path(filename).absolute()
        outfile = outfile.with_name(
            f"{outfile.stem}{f'_rank_{self.global_rank}' if self.world_size > 1 else ''}{outfile.suffix}"
        )
        outfile.parent.mkdir(exist_ok=True, parents=True)
        # Convert any tensor values to list
        predictions = {k: v if not isinstance(v, Tensor) else v.tolist() for k, v in predictions.items()}
        # Check if all features for this file add up to same length
        feature_lens = {k: len(v) for k, v in predictions.items()}
        if len(set(feature_lens.values())) != 1:
            raise ValueError('Mismatching feature column lengths found in stored EvalResult predictions.')
        # Switch predictions so each entry has its own dict
        outputs = []
        for values in zip(*predictions.values()):
            output_element = {k: v for k, v in zip(predictions.keys(), values)}
            outputs.append(output_element)
        # Write predictions for current file to disk
        torch.save(outputs, outfile)
cc @nateraw
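For concreteness, a rough sketch of what this could look like (a sketch only, not a drop-in patch; the helper name save_predictions and the in-memory buffering are just illustrative):
import io

import fsspec
import torch


def save_predictions(outputs, filename: str) -> None:
    # The scheme in `filename` (e.g. "s3://", "gs://", or a plain local path)
    # selects the fsspec filesystem backend.
    buffer = io.BytesIO()
    torch.save(outputs, buffer)
    with fsspec.open(filename, "wb") as fp:
        fp.write(buffer.getvalue())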
|
[Tensorboard] Storing arrays, lists and more complicated structures
|
[
"question"
] |
Quick question.
Is there any way to store arrays, lists, or more complicated structures in TensorBoard, rather than just scalars, images, grids, etc.? Or will I need to implement a simple text-file saving method on my own?
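For context, something along these lines is what I have in mind (a minimal sketch; I'm assuming TensorBoardLogger, where self.logger.experiment is the underlying torch.utils.tensorboard.SummaryWriter):
import torch
import pytorch_lightning as pl


class MyModule(pl.LightningModule):
    def validation_epoch_end(self, outputs):
        # Histograms and free-form text are supported by the raw SummaryWriter,
        # in addition to scalars and images.
        values = torch.randn(1000)
        self.logger.experiment.add_histogram("my_array", values, self.global_step)
        self.logger.experiment.add_text("my_list", str([1, 2, 3]), self.global_step)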
|
Accessing logger's data at the end of a training
|
[
"question"
] |
I'd like to know if it is possible to access all the logs that were created during the training process at its end. I'd like to do something with the data. How do I access the logger's data?
Is it even possible with the code that exists, or should I create a new data structure in my class to store it along with the logger's actions?
I'm using 0.9.1rc4.
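In case it clarifies what I'm after, my current workaround idea is a small callback that copies trainer.callback_metrics after every validation run (a hypothetical sketch; the class name is mine):
import pytorch_lightning as pl


class MetricsHistory(pl.Callback):
    """Keeps an in-memory copy of the logged metrics after each validation run."""

    def __init__(self):
        self.history = []

    def on_validation_end(self, trainer, pl_module):
        # trainer.callback_metrics holds the most recently logged values
        self.history.append(dict(trainer.callback_metrics))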
|
Multi-GPU training. learning rate is all zero in tensorboard .
|
[
"bug",
"help wanted",
"3rd party"
] |
๐ Bug
I used LearningRateLogger to log the learning rate, but in TensorBoard the learning rate is all zero.
To Reproduce
Steps to reproduce the behavior:
install 0.10.0rc1
set gpus in the Trainer to a value greater than 1
use lr_logger = LearningRateLogger(logging_interval='step') to log the learning rate
Code sample
pseudocode
class BartFineTuner(pl.LightningModule):
    def __init__(self, hparams, learning_rate=None):
        super(BartFineTuner, self).__init__()
        self.hparams = hparams
        self.learning_rate = learning_rate
        model_name = hparams.model_name_or_path
        self.model, self.tokenizer = load_model_and_tokenizer(model_name, config_dict)
        self.loss = nn.CrossEntropyLoss(ignore_index=self.tokenizer.pad_token_id)

    def is_logger(self):
        return self.trainer.global_rank <= 0

    def forward(
        self, input_ids, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, lm_labels=None,
        use_cache=False
    ):
        return self.model(
            input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
            decoder_attention_mask=decoder_attention_mask,
            use_cache=use_cache
        )

    def _step(self, batch):
        tgt_ids = batch["target_ids"]
        decoder_input_ids = shift_tokens_right(tgt_ids, self.tokenizer.pad_token_id)
        outputs = self(
            input_ids=batch["source_ids"],
            attention_mask=batch["source_mask"],
            decoder_input_ids=decoder_input_ids,
            decoder_attention_mask=None,
            use_cache=False
        )
        target = batch["target_ids"]
        if self.hparams.epsilon > 0:
            output_softmax = F.log_softmax(outputs[0], dim=-1)
            loss, _ = label_smoothed_nll_loss(output_softmax, target, self.hparams.epsilon,
                                              ignore_index=self.tokenizer.pad_token_id)
        else:
            ce_loss_fct = torch.nn.CrossEntropyLoss(ignore_index=self.tokenizer.pad_token_id)
            loss = ce_loss_fct(outputs[0].view(-1, outputs[0].shape[-1]), target.view(-1))
        return loss

    def training_step(self, batch, batch_idx):
        loss = self._step(batch)
        self.log('train_loss', loss)
        return {"loss": loss}

    def training_epoch_end(self, outputs):
        avg_train_loss = torch.stack([x["loss"] for x in outputs]).mean()
        self.log("avg_train_loss", avg_train_loss)

    def validation_step(self, batch, batch_idx):
        loss = self._step(batch)
        return {"val_loss": loss}

    def validation_epoch_end(self, outputs):
        avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
        self.log("val_loss", avg_loss)
        return {"avg_val_loss": avg_loss}

    def total_steps(self) -> int:
        """The number of total training steps that will be run. Used for lr scheduler purposes."""
        num_devices = max(1, self.hparams.n_gpu)
        effective_batch_size = self.hparams.train_batch_size * self.hparams.gradient_accumulation_steps * num_devices
        dataset_size = len(self.train_loader.dataset)
        return (dataset_size / effective_batch_size) * self.hparams.num_train_epochs

    def setup(self, mode):
        if mode == "fit":
            self.train_loader = self.get_dataloader()

    def configure_optimizers(self):
        model = self.model
        no_decay = ["bias", "LayerNorm.weight"]
        optimizer_grouped_parameters = [
            {
                "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
                "weight_decay": self.hparams.weight_decay,
            },
            {
                "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
                "weight_decay": 0.0,
            },
        ]
        lr = self.hparams.learning_rate
        optimizer = AdamW(optimizer_grouped_parameters, lr=lr, eps=self.hparams.adam_epsilon)
        self.opt = optimizer
        scheduler = get_polynomial_decay_schedule_with_warmup(
            self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=self.total_steps()
        )
        return [optimizer], [scheduler]

    def get_dataloader(self):
        train_dataset = get_dataset(tokenizer=self.tokenizer, args=self.hparams, type_path="train",
                                    train_type=self.hparams.train_type)
        dataloader = DataLoader(train_dataset, batch_size=self.hparams.train_batch_size, drop_last=True, shuffle=True,
                                num_workers=1)
        return dataloader


tb_logger = pl_loggers.TensorBoardLogger(os.path.join("logs", self.args["name"], args.train_type))
lr_logger = LearningRateLogger(logging_interval='step')
train_params = dict(
    gpus=2,
    early_stop_callback=False,
    precision=16 if args.fp_16 else 32,
    amp_level='O1',
    checkpoint_callback=False,
    callbacks=[lr_logger],
    logger=tb_logger,
)
model = BartFineTuner(args)
trainer = pl.Trainer(**train_params)
trainer.fit(model)
Expected behavior
Log real learning rate
Environment
CUDA:
- GPU:
- Tesla P4
- Tesla P4
- available: True
- version: 10.2
Packages:
- numpy: 1.18.5
- pyTorch_debug: False
- pyTorch_version: 1.6.0
- pytorch-lightning: 0.10.0rc1
- tqdm: 4.50.0
System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.7.9
- version: #32~16.04.2-Ubuntu SMP Thu Jul 20 10:19:48 UTC 2017
Additional context
|
copy badges to release package
|
[
"feature",
"help wanted",
"good first issue"
] |
๐ Feature
Parse the Readme and replace generated badges by downloaded ones
process in setup.py:
1. parse all badges from online CI and save them as png (svg is problematic, does not work for all platforms)
2. replace badges in Readme with the downloaded ones
Motivation
The PyPI page does not work well with generated badges, and projecting the master state onto a given release does not make sense... also no need to keep links to CI :]
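A rough sketch of the parse-and-replace step (illustrative only; it saves whatever bytes the badge service returns and does not handle the SVG-to-PNG conversion, which would need extra tooling):
import re
import urllib.request
from pathlib import Path


def freeze_badges(readme_path: str = "README.md", out_dir: str = "docs/badges") -> str:
    text = Path(readme_path).read_text()
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    # crude pattern: markdown images whose URL looks like a badge
    for i, (_alt, url) in enumerate(re.findall(r"!\[(.*?)\]\((https?://[^)]*badge[^)]*)\)", text)):
        local = f"{out_dir}/badge_{i}.png"
        urllib.request.urlretrieve(url, local)  # download a frozen copy
        text = text.replace(url, local)         # point the Readme at it
    return text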
|
UserWarning for testing_epoch_end in 0.9.1rc4
|
[
"bug",
"help wanted"
] |
Just reporting a user warning that was displayed to me:
YYY\anaconda3\envs\pt_cpu\lib\site-packages\pytorch_lightning\utilities\distributed.py:37: UserWarning: The testing_epoch_end should not return anything as of 9.1.to log, use self.log(...) or self.write(...) directly in the LightningModule warnings.warn(*args, **kwargs)
although I don't have the testing_epoch_end method in my class. UPDATE: The warning does not appear when I implement it.
I'm using 0.9.1rc4.
If it's being resolved elsewhere and I missed that, feel free to close the issue.
|
Broken links in README.md.
|
[
"docs"
] |
๐ Documentation
Hello,
There are broken links in README.md, in the Dueling-DQN and Reinforce sections.
|
Plotting multiple metrics in a single graph
|
[
"feature",
"help wanted"
] |
๐ Feature
Can we have multiple metrics plotted on the same graph in the TensorBoard logging done by Lightning?
That is, plotting the dictionary values passed to log_dict in the same graph.
Motivation
Pitch
Alternatives
Additional context
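A possible workaround today, assuming the TensorBoard logger, is to call the underlying SummaryWriter directly; add_scalars groups several curves under one main tag so they are drawn in the same chart:
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/example")
for step in range(100):
    train_loss = 1.0 / (step + 1)
    val_loss = 1.2 / (step + 1)
    # both curves end up in a single "loss" chart in TensorBoard
    writer.add_scalars("loss", {"train": train_loss, "val": val_loss}, global_step=step)
writer.close()
Inside a LightningModule the same call should be reachable as self.logger.experiment.add_scalars(...) when the TensorBoard logger is attached.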
|
limit builds for Docs
|
[
"feature",
"ci"
] |
๐ Feature
limit builds for PRs that are strictly related to docs, so skip:
Conda & Dockers & Full testing [GH actions]
TPU testing [CircleCI]
GPU testing [Drone CI]
Motivation
lower the resources requirements
Additional context
Btw, if you need to skip GPU testing, use the magic word [CI SKIP] in the commit or PR name
https://docs.drone.io/pipeline/skipping/
|
non intuitive batch_size in ddp
|
[
"bug",
"help wanted"
] |
Is there a way in PyTorch Lightning to set your desired global batch size, say 512, and then have the effective batch size per process computed automatically (since the effective batch size is normally batch_size * num_gpus)? Right now your effective batch size scales with the number of GPUs, so these calculations must be done outside of PyTorch Lightning (as far as my tests have shown). This seems like something PL could/should be able to handle. You'd likely also have to set a maximum per-process batch_size so that it could determine an accumulate_grad_batches value and not use too much memory.
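Lightning does not do this automatically today; a small helper outside the Trainer (names are illustrative) could split the desired global batch size into the per-process DataLoader batch size:
def per_device_batch_size(global_batch_size: int, num_gpus: int, accumulate_grad_batches: int = 1) -> int:
    num_devices = max(1, num_gpus)
    divisor = num_devices * accumulate_grad_batches
    if global_batch_size % divisor != 0:
        raise ValueError("global batch size must be divisible by num_gpus * accumulate_grad_batches")
    return global_batch_size // divisor


# a global batch of 512 on 4 GPUs -> DataLoader(batch_size=128) in each DDP process
assert per_device_batch_size(512, 4) == 128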
|
mlflow logger complains about missing run_id
|
[
"bug",
"help wanted"
] |
๐ Bug
When using MLflow logger, log_param() function require run_id
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-23-d048545e1854> in <module>
9 trainer.fit(model=experiment,
10 train_dataloader=train_dl,
---> 11 val_dataloaders=test_dl)
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
452 self.call_hook('on_fit_start')
453
--> 454 results = self.accelerator_backend.train()
455 self.accelerator_backend.teardown()
456
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py in train(self)
51
52 # train or test
---> 53 results = self.train_or_test()
54 return results
55
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/accelerators/base_accelerator.py in train_or_test(self)
48 results = self.trainer.run_test()
49 else:
---> 50 results = self.trainer.train()
51 return results
52
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in train(self)
499
500 # run train epoch
--> 501 self.train_loop.run_training_epoch()
502
503 if self.max_steps and self.max_steps <= self.global_step:
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py in run_training_epoch(self)
525 # TRAINING_STEP + TRAINING_STEP_END
526 # ------------------------------------
--> 527 batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
528
529 # when returning -1 from train_step, we end epoch early
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py in run_training_batch(self, batch, batch_idx, dataloader_idx)
660 opt_idx,
661 optimizer,
--> 662 self.trainer.hiddens
663 )
664
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py in training_step_and_backward(self, split_batch, batch_idx, opt_idx, optimizer, hiddens)
739 """
740 # lightning module hook
--> 741 result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
742
743 if result is None:
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py in training_step(self, split_batch, batch_idx, opt_idx, hiddens)
300 with self.trainer.profiler.profile('model_forward'):
301 args = self.build_train_args(split_batch, batch_idx, opt_idx, hiddens)
--> 302 training_step_output = self.trainer.accelerator_backend.training_step(args)
303 training_step_output = self.trainer.call_hook('training_step_end', training_step_output)
304
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py in training_step(self, args)
59 output = self.__training_step(args)
60 else:
---> 61 output = self.__training_step(args)
62
63 return output
~/anaconda3/envs/ns_dl_2020_torch/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_backend.py in __training_step(self, args)
67 batch = self.to_device(batch)
68 args[0] = batch
---> 69 output = self.trainer.model.training_step(*args)
70 return output
71
<ipython-input-21-31b6dc3ffd67> in training_step(self, batch, batch_idx, optimizer_idx)
28 for key, val in train_loss.items():
29 self.log(key, val.item())
---> 30 self.logger.experiment.log_param(key=key, value=val.item())
31
32 return train_loss
TypeError: log_param() missing 1 required positional argument: 'run_id'
Expected behavior
The MLFlowLogger should behave the same as the mlflow API, where only the key and value arguments are needed for the log_param() function.
Code sample
mlf_logger = MLFlowLogger(
experiment_name='test',
tracking_uri="file:./ml-runs"
)
class VAEexperiment(LightningModule):
...
def training_step(self, batch, batch_idx, optimizer_idx = 0):
....
for key, val in train_loss.items():
self.logger.experiment.log_param(key=key, value=val.item())
....
return train_loss
trainer = Trainer(logger=mlf_logger,
default_root_dir='../logs',
early_stop_callback=False,
gpus=1,
auto_select_gpus=True,
max_epochs=40)
trainer.fit(model=experiment,
train_dataloader=train_dl,
val_dataloaders=test_dl)
Environment
pytorch-lightning==0.10.0
torch==1.6.0
torchsummary==1.5.1
torchvision==0.7.0
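A likely workaround, assuming MLFlowLogger exposes the active run via logger.run_id (it appears to in recent versions): logger.experiment is an mlflow.tracking.MlflowClient, whose log_param signature is log_param(run_id, key, value), unlike the fluent mlflow.log_param(key, value):
from pytorch_lightning.loggers import MLFlowLogger

mlf_logger = MLFlowLogger(experiment_name="test", tracking_uri="file:./ml-runs")

# inside training_step this would read:
#   self.logger.experiment.log_param(self.logger.run_id, key, val.item())
mlf_logger.experiment.log_param(mlf_logger.run_id, "batch_size", 64)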
|
How to break a single large input among different GPUs?
|
[
"question",
"won't fix"
] |
Please check out more details here.
OS: [Ubuntu 18.04]
Packaging [pip]
PyTorch Version [e.g. 1.6]
|
Metrics return unexpected results in 0.10.0rc1
|
[
"bug",
"help wanted"
] |
๐ Bug
There is a chance I don't understand how this works, but the sklearn, functional, and tensor metrics all seem to behave unexpectedly, specifically precision and recall.
To Reproduce
I used a small dummy example for y_true and y_pred for a 2 class classification problem
Code sample
import pytorch_lightning.metrics as plmetrics
import pytorch_lightning as pl
import torch
# Dummy data
y_pred = torch.Tensor([1,0,1,0,1,1])
y_true = torch.Tensor([0,1,1,0,0,0])
## PL scikit learn test
plsk_accuracy = pl.metrics.sklearns.Accuracy()
plsk_precision = pl.metrics.sklearns.Precision()
plsk_recall = pl.metrics.sklearns.Recall()
accuracy = plsk_accuracy(y_pred, y_true)
precision = plsk_precision(y_pred, y_true)
recall = plsk_recall(y_pred, y_true)
print("PL scikit metrics precision: {}, recall: {}, accuracy: {}".format(precision, recall, accuracy))
PL scikit metrics precision: 0.375, recall: 0.375, accuracy: 0.3333333432674408
# Test for class based metrics
pl_accuracy = plmetrics.classification.Accuracy(num_classes=2)
pl_precision = plmetrics.classification.Precision(num_classes=2)
pl_recall = plmetrics.classification.Recall(num_classes=2)
accuracy = pl_accuracy(y_pred, y_true)
precision = pl_precision(y_pred, y_true)
recall = pl_recall(y_pred, y_true)
print("PL class metrics precision: {}, recall: {}, accuracy: {}".format(precision, recall, accuracy))
PL class metrics precision: 0.3333333432674408, recall: 0.3333333432674408, accuracy: 0.3333333432674408
# Normal scikit test
from sklearn.metrics import accuracy_score, precision_score, recall_score
sk_recall = recall_score(y_true.to("cpu").numpy(), y_pred.to("cpu").numpy())
sk_precision = precision_score(y_true.to("cpu").numpy(), y_pred.to("cpu").numpy())
sk_accuracy = accuracy_score(y_true.to("cpu").numpy(), y_pred.to("cpu").numpy())
print("precision: {}, recall: {}, accuracy: {}".format(sk_precision, sk_recall, sk_accuracy))
precision: 0.25, recall: 0.5, accuracy: 0.3333333333333333
Expected behavior
I expect that all 3 would yield the same precision, recall, and accuracy numbers; specifically, they should match scikit-learn's results in the toy example above.
Environment
CUDA:
GPU:
Tesla V100-SXM2-16GB
available: True
version: 10.1
Packages:
numpy: 1.18.1
pyTorch_debug: False
pyTorch_version: 1.5.0
pytorch-lightning: 0.10.0rc1
tqdm: 4.42.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.7.7
version: #1 SMP Wed Jun 24 19:07:39 UTC 2020
Additional context
|
How trainer figures out number of batches per epoch.
|
[
"question",
"won't fix"
] |
@JorDikk and I recently found out that Trainer figures out the total number of batches per epoch through the Sampler __len__ and not the Dataset __len__.
While for most cases the size of sampler would correspond to the total number of indices in the dataset (train and val),
we were using a hierarchical dataset, where each individual dataset was a collection of smaller datasets.
Our sampler too, then was a collection of smaller samplers. This created a problem as for our base
sampler, the size was the number of smaller datasets, rather than the data indices.
The fix was very easy, but it would help to mention it somewhere in the Docs to avoid much confusion.
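A minimal sketch of the kind of hierarchical sampler involved (index offsetting for a real ConcatDataset is omitted); the key point is that __len__ must return the total number of indices yielded, because the Trainer derives the number of batches per epoch from len(sampler):
from torch.utils.data import Sampler


class HierarchicalSampler(Sampler):
    def __init__(self, child_samplers):
        self.child_samplers = list(child_samplers)

    def __iter__(self):
        for sampler in self.child_samplers:
            yield from sampler

    def __len__(self):
        # returning len(self.child_samplers) here (the number of smaller
        # datasets) was the bug described above; the Trainer needs the
        # total number of indices instead
        return sum(len(s) for s in self.child_samplers)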
|
The use of save_hyperparameters() is currently confusing (due to name and docs)
|
[
"feature",
"docs",
"discussion",
"design"
] |
๐ Documentation & function name change
The following documentation page is relevant here: https://pytorch-lightning.readthedocs.io/en/stable/weights_loading.html
The use of self.save_hyperparameters() is currently confusing for the following 3 reasons:
The role of this function is unclear. In the documentation this function is not mentioned once under the header "Checkpoint saving". Also, all arguments given to a LightningModule will be saved when calling trainer.save_checkpoint(), whether save_hyperparameters() has been used or not.
a. Edit: For example, are there any benefits of calling self.save_hyperparameters('arg1', 'arg3') over just assigning directly to self.hparams? E.g. like: self.hparams['arg1'] = arg1
The name save would indicate it is used to store the hyper parameters somewhere (e.g. disk). You would also expect that this function is not necessary when loading a .ckpt file (I don't want to change the self.hparams, and therefore do not want to save anything).
The documentation only mentions this function under:
But if you don't want to use the values saved in the checkpoint, pass in your own here
class LitModel(LightningModule):
def __init__(self, in_dim, out_dim):
super().__init__()
self.save_hyperparameters()
self.l1 = nn.Linear(self.hparams.in_dim, self.hparams.out_dim)
So my understanding was that save_hyperparameters() is only to be used when you load a checkpoint AND want to overwrite the hyperparameters found in that checkpoint.
This resulted in me being stuck on a "hparams not restored when loading ckpt" issue (https://forums.pytorchlightning.ai/t/hparams-not-restored-when-using-load-from-checkpoint-default-argument-values-are-the-problem/237) for longer than I would like to admit.
Possible solutions
Clarify why one would like to use save_hyperparameters() over just not calling the function. Why was this function created? E.g. indicate that it moves arguments to self.hparams, which are used for automatic logging by e.g. Allegro TRAINS.
Change the name to something like init_hyperparameters() or arguments_to_hyperparameters().
Mention in documentation under Loading that this function is necessary to restore .ckpt hyperparameters.
End word
What are your thoughts about this?
Tagging
@Borda
|
what's the default checkpoint monitor in 0.10.0?
|
[
"question"
] |
what's the default checkpoint monitor in 0.10.0? loss or val_loss returned in validation_step?
|
Add Aim logger
|
[
"help wanted",
"won't fix",
"working as intended",
"logger"
] |
๐ Feature
Implement AimLogger to integrate with Aim.
Motivation
Gor from Aim here. I am helping build Aim, an open source project that helps to easily track and explore 100s of AI experiments in minutes. I figured it would be good for both parties to integrate Aim with PL.
Solution/Pitch
It appears the following needs to be done:
Implement AimLogger here
Add tests here
Write documentation here
I will prepare a PR with said integration and I only want folks to review/merge it.
Additional context
|
Mismatch between docstring and code regarding when `on_load_checkpoint` hook is called
|
[
"bug",
"help wanted",
"docs"
] |
๐ Bug
The docstring of the on_load_checkpoint hook says that it is called before trying to load_state_dict:
pytorch-lightning/pytorch_lightning/core/saving.py
Lines 203 to 206
in
cea5f1f
def on_load_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
"""
Do something with the checkpoint.
Gives model a chance to load something before ``state_dict`` is restored.
However, in LightningModule.load_from_checkpoint, it is called after load_state_dict:
pytorch-lightning/pytorch_lightning/core/saving.py
Lines 195 to 199
in
cea5f1f
# load the state_dict on the model automatically
model.load_state_dict(checkpoint['state_dict'], strict=strict)
# give model a chance to load something
model.on_load_checkpoint(checkpoint)
Additional context
Related discussion on Slack: https://pytorch-lightning.slack.com/archives/CQXV8BRH9/p1602168345184000
I think the docstring is correct and the call to on_load_checkpoint should be moved right before load_state_dict to give the model a chance to call setup.
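To illustrate why the order matters, here is a hypothetical module whose layer is only built inside on_load_checkpoint (the extra "hidden_size" key is assumed to be stored by on_save_checkpoint); with the current order, load_state_dict would run before the layer exists:
import torch
from pytorch_lightning import LightningModule


class LazyModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = None  # built lazily from checkpoint metadata

    def on_save_checkpoint(self, checkpoint):
        checkpoint["hidden_size"] = self.layer.in_features

    def on_load_checkpoint(self, checkpoint):
        # must run *before* load_state_dict so the weights have somewhere to go
        self.layer = torch.nn.Linear(checkpoint["hidden_size"], 2)

    def forward(self, x):
        return self.layer(x)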
|
A model interpretability feature - visualize losses and data samples
|
[
"feature",
"help wanted",
"won't fix"
] |
๐ Feature
An interpretability feature that allows you to log top model losses and visualize examples with the losses.
Motivation
To better understand the working of a trained model, it can be useful to analyze the examples in which your losses are doing well/bad. It would work particularly well with data that's interpretable like images and audio. Targeted actions can also be taken after such an analysis, such as creating specific data augmentation or training more on the harder examples.
Pitch
I don't have any solid information yet on the implementation or usefulness. Creating this issue to start a discussion on a potentially useful feature. Shared some thoughts on implementation below
A method in which the trainer runs validation while doing the appropriate logging of top losses and logging the data sample itself or index of the sample in the dataset. After this, the results can be saved as files or plotted
A separate class that handles interpretability. Something like what fastai has - https://docs.fast.ai/interpret
|
tensorboard two value every step
|
[
"bug",
"help wanted"
] |
๐ Bug
two loss values are logged every step
To Reproduce
https://colab.research.google.com/drive/1d7a3fwzZOQobFk58QXEqmzziQf1-GJYS?usp=sharing
Code sample
https://colab.research.google.com/drive/1d7a3fwzZOQobFk58QXEqmzziQf1-GJYS?usp=sharing
import os
import torch
from torch.utils.data import Dataset
from pytorch_lightning import Trainer, LightningModule
import torch
from pytorch_lightning.callbacks import LearningRateMonitor
import logging
import os
import pytorch_lightning as pl
import argparse
from pytorch_lightning import loggers as pl_loggers
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class BoringModel(LightningModule):
def __init__(self):
"""
Testing PL Module
Use as follows:
- subclass
- modify the behavior for what you want
class TestModel(BaseTestModel):
def training_step(...):
# do your own thing
or:
model = BaseTestModel()
model.training_epoch_end = None
"""
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def loss(self, batch, prediction):
# An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def step(self, x):
x = self.layer(x)
out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
return out
def training_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
self.log("train_loss:", loss, on_epoch=True)
return {"loss": loss}
def training_step_end(self, training_step_outputs):
return training_step_outputs
def training_epoch_end(self, outputs) -> None:
torch.stack([x["loss"] for x in outputs]).mean()
def validation_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"x": loss}
def validation_epoch_end(self, outputs) -> None:
torch.stack([x['x'] for x in outputs]).mean()
def test_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"y": loss}
def test_epoch_end(self, outputs) -> None:
torch.stack([x["y"] for x in outputs]).mean()
def configure_optimizers(self):
optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
return [optimizer], [lr_scheduler]
def run_test():
class TestModel(BoringModel):
def on_train_epoch_start(self) -> None:
print('override any method to prove your bug')
# fake data
train_data = torch.utils.data.DataLoader(RandomDataset(32, 2000), batch_size=4)
val_data = torch.utils.data.DataLoader(RandomDataset(32, 2000), batch_size=4)
test_data = torch.utils.data.DataLoader(RandomDataset(32, 2000), batch_size=4)
# model
tb_logger = pl_loggers.TensorBoardLogger(os.path.join("logs", "test"))
lr_logger = LearningRateMonitor(logging_interval='step')
model = TestModel()
trainer = Trainer(
default_root_dir=os.getcwd(),
# limit_train_batches=1,
# limit_val_batches=1,
max_epochs=1,
weights_summary=None,
accumulate_grad_batches=2,
gpus=1,
gradient_clip_val=1,
callbacks=[lr_logger],
logger=tb_logger,
log_every_n_steps=1
)
trainer.fit(model, train_data, val_data)
trainer.test(test_dataloaders=test_data)
if __name__ == '__main__':
run_test()
Expected behavior
only one loss value should be logged every step
Environment
colab
CUDA:
GPU:
Tesla K80
available: True
version: 10.1
Packages:
numpy: 1.18.5
pyTorch_debug: False
pyTorch_version: 1.6.0+cu101
pytorch-lightning: 1.0.0rc2
tqdm: 4.41.1
System:
OS: Linux
architecture:
64bit
processor: x86_64
python: 3.6.9
version: #1 SMP Thu Jul 23 08:00:38 PDT 2020
|
on_train_epoch_end and on_epoch_end are out of order
|
[
"bug",
"help wanted"
] |
๐ Bug
Consider the following order in which the LightningModule hooks are called from #2816 (I have confirmed that in PyTorch Lightning version 0.10 this is still an issue):
on_epoch_start
on_train_epoch_start
on_validation_start
on_validation_epoch_start
on_validation_epoch_end
on_validation_end
on_epoch_end
on_train_epoch_end
Naturally one would expect the opening and closing scope hooks to match. However, on_train_epoch_end is called after on_epoch_end, which seems incorrect. It is natural to open the epoch scope before the train epoch scope (as is being done currently), in which case the epoch scope should be closed after closing the train epoch scope (which is not currently being done).
PyTorch Version (e.g., 1.0): 1.6.0
OS (e.g., Linux): Ubuntu 18.04
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.8.5
CUDA/cuDNN version: NA
GPU models and configuration: NA
Any other relevant information: NA
|
Can't reproduce logistic regression example
|
[
"bug",
"help wanted"
] |
๐ Bug
I am unable to run the logistic regression example. During training, I get an error which ends in:
/usr/local/lib/python3.6/dist-packages/pl_bolts/models/regression/logistic_regression.py in validation_step(self, batch, batch_idx)
81 x = x.view(x.size(0), -1)
82 y_hat = self(x)
---> 83 acc = accuracy(y_hat, y)
84 return {'val_loss': F.cross_entropy(y_hat, y), 'acc': acc}
85
TypeError: 'module' object is not callable
Notably, I modified the example to not use TPUs.
Example code I follow is:
https://pytorch-lightning-bolts.readthedocs.io/en/latest/classic_ml.html#logistic-regression
To Reproduce
I tried to reproduce this in both Colab and as a python script in a fresh virtual env.
Colab notebook gist with full error and requirements:
https://gist.github.com/pavopax/e9040f42725322dfd2b86975e6ba5bbc
Also ran on Linux Python 3.7.1 with requirements:
https://gist.github.com/pavopax/d631dc61eceebbfbf67d9b113504f114
Code sample
I replaced the code example to remove TPUs:
trainer = pl.Trainer() #(tpu_cores=8, precision=16)
I use example code from: https://pytorch-lightning-bolts.readthedocs.io/en/latest/classic_ml.html#logistic-regression
For code, see gist above
Expected behavior
The code runs without errors and produces a result
Environment
Colab with requirements as above
Additional context
EDIT: In intro, I added link to example I'm following
|
Broken link in Documentation
|
[
"bug",
"docs"
] |
๐ Documentation
The Module Index link at the bottom of the main page of the Lightning documentation is broken. This seems to be because the make html command does not create a py-modindex.html file (not sure why).
If the Module Index page is not required a solution is to remove * :ref: modindex from the index.rst file.
Additionally, below the Module Index link there is a link to a search page that is currently empty. Seeing as searching is possible in the sidebar, the page may not be required, so * :ref: search could be removed as well.
Not super familiar with Sphinx but think this wouldn't break anything.
|
log_save_interval doesn't have the intended effect
|
[
"help wanted",
"docs"
] |
๐ Bug
I'm using the MLFlowLogger class for logging, and initially, I noticed my training loop slowed down immensely when changing the tracking URI from my local file system to a remote mlflow server (which makes sense). To fix this, I saw in the pytorch lightning docs that log_save_interval can be used to change the frequency at which logs are written and row_log_interval can be used to change the frequency at which rows are added to the logs. However, I found that log_save_interval has no effect on speeding up the training loop, and only row_log_interval speeds up the training loop.
To Reproduce
Steps to reproduce the behavior:
Take any training loop and log metrics to a remote server (in this case for mlflow)
Manipulate log_save_interval and row_log_interval to see the effect
Code sample
The behavior makes sense when looking at sections of the pytorch lightning codebase.
In pytorch_lightning/trainer/training_loop.py:
# when logs should be saved
should_save_log = (batch_idx + 1) % self.log_save_interval == 0 or early_stop_epoch
if should_save_log or self.fast_dev_run:
if self.proc_rank == 0 and self.logger is not None:
self.logger.save()
# when metrics should be logged
should_log_metrics = batch_idx % self.row_log_interval == 0 or early_stop_epoch
if should_log_metrics or self.fast_dev_run:
# logs user requested information to logger
self.log_metrics(batch_step_metrics, grad_norm_dic)
From the above, it can be seen that log_save_interval is not used to control when metrics are logged. From what I can tell self.log_metrics leads to a call to:
self.logger.agg_and_log_metrics(scalar_metrics, step=step)
in pytorch_lightning/trainer/logging.py which for mlflow causes a write to the remote server.
Expected behavior
I expected log_save_interval to reduce the amount of remote writes, but it does not.
Environment
* CUDA:
- GPU:
- available: False
- version: None
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.4.0
- pytorch-lightning: 0.7.5
- tensorboard: 2.2.1
- tqdm: 4.46.0
* System:
- OS: Darwin
- architecture:
- 64bit
-
- processor: i386
- python: 3.8.2
- version: Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64
Additional context
I'm aware my pytorch lightning version is behind the latest (due to an issue with 0.7.6 which seems to have been fixed in master), but my code samples are from the latest code.
|
Bug in GAN example
|
[
"help wanted"
] |
Bug in https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/domain_templates/generative_adversarial_net.py
When I run python generative_adversarial_net.py
I get
Traceback (most recent call last):
File "generative_adversarial_net.py", line 218, in <module>
main(hparams)
File "generative_adversarial_net.py", line 192, in main
model = GAN(hparams)
File "generative_adversarial_net.py", line 90, in __init__
self.generator = Generator(latent_dim=self.latent_dim, img_shape=mnist_shape)
File "generative_adversarial_net.py", line 39, in __init__
*block(latent_dim, 128, normalize=False),
File "generative_adversarial_net.py", line 32, in block
layers = [nn.Linear(in_feat, out_feat)]
File "/home/vladimir/anaconda3/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 72, in __init__
self.weight = Parameter(torch.Tensor(out_features, in_features))
TypeError: new(): argument 'size' must be tuple of ints, but found element of type Namespace at pos 2
|
RAM not correctly released when training a pl module multiple times
|
[
"help wanted",
"won't fix"
] |
๐ Bug
When I use pl.Trainer multiple times (for instance when doing cross-validation), it seems that the RAM is not completely released, as memory usage increases over runs in a strange way.
To Reproduce
Steps to reproduce the behavior:
Define a pl module and a pl.trainer inside a function, let's call it train()
call train() multiple times and track ram usage with psutil
Code sample
import psutil
from loguru import logger
import numpy as np
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torch.nn import functional as F
from torchvision.datasets import MNIST
from torchvision import transforms
import os
class LightningMNISTClassifier(pl.LightningModule):
def __init__(self):
super(LightningMNISTClassifier, self).__init__()
# mnist images are (1, 28, 28) (channels, width, height)
self.layer_1 = torch.nn.Linear(28 * 28, 128)
self.layer_2 = torch.nn.Linear(128, 256)
self.layer_3 = torch.nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, width, height = x.size()
# (b, 1, 28, 28) -> (b, 1*28*28)
x = x.view(batch_size, -1)
# layer 1 (b, 1*28*28) -> (b, 128)
x = self.layer_1(x)
x = torch.relu(x)
# layer 2 (b, 128) -> (b, 256)
x = self.layer_2(x)
x = torch.relu(x)
# layer 3 (b, 256) -> (b, 10)
x = self.layer_3(x)
# probability distribution over labels
x = torch.log_softmax(x, dim=1)
return x
def cross_entropy_loss(self, logits, labels):
return F.nll_loss(logits, labels)
def training_step(self, train_batch, batch_idx):
x, y = train_batch
logits = self.forward(x)
loss = self.cross_entropy_loss(logits, y)
logs = {'train_loss': loss}
return {'loss': loss, 'log': logs}
def prepare_data(self):
# transforms for images
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
# prepare transforms standard to MNIST
mnist_train = MNIST(os.getcwd(), train=True,
download=True, transform=transform)
self.mnist_train, _ = random_split(mnist_train, [10_000, 50_000])
def train_dataloader(self):
return DataLoader(self.mnist_train, batch_size=1000, num_workers=3)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
def train():
model = LightningMNISTClassifier()
trainer = pl.Trainer(
checkpoint_callback=False,
max_epochs=1
)
trainer.fit(model)
# train
rams_used = []
process = psutil.Process()
for _ in range(100):
train()
ram_used = process.memory_info()[0]/2.**30
logger.warning(f"RAM USED : {ram_used}")
rams_used.append(ram_used)
np.save("rams_used.npy", rams_used)
Expected behavior
I would expect to observe almost exactly the same RAM usage after training the model. Indeed, I compared it with native PyTorch and did not observe any RAM usage increase over runs.
Environment
* CUDA:
- GPU:
- TITAN X (Pascal)
- TITAN X (Pascal)
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.4
- pyTorch_debug: False
- pyTorch_version: 1.5.0+cu101
- pytorch-lightning: 0.7.6
- tensorboard: 2.2.1
- tqdm: 4.46.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.7.4
- version: #192-Ubuntu SMP Fri Sep 13 12:02:50 UTC 2019
Additional context
I tried with older versions of Lightning and observed the same behaviour, even in 0.7.1.
The example above uses a small MNIST dataset, but I encountered a more impressive increase with a bigger dataset, where RAM went from 20 GB to 65 GB after 20 runs.
|
specifying the tpu_core speed-up TPU training
|
[
"feature",
"help wanted"
] |
๐ Bug
I am getting a huge time difference between training a model on a specific TPU core (tpu_cores=[1]) and training a model on just 1 TPU core (tpu_cores=1). What's the reason for that? Aren't both conditions the same, the only difference being that I am assigning a specific TPU core in the first case and the number of TPU cores I want to use in the second? Also, in the second case I am getting an error. When training with tpu_cores=[1] the epoch time is 17 seconds; with tpu_cores=1 the epoch time is just 5 seconds.
Running on Colab gives me an error, but there is no error on Kaggle kernels. The time difference issue is the same on both platforms.
To Reproduce
Code sample
Colab Notebook
Expected behavior
As far as I know in both cases, the training time should be the same regardless of training on a single core or training on a specific core.
Environment
PyTorch Version (e.g., 1.0): 1.5.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.7
CUDA/cuDNN version: 10.1
GPU models and configuration: Tesla P100-PCIE-16GB
Any other relevant information:
Additional context
|
Data parallel (dp) distributes the loss computation across devices separately, unlike pytorch
|
[
"help wanted"
] |
[please remove]
|
Error using TrainsLogger with Trainer in 'ddp'
|
[
"help wanted",
"won't fix"
] |
๐ Bug
Got the following error when using TrainsLogger with the 'ddp' backend during run_pretrain_routine.
It doesn't happen with the 'dp' backend.
The attribute self._metrics_to_agg still exists up to the point where spawn._wrap is called.
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/access/projects/depth_completion/venv/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/access/projects/depth_completion/venv/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 389, in ddp_train
self.run_pretrain_routine(model)
File "/home/access/projects/depth_completion/venv/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 943, in run_pretrain_routine
self.logger.save()
File "/home/access/projects/depth_completion/venv/lib/python3.6/site-packages/pytorch_lightning/loggers/base.py", line 225, in save
self._finalize_agg_metrics()
File "/home/access/projects/depth_completion/venv/lib/python3.6/site-packages/pytorch_lightning/loggers/base.py", line 110, in _finalize_agg_metrics
agg_step, metrics_to_log = self._reduce_agg_metrics()
File "/home/access/projects/depth_completion/venv/lib/python3.6/site-packages/pytorch_lightning/loggers/base.py", line 100, in _reduce_agg_metrics
if not self._metrics_to_agg:
AttributeError: 'TrainsLogger' object has no attribute '_metrics_to_agg'
File "/home/access/projects/depth_completion/src/issue_reproduce.py", line 62, in main
trainer.fit(experiment)
File "/home/access/projects/depth_completion/src/issue_reproduce.py", line 65, in
main()
To Reproduce
See code below
Code sample
The following code will reproduce the issue:
import torch.nn as nn
import os
from torch.optim import Adam
import pytorch_lightning as pl
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TrainsLogger
from torchvision import transforms
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader, random_split
class Exp(pl.LightningModule):
def __init__(self):
super(Exp, self).__init__()
self.layer_1 = nn.Linear(in_features=28**2, out_features=1)
self.loss = nn.MSELoss(reduction='mean')
def forward(self, img):
return self.layer_1(img.view([-1,1,28**2])).squeeze()
def training_step(self, batch, batch_idx):
x, y = batch
pred = self.forward(x)
loss = self.loss(pred, y)
return {'loss' : loss}
def validation_step(self, batch, batch_idx):
x, y = batch
pred = self.forward(x)
loss = self.loss(pred, y)
return {'val_loss' : loss}
def train_dataloader(self):
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
mnist_train = MNIST(os.getcwd(), train=True, download=False,
transform=transform)
return DataLoader(mnist_train, batch_size=64)
def val_dataloader(self):
transform=transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
mnist_train = MNIST(os.getcwd(), train=True, download=True,
transform=transform)
_, mnist_val = random_split(mnist_train, [55000, 5000])
mnist_val = DataLoader(mnist_val, batch_size=64)
return mnist_val
def configure_optimizers(self):
return Adam(self.parameters(), lr=1e-3) # TODO: scheduler
def main():
trains_logger = TrainsLogger(project_name='Reproduce Issue',
task_name='reproduction',
output_uri='.')
trainer = Trainer(logger=trains_logger,
# distributed_backend='dp',
num_nodes=1,
gpus=2)
experiment = Exp()
trainer.fit(experiment)
if __name__ == '__main__':
main()
Expected behavior
Should run smoothly (this is almost a copy-paste from pytorch-lightning introduction tutorial and TrainsLogger example)
Environment
python: 3.6.9 (pip 20.1.1)
torch : 1.5
trains: 0.14.3
pytorch-lightning: 0.7.6
OS: Ubuntu 18.04
gpus: 2x RTX 2080 Ti
CUDA: 10.1
|
Broken link
|
[
"help wanted",
"good first issue",
"docs"
] |
In the loggers documentation, where it says "Read more in the Experiment Logging use case", the link is broken.
|
Support DictConfig
|
[
"bug",
"feature",
"help wanted",
"priority: 0"
] |
We need to add DictConfig support for OmegaConf (@Borda) to the automatic hparams saving.
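Until native support lands, one workaround (a sketch using OmegaConf's public API) is to convert the DictConfig to a primitive container before handing it to the hparams machinery:
from omegaconf import OmegaConf

cfg = OmegaConf.create({"model": {"hidden_size": 128, "lr": 1e-3}})

# a plain dict that save_hyperparameters / hparams logging can digest
plain_cfg = OmegaConf.to_container(cfg, resolve=True)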
|
DDP Trainer's `test` method -> TypeError: can't pickle SwigPyObject objects
|
[
"help wanted"
] |
I call the following (my code, roughly):
module = MyLightningModule(...)  # a pl.LightningModule subclass
trainer = pl.Trainer(distributed_backend='ddp', gpus=2, ...)
trainer.fit(module)  # works fine, uses all GPUs
trainer.test(module)  # works only with gpus=1 or gpus=0
Traceback:
trainer.test(model)
../miniconda3/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:1064: in test
self.fit(model)
../miniconda3/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py:844: in fit
mp.spawn(self.ddp_train, nprocs=self.num_processes, args=(model,))
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/multiprocessing/spawn.py:200: in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/multiprocessing/spawn.py:149: in start_processes
process.start()
../miniconda3/envs/nb/lib/python3.7/multiprocessing/process.py:112: in start
self._popen = self._Popen(self)
../miniconda3/envs/nb/lib/python3.7/multiprocessing/context.py:284: in _Popen
return Popen(process_obj)
../miniconda3/envs/nb/lib/python3.7/multiprocessing/popen_spawn_posix.py:32: in __init__
super().__init__(process_obj)
../miniconda3/envs/nb/lib/python3.7/multiprocessing/popen_fork.py:20: in __init__
self._launch(process_obj)
../miniconda3/envs/nb/lib/python3.7/multiprocessing/popen_spawn_posix.py:47: in _launch
reduction.dump(process_obj, fp)
../miniconda3/envs/nb/lib/python3.7/multiprocessing/reduction.py:60: in dump
ForkingPickler(file, protocol).dump(obj)
E TypeError: can't pickle SwigPyObject objects
Is there something I can do to make this work?
Thanks!
|
Docs are missing the anchor links
|
[
"help wanted",
"good first issue",
"docs"
] |
๐ Documentation
As pointed out by @oplatek, the docs are suddenly missing the anchor button that allows one to generate a link pointing to a particular resource within a page.
This was working before, but now there is a 404 when accessing some assets (js, css, ...)
EDIT: seems not related to the 404 seen in the JS console.
|
Early Stopping stops too early when using SLURM
|
[
"help wanted"
] |
๐ Bug
I have a really strange bug where the Early Stopping callback seems to fire too early, but only when using my university's Slurm cluster. When I train the same model locally on my laptop this does not happen. Sadly I can't run the code directly on the login node to see whether it happens on all of their systems or only when Slurm is being used. What's really strange is that when I use a higher patience, the training lasts longer, but early stopping never stops training sooner than hparams.patience/2 (actually it happens weirdly close to hparams.patience/2) and almost never as late as hparams.patience. I tried to create a minimum working example, code below.
To Reproduce
Steps to reproduce the behavior:
Create a custom Early Stopping Callback and use it to initialise the trainer
Run code on slurm cluster
Code sample
class RNNLightning(pl.LightningModule):
def __init__(self, hp):
super(RNNLightning, self).__init__()
self.sequence_length = hp.seq_len
self.input_size = hp.inp_size
self.hidden_size = hp.hidden_size
self.num_layers = hp.num_layers
self.learning_rate = hp.learning_rate
self.batch_size = hp.batch_size
self.lstm = nn.LSTM(hp.inp_size, hp.hidden_size, hp.num_layers, batch_first=True)
self.fc = nn.Linear(hp.hidden_size, hp.num_classes)
self.training_losses = []
def forward(self, x):
# Set initial hidden and cell states
h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
# Forward propagate LSTM
out, _ = self.lstm(x, (h0, c0)) # out: tensor of shape (batch_size, seq_length, hidden_size)
# Decode the hidden state of the last time step
out = self.fc(out[:, -1, :])
return out
def training_step(self, batch, batch_idx):
images, labels = batch
images = images.reshape(-1, self.sequence_length, self.input_size)
outputs = self(images)
criterion = nn.CrossEntropyLoss()
loss = criterion(outputs, labels)
# Saving loss for epoch-wise logging
self.training_losses.append(loss.item())
return {'loss': loss}
def on_epoch_end(self):
# Logging mean loss of epoch
train_loss_mean = np.mean(self.training_losses)
self.logger.experiment.log({'epoch/mean_loss': train_loss_mean, 'epoch': self.current_epoch}, global_step=self.current_epoch)
self.training_losses = [] # reset for next epoch
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
return optimizer
def train_dataloader(self):
train_dataset = torchvision.datasets.MNIST(root='data', train=True, transform=transforms.ToTensor(), download=True)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=self.batch_size, shuffle=True)
return train_loader
@staticmethod
def add_model_specific_args(parent_parser):
model_parser = HyperOptArgumentParser(parents=[parent_parser])
model_parser.add_argument('--seq_len', default=28, type=int)
model_parser.add_argument('--inp_size', default=28, type=int)
model_parser.add_argument('--hidden_size', default=128, type=int)
model_parser.add_argument('--num_layers', default=2, type=int)
model_parser.add_argument('--num_classes', default=10, type=int)
model_parser.add_argument('--batch_size', default=100, type=int)
model_parser.add_argument('--num_epochs', default=30, type=int)
model_parser.add_argument('--learning_rate', default=0.1, type=int)
model_parser.add_argument('--patience', default=6, type=int)
model_parser.add_argument('--min_delta', default=0.9, type=float)
return model_parser
def main(hparams):
print(hparams)
model = RNNLightning(hparams)
model.parameters()
testtube_logger = test_tube.TestTubeLogger(
name='test',
save_dir='logs'
)
early_stopping = EarlyStopping(
monitor='loss',
min_delta=hparams.min_delta,
# TODO: Find out why early stopping stops too early
patience=hparams.patience,
mode='min'
)
trainer = pl.Trainer(
logger=testtube_logger,
max_epochs=hparams.num_epochs,
row_log_interval=hparams.batch_size,
log_save_interval=hparams.batch_size,
early_stop_callback=early_stopping,
gpus=None
)
trainer.fit(model)
if __name__ == '__main__':
main_arg_parser = HyperOptArgumentParser(description="parser for min_example", add_help=False)
parser = RNNLightning.add_model_specific_args(main_arg_parser)
hyperparams = parser.parse_args()
main(hyperparams)
And here is my .sh file which I call via sbatch slurm_script.sh:
#!/bin/bash
#SBATCH -e logs/early-stopping-test.err
#SBATCH -o logs/early-stopping-test.out
#SBATCH -J early-stopping
#SBATCH --partition=All
#SBATCH --time=0-02:00:00
export PATH=~/anaconda3/bin:$PATH
###
source activate pytorch-bac
~/anaconda3/envs/pytorch-bac/bin/python min_example.py
Expected behavior
The training to last at least as long as the patience value of the Early Stopping Callback.
I'm using Pytorch Lightning 0.7.7.dev0
|
Trainer should run the test loop with the best weights when ModelCheckpoint is used
|
[
"feature",
"help wanted"
] |
๐ Feature
Motivation
I noticed that even when ModelCheckpoint is used, Trainer by default runs the test loop with the last weights, not the best weights saved by ModelCheckpoint. I believe the sensible default here is to run the test loop with the best weights saved by ModelCheckpoint.
Pitch
Now that ModelCheckpoint has a pointer to the best weights, Trainer can replace the last weights with the best weights before running the test loop automatically.
Alternatives
Possibly, this could be another option on Trainer. I don't like this as much because testing with the best weights is the behavior most users would expect by default.
Additional context
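A sketch of the manual workaround in the meantime (MyLitModel stands in for the user's LightningModule, and exact argument names may differ between versions):
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(monitor="val_loss", mode="min")
trainer = Trainer(checkpoint_callback=checkpoint_callback, max_epochs=10)

model = MyLitModel()  # the user's LightningModule, assumed defined elsewhere
trainer.fit(model)

# reload the best weights explicitly before testing -- what this issue
# proposes the Trainer should do by default
best_model = MyLitModel.load_from_checkpoint(checkpoint_callback.best_model_path)
trainer.test(best_model)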
|