title (string) | labels (sequence) | bodyText (string)
---|---|---
Support multiple loggers at once | [
"feature",
"help wanted"
] | 🚀 Feature
Users might want to use more than one logger at once |
Checkpointing Names | [
"bug",
"docs"
] | 📚 Documentation
The names I am getting from checkpointing seem different than this doc page:
https://pytorch-lightning.readthedocs.io/en/latest/checkpointing.html
Filepath seems to do something different. (directory?) |
global_step advanced between accumulations if gradient_accumulation > 1 | [
"bug"
] | 🐛 Bug
If gradient_accumulation is > 1 and a custom scheduler is used that updates the LR based on steps (instead of the default epochs), then global_step is incorrect, since it advances at every batch part (depending on the gradient_accumulation value) instead of only once all parts of the accumulated batch have been processed.
To fix this:
Trainer global_step should be advanced only if global_step % gradient_accumulation == 0 (see the sketch below).
This has no effect if gradient_accumulation == 1 (global_step keeps advancing as currently implemented).
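A minimal sketch of the intended counting (illustrative only, not Lightning's actual loop; here the guard is expressed on the batch index, which is equivalent in spirit, and accumulate_grad_batches mirrors the Trainer argument):
accumulate_grad_batches = 2
global_step = 0
for batch_idx in range(4):
    # forward/backward on this batch part happens here
    if (batch_idx + 1) % accumulate_grad_batches == 0:
        # optimizer.step() runs only now, so only now is there a real gradient update
        global_step += 1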
To Reproduce
Steps to reproduce the behavior:
Run any model with gradient_accumulation > 1 and verify with trainer.global_step
Expected behavior
For example (batch size=1, accumulation=2), the current flow is:
b_idx = 0, global_step = 1
b_idx = 1, global_step = 2
backprop
b_idx = 2, global_step = 3
b_idx = 3, global_step = 4
backprop
correct flow:
b_idx = 0, global_step = 1
b_idx = 1, global_step = 1
backprop
b_idx = 2, global_step = 2
b_idx = 3, global_step = 2
backprop
Environment
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: TITAN Xp
GPU 1: TITAN Xp
GPU 2: TITAN Xp
GPU 3: TITAN Xp
Nvidia driver version: 418.87.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.2
/usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7.4.2
/usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.6.0.20
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] pytorch-ignite==0.2.1
[pip] pytorch-lightning==0.6.0
[pip] torch==1.4.0
[pip] torchprof==1.0.0
[pip] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] pytorch 1.4.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] pytorch-ignite 0.2.1 pypi_0 pypi
[conda] pytorch-lightning 0.6.0 dev_0
[conda] torchprof 1.0.0 pypi_0 pypi
[conda] torchvision 0.5.0 py37_cu101 pytorch |
Demo Notebook Error: | [
"bug"
] | 🐛 Bug
trainer.fit(gan_model)
ValueError: Your LightningModule defines 2 optimizers but training_step is missing the "optimizer_idx" argument.
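For reference, the signature the error asks for looks roughly like this (a sketch of a multi-optimizer training_step; _generator_step and _discriminator_step are hypothetical helpers, not the notebook's code):
def training_step(self, batch, batch_idx, optimizer_idx):
    # optimizer_idx says which of the two optimizers is active for this call
    if optimizer_idx == 0:
        return self._generator_step(batch)
    if optimizer_idx == 1:
        return self._discriminator_step(batch)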
To Reproduce
Steps to reproduce the behavior:
Go to this link.
Run the cell
Scroll down.
See error
PyTorch Version (e.g., 1.0): 1.4.0
OS (e.g., Linux): Linux
How you installed PyTorch (conda, pip, source): pip
Build command you used (if compiling from source):
Python version: 3.6
CUDA/cuDNN version:
GPU models and configuration:
Any other relevant information:
Additional context
This is the exact demo notebook. |
Tensorboard logging should use `num_grad_updates` not `batch_idx` | [
"bug"
] | When accumulate_grad_batches > 1, the x-axis in tensorboard should be number of gradient updates, not number of batches that have been processed. |
advanced profiler description fails for python 3.6 | [
"bug"
] | 🐛 Bug
Python 3.6 doesn't have the pstats.SortKey.CUMULATIVE enum so the profiler description breaks.
To Reproduce
Steps to reproduce the behavior:
Use Python 3.6, pass in the AdvancedProfiler, get report at end of a training run.
profiler = AdvancedProfiler(line_count_restriction=10)
trainer = Trainer(profiler=profiler)
trainer.fit(model)
Stack trace:
164 for action_name, pr in self.profiled_actions.items():
165 s = io.StringIO()
--> 166 sortby = pstats.SortKey.CUMULATIVE
167 ps = pstats.Stats(pr, stream=s).strip_dirs().sort_stats(sortby)
168 ps.print_stats(self.line_count_restriction)
AttributeError: module 'pstats' has no attribute 'SortKey'
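A portable fix sketch mirroring the quoted lines (untested; pr and self come from the profiler code above): pstats accepts string sort keys on Python 3.6 as well, so the enum can be avoided entirely.
import io
import pstats

s = io.StringIO()
# 'cumulative' is accepted on Python 3.6+, unlike pstats.SortKey.CUMULATIVE (3.7+)
ps = pstats.Stats(pr, stream=s).strip_dirs().sort_stats('cumulative')
ps.print_stats(self.line_count_restriction)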
Code sample
from pytorch_lightning import Trainer
from pytorch_lightning.profiler import AdvancedProfiler
from argparse import Namespace
from pl_examples.basic_examples.lightning_module_template import LightningTemplateModel
# define model
hparams = {
"batch_size": 128,
"in_features": 784,
"hidden_dim": 512,
"drop_prob": 0.0,
"out_features": 10,
"learning_rate": 5e-3,
"data_root": "data"
}
hparams = Namespace(**hparams)
model = LightningTemplateModel(hparams)
# overfit on small batch
profiler = AdvancedProfiler(line_count_restriction=10)
trainer = Trainer(profiler=profiler, overfit_pct=0.05, min_epochs=10)
trainer.fit(model)
Expected behavior
Environment
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.12.0
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB
Nvidia driver version: 418.67
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip3] numpy==1.17.5
[pip3] pytorch-lightning==0.6.1.dev0
[pip3] torch==1.4.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.3.1
[pip3] torchvision==0.5.0
[conda] Could not collect |
Cross validation feature | [
"feature",
"help wanted",
"good first issue",
"discussion"
] | 🚀 Feature
Cross-validation is a crucial model validation technique for assessing how the model generalizes to new data.
Motivation
Research papers usually require cross-validation. From my point of view, this kind of feature would simplify the work of researchers.
Pitch
I want to pass a parameter to the Trainer object to specify that I want to train the model on K-folds.
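In the meantime, a rough sketch of what this can look like from user code (dataset and make_model are placeholders; the splitting uses sklearn's KFold, and this assumes Trainer.fit accepts the loaders directly):
import numpy as np
from sklearn.model_selection import KFold
from torch.utils.data import DataLoader, Subset
from pytorch_lightning import Trainer

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kfold.split(np.arange(len(dataset)))):
    model = make_model()  # fresh model for every fold
    train_loader = DataLoader(Subset(dataset, train_idx), batch_size=32, shuffle=True)
    val_loader = DataLoader(Subset(dataset, val_idx), batch_size=32)
    trainer = Trainer(max_epochs=10)
    trainer.fit(model, train_loader, val_loader)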
In the case that nobody wants to make a PR, I can start working on that. |
Handle abstract loader that doesn't have a dataset member | [
"bug"
] | Feature / Bug (?)
I have an abstract loader that chains multiple pytorch loaders when training on video sequences. Each small loader contains one sequence, and the chained loader just uses itertools.chain to chain them together.
I cannot put all data in a single loader because it does not make sense to read images from two different sequences.
I cannot use an IterableDataset because that would still have the same problem.
pytorch-lightning/pytorch_lightning/trainer/data_loading.py
Line 58
in
06242c2
if EXIST_ITER_DATASET and isinstance(self.get_train_dataloader().dataset, IterableDataset):
This assumes that a loader must have a dataset field, which I feel is too restrictive. I suggest checking whether the loader has a dataset member before doing this check (see the sketch below). My workaround, for now, is just to declare a dataset member in my ChainedLoader and set it to None.
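A sketch of that guard applied to the quoted line (untested):
if EXIST_ITER_DATASET \
        and hasattr(self.get_train_dataloader(), 'dataset') \
        and isinstance(self.get_train_dataloader().dataset, IterableDataset):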
Fix
Since this kind of abstract loader does not have an explicit handle to a dataset, it should belong to the concept of an IterableDataset, for which the user specifies the number of batches rather than a percentage. So a simple fix would be to add a hasattr check on that same line. |
Epoch end checkpoint restarts previous epoch | [
"bug"
] | 🐛 Bug
If the training is restarted and the model reloaded, the epoch that the checkpoint had just completed is restarted rather than the next one beginning.
Expected behavior
When a checkpoint upon epoch end is saved, restarting it should resume its state and start the next epoch. |
Model checkpointing to just save the latest model | [
"feature",
"help wanted"
] | 🚀 Feature
There is currently no automated option of always only keeping the latest saved model - there is always a metric benchmark used that needs to be exceeded. This is fine as a default but there should be a 'keep-it-simple' option of always just saving whatever is the latest.
Motivation
The concept of top-k is sound, but when testing the capacity of new architectures the most important thing is just to be able to resume from the latest version. This could probably be implemented by using max(self.global_step) as the metric, but that seems overly complex.
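In the meantime, a hedged sketch of doing this by hand from the module's on_epoch_end hook (plain torch.save of the weights only, not a full Lightning checkpoint; latest.pt is an arbitrary file name):
import torch
import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    # ... the usual LightningModule methods ...

    def on_epoch_end(self):
        # overwrite a single file so that only the most recent weights are kept
        torch.save(self.state_dict(), 'latest.pt')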
Alternatives
An option to always save the latest model and no others - e.g. by setting save_top_k=-1 (instead of disabling saving altogether which is better done by not having a ModelCheckpoint). |
[Pyright] Cannot access member 'X' for type 'None' | [
"feature",
"help wanted",
"good first issue"
] | Pyright raises 89 errors on master about Cannot access member 'X' for type 'None'. Here is an example in evaluation_loop.py:
# track outputs for collation
dl_outputs.append(output)
# batch done
if test:
self.test_progress_bar.update(1) # PYRIGHT ERROR
else:
self.val_progress_bar.update(1) # PYRIGHT ERROR
self.main_progress_bar.update(1) # PYRIGHT ERROR
outputs.append(dl_outputs)
eval_results = {}
One way to fix this is to indicate the type of those variables:
self.test_progress_bar: Any = None
Unless you see a more elegant way? |
test fails with `_lazy_train_dataloader` | [
"bug",
"help wanted",
"good first issue"
] | 🐛 Bug
The tests randomly fail from time to time across all platforms (mainly macOS and Windows) with the following message:
_____________________________ test_lbfgs_cpu_model _____________________________
self = LightningTestModel(
(c_d1): Linear(in_features=784, out_features=1000, bias=True)
(c_d1_bn): BatchNorm1d(1000, eps...ats=True)
(c_d1_drop): Dropout(p=0.2, inplace=False)
(c_d2): Linear(in_features=1000, out_features=10, bias=True)
)
def _get_data_loader(self):
try:
> value = getattr(self, attr_name)
pytorch_lightning/core/decorators.py:16:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = LightningTestModel(
(c_d1): Linear(in_features=784, out_features=1000, bias=True)
(c_d1_bn): BatchNorm1d(1000, eps...ats=True)
(c_d1_drop): Dropout(p=0.2, inplace=False)
(c_d2): Linear(in_features=1000, out_features=10, bias=True)
)
name = '_lazy_train_dataloader'
def __getattr__(self, name):
if '_parameters' in self.__dict__:
_parameters = self.__dict__['_parameters']
if name in _parameters:
return _parameters[name]
if '_buffers' in self.__dict__:
_buffers = self.__dict__['_buffers']
if name in _buffers:
return _buffers[name]
if '_modules' in self.__dict__:
modules = self.__dict__['_modules']
if name in modules:
return modules[name]
raise AttributeError("'{}' object has no attribute '{}'".format(
> type(self).__name__, name))
E AttributeError: 'LightningTestModel' object has no attribute '_lazy_train_dataloader'
To Reproduce
https://github.com/Borda/pytorch-lightning/runs/448238658 |
About fine-tuning and explaining the result | [
"question"
] | ❓ Questions and Help
Hi,
Thank you for the contribution and the framework. Could you guide me on how to freeze all the layers except the classification layer during fine-tuning? And could you also help me with how to explain the result of the fine-tuned model, e.g. which words of the "sentence" made it be labelled as positive?
Thank you. |
Incorrect Signature of `training_end`, `validation_end`, `test_end` in `Experiment Reporting` | [
"docs"
] | 📚 Documentation
Documentation here defines the signature of the functions as follows:
training_end(self, outputs)
test_end(self, outputs)
validation_end(self, outputs)
While in the Experiment Reporting section of docs, the signature of the functions has been misreported as:
training_end(self, batch, batch_idx)
test_end(self, batch, batch_idx)
validation_end(self, batch, batch_idx) |
How do I test before any training? | [
"question"
] | ❓ Questions and Help
I am now migrating some of my previous work into Lightning. I wish to see whether it can reproduce my previous results or not. But the docs imply that all testing has to be performed after training or after loading a previous Lightning training state, neither of which I have.
So how can I test before training?
Code
trainer = Trainer(logger=logger, max_epochs=5, gpus=[3], distributed_backend=None)
hparams = HParams(fold=fold, model=model_name, batch_size=8, num_workers=16)
system = MySYS(hparams, trainer)
system.model.load_state_dict(torch.load(state_dict))
trainer.test()
It cannot work since the trainer does not initialize at all.
What have you tried?
I found inside the code for testing:
def test(self, model=None):
r"""
Separates from fit to make sure you never run on your test set until you want to.
Args:
model (LightningModule): The model to test.
Example::
# Option 1
# run test after fitting
trainer = Trainer()
model = LightningModule()
trainer.fit()
trainer.test()
# Option 2
# run test from a loaded model
model = LightningModule.load_from_checkpoint('path/to/checkpoint.ckpt')
trainer = Trainer()
trainer.test(model)
"""
self.testing = True
if model is not None:
self.fit(model)
else:
self.run_evaluation(test=True)
This requires fitting the model, which I do not understand. Why is a fit required inside the test routine? If it is for initialization purposes, can't we just put some init code here? |
Example testing | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"ci"
] | 🚀 Feature
Find a way to test the examples in CI
Motivation
Due to some API modification, our examples may become invalid and we would not notice it until a bug report...
Pitch
The best way is to run the examples as part of CI; on the other hand, it may be quite time-consuming |
suggestion | [
"feature",
"help wanted",
"question"
] | not needed |
simplification over all kinds of "path" settings | [
"feature",
"good first issue",
"won't fix",
"docs"
I'm lost in all kinds of "path" settings, could it be made simpler?
Across the source code and examples, I find that there are many types of "path" config, such as the path in ModelCheckpoint, logger, and default_save_path in trainer.
Could you please explain these path configs in more detail? For example, if we set default_save_path, how does it influence other path configs? |
Fix .test() on ddp | [
"bug",
"help wanted"
] | This might be broken on notebooks only.
#875 solves a few problems with .test()
However, ddp + .test might be broken on notebooks because of the "spawn" option. (likely #747). |
Unify usage of multiple callbacks | [
"feature",
"help wanted",
"discussion"
] | 🚀 Feature
Simplify the API for callbacks: as e.g. Keras does, pass just a list of callbacks to be executed, and the Trainer will call them when needed instead of having each one specified as its own argument:
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Lines 65 to 66
in
b104052
checkpoint_callback=True,
early_stop_callback=None,
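A sketch of the proposed interface (a hypothetical API mirroring Keras, not what the Trainer accepts today):
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint

trainer = Trainer(callbacks=[
    EarlyStopping(monitor='val_loss', patience=3),
    ModelCheckpoint(filepath='checkpoints/', monitor='val_loss'),
])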
mentioned also in #825 (comment) |
MisconfigurationException GPUs on updating lightning | [
"bug",
"help wanted"
] | Not sure what's going on here. I updated to the latest pytorch version:
Can anyone help?
I also included my environment packages.
Traceback (most recent call last):
File "train_gpu.py", line 232, in <module>
main_local(hparam_trial)
File "train_gpu.py", line 106, in main_local
trainer = Trainer(logger = tt_logger , use_amp=False, gpus=1 ,min_nb_epochs=1002, max_nb_epochs=1003) # amp_level='O2' ,
File "/home/bruce/anaconda3/envs/ln2/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 530, in __init__
self.data_parallel_device_ids = parse_gpu_ids(gpus)
File "/home/bruce/anaconda3/envs/ln2/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 563, in parse_gpu_ids
gpus = sanitize_gpu_ids(gpus)
File "/home/bruce/anaconda3/envs/ln2/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 533, in sanitize_gpu_ids
raise MisconfigurationException(message)
pytorch_lightning.utilities.debugging.MisconfigurationException:
You requested GPUs: [0]
But your machine only has: []
# Name Version Build Channel
_libgcc_mutex 0.1 main
_pytorch_select 0.1 cpu_0
absl-py 0.9.0 pypi_0 pypi
blas 1.0 mkl
ca-certificates 2020.1.1 0
cachetools 4.0.0 pypi_0 pypi
certifi 2019.11.28 py37_0
cffi 1.14.0 py37h2e261b9_0
chardet 3.0.4 pypi_0 pypi
cudatoolkit 10.1.243 h6bb024c_0
freetype 2.9.1 h8a8886c_1
future 0.18.2 pypi_0 pypi
google-auth 1.11.2 pypi_0 pypi
google-auth-oauthlib 0.4.1 pypi_0 pypi
grpcio 1.27.2 pypi_0 pypi
idna 2.8 pypi_0 pypi
imageio 2.8.0 pypi_0 pypi
intel-openmp 2020.0 166
joblib 0.14.1 pypi_0 pypi
jpeg 9c h14c3975_1001 conda-forge
ld_impl_linux-64 2.33.1 h53a641e_7
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 hd88cf55_4
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
libpng 1.6.37 hbc83047_0
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.1.0 h2733197_0
markdown 3.2.1 pypi_0 pypi
mkl 2020.0 166
mkl-service 2.3.0 py37he904b0f_0
mkl_fft 1.0.15 py37ha843d7b_0
mkl_random 1.1.0 py37hd6b4f25_0
ncurses 6.1 he6710b0_1
ninja 1.9.0 py37hfd86e86_0
numpy 1.18.1 py37h4f9e942_0
numpy-base 1.18.1 py37hde5b4d6_1
oauthlib 3.1.0 pypi_0 pypi
olefile 0.46 py37_0
openssl 1.1.1d h7b6447c_4
pandas 1.0.1 pypi_0 pypi
pillow 6.2.1 py37hd70f55b_1 conda-forge
pip 20.0.2 py37_1
protobuf 3.11.3 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycparser 2.19 py37_0
python 3.7.6 h0371630_2
python-dateutil 2.8.1 pypi_0 pypi
pytorch 1.3.1 cpu_py37h62f834f_0
pytorch-lightning 0.6.0 pypi_0 pypi
pytz 2019.3 pypi_0 pypi
readline 7.0 h7b6447c_5
requests 2.22.0 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.0 pypi_0 pypi
scikit-learn 0.22.1 pypi_0 pypi
scipy 1.4.1 pypi_0 pypi
setuptools 45.2.0 py37_0
six 1.14.0 py37_0
sqlite 3.31.1 h7b6447c_0
tensorboard 2.1.0 pypi_0 pypi
test-tube 0.7.5 pypi_0 pypi
tk 8.6.10 hed695b0_0 conda-forge
torchvision 0.4.2 cpu_py37h9ec355b_0
tqdm 4.43.0 pypi_0 pypi
urllib3 1.25.8 pypi_0 pypi
werkzeug 1.0.0 pypi_0 pypi
wheel 0.34.2 py37_0
xz 5.2.4 h14c3975_4
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0 |
[dp/ddp mode]Enable checking which process I'm in | [
"feature",
"help wanted"
] | 🚀 Feature
Motivation
The motivation is that in dp/ddp mode, one print statement in training_step ends up with multiple printed lines (4 lines because I'm using 4 GPUs)
Pitch
I hope that there's a self.rank to let the user check which process they're in.
So they may choose to print for only rank 0, or print the rank to screen with the msg to avoid confusion.
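Until such an attribute exists, a workaround sketch with plain torch.distributed (only meaningful once the process group has been initialized, e.g. under ddp):
import torch.distributed as dist

def rank_zero_print(*args):
    # fall back to a normal print when not running distributed
    if not dist.is_available() or not dist.is_initialized() or dist.get_rank() == 0:
        print(*args)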
Alternatives
@williamFalcon suggests making it automatically handled with a self.print
Additional context
I'd prefer a self.rank because horovod has this ranking. I'm not sure if there are similar things in pytorch. |
Update torchvision to 0.5.0 | [
"feature",
"help wanted"
] | 🚀 Feature
Update torchvision to 0.5.0 (pip install downgrades to 0.4.2)
ERROR: torchvision 0.4.2 has requirement torch==1.3.1, but you'll have torch 1.4.0 which is incompatible.
Installing collected packages: tqdm, torchvision, oauthlib, requests-oauthlib, pyasn1, rsa, pyasn1-modules, cachetools, google-auth, google-auth-oauthlib, markdown, grpcio, absl-py, werkzeug, protobuf, tensorboard, pytorch-lightning
Attempting uninstall: torchvision
Found existing installation: torchvision 0.5.0
Uninstalling torchvision-0.5.0:
Successfully uninstalled torchvision-0.5.0
Successfully installed absl-py-0.9.0 cachetools-4.0.0 google-auth-1.11.2 google-auth-oauthlib-0.4.1 grpcio-1.27.2 markdown-3.2.1 oauthlib-3.1.0 protobuf-3.11.3 pyasn1-0.4.8 pyasn1-modules-0.2.8 pytorch-lightning-0.6.0 requests-oauthlib-1.3.0 rsa-4.0 tensorboard-2.1.0
torchvision-0.4.2 tqdm-4.43.0 werkzeug-1.0.0
Upgrade dependency so that the above doesn't happen.
Motivation
I recently conda installed pytorch and it came with torchvision-0.5.0. However, after pip installing lightning, it downgraded to torchvision-0.4.2.
Pitch
Support the latest torchvision. |
Improve wandb tests | [
"bug",
"help wanted",
"good first issue"
] | The tests of the wandb-logger are not as comprehensive as for the rest of the loggers. Currently, they just test if constructing a logger works.
Testing wandb in the same way as the other loggers works:
def test_wandb_logger(tmpdir):
"""Verify that basic functionality of wandb logger works."""
tutils.reset_seed()
wandb_dir = os.path.join(tmpdir, "wandb")
logger = WandbLogger(save_dir=wandb_dir, anonymous=True, offline=True)
hparams = tutils.get_hparams()
model = LightningTestModel(hparams)
trainer_options = dict(
default_save_path=tmpdir,
max_epochs=1,
train_percent_check=0.05,
logger=logger
)
trainer = Trainer(**trainer_options)
result = trainer.fit(model)
print('result finished')
assert result == 1, "Training failed"
def test_wandb_pickle(tmpdir):
"""Verify that pickling trainer with wandb logger works."""
tutils.reset_seed()
wandb_dir = str(tmpdir)
logger = WandbLogger(save_dir=wandb_dir, anonymous=True)
trainer_options = dict(
default_save_path=tmpdir,
max_epochs=1,
logger=logger
)
trainer = Trainer(**trainer_options)
pkl_bytes = pickle.dumps(trainer)
trainer2 = pickle.loads(pkl_bytes)
trainer2.logger.log_metrics({"acc": 1.0})
But the CI tests on github for windows fail. The tests for Ubuntu and Mac work. Seems to be some problem related to file-access that I cannot recreate locally |
Relax hparams in model saving/loading | [
"question"
] | I've managed to train a model using pl.fit(model) and have the .ckpt file. Now, I'm trying to load the .ckpt file so that I can do inference on a single image:
model = CoolSystem()
to_infer = torch.load('checkpoints/try_ckpt_epoch_1_v0.ckpt')
model.load_from_checkpoint(to_infer) # ------------- error is thrown at this line
However, upon loading the .ckpt file, the following error is thrown:
AttributeError: 'dict' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
Am I doing something wrong when using PyTorch Lightning for inference?
For reference, this is my system:
import pytorch_lightning as pl
import os
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn.functional as F
class CoolSystem(pl.LightningModule):
def __init__(self):
super(CoolSystem, self).__init__()
# self.hparams = hparams
self.data_dir = '/content/hymenoptera_data'
self.model = torchvision.models.resnet18(pretrained=True) # final layer is of size [bs, 1000]
num_ftrs = self.model.fc.in_features
self.model.fc = torch.nn.Linear(num_ftrs, 2) # change final layer to be of size [bs, 2]
def forward(self, x):
x = self.model(x)
return x
def configure_optimizers(self):
# Observe that all parameters are being optimized
optimizer = torch.optim.SGD(self.model.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
return [optimizer], [exp_lr_scheduler]
def training_step(self, batch, batch_idx):
# REQUIRED
x, y = batch
y_hat = self.forward(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
@pl.data_loader
def train_dataloader(self):
# REQUIRED
transform = transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
train_set = torchvision.datasets.ImageFolder(os.path.join(self.data_dir, 'train'), transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
return train_loader
@pl.data_loader
def val_dataloader(self):
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
val_set = torchvision.datasets.ImageFolder(os.path.join(self.data_dir, 'val'), transform)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=32, shuffle=True, num_workers=4)
return val_loader
And I'm training it this way:
model = CoolSystem()
import os
checkpoint_callback = pl.callbacks.ModelCheckpoint(
filepath=os.path.join(os.getcwd(), 'checkpoints'),
verbose=True,
monitor='val_loss',
mode='min',
prefix='try',
save_top_k=-1,
period=1 # check val_loss every n periods, and saves the checkpoint if it is better than the val_loss at the previous period
)
trainer = pl.Trainer(
max_epochs=2,
checkpoint_callback=checkpoint_callback)
trainer.fit(model) |
Test pass shouldn't require both test_step and test_end | [
"bug",
"help wanted",
"good first issue"
] | 🐛 Bug
trainer.test(...) requires implementation of both test_step and test_end, but the warning says you only need to implement either or both.
pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py
Line 291
in
56dddf9
if test and not (self.is_overriden('test_step') and self.is_overriden('test_end')):
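A sketch of the relaxed condition, changing the and to or so that either hook is enough:
if test and not (self.is_overriden('test_step') or self.is_overriden('test_end')):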
To Reproduce
Run .test() on any LightningModule with only test_step or test_end implemented.
Code sample
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import pytorch_lightning as pl
class CoolSystem(pl.LightningModule):
def __init__(self):
super(CoolSystem, self).__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
loss = F.cross_entropy(y_hat, y)
return {'loss': loss}
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
return {'test_loss': F.cross_entropy(y_hat, y)}
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
@pl.data_loader
def train_dataloader(self):
return DataLoader(MNIST('./', train=True, download=True), batch_size=32)
@pl.data_loader
def test_dataloader(self):
return DataLoader(MNIST('./', train=False, download=True), batch_size=32)
model = CoolSystem()
trainer = pl.Trainer(max_epochs=2, val_percent_check=1)
trainer.test(model)
Expected behaviour
The test pass should run when either is implemented, or at least when test_step is. |
Log training metrics for each epoch | [
"question",
"priority: 0"
] | Currently, I am able to log training metrics to Tensorboard using:
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger
logger = TensorBoardLogger(save_dir=save_dir, name="my_model")
[...]
trainer = pl.Trainer(logger=logger)
This logs training metrics (loss, for instance) after each batch. I would like to be able to average these metrics across all batches and log them to TensorBoard only once, at the end of each epoch. This is what the validation_end method does in your example:
pytorch-lightning/pl_examples/basic_examples/lightning_module_template.py
Line 145
in
446a1e2
def validation_end(self, outputs):
.
I first thought about writing my own training_end method. But this method is called after each batch instead of being called at the end of an epoch (as I would have thought). The method on_epoch_end seems interesting but does not receive an outputs argument as training_end does. Basically, in my model, I would like to write something like: self.logger.experiment.add_scalar('training_loss', train_loss_mean, global_step=self.current_epoch), but I do not know where to put this line.
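One possible placement, as a hedged sketch rather than an official recipe: buffer the per-batch losses yourself and flush them from the on_epoch_end hook mentioned above (self._train_losses is a hypothetical list created in __init__):
def training_step(self, batch, batch_idx):
    loss = ...  # usual loss computation
    self._train_losses.append(loss.detach())
    return {'loss': loss}

def on_epoch_end(self):
    train_loss_mean = torch.stack(self._train_losses).mean()
    self.logger.experiment.add_scalar('training_loss', train_loss_mean,
                                      global_step=self.current_epoch)
    self._train_losses = []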
OS: Debian GNU/Linux 9.11 (stretch)
Packaging: PIP
Version 0.6.1.dev0 |
Unrecognized `val_loss` metric | [
"bug",
"help wanted"
] | RuntimeWarnings due to unrecognized val_loss metric
pytorch_lightning callbacks are unable to recognize val_loss from validation_step()
To Reproduce
Run CoolModel from
Steps to reproduce the behavior:
Go to Minimal-Example
Run upto trainer.fit()
Scroll down to the end of an epoch
See error -
`/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/pt_callbacks.py:314: RuntimeWarning: Can save best model only with val_loss available, skipping.
' skipping.', RuntimeWarning)
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/pt_callbacks.py:144: RuntimeWarning: Early stopping conditioned on metric val_loss which is not available. Available metrics are: loss,avg_val_loss
RuntimeWarning)`
Code sample
Just the Minimal Example
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as pl
class CoolModel(pl.LightningModule):
def __init__(self):
super(CoolModel, self).__init__()
# not the best model...
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
# REQUIRED
x, y = batch
y_hat = self.forward(x)
return {'loss': F.cross_entropy(y_hat, y)}
def validation_step(self, batch, batch_nb):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
return {'avg_val_loss': avg_loss}
def test_step(self, batch, batch_nb):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'test_loss': F.cross_entropy(y_hat, y)}
def test_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
return {'avg_test_loss': avg_loss}
def configure_optimizers(self):
# REQUIRED
return torch.optim.Adam(self.parameters(), lr=0.02)
@pl.data_loader
def train_dataloader(self):
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
@pl.data_loader
def val_dataloader(self):
# OPTIONAL
# can also return a list of val dataloaders
return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)
@pl.data_loader
def test_dataloader(self):
# OPTIONAL
# can also return a list of test dataloaders
return DataLoader(MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), batch_size=32)
from pytorch_lightning import Trainer
model = CoolModel()
trainer = Trainer()
trainer.fit(model)
Output
Excluding the print from pip install and training tqdm prints
Epoch 1: 100%|██████████| 3750/3750 [00:22<00:00, 163.69batch/s, batch_idx=1874, loss=1.530, v_num=0]
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/pt_callbacks.py:314: RuntimeWarning: Can save best model only with val_loss available, skipping.
' skipping.', RuntimeWarning)
/opt/conda/lib/python3.6/site-packages/pytorch_lightning/callbacks/pt_callbacks.py:144: RuntimeWarning: Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,avg_val_loss
RuntimeWarning)
1
Expected behavior
The 'val_loss' metric should be recognized when a dict with the key val_loss is returned by validation_step()
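A workaround sketch consistent with the observation in the Additional context below (returning val_loss itself from validation_end makes it visible to the callbacks):
def validation_end(self, outputs):
    avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
    # exposing 'val_loss' here is what CoolSystem effectively does via its logs
    return {'val_loss': avg_loss, 'avg_val_loss': avg_loss}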
Environment
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: None
OS: Debian GNU/Linux 9 (stretch)
GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
CMake version: version 3.7.2
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] msgpack-numpy==0.4.4.3
[pip] numpy==1.18.1
[pip] numpydoc==0.9.2
[pip] pytorch-ignite==0.3.0
[pip] pytorch-lightning==0.6.0
[pip] pytorch-pretrained-bert==0.6.2
[pip] pytorch-transformers==1.1.0
[pip] torch==1.4.0
[pip] torchaudio==0.4.0a0+719bcc7
[pip] torchtext==0.5.0
[pip] torchvision==0.4.2
[conda] blas 1.0 mkl
[conda] cpuonly 1.0 0 pytorch
[conda] mkl 2019.3 199
[conda] mkl-service 2.0.2 py36h7b6447c_0
[conda] mkl_fft 1.0.12 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] pytorch 1.4.0 py3.6_cpu_0 [cpuonly] pytorch
[conda] pytorch-ignite 0.3.0 pypi_0 pypi
[conda] pytorch-lightning 0.6.0 pypi_0 pypi
[conda] pytorch-pretrained-bert 0.6.2 pypi_0 pypi
[conda] pytorch-transformers 1.1.0 pypi_0 pypi
[conda] torchaudio 0.4.0 py36 pytorch
[conda] torchtext 0.5.0 pypi_0 pypi
[conda] torchvision 0.4.2 pypi_0 pypi
Additional context
Something interesting is you cannot reproduce this when you run CoolSystem.
You won't get any warning because validation_end returns tensorboard logs with val_loss as a key. If you remove this log you will be able to catch the bug.
For what it's worth, I was trying to make a PyTorch Lightning based kernel in Kaggle with WandB as the logger. After trying "a lot" to find a mistake in my code, I realized that this warning is not shown every time. Once you run and get the warning and re-run the block (I was using a Kaggle Kernel notebook) it disappears. I assume that it is reading from some cache? The val_loss is something from the first epoch (maybe something from the last epoch or log) and stays 0 for the rest. I am not familiar with the internal workings of PL, but I suspect there is some mix-up between the metrics that are logged and the metrics returned by the lightning methods. |
How to properly load checkpoint for testing? | [
"bug",
"question"
] | I've trained a system as follows:
model = CoolSystem(hparams)
checkpoint_callback = pl.callbacks.ModelCheckpoint(
filepath=os.path.join(os.getcwd(), 'checkpoints'),
verbose=True,
monitor='val_acc',
mode='max',
prefix='try',
save_top_k=-1,
period=1
)
trainer = pl.Trainer(
max_epochs=hparams.epochs,
checkpoint_callback=checkpoint_callback)
trainer.fit(model)
trainer.test()
And with the above, the test accuracy is 0.7975
However, when I load the checkpoints separately instead:
model_test = CoolSystem(hyperparams)
model_test.load_from_checkpoint('checkpoints/try_ckpt_epoch_1.ckpt')
trainer = pl.Trainer()
trainer.test(model_test)
The accuracy returned is 0.5705
Am I loading the checkpoints wrongly? |
Add "epoch" options to basic templates | [
"feature",
"help wanted"
] | 🚀 Feature
Add "epochs" option to parser of 'basic_examples/lightning_module_template.py'
Motivation
Thanks to 'basic_examples/lightning_module_template.py', I could build my deep learning model. Some beginners like me might build their model from this basic template. However, there are no options to manipulate epochs. I just thought that what people use often should be included in the basic template, so I uploaded my issue.
Pitch
I suggest that the basic template include an "epochs" option.
Alternatives
Add "epoch" options to parser of 'basic_examples/lightning_module_template.py'
parser.add_argument('--epochs', default=10, type=int, metavar='N',
help='number of total epochs to run')
trainer = pl.Trainer(max_epochs=hparams.epochs)
Additional context
I am really enjoying the PyTorch Lightning framework. Thanks! |
Automate choosing sampler | [
"feature",
"help wanted"
] | 🚀 Feature
Let's automate choosing the sampler.
Case 1 (DDP, training):
Default to DistributedSampler
sampler = DistributedSampler(dataset)
Case 2 (training):
sampler = RandomSampler(dataset)
Case 3 (val, test):
sampler = SequentialSampler(dataset)
Case 4 (tpu, train, val, test):
xm.DistributedSampler(dataset)
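A sketch of that selection logic (a hypothetical helper; the TPU branch follows the usual torch_xla pattern with xrt_world_size/get_ordinal):
from torch.utils.data import RandomSampler, SequentialSampler
from torch.utils.data.distributed import DistributedSampler

def auto_sampler(dataset, train, use_ddp=False, use_tpu=False):
    if use_tpu:
        import torch_xla.core.xla_model as xm
        return DistributedSampler(dataset, num_replicas=xm.xrt_world_size(),
                                  rank=xm.get_ordinal(), shuffle=train)
    if use_ddp and train:
        return DistributedSampler(dataset)
    return RandomSampler(dataset) if train else SequentialSampler(dataset)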
Motivation
Samplers are hard and should be automated.
@srush |
Add prepare_data call for downloading datasets | [
"feature",
"help wanted"
] | 🚀 Feature
Data download and data loading should be separate steps, because combining them causes issues in distributed environments.
Add
def prepare_data(self):
# download and prep the data
def train_dataloader(self):
# create dataloader |
Remove the prefix 'version_' in tensorboard logging | [
"feature",
"help wanted"
] | 🚀 Feature
Motivation
In TensorBoard there's already an explicit '/', and I think there's no need for this 'version_' prefix;
it makes the names too long.
Pitch
I can bring a PR for this.
Alternatives
Additional context |
logger is NoneType hence doesn't have any experiment or other functionality in a lightning module | [
"bug",
"help wanted"
] | 🐛 Bug
When trying to use the logging abilities of Lightning, I hit a wall: the default and TensorBoard loggers both seem to stay uninitialized when calling trainer.fit(model), resulting in crashes every time I try to log something.
To Reproduce
Create a lightning module as such
class SimpleRegressor(pl.LightningModule):
...
Use the logger anywhere to get this kind of stacktrace:
d:\Documents\projects\MetaWatch\MetaWatch\notebooks\audio-video-interest\simple_regressor.py in configure_optimizers(self)
105 #see https://pytorch-lightning.readthedocs.io/en/latest/pytorch_lightning.core.lightning.html#pytorch_lightning.core.lightning.LightningModule.configure_optimizers
106 # REQUIRED
--> 107 self.logger.experiment.add_hparams({'hidden_layer_size':self.hidden_layer_size,
108 'linear_layer_size':self.linear_layer_size,
109 'lstm_layers':self.lstm_layers})
AttributeError: 'NoneType' object has no attribute 'experiment'
Code sample
import pytorch_lightning as pl
class SimpleRegressor(pl.LightningModule):
def __init__(self, cuda=False):
super(SimpleRegressor, self).__init__()
self.logger.experiment.add_hparams({'hidden_layer_size':1})
Expected behavior
To log as described in the documentation.
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 970
Nvidia driver version: 441.12
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] pytorch-lightning==0.6.0
[pip3] tinynumpy==1.2.1
[pip3] torch==1.4.0
[pip3] torchvision==0.4.1
[conda] Could not collect
Additional context |
Support both Namespace and dict for hyperparameter saving/loading. | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
Let the user pass a dict to LightningModule so that after model saving it can be restored using load_from_checkpoint or load_from_metrics.
Motivation
Currently, there is nothing that prevents the user from passing hyperparameters to LightningModule via a dictionary (or even something else). However, the model loading/saving assumes it is always an argparse.Namespace. This could potentially be an issue when load_from_checkpoint restores the module with a Namespace passed in instead of a dict.
Pitch
The model saving currently converts Namespace to dict and the model loading converts it back to Namespace.
Pitch 1: Also save the type inside the checkpoint, e.g.,
checkpoint['hparams'] = dict(hparams)
checkpoint['hparams_type'] = type(hparams)
and when restoring we instantiate with the appropriate type.
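The restore side of Pitch 1 could then look roughly like this (hypothetical checkpoint keys matching the save sketch above; MyLightningModule is a placeholder class):
from argparse import Namespace

hparams = checkpoint['hparams']  # stored as a plain dict
if checkpoint.get('hparams_type') is Namespace:
    hparams = Namespace(**hparams)
model = MyLightningModule(hparams)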
Pitch 2: Dump the whole hparams object (Namespace, dict, ...) into checkpoint without converting it to dict first and let pickle take care of the rest. Most flexible option but could give problems when loading from checkpoint.
Alternatives
Somehow restrict the user to only use argparse.
Additional context
The idea was suggested by @williamFalcon in PR #919. |
Improve callback system | [
"feature",
"help wanted"
] | Following the new callback system here are a few things we might want to do proposed by people who participated in #889 and #896.
Consolidate model hooks and callbacks system. Make the calls at the same moment: #889 (comment)
Fix on_train_begin() being called in the early stopping callbacks: #889 (comment)
Assert pl_module and trainer are the expected object instances in the callback system train tests: #889 (comment)
Incorporate existing callbacks into the new callback system.
Edit the changelog
ping @Borda @williamFalcon @jeremyjordan @awaelchli |
Support IterableDatasets for validation and test, not just train set [blocked by #953] | [
"feature",
"help wanted"
] | 🚀 Feature
Currently Lightning supports IterableDatasets only in the training set (see code). This makes them second-class citizens compared to the map-style datasets, and supporting them seems like low-hanging fruit.
Motivation
This enables having larger test sets that may not fit into a machine's memory (they could be very large in production settings, or of modest size running in a student's cheap laptop). Moreover,
datasets are usually generated together (e.g. train, val and test can come from the same process). It is very likely that the same process yields datasets with the same signature, so you may end up having IterableDatasets even when their size does not strictly require it.
Pitch
Changing a few lines of code by bringing in the checks we are doing for training should be enough unless I'm missing something.
Additional context
Are there any gotchas that make this harder than it looks? |
Automatically pick available GPU | [
"feature",
"help wanted",
"discussion"
] | Thanks for this great library!
] | 🚀 Feature
I would like to change the behavior of this code:
trainer = pl.Trainer(
... snip ...,
gpus=1,
)
Currently, when setting gpus to an integer n, the first n GPUs are automatically used.
I would like to change the behavior such that when multiple GPUs are available, the trainer picks the first available GPU.
Motivation
When running multiple jobs in parallel on a server with multiple available GPUs, I get the error:
RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable
This is because all 4 running jobs are scheduled to GPU 0, even though I have 4 GPUs available.
Note: the GPUs are configured to be in "exclusive mode", which means that only one process at a time can use them.
Pitch
I would like to change the behavior such that when multiple GPUs are available, the trainer picks the first available GPU.
Alternatives
One could also fix this in client code, like this:
def retry_jittered_backoff(f, num_retries=5):
# Based on:
# https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/
import time
import random
cap = 1.0 # max sleep time is 1s
base = 0.01 # initial sleep time is 10ms
sleep = base # initial sleep time is 10ms
for i in range(num_retries):
try:
return f()
except RuntimeError as e:
if i == num_retries - 1:
raise e
else:
time.sleep(sleep)
sleep = min(cap, random.uniform(base, sleep * 3))
def pick_gpu():
for i in range(torch.cuda.device_count()):
torch.cuda.set_device(i)
try:
torch.ones(1).cuda()
except RuntimeError:
continue
return i
raise RuntimeError("No GPUs available.")
def main(.. snip ..):
model = Model(.. snip ..)
trainer = pl.Trainer(
.. snip ..,
gpus=[retry_jittered_backoff(pick_gpu)],
)
trainer.fit(model)
This is exactly the kind of boilerplate that I would hope pytorch-lightning can make redundant.
PR-able?
Would you be open to consider accepting a PR? If so, could you give me some pointers where to add the above code? |
refactor len(datasets) call. | [
"feature",
"help wanted"
] | 🚀 Feature
Let's minimize len(dataset) calls and do them as late in training as we can (i.e. ideally right before any training loop). This way, we can open up the path to supporting iterable datasets more cleanly.
Motivation
Getting the length prematurely calls the datasets at the wrong time, often causing double loads.
This is a blocker to #948 |
How to implement pre-training? | [
"question"
] | ❓ Questions and Help
What is your question?
What would be the best way to implement pre-training? With pre-training, I mean that I want to freeze some layers for a couple of epochs before training all of them. At the moment, I do it in two runs. In the first one, I freeze the layers, train the model, and save its weights. In the second one, I load these weights and train the model without freezing. Is it possible to do it in one run? If so, how? |
Process runs on more GPUs than specified | [
"help wanted"
] | I have a single 8-GPU machine with a faulty GPU0.
I'm running imagenet_example.py on 7 GPUs on this machine by specifying gpus=[1,2,3,4,5,6,7] in the Trainer i.e. I do not want to use GPU0
However, when I run nvidia-smi, I see that the Trainer's pid shows up on all 8 GPUs, just with lower memory on GPU0 (see output below). I also find it to be slower than non-PL code by about 4x. I don't see this behavior if I manually set CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7 followed by gpus=7 in Trainer. Similarly, it works fine when using a single GPU with, say, gpus=[1].
I'm not sure if it's relevant but I also see gpu=0 in the tqdm progress bar
nvidia-smi with Trainer(gpus=[1,2,3,4,5,6,7]) and CUDA_VISIBLE_DEVICES unset
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 40155 C python 719MiB |
| 1 40155 C python 6003MiB |
| 2 40155 C python 6019MiB |
| 3 40155 C python 6019MiB |
| 4 40155 C python 6019MiB |
| 5 40155 C python 6019MiB |
| 6 40155 C python 6019MiB |
| 7 40155 C python 6019MiB |
+-----------------------------------------------------------------------------+
nvidia-smi with Trainer(gpus=7) and CUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 1 34452 C python 6003MiB |
| 2 34452 C python 6019MiB |
| 3 34452 C python 6019MiB |
| 4 34452 C python 6019MiB |
| 5 34452 C python 6019MiB |
| 6 34452 C python 6019MiB |
| 7 34452 C python 6019MiB |
+-----------------------------------------------------------------------------+
Expected behavior
The process should run on the specified GPUs without manually setting CUDA_VISIBLE_DEVICES
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2
Python version: 3.8
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce RTX 2080 Ti
GPU 1: GeForce RTX 2080 Ti
GPU 2: GeForce RTX 2080 Ti
GPU 3: GeForce RTX 2080 Ti
GPU 4: GeForce RTX 2080 Ti
GPU 5: GeForce RTX 2080 Ti
GPU 6: GeForce RTX 2080 Ti
GPU 7: GeForce RTX 2080 Ti
Nvidia driver version: 418.87.00
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] pytorch-lightning==0.6.0
[pip] torch==1.4.0
[pip] torch-lr-finder==0.1.2
[pip] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2020.0 166
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.0.15 py38ha843d7b_0
[conda] mkl_random 1.1.0 py38h962f231_0
[conda] pytorch 1.4.0 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] pytorch-lightning 0.6.0 pypi_0 pypi
[conda] torch-lr-finder 0.1.2 pypi_0 pypi
[conda] torchvision 0.5.0 py38_cu101 pytorch |
Logging the current learning rate | [
"question"
] | I'd like to write the current learning rate to a Logger. I imagine this would require a call to scheduler.get_lr() but I'm not sure how I can access the scheduler object and where to place the get_lr call
TIA! |
Example of gradient accumulation documentation is not correct | [
"docs"
] | 📚 Documentation
The gradient accumulation example in the documentation is not correct:
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import GradientAccumulationScheduler
# at epoch 5 start accumulating every 2 batches
accumulator = GradientAccumulationScheduler(scheduling: {5: 2})
Trainer(accumulate_grad_batches=accumulator)
If I run the code like this, I receive the following error:
TypeError: Gradient accumulation supports only int and dict types
After reading the source code :
pytorch-lightning/pytorch_lightning/trainer/training_tricks.py
Lines 32 to 41
in
d856989
def configure_accumulated_gradients(self, accumulate_grad_batches):
self.accumulate_grad_batches = None
if isinstance(accumulate_grad_batches, dict):
self.accumulation_scheduler = GradientAccumulationScheduler(accumulate_grad_batches)
elif isinstance(accumulate_grad_batches, int):
schedule = {1: accumulate_grad_batches}
self.accumulation_scheduler = GradientAccumulationScheduler(schedule)
else:
raise TypeError("Gradient accumulation supports only int and dict types")
It seems the option accumulate_grad_batches can only be an int or a dict, so the schedule should presumably be passed directly, as sketched below.
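A corrected usage sketch based on the accepted types (inferred from the quoted source rather than verified against the docs):
from pytorch_lightning import Trainer
# at epoch 5, start accumulating every 2 batches
trainer = Trainer(accumulate_grad_batches={5: 2})
An int such as accumulate_grad_batches=2 is the other supported form; passing the GradientAccumulationScheduler instance itself is what triggers the TypeError shown above. |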
Graceful keyboard doesn't work as expected | [
"bug",
"help wanted"
] | @jeremyjordan
Not sure the fix is what you were looking for? Make sure to also log the message only from proc_0. |
Passing dataloader to trainer.fit() doesn't work with tpu (and maybe ddp) | [
"bug",
"help wanted"
] | 🐛 Bug
Receive a
AttributeError: Can't pickle local object 'Trainer.__set_fit_dataloaders.<locals>.patch_train_dataloader'
error when passing the dataloader directly to trainer.fit(model, train_loader)
To Reproduce
Steps to reproduce the behavior:
Try to call trainer.fit(model, train_loader) in TPU mode.
(I suspect that anything that calls mp.spawn will cause this problem, so ddp probably will face this issue too.)
Code sample
import os
import pytorch_lightning as pl
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
import torch_xla.core.xla_model as xm
class CoolSystem(pl.LightningModule):
def __init__(self, use_tpu=False):
super(CoolSystem, self).__init__()
# not the best model...
self.use_tpu = use_tpu
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_idx):
# REQUIRED
x, y = batch
y_hat = self.forward(x)
loss = F.cross_entropy(y_hat, y)
tensorboard_logs = {'train_loss': loss}
return {'loss': loss, 'log': tensorboard_logs}
def validation_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'val_loss': F.cross_entropy(y_hat, y)}
def validation_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
tensorboard_logs = {'val_loss': avg_loss}
return {'avg_val_loss': avg_loss, 'log': tensorboard_logs}
def test_step(self, batch, batch_idx):
# OPTIONAL
x, y = batch
y_hat = self.forward(x)
return {'test_loss': F.cross_entropy(y_hat, y)}
def test_end(self, outputs):
# OPTIONAL
avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
tensorboard_logs = {'test_loss': avg_loss}
return {'avg_test_loss': avg_loss, 'log': tensorboard_logs}
def configure_optimizers(self):
# REQUIRED
# can return multiple optimizers and learning_rate schedulers
# (LBFGS it is automatically supported, no need for closure function)
return torch.optim.Adam(self.parameters(), lr=0.0004)
if __name__ == '__main__':
from pytorch_lightning import Trainer
model = CoolSystem(use_tpu=True)
dataset = MNIST(os.getcwd(),
train=True,
download=True,
transform=transforms.ToTensor())
sampler = torch.utils.data.distributed.DistributedSampler(
dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=True)
loader = DataLoader(dataset, sampler=sampler, batch_size=32)
# most basic trainer, uses good defaults
trainer = Trainer(num_tpu_cores=8)
trainer.fit(model, loader)
Expected behavior
Ideally, specifying the dataloaders as part of the LightningModule should work just the same as passing the dataloaders into trainer.fit()
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
You can get the script and run it with:
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
Docker image: gcr.io/tpu-pytorch/xla:nightly
build steps:
pip install git+git://github.com/williamFalcon/pytorch-lightning.git@master --upgrade
(I don't have access to the machine right now, so please forgive me on the specific version info temporarily)
Proposed solution
The issue is here, trying to assign a local function to the model
def __set_fit_dataloaders(self, model, train_dataloader, val_dataloaders, test_dataloaders):
# when dataloader is passed via fit, patch the train_dataloader
# functions to overwrite with these implementations
if train_dataloader is not None:
if not self.is_overriden('training_step', model):
m = 'You called .fit() with a train_dataloader but did not define training_step()'
raise MisconfigurationException(m)
def patch_train_dataloader():
return train_dataloader
model.train_dataloader = patch_train_dataloader
Instead of using a closure or a local function, you could use a callable defined at the top-level. This will be pickleable.
class DataLoaderPatcher:
def __init__(self, loader):
self.loader = loader
def __call__(self):
return self.loader
def __set_fit_dataloaders(self, model, train_dataloader, val_dataloaders, test_dataloaders):
# when dataloader is passed via fit, patch the train_dataloader
# functions to overwrite with these implementations
if train_dataloader is not None:
if not self.is_overriden('training_step', model):
m = 'You called .fit() with a train_dataloader but did not define training_step()'
raise MisconfigurationException(m)
model.train_dataloader = DataLoaderPatcher(train_dataloader) |
Checkpoint Callback not called when training GAN from documentation | [
"question"
] | I tried to add a model checkpoint callback to the GAN example from the documentation (https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/domain_templates/gan.py), but it is failing silently; no checkpoints are being written. I can't find anything in the checkpoint docs that indicates why it is not working.
What's your environment?
OS: Linux
Packaging: Conda
Version : 0.6, pytorch 1.4 |
Evaluation on subsampled training set during training | [
"question"
] | ❓ Questions and Help
What is your question?
We can evaluate on the validation set to get validation accuracy etc. Is it possible to also evaluate on a subsampled version of the training set, for example to get the training accuracy? Is there an easy way to do it with PL?
Sorry for the simple question, but I couldn't find the answer in the documentation. |
Support storing hparams as a dict | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
Right now, we assume model.hparams is an argparse.Namespace. We've had a number of requests to support hparams as a simple dict. Let's do it. |
Failing test: test_running_test_pretrained_model_ddp | [
"bug",
"help wanted",
"priority: 0"
] | I think this is another problem stemming from the fact that we don't have a way to pass data back from torch.multiprocessing.spawn. Needs more investigation.
def test_running_test_pretrained_model_ddp(tmpdir):
"""Verify `test()` on pretrained model."""
...
# run test set
new_trainer = Trainer(**trainer_options)
> new_trainer.test(pretrained_model)
tests/test_restore_models.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
pytorch_lightning/trainer/trainer.py:1189: in test
self.run_evaluation(test_mode=True)
pytorch_lightning/trainer/evaluation_loop.py:299: in run_evaluation
if test_mode and not self.is_overriden('test_step'):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pytorch_lightning.trainer.trainer.Trainer object at 0x7f845ec23f90>, f_name = 'test_step', model = None
def is_overriden(self, f_name, model=None):
if model is None:
model = self.get_model()
super_object = LightningModule
# when code pointers are different, it was overriden
> is_overriden = getattr(model, f_name).__code__ is not getattr(super_object, f_name).__code__
E AttributeError: 'NoneType' object has no attribute 'test_step'
pytorch_lightning/trainer/model_hooks.py:20: AttributeError |
Warnings and import errors when generating docs | [
"bug",
"help wanted"
] | 🐛 Bug
There are many warnings when generating the docs, many are import failures.
Because of them, the Trainer docs disappeared, see here: https://pytorch-lightning.readthedocs.io/en/latest/trainer.html
To Reproduce
Steps to reproduce the behavior:
cd docs
make clean
make html
WARNING: autodoc: failed to import module 'pl_examples'; the following exception was raised:
Traceback (most recent call last):
File "d:\anaconda3\envs\lightning\lib\site-packages\sphinx\ext\autodoc\importer.py", line 32, in import_module
return importlib.import_module(modname)
File "d:\anaconda3\envs\lightning\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Users\aeduw\Documents\Repositories\pytorch-lightning\pl_examples\__init__.py", line 143, in <module>
from .basic_examples.lightning_module_template import LightningTemplateModel
File "C:\Users\aeduw\Documents\Repositories\pytorch-lightning\pl_examples\basic_examples\lightning_module_template.py", line 21, in <module>
class LightningTemplateModel(pl.LightningModule):
AttributeError: module 'pytorch_lightning' has no attribute 'LightningModule'
Expected behavior
no import errors.
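One general mitigation for autodoc import failures (not necessarily the root cause here) is to mock heavy third-party imports in docs/source/conf.py; a sketch, with an example module list rather than the project's actual one:
# docs/source/conf.py
autodoc_mock_imports = [
    'torch',
    'torchvision',
    'test_tube',
]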
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Microsoft Windows 10 Pro
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX 1070
Nvidia driver version: 442.19
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] pytorch-lightning==0.6.1.dev0
[pip] torch==1.4.0
[conda] blas 1.0 mkl
[conda] mkl 2020.0 166
[conda] mkl-service 2.3.0 py37hb782905_0
[conda] mkl_fft 1.0.15 py37h14836fe_0
[conda] mkl_random 1.1.0 py37h675688f_0
[conda] pytorch 1.4.0 py3.7_cuda101_cudnn7_0 pytorch
[conda] pytorch-lightning 0.6.1.dev0 dev_0
Additional context |
WandbLogger cannot be used with 'ddp' | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
wandb modifies init such that a child process calling init returns None if the master process has already called init. This seems to break ddp: rank zero ends up with experiment = None, which crashes the program.
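A possible workaround (untested sketch - the lazy guard and the reinit flag are my own idea, not what the logger currently does) would be to re-create the run lazily in whichever process ends up with None:
import wandb

class LazyWandbRun:
    def __init__(self, **init_kwargs):
        self._init_kwargs = init_kwargs
        self._run = None

    @property
    def run(self):
        if self._run is None:
            # reinit=True asks wandb for a fresh run even if the parent already called init
            self._run = wandb.init(reinit=True, **self._init_kwargs)
        return self._run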
To Reproduce
Can be reproduced with the basic MNIST gpu template, simply add a WandbLogger and pass 'ddp' as the distributed backend.
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 331, in ddp_train
self.run_pretrain_routine(model)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 757, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/base.py", line 14, in wrapped_fn
fn(self, *args, **kwargs)
File "/home/rmrao/anaconda3/lib/python3.6/site-packages/pytorch_lightning/logging/wandb.py", line 79, in log_hyperparams
self.experiment.config.update(params)
AttributeError: 'NoneType' object has no attribute 'config'
This occurs with the latest wandb version and with pytorch-lightning 0.6. |
Logger emits exception when there's `None` in hparams | [
"bug",
"help wanted"
] | To Reproduce
My hparams:
{
'n': [8000],
'k': [30],
'batch_size': 512,
'data_dir': '/Users/kyoungrok/Resilio Sync/Dataset/2019 TREC/passage_ranking/dataset',
'max_nb_epochs': 500,
'learning_rate': 0.0001,
'nodes': 1,
'distributed_backend': None,
'eval_test_set': False,
'check_val_every_n_epoch': 1,
'accumulate_grad_batches': 1,
'max_epochs': 200,
'min_epochs': 2,
'train_percent_check': 1.0,
'val_percent_check': 1.0,
'test_percent_check': 1.0,
'val_check_interval': 0.95,
'log_save_interval': 100,
'row_log_interval': 100,
'enable_early_stop': True,
'early_stop_metric': 'val_acc',
'early_stop_mode': 'min',
'early_stop_patience': 3,
'gradient_clip_val': -1,
'track_grad_norm': -1,
'model_save_path': '/Users/kyoungrok/Desktop/trec-2019-deep-learning/trec2019/sparse/sparsenet/model_weights',
'model_save_monitor_value': 'val_acc',
'model_save_monitor_mode': 'max',
'model_load_weights_path': None,
'tt_name': 'pt_test',
'tt_description': 'pytorch lightning test',
'tt_save_path': '/Users/kyoungrok/Desktop/trec-2019-deep-learning/trec2019/sparse/sparsenet/test_tube_logs',
'single_run': False,
'nb_hopt_trials': 1,
'log_stdout': False,
'gpus': None,
'single_run_gpu': False,
'default_tensor_type': 'torch.cuda.FloatTensor',
'use_amp': False,
'check_grad_nans': False,
'amp_level': 'O2',
'on_cluster': False,
'fast_dev_run': True,
'enable_tqdm': False,
'overfit': -1,
'interactive': False,
'debug': False,
'local': False,
'lr_scheduler_milestones': None,
'k_inference_factor': 1.5,
'weight_sparsity': [0.3],
'boost_strength': 1.5,
'boost_strength_factor': 0.85,
'dropout': 0.0,
'use_batch_norm': True,
'normalize_weights': False,
'hpc_exp_number': None
}
I see the following error because I have None values in my hparams dict:
Traceback (most recent call last):
File "sparsenet_trainer.py", line 73, in <module>
main(hparam_trial)
File "sparsenet_trainer.py", line 47, in main
trainer.fit(model)
File "/Users/kyoungrok/anaconda3/envs/trec/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 707, in fit
self.run_pretrain_routine(model)
File "/Users/kyoungrok/anaconda3/envs/trec/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 757, in run_pretrain_routine
self.logger.log_hyperparams(ref_model.hparams)
File "/Users/kyoungrok/anaconda3/envs/trec/lib/python3.7/site-packages/pytorch_lightning/logging/base.py", line 14, in wrapped_fn
fn(self, *args, **kwargs)
File "/Users/kyoungrok/anaconda3/envs/trec/lib/python3.7/site-packages/pytorch_lightning/logging/tensorboard.py", line 88, in log_hyperparams
self.experiment.add_hparams(hparam_dict=params, metric_dict={})
File "/Users/kyoungrok/anaconda3/envs/trec/lib/python3.7/site-packages/torch/utils/tensorboard/writer.py", line 300, in add_hparams
exp, ssi, sei = hparams(hparam_dict, metric_dict)
File "/Users/kyoungrok/anaconda3/envs/trec/lib/python3.7/site-packages/torch/utils/tensorboard/summary.py", line 156, in hparams
raise ValueError('value should be one of int, float, str, bool, or torch.Tensor')
ValueError: value should be one of int, float, str, bool, or torch.Tensor
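As a workaround I'm sanitizing the dict before it reaches the logger (a sketch; the function name is mine):
def sanitize_hparams(hparams):
    # keep plain scalars and strings, cast everything else (None, lists, ...) to str
    allowed = (int, float, str, bool)
    return {key: (value if isinstance(value, allowed) else str(value))
            for key, value in hparams.items()}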
Environment
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: None
OS: Mac OSX 10.15.3
GCC version: Could not collect
CMake version: version 3.16.1
Python version: 3.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] pytorch-lightning==0.6.0
[pip] torch==1.4.0
[pip] torchvision==0.5.0
[conda] blas 1.0 mkl
[conda] mkl 2019.4 233
[conda] mkl-service 2.3.0 py37hfbe908c_0
[conda] mkl_fft 1.0.15 py37h5e564d8_0
[conda] mkl_random 1.1.0 py37ha771720_0
[conda] pytorch 1.4.0 py3.7_0 pytorch
[conda] pytorch-lightning 0.6.0 pypi_0 pypi
[conda] torchvision 0.5.0 py37_cpu pytorch |
TypeError: __init__() got an unexpected keyword argument 'precision' | [
"bug",
"help wanted"
] | 🐛 Bug
I followed the guide to "use 16bit precision" from the documentation
But when I do:
trainer = Trainer(amp_level='O1', precision=16, gpus=1)
trainer.fit(model)
I get the error message:
TypeError: __init__() got an unexpected keyword argument 'precision'
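For what it's worth, if I read the 0.6.0 API correctly the 16-bit switch there seems to be use_amp rather than precision (a guess based on the older docs, untested):
from pytorch_lightning import Trainer

# assumed 0.6.0 equivalent of precision=16; `model` is the same LightningModule as above
trainer = Trainer(amp_level='O1', use_amp=True, gpus=1)
trainer.fit(model)
Otherwise I suppose upgrading to a release that actually has the precision argument is the way to go.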
I am using lightning version: 0.6.0 |
Simplification: Merge load_from_metrics and load_from_checkpoint | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
The two ways of loading a LightningModule from checkpoint only differ in one argument, the tags_csv.
Motivation
The code is almost identical for both and the purpose is the same. If we merge these two into one function, it would simplify the API.
Pitch
Combine
load_from_metrics(cls, weights_path, tags_csv, map_location=None) and
load_from_checkpoint(cls, checkpoint_path, map_location=None) into a single signature:
load_from_checkpoint(cls, checkpoint_path, tags_csv=None, map_location=None)
and make load_from_metrics deprecated.
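A rough sketch of what the merged classmethod could look like (not the actual implementation; the csv parsing and hparams handling are simplified and assume a key,value tags file):
import csv
import warnings
from argparse import Namespace

import torch

class CheckpointLoaderMixin:
    @classmethod
    def load_from_checkpoint(cls, checkpoint_path, tags_csv=None, map_location=None):
        checkpoint = torch.load(checkpoint_path, map_location=map_location)
        hparams = checkpoint.get('hparams')
        if tags_csv is not None:
            # the optional tags file overrides the hparams stored in the checkpoint
            with open(tags_csv) as f:
                hparams = Namespace(**{row['key']: row['value'] for row in csv.DictReader(f)})
        model = cls(hparams)
        model.load_state_dict(checkpoint['state_dict'])
        return model

    @classmethod
    def load_from_metrics(cls, weights_path, tags_csv, map_location=None):
        warnings.warn('use load_from_checkpoint(..., tags_csv=...) instead', DeprecationWarning)
        return cls.load_from_checkpoint(weights_path, tags_csv=tags_csv, map_location=map_location)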
Alternatives
keep as is, not a big deal :) |
GPT2-large on Colab TPU seems to time out | [
"bug",
"help wanted"
] | 🐛 Bug
When training gpt2-large on a Colab TPU, training fails.
To Reproduce
See the colab notebook: https://colab.research.google.com/drive/1An6D3wh_H4dbmlEUHYOXZYxkH6S7VKu9
This is the relevant part of the stack trace:
INFO:root:training on 8 TPU cores
2020-03-02 00:43:14.794597: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57271
2020-03-02 00:43:14.857680: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57271
2020-03-02 00:43:14.918609: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57271
2020-03-02 00:43:14.974498: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57271
2020-03-02 00:43:15.031540: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57271
2020-03-02 00:43:15.087601: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57271
2020-03-02 00:43:15.142553: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57271
E0302 00:43:22.445484458 1536 server_chttp2.cc:40] {"created":"@1583109802.445465277","description":"Only 1 addresses added out of total 2 resolved","file":"external/com_github_grpc_grpc/src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":404,"referenced_errors":[{"created":"@1583109802.445463004","description":"Address family not supported by protocol","errno":97,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/socket_utils_common_posix.cc","file_line":420,"os_error":"Address family not supported by protocol","syscall":"socket","target_address":"[::1]:57271"}]}
2020-03-02 00:43:24.109498: I tensorflow/compiler/xla/xla_client/computation_client.cc:197] Fetching mesh configuration for worker tpu_worker:0 from mesh service at localhost:57271
2020-03-02 00:43:24.429623: I tensorflow/compiler/xla/xla_client/computation_client.cc:197] Fetching mesh configuration for worker tpu_worker:0 from mesh service at localhost:57271
2020-03-02 00:43:24.712988: I tensorflow/compiler/xla/xla_client/computation_client.cc:197] Fetching mesh configuration for worker tpu_worker:0 from mesh service at localhost:57271
2020-03-02 00:43:24.731491: I tensorflow/compiler/xla/xla_client/computation_client.cc:197] Fetching mesh configuration for worker tpu_worker:0 from mesh service at localhost:57271
2020-03-02 00:43:24.867584: I tensorflow/compiler/xla/xla_client/computation_client.cc:197] Fetching mesh configuration for worker tpu_worker:0 from mesh service at localhost:57271
2020-03-02 00:43:24.883436: I tensorflow/compiler/xla/xla_client/computation_client.cc:197] Fetching mesh configuration for worker tpu_worker:0 from mesh service at localhost:57271
2020-03-02 00:43:25.112841: I tensorflow/compiler/xla/xla_client/computation_client.cc:197] Fetching mesh configuration for worker tpu_worker:0 from mesh service at localhost:57271
INFO:root:INIT TPU local core: 0, global rank: 0
2020-03-02 00:44:11.382078: I tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:57271
INFO:root:INIT TPU local core: 2, global rank: 2
INFO:root:INIT TPU local core: 5, global rank: 5
2020-03-02 00:44:15.925331: E tensorflow/compiler/xla/xla_client/tf_logging.cc:11] Failed to meet rendezvous 'pl.Trainer.run_pretrain_routine': Socket closed (14)
Traceback (most recent call last):
File "finetune.py", line 129, in <module>
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 976, in fit
xmp.spawn(self.tpu_train, args=(model,), nprocs=self.num_tpu_cores, start_method=start_method)
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 182, in spawn
start_method=start_method)
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 108, in join
(error_index, name)
Exception: process 1 terminated with signal SIGKILL
Expected behavior
The code works when training on gpt2 (124M) but doesn't when training on gpt2-large (774M)
Environment
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.12.0
Python version: 3.6
Is CUDA available: No
CUDA runtime version: 10.1.243
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip3] numpy==1.17.5
[pip3] torch==1.4.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.3.1
[pip3] torchvision==0.5.0
[conda] Could not collect
Additional context |
Precision=16 with TPUs bug | [
"bug",
"help wanted"
] | 🐛 Bug
Setting precision=16 when training with a TPU throws an error
To Reproduce
see colab: https://colab.research.google.com/drive/1s-ZDIqzgKQ1Byf-Lw58RZ8LGgmdB6qjB
Relevant stack trace:
Exception in device=TPU:0: str expected, not int
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 492, in tpu_train
os.environ['XLA_USE_BF16'] = 1
File "/usr/lib/python3.6/os.py", line 674, in __setitem__
value = self.encodevalue(value)
File "/usr/lib/python3.6/os.py", line 744, in encode
raise TypeError("str expected, not %s" % type(value).__name__)
TypeError: str expected, not int
To fix this, all that should be needed is casting 1 to a string:
os.environ['XLA_USE_BF16'] = str(1)
Environment
Collecting environment information...
PyTorch version: 1.4.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.12.0
Python version: 3.6
Is CUDA available: No
CUDA runtime version: 10.1.243
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip3] numpy==1.17.5
[pip3] torch==1.4.0
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.3.1
[pip3] torchvision==0.5.0
[conda] Could not collect |