Dataset columns: question (string, length 9–229), context (string, length 0–17.6k), answer (string, length 5–3.54k), id (string, length 18–28), url (string, length 94–97).
What is the purpose of reload_dataloaders_every_epoch?
I'm looking to train on chunks of my entire dataset at a time per epoch (preferably over every n epochs, but this is not yet implemented officially), since the size of my dataset exceeds my total RAM. I'd therefore like to update the data in the DataLoaders every epoch. From all the examples I've seen, the actual data content is saved in memory a class attribute (in the following code snippet, it's saved in self.mnist_train and self.mnist_val). It seems that train_dataloader() and val_dataloader() only read self.mnist_train and self.mnist_val into DataLoaders. From my understanding, reload_dataloaders_every_epoch=True calls train_dataloader() and val_dataloader() at every epoch, but I don't see the point of doing this if self.mnist_test and self.mnist_train aren't actually being changed. How do I make train_dataloader() and test_dataloader() return new data every epoch? class MyDataModule(pl.LightningDataModule): def __init__( self, batch_size: int = 32, ): super().__init__() dataset = MNIST(_DATASETS_PATH, train=True, download=True, transform=transforms.ToTensor()) self.mnist_test = MNIST(_DATASETS_PATH, train=False, download=True, transform=transforms.ToTensor()) self.mnist_train, self.mnist_val = random_split(dataset, [55000, 5000]) self.batch_size = batch_size def train_dataloader(self): return DataLoader(self.mnist_train, batch_size=self.batch_size) def val_dataloader(self): return DataLoader(self.mnist_val, batch_size=self.batch_size) def test_dataloader(self): return DataLoader(self.mnist_test, batch_size=self.batch_size)
"since the size of my dataset exceeds my total RAM" - That's not unusual. Datasets often don't fit into RAM, and that's fine: DataLoaders are designed to asynchronously load data from your hard disk into RAM and then onto the GPU. "From my understanding, reload_dataloaders_every_epoch=True calls train_dataloader() and val_dataloader() at every epoch, but I don't see the point of doing this if self.mnist_test and self.mnist_train aren't actually being changed." - Yes, but first of all this is a toy example and doesn't really do anything interesting. Second, even though the dataset does not change, the dataloader is constructed anew every epoch. You could return a dataloader with a new or growing dataset every epoch, or do something crazy like increase the batch size every epoch or turn shuffle on and off, haha.
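A minimal sketch of what a chunked DataModule could look like when combined with reload_dataloaders_every_epoch=True. This is not from the discussion: load_chunk is a hypothetical helper for however you read one slice of the dataset from disk, and it assumes the datamodule's self.trainer reference has been set by the Trainer.

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader


class ChunkedDataModule(pl.LightningDataModule):
    def __init__(self, batch_size: int = 32, num_chunks: int = 10):
        super().__init__()
        self.batch_size = batch_size
        self.num_chunks = num_chunks

    def train_dataloader(self):
        # Called again every epoch when reload_dataloaders_every_epoch=True,
        # so a different chunk can be read from disk here instead of in __init__.
        chunk_idx = self.trainer.current_epoch % self.num_chunks
        dataset = load_chunk(chunk_idx)  # hypothetical helper returning a torch Dataset
        return DataLoader(dataset, batch_size=self.batch_size, shuffle=True)
```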
MDEwOkRpc2N1c3Npb24zMzUxNDcy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7372#discussioncomment-700062
Hyperparameter Tuning in Lightning CLI
I wonder how people do Hyperparameter Tuning with Lightning CLI? Any suggestion of good practices? Thanks!
Personally when I tune hyperparameters (e.g. with optuna or nevergrad), I don't use the Lightning CLI much but use the programmatic way to inject the arguments there (since it's easier for communication across different python libs directly in python and not leaving it to os calls).
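As an illustration of that programmatic route, here is a rough Optuna sketch. MyModel, MyDataModule, the tuned hyperparameters, and the "val_loss" metric key are all placeholders for your own setup, not anything prescribed by Lightning.

```python
import optuna
import pytorch_lightning as pl


def objective(trial: optuna.Trial) -> float:
    # suggest hyperparameters directly in Python, then build the Trainer programmatically
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    hidden_dim = trial.suggest_categorical("hidden_dim", [64, 128, 256])

    model = MyModel(lr=lr, hidden_dim=hidden_dim)   # placeholder LightningModule
    datamodule = MyDataModule(batch_size=32)        # placeholder LightningDataModule

    trainer = pl.Trainer(max_epochs=5, logger=False)
    trainer.fit(model, datamodule=datamodule)
    # assumes the module logs "val_loss" during validation
    return trainer.callback_metrics["val_loss"].item()


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
```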
MDEwOkRpc2N1c3Npb24zNTM4ODc2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9108#discussioncomment-1233981
accumulate_grad_batches and DDP
Hi! When I use no multi-gpu settings and just a single GPU and the following parameters: batch_size = 16 accumulate_grad_batches=2 my effective batch size is 32. My question is how accumulate_grad_batches and DDP interact. If I am using 2 GPUS that are on the same machine and i use the parameters batch_size = 16 accumulate_grad_batches=2 accelerator='ddp' gpus=[0,1] is my effective batch size now 64? Thanks for the help!
Yes. With DDP the effective batch size is batch_size_per_gpu * num_gpus * accumulate_grad_batches, so here 16 * 2 * 2 = 64. See: https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html#batch-size
MDEwOkRpc2N1c3Npb24zMzc0OTYx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7632#discussioncomment-765379
Trainer flags: amp_level vs. precision
Hi! I'm a bit confused regarding the trainer's flags. According to Lightning's documentation, the default settings are: amp_level='O2' (i.e., "Almost FP16" Mixed Precision, cf. Nvidia documentation) precision=32 Isn't it a bit contradictory ? Is the default training mode full precision or mixed precision? Thanks in advance for your clarification :)
Dear @dianemarquette, precision=32 is the default. However, if you turn on precision=16 and set amp_backend="apex", then amp_level='O2' (the letter O, not zero) is used as the default. Best, T.C
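A sketch of the two configurations being contrasted, using the Trainer flags available on the PL 1.x releases this discussion refers to (apex must be installed for the second one):

```python
from pytorch_lightning import Trainer

# Default: full precision (FP32); amp_level is simply ignored here.
trainer = Trainer(precision=32)

# Apex mixed precision; only in this mode does amp_level default to "O2".
trainer = Trainer(precision=16, amp_backend="apex", amp_level="O2")
```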
MDEwOkRpc2N1c3Npb24zNTIyMjk1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8923#discussioncomment-1188895
hyper parameters not restored while resuming training
I call save_hyperparameters() in __init__(), and all hyper parameters sent to PL model are saved to checkpoint file. However, when i resume training from a checkpoint(call trainer.fit(..., ckpt_path=checkpoint_file_path)), the hyper parameters are not restored from checkpoint file and all of them keep initial values.
Hyperparameters are not restored by default, because that allows users to update them, if they want, while resuming from the checkpoint. You can do this: model = LitModel.load_from_checkpoint(checkpoint_file_path) trainer.fit(model, ..., ckpt_path=checkpoint_file_path)
D_kwDOCqWgoM4APOMH
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12639#discussioncomment-2515490
Train on cpu but gives error "You have asked for native AMP on CPU, but AMP is only available on GPU"
hi dear friends, I wanted to train on cpu for debug purpose which can help me understand more about codes. so in my trainer class, i didn't pass gpus parameter, the parameter looks like below: args of trainer Namespace(accelerator=None, accumulate_grad_batches=1, adam_epsilon=1e-08, amp_backend='native', amp_level='O2', auto_lr_find=False, auto_scale_batch_size=False, auto_select_gpus=False, batch_size=10, benchmark=False, bert_config_dir='/Users/i052090/Downloads/segmentation/data/bertmany/bert-base-uncased', bert_dropout=0.2, bert_max_length=128, best_dev_f1=0.0, check_val_every_n_epoch=1, checkpoint_callback=True, data_dir='data/conll03', dataname='conll03', default_root_dir='./conll03/spanner_bert-base-uncased_spMLen_usePruneTrue_useSpLenTrue_useSpMorphTrue_SpWtTrue_value0.5_38274488', deterministic=False, distributed_backend=None, fast_dev_run=False, final_div_factor=10000.0, flush_logs_every_n_steps=100, fp_epoch_result='./conll03/spanner_bert-base-uncased_spMLen_usePruneTrue_useSpLenTrue_useSpMorphTrue_SpWtTrue_value0.5_38274488/epoch_results.txt', gpus=None, gradient_clip_algorithm='norm', gradient_clip_val=1.0, label2idx_list=[('O', 0), ('ORG', 1), ('PER', 2), ('LOC', 3), ('MISC', 4)], limit_predict_batches=1.0, limit_test_batches=1.0, limit_train_batches=1.0, limit_val_batches=1.0, log_every_n_steps=50, log_gpu_memory=None, logger=True, lr=1e-05, max_epochs=1, max_spanLen=4, max_steps=None, max_time=None, min_epochs=None, min_steps=None, modelName='spanner_bert-base-uncased_spMLen_usePruneTrue_useSpLenTrue_useSpMorphTrue_SpWtTrue_value0.5', model_dropout=0.2, morph2idx_list=[('isupper', 1), ('islower', 2), ('istitle', 3), ('isdigit', 4), ('other', 5)], morph_emb_dim=100, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', n_class=5, neg_span_weight=0.5, num_nodes=1, num_processes=1, num_sanity_val_steps=2, optimizer='adamw', overfit_batches=0.0, param_name='epoch1_batchsize10_lr1e-5_maxlen128', plugins=None, precision=16, prepare_data_per_node=True, pretrained_checkpoint='', process_position=0, profiler=None, progress_bar_refresh_rate=1, random_int='38274488', reload_dataloaders_every_epoch=False, replace_sampler_ddp=True, resume_from_checkpoint=None, spanLen_emb_dim=100, span_combination_mode='x,y', stochastic_weight_avg=False, sync_batchnorm=False, terminate_on_nan=False, tokenLen_emb_dim=50, tpu_cores=None, track_grad_norm=-1, truncated_bptt_steps=None, use_morph=True, use_prune=True, use_spanLen=True, use_span_weight=True, use_tokenLen=False, val_check_interval=0.25, warmup_steps=0, weight_decay=0.01, weights_save_path=None, weights_summary='top', workers=0) But it would give errors: File "trainer.py", line 572, in main trainer = Trainer.from_argparse_args( File "/Applications/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/properties.py", line 207, in from_argparse_args return from_argparse_args(cls, args, **kwargs) File "/Applications/anaconda3/lib/python3.8/site-packages/pytorch_lightning/utilities/argparse.py", line 52, in from_argparse_args return cls(**trainer_kwargs) File "/Applications/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 40, in insert_env_defaults return fn(self, **kwargs) File "/Applications/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 319, in init self.accelerator_connector = AcceleratorConnector( File "/Applications/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 136, 
in init self.accelerator = self.select_accelerator() File "/Applications/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 483, in select_accelerator precision_plugin=self.precision_plugin, File "/Applications/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 220, in precision_plugin self._precision_plugin = self.select_precision_plugin() File "/Applications/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py", line 350, in select_precision_plugin raise MisconfigurationException( pytorch_lightning.utilities.exceptions.MisconfigurationException: You have asked for native AMP on CPU, but AMP is only available on GPU. thanks,
Setting Trainer(precision=16) is only supported on GPU! If you have them available, you can do: Trainer(gpus=N, precision=16)
MDEwOkRpc2N1c3Npb24zNTEzMDQ5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8855#discussioncomment-1159101
Is it possible for SLURM auto submit to work on DP?
In my experience, it never works. I looked to the trainer code and saw that the code managing this works only on DDP. def configure_slurm_ddp(self, num_gpu_nodes): self.is_slurm_managing_tasks = False ### !!HERE!! if self.use_ddp: self.num_requested_gpus = self.num_gpus * num_gpu_nodes self.num_slurm_tasks = 0 try: self.num_slurm_tasks = int(os.environ['SLURM_NTASKS']) self.is_slurm_managing_tasks = self.num_slurm_tasks == self.num_requested_gpus # in interactive mode we don't manage tasks job_name = os.environ['SLURM_JOB_NAME'] if job_name == 'bash': self.is_slurm_managing_tasks = False except Exception: # likely not on slurm, so set the slurm managed flag to false self.is_slurm_managing_tasks = False However, sometimes we are not using the distributed computing on slurm (only DP). It would be nice to have the auto resubmit feature still working in this situation. OS: Linux Packaging conda Version 16
actually, lightning supports slurm no matter what backend you use... def register_slurm_signal_handlers(self): # see if we're using slurm (not interactive) on_slurm = False try: job_name = os.environ['SLURM_JOB_NAME'] if job_name != 'bash': on_slurm = True except Exception as e: pass if on_slurm: log.info('Set SLURM handle signals.') signal.signal(signal.SIGUSR1, self.sig_handler) signal.signal(signal.SIGTERM, self.term_handler)
MDEwOkRpc2N1c3Npb244MjI1MQ==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/1456#discussioncomment-238184
ValueError: `Dataloader` returned 0 length. Please make sure that it returns at least 1 batch
use this code, i can get test data. But when i use pl data module to fit train model, i got dataloader returned 0 length error import os from typing import Optional import PIL import cv2 import json import copy import numpy as np import pytorch_lightning as pl import torch from torchvision import transforms from torch.utils.data import Dataset, random_split, DataLoader from det.det_modules import ResizeShortSize, IaaAugment, EastRandomCropData, MakeBorderMap, MakeShrinkMap def load_json(file_path: str): with open(file_path, 'r', encoding='utf8') as f: content = json.load(f) return content class ICDARDataset(Dataset): def __init__(self, json_path, img_path, is_train=True): self.ignore_tags = ['*', '###'] self.load_char_annotation = False self.data_list = self.load_data(json_path, img_path) self.transform = transforms.Compose( [ transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ] ) self.iaa_augment = IaaAugment() self.east_random_crop_data = EastRandomCropData() self.make_border_map = MakeBorderMap() self.make_shrink_map = MakeShrinkMap() self.resize = ResizeShortSize(short_size=736, resize_text_polys=False) self.is_train = is_train def load_data(self, json_path: str, img_path) -> list: data_list = [] content = load_json(json_path) for item in content: p = os.path.join(img_path, item + '.jpg') polygons = [] texts = [] illegibility_list = [] for annotation in content[item]: if len(annotation['points']) == 0 or len(annotation['transcription']) == 0: continue polygons.append(annotation['points']) texts.append(annotation['transcription']) illegibility_list.append(annotation['illegibility']) data_list.append( { 'img_path': p, 'text_polys': np.array(polygons, dtype=object), 'texts': texts, 'ignore_tags': illegibility_list } ) return data_list def __getitem__(self, index): data = self.data_list[index] im = cv2.imread(data['img_path']) im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) data['img'] = im data['shape'] = [im.shape[0], im.shape[1]] if self.is_train: data = self.iaa_augment(data) data = self.east_random_crop_data(data) data = self.make_border_map(data) data = self.make_shrink_map(data) else: data = self.resize(data) # resize = ResizeShortSize(short_size=736, resize_text_polys=False) # data = resize(data) data['img'] = self.transform(data['img']) data['text_polys'] = data['text_polys'] return copy.deepcopy(data) def __len__(self): return len(self.data_list) class DetCollectFN: def __init__(self, *args, **kwargs): pass def __call__(self, batch): data_dict = {} to_tensor_keys = [] for sample in batch: for k, v in sample.items(): if k not in data_dict: data_dict[k] = [] if isinstance(v, (np.ndarray, torch.Tensor, PIL.Image.Image)): if k not in to_tensor_keys: to_tensor_keys.append(k) if isinstance(v, np.ndarray): v = torch.tensor(v) if isinstance(v, PIL.Image.Image): v = transforms.ToTensor()(v) data_dict[k].append(v) for k in to_tensor_keys: data_dict[k] = torch.stack(data_dict[k], 0) return data_dict class DBDataModule(pl.LightningDataModule): def __init__(self, train_json_path, train_img_path, val_json_path, val_img_path): super(DBDataModule, self).__init__() self.train = ICDARDataset(train_json_path, train_img_path, is_train=True) self.val = ICDARDataset(val_json_path, val_img_path, is_train=False) def train_dataloader(self): return DataLoader(self.train, batch_size=32, num_workers=0, shuffle=True, collate_fn=DetCollectFN) def val_dataloader(self): return DataLoader(self.val, batch_size=32, num_workers=0, collate_fn=DetCollectFN) if __name__ == 
'__main__': import torch from torch.utils.data import DataLoader from matplotlib import pyplot as plt def show_img(imgs: np.ndarray, title='img'): from matplotlib import pyplot as plt color = (len(imgs.shape) == 3 and imgs.shape[-1] == 3) imgs = np.expand_dims(imgs, axis=0) for i, img in enumerate(imgs): plt.figure() plt.title('{}_{}'.format(title, i)) plt.imshow(img, cmap=None if color else 'gray') def draw_bbox(img_path, result, color=(255, 0, 0), thickness=2): import cv2 if isinstance(img_path, str): img_path = cv2.imread(img_path) # img_path = cv2.cvtColor(img_path, cv2.COLOR_BGR2RGB) img_path = img_path.copy() for point in result: # point = point.astype(int) cv2.polylines(img_path, [point], True, color, thickness) return img_path dataset = ICDARDataset('/home/data/OCRData/icdar2019/train/train.json', '/home/data/OCRData/icdar2019/train/images') print(len(dataset)) train_loader = DataLoader(dataset=dataset, batch_size=1, shuffle=True, num_workers=0) for i, data in enumerate(train_loader): img = data['img'] shrink_label = data['shrink_map'] threshold_label = data['threshold_map'] print(threshold_label.shape, threshold_label.shape, img.shape) show_img(img[0].numpy().transpose(1, 2, 0), title='img') show_img((shrink_label[0].to(torch.float)).numpy(), title='shrink_label') show_img((threshold_label[0].to(torch.float)).numpy(), title='threshold_label') # img = draw_bbox(img[0].numpy().transpose(1, 2, 0), np.array(data['text_polys'])) # show_img(img, title='draw_bbox') plt.show() break
Dear @morestart, Would you mind unit-testing your code? Can you check that the train and val ICDARDataset lengths in your DBDataModule aren't 0? Lightning doesn't manipulate your datasets / dataloaders, so maybe your datasets are empty. Best, T.C
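A quick sanity check along those lines, using the DBDataModule from the question; the paths are placeholders to replace with your own.

```python
# Hypothetical quick check that the datasets are not empty before calling trainer.fit
dm = DBDataModule(
    train_json_path="/path/to/train.json",      # placeholder paths
    train_img_path="/path/to/train/images",
    val_json_path="/path/to/val.json",
    val_img_path="/path/to/val/images",
)
print("train samples:", len(dm.train), "val samples:", len(dm.val))
assert len(dm.train) > 0 and len(dm.val) > 0, "empty dataset - check the JSON/image paths"
```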
MDEwOkRpc2N1c3Npb24zNTY4NDY4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9478#discussioncomment-1315018
How to scale learning rate with batch size for DDP training?
When using LARS optimizer, usually the batch size is scale linearly with the learning rate. Suppose I set the base_lr to be 0.1 * batch_size / 256. Now for 1 GPU training with batch size 512, the learning rate should be 0.1 * 2 = 0.2 However when I use 2 GPUs with DDP backend and batch size of 512 on each GPU. Should my learning rate be: 0.1 * 2 = 0.2 or 0.1 * 2 * 2 (no. GPUs) = 0.4
As far as I know, the learning rate is scaled with the batch size so that the sample variance of the gradients is kept approximately constant. Since DDP averages the gradients from all devices, I think the LR should be scaled according to the effective batch size, namely batch_size * num_accumulated_batches * num_gpus * num_nodes. In this case, assuming batch_size=512, num_accumulated_batches=1, num_gpus=2 and num_nodes=1, the effective batch size is 1024, so the LR should be scaled by sqrt(2) compared to a single GPU with effective batch size 512.
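The arithmetic from this answer written out, with the numbers from the question; whether you prefer linear or square-root scaling is a modelling choice, and the square-root variant shown here follows the reasoning above.

```python
base_lr = 0.1 * 512 / 256                # the question's rule for a single GPU with batch 512
effective_batch = 512 * 1 * 2 * 1        # batch_size * accumulated_batches * gpus * nodes
scale = (effective_batch / 512) ** 0.5   # sqrt scaling relative to the single-GPU case
lr_ddp = base_lr * scale                 # 0.2 * sqrt(2) ~= 0.283
print(lr_ddp)
```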
MDEwOkRpc2N1c3Npb244MjI3Ng==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/3706#discussioncomment-238302
How to stop wandblogger uploading the checkpoint?
I want the checkpoint and the logs stay in the same place while only the logs are uploaded to wandb server.
Yes, it will be fixed with #6231
MDEwOkRpc2N1c3Npb24zMzMzODc0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7177#discussioncomment-649523
How to remove hp_metric initial -1 point and x=0 points?
Hi, I don't think this is a bug but I'm doing something wrong. I want to use my val_dice as hp_metric tabular AND also see the graph on "show metric" (radio button) under the Tensorboard HPARAMS tab: To achieve this I'm logging using self.log('hp_metric', mean_dice) (for the graph) and self.logger.log_hyperparams(params=self.hparams, metrics={'hp_metric': mean_val_dice}) (for the hparams value), both in the function validation_epoch_end How do I get rid of the initial -1 value? How can I fix my graph so it doesn't draw any points at x=0? (zoomed in version)
I already use your first way but setting default_hp_metric to False makes hp_metric be removed from "hparams" tab (this tab isn't there at all even if I have set some hyper parameters). Adding the final log_hyperparams creates the hparams tab but the graph of hp_metric gets a final value at iteration 0 instead of final iteration), and this step will also be skipped if the job is killed. Here are what I've tried so far: default_hp_metric=True: hparams tab visible in Tensorboad with hp_metric updated during training, hp_metric wrong initial value that makes the corresponding graph unsuitable with log scale and smoothing activated. default_hp_metric=False: no hparams tab in TensorBoard, hp_metric graph OK default_hp_metric=False and final log_hyperparams: no hparams tab in TensorBoard during training, hp_metric graph OK until the end of the training where final point is associated to iteration #0. No hparams tab at all if job is killed. I will try to calculate metric before training start to appropriately populate it, or to delay log_hyperparams at the first validation...
MDEwOkRpc2N1c3Npb24zMTc5Njcw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5890#discussioncomment-354902
Error with loading model checkpoint
Hi everyone. I was recently running a lightning model and saved a checkpoint to store the intermediate results. When I try to open the checkpoint, I get an error that positional arguments (used to initialize the lightning module) are not present. This wouldn't be a big deal but one of the positional arguments is the encoder (used for BarlowTwins training). I was worried if I loaded the model checkpoint with an encoder initialized with starting weights, this would overwrite the weight parameters stored in the checkpoint. See the error log and a block of code below. Any suggestions on how I can appropriately load this stored model to resume training? model_ckpt = BarlowTwins.load_from_checkpoint('/wynton/protected/home/ichs/dmandair/BRCAness/datasets/train/pcam/epoch=199-step=25599.ckpt') Traceback (most recent call last): File "/wynton/protected/home/ichs/dmandair/BRCA/barlow.py", line 435, in <module> main(default_config) File "/wynton/protected/home/ichs/dmandair/BRCA/barlow.py", line 427, in main model_ckpt = BarlowTwins.load_from_checkpoint('/wynton/protected/home/ichs/dmandair/BRCAness/datasets/train/pcam/epoch=199-step=25599.ckpt') File "/wynton/protected/home/ichs/dmandair/anaconda3/envs/BRCA/lib/python3.9/site-packages/pytorch_lightning/core/saving.py", line 156, in load_from_checkpoint model = cls._load_model_state(checkpoint, strict=strict, **kwargs) File "/wynton/protected/home/ichs/dmandair/anaconda3/envs/BRCA/lib/python3.9/site-packages/pytorch_lightning/core/saving.py", line 198, in _load_model_state model = cls(**_cls_kwargs) TypeError: __init__() missing 5 required positional arguments: 'encoder', 'encoder_out_dim', 'num_training_samples', 'batch_size', and 'weight_decay' original model loaded with: encoder = resnet18(zero_init_residual=True) model = BarlowTwins( encoder=encoder, encoder_out_dim=encoder_out_dim, learning_rate = default_config['LR'], weight_decay = default_config['WD'], num_training_samples=262144, batch_size=BATCH_SIZE, z_dim=default_config['Z_DIM'], lambda_coeff = default_config['LAMBDA'], max_epochs=MAX_EPOCHS )
Hey @dmandair! Did you call self.save_hyperparameters() inside your LightningModule's __init__? Otherwise hyperparameters won't be saved inside the checkpoint and you'll need to provide them again, e.g. LMModel.load_from_checkpoint(..., encoder=encoder, encoder_out_dim=encoder_out_dim, ...). Also note that if you pass an nn.Module into your LightningModule and call self.save_hyperparameters(), it will save that module inside your hparams too, which is not a good idea, since nn.Modules are already saved inside the checkpoint's state_dict and this can create issues. Ideally, you should ignore them using self.save_hyperparameters(ignore=['encoder']). Check out this PR: #12068
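A sketch of what that could look like for the BarlowTwins module from the question. The argument names follow the question's code, but treat this as illustrative rather than the exact class definition.

```python
import pytorch_lightning as pl
from torchvision.models import resnet18


class BarlowTwins(pl.LightningModule):
    def __init__(self, encoder, encoder_out_dim, num_training_samples, batch_size, weight_decay, **kwargs):
        super().__init__()
        # keep the nn.Module out of hparams; its weights already live in the checkpoint's state_dict
        self.save_hyperparameters(ignore=["encoder"])
        self.encoder = encoder
        ...


# when loading, only the ignored argument has to be supplied again;
# the remaining hyperparameters are restored from the checkpoint's hparams
encoder = resnet18(zero_init_residual=True)
model = BarlowTwins.load_from_checkpoint("epoch=199-step=25599.ckpt", encoder=encoder)
```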
D_kwDOCqWgoM4APFIR
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12399#discussioncomment-2413496
How to implement channels last memory format callback
Hi there Pytorch docs recommends using channels last when training vision models in mixed precision. To enable, you need to do two changes: Move you model to channel last format: model = model.to(memory_format=torch.channels_last) # Replace with your model. This can be done in on_fit_start callback hook as model.to performs an in place modification: class ChannelsLast(pl.Callback): def on_fit_start(self, trainer, pl_module: pl.LightningModule) -> None: # Inplace model modification pl_module.to(memory_format=torch.channels_last) Move input data to channel last format before feeding to the model: input = input.to(memory_format=torch.channels_last). My problem is in step 2. I don't find any PyTorch lightning hook that allows me to make this modification to the batch :/. The only options left are to add it as data transforms (that must be used in conjunction with the callback) or doing all channel last related logic inside the LightningModule. I would prefer to avoid this last solution as it could clutter the LightningModule with unnecessary code. Do you know a to do step 2 in a callback?
You can use any of the following if done inside the LightningModule: batch = self.on_before_batch_transfer(batch, dataloader_idx) batch = self.transfer_batch_to_device(batch, device) batch = self.on_after_batch_transfer(batch, dataloader_idx) If you really need to do it in the Callback, I guess you could use on_train_batch_start since the modification is in-place. But I wouldn't recommend it.
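For reference, a minimal sketch of the hook-based option, assuming the batch is a plain 4D image tensor; adapt the indexing if your batch is a tuple or dict.

```python
import torch
import pytorch_lightning as pl


class ChannelsLastModel(pl.LightningModule):
    def on_after_batch_transfer(self, batch, dataloader_idx):
        # batch assumed to be an (N, C, H, W) tensor; switch its memory layout
        # before it reaches training_step / validation_step
        return batch.to(memory_format=torch.channels_last)
```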
MDEwOkRpc2N1c3Npb24zMzUwMzU1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7349#discussioncomment-709247
How to use Accuracy with ignore class?
Please see same question on Stack Overflow. When using Accuracy with a class that should be ignored, meaning it has labels but can never be predicted, the scoring is wrong, because it is calculated with the never predicted labels that should be ignored. How to use Accuracy while ignoring some class? Thanks :)
It is currently not supported in the accuracy metric, but we have an open PR implementing that exact feature: PyTorchLightning/metrics#155. What you can do instead for now is calculate the confusion matrix and then ignore some classes based on that (remember that the correctly classified samples are found on the diagonal of the confusion matrix): ignore_index = 2 metric = ConfusionMatrix(num_classes=3) confmat = metric(preds, target) confmat = confmat[:2, :2] # remove the last row and column, which correspond to the ignored class 2 acc = confmat.trace() / confmat.sum()
MDEwOkRpc2N1c3Npb24zMzExNDI0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6890#discussioncomment-588405
How to monitor tensorboard logged scalar in modelcheckpoint?
If I save scalar log like this, self.logger.experiment.add_scalars(‘loss/nll’, {‘train’: trainloss, ‘valid’: validloss}) How to monitor valid loss in Modelcheckpoint?
You need to log it with self.log instead (we automate the rest for you), so that Lightning is aware of what you are logging :)
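In other words, something along these lines; this is a fragment of a LightningModule, and the metric name, loss_fn and checkpoint settings are illustrative, not part of the original answer.

```python
import pytorch_lightning as pl
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint


class LitModel(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)
        # logging through self.log makes the value visible to ModelCheckpoint
        self.log("valid_loss", loss)


checkpoint = ModelCheckpoint(monitor="valid_loss", mode="min")
trainer = Trainer(callbacks=[checkpoint])
```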
MDEwOkRpc2N1c3Npb24zMzM0MTk2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7184#discussioncomment-648824
`init_process_group` not called when training on multiple-GPUs
Hi, I’m trying to train a model on 2 GPUs. I do this by specifying Trainer(..., gpus=2). ddp_spawn should automatically be selected for the method, but I instead get the following message + error: UserWarning: You requested multiple GPUs but did not specify a backend, e.g. `Trainer( accelerator="dp"|"ddp"|"ddp2")`. Setting `accelerator="ddp_spawn"` for you. 'You requested multiple GPUs but did not specify a backend, e.g.' GPU available: True, used: True TPU available: False, using: 0 TPU cores Traceback (most recent call last): File "train.py", line 186, in <module> main(sys.argv[1:]) File "train.py", line 173, in main print(f"Logs for this experiment are being saved to {trainer.log_dir}") File ".../pypi__pytorch_lightning_python3_deps/pytorch_lightning/trainer/properties.py", line 137, in log_dir dirpath = self.accelerator.broadcast(dirpath) File ".../pypi__pytorch_lightning_python3_deps/pytorch_lightning/accelerators/accelerator.py", line 436, in broadcast return self.training_type_plugin.broadcast(obj, src) File ".../pypi__pytorch_lightning_python3_deps/pytorch_lightning/plugins/training_type/ddp_spawn.py", line 275, in broadcast return self.dist.broadcast(obj) File ".../pypi__pytorch_lightning_python3_deps/pytorch_lightning/distributed/dist.py", line 33, in broadcast broadcast_object_list(obj, 0, group=group or _group.WORLD) File ".../pypi__torch_python3_deps/torch/distributed/distributed_c10d.py", line 1700, in broadcast_object_list my_rank = get_rank() File ".../pypi__torch_python3_deps/torch/distributed/distributed_c10d.py", line 725, in get_rank default_pg = _get_default_group() File ".../pypi__torch_python3_deps/torch/distributed/distributed_c10d.py", line 358, in _get_default_group raise RuntimeError("Default process group has not been initialized, " RuntimeError: Default process group has not been initialized, please make sure to call init_process_group. I looked at the source code of ddp_spawn and it looks like it should print out a message when initializing ddp, but it didn’t. Could I please have advice on how to correct this error. Thank you!
The issue comes from the line File "train.py", line 173, in main print(f"Logs for this experiment are being saved to {trainer.log_dir}") which tries to access trainer.log_dir outside of the trainer scope. trainer.log_dir tries to broadcast the directory but fails as DDP hasn’t been initialized yet. File ".../pypi__pytorch_lightning_python3_deps/pytorch_lightning/trainer/properties.py", line 137, in log_dir dirpath = self.accelerator.broadcast(dirpath) This is fixed in the 1.4 release as broadcast becomes a no-op in that case
MDEwOkRpc2N1c3Npb24zNDcxODUx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8517#discussioncomment-1035094
Backward twice in one training_step
I have 2 losses for my model. And I need the grads of the first loss to compute the second one. The pseudocode in pytorch is like: optimizer.zero_grad() hidden_value = model.part1(input) output = model.part2(hidden_value) loss1 = criterion(output, label) loss1.backward(retain_graph=True) loss2 = criterion2(hidden_value.grad, label2) loss2.backward() optimizer.step() I found an API named manual_backward() which may fit my problem. However, I build this model on a project based on pytorch_lighting 0.6.0, and it doesn’t have this API. So, my questions are: 1.How can I implement my operation using pytorch_lightning 0.6.0? 2.If I can’t implement it in pytorch_lightning 0.6.0, which lighting version should I chose? (Please recommend a close version which may cause less error after I update the lightning.)
I don't think this is really possible in 0.6. That version is too old, and manual optimization was introduced to cover your exact use case. I can only recommend the latest version, because manual backward underwent many changes and bugfixes, so I believe it is worth investing the time to get that code updated.
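For reference, a sketch of how the pseudocode above could translate to manual optimization on a recent Lightning release. part1/part2/criterion/criterion2 and the batch layout are the question's own names and are assumed to be defined elsewhere in the module (along with configure_optimizers); note that hidden needs retain_grad(), and the first backward needs create_graph=True for the second loss to be differentiable through hidden.grad.

```python
import pytorch_lightning as pl


class TwoLossModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False   # enable manual optimization

    def training_step(self, batch, batch_idx):
        # assumes the batch unpacks like this and that self.part1/part2,
        # self.criterion/criterion2 are defined as in the question's pseudocode
        x, label, label2 = batch
        opt = self.optimizers()
        opt.zero_grad()

        hidden = self.part1(x)
        hidden.retain_grad()                  # hidden is a non-leaf tensor, keep its .grad
        output = self.part2(hidden)

        loss1 = self.criterion(output, label)
        # create_graph=True keeps hidden.grad differentiable for the second loss
        self.manual_backward(loss1, retain_graph=True, create_graph=True)

        loss2 = self.criterion2(hidden.grad, label2)
        self.manual_backward(loss2)

        opt.step()
```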
MDEwOkRpc2N1c3Npb24zMzk4NjAw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7845#discussioncomment-832499
Why MultiGPU dp seems slower?
❓ Questions and Help Having 2 gpus with DP seems to be slowers than using just 1. Is it normal? My intuition is that if you are using 2 GPUs and the batch is being splitted into 2 batches, this should be faster. But when I tested the same code using 1 vs >1 my epoch time increased Code Minimalist Implementation of a BERT Sentence Classifier What have you tried? I also tried to run ddp but my code seems to break with a TypeError: cannot serialize '_io.TextIOWrapper' object error. I searched online but I couldn't find the reason... What's your environment? OS: Linux
You should double your batch size. DP still has communication overhead, so it won't scale linearly. Also try DDP.
MDEwOkRpc2N1c3Npb244MjIyNw==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/1005#discussioncomment-238110
ValueError: Expected positive integer total_steps, but got -1
def configure_optimizers(self): optimizer = torch.optim.SGD(self.parameters(), lr=self.lr) print(self.trainer.max_steps) lr_scheduler = { 'scheduler': torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=self.lr, total_steps=self.trainer.max_steps, anneal_strategy='linear', cycle_momentum=False, pct_start=0.1), 'interval': 'step', 'frequency': 1 } return {'optimizer': optimizer, 'lr_scheduler': lr_scheduler, "monitor": 'val_acc'} raise ValueError("Expected positive integer total_steps, but got {}".format(total_steps)) ValueError: Expected positive integer total_steps, but got -1
OK, I know the answer now.
D_kwDOCqWgoM4AOzN_
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11936#discussioncomment-2185929
How to test in low version
There is no trainer.test() in this lower version. How can I test on the test dataset? My version: 0.4.6
Hi, PyTorch Lightning 0.4.6 is extremely old; you should consider upgrading. That being said, you can always test your model as you would in plain PyTorch, because a LightningModule is also just an nn.Module: for inp, target in test_dataloader: pred = model(inp) test_loss = loss(pred, target) ...
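A slightly expanded version of that loop with eval mode and no_grad added; model, loss and test_dataloader are assumed to exist as in your own setup.

```python
import torch

model.eval()                      # disable dropout / batchnorm updates during evaluation
total_loss, n_batches = 0.0, 0
with torch.no_grad():             # no gradients needed for testing
    for inp, target in test_dataloader:
        pred = model(inp)
        total_loss += loss(pred, target).item()
        n_batches += 1
print("test loss:", total_loss / n_batches)
```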
MDEwOkRpc2N1c3Npb24zNDI2NzQy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8103#discussioncomment-914275
Pytorch-lightning CPU-only installation
Hello all - just wanted to discuss a use-case with CPU vs GPU PL install. We do normal training on GPUs, but when deploying for prediction we use CPUs and would like to keep the Docker container size as small as possible. It's not clear if it's possible to install pytorch-lightning with a CPU-only torch distribution, which is much smaller. Is there any possible equivalent to pip install pytorch-lightning[cpu]? Thanks for suggestions!
Hey, if you install PyTorch first (the CPU-only build) and then install Lightning, it will just use that version.
MDEwOkRpc2N1c3Npb24zNTU1NjYz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9325#discussioncomment-1281291
help defining new training_step() on a callback
Hi! I need to create a callback that, once every N training steps, performs a forward pass over the PL module and does some calculations. My first approach was to simply create a callback and define a new training_step() within that callback that does the calculations I need. The problem is that these calculations are not being executed. Using the debugger I see that the callback is correctly initialized and correctly passed to the trainer, but it never enters this newly defined training step. Here is a minimal example of what I need to do. Do you have any insights into what I am doing wrong?
Hey @malfonsoarquimea! Callback.training_step is not a hook, so it won't be called automatically. For your use case you can do something like: class CustomCallback(Callback): def __init__(..., every_n_train_steps): self.every_n_train_steps = every_n_train_steps ... def on_train_batch_end(self, trainer, pl_module, *args, **kwargs): if trainer.global_step % self.every_n_train_steps == 0: outputs = pl_module(self.input)
D_kwDOCqWgoM4APGyZ
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12442#discussioncomment-2430475
What's the difference between on_fit_start and on_train_start in LightningModule?
What's the difference between on_fit_start and on_train_start hooks in LightningModule?
I think the document here has answered your question very well.
MDEwOkRpc2N1c3Npb24zNDMxMzAw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8142#discussioncomment-930075
Example on training with TPU does not run at all
I am currently try this colab notebook https://colab.research.google.com/github/PytorchLightning/lightning-tutorials/blob/publication/.notebooks/lightning_examples/mnist-tpu-training.ipynb#scrollTo=2772a2e1 provided by PL teams to get some experience with TPU training. But when I try to execute the third cell, there is some Import error with the _XLAC module.
Hi @tungts1101! This error is raised when PyTorch and PyTorch/XLA are not the same version. You can verify with pip list | grep torch and install matching (latest) versions of both!
D_kwDOCqWgoM4AN2Gd
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9966#discussioncomment-1670392
LightningCLI: how to configure logger using cmd-line args?
I would like to change the names of the logging directories from the default "version_{n}" to something of my own choosing. How can I do this using command-line arguments to LightningCLI? I know I can set the logger using trainer.logger but setting logger args e.g. trainer.logger.version does not work (unrecognized argument). So how can I pass args to the logger?
See my reply here: #10574 (comment) We'll be adding support for shorthand notation shortly too: #11533
D_kwDOCqWgoM4AOgOw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11491#discussioncomment-1993974
Pytorch Lightning doesn't have CUDA?
🐛 Bug Hello, I'm trying to use Pytorch Lightning in order to speed up my ESR GAN renders on Windows 10. However, when I ran the installation code and attempt to run Cupscale (which I use as a GUI for ESR GAN), I get an error saying "Pytorch compiled without CUDA". Is there a way to choose to install specifically the CUDA version of Pytorch with Lightning install, or are the two incompatible? If the latter is the case, that's not good to hear. If the former, can I have such a code?
@TrocelengStudios oh, this seems like you do not have CUDA installed. Can you run nvidia-smi? cc: @awaelchli
MDEwOkRpc2N1c3Npb24zMzMyNzMx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7158#discussioncomment-644560
Odd Performance Using Multi-GPU + Azure
I was wondering if anyone has observed odd performance when training multi-GPU models? I’ve developed a script which trains a toy dataset (in this case the cats and dogs model), using a ResNet or EfficientNet. The script works fine locally on the GPU. However, when I move the script to the cloud and train using multiple GPUs strange things start to happen. The script trains fine on the cloud using 1 GPU, albeit slow as I was testing using a M60. However, if I run the same script on 4x K80 with ddp I find that the training process is around ~15% slower (which I’m guessing is the difference between K80 and M60). I checked GPU usage and the GPUs are all being used. However, model performance seems slower/worse than using just one GPU. Any ideas why this could be?
@deepbakes Could this be because of benchmark=True ?
D_kwDOCqWgoM4AOxzH
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11905#discussioncomment-2168073
Disabling find_unused_parameters
When trying to disable find_unused_parameters in the trainer by doing the following, strategy=DDPStrategy(find_unused_parameters=False) Am being thrown an import error for from pytorch_lightning.strategies import DDPStrategy Error: No module named 'pytorch_lightning.strategies'
pytorch_lightning.strategies will be available in v1.6 release and is only available in master at the moment. For now, you can use: from pytorch_lightning.plugins import DDPPlugin trainer = pl.Trainer( ..., strategy=DDPPlugin(find_unused_parameters=False), ) See the stable version of docs (not latest) here: https://pytorch-lightning.readthedocs.io/en/stable/guides/speed.html?highlight=find_unused_parameters#when-using-ddp-plugins-set-find-unused-parameters-false
D_kwDOCqWgoM4AOqUS
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11664#discussioncomment-2076602
Clarification on reload_dataloaders_every_epoch
With a basic DataModule like: class MyDM(pl.LightningDataModule): def __init__(self): <init some stuff> def setup(self, stage: typing.Optional[str] = None): .... <sort out dataset etc> def train_dataloader(self): .... etc etc model = MyModel() data = MyDM() trainer = pl.Trainer(reload_dataloaders_every_n_epochs=5) trainer.fit(model, data) Does the flag reload_dataloaders_every_n_epochs=N cause data.setup() to be called on every reload? I expect my dataset to be constantly changing and am currently assuming that I can define anything I don't expect to change in __init__ and anything I will need to change every N epochs in setup. A quick look at the pl.Trainer source code (specifically the reset_train_dataloader method) doesn't immediately elucidate this for me.
No, it doesn't call setup() at every reload, just the corresponding *_dataloader hook. You can define the datasets in setup() and access them inside the dataloader hooks, or you can initialize the corresponding dataset inside the dataloader hook itself.
D_kwDOCqWgoM4AO1rw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12023#discussioncomment-2218752
Pretrain some sections of a model, then initialize those parts when training a full model
I need to pretrain the encode and decode section of an autoencoder first, then later attach a transformer in the middle of the encode and decode section. When I load the weights of the encode and decode section when pretraining it first, while initializing the weights of the transformer section, will I get an error about missing layers?
You can use the strict flag of load_from_checkpoint to avoid the missing-layer failure (see pytorch_lightning/core/saving.py, lines 94-95 at 079fe9b): strict: Whether to strictly enforce that the keys in :attr:`checkpoint_path` match the keys returned by this module's state dict. Default: `True`.
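Alternatively, plain PyTorch does the same thing and makes the missing keys explicit; a rough sketch, where the class and checkpoint names are hypothetical placeholders for your own.

```python
import torch

full_model = AutoencoderWithTransformer()            # hypothetical combined model
ckpt = torch.load("autoencoder_pretrain.ckpt", map_location="cpu")
# strict=False loads the matching encode/decode weights and leaves the rest at their init values
missing, unexpected = full_model.load_state_dict(ckpt["state_dict"], strict=False)
print("initialized from scratch:", missing)          # e.g. the transformer.* keys
```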
MDEwOkRpc2N1c3Npb24zMjY0NDkw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6473#discussioncomment-469683
Optimization in a dual encoder LitModel
Hello, Currently, I am working in a Lit Model, which has two encoders. Each of them has its optimizer, scheduler, and loss, as shown below: import importlib import torch from pytorch_lightning.core.lightning import LightningModule from hydra.utils import instantiate from source.metric.ULMRRMetric import ULMRRMetric class LitModel(LightningModule): def __init__(self, hparams): super(LitModel, self).__init__() self.save_hyperparameters(hparams) # encoders self.x1_encoder = instantiate(hparams.x1_encoder) self.x2_encoder = instantiate(hparams.x2_encoder) # loss function self.x1_loss = instantiate(hparams.x1_loss) self.x2_loss = instantiate(hparams.x2_loss) def forward(self, x1, x2): x1_repr = self.x1_encoder(x1) x2_repr = self.x2_encoder(x2) return x1_repr, x2_repr def training_step(self, batch, batch_idx, optimizer_idx): x1, x2 = batch["x1"], batch["x2"] x1_repr, x2_repr = self(x1, x2) x1_loss=self.x1_loss(x1_repr, x2_repr) x2_loss = self.x2_loss(x1_repr, x2_repr) # what to return here? return def validation_step(self, batch, batch_idx): x1, x2 = batch["x1"], batch["x2"] x1_repr, x2_repr = self(x1, x2) self.log("val_x1_LOSS", self.x1_loss(x1_repr, x2_repr), prog_bar=True) self.log("val_x2_LOSS", self.x2_loss(x1_repr, x2_repr), prog_bar=True) # Alternating schedule for optimizer steps def optimizer_step( self, epoch, batch_idx, optimizer, optimizer_idx, optimizer_closure, on_tpu=False, using_native_amp=False, using_lbfgs=False, ): # update x1 encoder every even step if optimizer_idx == 0: if batch_idx % 2 == 0: optimizer.step(closure=optimizer_closure) # update x2 encoder every odd step if optimizer_idx == 1: if batch_idx % 2 != 0: optimizer.step(closure=optimizer_closure) def configure_optimizers(self): # optimizers optimizers = [ torch.optim.AdamW(self.x1_encoder.parameters(), lr=self.hparams.lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=self.hparams.weight_decay, amsgrad=True), torch.optim.AdamW(self.x2_encoder.parameters(), lr=self.hparams.lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=self.hparams.weight_decay, amsgrad=True) ] # schedulers step_size_up = round(0.03 * self.num_training_steps) schedulers = [ torch.optim.lr_scheduler.CyclicLR(optimizers[0], mode='triangular2', base_lr=self.hparams.base_lr, max_lr=self.hparams.max_lr, step_size_up=step_size_up, cycle_momentum=False), torch.optim.lr_scheduler.CyclicLR(optimizers[1], mode='triangular2', base_lr=self.hparams.base_lr, max_lr=self.hparams.max_lr, step_size_up=step_size_up, cycle_momentum=False) ] return optimizers, schedulers @property def num_training_steps(self) -> int: """Total training steps inferred from datamodule and number of epochs.""" steps_per_epochs = len(self.train_dataloader()) / self.trainer.accumulate_grad_batches max_epochs = self.trainer.max_epochs return steps_per_epochs * max_epochs My intention is to update each encoder in alternate steps (even steps: x1_encoder; odd steps: x2_encoder). After reading the documentation, it was not clear to me how it would be possible to update the parameters of each encoder from the loss of each one of them. For instance, I would like to update x1_encoder's parameters based on the x1_loss value and leveraging optimizer_1. Respectively, I would like to update x2_encoder's parameters based on the x2_loss value and employing optimizer_2. I appreciate any help you can provide.
I see two ways. I think your example is quite simple, so it does not matter which way you choose in the end: 1) Automatic Optimization: def training_step(self, batch, batch_idx, optimizer_idx): x1, x2 = batch["x1"], batch["x2"] if optimizer_idx == 0: x1_repr = self.x1_encoder(x1) x2_repr = self.x2_encoder(x2) x1_loss = self.x1_loss(x1_repr, x2_repr) return x1_loss if optimizer_idx == 1: x2_repr = ... return x2_loss def configure_optimizers(self): return [ {"optimizer": torch.optim.AdamW(self.x1_encoder.parameters(), ...), "frequency": 1}, {"optimizer": torch.optim.AdamW(self.x2_encoder.parameters(), ...), "frequency": 1}, ] (and delete your overridden optimizer_step method) Reference: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#configure-optimizers (see the frequency description) 2) Manual Optimization: def __init__(self, hparams): super().__init__() self.automatic_optimization = False ... def training_step(self, batch, batch_idx): x1, x2 = batch["x1"], batch["x2"] opt0, opt1 = self.optimizers() if batch_idx % 2 == 0: loss = ... opt0.zero_grad() loss.backward() opt0.step() else: loss = ... opt1.zero_grad() loss.backward() opt1.step() Reference: https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#manual-optimization Note: I converted this issue to a GitHub discussion as this is the primary forum for implementation help questions :)
D_kwDOCqWgoM4ANwT_
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9807#discussioncomment-1421542
TypeError: __init__() got an unexpected keyword argument 'row_log_interval'
Question: I am trying to run the EpicKitchen codes from https://github.com/epic-kitchens/C1-Action-Recognition-TSN-TRN-TSM I am getting this Typeerror may be related to older and new versions of lightining module and i was not able to resolve it. Error: Traceback (most recent call last): File "src/test.py", line 145, in main(parser.parse_args()) File "src/test.py", line 139, in main trainer = Trainer(**cfg.trainer, callbacks=[saver]) File "/home/code-base/.intdoc/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 41, in overwrite_by_env_vars return fn(self, **kwargs) TypeError: init() got an unexpected keyword argument 'row_log_interval' Code: test.py file from collections import defaultdict import argparse import logging import os import pickle from pathlib import Path import colorlog import torch import numpy as np from omegaconf import OmegaConf from pytorch_lightning import Callback, Trainer from typing import Any, Dict, List, Sequence, Union from systems import EpicActionRecogintionDataModule from systems import EpicActionRecognitionSystem parser = argparse.ArgumentParser( description="Test model", formatter_class=argparse.ArgumentDefaultsHelpFormatter ) parser.add_argument("checkpoint", type=Path) parser.add_argument("results", type=Path) parser.add_argument("--split", choices=["val", "test"], default="test") parser.add_argument( "--n-frames", type=int, help="Overwrite number of frames to feed model, defaults to the " "data.test_frame_count or data.frame_count if the former is not present", ) parser.add_argument( "--batch-size", type=int, help="Overwrite the batch size for loading data, defaults to learning.batch_size", ) parser.add_argument( "--datadir", default=None, help="Overwrite data directory in checkpoint. Useful when testing a checkpoint " "trained on a different machine.", ) LOG = logging.getLogger("test") class ResultsSaver(Callback): def __init__(self): super().__init__() self.results: Dict[str, Dict[str, List[Any]]] = dict() def on_test_batch_end( self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx ): self._store_batch_results("test", outputs) def _store_batch_results( self, dataset_name: str, batch_outputs: Dict[str, Sequence[Any]] ): if dataset_name not in self.results: self.results[dataset_name] = {k: [] for k in batch_outputs.keys()} for k, vs in batch_outputs.items(): if isinstance(vs, torch.Tensor): vs = vs.detach().cpu().numpy() self.results[dataset_name][k].extend(vs) def save_results(self, dataset_name: str, filepath: Union[str, Path]): filepath = Path(filepath) filepath.parent.mkdir(parents=True, exist_ok=True) results_dict = self.results[dataset_name] new_results_dict = { k: np.stack(vs) for k, vs in results_dict.items() } with open(filepath, "wb") as f: pickle.dump(new_results_dict, f) def main(args): logging.basicConfig(level=logging.INFO) handler = colorlog.StreamHandler() handler.setFormatter( colorlog.ColoredFormatter("%(log_color)s%(levelname)s:%(name)s:%(message)s") ) logger = colorlog.getLogger("example") logger.addHandler(handler) ckpt = torch.load(args.checkpoint, map_location=lambda storage, loc: storage) # Publicly released checkpoints use dicts for longevity, so we need to wrap them # up in an OmegaConf object as this is what EpicActionRecognitionSystem expects. 
cfg = OmegaConf.create(ckpt["hyper_parameters"]) OmegaConf.set_struct(cfg, False) # allow writing arbitrary keys without raising # exceptions cfg.data._root_gulp_dir = os.getcwd() # set default root gulp dir to prevent # exceptions on instantiating the EpicActionRecognitionSystem system = EpicActionRecognitionSystem(cfg) system.load_state_dict(ckpt["state_dict"]) if not cfg.get("log_graph", True): # MTRN can't be traced due to the model stochasticity so causes a JIT tracer # error, we allow you to prevent the tracer from running to log the graph when # the summary writer is created try: delattr(system, "example_input_array") except AttributeError: pass if args.n_frames is not None: cfg.data.test_frame_count = args.n_frames if args.batch_size is not None: cfg.learning.batch_size = args.batch_size if args.datadir is not None: data_dir_key = f"{args.split}_gulp_dir" cfg.data[data_dir_key] = args.datadir # Since we don't support writing results when using DP or DDP LOG.info("Disabling DP/DDP") cfg.trainer.accelerator = None n_gpus = 1 LOG.info(f"Overwriting number of GPUs to {n_gpus}") cfg.trainer.gpus = n_gpus cfg["test.results_path"] = str(args.results) data_module = EpicActionRecogintionDataModule(cfg) if args.split == "val": dataloader = data_module.val_dataloader() elif args.split == "test": dataloader = data_module.test_dataloader() else: raise ValueError( f"Split {args.split!r} is not a recognised dataset split to " f"test on." ) saver = ResultsSaver() trainer = Trainer(**cfg.trainer, callbacks=[saver]) trainer.test(system, test_dataloaders=dataloader) saver.save_results("test", args.results) if __name__ == "__main__": main(parser.parse_args()) Versions: Python3.6, lightining-1.1.8, pytorch-1.7.1, Running on an linux machine I am new to this library, any help would be appreciated Thanks in advance
"TypeError: __init__() got an unexpected keyword argument 'row_log_interval'" - row_log_interval was deprecated and removed. You must have taken this from an old version of the docs. Use log_every_n_steps instead.
MDEwOkRpc2N1c3Npb24zNDA3ODk2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7943#discussioncomment-863926
KeyError: 'Trying to restore training state but checkpoint contains only the model. This is probably due to `ModelCheckpoint.save_weights_only` being set to `True`.'
this is my code: checkpoint_callback = ModelCheckpoint( monitor='hmean', mode='max', dirpath='../weights', filename='DB-{epoch:02d}-{hmean:.2f}', save_last=True, save_weights_only=True, ) trainer = pl.Trainer( # open this, must drop last benchmark=True, checkpoint_callback=True, gpus=[0], max_epochs=1200, min_epochs=300, logger=[logger], callbacks=[early_stop, checkpoint_callback], resume_from_checkpoint='../weights/DB-epoch=130-hmean=0.70.ckpt' ) when i try resume train from checkpoint, i got this error: KeyError: 'Trying to restore training state but checkpoint contains only the model. This is probably due to ModelCheckpoint.save_weights_only being set to True.'
If you set save_weights_only=True in ModelCheckpoint, it won't save the optimizer/scheduler states. So passing this checkpoint to resume training won't work, because resuming needs to restore the optimizer/scheduler state as well.
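So the fix is to drop save_weights_only (or set it to False) in the question's callback; a sketch adapting the question's own configuration:

```python
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(
    monitor="hmean",
    mode="max",
    dirpath="../weights",
    filename="DB-{epoch:02d}-{hmean:.2f}",
    save_last=True,
    save_weights_only=False,  # keep optimizer/scheduler state so resuming from the checkpoint works
)
```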
D_kwDOCqWgoM4ANuaZ
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9745#discussioncomment-1400848
'NeuralNetwork' object has no attribute 'log'
Hello I am trying to train a neural network using pytorch lightning. I have run into an issue with the trainer when I try to run the program. I am getting the following issue: GPU available: False, used: False TPU available: False, using: 0 TPU cores Traceback (most recent call last): File "/home/PytorchLightningGRUtraining.py", line 179, in <module> main(args) File "/home/PytorchLightningGRUtraining.py", line 171, in main trainer.fit(model, dm.train_dataloader(), dm.val_dataloader()) File "/home/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 449, in fit self.train_loop.setup_fit(model, train_dataloader, val_dataloaders, datamodule) File "/home/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py", line 123, in setup_fit self.trainer.callback_connector.attach_model_logging_functions(model) File "/home/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/callback_connector.py", line 123, in attach_model_logging_functions callback.log = model.log File "/home/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'NeuralNetwork' object has no attribute 'log' I defined the logger for the trainer after I created an instance of the model class. What am I doing wrong?
Did you pass in a LightningModule instance, i.e. class YourModel(pl.LightningModule): <- here? ...
D_kwDOCqWgoM4AN6JX
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10096#discussioncomment-1527358
Training based on iterations
Hi, Could anyone advice me how to set up PyTorch lightning trainer to learn based on iterations instead of epochs? Thank you!
Hi @mshooter , you can use the min_steps and max_steps arguments on the Trainer to do training based on iterations instead of epochs. https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#max-steps
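For example (the step counts here are arbitrary):

```python
import pytorch_lightning as pl

# train for at least 1,000 and at most 10,000 optimizer steps,
# regardless of how many epochs that corresponds to
trainer = pl.Trainer(min_steps=1_000, max_steps=10_000)
```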
D_kwDOCqWgoM4APJ2u
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12511#discussioncomment-2463941
self.manual_backward() vs. loss.backward() when optimizing manually
According to the manual_backward() documentation, it takes care of scaling when using mixed precision. In that case, is it correct to assume one can simply and safely use loss.backward() during manual optimization if not using mixed precision?
Hey @MGheini, it's not just about precision: manual_backward is a common entry point that supports all the other strategies (e.g. DeepSpeed, DDP), and it also ensures hooks such as on_after_backward are called. So manual_backward is recommended so that no code change is required if, for example, the user later switches strategies.
D_kwDOCqWgoM4AOa1k
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11318#discussioncomment-1908881
How to show the validation loss in progress bar?
Hi. I'm trying to come up with ways to get my validation loss shown in the progress bar. My model is defined like this: class DummyNet(pl.LightningModule): def __init__(self, batch_size): super().__init__() self.batch_size = batch_size self.fc = nn.Sequential( nn.Dropout(0.5), nn.Linear(512, 2) ) # loss self.loss_fn = nn.CrossEntropyLoss() # metrics metrics = torchmetrics.MetricCollection( { "accuracy": torchmetrics.Accuracy(), "precision": torchmetrics.Precision(), "recall": torchmetrics.Recall(), "auc": torchmetrics.AUC(reorder=True), }, ) self.f1 = nn.ModuleDict({ "train_f1": torchmetrics.F1(), "val_f1": torchmetrics.F1(), "test_f1": torchmetrics.F1(), }) self.metrics = nn.ModuleDict({ f"{k}_metrics": metrics.clone(prefix=k) for k in "train val test".split() }) def forward(self, x): x = self.fc(x) return x def loop_step(self, batch, stage): x, targets = batch["windows"], batch["diagnosis"] logits = self(x) loss = self.loss_fn(logits, targets) preds = logits.argmax(-1) # computing metrics f1_str = f"{stage}_f1" metric_str = f"{stage}_metrics" self.f1[f1_str](preds, targets) self.metrics[metric_str](preds, targets) # logging metrics on_step = False if stage != "train" else True self.log(f1_str, self.f1[f1_str], on_step=on_step, on_epoch=True) self.log_dict(self.metrics[metric_str], on_step=False, on_epoch=True) self.log(f"{stage}_loss", loss, on_step=on_step, on_epoch=True) return loss def training_step(self, batch, batch_idx): return self.loop_step(batch, "train") def validation_step(self, batch, batch_idx): return self.loop_step(batch, "val") def testing_step(self, batch, batch_idx): return self.loop_step(batch, "test") def configure_optimizers(self): return torch.optim.AdamW(self.parameters(), lr=0.001, weight_decay=0.01) But as of now none of my metrics nor my validation loss comes up in the progress bar. Is it because I'm returning loss in the dictionary and not "{stage}_loss"? Thank you.
Hi @FeryET, I believe the below should work as documented in https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#automatic-logging. self.log(..., prog_bar=True)
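A minimal sketch of how that looks in a LightningModule (toy layer and loss; only the prog_bar flag is the point here):

import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitExample(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(512, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self.fc(x), y)
        # prog_bar=True puts the logged value in the progress bar
        self.log("train_loss", loss, prog_bar=True, on_step=True, on_epoch=True)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self.fc(x), y)
        self.log("val_loss", loss, prog_bar=True, on_epoch=True)

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-3)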
D_kwDOCqWgoM4AOd26
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11412#discussioncomment-1945561
Computing expensive metrics less frequently than using validation_step()
Hi I need to compute some metrics that are not quick to compute (eg. Frechet Inception Distance). It is too expensive to compute them every validation epoch. Instead I would like to compute the metric once after every training epoch (or after some arbitrary number of steps). To do so, I need to be able to access the training dataset at the end of every training epoch, compute the metric and log it. It is not obvious how to do this, as I cannot access the training data-set during “on_epoch_end” or one of the other end of epoch hooks. Is there a good solution for this? Thanks Inigo.
Solved by using self.trainer.datamodule
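A rough sketch of what that can look like inside the LightningModule; compute_expensive_metric is a placeholder for your own FID-style function, and the hook signature may differ slightly between Lightning versions:

import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    # ... usual training_step / configure_optimizers ...

    def on_train_epoch_end(self):
        # only run the expensive metric every 5 epochs (pick any interval)
        if (self.current_epoch + 1) % 5 != 0:
            return
        train_loader = self.trainer.datamodule.train_dataloader()
        score = compute_expensive_metric(self, train_loader)  # placeholder function
        self.log("expensive_metric", score)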
MDEwOkRpc2N1c3Npb24zMzUwODY2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7356#discussioncomment-697581
Issue in fitting model and finding optimal learning rate parameter
following is the error: NotImplementedError: `val_dataloader` must be implemented to be used with the Lightning Trainer LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-11-263e8be26564> in <module>() 3 tft, 4 train_dataloader=train_dataloader, ----> 5 val_dataloaders=val_dataloader, 6 ) 11 frames /usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/hooks.py in val_dataloader(self) 590 will have an argument ``dataloader_idx`` which matches the order here. 591 """ --> 592 raise NotImplementedError("`val_dataloader` must be implemented to be used with the Lightning Trainer") 593 594 def predict_dataloader(self) -> EVAL_DATALOADERS: NotImplementedError: `val_dataloader` must be implemented to be used with the Lightning Trainer trainer.fit( tft, train_dataloader=train_dataloader, val_dataloaders=val_dataloader, )
I think it's a problem with pytorch-lightning==1.5.0. I had this problem too with code that worked before recreating my venv; the difference was the 1.5.0 release on 2 Nov. Switching to 1.4.9 made it work again. I also checked 1.5.0rc1 and it did not work either.
D_kwDOCqWgoM4AN-x1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10341#discussioncomment-1587190
Multiple Validation Sets
Hello, I'm trying to validate my model on multiple subsets of the initial validation set to compare performance. Reading this page I got the idea that returning a list contaning the multiple Dataloaders would be enough. My val_dataloader method became the following: But this isn't working properly. I get the following error: "TypeError: validation_step() takes 3 positional arguments but 4 were given" (it worked properly when I only used 1 validation Dataloader). What am I doing wrong? Can someone help me with this? Or just point me to some more documentation on this. Thanks in advance!
You must be missing the additional dataloader_idx argument that validation_step requires when using multiple dataloaders. Docs: https://pytorch-lightning.readthedocs.io/en/latest/guides/data.html#multiple-validation-test-predict-dataloaders
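A minimal sketch of the two LightningModule methods involved; val_set_a, val_set_b and loss_fn are placeholders for your own attributes:

from torch.utils.data import DataLoader


def val_dataloader(self):
    # return a list of loaders; Lightning then passes dataloader_idx to validation_step
    return [DataLoader(self.val_set_a, batch_size=32), DataLoader(self.val_set_b, batch_size=32)]


def validation_step(self, batch, batch_idx, dataloader_idx=0):
    x, y = batch
    loss = self.loss_fn(self(x), y)
    self.log(f"val_loss/dl_{dataloader_idx}", loss)
    return loss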
D_kwDOCqWgoM4AOUM0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11135#discussioncomment-1837460
Gradient Clipping with mix precision in case of NaN loss
Greetings. I am getting a NaN val loss (Cannot log infinite or NaN value to attribute training/val_loss/) with a CNN-LSTM network, so I am thinking of using gradient clipping. But the docs say gradient clipping should not be used with mixed precision: "If using mixed precision, the gradient_clip_val does not need to be changed as the gradients are unscaled before applying the clipping function." Further, I am doing regression and I don't know what value of gradient clipping I should use. I also checked the Trainer docs and found: "track_grad_norm (Union[int, float, str]) – -1 no tracking. Otherwise tracks that p-norm. May be set to 'inf' infinity-norm. If using Automatic Mixed Precision (AMP), the gradients will be unscaled before logging them." Can you explain what 'If using Automatic Mixed Precision (AMP), the gradients will be unscaled before logging them' means?
"But the docs say gradient clipping should not be used with mixed precision." You totally can; that note means any scaling applied by 16-bit precision training will be undone before the gradients are clipped. In other words, you do not need to change the gradient clipping value with vs. without precision=16. "I don't know what value of gradient clipping I should use." Nobody does :P Try some experiments and find out! "If using Automatic Mixed Precision (AMP), the gradients will be unscaled before logging them." Same thing as I explained above: it's just a technical detail, you do not need to worry about it.
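A rough sketch of the Trainer arguments mentioned above; the clip value of 0.5 is only a common starting point, not a recommendation for your model:

import pytorch_lightning as pl

# clipping and AMP work together: gradients are unscaled before the clip is applied
trainer = pl.Trainer(
    precision=16,
    gradient_clip_val=0.5,  # tune experimentally, e.g. by watching the logged grad norm
    track_grad_norm=2,      # logs the 2-norm of the gradients
)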
D_kwDOCqWgoM4AOd6S
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11413#discussioncomment-1952003
Lightning CLI is incompatible with models defined by data
Is there an easy way to make a Lightning CLI work with a Lightning Module defined by data? This seems like a very common design pattern. For example (from the docs) it doesn't appear possible to easily convert the following to a Lightning CLI: # init dm AND call the processing manually dm = ImagenetDataModule() dm.prepare_data() dm.setup() model = LitModel(out_features=dm.num_classes, img_width=dm.img_width, img_height=dm.img_height) trainer.fit(model, dm) The current CLI implementation works for model re-loading. However, it requires data-dependent attributes to be specified to the config files prior to fitting, which does not seem ideal.
Resolved: see #9473
MDEwOkRpc2N1c3Npb24zNTY2Nzgy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9452#discussioncomment-1317965
Why is the default implementation for train_dataloader in DataHooks a warning?
The default implementation for these is logging a warning that nothing is implemented. pytorch-lightning/pytorch_lightning/core/hooks.py Line 529 in 963c267 rank_zero_warn("`train_dataloader` must be implemented to be used with the Lightning Trainer") Why isn’t the default implementation to raise a NotImplementedError? This would make errors much clearer in case users forget to override these hooks
I believe this is just legacy code. No real reason. It's in the original implementation (to bolts!) PyTorchLightning/lightning-bolts@797464c Feel free to try changing it :)
MDEwOkRpc2N1c3Npb24zNTAzNDg3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8734#discussioncomment-1135763
When doing `fit()`, `self.training` in `forward()` keeps turning into False?
Hi all, I tried to train a model with pl. And I just ran the trainer.fit() as below: trainer.fit(model, train_dataloaders=model.train_dataloader(), val_dataloaders=model.val_dataloader()) And I found that model.training == False, when it gets into forward()... Is there any solution or does anybody know the potential reason for this? Or does anyone know where can I find the source code for training_step() so that I can check and debug forward() in fit()? Thank you very much.
Does it print self.training == False for all the training steps? You might have checked it during the initial steps, where the validation sanity check happens.
D_kwDOCqWgoM4AOwQY
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11838#discussioncomment-2148203
How to apply uniform length batching(smart batching)?
How to apply uniform length batching(smart batching)? . Hi all! I have a question about applying a smart batching system like the above picture. To implement smart batching system, I write the code like below: Dataset Class class ExampleDataset(Dataset): def __init__(self, datas, tokenizer): super(ExampleDataset, self).__init__() self.tokenizer = tokenizer tokenized = [self.tokenize(data) for data in tqdm(datas, desc='Tokenizing..')] self.input_ids, self.attention_masks, self.labels = list(zip(*tokenized)) def tokenize(self, data): encodings_dict = self.tokenizer(data) return [ encodings_dict['input_ids'], encodings_dict['attention_mask'], encodings_dict['input_ids'] ] def __len__(self): return len(self.input_ids) def __getitem__(self, idx): return self.input_ids[idx], self.attention_masks[idx], self.labels[idx] Sampler Class class SmartBatchingSampler(Sampler): def __init__(self, data_source: torch.utils.data.Dataset, batch_size=1): super(SmartBatchingSampler, self).__init__(data_source) self.batch_size = batch_size self.data_source = data_source sentence_lengths = [len(sentence[0]) for sentence in data_source] sentence_indices = [idx for idx in range(len(data_source))] pack_by_length = list(zip(sentence_lengths, sentence_indices)) sort_by_length = sorted(pack_by_length) sentence_lengths, sentence_indices = zip(*sort_by_length) self.bins = [ sentence_indices[i: i + batch_size] for i in range(0, len(sentence_indices), batch_size) ] self.bins = list(chain.from_iterable(self.bins)) self.drop_last = drop_last def __iter__(self): for ids in self.bins: yield ids def __len__(self): return len(self.bins) def shuffle(self, epoch): np.random.shuffle(self.bins) collate_fn function def collate_fn(batch): def seq_length_(p): return len(p[0]) max_seq_sample = max(batch, key=seq_length_)[0] max_seq_size = len(max_seq_sample) batch_size = len(batch) input_ids = torch.zeros(batch_size, max_seq_size).fill_(0).long() attention_masks = torch.zeros(batch_size, max_seq_size).fill_(0).long() labels = torch.zeros(batch_size, max_seq_size).fill_(0).long() for idx in range(batch_size): sample = batch[idx] sample_input_ids = sample[0] sample_attention_masks = sample[1] sample_labels = sample[2] input_ids[idx].narrow(0, 0, len(sample_input_ids)).copy_(torch.LongTensor(sample_input_ids)) attention_masks[idx].narrow(0, 0, len(sample_attention_masks)).copy_(torch.LongTensor(sample_attention_masks)) labels[idx].narrow(0, 0, len(sample_labels)).copy_(torch.LongTensor(sample_labels)) return input_ids, attention_masks, labels LightningDataModule class ExampleDataModule(pl.LightningDataModule): ... ... def train_dataloader(self): sampler = SmartBatchingSampler(self.dataset['train'], batch_size=self.batch_size) return DataLoader( dataset=self.dataset['train'], # ExampleDataset class sampler=sampler, collate_fn=collate_fn, ) I have three questions. Can I apply it like this? If not, let me know how to apply it. If it is done in the same way as above, the batch size must be determined in advance. If 'auto_scale_batch_size' is performed, how can I know the determined batch size? If I designate SmartBatchingSampler that the batch size is 32, and 'auto_scale_batch_size' has set the batch size to 128, how does this work? Thank you.
You can sort the data by length once while creating the dataset itself. Then just use a sequential sampler, i.e. avoid shuffling by setting shuffle=False inside the DataLoader. collate_fn looks good, although it could be optimized a little. Apart from that, even if you use auto_scale_batch_size it will work just fine, since your dataset will already be sorted by length.
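A rough sketch of the sort-once approach, reusing the attribute names from the question's ExampleDataset.__init__ (run it right after tokenization):

# sort every field by tokenized length, once, at dataset construction time
order = sorted(range(len(self.input_ids)), key=lambda i: len(self.input_ids[i]))
self.input_ids = [self.input_ids[i] for i in order]
self.attention_masks = [self.attention_masks[i] for i in order]
self.labels = [self.labels[i] for i in order]
# then build the DataLoader with shuffle=False and the same collate_fn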
D_kwDOCqWgoM4ANuQ1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9740#discussioncomment-1398705
How to implement a deep ensemble
I am looking to implement n parallel independent ensembles. My idea is the following: class DeepEnsemble(LightningModule): def __init__(self, cfg): super().__init__(cfg) self.net = nn.ModuleList([configure_network(self.cfg) for _ in range(self.cfg.METHOD.ENSEMBLE)]) def configure_optimizers(self): return [torch.optim.Adam(net.parameters(), lr=self.cfg.SOLVER.LR) for net in self.net] def forward(self, x): x = [net.forward(x) for net in self.net] return x def training_step(self, batch, batch_idx, optimizer_idx): image, label = batch["image"], batch["label"] logits = self.forward(image) loss = [self.criterion(logit, label) for logit in logits] mean_logit = torch.stack(logits, dim=-1).mean(dim=-1) metrics = self.log_metrics(mean_logit, label, 'train') return loss def validation_step(self, batch, batch_idx): image, label = batch["image"], batch["label"] logits = self.forward(image) mean_logit = torch.stack(logits, dim=-1).mean(dim=-1) metrics = self.log_metrics(mean_logit, label, 'val') return metrics[self.cfg.CKPT.MONITOR] def test_step(self, batch, batch_idx): pass I have n networks and n optimisers. My solution works (I think), but the training_step gets called with a new optimizer_idx every time, which indicates that Pytorch Lightning expects to only train 1 network per training_step. Therefore, my solution is very inefficient, because n^2 forward passes are executed instead of n. If I only do the forward pass for the ith network, then I can't compute metrics based on all ensembles (e.g. disagreement) unless I write some very inelegant if statements. In addition It would be nice to have all forward passes done in parallel instead of sequential like in this list comprehension. So what is the most elegant way to train an ensemble and still access all predictions for metric logging together?
I see two potential options: (1) cache the forward output for a specific batch_idx and keep the automatic optimization flow (https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#automatic-optimization); (2) use manual optimization (https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#manual-optimization).
MDEwOkRpc2N1c3Npb24zNDcwNzE3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8505#discussioncomment-1035349
Is there an all_gather before training_step_end when using DDP?
From the training_step_end() docs it says: If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code When using dp or ddp2 the first dimension is equal to the number of GPUs, and it has the per-GPU results like gpu_n_pred = training_step_outputs[n]['pred'] as shown in the example in the docs. This makes sense since there is a gather in the forward pass. For DDP though there does not need to be a gather / sync / barrier in the forward pass, only for the gradients in the backward pass. So does this just pass through the single-GPU output in the DDP case? E.g. in the same example, is training_step_outputs just the dictionary from that single GPU, like gpu_pred = training_step_outputs['pred']? Or if I define this method does it add a barrier / gather as if doing outputs = self.all_gather(outputs), such that all of the GPU results are actually available like gpu_n_pred = training_step_outputs[n]['pred'] as in the DP/DDP2 case? I just want to make sure I'm not slowing down my code if I define this method for the dp / ddp2 case but then almost always use standard ddp. Sorry if this was already asked or is in the docs, I tried my best to find the answer. Thanks!
I'll answer my own question. Short answer is no, there is no barrier / gather before training_step_end() when using DDP. I could be wrong, but it appears these methods just get called using the normal callback mechanism, e.g. PyTorch-Lightning doesn't post-process the output beyond what DP / DDP will do. So in the DP case the outputs are automatically aggregated by concatenating the first dimension, and in the DDP case the outputs are just passed through. I tried returning a dictionary from training_step_end() containing (1) a scalar, e.g. loss, and (2) a tensor of output predictions e.g. pred_outs, shape (N, K) for batch size N and number of classes K. The results were as follows, using 2 GPUs for DP / DDP with a batch size of 128 for DP, and 64 for DDP (maintaining the effective batch size of 128): Rough setup: def training_step( self, batch: Tuple[torch.Tensor, torch.Tensor], batch_idx: int ) -> Dict[str, torch.Tensor]: inputs, targets = batch pred_odds = self.forward(inputs) log_probs = F.log_softmax(pred_odds, dim=1) loss = F.nll_loss(log_probs, targets) return {'pred_odds': pred_odds, 'loss': loss} def training_step_end( self, step_outputs: Dict[str, torch.Tensor] ) -> Dict[str, torch.Tensor]: print(step_outputs['loss']) print(step_outputs['loss'].shape) print(step_outputs['pred_odds'].shape) print(step_outputs['pred_odds'].device) Gave the following results: --------------------------- SINGLE GPU --------------------------- SCALAR OUTPUT: step_outputs['loss'] tensor([4.8134], device='cuda:0') step_outputs['loss'].shape torch.Size([1]) TENSOR OUTPUT: step_outputs['pred_odds'].shape torch.Size([128, 100]) step_outputs['pred_odds'].device device(type='cuda', index=0) --------------------------- DP --------------------------- SCALAR OUTPUT: step_outputs['loss'] tensor([4.8472, 4.8262], device='cuda:0') step_outputs['loss'].shape torch.Size([2]) TENSOR OUTPUT: step_outputs['pred_odds'].shape torch.Size([128, 100]) step_outputs['pred_odds'].device device(type='cuda', index=0) ----------------------------- DDP ----------------------------- SCALAR OUTPUT: step_outputs['loss'] tensor(4.8477, device='cuda:0') step_outputs['loss'].shape torch.Size([]) TENSOR_OUTPUT: step_outputs['pred_odds'].shape torch.Size([64, 100]) step_outputs['pred_odds'].device device(type='cuda', index=0) So if you're using a tensor (the only case that I actually needed, since the loss will be computed in training_step_end() anyway), you can just use the tensor as you normally would in training_step_end() as if it were coming from a single GPU in training_step(). If you're using a scalar, it will get converted into a 1D tensor equal to the number of GPUs in the no-backend and DP case, but it will be still be a scalar tensor in the DDP case (see the size above). That could throw you off, but again, you're probably not passing scalars to training_step_end() anyway. Hope this helps someone. Cheers.
MDEwOkRpc2N1c3Npb24zNDA3MTg3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7934#discussioncomment-863899
Error importing pytorch lighting
Hi, I am getting this weird error; I was able to run my code before and today I got this: import pytorch_lightning as pl "~/dir/miniconda3/envs/pytorchenv/lib/python3.7/site-packages/pytorch_lightning/init.py", line 66, in from pytorch_lightning import metrics ImportError: cannot import name 'metrics' from 'pytorch_lightning'
@mshooter Hi, could you try reinstalling it and running it again? I didn't experience the issue with the following command on Google Colab: !pip install pytorch-lightning --upgrade from pytorch_lightning import metrics If the problem persists, could you run the following commands and share the output? $ wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py $ python collect_env_details.py
MDEwOkRpc2N1c3Npb24zMjQzMTM4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6240#discussioncomment-412449
how to use pl to process tfrecords data?
How can I use a LightningDataModule to process TFRecord data? Can anyone provide a tutorial?
Depends on what data you have inside TFRecord, but you can see usage: https://github.com/vahidk/tfrecord#reading-tfexample-records-in-pytorch def train_dataloader(): # index_path = None # tfrecord_path = "/tmp/data.tfrecord" # description = {"image": "byte", "label": "float"} dataset = TFRecordDataset(tfrecord_path, index_path, description) loader = torch.utils.data.DataLoader(dataset, batch_size=32) return loader also check how to work with dataset - https://discuss.pytorch.org/t/read-dataset-from-tfrecord-format/16409/15
MDEwOkRpc2N1c3Npb24zNDMzNzM4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8161#discussioncomment-974908
How do I set the steps_per_epoch parameter of a lr scheduler in multi-GPU environment?
What is your question? For some learning rate schedulers, there is a required steps_per_epoch parameter. One example is the OneCycleLR scheduler. On a CPU or single GPU, this parameter should be set to the length of the train dataloader. My question is, how should this parameter be set on a multi-GPU machine using DDP. Does this parameter need to be updated to len(self.train_dataloader()) / num_gpus? Or is this done automatically? What have you tried? I've tried manually dividing the steps_per_epoch of the OneCycleLR scheduler by the number of GPUs when training on a multi-GPU machine. The LR doesn't seem to be following the expected update pattern and I think the scheduler may be the source of the problem. What's your environment? OS: Linux Packaging: conda Version: 0.7.6
After some more investigation, it seems like dividing the dataloader size by the number of GPUs is the correct way. The documentation could be more clear on this, but I'm closing this now.
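A sketch of how the scheduler could be set up following that conclusion; num_gpus and max_epochs are assumed hyperparameters here, and the division only applies when len(train_dataloader) still reflects the full dataset:

import torch


def configure_optimizers(self):
    optimizer = torch.optim.AdamW(self.parameters(), lr=1e-3)
    # divide the full dataloader length by the number of DDP processes
    steps_per_epoch = len(self.train_dataloader()) // self.hparams.num_gpus
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=1e-3,
        steps_per_epoch=steps_per_epoch,
        epochs=self.hparams.max_epochs,
    )
    return [optimizer], [{"scheduler": scheduler, "interval": "step"}]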
MDEwOkRpc2N1c3Npb244MjI3MQ==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/2149#discussioncomment-238274
Multi-GPU Training GPU Usage
❓ Multi-GPU Training GPU Usage. Hi, I'm using Lightning with the ddp backend to do multi-GPU training, with Apex AMP (amp_level = 'O1') on 8 GPUs. I noticed that during training, GPU 0's utilization is 0% most of the time while the others are almost 100%, but their memory usage is the same. Is this normal? I use OpenPAI and have attached the utilization and memory usage below. Thanks.
Your CPU usage seems high; it could be that the CPU is the bottleneck here. Try fewer GPUs and then observe the GPU utilization.
MDEwOkRpc2N1c3Npb244MjI2MA==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/2701#discussioncomment-238229
Is it possible to call dist.all_reduce manually in train_step?
In my code, I would like to synchronize a tensor across all the gpus in train_step, which is a temporary variable. Is it allowed to call torch.distributed.all_reduce in this case? Or there is a specific function in pytorch_lightning that does the job?
Hey @sandylaker! You can use torch.distributed.all_reduce. There is also this function available from within the LightningModule (although it may be better to expose it in Lightning directly to make it easier to access): x = self.trainer.accelerator.training_type_plugin.reduce(x)
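A rough sketch of the plain torch.distributed route inside training_step; compute_loss_and_stat is a placeholder for your own computation:

import torch.distributed as dist


def training_step(self, batch, batch_idx):
    loss, stat = self.compute_loss_and_stat(batch)  # placeholder
    stat = stat.detach()
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(stat, op=dist.ReduceOp.SUM)
        stat = stat / dist.get_world_size()
    self.log("synced_stat", stat)
    return loss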
MDEwOkRpc2N1c3Npb24zMzgwMjMw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7693#discussioncomment-785393
Attribute is reset per batch in `dp` mode
❓ Questions and Help Before asking: search the issues. search the docs. What is your question? I don't know whether this is a bug... As shown in the code below, I think the behavior of dp mode is unexpected? (The attribute is reset every batch) When using ddp mode, everything is fine. (The property will be initialized only once per GPU) Code import os import torch from torch.nn import functional as F from torch.utils.data import DataLoader from torchvision.datasets import MNIST from torchvision import transforms import pytorch_lightning as pl from pytorch_lightning import Trainer from argparse import Namespace class LitModel(pl.LightningModule): def __init__(self): super().__init__() self.l1 = torch.nn.Linear(28 * 28, 10) self._dummy_property = None @property def dummy_propery(self): if self._dummy_property is None: self._dummy_property = '*' * 30 print('print only once per gpu') return self._dummy_property def forward(self, x): return torch.relu(self.l1(x.view(x.size(0), -1))) def training_step(self, batch, batch_idx): print(self._dummy_property) # Access every batch self.dummy_propery print(self._dummy_property) x, y = batch y_hat = self(x) loss = F.cross_entropy(y_hat, y) return pl.TrainResult(loss) def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=0.02) train_loader = DataLoader( MNIST( os.getcwd(), download=True, transform=transforms.ToTensor() ), batch_size=128 ) trainer = pl.Trainer(gpus=2, distributed_backend='dp', max_epochs=2) model = LitModel() trainer.fit(model, train_loader) Output None print only once per gpu ****************************** None print only once per gpu ****************************** None print only once per gpu ****************************** None print only once per gpu ****************************** ... What's your environment? Version 0.9.0
Oh, this is actually a known problem and comes from DataParallel in PyTorch itself. See #565 and #1649 for reference. @ananyahjha93 has been working on a workaround, but it seems to be super non-trivial (#1895).
MDEwOkRpc2N1c3Npb244MjI4OQ==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/3301#discussioncomment-238341
How to correctly apply metrics API in binary use case
How would one correctly apply the Precision metric from v1.2.0 on, with the revised metrics api? I am currently doing something like this: import torch from pytorch_lightning import metrics # example data preds = [0] * 200 + [1] * 30 + [0] * 10 + [1] * 20 targets = [0] * 200 + [1] * 30 + [1] * 10 + [0] * 20 preds = torch.tensor(preds) targets = torch.tensor(targets) # define method for printing metrics def _print_some_metrics(preds, targets, num_classes): precision = metrics.classification.Precision( num_classes=None, is_multiclass=False) recall = metrics.classification.Recall( num_classes=None, is_multiclass=False) f1 = metrics.classification.F1(num_classes=num_classes) f1beta = metrics.classification.FBeta( num_classes=num_classes, beta=2 ) accuracy = metrics.classification.Accuracy() avg_precision = metrics.classification.AveragePrecision( num_classes=None) confusion_matrix = metrics.ConfusionMatrix(num_classes=2) # print results print("Precision:\n{}\n".format(precision(preds, targets))) print("Recall:\n{}\n".format(recall(preds, targets))) print("F1:\n{}\n".format(f1(preds, targets))) print("F1-Beta:\n{}\n".format(f1beta(preds, targets))) print("AVG Precision:\n{}\n".format(avg_precision(preds, targets))) print("Accuracy:\n{}\n".format(accuracy(preds, targets))) print("ConfMat:\n{}\n".format(confusion_matrix(preds, targets))) _print_some_metrics(preds, targets, num_classes=2) Which gives me these results: Precision: 0.6000000238418579 Recall: 0.75 F1: 0.8846153616905212 F1-Beta: 0.8846153616905212 AVG Precision: 0.48846155405044556 Accuracy: 0.8846153616905212 ConfMat: tensor([[200., 20.], [ 10., 30.]]) However, when calculating precision by hand (TP / TP + FN) with the numbers from the contingency table, I get 30 / 50 = 0.6 Why does applying the precision class result in this (small) deviation? Further, when logging the metrics on epoch_end steps inside my model, I am not able to reproduce the logged precision, recall or accuraccy numbers on the validation set with the output from the contingency table, logged on the same steps (I haven't validated the other metrics yet by hand). It would be great to get some help on how to correctly apply the new metrics API for a binary use case.
for binary classification, where you are only interested in the positive class you should pass in num_classes=1. Here is your corrected code: def _print_some_metrics(preds, targets, num_classes): precision = metrics.classification.Precision( num_classes=num_classes, is_multiclass=False) recall = metrics.classification.Recall( num_classes=num_classes, is_multiclass=False) f1 = metrics.classification.F1(num_classes=num_classes) f1beta = metrics.classification.FBeta( num_classes=num_classes, beta=2 ) accuracy = metrics.classification.Accuracy() avg_precision = metrics.classification.AveragePrecision( num_classes=num_classes) confusion_matrix = metrics.ConfusionMatrix(num_classes=2) # print results print("Precision:\n{}\n".format(precision(preds, targets))) print("Recall:\n{}\n".format(recall(preds, targets))) print("F1:\n{}\n".format(f1(preds, targets))) print("F1-Beta:\n{}\n".format(f1beta(preds, targets))) print("AVG Precision:\n{}\n".format(avg_precision(preds, targets))) print("Accuracy:\n{}\n".format(accuracy(preds, targets))) print("ConfMat:\n{}\n".format(confusion_matrix(preds, targets))) _print_some_metrics(preds, targets, num_classes=1) only confusion_matrix need to be set num_classes=2 because you want the statistics for both classes.
MDEwOkRpc2N1c3Npb24zMjUwMzc1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6356#discussioncomment-433217
ddp: how to combine multi-gpus outputs like "training_step_end" which is only used in dp/ddp2?
My question is like title. Thank you!
Check out the example here: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#training-with-dataparallel
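For DDP specifically there is no automatic gather, so one alternative (not the DP pattern from the docs) is to gather manually with self.all_gather in an epoch-end hook; a rough sketch, assuming validation_step returns a tensor of predictions:

import torch


def validation_step(self, batch, batch_idx):
    return self(batch)  # per-GPU predictions


def validation_epoch_end(self, outputs):
    preds = torch.cat(outputs)          # predictions from this process
    gathered = self.all_gather(preds)   # shape (world_size, N, ...) on every rank
    if self.trainer.is_global_zero:
        all_preds = gathered.reshape(-1, *gathered.shape[2:])
        self.print("combined predictions:", all_preds.shape)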
D_kwDOCqWgoM4AOXqQ
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11265#discussioncomment-1876035
Gradient accumulation + DeepSpeed LR scheduler
How does gradient accumulation interact with DeepSpeed learning rate scheduling (e.g. the per-step warm-up scheduler)? Is the learning rate updated after every iteration, or only after the model weights are ultimately updated?
it considers the accumulation before doing lr_scheduler_step: pytorch-lightning/pytorch_lightning/loops/epoch/training_epoch_loop.py Lines 387 to 390 in 86b177e def update_lr_schedulers(self, interval: str, update_plateau_schedulers: bool) -> None: """updates the lr schedulers based on the given interval.""" if interval == "step" and self._should_accumulate(): return
D_kwDOCqWgoM4AOrGl
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11686#discussioncomment-2087865
Help understanding data module error
Hi there, I am trying to implement a data module however I keep getting an error that I cannot understand. I normally setup my data module as: class dataModule(pl.LightningDataModule): def __init__(self, batch_size, csv_file, data_dir): super().__init__() self.csv_file = csv_file self.data_dir = data_dir self.batch_size = batch_size self.preprocess = None self.transform = None self.train_set = None self.val_set = None self.test_set = None def get_augmentation_transform(self): augment = tio.Compose([ tio.RandomAffine(), tio.RandomFlip(p = 0.25), tio.RandomGamma(p=0.25), tio.RandomNoise(p=0.25), tio.RandomMotion(p=0.1), tio.RandomBiasField(p=0.25), ]) return augment def setup(self, stage=None): subjList = fmriDataset(csv_file = self.csv_file, root_dir = self.data_dir) train_size, val_size = int(0.7 * len(subjList)), int(0.2 * len(subjList)) test_size = len(subjList) - train_size - val_size if stage == 'fit' or stage is None: self.train_dataset, self.val_dataset, _ = torch.utils.data.random_split(subjList, [train_size, val_size, test_size]) if stage == 'test' or stage is None: _, _, self.test_dataset = torch.utils.data.random_split(subjList, [train_size, val_size, test_size]) augment = self.get_augmentation_transform() self.train_set = tio.SubjectsDataset(self.train_dataset, transform=augment) self.val_set = tio.SubjectsDataset(self.val_dataset, transform=None) self.test_set = tio.SubjectsDataset(self.test_dataset, transform=None) def train_dataloader(self): return DataLoader(self.train_set, self.batch_size, shuffle=True, num_workers=27) def val_dataloader(self): return DataLoader(self.val_set, self.batch_size, num_workers=27) def test_dataloader(self): return DataLoader(self.test_set, self.batch_size, num_workers=27) However, when I call trainer.tune(model = model, datamodule = data) I get the error: AttributeError: 'dataModule' object has no attribute 'test_dataset' However, if I change these lines if stage == 'fit' or stage is None: self.train_dataset, self.val_dataset, _ = torch.utils.data.random_split(subjList, [train_size, val_size, test_size]) if stage == 'test' or stage is None: _, _, self.test_dataset = torch.utils.data.random_split(subjList, [train_size, val_size, test_size]) to a single line: self.train_dataset, self.val_dataset, self.test_dataset = torch.utils.data.random_split(subjList, [train_size, val_size, test_size]) everything works. Is there something basic that I have missed? For completeness: Pytorch version: 1.9.0 Pytorch-lightning version: 1.3.7 And I initialise the data module/model/trainer with: data = dataModule(data_dir = '/home/data/', csv_dir = '/home/scanList.csv', batch_size = 24) model = cnnRnnClassifier() early_stop_callback = Earlystopping( monitor = 'val_loss', min_delta = 1e-4, patience = 10, Verbose = True, mode = 'min') trainer = Trainer( gpus = 1, fast_dev_run = False, max_epochs = 100, weights_summary = 'full', callbacks = [early_stop_callback], auto_lr_find = True, precision = 16) trainer.tune(model = model, datamodule = data) trainer.fit(model = model, datamodule = data) trainer.test(model = model, datamodule = data) Thanks in advance for your help!
The problem is the combination of this line: if stage == 'test' or stage is None: _, _, self.test_dataset = torch.utils.data.random_split(subjList, [train_size, val_size, test_size]) and this line: self.test_set = tio.SubjectsDataset(self.test_dataset, transform=None) as you can see, self.test_dataset is only defined if the condition above applies. With this hint you should be able to figure it out now. Let me know :)
MDEwOkRpc2N1c3Npb24zNDQ2Njgz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8298#discussioncomment-966766
How to reinit wanbd in a for loop with PL Trainer
I am training 5-fold CV with PyTorch Lightning in a for loop. I am also logging all the results to wandb. I want wanbd to reinitalize the run after each fold, but it seems to continue with the same run and it logs all the results to the same run. I also tried passing kwargs in the WandbLogger as mentioned in the docs here, with no luck. Here's a pseudo code of it: def run(fold): kwargs = { "reinit": True, "group": f"{CFG['exp_name']}" } wandb_logger = WandbLogger(project='<name>', entity='<entity>', config = CFG, name=f"fold_{fold}", **kwargs ) trainer = Trainer( precision=16, gpus=1, fast_dev_run=False, callbacks = [checkpoint_callback], logger=wandb_logger, progress_bar_refresh_rate=1, max_epochs=2, log_every_n_steps=1 ) trainer.fit( lit_model, data_module ) if __name__ == "__main__": for fold in range(5): run(fold)
@Gladiator07 you could try calling wandb.finish() at the end of every run. This should close the wandb process; a new one will be started when you launch the next run.
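A rough sketch of where the call would go, based on the pseudo code from the question:

import wandb

if __name__ == "__main__":
    for fold in range(5):
        run(fold)
        wandb.finish()  # close the current run so the next fold starts a fresh one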
MDEwOkRpc2N1c3Npb24zNDgwNTAw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8572#discussioncomment-1627086
load checkpoint model error
this is my model __init__ func def __init__(self, num_classes: int, image_channels: int = 3, drop_rate: int = 0.5, filter_config: tuple = (64, 128, 256, 512, 512), attention=False): this is my load code: m = SegNet(num_classes=1) model = m.load_from_checkpoint('checkpoints/epoch=99-step=312499.ckpt') when i try to load a checkpoint model, i got this error: TypeError: __init__() missing 1 required positional argument: 'num_classes' who can help me?
Dear @morestart, Great question ! You should use save_hyperparameters function, so Lightning can save your init arguments inside the checkpoint for future reload. Here is the associated doc: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html?highlight=save_hyperparameters#save-hyperparameters class SegNet(LightningModule) def __init__( self, num_classes: int, image_channels: int = 3, drop_rate: int = 0.5, filter_config: tuple = (64, 128, 256, 512, 512), attention=False ): super().__init__() self.save_hyperparameters() .... Best, T.C
MDEwOkRpc2N1c3Npb24zNDIxMTQ4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8050#discussioncomment-896565
How to Log Metrics (eg. Validation Loss, Accuracy) To TensorBoard Hparams?
I am using PyTorch Lightning 1.2.6 to train my models with DDP, and TensorBoard is the default logger used by Lightning. My code is set up to log the training and validation loss on each training and validation step respectively. class MyLightningModel(pl.LightningModule): def training_step(self, batch): x, labels = batch out = self(x) loss = F.mse_loss(out, labels) self.log("train_loss", loss) return loss def validation_step(self, batch): x, labels = batch out = self(x) loss = F.mse_loss(out, labels) self.log("val_loss", loss) return loss TensorBoard correctly plots both the train_loss and val_loss charts in the SCALARS tab. However, in the HPARAMS tab, on the left sidebar, only hp_metric is visible under Metrics. How can we add train_loss and val_loss to the Metrics section? This way, we will be able to use val_loss in the PARALLEL COORDINATES VIEW instead of hp_metric. (Image showing hp_metric and no val_loss.) Using PyTorch 1.8.1, PyTorch Lightning 1.2.6, TensorBoard 2.4.1.
I think it is explained very well in this section of the documentation: https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#logging-hyperparameters Basically, you just need to overwrite the hp_metric tag with whatever value you want to show up in the HPARAMS tab in tensorboard.
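A sketch of the documented pattern for registering several metrics instead of the single hp_metric; the initial values of 0 are just placeholders that get overwritten by later self.log calls with the same names:

import pytorch_lightning as pl
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# disable the automatic hp_metric and register the metrics you care about instead
logger = TensorBoardLogger("lightning_logs", default_hp_metric=False)
trainer = Trainer(logger=logger)


class MyLightningModel(pl.LightningModule):
    def on_train_start(self):
        self.logger.log_hyperparams(self.hparams, {"train_loss": 0, "val_loss": 0})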
MDEwOkRpc2N1c3Npb24zMzEyMzgz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6904#discussioncomment-588486
Earlystopping callback metrics in different devices with single gpu
The device of the metric return by validation_step is GPU, related code is def validation_step(self, batch, batch_idx): x, y = batch if y.device != self.device: y = y.to(self.device) y_hat = self(x) loss = self.loss(y_hat, y) # loss.device is cuda. self.log('valid loss', loss.item()) return loss After an epoch of validation compeleted when using earlystopping, follow error occured: File "E:\Python\Python37\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 871, in run_train self.train_loop.run_training_epoch() File "E:\Python\Python37\lib\site-packages\pytorch_lightning\trainer\training_loop.py", line 584, in run_training_epoch self.trainer.run_evaluation(on_epoch=True) File "E:\Python\Python37\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1011, in run_evaluation self.evaluation_loop.on_evaluation_end() File "E:\Python\Python37\lib\site-packages\pytorch_lightning\trainer\evaluation_loop.py", line 102, in on_evaluation_end self.trainer.call_hook('on_validation_end', *args, **kwargs) File "E:\Python\Python37\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1228, in call_hook trainer_hook(*args, **kwargs) File "E:\Python\Python37\lib\site-packages\pytorch_lightning\trainer\callback_hook.py", line 227, in on_validation_end callback.on_validation_end(self, self.lightning_module) File "E:\Python\Python37\lib\site-packages\pytorch_lightning\callbacks\early_stopping.py", line 173, in on_validation_end self._run_early_stopping_check(trainer) File "E:\Python\Python37\lib\site-packages\pytorch_lightning\callbacks\early_stopping.py", line 193, in _run_early_stopping_check should_stop, reason = self._evalute_stopping_criteria(current, trainer) File "E:\Python\Python37\lib\site-packages\pytorch_lightning\callbacks\early_stopping.py", line 226, in _evalute_stopping_criteria elif self.monitor_op(current - self.min_delta, self.best_score.to(trainer.lightning_module.device)): RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! This error didn't appear until I updated the version of pytorch_lightning.
Looking into it in #8295
MDEwOkRpc2N1c3Npb24zNDQzODk2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8267#discussioncomment-966632
How to use predict function to return predictions
How to get predictions class OurModel(LightningModule): def __init__(self): super(OurModel,self).__init__() self.layer = MyModelV3() def forward(self,x): return self.layer(x) def train_dataloader(self): return DataLoader(DataReader(train_df)) def training_step(self,batch,batch_idx): return loss def test_dataloader(self): return DataLoader(DataReader(test_df)) def test_step(self,batch,batch_idx): image,label=batch out=self(image) loss=self.criterion(out,label) return loss def predict(self, batch): return self(batch) I am not sure, how to use predict function. How to define data loader for predict function. I want to get predictions for test_df. But now idea how to do this.
Hi @talhaanwarch, in order to get predictions from a data loader you need to implement predict_step in your LightningModule (docs here: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#predict-step). You can then call Trainer.predict with the dataloader you want to use, following the API here: https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.trainer.trainer.html#pytorch_lightning.trainer.trainer.Trainer.predict Hope that helps 😃
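A rough sketch of what that looks like for the module in the question (names like DataReader and test_df follow the question's code; the batch layout is assumed to match test_step):

from torch.utils.data import DataLoader


class OurModel(LightningModule):
    # ... existing methods from the question ...

    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        image, label = batch
        return self(image)


# returns a list with one tensor of model outputs per batch
predictions = trainer.predict(model, dataloaders=DataLoader(DataReader(test_df)))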
MDEwOkRpc2N1c3Npb24zNDE5NjM3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8038#discussioncomment-892907
Why accumulate_grad_batches cannot be used with manual optimization?
I've stumbled upon the problem of not being able to use accumulate_grad_batches argument in the Trainer as I was doing manual optimization in my LightningModule to use adversarial loss functions. However, I think it would be possible to implement something that would "store" calls to the step method for the module's optimizers and actually apply them once every accumulate_grad_batches iterations. I've seen several related issues about similar behavours when overriding optimizer_step or close to my use case (#5054, #5108). The proposed fixes always leave some manual get-arounds in the final code. My question: is there a reason for such incompatibility of accumulate_grad_batches with manual optimization ? One reason might be the need to step different optimizers at different paces (one every batch, another every n batches ...) but this seems to be an extreme use case.
Hey @NathanGodey, manual optimization was built to give the user full control over the optimization while still abstracting distributed training and precision. There is no way Lightning can properly automate accumulate_grad_batches for all possible manual-optimization use cases, so it isn't supported. However, you can easily implement it yourself by only calling zero_grad and step every n batches.
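A rough sketch of manual accumulation for the adversarial setup; compute_losses is a placeholder and is assumed to return two losses built on separate graphs:

import pytorch_lightning as pl


class AdversarialModel(pl.LightningModule):
    def __init__(self, accumulate_grad_batches: int = 4):
        super().__init__()
        self.automatic_optimization = False
        self.accumulate = accumulate_grad_batches

    def training_step(self, batch, batch_idx):
        opt_g, opt_d = self.optimizers()
        loss_g, loss_d = self.compute_losses(batch)  # placeholder

        # scale so the accumulated gradient matches a single large-batch gradient
        self.manual_backward(loss_g / self.accumulate)
        self.manual_backward(loss_d / self.accumulate)

        if (batch_idx + 1) % self.accumulate == 0:
            opt_g.step()
            opt_d.step()
            opt_g.zero_grad()
            opt_d.zero_grad()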
D_kwDOCqWgoM4AOOv9
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10998#discussioncomment-1784339
training_epoch_end only returning the last train batch
I have a pytorch lightning module that includes the following section: `def training_step(self, batch, batch_idx): losses, tensors = self.shared_step(batch) return losses def validation_step(self, batch, batch_idx): losses, tensors = self.shared_step(batch) return losses, tensors def training_epoch_end(self, losses): my_function(losses) def validation_epoch_end(self, outputs): my_function2(outputs)` The losses variable is a dictionary that includes the key "loss". The input to validation_epoch_end() is a list of (losses, tensors) tuples from each of the batches as expected. The input to training_epoch_end() is a list of the correct length (the number of batches), but every element is the same losses dictionary from the final batch. How can I input the losses from every train batch into the training_epoch_end() method (like the inputs to validation_epoch_end())?
This issue was reported in #8603 - are you able to try Lightning 1.4.1, which contains the fix? And for broader discussion on these hooks, and alternatives you have to access the per-step outputs, see #8731
MDEwOkRpc2N1c3Npb24zNTA0NjA3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8757#discussioncomment-1136286
Hook for Fully Formed Checkpoints
I want to create a hook that uploads checkpoints to cloud storage (e.g. AWS, Azure). I tried using the on_save_checkpoint hook as follows: def on_save_checkpoint(self, trainer: pl.Trainer, pl_module: pl.LightningModule, checkpoint: Dict[str, Any]) -> dict: checkpoint_bytes = io.BytesIO() torch.save(checkpoint, checkpoint_bytes) # Upload the BytesIO... However, states for optimizers, learning rate schedulers, etc. are added to the checkpoint dict after on_save_checkpoint is called. Is there an elegant way to create a hook that receives fully formed checkpoints? edit: sorry, this is a duplicate -- GitHub was giving me errors when I posted
See #11704.
D_kwDOCqWgoM4AOsH5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11705#discussioncomment-2100641
Model inference, but self.training stays True
i use self.training param to judge what data return. return x if self.training else (torch.cat(z, 1), x) but when i load my model, i use debug mode find that the self.training is save True. self.model = CustomModel.load_from_checkpoint(model_path) self.model.training = False i use above code change model.training status, but its not work this is my inference full code: class CustomModelInference: def __init__( self, model_path: str, conf_thres: float = 0.25, iou_thres: float = 0.45, max_det: int = 1000, device: str = 'cuda:0', need_classes: list | None = None ): self.conf_thres = conf_thres self.iou_thres = iou_thres self.max_det = max_det self.device = device self.need_classes = need_classes self.model = CustomModel.load_from_checkpoint(model_path) self.model.training = False self.model.to(device) self.stride = int(self.model.stride.max()) self.names = self.model.names self.imgsz = self.model.imgsz @torch.no_grad() def infer(self, img: np.ndarray): imgsz = check_img_size(self.imgsz, s=self.stride) cudnn.benchmark = True img = letterbox(img, imgsz, stride=self.stride, auto=True)[0] # img = np.stack(img, 0) if len(img.shape) == 3: img = img[None] img = img[..., ::-1].transpose((0, 3, 1, 2)) img = np.ascontiguousarray(img) img = torch.from_numpy(img).to(self.device) img = img.float() img = img / 255.0 out, train_out = self.model(img)
Fine, I found the answer: I have to set model.eval().
D_kwDOCqWgoM4AOZMw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11285#discussioncomment-1890115
LightningCLI - instantiate model from config
So lets say i have a model and i'm using the newest CLI API to train it: The config uses sub modules and can look something like: # config.yaml model: class_path: pl_models_2.ModelPL init_args: margin: 0.3 basemodel: class_path: pl_models_2.ModelBackbone init_args: base_model: resnet50 pooling: both data: batch_size: 32 image_size: 224 augmentation_strategy: medium2 Now i want to run some scripts or notebooks using the model i trained with this config and i would like to instantiate the model using this config file. f/e. model = load_model(config.yaml) I was digging how this happens in LightningCLI and jsonargsparse but gave up after a while.. and started trying to hack this around with importlib, which works but feels like reinventing the wheel - i mean this logic has to be somewhere already :) i just cant find it.
If you want to use that exact config (not stripping out everything except the model) you can do the following: from jsonargparse import ArgumentParser parser = ArgumentParser() parser.add_argument('--model', type=ModelClass) parser.add_argument('--data', type=dict) # to ignore data config = parser.parse_path('config.yaml') config_init = parser.instantiate_classes(config) The instantiated model will be in config_init.model.
D_kwDOCqWgoM4AN_cS
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10363#discussioncomment-1595769
what's the difference between `load_from_checkpoint ` and `resume_from_checkpoint`
I'm confused about two APIs: Module.load_from_checkpoint and trainer.resume_from_checkpoint.
resume_from_checkpoint is used to resume training from the checkpointed state_dicts. In the general case it reloads the model's state_dict, the optimizer's and schedulers' state_dicts, and the training state. Use case: restarting the training. load_from_checkpoint just reloads the model's state_dict and returns the model with the loaded weights. Use case: quick evaluation/prediction.
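A quick usage sketch of both (paths and MyLightningModule are placeholders; in newer versions resume_from_checkpoint is replaced by trainer.fit(ckpt_path=...)):

from pytorch_lightning import Trainer

# resume the full training state (model, optimizers, schedulers, progress)
trainer = Trainer(resume_from_checkpoint="path/to/last.ckpt")
trainer.fit(model)

# load only the weights, e.g. for evaluation or prediction
model = MyLightningModule.load_from_checkpoint("path/to/best.ckpt")
model.eval()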
D_kwDOCqWgoM4AOdFs
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11378#discussioncomment-1936061
What is the best practice to share a massive CPU tensor over multiple processes in pytorch-lightning DDP mode (read-only + single machine)?
Hi everyone, I wonder what is the best practice to share a massive CPU tensor over multiple processes in pytorch-lightning DDP mode (read-only + single machine)? I think torch.Storage.from_file with share=True may suit my needs, but I can’t find a way to save storage and read it as a tensor. (see here for details) I also tried to copy training data to /dev/shm (reference) and run DDP with 8 GPUs, but nothing is different. The memory usage when running with 8 GPUs is the same as before, but I tested with a single process, loading the dataset may occupy more than 1 GB of memory. Am I missing something here? For torch.shared_memory, how should I pass the same reference to all processes in pytorch-lightning pure DDP mode? Thank you.
I found that torch.Storage.from_file suits my needs and it can reduce the memory usage in my Lightning DDP program. For the way to create a storage file, see here.
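For reference, a heavily simplified sketch of the idea (filename, element count and shapes are made up; it assumes the raw float32 data has already been written to the file in the right order):

import torch

nelements = 7000 * 512 * 512  # assumed number of float32 values in the file
storage = torch.FloatStorage.from_file("shared_data.bin", shared=True, size=nelements)
data = torch.FloatTensor(storage).view(7000, 1, 512, 512)
# every DDP process that maps the same file shares one copy of the physical memory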
MDEwOkRpc2N1c3Npb24zNDg4Mjgx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8611#discussioncomment-1129295
Training/Validation split in minimal example
What is your question? I think it's unclear how the training data is split into a training and a validation split in the minimal example (https://williamfalcon.github.io/pytorch-lightning/LightningModule/RequiredTrainerInterface/#minimal-example). Does this example use all training data for both training and validation? As far as I'm aware, this is bad practice. Is there some magic background process which compares the training and validation data loaders and does the splitting? I skimmed through the code and couldn't find anything.
"Is there some magic background process which compares the training and validation data loaders and does splitting? I skimmed through the code and couldn't find anything." No, I don't think this is happening. The dataset in the minimal example is the MNIST dataset, which only has two splits (train and test). In this example, the validation set is the same as the training set; if we wanted to split it, we would have to use something like torch.utils.data.random_split.
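A short sketch of that split on MNIST (the 55k/5k split is just a common choice):

import os
from torch.utils.data import random_split
from torchvision import transforms
from torchvision.datasets import MNIST

mnist_full = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
mnist_train, mnist_val = random_split(mnist_full, [55000, 5000])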
MDEwOkRpc2N1c3Npb24yNzkyNTAz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5815#discussioncomment-339795
How to set experiment name such that it can be some unique name instead of version_0, ... etc.
I'm currently running a lot of experiments, and in order to track all of them in TensorBoard I have to rename each experiment folder by hand (e.g. lightning_logs/version_0 -> lightning_logs/{unique_informative_exp_name}). It would be better if I could pass the name as an argument to Trainer. I searched the documentation thoroughly but couldn't find anything related to experiment names (I only found the default_root_dir argument, which is not quite what I want). The question is as follows: is there a way to set the experiment name programmatically? I'm using the default logger.
You can pass Trainer a custom logger with the version specified. from pytorch_lightning.loggers import TensorBoardLogger logger = TensorBoardLogger("default_root_dir", version="your_version", name="my_model") trainer = Trainer(logger=logger) Here is the api of TensorBoardLogger
MDEwOkRpc2N1c3Npb24zNTQ0NzQ3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9185#discussioncomment-1249610
Running Average of my accuracy, losses etc.
What is your question? I want my tqdm logger to show me a history of my training on the terminal. Right now, when a epoch ends, all data for it is scrubbed from the command line and the new epoch data is shown. Also I want to see the running accuracy of my network and a running average of my loss on the tqdm bar. How should I go on about doing that ? What have you tried? I have looked at the docs and logging but am unable to figure out how to modify the tqdm logger, more so maintain a running average What's your environment? conda version: latest PyTorch version : 1.3 Lightning version : pip install pytorch-lightning at the date of this issue Test-tube version: came boot strapped with lightning. I installed everything on the date of this issue.
Use TensorBoard! For running averages you have to implement the logic in the training step yourself.
MDEwOkRpc2N1c3Npb24yNzkyNTM0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5819#discussioncomment-339807
How to collect batched predictions?
Hello :) Currently I use trainer.predict(model=..., dataloaders=...), which returns the results of predict_step(...) in a list where each element corresponds to one batch passed to the predict_step function I already implemented. I am looking for a predict_epoch_end kind of function to collect the batched predictions into one data structure, but I only found the possibility to define a callback on_prediction_epoch_end(), which cannot return anything. How should one best proceed to collect the batched predictions? E.g. each predict_step() returns a tensor of shape batch_size x 10 and the dataloader gives 5 batches to predict. Then trainer.predict() returns a list of length 5 with each element being a tensor of shape batch_size x 10, whereas I would rather receive one tensor of shape 5*batch_size x 10. This is just a simple example to illustrate my problem. My actual return per prediction step is a bit more involved, and I would like to clean up the structure before returning it and make this cleanup logic part of the module so that I don't have to remember the specifics. Is there a way to do that in the current framework, and if so, how? Thanks in advance for any help!
Issue to track #9380
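In the meantime, the per-batch list from the question can be collapsed manually after trainer.predict returns (model and loader are placeholders):

import torch

batched = trainer.predict(model=model, dataloaders=loader)  # list of (batch_size, 10) tensors
all_preds = torch.cat(batched, dim=0)                        # (num_samples, 10)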
MDEwOkRpc2N1c3Npb24zNTYyMjU2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9379#discussioncomment-1298537
How save deepspeed stage 3 model with pickle or torch
Hi, I'm trying to save a model trained using deepspeed stage 2 using this code: trainer = pl.Trainer( gpus=4, plugins=DeepSpeedPlugin( stage=3, cpu_offload=True, partition_activations=True,), precision=16, accelerator="ddp", ) trainer.fit(model, train_dataloader) With stage 2 it worked if I added this code: trainer = pl.Trainer(gpus=0,max_epochs=0,) trainer.fit(model, train_dataloader) pickle.dump(model,open("model.p","wb") But using stage=3 I get this error: Traceback (most recent call last): File "t5-11b-regression.py", line 227, in torch.save(model,fileName) File "/home/ec2-user/.local/lib/python3.7/site-packages/torch/serialization.py", line 379, in save _save(obj, opened_zipfile, pickle_module, pickle_protocol) File "/home/ec2-user/.local/lib/python3.7/site-packages/torch/serialization.py", line 484, in _save pickler.dump(obj) AttributeError: Can't pickle local object 'FP16_DeepSpeedZeroOptimizer_Stage3._register_hooks_recursively.._post_forward_module_hook' I also tried saving using torch.save, but got same error. I also tried both pytorch-lightning version 1.3.8 and 1.4.1 cc: @SeanNaren
After some debugging with a user, I've come up with a final script to show how you can use the convert_zero_checkpoint_to_fp32_state_dict to generate a single file that can be loaded using pickle, or lightning. import os import torch from torch.utils.data import DataLoader, Dataset from pytorch_lightning import LightningModule, Trainer from pytorch_lightning.callbacks import ModelCheckpoint from pytorch_lightning.plugins import DeepSpeedPlugin from pytorch_lightning.utilities.deepspeed import convert_zero_checkpoint_to_fp32_state_dict class RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class BoringModel(LightningModule): def __init__(self): super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def training_step(self, batch, batch_idx): loss = self(batch).sum() self.log("train_loss", loss) return {"loss": loss} def validation_step(self, batch, batch_idx): loss = self(batch).sum() self.log("valid_loss", loss) def test_step(self, batch, batch_idx): loss = self(batch).sum() self.log("test_loss", loss) def configure_optimizers(self): return torch.optim.SGD(self.layer.parameters(), lr=0.1) if __name__ == "__main__": train_data = DataLoader(RandomDataset(32, 64), batch_size=2) val_data = DataLoader(RandomDataset(32, 64), batch_size=2) test_data = DataLoader(RandomDataset(32, 64), batch_size=2) model = BoringModel() trainer = Trainer( default_root_dir=os.getcwd(), limit_train_batches=1, limit_val_batches=1, limit_test_batches=1, num_sanity_val_steps=0, max_epochs=1, enable_model_summary=False, strategy=DeepSpeedPlugin(stage=2), precision=16, gpus=2, callbacks=ModelCheckpoint(dirpath='checkpoints', save_last=True) ) trainer.fit(model, train_dataloaders=train_data, val_dataloaders=val_data) # once saved via the model checkpoint callback, # it saves a folder containing the deepspeed checkpoint rather than a single file checkpoint_path = "checkpoints/last.ckpt/" if trainer.is_global_zero: single_ckpt_path = "single_model.pt" # magically converts the folder into a single lightning loadable pytorch file (for ZeRO 1,2 and 3) convert_zero_checkpoint_to_fp32_state_dict(checkpoint_path, single_ckpt_path) loaded_parameters = BoringModel.load_from_checkpoint(single_ckpt_path).parameters() model = model.cpu() # Assert model parameters are identical after loading for orig_param, saved_model_param in zip(model.parameters(), loaded_parameters): if model.dtype == torch.half: # moved model to float32 for comparison with single fp32 saved weights saved_model_param = saved_model_param.half() assert torch.equal(orig_param, saved_model_param) The above where we use the Trainer as an engine still works, but now you'd need to pass the checkpoint path like so trainer.predict(ckpt_path=..., ...)
MDEwOkRpc2N1c3Npb24zNTIxMjUy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8910#discussioncomment-1834003
GPU usage does not remain high for lightweight models when loading CIFAR-10 as a custom dataset
I am experimenting with the following repository. Keiku/PyTorch-Lightning-CIFAR10: "Not too complicated" training code for CIFAR-10 by PyTorch Lightning I have implemented two methods, one is to load CIFAR-10 from torchvision and the other is to load CIFAR-10 as a custom dataset. Also, I have implemented two models: a lightweight model (eg scratch resnet18, timm MobileNet V3, etc.) and a relatively heavy model (eg scratch resnet50, timm resnet152). After some experiments, I found the following. GPU usage remains high (nearly 100%) on any model when loading CIFAR-10 with torchvision When loading CIFAR-10 as a custom dataset, GPU usage remains relatively high (still temporarily zero) for heavy models When loading CIFAR-10 as a custom dataset, GPU usage remains low (going back and forth between 0% and 100%) for lightweight models (resnet18, MobileNetV3) In this situation, is there a problem with the implementation code of the custom dataset? Also, please let me know if there is a way to increase GPU usage even for lightweight models. I am experimenting in the following EC2 g4dn.xlarge environment. ⋊> ~ lsb_release -a (base) 21:45:51 No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.04.5 LTS Release: 18.04 Codename: bionic ⋊> ~ nvidia-container-cli info (base) 21:48:20 NVRM version: 450.80.02 CUDA version: 11.0 Device Index: 0 Device Minor: 0 Model: Tesla T4 Brand: Tesla GPU UUID: GPU-ba54be15-066e-e7e5-87d0-84b8ac2672c6 Bus Location: 00000000:00:1e.0 Architecture: 7.5
I got a reply from ptrblck: https://discuss.pytorch.org/t/gpu-usage-does-not-remain-high-for-lightweight-models-when-loaded-cifar-10-as-a-custom-dataset/125738
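For reference, a common first step in this situation (not from the linked thread, just a hedged sketch) is to make sure the custom-dataset DataLoader keeps the GPU fed. The dataset below is a random stand-in for the custom CIFAR-10 dataset, and the worker count is an assumption for a g4dn.xlarge (4 vCPUs):
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the custom CIFAR-10 dataset; replace with your own Dataset class.
train_dataset = TensorDataset(torch.randn(50000, 3, 32, 32), torch.randint(0, 10, (50000,)))

train_loader = DataLoader(
    train_dataset,
    batch_size=256,
    shuffle=True,
    num_workers=4,            # roughly the number of available CPU cores
    pin_memory=True,          # speeds up host-to-GPU copies
    persistent_workers=True,  # keep workers alive across epochs (PyTorch >= 1.7)
)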
MDEwOkRpc2N1c3Npb24zNDQ1MDEz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8274#discussioncomment-964122
how to load dataset only once on the same machine?
My dataset is large, with total CPU memory usage of 20 GB. I train on 2 nodes with 8 GPU. And I use slurm to train it. But I found that each process will consume 20 GB memory, which is equivelence to 80 GB each node. That's not what I want. I want a node to consume only 20GB in total. Is there a way to do that? class DataModule(LightningDataModule): def __init__(self): super().__init__() self.batch_size = 1 self.CT_dataset=np.load("./CT_dataset.npy")#shape:(7000,1,512,512) self.MR_dataset=np.load("./MR_dataset.npy")#shape:(7000,1,512,512) self.batch_size=1 self.CT_dataset = torch.from_numpy(self.CT_dataset) self.CT_dataset = self.CT_dataset.float() self.MR_dataset = torch.from_numpy(self.MR_dataset) self.MR_dataset = self.MR_dataset.float() self.train_dataset, self.test_dataset = random_split(TensorDataset(self.MR_dataset,self.CT_dataset), [len(self.CT_dataset)-100, 100]) def train_dataloader(self): return DataLoader(self.train_dataset, batch_size=self.batch_size) def test_dataloader(self): return DataLoader(self.test_dataset, batch_size=self.batch_size) model = CycleGAN() ds = DataModule() logger = TensorBoardLogger(save_dir="./run") trainer = pl.Trainer(max_epochs=1,fast_dev_run=False,profiler="pytorch",overfit_batches=8,gpus=4,logger=logger,accelerator='ddp',num_nodes=2,auto_scale_batch_size='power',weights_summary='full') trainer.fit(model, ds) trainer.test(model,datamodule=ds) My code will raise MemoryError: Unable to allocate 7.00 GiB for an array with shape (1879572480,) and data type int32, I can't think of a way to solve it.
Since your data is in one single binary file, it won't be possible to reduce the memory footprint. Each DDP process is independent of the others; there is no shared memory. You will have to save each dataset sample individually, so that each process can access a subset of these samples through the dataloader and sampler.
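A minimal sketch of what the answer suggests, assuming the two .npy files from the question; the ./samples layout and the class name are made up for illustration:
import os
import numpy as np
import torch
from torch.utils.data import Dataset

# One-time preprocessing step: split the large arrays into per-sample files.
os.makedirs("./samples", exist_ok=True)
ct = np.load("./CT_dataset.npy")  # shape: (7000, 1, 512, 512)
mr = np.load("./MR_dataset.npy")
for i in range(len(ct)):
    np.save(f"./samples/ct_{i}.npy", ct[i])
    np.save(f"./samples/mr_{i}.npy", mr[i])

class LazyPairDataset(Dataset):
    """Loads one (MR, CT) pair from disk per item, so no DDP process holds the full arrays in RAM."""
    def __init__(self, num_samples, root="./samples"):
        self.num_samples = num_samples
        self.root = root

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        mr = torch.from_numpy(np.load(f"{self.root}/mr_{idx}.npy")).float()
        ct = torch.from_numpy(np.load(f"{self.root}/ct_{idx}.npy")).float()
        return mr, ct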
MDEwOkRpc2N1c3Npb24zNDI3OTg1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8112#discussioncomment-914252
Update Adam learning rate after 10 epochs
Hello, how can I update the learning rate of the Adam optimizer after 10 epochs? My code is like this: self.lr_decay_epoch = [15,] if epoch in self.lr_decay_epoch: self.lr = self.lr * 0.1 self.optimizer = Adam(filter(lambda p: p.requires_grad, self.net.parameters()), lr=self.lr, weight_decay=self.wd)
You can use LambdaLR, where the lambda function returns the multiplicative factor applied to the base learning rate, for example: lambda epoch: 0.1 if epoch in self.lr_decay_epoch else 1.0
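If the learning rate should stay decayed for every epoch after the milestone, MultiStepLR is a common alternative to the LambdaLR above. A minimal Lightning sketch, assuming self.net, self.lr and self.wd are defined on your LightningModule:
import torch

def configure_optimizers(self):
    optimizer = torch.optim.Adam(
        filter(lambda p: p.requires_grad, self.net.parameters()),
        lr=self.lr, weight_decay=self.wd,
    )
    # Multiplies the learning rate by 0.1 once epoch 15 is reached and keeps it there.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[15], gamma=0.1)
    return {"optimizer": optimizer, "lr_scheduler": scheduler}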
D_kwDOCqWgoM4ANx-L
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9853#discussioncomment-1442799
ModelCheckpoint creating unexpected subfolders
Hi! I have found a weird behavior when using ModelCheckpoint, if I have a metric that I want to save in my filename and it has a "/" on it it will create nested directories. For example checkpoint_callback = ModelCheckpoint( monitor='val/acc', dirpath=checkpoints_dir, filename='checkpoint_{epoch:02d}-{val/acc}', save_top_k=-1, ) This one will create one extra folder per checkpoint: checkpoints/base_lstm/checkpoint_epoch=00-val/acc=0.04-v1.ckp checkpoints/base_lstm/checkpoint_epoch=00-val/acc=0.05-v1.ckp Is there any way to make the modelcheckpoint callback store the "val/acc" value while not using the string "val/acc" to reference it? Something like: checkpoints/base_lstm/checkpoint_epoch=00-valAcc=0.04-v1.ckp I think is quite standard to use "/" on tensorboard to be able to use the inbuilt tabs to better group metrics.
Hi, can you also provide your model sample, in particular the metrics section? I guess that the problem is with /, as it is interpreted as a normal folder path; you can see that 'val/acc' is not replaced by a number either... 🐰
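One possible workaround, assuming your Lightning version has the auto_insert_metric_name argument on ModelCheckpoint: keep monitoring 'val/acc' but format the filename yourself so the raw metric name (with its '/') never lands in the path. A hedged sketch:
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(
    monitor="val/acc",
    dirpath="checkpoints/base_lstm",  # use your own checkpoints_dir here
    # With auto_insert_metric_name=False the metric name is not injected automatically,
    # so you can label the value however you like, e.g. valAcc.
    filename="checkpoint_epoch={epoch:02d}-valAcc={val/acc:.2f}",
    auto_insert_metric_name=False,
    save_top_k=-1,
)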
MDEwOkRpc2N1c3Npb24zNDQ3MDYz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8300#discussioncomment-973830
CNN dimension error
Hi there! I am trying to build a very basic CNN for a binary classification task however I am getting an odd dimensionality issue. My CNN is: class convNet(nn.Module): def __init__(self): super().__init__() self.conv2d_1 = nn.Conv2d(193, 193, kernel_size=3) self.conv2d_2 = nn.Conv2d(193, 193, kernel_size=3) self.conv2d_3 = nn.Conv2d(193, 193, kernel_size=3) self.maxpool = nn.MaxPool2d(2) def forward(self, x): x = self.conv2d_1(x) x = F.relu(x) x = self.maxpool(x) x = self.conv2d_2(x) x = F.relu(x) x = self.maxpool(x) x = self.conv2d_3(x) return F.relu(x) And my lightning module is: class classifier(pl.LightningModule): def __init__(self, learning_rate = float): super().__init__() self.learning_rate = learning_rate self.cnn =covNet() self.flat = nn.Flatten() self.fc1 = nn.Linear(450076, 100) self.fc2 = nn.Linear(100, 10) self.fc3 = nn.Linear(10, 1) self.dropout = nn.Dropout(p = 0.2) self.criterion = nn.BCEWithLogitsLoss() self.accuracy = tm.Accuracy() def prepare_batch(self, batch): img = batch['image'][tio.DATA] img = torch.squeeze(img) diagnosis = batch['diagnosis'] return img, diagnosis def forward(self, x): cnn_out = self.cnn(x) flat = self.flat(cnn_out) fc1_out = self.dropout(F.relu(self.fc1(flat))) fc2_out = self.dropout(F.relu(self.fc2(fc1_out))) fc3_out = F.relu(self.fc3(fc2_out)) return fc3_out def training_step(self, batch, batch_idx): x, y = self.prepare_batch(batch) y = y.view(y.size(0), -1) y = y.type(torch.float) y_hat = self.forward(x) train_loss = self.criterion(y_hat, y) self.log('train_loss', train_loss, prog_bar = True) return train_loss def validation_step(self, batch, batch_idx): x, y = self.prepare_batch(batch) y = y.view(y.size(0), -1) y = y.type(torch.float) y_hat = self.forward(x) val_loss = self.criterion(y_hat, y) self.log('val_loss', val_loss, prog_bar = True) return val_loss def test_step(self, batch, batch_idx): x, y = self.prepare_batch(batch) y = y.view(y.size(0), -1) y = y.type(torch.float) y_hat = self.forward(x) testAcc = self.accuracy(y_hat, y) self.log_dict({'test_acc': testAcc}) return testAcc def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate) return optimizer The odd part is that the validation sanity completes and I get 75% through the first epoch when the validation loop starts before I get the error: And I have checked that all my inputs have the same dimensions of [193, 229, 193]. Have I missed something obvious? (sorry if it is obvious) Any help would be greatly appreciated! For completeness: pytorch version: 1.9.0 pytorch lightning version: 1.3.7
To me it seems like you have forgotten the batch dimension. 2D convolutions expect input to have shape [N, C, H, W] where C=193, H=229 and W=193 (is it correct that you have the same amount of channels as the width?). If you only want to feed in a single image you can do sample.unsqueeze(0) to add the extra batch dimension in front.
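A tiny illustration of the missing-batch-dimension point (random data, shapes taken from the question). As a guess only: the torch.squeeze(img) call in prepare_batch also drops the batch dimension whenever a batch happens to contain a single sample, which could explain the error showing up partway through the epoch.
import torch

sample = torch.randn(193, 229, 193)   # one input without a batch dimension
batch = sample.unsqueeze(0)           # -> [1, 193, 229, 193], i.e. [N, C, H, W] with N=1

conv = torch.nn.Conv2d(in_channels=193, out_channels=193, kernel_size=3)
out = conv(batch)                     # works once the batch dimension is present
print(out.shape)                      # torch.Size([1, 193, 227, 191])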
MDEwOkRpc2N1c3Npb24zNDQwNDk0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8238#discussioncomment-956752
Run Trainer.fit multiple times under DDP mode
Hi, I have a machine learning architecture project that requires modifying the network structure multiple times. I used PytorchLigtning codes to implement it. The overall structure is as followed. The model definition, I ignore the training_step, 'validation_step' for clearly demonstration. def ToyModel(pl.LightningModule): def __init__(self): super(ToyModel, self).__init__() self.list = nn.ModuleList() def forward(self, x): for op in self.list: x = op(x) return x def add(self): self.list.append(nn.Layer(...)) The following main script shows that I want to update the network structure and retrain the model in 10 iterations. model = ToyModel() for iter in range(10): model.add() trainer = Trainer(model, strategy='ddp', gpus=-1) trainer.fit(model) When iter == 1, the model has been propagated into different GPU, and the model.add() results in different models. So I add a flag to make sure the modification is happened in the main process by model = ToyModel() for iter in range(10): model.add() trainer = Trainer(model, strategy='ddp', gpus=-1) trainer.fit(model) if not trainer.is_global_zero: return # kill other processes But this time, the program get stuck when iter == 1. My questions are: I have a feeling that native Pytorch using spawn can do that, do I need to switch back to PyTorch? Is there any decent way to do that in PyTorch? Maybe ddp_spawn? Thanks for your time. Any comments or suggestions are welcome.
Can you try it with ddp_spawn? With ddp, Lightning creates sub-scripts, i.e. it will execute your complete script once per device.
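A hedged sketch of what that could look like (ToyModel and its data come from the question and are not defined here): with ddp_spawn the main process keeps running the Python loop and only spawns workers for each fit call, and the trained weights should be transferred back to the main-process model when fit returns, so model.add() runs exactly once per iteration.
import pytorch_lightning as pl

model = ToyModel()  # from the question
for it in range(10):
    model.add()
    trainer = pl.Trainer(strategy="ddp_spawn", gpus=-1)
    trainer.fit(model)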
D_kwDOCqWgoM4APFNC
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12401#discussioncomment-2413512
How to call torch.distributed.get_rank() in model building phase
I implemented pytorch lightning-based learning as follows. dm = build_datamodule(config) model = build_model(config) trainer = Trainer( ... accelerator="ddp", ... ) trainer.fit(model, dm) In this situation, in order to set different model parameters for each gpu process, distributed.get_rank() must be called at the stage of model building. However, the trainer.fit function doesn't seem to be able to implement this because it requires an already built model. I wonder if there is any other way to do this.
Hello, you are right that this currently isn't supported. I am working on adding this feature as part of this issue: #11922. Could you confirm that the issue and proposed solution meet your needs?
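Until that lands, a common workaround (a hedged sketch, with a made-up per-rank parameter) is to defer the rank-dependent part of model construction to the LightningModule setup hook, which runs inside each DDP process after the distributed environment is initialized, so self.global_rank is already valid:
import torch
import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def __init__(self, base_scale=1.0):
        super().__init__()
        self.base_scale = base_scale
        self.layer = torch.nn.Linear(32, 2)

    def setup(self, stage=None):
        # Executed once per process, with the correct rank available.
        self.rank_scale = self.base_scale * (self.global_rank + 1)  # example per-rank value

    def forward(self, x):
        return self.layer(x) * self.rank_scale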
D_kwDOCqWgoM4AO1fM
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12017#discussioncomment-2215108
DDP NCCL freezes in docker AWS Jupyter
❓ Questions and Help Problem Jupyter terminal freezes, and connection to AWS node closes. The problem is reproducible with any Lightning example. What have you tried? python pl_examples/basic_examples/image_classifier.py --gpus 4 --accelerator ddp What's your environment? Linux using docker image Lightning 1.0.4 pytorch 1.6 cuda 10.2
The solution was to use export NCCL_SOCKET_IFNAME=lo
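If you would rather set it from the notebook or script than from the shell, a minimal equivalent (it has to run before the process group is initialized, i.e. before Trainer.fit):
import os
os.environ["NCCL_SOCKET_IFNAME"] = "lo"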
MDEwOkRpc2N1c3Npb244MjI5OA==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/4518#discussioncomment-238373
Select GPU from cli
The CLI has a flag --gpus. In a system with more than 1 GPU, is there a way to select the GPU you want to run on from the CLI? I tried --gpus [1] to select cuda:1 but it doesn't work. Also, the automatic GPU selection didn't work for me: it tried to put the job on cuda:0, but cuda:0 didn't have enough memory to run it. In the end I resorted to CUDA_VISIBLE_DEVICES, but that seems silly. Thanks!
I'm assuming you are referring to the LightningCLI In that case, just do python yourscript.py --trainer.gpus=[1]
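For completeness, a short sketch of the equivalent options (the script name is a placeholder):
# LightningCLI, as above:
#   python yourscript.py --trainer.gpus=[1]
# Environment-variable fallback:
#   CUDA_VISIBLE_DEVICES=1 python yourscript.py
# Programmatic equivalent when not using the CLI:
from pytorch_lightning import Trainer
trainer = Trainer(gpus=[1])  # run on the device that reports as cuda:1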
MDEwOkRpc2N1c3Npb24zMzU4MTIx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7461#discussioncomment-723023
RuntimeError: Trying to backward through the graph a second time
I'm migrating my repository to pytorch-lightning and I get the following error: RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time. The CNNLSTM model seems to be the problem, what should I do? [My repository] Keiku/Action-Recognition-CNN-LSTM: Action recognition tutorial using UCF-101 dataset. https://github.com/Keiku/Action-Recognition-CNN-LSTM
Hi @Keiku This error happens if you try to call backward on something twice in a row without calling optimizer.step. Are you able to share your LightningModule code? It looks like the code in your repo just uses vanilla pytorch. Thanks 😃
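While waiting for the code, one frequent cause of this error in CNN-LSTM training (a guess, not a confirmed diagnosis of this repo) is reusing the LSTM hidden state across batches without detaching it, so the second backward tries to walk through the previous batch's graph. A minimal illustration with made-up shapes:
import torch

lstm = torch.nn.LSTM(input_size=512, hidden_size=256, batch_first=True)
hidden = None
for features in [torch.randn(4, 16, 512) for _ in range(3)]:  # fake CNN feature sequences
    out, hidden = lstm(features, hidden)
    loss = out.sum()
    loss.backward()
    hidden = tuple(h.detach() for h in hidden)  # cut the graph before reusing the state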
MDEwOkRpc2N1c3Npb24zNDc3MDE4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8549#discussioncomment-1050102
Exporting PyTorch Lightning model to ONNX format not working
I am using Jupyter Lab to run. It has pre-installed tf2.3_py3.6 kernel installed in it. It has 2 GPUS in it. PyTorch Lightning Version (e.g., 1.3.0): '1.4.6' PyTorch Version (e.g., 1.8): '1.6.0+cu101' Python version: 3.6 OS (e.g., Linux): system='Linux' CUDA/cuDNN version: 11.2 How you installed PyTorch (conda, pip, source): pip Here is the screenshot of my model and it got interrupted due to connection issue. I am saving the best model in checkpoint. I am doing multi-label classification using Hugging face model. After training the model I want to export the model using ONNX format. Here is the DataModule Class N_EPOCHS = 30 BATCH_SIZE = 10 class SRDataModule(pl.LightningDataModule): def __init__(self, X_train,y_train, X_test,y_test, tokenizer, batch_size=8, max_token_len=512): super().__init__() self.batch_size = batch_size self.train_df = X_train self.test_df = X_test self.train_lab = y_train self.test_lab = y_test self.tokenizer = tokenizer self.max_token_len = max_token_len def setup(self, stage=None): self.train_dataset = SRDataset( self.train_df, self.train_lab, self.tokenizer, self.max_token_len ) self.test_dataset = SRDataset( self.test_df, self.test_lab, self.tokenizer, self.max_token_len ) def train_dataloader(self): return DataLoader( self.train_dataset, batch_size=self.batch_size, shuffle=True, num_workers=10 ) def val_dataloader(self): return DataLoader( self.test_dataset, batch_size=self.batch_size, num_workers=10 ) def test_dataloader(self): return DataLoader( self.test_dataset, batch_size=self.batch_size, num_workers=10 ) Here is the model class: class SRTagger(pl.LightningModule): def __init__(self, n_classes: int, n_training_steps=None, n_warmup_steps=None): super().__init__() self.save_hyperparameters() self.bert = BertModel.from_pretrained(BERT_MODEL_NAME, return_dict=True) self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes) self.n_training_steps = n_training_steps self.n_warmup_steps = n_warmup_steps self.criterion = nn.BCELoss() def forward(self, input_ids, attention_mask, labels=None): output = self.bert(input_ids, attention_mask=attention_mask) output = self.classifier(output.pooler_output) output = torch.sigmoid(output) loss = 0 if labels is not None: loss = self.criterion(output, labels) return loss, output def training_step(self, batch, batch_idx): input_ids = batch["input_ids"] attention_mask = batch["attention_mask"] labels = batch["labels"] loss, outputs = self(input_ids, attention_mask, labels) self.log("train_loss", loss, prog_bar=True, logger=True) return {"loss": loss, "predictions": outputs, "labels": labels} def validation_step(self, batch, batch_idx): input_ids = batch["input_ids"] attention_mask = batch["attention_mask"] labels = batch["labels"] loss, outputs = self(input_ids, attention_mask, labels) self.log("val_loss", loss, prog_bar=True, logger=True) return loss def test_step(self, batch, batch_idx): input_ids = batch["input_ids"] attention_mask = batch["attention_mask"] labels = batch["labels"] loss, outputs = self(input_ids, attention_mask, labels) self.log("test_loss", loss, prog_bar=True, logger=True) return loss def training_epoch_end(self, outputs): labels = [] predictions = [] for output in outputs: for out_labels in output["labels"].detach().cpu(): labels.append(out_labels) for out_predictions in output["predictions"].detach().cpu(): predictions.append(out_predictions) labels = torch.stack(labels).int() predictions = torch.stack(predictions) for i, name in enumerate(LABEL_COLUMNS): class_roc_auc = auroc(predictions[:, i], 
labels[:, i]) self.logger.experiment.add_scalar(f"{name}_roc_auc/Train", class_roc_auc, self.current_epoch) def configure_optimizers(self): optimizer = optim.RAdam(self.parameters(), lr=2e-4) scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=self.n_warmup_steps, num_training_steps=self.n_training_steps ) return dict( optimizer=optimizer, lr_scheduler=dict( scheduler=scheduler, interval='step' ) ) Sample Data sample_batch = next(iter(DataLoader(train_dataset, batch_size=10, num_workers=2))) sample_batch["input_ids"].shape, sample_batch["attention_mask"].shape (torch.Size([10, 512]), torch.Size([10, 512])) sample_batch.keys() dict_keys(['text_data', 'input_ids', 'attention_mask', 'labels']) Model model = SRTagger( n_classes=100, n_warmup_steps=warmup_steps, n_training_steps=total_training_steps ) ONNX code # # Export the model torch.onnx.export(model, # model being run ##since model is in the cuda mode, input also need to be (sample_batch["input_ids"],sample_batch["attention_mask"]), # model input (or a tuple for multiple inputs) "model_torch_export.onnx", # where to save the model (can be a file or file-like object) export_params=True, # store the trained parameter weights inside the model file opset_version=10, # the ONNX version to export the model to do_constant_folding=True, # whether to execute constant folding for optimization input_names = ['input'], # the model's input names output_names = ['output'], # the model's output names dynamic_axes={'input' : {0 : 'batch_size'}, # variable lenght axes 'output' : {0 : 'batch_size'}}) Error RuntimeError: output 1 (0 [ CPULongType{} ]) of traced region did not have observable data dependence with trace inputs; this probably indicates your program cannot be understood by the tracer.
Hi A google search reveals some help on this issue here: pytorch/pytorch#31591 Citing the thread there As the error message indicates, the tracer detected that the output of your model didn't have any relationship to the input. If we look closer at your code, we see that loss=0 and labels=None. def forward(self, input_ids, attention_mask, labels=None): output = self.bert(input_ids, attention_mask=attention_mask) output = self.classifier(output.pooler_output) output = torch.sigmoid(output) loss = 0 if labels is not None: loss = self.criterion(output, labels) return loss, output the if condition does not hold, so the part of your output (the loss) cannot be traced back to any inputs by onnx. Change your code to something like this and try again please: if labels is not None: loss = self.criterion(output, labels) return loss, output return output
D_kwDOCqWgoM4AN49x
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10063#discussioncomment-1512926
What are ones options for manually defining the parallelization?
(Q1) Does PyTorch Lightning enable parallelization across multiple dimensions, or does it only allow data parallelism? FlexFlow implements parallelism across 4 different dimensions ("SOAP": the sample, operator, attribute and parameter dimensions). (Q2) Over which of these does PyTorch Lightning parallelize? (Q3) Does PyTorch Lightning's API give an option to manually control the parallelization for each and every layer individually?
Dear @roman955b, 1) Currently, Lightning automatically implements distributed data parallelism. However, we are working on enabling manual parallelization for users who want deeper control of the parallelization scheme. 2) Lightning supports only (S, P) through the DeepSpeed and FSDP integrations. 3) Yes, we are currently working on this. Here is an issue to track the conversation: #9375. Best, T.C
D_kwDOCqWgoM4ANzLv
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9881#discussioncomment-1457902