Columns: question (string, length 9–229), context (string, length 0–17.6k), answer (string, length 5–3.54k), id (string, length 18–28), url (string, length 94–97).
How to tell Lightning which data to load onto the GPU (for 3rd party compatibility)
Hi ! I'm currently working on a segmentation network using PyTorch Lightning and MONAI. Context In my LightningDataModule, I preprocess my images by applying transforms (e.g., to resample them) before feeding them to my DataLoaders. For the transforms, I use a 3rd party framework called MONAI. It stores the context information of all the applied transforms in a structured nested dictionary. After predicting the labels of my predict dataset, I would like to invert these transforms (e.g., to recover the original pixel dimensions) in my predict_step. Problem MONAI's inverting logic requires the context information to be numpy data or CPU tensors (GPU tensors are not supported). However, by default, Lightning moves all the data of my batch (including its context info) to GPU. It causes the bug I reproduced in this test example notebook (cf. last cell). My question How can I tell Lightning to only move the images and their labels to GPU and keep the context info on the CPU? For further details, please refer to the following discussion Project-MONAI/MONAI#2348. Thanks for your help !
Hi! Your LightningModule has a hook, def transfer_batch_to_device(self, batch: Any, device: torch.device, dataloader_idx: int) -> Any, that you can override; it should be a perfect fit for this. Just make sure to only use the provided device.
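A minimal sketch of such an override (it assumes the batch is a dictionary whose "image" and "label" entries are the tensors to move, with the MONAI transform metadata stored under other keys; the key names are an assumption, not taken from the notebook):

```python
from typing import Any

import torch
import pytorch_lightning as pl


class SegmentationModule(pl.LightningModule):
    def transfer_batch_to_device(self, batch: Any, device: torch.device, dataloader_idx: int) -> Any:
        # Move only the image/label tensors; leave everything else (e.g. the MONAI
        # transform metadata) on the CPU so the transforms can be inverted later.
        if isinstance(batch, dict):
            moved = dict(batch)
            for key in ("image", "label"):
                if isinstance(moved.get(key), torch.Tensor):
                    moved[key] = moved[key].to(device)
            return moved
        # Fall back to the default behaviour for any other batch type.
        return super().transfer_batch_to_device(batch, device, dataloader_idx)
```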
MDEwOkRpc2N1c3Npb24zNDA1OTg5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7919#discussioncomment-854277
Not able to Generate Predictions with Trainer.predict()
Hi, I'm new to PyTorch Lightning, used it for the first time and kind of liked it. However, I am facing this one problem, Implemented a classification task for which I trained the model with Huggingface pretrained model as base and classification head on top. The model is training successfully and giving decent validation losses. The problem is, I'm not quite able to figure out the inferencing part. can anyone please point out what is it that I'm doing wrong? It's probably something very basic. I'll add the classes of the lightning modules and the Data Modules below. # |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯| # | Define the Pytorch Lightning Module Classifier Class | # |__________________________________________________________| class ABSASentimentClassifier(pl.LightningModule): def __init__(self, learning_rate = setup['lr'], weights=None, **kwargs): super().__init__() self.save_hyperparameters('learning_rate', 'max_epochs') self.model = ABSAModel_Bert() self.weights = weights self.preds = [] def training_step(self, batch, batch_nb): # Forward y_hat = self.model(batch) # if self.weights: # self.weights = torch.tensor(class_weights,dtype=torch.float) # Loss loss_fct = torch.nn.CrossEntropyLoss() loss = loss_fct(y_hat.view(-1, self.model.num_labels), batch['label'].view(-1)) # Logs self.log_dict({'training_loss':loss}, prog_bar=True) return loss def validation_step(self, batch, batch_nb): # Forward y_hat = self.model(batch) # Loss loss_fct = torch.nn.CrossEntropyLoss() loss = loss_fct(y_hat.view(-1, self.model.num_labels), batch['label'].view(-1)) # Acc a, y_hat = torch.max(y_hat, dim=1) val_acc = accuracy_score(y_hat.cpu(), batch['label'].cpu()) val_acc = torch.tensor(val_acc) # Logs self.log_dict({'val_loss':loss,'val_acc':val_acc}, prog_bar=True) return loss def test_step(self, batch, batch_nb): self.model.eval() # Forward yhat = self.model(batch) # Loss # loss_fct = torch.nn.CrossEntropyLoss() # loss = loss_fct(y_hat.view(-1, self.model.num_labels), batch['label'].view(-1)) # a, y_hat = torch.max(y_hat, dim=1) # test_acc = accuracy_score(y_hat.cpu(), batch['label'].cpu()) # Logs # self.log_dict({'test_loss':loss,'test_acc':test_acc}, prog_bar=True) self.preds = self.preds.extend(yhat.cpu().detach().numpy().tolist()) return def predict_dataloader(self, batch, batch_idx: int , dataloader_idx: int = None): return self.model(batch) ''' |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯| | Training Setup | |_________________| ''' def configure_optimizers(self): ''' |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯| | REQUIRED | | can return multiple optimizers and learning_rate schedulers | | (LBFGS it is automatically supported, no need for closure function) | |_______________________________________________________________________| ''' optimizer = torch.optim.Adam([p for p in self.parameters() if p.requires_grad], lr=self.hparams.learning_rate, eps=1e-08) scheduler = { 'scheduler': torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=2e-5, steps_per_epoch=len(self.trainer.datamodule.train_dataloader()), epochs=self.hparams.max_epochs), 'interval': 'step' # called after each training step } return [optimizer], [scheduler] @staticmethod def add_model_specific_args(parent_parser, root_dir): # pragma: no-cover """ |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯| | Define parameters that only apply to this model | |_____________________________________________________| """ parser = ArgumentParser(parents=[parent_parser]) # data parser.add_argument('--data_root', 
default=os.path.join(root_dir, 'train_val_data'), type=str) # training params (opt) parser.add_argument('--learning_rate', default=setup['lr'], type=float, help = "type (default: %(default)f)") return parser also the dataset class is : class ABSADataset(Dataset): def __init__(self, df, tokenizer, max_len=setup['max_sen_length']): self.texts = df['text'] self.aspects = df['aspect'] if 'label' in df.columns: # print('****Labels Present****') self.targets = df['label'] else: self.targets = None self.tokenizer = tokenizer self.max_len = max_len def __len__(self): return len(self.aspects) def __getitem__(self, idx): # convert indexes, tensor->list if torch.is_tensor(idx): idx = idx.tolist() # define the aspect and text item text = (str(self.texts[idx])) aspect = str(self.aspects[idx]) # define the label target = self.targets[idx] # pair the aspect and text for pair-encoding pairs = [text, aspect] ''' # |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯| # | For Debugging | # |__________________________| # print(f' text: {text}') # print(f' aspect: {aspect}') # print(type(text)) # print(type(aspect)) ''' # encode the feature pair encoded = self.tokenizer.encode_plus(pairs, add_special_tokens=True, padding='max_length', max_length=setup['max_sen_length'], return_attention_mask=True, return_tensors='pt', truncation=True) ''' # |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯| # | For Debugging | # |__________________________| # for ids in encoded['input_ids']: # print('*'*20) # print(f'{self.tokenizer.decode(ids)} of length = {len(self.tokenizer.decode(ids).split(" "))}') # print(f'is encoded as : \n{ids} \nwith length = {len(ids)}') # print('*'*20) ''' return { 'label' : target, 'input_ids' : encoded['input_ids'], 'attention_mask' : encoded['attention_mask'] } My goal is to be able to generate predictions for data without any labels present, using the trained model (saved as checkpoint (.ckpt)) This is what I did: testset = ABSATest_Dataset(test, tokenizer=transformer_tokenizer) testLoader = DataLoader(testset, batch_size=setup['test_batch_size']) trainer.predict(model_infer, testLoader) Where model_infer is : model_infer = ABSASentimentClassifier.load_from_checkpoint(PATH_TO_CKPT_FILE) and got : --------------------------------------------------------------------------- MisconfigurationException Traceback (most recent call last) <ipython-input-44-c724efd019b7> in <module>() ----> 1 trainer.predict(model_infer, testLoader) 5 frames /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in predict(self, model, dataloaders, datamodule, return_predictions, ckpt_path) 987 """ 988 return self._call_and_handle_interrupt( --> 989 self._predict_impl, model, dataloaders, datamodule, return_predictions, ckpt_path 990 ) 991 /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in _call_and_handle_interrupt(self, trainer_fn, *args, **kwargs) 680 """ 681 try: --> 682 return trainer_fn(*args, **kwargs) 683 # TODO: treat KeyboardInterrupt as BaseException (delete the code below) in v1.7 684 except KeyboardInterrupt as exception: /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in _predict_impl(self, model, dataloaders, datamodule, return_predictions, ckpt_path) 1030 ) 1031 -> 1032 results = self._run(model, ckpt_path=self.predicted_ckpt_path) 1033 1034 assert self.state.stopped /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py in _run(self, model, ckpt_path) 1115 parsing.clean_namespace(model.hparams) 1116 -> 1117 verify_loop_configurations(self, model) 1118 
1119 # attach model log function to callback /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/configuration_validator.py in verify_loop_configurations(trainer, model) 38 __verify_eval_loop_configuration(trainer, model, "test") 39 elif trainer.state.fn == TrainerFn.PREDICTING: ---> 40 __verify_eval_loop_configuration(trainer, model, "predict") 41 42 __verify_dp_batch_transfer_support(trainer, model) /usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/configuration_validator.py in __verify_eval_loop_configuration(trainer, model, stage) 187 raise MisconfigurationException("`predict_step` cannot be None to run `Trainer.predict`") 188 elif not has_step and not is_overridden("forward", model): --> 189 raise MisconfigurationException("`Trainer.predict` requires `forward` method to run.") 190 else: 191 # ----------------------------------- MisconfigurationException: `Trainer.predict` requires `forward` method to run. ALso, I haven't defined a forward function in the lightning module because it is present in the model class: class ABSAModel_Bert(torch.nn.Module): def __init__(self, num_labels=setup['num_labels'], config = setup, **kwargs): super(ABSAModel_Bert, self).__init__() self.num_labels = num_labels self.bert = transformers.AutoModel.from_pretrained(config['model_name']) self.bert_config = transformers.AutoConfig.from_pretrained(config['model_name']) self.pre_classifier = torch.nn.Linear(self.bert_config.hidden_size, self.bert_config.hidden_size) self.classifier = torch.nn.Linear(self.bert_config.hidden_size, self.num_labels) self.dropout = torch.nn.Dropout(self.bert_config.hidden_dropout_prob) # print(f'Using Dropout = {self.bert.config.seq_classif_dropout}') self.relu = torch.nn.ReLU() ''' |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯| | freeze the layers of Bert for training if needed so that | | the embeddings of all layers of Bert are not changed | |__________________________________________________________| ''' # for param in self.bert.parameters(): # param.requires_grad = False def forward(self, batch): # print((batch['input_ids'].squeeze(1)).shape) # print("*"*10) # print(batch['input_ids']) # print("*"*10) outputs = self.bert(input_ids=batch['input_ids'].squeeze(1), attention_mask=batch['attention_mask']) # output from last hidden layer hidden_state = outputs[0] # (batch_size, seq_len, dim) ''' |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯| | *output of [CLS] token | | | | [CLS] token contains the pooled embeddings of the entire | | Sequence, these are used for the classification. | |__________________________________________________________| ''' pooled_output = hidden_state[:, 0] # (batch_size, dim) ''' |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯| | sending the [CLS] token embeddings through Linear, ReLU | | and Dropout layers | |__________________________________________________________| ''' pooled_output = self.pre_classifier(pooled_output) # (batch_size, dim) pooled_output = self.relu(pooled_output) # (batch_size, dim) pooled_output = self.dropout(pooled_output) # (batch_size, dim) logits = self.classifier(pooled_output) # (batch_size, num_labels) return logits def get_outputs(self, input_ids, attention_mask): outputs = self.bert(input_ids=input_ids, \ attention_mask=attention_mask)
Since the network is wrapped as an attribute of your LightningModule, Trainer.predict cannot rely on ABSAModel_Bert.forward to generate the predictions: by default, predict_step calls the LightningModule's own forward. You need to either override predict_step (def predict_step(...): return self.model(...)) or define a forward method in your LightningModule (def forward(...): return self.model(...)).
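For reference, a minimal sketch of the two options (it assumes self.model is the ABSAModel_Bert instance from the question and that predict_step should return the raw logits):

```python
import pytorch_lightning as pl


class ABSASentimentClassifier(pl.LightningModule):
    # ... __init__, training_step, validation_step, etc. as in the question ...

    def forward(self, batch):
        # Option 1: delegate to the wrapped nn.Module so Trainer.predict can call it.
        return self.model(batch)

    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        # Option 2: override predict_step directly and return the logits per batch.
        return self.model(batch)
```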
D_kwDOCqWgoM4AOL5N
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10897#discussioncomment-1736809
Hook for Fully Formed Checkpoints
I would like to create a hook that automatically uploads checkpoints to the cloud (e.g., AWS, Azure) when they're created. I tried using on_save_checkpoint roughly like this: def on_save_checkpoint(self, trainer: pl.Trainer, pl_module: pl.LightningModule, checkpoint: Dict[str, Any]) -> dict: checkpoint_bytes = io.BytesIO() torch.save(checkpoint, checkpoint_bytes) # Upload the BytesIO somewhere... However, states for optimizers, schedulers, AMP, etc. are added after on_save_checkpoint hooks are called. Is there an elegant way to create a hook that receives the fully formed checkpoint state?
Hey @dcharatan! I'd rather suggest using remote filesystems: you can specify a remote path directly in ModelCheckpoint, or use the CheckpointIO plugin.
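Two hedged sketches of what that could look like (the bucket path and class name are placeholders, not something from this thread):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.plugins import CheckpointIO

# Option 1: let ModelCheckpoint write straight to a remote (fsspec-backed) path.
checkpoint_cb = ModelCheckpoint(dirpath="s3://my-bucket/checkpoints", monitor="val_loss", save_top_k=3)
trainer = Trainer(callbacks=[checkpoint_cb])


# Option 2: a custom CheckpointIO plugin; save_checkpoint receives the fully formed
# checkpoint dict (optimizer, scheduler and AMP states included).
class UploadingCheckpointIO(CheckpointIO):
    def save_checkpoint(self, checkpoint, path, storage_options=None):
        ...  # serialize `checkpoint` and upload it to your cloud storage

    def load_checkpoint(self, path, storage_options=None):
        ...  # download and return the checkpoint dict

    def remove_checkpoint(self, path):
        ...  # delete the remote object


trainer = Trainer(plugins=[UploadingCheckpointIO()])
```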
D_kwDOCqWgoM4AOsH4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11704#discussioncomment-2099335
Custom training loop for LightningModule
Hello, I was wondering if it is possible to control the trainloop behavior of a module (beyond overriding training_step()). I want to manually override the .grad value of each parameter by myself. For example, let's say I have this routine: m_0 = MyModel() loader_1 = getTrainLoader(1) loader_2 = getTrainLoader(2) loader_3 = getTrainLoader(3) # train the first two models m_1 = train_model_for_one_epoch(m_0, loader_1) m_2 = train_model_for_one_epoch(m_1, loader_2) # train the third model based on the previous models m_3 = MyModel() criteriton = nn.CrossEntropyLoss() optimizer = optim.SGD(m_3.parameters(), lr) # main trainloop for data, target in loader_3: loss_1 = criteriton(m_1(data), target) loss_1.backward() grad_1 = get_gradient_vector(m_1) loss_2 = criterion(m_2(data), target) loss_2.backward() grad_2 = get_gradient_vector(m_2) # manually calculate & set gradient grad_3 = (grad_1 + grad_2) / 2.0 set_model_gradient(m_3, grad_3) optimizer.step() How can I implement the final loop in the above code in PL? Thanks!
Currently there's no easy way for users to manage the dataloaders themselves, but you can perform the optimization (and manipulate the gradients) yourself by setting automatic_optimization=False; see: https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#manual-optimization
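A rough, untested sketch of what the final loop could look like with manual optimization (the class name is made up; it assumes m1 and m2 are already trained and that only m3 should be updated):

```python
import torch
from torch import nn
import pytorch_lightning as pl


class AveragedGradientModule(pl.LightningModule):
    def __init__(self, m1: nn.Module, m2: nn.Module, m3: nn.Module, lr: float = 0.1):
        super().__init__()
        self.m1, self.m2, self.m3 = m1, m2, m3
        self.criterion = nn.CrossEntropyLoss()
        self.lr = lr
        self.automatic_optimization = False  # take over the optimization loop

    def configure_optimizers(self):
        return torch.optim.SGD(self.m3.parameters(), lr=self.lr)

    def training_step(self, batch, batch_idx):
        data, target = batch
        opt = self.optimizers()
        opt.zero_grad()
        # Populate m1's and m2's .grad attributes.
        self.manual_backward(self.criterion(self.m1(data), target))
        self.manual_backward(self.criterion(self.m2(data), target))
        # Manually set m3's gradients to the average of the other two models'.
        for p1, p2, p3 in zip(self.m1.parameters(), self.m2.parameters(), self.m3.parameters()):
            p3.grad = (p1.grad + p2.grad) / 2.0
        opt.step()
        # Clear m1/m2 gradients so they don't accumulate across batches.
        self.m1.zero_grad()
        self.m2.zero_grad()
```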
MDEwOkRpc2N1c3Npb24zMjYwODA5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6456#discussioncomment-469755
share steps among test and validation steps
I'm implementing the same functionality for validation_step and test_step. Currently, I have implemented it by calling to a shared function (val_and_test_step) def val_and_test_step(self, data_batch, batch_nb): output = shared_functionality(data_batch, batch_nb) return output def validation_step(self, data_batch, batch_nb): return self.val_and_test_step(data_batch, batch_nb) def test_step(self, data_batch, batch_nb): return self.val_and_test_step(data_batch, batch_nb) Is there a more pythonic way to implement the above?
Dear @ItamarKanter, I believe this is great and pretty pythonic! You could do this to make it slightly cleaner: class Model(LightningModule): def common_step(self, batch, batch_idx, stage): logits = self(batch[0]) loss = self.compute_loss(logits, batch[1]) self.log(f"{stage}_loss", loss) return loss def training_step(self, batch, batch_idx): return self.common_step(batch, batch_idx, "train") def validation_step(self, batch, batch_idx): self.common_step(batch, batch_idx, "val") def test_step(self, batch, batch_idx): self.common_step(batch, batch_idx, "test") Best, T.C
MDEwOkRpc2N1c3Npb24zNDIwNDgw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8046#discussioncomment-896590
How to access validation step outputs of complete epoch in a `on_validation_epoch_end` hook for a custom callback ?
I want to implement a custom callback which calculates a custom metric and needs all of the outputs from the complete epoch. Is there any way to pass all the outputs to on_validation_epoch_end hook of the callback ? Here's the pseudo-code of the setup class FeedBackPrize(pl.LightningModule): def __init__( self, num_train_steps, steps_per_epoch, model_name: str = "allenai/longformer-base-4096", lr: float = 1e-5, num_labels: int = 16, multi_sample_dropout=True, step_scheduler_after: str = "step", ): super().__init__() self.learning_rate = lr self.model_name = model_name self.multi_sample_dropout = multi_sample_dropout self.num_train_steps = num_train_steps self.num_labels = num_labels self.steps_per_epoch = steps_per_epoch self.step_scheduler_after = step_scheduler_after hidden_dropout_prob: float = 0.1 layer_norm_eps: float = 1e-7 config = AutoConfig.from_pretrained(model_name) config.update( { "output_hidden_states": True, "hidden_dropout_prob": hidden_dropout_prob, "layer_norm_eps": layer_norm_eps, "add_pooling_layer": False, "num_labels": self.num_labels, } ) self.transformer = AutoModel.from_pretrained(model_name, config=config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.dropout1 = nn.Dropout(0.1) self.dropout2 = nn.Dropout(0.2) self.dropout3 = nn.Dropout(0.3) self.dropout4 = nn.Dropout(0.4) self.dropout5 = nn.Dropout(0.5) self.output = nn.Linear(config.hidden_size, self.num_labels) def forward(self, ids, mask, token_type_ids=None): transformer_out = self.transformer(ids, mask) sequence_output = transformer_out.last_hidden_state sequence_output = self.dropout(sequence_output) if self.multi_sample_dropout: logits1 = self.output(self.dropout1(sequence_output)) logits2 = self.output(self.dropout2(sequence_output)) logits3 = self.output(self.dropout3(sequence_output)) logits4 = self.output(self.dropout4(sequence_output)) logits5 = self.output(self.dropout5(sequence_output)) logits = (logits1 + logits2 + logits3 + logits4 + logits5) / 5 logits = torch.softmax(logits, dim=-1) return logits else: return sequence_output def configure_optimizers(self): param_optimizer = list(self.named_parameters()) no_decay = ["bias", "LayerNorm.bias"] optimizer_parameters = [ { "params": [ p for n, p in param_optimizer if not any(nd in n for nd in no_decay) ], "weight_decay": 0.01, }, { "params": [ p for n, p in param_optimizer if any(nd in n for nd in no_decay) ], "weight_decay": 0.0, }, ] optimizer = AdamW(optimizer_parameters, lr=self.learning_rate) scheduler = get_cosine_schedule_with_warmup( optimizer, num_warmup_steps=int(0.1 * self.num_train_steps), num_training_steps=self.num_train_steps, num_cycles=1, last_epoch=-1, ) scheduler = { "scheduler": scheduler, "interval": self.step_scheduler_after, "frequency": 1, } return [optimizer], [scheduler] def _calculate_loss(self, outputs, targets, attention_mask): loss_fct = nn.CrossEntropyLoss() active_loss = attention_mask.view(-1) == 1 active_logits = outputs.view(-1, self.num_labels) true_labels = targets.view(-1) outputs = active_logits.argmax(dim=-1) idxs = np.where(active_loss.cpu().numpy() == 1)[0] active_logits = active_logits[idxs] true_labels = true_labels[idxs].to(torch.long) loss = loss_fct(active_logits, true_labels) return loss def training_step(self, batch, batch_idx): ids, mask, targets = batch['ids'], batch['mask'], batch['targets'] outputs = self(ids, mask) loss = self._calculate_loss(outputs, targets, mask) return loss def validation_step(self, batch, batch_idx): ids, mask, targets = batch['ids'], batch['mask'], 
batch['targets'] outputs = self(ids, mask) loss = self._calculate_loss(outputs, targets, mask) return { "loss": loss, "preds": outputs, "targets": targets } def validation_epoch_end(self, validation_step_outputs): preds = [] targets = [] for output in validation_step_outputs: preds += output['preds'] targets += output['targets'] targets = torch.stack(targets) #torch.Size([2, 1536]) preds = torch.stack(preds) # torch.Size([2, 1536, 15]) return { "targets": targets, "preds": preds } Custom callback class CompMetricEvaluator(Callback): def __init__(self): pass def on_validation_epoch_end(self, trainer, pl_module): print("After validation epoch [custom metric evaluation]") # calculate custom metric here....
hey @Gladiator07! You can either override the on_validation_batch_end hook and cache the outputs in some variable of the callback and use that: class CustomCallback(Callback): def __init__(self): self.val_outs = [] def on_validation_batch_end(self, trainer, pl_module, outputs, ...): self.val_outs.append(outputs) def on_validation_epoch_end(self, trainer, pl_module): self.val_outs # <- access them here or cache the val outputs in pl_module inside validation_epoch_end: class LitModel(LightningModule): def validation_epoch_end(self, outputs): new_outputs = ... self.val_outs = new_outputs class CustomCallback(Callback): def on_validation_epoch_end(self, trainer, pl_module): pl_module.val_outs # <- access them here Note that the trainer and pl_module passed to callbacks are passed by reference, so any change to the original LightningModule is reflected in the instance received here too.
D_kwDOCqWgoM4AOp8R
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11659#discussioncomment-2077410
using EMA with model checkpoints
I'm trying to incorporate the pytorch_ema library into the PL training loop. I found one topic relating to using pytorch_ema in lightning in this discussion thread, but how would this work if i want to save a model checkpoint based on the EMA weights? for example if i want to save the model weights using just pytorch, i could do something like # using accuracy as an example if current_val_acc >= best_val_acc: with ema.average_parameters(): torch.save(model.state_dict(), saved_model_pth) so that i save the smoothed weights, but restore the original weights to the model so it doesn't affect training one workaround i can think of is to create my own model saving logic in the validation_epoch_end instead of relying on the ModelCheckpoint callback, but that seems to be a bit hacky. are there any potentially better solutions?
you can replace the model state_dict inside the checkpoint class LitModel(LightningModule): ... def on_save_checkpoint(self, checkpoint): with ema.average_parameters(): checkpoint['state_dict'] = self.state_dict()
D_kwDOCqWgoM4AOYvd
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11276#discussioncomment-1892335
How to handle pretrained models without training them
Hey gang! I have written an Encoder model and a decoder model and I want to train them separately. class Decoder(pl.LightningModule): def __init__(self, encoder_model):#visualize_latent): super().__init__() self.encoder_model = encoder_model However, when I give my Decoder an Encoder hyperparameter, how do I make sure it will not be trained? I actually asked that question before, but it wasn't answered for a while and I do not find sufficient support in the docs. Thanks in advance!
Ok, I found out from other forums that one should use .freeze(), in this case: self.encoder_model.freeze()
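For completeness, a small sketch of where that call would go (assuming the encoder is itself a LightningModule, or at least exposes freeze()):

```python
import pytorch_lightning as pl


class Decoder(pl.LightningModule):
    def __init__(self, encoder_model: pl.LightningModule):
        super().__init__()
        self.encoder_model = encoder_model
        # Sets requires_grad=False on all encoder parameters and puts it in eval mode,
        # so only the decoder's own layers are updated during training.
        self.encoder_model.freeze()
```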
MDEwOkRpc2N1c3Npb24zNTA2MDQ1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8778#discussioncomment-1140803
AttributeError: 'Trainer' object has no attribute 'run_evaluation'
I am getting the below error when running trainer.fit: AttributeError: 'Trainer' object has no attribute 'run_evaluation' Full traceback: Traceback (most recent call last): File "sdr_main.py", line 81, in <module> main() File "sdr_main.py", line 28, in main main_train(model_class_pointer, hyperparams,parser) File "sdr_main.py", line 73, in main_train trainer.fit(model) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 741, in fit self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt return trainer_fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl self._run(model, ckpt_path=ckpt_path) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run self._dispatch() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch self.training_type_plugin.start_training(self) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training self._results = trainer.run_stage() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage return self._run_train() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1319, in _run_train self.fit_loop.run() File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py", line 140, in run self.on_run_start(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/fit_loop.py", line 200, in on_run_start self.trainer.call_hook("on_train_start") File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1495, in call_hook callback_fx(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/callback_hook.py", line 138, in on_train_start callback.on_train_start(self, self.lightning_module) File "/content/SDR/utils/pytorch_lightning_utils/callbacks.py", line 10, in on_train_start return trainer.run_evaluation() AttributeError: 'Trainer' object has no attribute 'run_evaluation' My Trainer object: trainer = pytorch_lightning.Trainer( num_sanity_val_steps=2, gradient_clip_val=hparams.max_grad_norm, callbacks=[RunValidationOnStart()], checkpoint_callback=ModelCheckpoint( save_top_k=3, save_last=True, mode="min" if "acc" not in hparams.metric_to_track else "max", monitor=hparams.metric_to_track, dirpath=model.hparams.hparams_dir, filename="{epoch}", verbose=True, ), logger=logger, max_epochs=hparams.max_epochs, gpus=hparams.gpus, strategy="dp", limit_val_batches=hparams.limit_val_batches, limit_train_batches=hparams.limit_train_batches, limit_test_batches=hparams.limit_test_batches, check_val_every_n_epoch=hparams.check_val_every_n_epoch, profiler=SimpleProfiler(), accumulate_grad_batches=hparams.accumulate_grad_batches, reload_dataloaders_every_epoch=True, resume_from_checkpoint=hparams.resume_from_checkpoint, ) Any idea on how to fix this? My pytorch-lightning version is 1.5.10
hey ! this was removed in the previous release. You can try: trainer.validating = True trainer.reset_val_dataloader() trainer.val_loop.run() trainer.training = True
D_kwDOCqWgoM4AO4EP
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12097#discussioncomment-2249714
Is it okay to feed optimizer to `configure_optimizers`
Hi all, is it okay to feed an optimizer that's been initialized outside this code to pl.LightningModule? class Model(pl.LightningModule): def __init__(self, optimizer): super().__init__() self.optimizer = optimizer def configure_optimizers(self) -> Any: optimizer = self.optimizer # like this scheduler = { 'scheduler': LambdaLR(optimizer, self.lr_lambda), 'interval': 'step', 'frequency': 1, } return [optimizer], [scheduler] Thanks!
Yes, I think you can, but it's not a practice we recommend. Just curious, why are you passing it in like that?
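For reference, the usually recommended pattern is to build the optimizer inside configure_optimizers instead of passing it in; a minimal sketch with a placeholder layer and learning rate:

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import LambdaLR
import pytorch_lightning as pl


class Model(pl.LightningModule):
    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.layer = nn.Linear(4, 1)  # placeholder network

    def configure_optimizers(self):
        # Build the optimizer here, once the model's parameters exist and any
        # strategy-specific setup has already run.
        optimizer = torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
        scheduler = {
            "scheduler": LambdaLR(optimizer, lambda step: 1.0),
            "interval": "step",
            "frequency": 1,
        }
        return [optimizer], [scheduler]
```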
D_kwDOCqWgoM4AOuWg
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11783#discussioncomment-2125220
Accessing available values to monitor when saving checkpoints
I would like to save the top-10 checkpoints during training. From the documentation, setting the save_top_k, monitor and mode options in ModelCheckpoint jointly seems to do the job. But I am not sure which values are available for this callback to monitor. Are they the values logged during training_step() or validation_step() through self.log("loss", XYZ)? Thank you in advance!
yes, that's correct.
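So, for the top-10 use case, something like this should do it ("val_loss" stands in for whatever key you logged with self.log):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Keep the 10 checkpoints with the lowest logged "val_loss".
checkpoint_cb = ModelCheckpoint(monitor="val_loss", mode="min", save_top_k=10)
trainer = Trainer(callbacks=[checkpoint_cb])
```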
D_kwDOCqWgoM4AOP1k
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11032#discussioncomment-1788844
Unable to load pretrained weight into custom model in Pytorch Lightning
I have created an issue on this. Moderators, please delete this discussion
Will be discussed in #11420.
D_kwDOCqWgoM4AOeB9
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11419#discussioncomment-1947543
How to disable the automatic reduce/mean while using dp?
Hello everyone, I have upgraded pytorch-lightning to 1.2.6 recently, the behavior of dp seems different from 1.2.0. To be specific, the returned values of validation_step() are automatically reduced before sent to validation_epoch_end(). However, the metrics I use need the original predictions of each sample instead of the reduced values. Is there a way to disable the automatic reduce and pass the whole predictions to validation_epoch_end()? Note that the validation_step_end() is not implemented in my model. cc: @tchaton
I notice that the validation_step_end() and test_step_end() methods in the dp.py script are: def validation_step_end(self, output): return self.reduce(output) def test_step_end(self, output): return self.reduce(output) Thus, overriding these two methods as follows will disable the automatic reduce in evaluation and test: def validation_step_end(self, output): return output def test_step_end(self, output): return output
MDEwOkRpc2N1c3Npb24zMzIwNDE5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7009#discussioncomment-623585
New error in Trainer started appearing recently in a previously running code.
When I create a Trainer and run Trainer.fit() I am now getting the following error: raise TypeError("cannot assign '{}' as child module '{}' " TypeError: cannot assign 'int' as child module 'precision' (torch.nn.Module or None expected) This is a new error and this code was just working earlier. Do yall know what could be causing this issue?
Do you have an attribute named precision defined in your LightningModule? If so, this is an improper override of the LightningModule, which leads to this error; the base class already defines it (pytorch_lightning/core/lightning.py, lines 102-103 at commit 49a4a36): # the precision used self.precision: int = 32
D_kwDOCqWgoM4AO89y
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12250#discussioncomment-2318947
Multiple models, one dataloader?
I have a training regime that is disk-speed bound, because instances are loaded from disk. I would like to train multiple models with one dataloader. That way, I can do model selection over many models, but reduce the number of disk reads. Is this possible?
Dear @turian, Yes, it is possible. You could do something like this: class MultiModels(LightningModule): def __init__(self, models: List[nn.Module]): super().__init__() self.models = nn.ModuleList(models) def compute_loss(self, model, batch): loss = ... return loss def training_step(self, batch, batch_idx): loss = sum(self.compute_loss(model, batch) for model in self.models) return loss model = MultiModels([resnet50_model, alexnet_model, ...]) dm = ... trainer.fit(model, dm) Does this answer your question?
MDEwOkRpc2N1c3Npb24zNDc5ODU4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8565#discussioncomment-1086321
How to pass arrays to callbacks?
In a previous version of pytorch lightning I could return a dictionary in the method validation_epoch_end and then the content of the dictionary would automatically populate trainer.callback_metrics. I can then use this in my callbacks. However, if I try this in 1.4.8, trainer.callback_metrics is an empty dictionary. Do you suggest any alternative? Calling self.log is not an option, because it cannot log np.ndarrays: self.log(val_output, [[array]])` was called, but `ndarray` values cannot be logged
In the end, what I do is store the dictionary as a new attribute (e.g. self.extra_data). Then, from the callback, I can access it via pl_module.extra_data. I guess it is not clean, but it works.
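A small sketch of that workaround (the attribute and callback names are just examples):

```python
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def validation_epoch_end(self, outputs):
        # Stash whatever the callbacks need (numpy arrays included) on the module.
        self.extra_data = {"val_outputs": outputs}


class ArrayCallback(pl.Callback):
    def on_validation_epoch_end(self, trainer, pl_module):
        arrays = pl_module.extra_data["val_outputs"]
        # ... compute whatever you need from the arrays here ...
```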
D_kwDOCqWgoM4ANuM0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9736#discussioncomment-1442071
Saving one single inference example per validation stage
I would like to save one single inference example per validation stage. To this end I came up with: class Model(pl.LightningModule): ... def validation_epoch_end(self, validation_step_outputs): # self.trainer.log_dir is not set during fast_dev_test if self.trainer.log_dir is not None: x, y = (d.unsqueeze(0) for d in self.trainer.datamodule.valid_set.dataset_pair()) y_hat = self.model(x) x, y, y_hat = (d.squeeze().cpu().numpy() for d in (x, y, y_hat)) save_dir = Path(self.trainer.log_dir) / 'wavs' save_dir.mkdir(exist_ok=True) soundfile.write(save_dir/f'{self.global_step:06}_input.wav', x, 44100, subtype='PCM_24') soundfile.write(save_dir/f'{self.global_step:06}_output.wav', y_hat, 44100, subtype='PCM_24') soundfile.write(save_dir/f'{self.global_step:06}_target.wav', y, 44100, subtype='PCM_24') This works OK when running on a single device (CPU) but when running on GPUs on a slurm cluster I get: RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor I guess because the data has not been sent to the same device as the model. This seems like a problem that must have been solved before. Can anyone suggest a typical pytorch-lightning way of doing this? Thanks for your input Leo
Here: x, y = (d.unsqueeze(0) for d in self.trainer.datamodule.valid_set.dataset_pair()) I think this data is not part of the dataloader, so PL won't move it to the device automatically. You can do: x = x.to(self.device) y_hat = self.model(x) ...
D_kwDOCqWgoM4AN8UH
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10223#discussioncomment-1554605
Optimizers for nested modules
class MyNet(pl.LightningModule): def __init__(self): self.m1 = MyMod1() self.m2 = MyMod2() If I implement different configure_optimizers for different submodules MyMod(also pl.LightningModule), is it correct that parameters in each MyMod will be updated by their own optimizers returned by configure_optimizers? If I only implement configure_optimizers for the top module, is it correct that parameters in submodules will be optimized by the same optimizer returned by configure_optimizers of the top module?
When you have nested LightningModules, their configure_optimizers will never be called unless you explicitly call it in the top-level configure_optimizers. That being said, if you call, merge and return the optimizers created there, these optimizers should only contain parameters from the respective submodule (if implemented correctly)
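A sketch of that pattern with two toy submodules (the module names follow the question; untested):

```python
import torch
from torch import nn
import pytorch_lightning as pl


class MyMod1(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 4)

    def configure_optimizers(self):
        # self.parameters() here only covers this submodule's parameters.
        return torch.optim.Adam(self.parameters(), lr=1e-3)


class MyMod2(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 4)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2)


class MyNet(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.m1 = MyMod1()
        self.m2 = MyMod2()

    def configure_optimizers(self):
        # The submodules' hooks are never called automatically, so call them
        # explicitly here and return the merged list.
        return [self.m1.configure_optimizers(), self.m2.configure_optimizers()]
```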
MDEwOkRpc2N1c3Npb24zNDkwMDQ4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8637#discussioncomment-1090099
Restore the best model
What would be the most lightning way to restore the best model? Either directly after training (in the same script) or for later use (in another script)? Thanks in advance !
You can use the checkpoint callback to save only the best model, as described here: https://williamfalcon.github.io/pytorch-lightning/Trainer/Checkpointing/ (note that that doc needs to be updated; use save_top_k=1 instead of save_best_only). You can then use the load_from_checkpoint method to restore your checkpoint: https://williamfalcon.github.io/pytorch-lightning/LightningModule/methods/#load_from_metrics
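Putting both pieces together, roughly (MyLightningModule, model and the monitored key are placeholders for your own code):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(save_top_k=1, monitor="val_loss", mode="min")
trainer = Trainer(callbacks=[checkpoint_cb])
trainer.fit(model)

# Directly after training: the callback remembers where the best checkpoint lives.
best_model = MyLightningModule.load_from_checkpoint(checkpoint_cb.best_model_path)

# In another script: point load_from_checkpoint at the saved .ckpt file instead.
best_model = MyLightningModule.load_from_checkpoint("path/to/best.ckpt")
```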
MDEwOkRpc2N1c3Npb24yNzkyMzkw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5812#discussioncomment-339782
Unfreezing layers during training?
Freezing layers at the beginning of training works, however unfreezing in on_epoch_start() during training causes the gradient to explode. Without the unfreezing part (or without freezing at all), the model trains fine with no gradient issues. I'm using DDP + Apex O2 and the loss scaling will keep going down to 0 where it would encounter 0 division and crash. Is unfreezing during training not possible in pytorch/lightning? or am I missing snippet?
You can unfreeze whenever you like. If the gradients explode, it's for another reason.
MDEwOkRpc2N1c3Npb24yNzkyNDgy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5814#discussioncomment-339793
Getting Validation Accuracy with validation_epoch_end
Hello i managed to implement training accuracy per epoch with training_epoch_end but i want to do the same with validation accuracy with validation_epoch_end but i get an error of "too many indices for tensor of dimension 0" when train.fit() . I am using pytorch-lightning==1.2.8 . Thanks in advance. Error is: <ipython-input-31-41c968828bca> in validation_epoch_end(self, outputs) 39 predictions = [] 40 for output in outputs: ---> 41 for out_labels in output["labels"].detach().cpu(): 42 labels.append(out_labels) 43 for out_predictions in output["predictions"].detach().cpu(): IndexError: too many indices for tensor of dimension 0 def validation_step(self, batch, batch_idx): input_ids = batch["input_ids"] attention_mask = batch["attention_mask"] labels = batch["labels"] loss, outputs = self(input_ids, attention_mask, labels) self.log("val_loss", loss, prog_bar=True, logger=True) return loss def validation_epoch_end(self, outputs): labels = [] predictions = [] for output in outputs: for out_labels in output["labels"].detach().cpu(): labels.append(out_labels) for out_predictions in output["predictions"].detach().cpu(): predictions.append(out_predictions) labels = torch.stack(labels).int() predictions = torch.stack(predictions) validation_acc = accuracy(predictions, labels) self.logger.experiment.add_scalar("Validation Accuracy", validation_acc, self.current_epoch)
I am a fool; I forgot to add return {"loss": loss, "predictions": outputs, "labels": labels} in the validation step.
MDEwOkRpc2N1c3Npb24zNDQwNTU3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8240#discussioncomment-950465
slow new epoch start with setting ddp, num_workers, gpus
❓ Questions and Help Before asking: search the issues. search the docs. What is your question? I am training MNIST with below code. 1 GPU training is ok. But it shows slow start of new epoch when num_workers is a large number and the number of gpus > 2. Even dataloading itself is slower than with 1gpu. Code import torch from torch import nn import pytorch_lightning as pl from torchvision import datasets, transforms from torch.utils.data import DataLoader, random_split from torchvision.datasets import MNIST from torch.nn import functional as F import torch.distributed as dist import os, sys class LightningMNISTClassifier(pl.LightningModule): def init(self): super(LightningMNISTClassifier, self).init() self.layer_1 = nn.Linear(28*28, 128) self.layer_2 = nn.Linear(128, 256) self.layer_3 = nn.Linear(256, 10) def forward(self, x): batch_size, channels, width, height = x.size() x = x.view(batch_size, -1) x = self.layer_1(x) x = torch.relu(x) x = self.layer_2(x) x = torch.relu(x) x = self.layer_3(x) x = torch.log_softmax(x, dim=-1) return x def prepare_data(self): transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307, ), (0.3081,))]) mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transform) self.mnist_test = MNIST(os.getcwd(), train=False, download=True, transform=transform) self.mnist_train, self.mnist_val = random_split(mnist_train, [55000,5000]) def train_dataloader(self): data_loader2 = DataLoader(self.mnist_train, batch_size=64, num_workers=7, shuffle=True) # data_loader2 = DataLoader(self.mnist_train, batch_size=64, shuffle=True) return data_loader2 def val_dataloader(self): return DataLoader(self.mnist_val, batch_size=64) # def test_dataloader(self): # return DataLoader(self.mnist_test, batch_size=64) def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=1e-3) return optimizer def cross_entropy_loss(self, logits, labels): return F.nll_loss(logits, labels) def training_step(self, batch, batch_idx): x, y = batch logits = self.forward(x) loss = self.cross_entropy_loss(logits, y) logs = {'train_loss': loss} return {"loss": loss, "log": logs} def validation_step(self, batch, batch_idx): x, y = batch logits = self.forward(x) loss = self.cross_entropy_loss(logits, y) return {"val_loss": loss} def validation_epoch_end(self, outputs): avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean() tensorboard_logs = {"val_loss": avg_loss} return {"avg_val_loss": avg_loss, 'log':tensorboard_logs} if name == 'main': model = LightningMNISTClassifier() trainer = pl.Trainer(gpus=4, distributed_backend='ddp') trainer.fit(model) What have you tried? Horovod backend does not show slow start of new epoch. What's your environment? OS: ubuntu 18.04 Packaging pip Version pytorch 1.5.0, 0.7.6
I found that the slow DeprecationWarnings shown above come from the torchvision library. I switched to a simple dataset and the slow epoch start has disappeared so far.
MDEwOkRpc2N1c3Npb244MjI0Ng==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/1884#discussioncomment-238164
Does .predict() also use the best weights?
On https://pytorch-lightning.readthedocs.io/en/latest/starter/converting.html, it says that ".test() loads the best checkpoint automatically". Is that also the case for .predict()?
Yes, by default it loads the best checkpoint if you don't provide the model; you can also set it explicitly if you want: trainer.predict(ckpt_path='best')
D_kwDOCqWgoM4AOJpq
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10795#discussioncomment-1712263
How to get the perfect reproducibility
Hi, I'm currently trying to finetune a pretrained BERT model for intent classification using Huggingface's Transformers library and Pytorch Lightning. The structure is simple where a linear classifier is simply put on the BERT encoder. I want to get the same result at the same seed setting, but although the whole setting including the seed is identical, the result changes. I thought if I pre-fix the seed using seed_everything and set the flag workers=True, I can get the exact same result, but I don't know what the problem is. The fun fact is that all executions save the exact same best checkpoint with identical valid accuracy and actually the flow of the training seems also the same. But after executing the test, the results are not same. The main codes are as follows. from transformers import BertConfig, BertTokenizer, BertModel from pytorch_lightning import Trainer, seed_everything from pytorch_lightning.callbacks import ModelCheckpoint def run(args): # For directory setting ... # Tokenizer & Model => This model is a pre-trained encoder, so I think there is no need to fix a random seed. config = BertConfig.from_pretrained('bert-base-uncased') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained('bert-base-uncased') ... print("Loading datasets...") # For data loading train_set = Dataset(...) valid_set = Dataset(...) test_set = Dataset(...) total_train_steps = int(len(train_set) / batch_size * num_epochs) warmup_steps = int(total_train_steps * warmup_prop) # Random seed fixing for intent classification layer seed_everything(0, workers=True) # Lightning Module setting module = TrainModule(model) # Dataloaders ppd = PadCollate(...) # Reset random seed for data shuffle seed_everything(0, workers=True) train_loader = DataLoader(train_set, collate_fn=ppd.pad_collate, batch_size=batch_size, shuffle=True, num_workers=num_workers, pin_memory=True) valid_loader = DataLoader(valid_set, collate_fn=ppd.pad_collate, batch_size=batch_size, num_workers=num_workers, pin_memory=True) test_loader = DataLoader(test_set, collate_fn=ppd.pad_collate, batch_size=batch_size, num_workers=num_workers, pin_memory=True) print("Setting pytorch lightning callback & trainer...") # Model checkpoint callback filename = "best_ckpt_{epoch}_{train_all_acc:.4f}_{valid_all_acc:.4f}" monitor = "valid_all_acc" checkpoint_callback = ModelCheckpoint( dirpath=save_dir, filename=filename, verbose=True, monitor=monitor, mode='max', every_n_val_epochs=1, ) # Trainer setting trainer = Trainer( check_val_every_n_epoch=1, gpus=gpu, auto_select_gpus=True, num_nodes=num_nodes, max_epochs=num_epochs, gradient_clip_val=max_grad_norm, num_sanity_val_steps=0, deterministic=True, callbacks=[checkpoint_callback] ) print("Train starts.") trainer.fit(model=module, train_dataloader=train_loader, val_dataloaders=valid_loader) print("Training done.") print("Test starts.") trainer.test(model=module, test_dataloaders=test_loader, ckpt_path='best') print("GOOD BYE.") Also, TrainModule is designed like below. 
from torch import nn as nn import pytorch_lightning as pl class TrainModule(pl.LightningModule): def __init__(self, args, encoder): super().__init__() self.args = args self.save_hyperparameters(args) self.encoder = encoder self.output_layer = IntentDetection(args) self.output_layer.init_params() self.loss_func = nn.CrossEntropyLoss() def forward(self, input_ids, padding_masks=None): # input_ids: (B, L), padding_masks: (B, L) hidden_states = self.encoder(input_ids=input_ids, attention_mask=padding_masks)[0] # (B, L, d_h) return self.output_layer(hidden_states[:, 0]) # (B, L, C) or (B, C) def training_step(self, batch, batch_idx): ... class IntentDetection(nn.Module): def __init__(self, args): super(IntentDetection, self).__init__() self.hidden_size = args.hidden_size self.num_classes = args.num_classes self.linear = nn.Linear(self.hidden_size, self.num_classes) def forward(self, hiddens): # hiddens: (B, d_h) return self.linear(hiddens) # (B, C) def init_params(self): nn.init.xavier_uniform_(self.linear.weight) I also post the current training environment. OS: Ubuntu 18.04.5 LTS Python version: 3.8.5 Pytorch version: 1.7.1+cu110 Pytorch Lightning version: 1.3.0 GPU: A100-SXM4-40GB (DGX) CUDA version: 11.0 And I got 3 different results when I run the same codes 3 times. This is odd, since when I run the exact same code above in different environment, I could get the identical results from a same seed, even the number of workers in a dataloader is different. I also attach the environment which I was able to get the perfect reproducibility. OS: CentOs Linux release 7.9.2009 (Core) Python version: 3.7.4 Pytorch version: 1.7.1+cu110 Pytorch Lightning version: 1.3.0 GPU: RTX 3090 CUDA version: 11.2 It will be really great if anyone can help me to solve this issue... Thank you very much.
Can you try seeding again right before the trainer.test call? Not saying you should need to, but it would tell us whether that makes any difference. I think your best bet is to try to create a reproducible snippet; you can get started with https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/bug_report_model.py
MDEwOkRpc2N1c3Npb24zMzU1MTUx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7423#discussioncomment-709164
Changing the computed gradients before backprop
I want to add noise to the gradients in pytorch lightning . Specifically, something similar to this paper: https://arxiv.org/pdf/1511.06807.pdf . Basically, I would compute the gradients and before the call to backward, i want to add noise. What is the best way to achieve this in pytorch lightning?
Dear @sebastiangonsal, From your paper, it seems you want backward to compute the gradients and then add some noise to them, right? Therefore, you could override on_before_optimizer_step and add noise to every param.grad: class TestModel(LightningModule): def on_before_optimizer_step(self, optimizer, optimizer_idx): for param in self.parameters(): if param.grad is not None: param.grad += torch.rand_like(param.grad) If you want this to be re-usable, you could move it to a callback instead.
MDEwOkRpc2N1c3Npb24zNTE0MTA0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8866#discussioncomment-1205543
Should I configure FP16, optimizers, batch_size in DeepSpeed config of Pytorch-Lightning?
My deepspeed_zero2_config.json: { "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } I have some questions about how to configure DeepSpeed in Pytorch-Lightning: I see that the custom deepspeed config includes optimizer and scheduler. Should I add them in my config even I have configured in Model.configure_optimizers? You have not specified an optimizer or scheduler within the DeepSpeed config. Using `configure_optimizers` to define optimizer and scheduler. Should I add fp16 config into deepspeed config json even I have passed precision="bf16" in pl.Trainer? Should I pass logging_batch_size_per_gpu to pl.plugins.DeepSpeedPlugin even I have configured batch_size in data loader? [2022-03-24 12:42:11,529] [WARNING] [deepspeed.py:630:_auto_select_batch_size] Tried to infer the batch size for internal deepspeed logging from the `train_dataloader()`. To ensure DeepSpeed logging remains correct, please manually pass the plugin with the batch size, `Trainer(strategy=DeepSpeedPlugin(logging_batch_size_per_gpu=batch_size))`. It appears in log before training every time. Is that okey? Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) ninja: no work to do. Thanks a lot!😊
Yes, you don't need to set them inside the config; Lightning already does this for you when you set them on the Trainer and LightningModule: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/strategies/deepspeed.py
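So a Trainer configured along these lines should be enough (the batch size and GPU count below are placeholders, not values from the thread):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.plugins import DeepSpeedPlugin

trainer = Trainer(
    strategy=DeepSpeedPlugin(
        config="deepspeed_zero2_config.json",
        logging_batch_size_per_gpu=32,  # silences the batch-size inference warning
    ),
    precision="bf16",  # no fp16/bf16 section needed in the JSON config
    gpus=2,
)
```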
D_kwDOCqWgoM4APH-L
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12465#discussioncomment-2443070
Global parameters yaml file
I am using CLI to train my model. Instead of specifying parameters directly, I provide a yaml file with variables defined. Since I'm using several loggers, they have a common name parameter. So in order to start a new experiment I have to change this parameter in each logger. This raises a question is there a way to create global variable in yaml file while using CLI?
@Serega6678 You can try a jsonnet-format config file, which supports global variables: https://jsonargparse.readthedocs.io/en/stable/#jsonargparse.core.ArgumentParser To use the jsonnet format, initialize the CLI with {"parser_mode": "jsonnet"} passed as parser_kwargs.
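That would look roughly like this (MyModel and MyDataModule are placeholders for your own classes):

```python
from pytorch_lightning.utilities.cli import LightningCLI

# Parse config files as jsonnet so top-level variables can be shared, e.g. one
# experiment name reused by every logger defined in the config.
cli = LightningCLI(MyModel, MyDataModule, parser_kwargs={"parser_mode": "jsonnet"})
```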
MDEwOkRpc2N1c3Npb24zNDA3NzMw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7940#discussioncomment-857633
Torch accuracy and sklearn accuracy is v different
This is the code def test_step(self,batch,batch_idx): image,label=batch pred = self(image) loss=self.criterion(pred.flatten(),label.float()) #calculate loss acc=self.metrics(pred.flatten(),label)#calculate accuracy pred=torch.sigmoid(pred) return {'loss':loss,'acc':acc,'label':label,'pred':pred} def test_epoch_end(self, outputs): loss=torch.stack([x["loss"] for x in outputs]).mean().detach().cpu().numpy().round(2) acc=torch.stack([x["acc"] for x in outputs]).mean().detach().cpu().numpy().round(2) label=torch.cat([x["label"] for x in outputs]).detach().cpu().numpy().ravel() pred=torch.cat([x["pred"] for x in outputs]).detach().cpu().numpy().ravel() pred=pred.astype(int) print('torch acc',acc) print(classification_report(label,pred)) print('sklearn',accuracy_score(label,pred)) There is difference of 10-15% between accuracies obtained by torchmetrics and sklearn
This is the solution def test_step(self,batch,batch_idx): image,label=batch pred = self(image) return {'label':label,'pred':pred} def test_epoch_end(self, outputs): label=torch.cat([x["label"] for x in outputs]) pred=torch.cat([x["pred"] for x in outputs]) acc=self.metrics(pred.flatten(),label) pred=pred.detach().cpu().numpy().ravel() label=label.detach().cpu().numpy().ravel() pred=np.where(pred>0.5,1,0).astype(int) print('torch acc',acc) print(classification_report(label,pred)) print('sklearn',accuracy_score(label,pred))
D_kwDOCqWgoM4APAR6
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12311#discussioncomment-2492937
Bug in SLURMConnector?
nvm cheers
Cheers! 🍻
MDEwOkRpc2N1c3Npb24zMjUxMDg5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6369#discussioncomment-525503
Logging accuracy with batch accumulation
I wanted to ask how pytorch handles accuracy (and maybe even loss) logging when we have something like pl.Trainer(accumulate_grad_batches=ACCUMULATIONS). My training looks like this: def training_step(self, batch, batch_idx): x, y = batch y_hat = self(x) loss = F.cross_entropy(y_hat, y, weight=self.weight) result = pl.TrainResult(loss) result.log("train_loss", loss, prog_bar=True) result.log("train_accuracy", self.accuracy(y_hat.argmax(dim=-1), y), prog_bar=True) return result where self.accuracy = pl.metrics.classification.Accuracy(). Is there a way to make sure that the loss and accuracy is averaged across the accumulated batches? If this is not currently the case, I'm happy to do a PR if someone can show me where to look in the source code to make such a change. Thanks in advance
@sachinruk Class-based metrics have been revamped! Please check out the documentation for the new interface. While the metrics package does not directly integrate with the accumulate_grad_batches argument (yet), you should be able to do something like this now: def training_step(self, batch, batch_idx): x, y = batch y_hat = self(x) self.accuracy.update(y_hat.argmax(dim=-1), y) if self.global_step % self.trainer.accumulate_grad_batches == 0: accumulated_val = self.accuracy.compute() self.log('acc_accumulate', accumulated_val) ... Closing this for now.
MDEwOkRpc2N1c3Npb24yNzkyMjEx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5805#discussioncomment-339757
How to not create lightning_logs when using a external logger like wandb ?
I would like my wandb logger to just place their data under wandb dir, and checkpointcallback to save ckpts under dir_path I specified. And I don't want pl to create lightning_logs and files under it, but I can't set logger=False b/c I use a logger. Is there any suggestion ?
You can set the save_dir in WandbLogger, something like logger = WandbLogger(save_dir="wandb", ...) Trainer(logger=logger, ...) This should work (haven't tested it). Then your logs and checkpoints will save to two different locations.
MDEwOkRpc2N1c3Npb24zMjkyOTQz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6685#discussioncomment-568960
How to apply a nn.Module (i.e. CNN) across an axis (i.e. Video input) in a parallelizable way
Hi, I’m trying to apply CNN to each image in a video. Currently, my implementation uses a for loop and torch.cat where I take each image and apply the CNN module in the loop. But clearly, this is sequential and I don’t see why it can’t be parallelized in theory since all images are independent from each other. However, I’m not sure how this can be accomplished. I couldn’t find any built-in function for PyTorch. Is there a way to do this in parallel in PyTorch Lightning? My video input shape looks like this: (batch_size, seq_len, channel, height, width) and CNN takes input shape of (batch_size, channel, height, width). Thanks in advance for your help!
You can simply convert your (batch_size, seq_len, channel, height, width) tensor into an (batch_size*seq_len, channel, height, width) tensor, run your model and then reshape your output back: batch_size, seq_len, channel, height, width = 5, 10, 3, 28, 28 # just random picked input = torch.randn(batch_size, seq_len, channel, height, width) input = input.reshape(batch_size * seq_len, channel, height, width) output = model(input) # split the batch dimension back into the original batch size and sequence length output = output.reshape(batch_size, seq_len, *output.shape[1:])
MDEwOkRpc2N1c3Npb24zMjMzMTgw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6135#discussioncomment-401729
How to flag certain modules as non-deterministic
Hey, Question: How can I set a module/layer in my model-class to always be non-deterministic (irrespective of the deterministic flag in pl.Trainer())? Context: I use pl to train a simple AutoEncoder that uses bilinear upscaling in the decoder part. For debugging, I use the deterministic flag of the pl.Trainer(). However, I receive the following error RuntimeError: upsample_bilinear2d_backward_out_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation. Unfortunately, the error does not hint at how to set the module to be non-deterministic and neither does the documentation. Cheers dsethz
Hi @dsethz! The error comes from PyTorch, not from Lightning, and I think it's not (and shouldn't be) feasible even in pure PyTorch, because the flag is there for reproducibility: if you allow randomness in certain layers, you can't reproduce the same result anymore. https://pytorch.org/docs/stable/notes/randomness.html
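If determinism is only needed as a best-effort debugging aid, recent PyTorch versions (1.11+) accept a warn_only flag that downgrades the error to a warning; this is plain PyTorch, not a Lightning feature, so you would set it yourself in your script:

import torch

# warn about non-deterministic ops (like upsample_bilinear2d_backward) instead of raising
torch.use_deterministic_algorithms(True, warn_only=True)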
D_kwDOCqWgoM4AO0KS
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11963#discussioncomment-2340961
Remove parameters from autograd backward hook
Hello, I am trying to remove some layers from DistributedDataParallel to prevent them from being synchronized between devices. I spent the last 6 hours googling and found out that there's an attribute _ddp_params_and_buffers_to_ignore which can be set on the module that is passed to the DistributedDataParallel constructor. I've implemented a custom strategy plugin for the Trainer and checked that the parameters are then passed to the parameters_to_ignore attribute of the DistributedDataParallel, but somehow if I check the gradients of the layer, they are always the same. Is there some simpler way to exclude a layer / module from being synchronized across devices in the DDP strategy? Thank you in advance for any help!
Okay, it was my mistake, I deeply apologize for wasting your time there. The layer indeed gets removed from the DistributedDataParallel (or rather, never even gets there). But I've found another error when trying to set _ddp_params_and_buffers_to_ignore inside the LightningModule, so I've created an issue here - #11844. Thank you anyway!
D_kwDOCqWgoM4AOwH-
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11835#discussioncomment-2150353
Datamodule without Trainer (for inference)
In my usage, LightningDatamodule is currently encapsulating batch collation, moving to device, and batch transformations (via on_after_batch_transfer). However, when I want to do inference on a bunch of inputs, I want the same steps to happen. What is the recommended way to achieve this? The problem is that Trainer drives the device transfers and hooks around it, and I don't have a Trainer during inference.
Why would you not want to use the Trainer? You can now use trainer.predict for inference (will be in beta after the 1.3 release)
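A rough sketch of how that can look once trainer.predict is available, reusing the same datamodule so its collation and transfer hooks run exactly as in training. The model and datamodule names are placeholders, and the tiny Linear layer is only there to make the sketch self-contained:

import torch
import torch.nn as nn
import pytorch_lightning as pl

class LitModel(pl.LightningModule):  # placeholder model
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(32, 4)

    def forward(self, x):
        return self.net(x)

    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        x, _ = batch  # batch arrives collated and moved by the same datamodule hooks as in training
        return self(x)

# datamodule = MyDataModule()                        # your existing LightningDataModule (placeholder)
# trainer = pl.Trainer(gpus=1)
# predictions = trainer.predict(LitModel(), datamodule=datamodule)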
MDEwOkRpc2N1c3Npb24zMjcwMTQ5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6502#discussioncomment-637641
Access datamodule in custom callbacks
I want to create a custom Callback class where I can access certain attributes from my DataModule and log/save them before the start of the training step. I am a little confused about how to do this. Can anyone help me out with a quick snippet? Thanks!
Something like this should work :] from pytorch_lightning.callbacks import Callback class MyCallback(Callback): def __init__(self, ...): ... # hook for doing something with your datamodule before training step def on_train_batch_start(self, trainer, *args, **kwargs): dm = trainer.datamodule # this is a reference to your datamodule during training # do something here with your datamodule
MDEwOkRpc2N1c3Npb24zMjU1MDkx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6412#discussioncomment-451794
Saving checkpoint, hparams & tfevents after training to separate folder
Thanks to all the contributors of PyTorch Lightning for a fantastic product! I want to save a checkpoint, hparams & tfevents after training finishes. I have written this callback: class AfterTrainCheckpoint(pl.Callback): """ Callback for saving the checkpoint weights, hparams and tf.events after training finishes """ def __init__(self): super().__init__() def on_train_end(self, trainer: "pl.Trainer", pl_module: "pl.LightningModule") -> None: print(f"Saving final checkpoint...") # As we advance one step at end of training, we use `global_step - 1` final_checkpoint_name = f"final_models/final_step_{trainer.global_step - 1}.ckpt" final_hparams_name = f"final_models/final_step_{trainer.global_step - 1}.yaml" trainer.save_checkpoint(final_checkpoint_name) save_hparams_to_yaml(config_yaml=final_hparams_name, hparams=trainer.model.hparams) Is this the best 'Lightning' way to achieve this? How can I save the final events.out.tfevents file to a new directory? Should I be setting save_last=True? seen in ModelCheckpoint in the docs. I am slightly confused about "monitor metrics logged during training/validation steps or end of epochs are not guaranteed to be available at this stage."
Hey @dispoth!! I'd say use on_fit_end instead, since the last checkpoint of ModelCheckpoint is saved in the on_train_end hook, so it is not guaranteed to exist yet when your callback runs there. You can copy the log files directly; they are available inside trainer.log_dir. And yes, the monitored metrics will be available during both on_train_end and on_fit_end.
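Putting those suggestions together, a rough sketch (the final_models directory and file names are made up for illustration):

import os
import shutil
import pytorch_lightning as pl

class AfterTrainCheckpoint(pl.Callback):
    """Save a final checkpoint and copy the logger files once fitting is done."""

    def on_fit_end(self, trainer, pl_module):
        os.makedirs("final_models", exist_ok=True)
        step = trainer.global_step
        trainer.save_checkpoint(f"final_models/final_step_{step}.ckpt")
        # copy everything the logger wrote (including events.out.tfevents.*)
        shutil.copytree(trainer.log_dir, f"final_models/logs_step_{step}")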
D_kwDOCqWgoM4AOuUN
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11779#discussioncomment-2124174
How to “lightningify” the official PyTorch sentiment analysis tutorial?
Hi, I'm trying to refactor the official NLP (sentiment analysis) tutorial, using Lightning in order to take advantage of things like early stopping etc. I'm moving first steps, and the main hurdle is the creation of a Lightning module, and in particular coding the training_step. What I came up so far is class LitTextClassifier(pl.LightningModule): def __init__(self, num_class, criterion = CrossEntropyLoss): super().__init__() self.embedding = nn.EmbeddingBag(VOCAB_SIZE, EMBED_DIM, sparse=False) self.fc = nn.Linear(EMBED_DIM, num_class) self.init_weights() self.criterion = criterion def init_weights(self): initrange = 0.5 self.embedding.weight.data.uniform_(-initrange, initrange) self.fc.weight.data.uniform_(-initrange, initrange) self.fc.bias.data.zero_() def forward(self, text, offsets): embedded = self.embedding(text, offsets) return self.fc(embedded) def configure_optimizers(self): optimizer = optim.SGD(self.parameters(), lr=4.0) return optimizer def training_step(self, batch, batch_idx): # I am messing up things here text, offsets, cls = batch output = self.forward(text, offsets) loss = self.criterion(output, cls) return loss I think I am getting the training_step wrong. Can someone provide guidance here? A full gist to reproduce code + errors I get is here: https://gist.github.com/davidefiocco/3b6c6b1e09c4f664b3a73e5bf24d1668/5aa4c224f7772db835bbaa92d559837c7a40f4df
@davidefiocco Hi, I think you're trying to instantiate the criterion class with output and cls. You need to instantiate it in advance: - self.criterion = criterion + self.criterion = criterion()
MDEwOkRpc2N1c3Npb24zMjQyMjUy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6226#discussioncomment-412814
Compute Loss After Sharing Tensor Across GPUs
I’m currently attempting to make a Multi-GPU-supported CLIP training script, but am hitting a wall. I need to compute two matrices that are composed of whole batch statistics before I can compute loss. Namely, I need to compute the image and text embeddings of an entire batch. Only then can I compute the sub batch losses. How can I first calculate and share the whole batch matrices across GPUs before computing losses?
The LightningModule method all_gather(Tensor) solved it all!
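For reference, a rough sketch of what that can look like inside a LightningModule; sync_grads=True keeps the gathered tensors differentiable. The encoders and the temperature value are stand-ins, not part of the original answer:

import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class CLIPStyleModule(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        images, texts = batch
        img_emb = F.normalize(self.image_encoder(images), dim=-1)  # stand-in encoder
        txt_emb = F.normalize(self.text_encoder(texts), dim=-1)    # stand-in encoder

        # gather the sub-batch embeddings from every process
        all_img = self.all_gather(img_emb, sync_grads=True)
        all_txt = self.all_gather(txt_emb, sync_grads=True)
        # in multi-GPU runs the gathered tensors gain a leading world-size dimension
        if all_img.dim() == img_emb.dim() + 1:
            all_img = all_img.flatten(0, 1)
            all_txt = all_txt.flatten(0, 1)

        logits = all_img @ all_txt.t() / 0.07  # 0.07 is a stand-in temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
        return loss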
MDEwOkRpc2N1c3Npb24zMzcxOTM0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7602#discussioncomment-761147
Data augmentation and reload_dataloaders_every_epoch
Hi! I'm training my neural network with PyTorch Lightning and MONAI (a PyTorch-based framework for deep learning in healthcare imaging). Because my training dataset is small, I need to perform data augmentation using random transforms. Context I use MONAI's CacheDataset (basically, a PyTorch Dataset with a cache mechanism). From what I understood, CacheDataset will cache the consistent result of the transforms up to the first random transform, then reuse the cached content and apply the remaining random transforms (such as Gaussian noise, random intensity shift, etc.) for every epoch. As a result, the dataloader trains the network with a different dataset at each epoch. In the video presenting the reload_dataloaders_every_epoch flag, William Falcon mentions that: By default, Lightning only loads your dataset once (so that you don't incur the cost of downloading that data and processing it every single time). On every epoch, Lightning shuffles the data and feeds it into the training loop. My questions Did Lightning add a cache mechanism to load the data once? Must I use the reload_dataloaders_every_epoch flag to do data augmentation, or else will my random transforms only be applied once (therefore defeating my data augmentation goal)? Thanks in advance for your explanation :)
Did Lightning add a cache mechanism to load the data once? No, the flag just means that we call LightningModule.train_dataloader() every epoch if enabled, thus creating a new DataLoader instance. Must I use the reload_dataloader_every_epoch flag to do data augmentation or else my random transforms will only be applied once If I understand correctly, no. The transformations are applied directly in the Dataset, so every time an item is consumed from it, the random transforms should be applied regardless of whether the DataLoader has or hasn't been recreated.
MDEwOkRpc2N1c3Npb24zNDcwNjIz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8504#discussioncomment-1035364
How to analyze the time cost of each part
What is your question? Hi, I'm trying to implement my project with your framework, however, I'd like to count the time each part costs to make full use of GPUs, but it's puzzling that the time count by myself is not the same as tqdm does. So could you give me some advice about what happened? From process bar, the time is 1.4s/it while data time is 0.003s, gpu time is 0.5~0.7s. Code # what I add to the trainer # code added by me batch_start_tic = time.time() for batch_nb, data_batch in enumerate(self.tng_dataloader): self.batch_nb = batch_nb self.global_step += 1 model = self.__get_model() model.global_step = self.global_step # stop when the flag is changed or we've gone past the amount # requested in the batches self.total_batch_nb += 1 met_batch_limit = batch_nb > self.nb_tng_batches if met_batch_limit: break # --------------- # RUN TRAIN STEP # --------------- batch_fb = time.time() batch_result = self.__run_tng_batch(data_batch, batch_nb) early_stop_epoch = batch_result == -1 # code added by me batch_fb_end = time.time() self.__add_tqdm_metrics({'data time': batch_fb-batch_start_tic,'gpu time': batch_fb_end-batch_fb}) batch_start_tic = time.time() By the way, I find the gpu utils is about 80%, is there any tricks can make it up to 100%? What's your environment? PyTorch version 1.1.0 Lightning version 0.3.6.9 Test-tube version 0.6.7.6 Thanks a lot.
tqdm's time is a running average. You have to let it warm up for a bit before it converges to the correct time.
MDEwOkRpc2N1c3Npb244MjIwNw==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/112#discussioncomment-238010
Where is EarlyStopping searching for metrics?
Where is EarlyStopping search for metrics? Code def validation_end(self, outputs): ... metrics = { 'val_acc': val_acc, 'val_loss': val_loss } ... output = OrderedDict({ 'val_acc': torch.tensor(metrics['val_acc']), 'val_loss': torch.tensor(metrics['val_loss']), 'progress_bar': metrics, 'log': metrics }) return output if I attempt to early stop according to val_acc I get the following error: RuntimeWarning: Early stopping conditioned on metric 'val_acc' which is not available. Available metrics are: loss,train_loss The metrics mentioned (loss,train_loss) are from training_step from what I could find. I guess I'm doing something wrong, could anyone point me in the correct direction? OS: Ubuntu Packaging: pip Version 0.5.3.2 Update #1: the same code works with version 0.5.1. Bug in 0.5.3? Update #2: I found that this line in trainer/training_loop.py: self.callback_metrics = {k: v for d in all_callback_metrics for k, v in d.items()} From what I see, before this line is executed, self.callback_metrics contains val_acc. After this line values that were put in callback_metrics after validation are gone, therefore EarlyStopping can't find them. Can anyone confirm this is an issue?
If I understand correctly it is a known issue. Please look at #490. #492 fixes this in master.
MDEwOkRpc2N1c3Npb24yNzkyNTU3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5822#discussioncomment-339817
How can I save and restore the trained model every time I call fit() in pytorch_lightning?
Hi, everyone! I want to load the model from a checkpoint when starting a training run, and save it to disk automatically when every epoch finishes. Is there any nice way to do that correctly? Shall we modify the Trainer code, or just use a special hook?
For loading the model, are you looking to load just the weights? If so, take a look at LightningModule.load_from_checkpoint: https://pytorch-lightning.readthedocs.io/en/latest/common/weights_loading.html#checkpoint-loading Otherwise, if you want to load the whole training state (for example, including the optimizer states), take a look here: https://pytorch-lightning.readthedocs.io/en/latest/common/weights_loading.html#restoring-training-state For saving, take a look at the ModelCheckpoint callback: https://pytorch-lightning.readthedocs.io/en/latest/common/weights_loading.html#automatic-saving
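A small sketch combining those pieces; the paths, the model class name, and the checkpoint settings are placeholders:

import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# save a checkpoint automatically at the end of every epoch
checkpoint_cb = ModelCheckpoint(dirpath="checkpoints", save_top_k=-1)
trainer = pl.Trainer(max_epochs=10, callbacks=[checkpoint_cb])

# option 1: load only the weights into a fresh LightningModule
# model = MyLitModel.load_from_checkpoint("checkpoints/epoch=4-step=500.ckpt")

# option 2: restore the full training state (optimizers, epoch, ...) and keep training
# trainer = pl.Trainer(resume_from_checkpoint="checkpoints/epoch=4-step=500.ckpt")
# trainer.fit(model)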
D_kwDOCqWgoM4ANo9F
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9551#discussioncomment-1336681
trainer.check
Hi pytorch-lightning friends! :) I'd like to start a discussion about a trainer.check API. Basically, this API should check that all the user-defined classes (models, data, callbacks, ...) are programmatically sound. I propose to use inspect to check for function correctness; here's my PR proposal at #3244
Locking in favor of #6029
MDEwOkRpc2N1c3Npb24xNjMyMDI5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5407#discussioncomment-510814
Manually averaging metrics when logging
I have a metric from torchmetric as follows: Accuracy( num_classes=self.model.out_channels, average='none', ignore_index=self.ignore_index ) Obviously I can not log this, however I don't want to set average to any aggregation. I want to log its mean in training_step but want to preserve the class wise metric to till end of epoch where I display it to terminal. I want the metric to reset at epoch end only, so can't call compute() in training step. How to solve this?
From this comment I had the idea that if the .compute() method is called, the internal state is reset. However, this behavior has changed: calling compute() no longer resets the state of the metrics. See PR #5409
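One way this could look, keeping the per-class metric state until the end of the epoch while still logging a scalar per step. This is only a sketch: it assumes a torchmetrics version where Accuracy takes num_classes/average/ignore_index directly, and the loss computation and forward pass are placeholders:

import pytorch_lightning as pl
import torchmetrics

class LitSegmenter(pl.LightningModule):
    def __init__(self, num_classes: int, ignore_index: int):
        super().__init__()
        # per-class accuracy, accumulated manually over the whole epoch
        self.train_acc = torchmetrics.Accuracy(
            num_classes=num_classes, average="none", ignore_index=ignore_index
        )

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)                               # assumes forward() is defined elsewhere
        preds = logits.argmax(dim=1)
        self.train_acc.update(preds, y)                # only accumulate state here
        self.log("train_acc_step", (preds == y).float().mean())  # rough per-batch scalar
        return self.compute_loss(logits, y)            # placeholder loss

    def training_epoch_end(self, outputs):
        per_class = self.train_acc.compute()           # class-wise accuracy over the epoch
        print(per_class)                               # or format it however you like
        self.log("train_acc_epoch_mean", per_class.mean())
        self.train_acc.reset()                         # reset once, at epoch end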
D_kwDOCqWgoM4APOIS
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12636#discussioncomment-2516539
Control log_dict's on_epoch log name
Hey! I tried to change to using the new way of logging through MetricCollection and self.log_dict instead of logging every metric through self.log on training step and test_epoch_end. However, each metric is then logged as [metric_name]_epoch_[epoch_number] which creates a new graph for every epoch instead of allowing me to use epoch on the x-axis of my graphs (on Comet, if that is relevant). Is there a way to control this behaviour of log_dict, or do I just have to keep logging "manually" to control log name?
Hi @FluidSense, Could you try just updating the MetricCollection during the _step method and then logging in the _epoch_end method? Something like: def training_step(self, batch, batch_idx): x, y = batch logits = self(x) self.train_metrics.update(logits, y) def training_epoch_end(self, outputs): self.log_dict(self.train_metrics.compute())
MDEwOkRpc2N1c3Npb24zMzM4MDM3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7209#discussioncomment-660597
Train Discriminator less than Generator
I want to train my discriminator once every 10 iterations but couldn't figure out how to implement it with Lightning. Do you have any advice on this?
Check out the optimization docs. There are a few examples that may help you. https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#step-optimizers-at-arbitrary-intervals
MDEwOkRpc2N1c3Npb24zMjczNjgz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6526#discussioncomment-607335
embedding manual control location CPU vs GPU
I would like to create an embedding that does not fit in GPU memory but can fit in CPU memory. Select the subset for a batch and send it to the GPU at the start of each mini-batch: GPU_tensor = embedding(idx) Then, at the end of training, update the CPU embedding from the GPU embedding. I am using pl.Trainer(gpus=[0,1], distributed_backend='ddp') and will probably need accumulate_grad_batches. Any idea how to do this?
Duplicate of #6725
MDEwOkRpc2N1c3Npb24zMjk3NjM1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6726#discussioncomment-637645
TypeError: setup() got an unexpected keyword argument 'stage'
Hi, When calling train.fit(model) I get the following TypeError. I got a couple of more similar errors but could find solutions to them (mainly due to the newer version of pl), but I could not find any fix for following error: File "train.py", line 540, in <module> main(args) File "train.py", line 506, in main logger=logger, File "/Structure-Aware-BART/src/lightning_base.py", line 700, in generic_train trainer.fit(model) File "/anaconda3/envs/s-bart/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit self._run(model) File "/anaconda3/envs/s-bart/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 713, in _run self.call_setup_hook(model) # allow user to setup lightning_module in accelerator environment File "/anaconda3/envs/s-bart/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1161, in call_setup_hook model.setup(stage=fn) TypeError: setup() got an unexpected keyword argument 'stage' Any pointers would be much appreciated. Thx
Can you share your LightningModule code? Are you overriding the setup function? If so, are you overriding it with this signature? pytorch-lightning/pytorch_lightning/core/hooks.py Line 395 in 03bb389 def setup(self, stage: Optional[str] = None) -> None:
MDEwOkRpc2N1c3Npb24zMzkyNDAw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7800#discussioncomment-813966
Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance.
Hi, I started noticing the following warning message after setting up a new conda environment with Pytorch 1.8.1, which is an update from my previous environment that uses Pytorch 1.7.0. Epoch 0: 0%| [W reducer.cpp:1050] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters, consider turning this flag off. Note that this warning may be a false positive your model has flow control causing later iterations to have unused parameters. (function operator()) Any idea if this is a real concern? How can we disable find_unused_parameters? trainer = pl.Trainer( val_check_interval=0.1, gpus=-1, accelerator="ddp", callbacks=[checkpoint_callback, early_stop_callback], precision=16, ) Packages: pytorch 1.8.1 pytorch-lightning 1.2.6 cudatoolkit 11.1.1 cudnn 8.0.5 python 3.8
Hi @athenawisdoms the docs here cover how you can disable find_unused_parameters and speed up your DDP training https://pytorch-lightning.readthedocs.io/en/latest/benchmarking/performance.html#when-using-ddp-set-find-unused-parameters-false
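On the Lightning version in that environment (1.2.x), the documented way was to pass a DDP plugin; newer releases expose the same option through a DDP strategy instead. A sketch under that 1.2.x assumption:

from pytorch_lightning import Trainer
from pytorch_lightning.plugins import DDPPlugin

trainer = Trainer(
    gpus=-1,
    accelerator="ddp",
    precision=16,
    plugins=DDPPlugin(find_unused_parameters=False),  # skip the extra autograd-graph traversal
)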
MDEwOkRpc2N1c3Npb24zMzAwMjAw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6761#discussioncomment-551638
Accessing DataModule or DataLoaders within model hooks
Hey, as the title says, I want to access the DataModule or the DataLoader inside the on_fit_start hook. Is this possible, and how can I do it? To be more specific, I want to access my DataModule from my model to get a batch of data, then use it to apply some pruning algorithm on my model.
self.datamodule or self.trainer.train_dataloader in the LightningModule
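For example, from inside the LightningModule (a sketch; the pruning call is a placeholder for whatever you do with the batch):

import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def on_fit_start(self):
        dm = self.trainer.datamodule               # the attached LightningDataModule
        batch = next(iter(dm.train_dataloader()))  # grab one batch on the CPU
        self.apply_pruning(batch)                  # placeholder for your pruning routine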
MDEwOkRpc2N1c3Npb24zNDI4MDcz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8114#discussioncomment-914199
[RFC] Thoughts on `on_init_start` and `on_init_end` hooks
These hooks are called when trainer initialization begins and ends, before the model has been set, essentially allowing the user to modify the Trainer constructor. Should we be giving the user this much control over Trainer constructor? Are there scenarios where this is needed? Or can we deprecate these hooks? pytorch-lightning/pytorch_lightning/trainer/callback_hook.py Lines 55 to 63 in 338f3cf def on_init_start(self): """Called when the trainer initialization begins, model has not yet been set.""" for callback in self.callbacks: callback.on_init_start(self) def on_init_end(self): """Called when the trainer initialization ends, model has not yet been set.""" for callback in self.callbacks: callback.on_init_end(self) cc @ananthsub
@carmocca @tchaton @awaelchli do you know how these hooks are used? Have you seen any examples of these being used by the community? These hooks go way way back, but I can't think of when they'd be needed given the user "owns" the Trainer initialization. It's also unclear when on_init_start actually happens: does that mean callbacks should be the first thing initialized? it seems a lot more straightforward to write this: trainer = Trainer(...) run_all_my_fancy_logic_now(trainer) # use the trainer here Let's discuss in #10894
D_kwDOCqWgoM4AOHL0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10677#discussioncomment-1734637
How to put all but some vars to GPU
By default, in Lightning everything that is returned by a dataset is collated by the data loader and shipped to the same device. However, I am frequently in the situation where I have, let's say, x and y which are tensors, and something like y_semantic which is in principle related to y but is a richer data type, say a dictionary with some meta information about augmentations or so. I don't need 'y_semantic' to be on the GPU. Is there some flag or some method that I can override so that some variables stay on the CPU? Something like def ship_batch(self, batch): batch[0] = batch[0].to(self.device) # ... batch[2] = batch[2].cpu() # just for illustration here
Dear @Haydnspass, You have several ways to do this: Create a custom Data / Batch Object and implement the .to function to move only what is required. Simpler: Override LightningModule.transfer_batch_to_device hook and add your own logic to move only x, y to the right device. Best, T.C
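A sketch of the second option, assuming the batch is a tuple (x, y, y_semantic); note that some Lightning versions also pass a dataloader_idx argument to this hook, so check the signature for your version:

import pytorch_lightning as pl

class MyModel(pl.LightningModule):
    def transfer_batch_to_device(self, batch, device, dataloader_idx=0):
        x, y, y_semantic = batch
        # move only the tensors; the metadata stays on the CPU
        return x.to(device), y.to(device), y_semantic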
MDEwOkRpc2N1c3Npb24zMzgyODA2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7725#discussioncomment-786711
Getting error after completion of 1st epoch
I am training an image classifier on tpu and am getting errors after execution of the first epoch. https://colab.research.google.com/drive/1Lgz0mF6UiLirsDltPQH5HtctmVvb7gHm?usp=sharing
I don't see an error in the provided link. Is this still relevant?
MDEwOkRpc2N1c3Npb24zNDUxOTU2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8339#discussioncomment-1040381
fast_dev_run does not execute pl.LightningModule.test_step()
I may be misunderstanding something about the trainer argument fast_dev_run. When I provide fast_dev_run=1 and I add a print statement in my LightningModule's test_step function, the print statement does not appear. In addition, I can see a progress bar for my training set and validation set, but no progress bar appears for the test set. Is fast_dev_run actually running n batches of my training set? I have passed in a DataModule to trainer.fit() that includes a test_dataloader.
fit only runs training & validation, not testing. trainer.test runs the test_step
D_kwDOCqWgoM4AO6Tv
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12168#discussioncomment-2275238
About the Weight Initialization in PL
Hi, I am trying to use BERT for a project. The pretrained BERT model is part of my model. I am wondering how PL will initialize the model weights. Will it overwrite the pretrained BERT weights? Thanks.
Lightning doesn’t do any magic like this under the hood. You control all the weights and what gets initialized.
MDEwOkRpc2N1c3Npb24yNzkyNTEy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5816#discussioncomment-339800
Option to disable tf32
Hi, is there a way in trainer to disable tf32 for ampere architecture? It's motivated by this discussion:https://discuss.pytorch.org/t/numerical-error-on-a100-gpus/148032/2 cc @justusschock @kaushikb11 @awaelchli @Borda @rohitgr7
Hi @dnnspark! Simply setting the flags in your script doesn't work? torch.backends.cuda.matmul.allow_tf32 = False torch.backends.cudnn.allow_tf32 = False
D_kwDOCqWgoM4APM3c
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12601#discussioncomment-2498126
Clarification on reload_dataloaders_every_epoch
I have a PyTorch Lightning DataModule instance that defines train_dataloader, val_dataloader, and test_dataloader. Currently using a custom callback to reload the train_dataloader that will resample the data. I saw that there is a Trainer flag called reload_dataloaders_every_epoch and soon to be reload_dataloaders_every_n_epochs. Do these just reload the train_dataloader, or do the do all 3?
Only the train and validation dataloaders: pytorch-lightning/pytorch_lightning/trainer/training_loop.py Lines 168 to 170 in e4f3a8d # reset train dataloader if epoch != 0 and self.trainer.reload_dataloaders_every_epoch: self.trainer.reset_train_dataloader(model) pytorch-lightning/pytorch_lightning/trainer/training_loop.py Lines 203 to 207 in e4f3a8d if self.trainer.train_dataloader is None or not self.trainer.reload_dataloaders_every_epoch: self.trainer.reset_train_dataloader(model) if self.trainer.val_dataloaders is None and not self.trainer.reload_dataloaders_every_epoch: self.trainer.reset_val_dataloader(model)
MDEwOkRpc2N1c3Npb24zMjg2MjM2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6635#discussioncomment-637654
CUDA OOM during validation of first epoch
hi all, My model validation code (see below) appears to leak memory which leads to a rapid increase in GPU memory usage and, eventually, to an OOM error right before the validation loop is about to complete (about 90% done or so). CUDA memory usage hovers around 8-9GB during training, then increases rapidly to ca. 15+GB during validation, hitting the memory limit of my GPU card. What am I doing wrong here? class Lightning_WGAN_GP(pl.LightningModule): """Conditional Wasserstein GAN with gradient penalty.""" # (...) def _get_noise(self, X: torch.Tensor) -> torch.Tensor: bs, _, h, w = X.shape return torch.randn(bs, 1, h, w).type_as(X) def validation_step(self, batch: Tuple[Dict, ...], batch_idx: int) -> Dict: del batch_idx # not used X, X_hr, real = batch[0]["X_lr"], batch[0]["X_hr"], batch[1]["y"] with torch.no_grad(): noise = self._get_noise(X) fake = self.gen(noise, X, X_hr) # calling the generator loss_gen_val = F.l1_loss(fake, real) # generator loss disc_real = self.disc(X, real, X_hr).reshape(-1) # calling the discriminator disc_fake = self.disc(X, fake, X_hr).reshape(-1) loss_disc_val = -torch.mean(disc_real) + torch.mean(disc_fake) # discriminator loss self.log("gen_val_loss", loss_gen_val, on_epoch=True, on_step=False, prog_bar=True, logger=True) self.log("disc_val_loss", loss_disc_val, on_epoch=True, on_step=False, prog_bar=True, logger=True) return {"gen_val": loss_gen_val, "disc_val": loss_disc_val, "batch": batch} RuntimeError: CUDA out of memory. Tried to allocate 266.00 MiB (GPU 0; 16.00 GiB total capacity; 12.84 GiB already allocated; 96.55 MiB free; 13.52 GiB reserved in total by PyTorch) Decreasing (or increasing) the validation batch size doesn't make the problem go away. Any thoughts? $ conda list | grep pytorch pytorch 1.9.1 cuda102py38ha031fbe_3 conda-forge pytorch-gpu 1.9.1 cuda102py38hf05f184_3 conda-forge pytorch-lightning 1.5.3 pyhd8ed1ab_0 conda-forge Later edit: Skipping the validation loop, i.e., gan_trainer.fit(gan_model, train_dataloaders=dl_train) gets rid of the OOM error (the trainer makes it past the 1st epoch). Also, I am running in mixed precision (although i suspect precision doesn't have much to do with this issue?) Thank you!
Dear @mishooax, You are returning the batch from the validation_step, which would be stored. As it is currently on the GPU, after X batches you would get an OOM. Unless you need the batch at epoch end, I would recommend not returning anything from the validation_step.
D_kwDOCqWgoM4AONtA
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10959#discussioncomment-1764585
Does Pytorch-Lightning have a multiprocessing (or Joblib) module?
❓ Questions and Help What is your question? I have been googling around but can't seem to find if there is a multiprocessing module available in Pytorch-Lightning, just like how Pytorch has a torch.multiprocessing module. Does anyone know if Pytorch-Lightning has this (or a Joblib similar) module? I am looking for a Pytorch-Lightning module which allows me to parallelize over multiple GPUs Many thanks in advance. Ps. Sorry if this this the wrong place to post this question. I have posted the same question in Stackoverflow, but haven't received a reply. Edit: To be more specific, I am looking for a multiprocessing module in Pytorch-Lightning which allows me to parallelize over multiple GPUs on non-neural network computations, such as: import numpy as np import torch from torch.multiprocessing import Pool X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]]) X = torch.DoubleTensor(X) def X_power_func(j): X_power = X.cuda()**j return X_power if __name__ == '__main__': with Pool(processes = 2) as p: # Parallelizing over 2 GPUs results = p.map(X_power_func, range(4)) results
PL is just PyTorch under the hood, so you can use torch.multiprocessing or joblib directly... in case you want to train distributed on CPU only, you can use the ddp_cpu backend 🐰
MDEwOkRpc2N1c3Npb244MjI1Nw==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/2720#discussioncomment-238225
When using multiple optimizers, TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
I want build a Super-Resolution Network with multiple optimizer. The code is below, def configure_optimizers(self): d_optimizer = torch.optim.Adam([{'params': self.parameters()}], lr=self.lr, betas=(0.5, 0.9)) g_optimizer = torch.optim.Adam([{'params': self.parameters()}], lr=self.lr, betas=(0.5, 0.9)) id_optimizer = torch.optim.Adam([{'params': self.parameters()}], lr=self.lr, betas=(0.5, 0.9)) recon_optimizer = torch.optim.Adam([{'params': self.parameters()}], lr=self.lr, betas=(0.5, 0.9)) # use multi optimizer return [g_optimizer, d_optimizer, id_optimizer, recon_optimizer] def training_step(self, batch, batch_idx, optimizer_idx): print('optimizer_idx', optimizer_idx) # print('criterionG', next(self.criterionG.parameters()).requires_grad) # print('generator', next(self.generator.parameters()).requires_grad) lr_img, id_label, hr_img = batch fake_img = self(lr_img) d_fake = self.discriminator(fake_img) d_real = self.discriminator(hr_img) # train generator if optimizer_idx == 0: # log sampled images grid = torchvision.utils.make_grid(fake_img) self.logger.experiment.add_image('generated_images', grid, 0) g_loss = self.g_loss_function(d_fake, fake_img, hr_img) g_loss.requires_grad_(True) return {'loss': g_loss} # train discriminator elif optimizer_idx == 1: d_fake_loss = torch.mean(d_fake) d_real_loss = torch.mean(d_real) d_loss = (d_fake_loss + d_real_loss)/2 tqdm_dict ={'d_loss': d_loss} self.log('d_loss', d_loss) return {'d_loss': d_loss} # fine-tuning arcface model elif optimizer_idx == 2: fake_img = self.conv1(fake_img) pred = self.recognition(fake_img) loss = self.loss_function(pred, id_label) self.log('id_loss', loss) tqdm_dict = {'id_loss': loss} output = OrderedDict({ 'id_loss': loss, 'progress_bar': tqdm_dict, 'log': tqdm_dict }) return output # training reconstruction model elif optimizer_idx == 3: fake_lr = self.reconstruction(fake_img) loss = self.recon_loss_function(hr_img, fake_lr) self.log('recon_loss', loss) tqdm_dict = {'recon_loss': loss} output = OrderedDict({ 'recon_loss': loss, 'pregress_bar': tqdm_dict, 'log': tqdm_dict }) return output But, i got this error in 'if optimizer_idx == 0:' closure_loss = closure_loss / self.trainer.accumulate_grad_batches TypeError: unsupported operand type(s) for /: 'NoneType' and 'int' Can you give me a advice? Thank you.
Hi @choieq, training_step needs to return one of: Tensor - The loss tensor dict - A dictionary. Can include any keys, but must include the key 'loss' None - Training will skip to the next batch https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#training-step
MDEwOkRpc2N1c3Npb24zMjk2NjUz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6708#discussioncomment-564722
Saving and loading HF transformer model fine tuned with PL?
I am fine-tuning hugging face transformer models, essentially exactly as shown in the following example found in the pytorch lightning docs: https://pytorch-lightning.readthedocs.io/en/latest/notebooks/lightning_examples/text-transformers.html Where we instantiate the LightningModule doing something like this: class GLUETransformer(LightningModule): def __init__(self, ... ): super().__init__() self.config = AutoConfig.from_pretrained(model_name_or_path, num_labels=num_labels) self.model = AutoModelForSequenceClassification.from_pretrained( model_name_or_path, config=self.config ) But I have been confused about how I should be saving and loading checkpoints. When saving checkpoints, should I be using mymodel.model.save_pretrained("model_save_dir"), and reloading from this checkpoint using AutoModelForSequenceClassification.from_pretrained("model_save_dir"), or saving with trainer.save_checkpoint("model_save_dir/checkpoint.ckpt"), and reloading with GLUETransformer.load_from_checkpoint("model_save_dir/checkpoint.ckpt")?
Dear @brijow, You should be using the second approach. An even better one would be to rely on ModelCheckpoint to save the checkpoints and provide Trainer(resume_from_checkpoint=...) for reloading all the states. Best, T.C
MDEwOkRpc2N1c3Npb24zNTE3NDMw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8893#discussioncomment-1188532
AttributeError: Can't get attribute 'DataReader' on <module '__main__' (built-in)>
Here is DataReader class DataReader(torch.utils.data.Dataset): def __init__(self, df): super(DataReader,self).__init__() self.df = df def __len__(self): 'Denotes the total number of samples' return len(self.df) def __getitem__(self, index): 'Generates one sample of data' # Select sample file = self.df.iloc[index,0] label=self.df.iloc[index,1] return data,label when i do x=DataLoader(DataReader(train), batch_size = 2,collate_fn=my_collate) b=next(iter(x)) it worked. but when I call it inside LightningModule, it throws an error class OurModel(LightningModule): def __init__(self): super(OurModel,self).__init__() self.model =cnnmodel() def forward(self,x): x= self.model(x) return x def configure_optimizers(self): return torch.optim.AdamW(params=self.parameters(),lr=self.lr ) def train_dataloader(self): return DataLoader(DataReader(train)) def training_step(self,batch,batch_idx): return loss def val_dataloader(self): ds= DataLoader(DataReader(val)) print('ds',len(ds)) return def validation_step(self,batch,batch_idx): print('val step') return I am unable to figure out what is the error and how to resolve it.I tried to debug it, val_dataloader function is working, it print ds 7 but validation_step is not working, it should print val step, but its not printing it EDIT This issue is because of numworker, setting numworker to 0, resolve the issue, but I am getting this warning Consider increasing the value of thenum_workers argument (try 12 which is the number of cpus on this machine)` Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 126, in _main self = reduction.pickle.load(from_parent) AttributeError: Can't get attribute 'DataReader' on <module '__main__' (built-in)> Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 126, in _main self = reduction.pickle.load(from_parent) AttributeError: Can't get attribute 'DataReader' on <module '__main__' (built-in)> Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 126, in _main self = reduction.pickle.load(from_parent) AttributeError: Can't get attribute 'DataReader' on <module '__main__' (built-in)> Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "C:\Anaconda3\lib\multiprocessing\spawn.py", line 126, in _main self = reduction.pickle.load(from_parent) AttributeError: Can't get attribute 'DataReader' on <module '__main__' (built-in)> Traceback (most recent call last): File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 872, in _try_get_data data = self._data_queue.get(timeout=timeout) File "C:\Anaconda3\lib\multiprocessing\queues.py", line 108, in get raise Empty Empty The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<ipython-input-9-5ebd04eb49d1>", line 101, in <module> trainer.fit(model) File "C:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 458, in fit 
self._run(model) File "C:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 756, in _run self.dispatch() File "C:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 797, in dispatch self.accelerator.start_training(self) File "C:\Anaconda3\lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 96, in start_training self.training_type_plugin.start_training(trainer) File "C:\Anaconda3\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 144, in start_training self._results = trainer.run_stage() File "C:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 807, in run_stage return self.run_train() File "C:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 842, in run_train self.run_sanity_check(self.lightning_module) File "C:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1107, in run_sanity_check self.run_evaluation() File "C:\Anaconda3\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 949, in run_evaluation for batch_idx, batch in enumerate(dataloader): File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __next__ data = self._next_data() File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 1068, in _next_data idx, data = self._get_data() File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 1034, in _get_data success, data = self._try_get_data() File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 885, in _try_get_data raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e RuntimeError: DataLoader worker (pid(s) 4984, 20904) exited unexpectedly
Do I need to write the whole code under `if __name__ == "__main__"`, or just `trainer.fit(model)`? (Replying to: Did you try to add if __name__ == "__main__" to your script?)
MDEwOkRpc2N1c3Npb24zNTE4OTIx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8898#discussioncomment-1193114
"Error: 'MyDataModule' object is not iterable" when calling trainer.predict(model, data)
Hi! I've trained a model successfully. I now want to have a look at the model predictions. I've overridden the predict_dataloader method in my DataModule: def predict_dataloader(self): pred_loader = torch.utils.data.DataLoader( self.val_ds, batch_size=1, num_workers=4) return pred_loader Then, I initialize the model using my checkpoint and call the predict method: checkpoint_dir = os.path.join(root_dir, "logs2/epoch=199-val_loss=0.26-val_dice=1.67.ckpt") # initialize the data module data = KeriDataModule(data_dir=data_dir, pix_dim=(0.6, 0.6, 0.937)) # initialize the LightningModule (from checkpoint) net = Net.load_from_checkpoint(checkpoint_path=checkpoint_dir) # initialize Lightning's trainer trainer = pl.Trainer(gpus=[0]) results = trainer.predict(net, data) Unfortunately, I get the following error Predicting: 0it [43:22, ?it/s] GPU available: True, used: True TPU available: False, using: 0 TPU cores LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] Predicting: 0it [00:00, ?it/s] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-12-4bd0b31315bd> in <module> 10 trainer = pl.Trainer(gpus=[0]) 11 ---> 12 results = trainer.predict(net, data) 13 print(results) 14 ~/miniconda3/envs/monai-lightning-latest/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in predict(self, model, dataloaders, datamodule, return_predictions) 629 self.data_connector.attach_data(model, predict_dataloaders=dataloaders, datamodule=datamodule) 630 --> 631 results = self._run(model) 632 633 assert self.state.stopped ~/miniconda3/envs/monai-lightning-latest/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in _run(self, model) 754 755 # dispatch `start_training` or `start_evaluating` or `start_predicting` --> 756 self.dispatch() 757 758 # plugin will finalized fitting (e.g. 
ddp_spawn will load trained model) ~/miniconda3/envs/monai-lightning-latest/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in dispatch(self) 793 self.accelerator.start_evaluating(self) 794 elif self.predicting: --> 795 self.accelerator.start_predicting(self) 796 else: 797 self.accelerator.start_training(self) ~/miniconda3/envs/monai-lightning-latest/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py in start_predicting(self, trainer) 100 101 def start_predicting(self, trainer: 'pl.Trainer') -> None: --> 102 self.training_type_plugin.start_predicting(trainer) 103 104 def pre_dispatch(self, trainer: 'pl.Trainer') -> None: ~/miniconda3/envs/monai-lightning-latest/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py in start_predicting(self, trainer) 150 def start_predicting(self, trainer: 'pl.Trainer') -> None: 151 # double dispatch to initiate the predicting loop --> 152 self._results = trainer.run_stage() 153 154 def training_step(self, *args, **kwargs): ~/miniconda3/envs/monai-lightning-latest/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_stage(self) 804 return self.run_evaluate() 805 if self.predicting: --> 806 return self.run_predict() 807 return self.run_train() 808 ~/miniconda3/envs/monai-lightning-latest/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py in run_predict(self) 1071 dataloader = self.accelerator.process_dataloader(dataloader) 1072 dl_max_batches = self.predict_loop.max_batches[dataloader_idx] -> 1073 for batch_idx, batch in enumerate(dataloader): 1074 if batch is None: 1075 continue TypeError: 'KeriDataModule' object is not iterable I don't understand what I messed up ^^'. Any help / tip would be greatly appreciated :)
Sorry! We did not add support for trainer.predict(model, datamodule). We'll do it asap! You need to do trainer.predict(model, datamodule=datamodule)
MDEwOkRpc2N1c3Npb24zMzU1MjUz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7425#discussioncomment-709205
How to customize training loop?
I want to marry lightning and https://pytorch-geometric.readthedocs.io/en/latest/ or in particular https://pytorch-geometric-temporal.readthedocs.io/en/latest/ When following the basic examples on their website such as for the ChickenpoxDatasetLoader() a RecurrentGCN is constructed. For me being a total newbie for lightning it is already pretty clear how t convert that to a regular lightning module - kudos to the easy API so far. However, it is rather unclear for me how to put the training loop into a lightning compatible trainer: from tqdm import tqdm model = RecurrentGCN(node_features = 4) # chickenpox model optimizer = torch.optim.Adam(model.parameters(), lr=0.01) model.train() for epoch in tqdm(range(200)): cost = 0 for time, snapshot in enumerate(train_dataset): y_hat = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr) cost = cost + torch.mean((y_hat-snapshot.y)**2) cost = cost / (time+1) cost.backward() optimizer.step() optimizer.zero_grad() Would I need a custom trainer in lightning? In particular, I need to be able to handle temporal graphs with snapshots over time i.e. think of a social network of people who can perform a hobby at a certain location (lat, long ) and timestamp. I guess it would be fine to discretize the time to i.e. weekly or monthly slices, but the graph is dynamic. import pandas as pd df = pd.DataFrame({'person_1':[], 'person_2':[], 'time':[], 'lat':[], 'long':[], 'hobby':[]}) display(df) I want to perform link prediction - i.e. recommend new friends based on similar hobbies in similar locations & time ranges. With that being said: in the pytorch-geometric-temporal framework they denote snapshots over time (this is not meant as batches, currently they assume that a snapshot contains all the data in a single batch for that particular span of time). However, the default trainer does not offer this functionality to iterate over snapshots and to me it is unclear how to include it.
Just found https://github.com/benedekrozemberczki/pytorch_geometric_temporal/blob/master/examples/lightning_example.py - it seems to be almost the answer to my question, except that it does not cover how to handle the iteration over the temporal snapshots.
MDEwOkRpc2N1c3Npb24zMzY1OTU1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7549#discussioncomment-744017
ValueError('signal only works in main thread')
Has anyone else run into this error: ValueError('signal only works in main thread') I'm running a hyper parameter sweep using Weights and Biases's framework. Running on a GPU on Google Colab which causes all launched runs to fail. Running it locally (Mac OS) prompts 'signal only works in main thread' to be printed to stdout (which also happens on Colab) but it doesn't crash. Any ideas? It seems people using Ray with PL have come across this. The hacky solution presented there (os.environ['SLURM_JOB_NAME'] = 'bash') doesn't work in my case (neither on Mac OS or Colab).
@max0x7ba @borisdayma The issue has been addressed with #10610
D_kwDOCqWgoM4ANqDI
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9589#discussioncomment-1669385
forward() takes 1 positional argument but 2 were given
when i try to write a model, i got forward() takes 1 positional argument but 2 were given error, this is my code, i want to know the wrong plcace, thanks!! i guess the error is in UpSample place, but i don't know why... class DownSample(nn.Module): def __init__(self, in_planes: int, out_planes: int, kernel_size: int): super(DownSample, self).__init__() self.down = nn.Sequential( nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=2, padding=1), nn.BatchNorm2d(out_planes), nn.LeakyReLU() ) init_weight.initialize(self) def forward(self, x): return self.down(x) class UpSample(nn.Module): def __init__(self, in_planes: int, out_planes: int, kernel_size: int, padding: int, output_padding: int, apply_dropout: bool = False): super(UpSample, self).__init__() self.up = nn.ModuleList() self.up.append( nn.ConvTranspose2d(in_planes, out_planes, kernel_size, stride=2, padding=padding, output_padding=output_padding), ) self.up.append(nn.BatchNorm2d(out_planes)) if apply_dropout: self.up.append(nn.Dropout()) self.up.append(nn.LeakyReLU()) init_weight.initialize(self) def forward(self, inputs): return self.up(inputs) class MyEncoder(nn.Module): def __init__(self): super(MyEncoder, self).__init__() down_stack = [ pix2pix.DownSample(3, 64, 4), pix2pix.DownSample(64, 128, 4), pix2pix.DownSample(128, 256, 4), pix2pix.DownSample(256, 512, 4), pix2pix.DownSample(512, 512, 4), pix2pix.DownSample(512, 512, 4), pix2pix.DownSample(512, 512, 4), pix2pix.DownSample(512, 512, 4), ] self.encoder = nn.ModuleList() for item in down_stack: self.encoder.append(item) def forward(self, inputs): feat = inputs for i in range(len(self.encoder)): feat = self.encoder[i](feat) return feat class MyDecoder(nn.Module): def __init__(self): super(MyDecoder, self).__init__() up_stack = [ pix2pix.UpSample(512, 512, 4, 1, 1, True), pix2pix.UpSample(512, 512, 4, 1, 1, True), pix2pix.UpSample(512, 512, 4, 1, 1, True), pix2pix.UpSample(512, 512, 4, 1, 1, True), pix2pix.UpSample(512, 256, 4, 1, 1, True), pix2pix.UpSample(256, 128, 4, 1, 1, True), pix2pix.UpSample(256, 128, 4, 1, 1, True), pix2pix.UpSample(128, 64, 4, 1, 1, True), ] self.up = nn.ModuleList() for item in up_stack: self.up.append(item) def forward(self, inputs): return self.up(inputs) class MyNet(pl.LightningModule): def __init__(self): super(MyNet, self).__init__() self.encoder = MyEncoder() self.decoder = MyDecoder() def forward(self, inputs): feat = self.encoder(inputs) feat = self.decoder(feat) return feat if __name__ == '__main__': from torchsummaryX import summary import torch x = torch.ones((1, 3, 512, 512)) u = UNet() summary(model=u, x=x)
Hi, have a look at the full stack trace so you know which of the forward methods of these different nn.Modules is meant. Have you verified that u(x) works?
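For what it's worth, a stack trace in cases like this usually points at UpSample or MyDecoder: nn.ModuleList is only a container and is not callable, so self.up(inputs) raises exactly this kind of error. A sketch of the usual fix, adapted from the question's UpSample (the concrete layer parameters are just for illustration):

import torch
import torch.nn as nn

class UpSample(nn.Module):
    def __init__(self, in_planes, out_planes):
        super().__init__()
        # nn.Sequential is callable, unlike nn.ModuleList
        self.up = nn.Sequential(
            nn.ConvTranspose2d(in_planes, out_planes, 4, stride=2, padding=1, output_padding=1),
            nn.BatchNorm2d(out_planes),
            nn.LeakyReLU(),
        )

    def forward(self, x):
        return self.up(x)

    # alternative, if you want to keep an nn.ModuleList: iterate over it explicitly
    # def forward(self, x):
    #     for layer in self.up:
    #         x = layer(x)
    #     return x

print(UpSample(3, 8)(torch.randn(1, 3, 16, 16)).shape)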
MDEwOkRpc2N1c3Npb24zNDI1OTU0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8091#discussioncomment-918111
Logging RL results and tracking them with ModelCheckpoint(monitor=...)
I am using Pytorch Lightning in an RL setting and want to save a model when it hits a new max average reward. I am using the Tensorboard logger where I return my neural network loss in the training_step() using: logs = {"policy_loss": pred_loss} return {'loss':pred_loss, 'log':logs} And then I am saving my RL environment rewards using in on_epoch_end(): self.logger.experiment.add_scalar("mean_reward", np.mean(reward_losses), self.global_step) self.logger.experiment.add_scalars('rollout_stats', {"std_reward":np.std(reward_losses), "max_reward":np.max(reward_losses), "min_reward":np.min(reward_losses)}, self.global_step) And every 5 epochs I am also writing out another RL reward loss where I use the best actions rather than sampling from them: if self.current_epoch % self.hparams['eval_every']==0 and self.logger: output = self.collect_rollouts(greedy=True, num_episodes=self.hparams['eval_episodes']) reward_losses = output[0] self.logger.experiment.add_scalar("eval_mean", np.mean(reward_losses), self.global_step) My question is, how can I set my ModelCheckpoint to monitor eval_mean (which is only written out every 5 epochs, this seems like it would be a problem)? I would also settle for monitoring mean_reward (written out every epoch)? Right now I can only successfully monitor policy_loss which does not always correspond to higher rewards obtained (setting monitor = to anything else throws an error). I know that in the new PL version self.log() should be used but after re-writing my code using this it still didn't solve my issue. I have spent a lot of time looking through the docs and for examples of this but I have found the logging docs on this to be quite sparse and difficult to even get everything to log in the first place. I am using Pytorch Lightning 1.0.5 and Pytorch 1.7.0. Thank you for any help/guidance.
I have multiple comments that I did not verify yet, but they might help. If I'm not mistaken, self.log only works within a selection of hooks currently. I suggest you try to move the relevant code to training_epoch_end, where self.log should work correctly. Set the monitor key in ModelCheckpoint(monitor=...) explicitly. You have the problem that you can only update/log every n epochs; I see two solutions: 1) synchronize your ModelCheckpoint with the period parameter to only run on the epochs where you update the monitored quantity. 2) Cache the last value and log it in the epochs between your regular interval, to make the ModelCheckpoint see it as unchanged. The second option may even be the default behavior of Lightning, but I need to verify. So in summary, I imagine something like this: # Model def training_epoch_end(self, outputs): # ... compute reward losses if self.current_epoch % self.hparams['eval_every'] == 0: self.last_eval_mean = ...  # compute the new eval mean self.log("eval_mean", self.last_eval_mean) # Trainer trainer = Trainer(callbacks=[ModelCheckpoint(monitor="eval_mean")]) # or maybe also try trainer = Trainer(callbacks=[ModelCheckpoint(monitor="eval_mean", period=self.hparams['eval_every'])])
MDEwOkRpc2N1c3Npb24zMTc5NTk3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5883#discussioncomment-354870
multiple on_train_epoch_start callbacks but only one on_train_epoch_end?
I thought the number of on_train_epoch_start and on_train_epoch_end should be equal to the number of epochs. But when I passed the following callback function: class MyPrintingCallback(Callback): def on_train_epoch_start(self, trainer, pl_module): print('Train epoch start for epoch: ', pl_module.current_epoch) def on_train_epoch_end(self, trainer, pl_module): # will run only once in the beginning print('Train step end for epoch: ', pl_module.current_epoch) on_train_epoch_end is only called in the 0th epoch: Training: -1it [00:00, ?it/s]Train epoch start for epoch: 0 Epoch 0: : 4875it [00:33, 143.57it/s, Train step end for epoch: 0 Train epoch start for epoch: 1 Epoch 1: : 0it [00:00, 7096.96it/s, loss=2.13e+09, v_num=37] Train epoch start for epoch: 2 Epoch 2: : 0it [00:00, 11335.96it/s, loss=2.13e+09, v_num=37]Train epoch start for epoch: 3 Epoch 3: : 0it [00:00, 12052.60it/s, loss=2.13e+09, v_num=37]Train epoch start for epoch: 4 Epoch 4: : 0it [00:00, 4301.85it/s, loss=2.13e+09, v_num=37] Any idea why this is happening?
It turns out that the issue is because I was chaining many data loaders with itertools.chain() which was called only once by lightning.
D_kwDOCqWgoM4AN637
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10140#discussioncomment-1536062
import pytorch_lightning as pl does not work on colab
I can install PL in Colab with !pip install pytorch-lightning==1.2.2 --quiet but I cannot import it with import pytorch_lightning as pl. I would be thankful if you could help me with this issue.
Please upgrade to version 1.2.3 (released yesterday) where this issue was solved.
MDEwOkRpc2N1c3Npb24zMjU1NDQ5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6425#discussioncomment-462173
checkpoint every module in a different ckpt file
Hi! I am currently working on a project where I would like to checkpoint my model in separate pieces. My model is composed of: a backbone, which is itself composed of 3 modules; and several heads, each one being a module. I would like to save one ckpt with the backbone and one ckpt per head. I understand that I should create a custom callback inheriting from ModelCheckpoint and then modify on_save_checkpoint, but I am not really sure how to do it. on_save_checkpoint is defined as: Another solution would be to modify my lightning module to load the ckpt as a dict when the training ends and then save each subpart as a ckpt using torch.save(), but I understand that this solution is much less elegant. Any suggestions? Thanks in advance!
I'd suggest using checkpoint_io plugin for your use-case.
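A rough sketch of that approach: subclass TorchCheckpointIO and split the state_dict by module prefix before writing. The prefixes ("backbone.", "heads."), the assumption that the heads live in an nn.ModuleDict called heads, and the file naming are all assumptions about your model layout, and the plugin API requires a recent Lightning (1.5+):

import os
import torch
from pytorch_lightning.plugins.io import TorchCheckpointIO

class SplitCheckpointIO(TorchCheckpointIO):
    def save_checkpoint(self, checkpoint, path, storage_options=None):
        state = checkpoint["state_dict"]
        root, _ = os.path.splitext(str(path))
        # one file for the backbone...
        backbone = {k: v for k, v in state.items() if k.startswith("backbone.")}
        torch.save(backbone, f"{root}_backbone.ckpt")
        # ...and one per head (assumes heads live under self.heads in the LightningModule)
        head_names = {k.split(".")[1] for k in state if k.startswith("heads.")}
        for name in head_names:
            head = {k: v for k, v in state.items() if k.startswith(f"heads.{name}.")}
            torch.save(head, f"{root}_head_{name}.ckpt")
        # still write the full checkpoint so training can resume normally
        super().save_checkpoint(checkpoint, path, storage_options)

# usage: trainer = pl.Trainer(plugins=[SplitCheckpointIO()])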
D_kwDOCqWgoM4AOKiQ
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10840#discussioncomment-1722497
Scheduler.step() only called on the end of validation
What I did: def configure_optimizers(self): optimizer = torch.optim.AdamW( self.parameters(), lr=self.hparams.learning_rate, eps=1e-5, ) scheduler = torch.optim.lr_scheduler.OneCycleLR( optimizer, max_lr=self.hparams.learning_rate, epochs=self.trainer.max_epochs, steps_per_epoch=len(self.datamodule.train_dataloader()), ) set_trace() return {"optimizer": optimizer, "lr_scheduler": scheduler} I put an breakpoint on scheduler.step to see when it will be called. What I got: And I found I ran into breakpoint (scheduler.step is called) only on the end of validation. epochs=self.trainer.max_epochs=1 and steps_per_epoch=len(self.datamodule.train_dataloader())=144, which are correct. Code for your reference if needed import random from IPython.core.debugger import set_trace import torch import torch.nn.functional as F from transformers import AutoConfig, ElectraForTokenClassification import pytorch_lightning as pl from pytorch_lightning.callbacks.lr_monitor import LearningRateMonitor from pl_bolts.callbacks import PrintTableMetricsCallback from pytorch_lightning.loggers import WandbLogger class SortNumberDataset(torch.utils.data.Dataset): def __init__(self, dataset_size, vocab_size, sequence_length): super().__init__() self.dataset_size = dataset_size self.vocab_size = vocab_size self.sequence_length = sequence_length def __getitem__(self, i): x = [ random.randint(0, self.vocab_size - 1) for _ in range(self.sequence_length) ] y = sorted(x) return {"x": torch.tensor(x), "y": torch.tensor(y)} def __len__(self): return self.dataset_size class NumberSorting(pl.LightningModule): def __init__( self, hf_config, learning_rate, trainset_size, valset_size, vocab_size, sequence_length, batch_size=128, num_workers=4, ): super().__init__() self.save_hyperparameters() self.datamodule = pl.LightningDataModule.from_datasets( SortNumberDataset(trainset_size, vocab_size, sequence_length), SortNumberDataset(valset_size, vocab_size, sequence_length), batch_size=batch_size, num_workers=num_workers, ) # self.model = ElectraForTokenClassification(hf_config) # tie input/output embeddings delattr(self.model, "classifier") self.model.classifier = ( lambda x: x @ self.model.electra.embeddings.word_embeddings.weight.t() ) self.val_acc = pl.metrics.Accuracy() def forward(self, batch): result = self.model(input_ids=batch["x"], labels=batch["y"], return_dict=True) return result.logits.argmax(dim=-1), result.loss def training_step(self, batch, batch_idx): # self.log("lr", self.trainer.optimizers[0].param_groups[0]["lr"]) return self(batch)[-1] def validation_step(self, batch, batch_idx): preds, loss = self(batch) self.val_acc(preds.view(-1), batch["y"].view(-1)) self.log("valid_acc", self.val_acc, on_epoch=True) def configure_optimizers(self): timizer = torch.optim.AdamW(optimizer_grouped_parameters, eps=1e-5) optimizer = torch.optim.AdamW( self.parameters(), lr=self.hparams.learning_rate, ) # return optimizer scheduler = torch.optim.lr_scheduler.OneCycleLR( optimizer, max_lr=self.hparams.learning_rate, epochs=self.trainer.max_epochs, steps_per_epoch=len(self.datamodule.train_dataloader()), ) set_trace() return {"optimizer": optimizer, "lr_scheduler": scheduler} config = AutoConfig.from_pretrained( "google/electra-small-generator", pad_token_id=-1, max_position_embeddings=7, vocab_size=7, num_labels=7, embedding_size=10, hidden_size=10, intermediate_size=8, num_hidden_layers=2, num_attention_heads=2, ) plmodule = NumberSorting( config, learning_rate=0.05, trainset_size=18333, valset_size=20000, vocab_size=7, sequence_length=7, ) 
trainer = pl.Trainer( max_epochs=1, gpus="0", callbacks=[ PrintTableMetricsCallback(), LearningRateMonitor(logging_interval="step", log_momentum=True), ], logger=WandbLogger(), ) trainer.fit(plmodule)
That is the default behaviour of learning rate schedulers in Lightning: they step at the end of the training epoch. Can I ask what you are trying to achieve? If you want the learning rate scheduler to step after each batch, you can read more about what the output of configure_optimizers should look like here: https://pytorch-lightning.readthedocs.io/en/0.9.0/optimizers.html#learning-rate-scheduling
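If the goal is to step OneCycleLR once per batch instead, a minimal sketch of the scheduler configuration (recent versions accept a nested lr_scheduler dict; the "interval": "step" entry is what changes the behaviour):

def configure_optimizers(self):
    optimizer = torch.optim.AdamW(self.parameters(), lr=self.hparams.learning_rate, eps=1e-5)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=self.hparams.learning_rate,
        epochs=self.trainer.max_epochs,
        steps_per_epoch=len(self.datamodule.train_dataloader()),
    )
    return {
        "optimizer": optimizer,
        # "interval": "step" makes Lightning call scheduler.step() after every batch
        "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
    }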
MDEwOkRpc2N1c3Npb24zMjU4OTQz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6450#discussioncomment-462160
How to do this test in a lightning way?
My model has the property that I can prepare the test data in multiple different ways, which results in a set of equally plausible predictions for each data point (one prediction for each way of preparing the test data). By combining these predictions, it is possible to slightly boost overall performance on the test set. Right now, I do this in the following (abstract) way: for iPrep in range(nPrep): preppedData=prepare_data(testData,iPrep) predictions[iPrep]=trainer.test(model,preppedData) final_predictions=combinePredictions(predictions) (obviously it is much longer in reality) is there a proper, 'lightning' way of hiding this loop inside the model, so I can still use the trainer for this, but only call it once?
Maybe the prediction API can help you (currently beta, will be released in version 1.3). You can have multiple predict dataloaders (your different test data). If you do predictions = trainer.predict(model, predict_dataloaders=[data1, data2, data3, ...]), it returns the predictions grouped by dataloader index. Then you can combine them with your own function: final_predictions = combinePredictions(predictions). Not sure if this is 100% what you are looking for, but it can at least eliminate that one for loop you have. Optionally, you can also override predict_step in the LightningModule. If you install the latest version, you can use this predict feature already. The documentation will be included in the 1.3 release.
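A minimal sketch using the question's own placeholder helpers (prepare_data and combinePredictions, assuming prepare_data returns a Dataset):

from torch.utils.data import DataLoader

loaders = [DataLoader(prepare_data(testData, iPrep), batch_size=32)
           for iPrep in range(nPrep)]
# one list of predictions per dataloader, in the same order
predictions = trainer.predict(model, loaders)
final_predictions = combinePredictions(predictions)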
MDEwOkRpc2N1c3Npb24zMzE0MDUx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6938#discussioncomment-647200
Confusion in training_step_end() API
Hi! I am playing around with pytorch-lightning. Problem I tried to use 2 gpus and manually merge training loss described in Lightning in 2 steps. But when I call training_step_end(), it just gives me only one gpu's loss, not all gpus loss. Question Do I have to reduce loss myself in training_step_end()? My code import torch from torch import nn from torch import optim from torch.utils.data import DataLoader from torchvision import transforms, datasets import torch.nn.functional as F import pytorch_lightning as pl class BaseImageClassificationSystem(pl.LightningModule): def __init__(self): super().__init__() self.backbone = nn.Sequential(nn.Conv2d(1, 64, 3), nn.AdaptiveAvgPool2d((1, 1))) self.fc = nn.Linear(64, 10) def forward(self, x): return self.fc(torch.flatten(self.backbone(x), 1)) def training_step(self, batch, batch_idx): x, y = batch y_hat = self.fc(torch.flatten(self.backbone(x), 1)) loss = F.cross_entropy(y_hat, y) self.log('train/loss', loss) return loss def training_step_end(self, losses): print(losses) return (losses[0] + losses[1]) / 2 def configure_optimizers(self): return optim.SGD(self.parameters(), lr=0.01) train_dl = DataLoader(datasets.MNIST(root='./', train=True, transform=transforms.ToTensor(), download=True), batch_size=128) model = BaseImageClassificationSystem() trainer = pl.Trainer(num_processes=8, gpus='1, 2', accelerator='ddp', max_epochs=100) trainer.fit(model, train_dl) Output tensor(2.3002, device='cuda:2', grad_fn=<NllLossBackward>) tensor(2.2930, device='cuda:1', grad_fn=<NllLossBackward>)
It's mentioned in the docs that this configuration only works with DP or DDP2. In your code you are using DDP, so there is only one loss item per process: gradient sync happens inside DDP, each device has its own loss and backward call, and no manual reduction of the loss across devices is required.
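For comparison, a minimal sketch of the DP setup the docs describe, assuming a Lightning version from the same era that accepts accelerator='dp':

# DP splits each batch across the GPUs and gathers the per-GPU outputs,
# so training_step_end receives one loss per sub-batch and must reduce them.
trainer = pl.Trainer(gpus=2, accelerator='dp', max_epochs=100)

# in the LightningModule
def training_step_end(self, losses):
    # under DP, losses arrives with one entry per GPU
    return losses.mean()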
D_kwDOCqWgoM4ANrAK
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9617#discussioncomment-1882531
Run specific code only once (which generates randomized values) before starting DDP
Hi. I have a function which generates a set of random values (hyperparameters) which are then used to create my model. I want to run this function only once, then use it to create my model and then start ddp training on this model. However, with the current setup, when I start ddp, the randomize function gets called again, so now I have 2 GPU processes, each having initialized the model with different set of hyperparameters. (random values from both calls aren't same) If I add if os.getenv("LOCAL_RANK",0): before my randomize function, then there is no way for the second GPU process to access the hyperparameters generated by the first GPU process. How do I go about this ? Thanks.
Hey @Gateway2745, You could do this from pytorch_lightning.utilities.cli import LightningCLI from unittest import mock import optuna config_path = ... class MyModel(LightningModule): def __init__(self, num_layers): ... def objective(trial): num_layers = trial.suggest_uniform('num_layers', 10, 100) with mock.patch("sys.argv", ["any.py", "--config", str(config_path), "--trainer.accelerator", "ddp_spawn", "--trainer.gpu", "2", "--model.num_layers", str(num_layers)]): cli = LightningCLI(MyModel, MyDataModule) return cli.trainer.model_checkpoint.best_score study = optuna.create_study() study.optimize(objective, n_trials=100) study.best_params
MDEwOkRpc2N1c3Npb24zNTQwNzY5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9134#discussioncomment-1240962
how to use Apex DistributedDataParallel with Lightining?
I was wondering if there's a way to use apex.parallel.DistributedDataParallel instead of the PyTorch native DistributedDataParallel. (I am trying to reproduce a paper that used Apex DDP and Apex mixed precision, and I am getting lower results using the PyTorch native one.)
Here is a quick draft of what you could try: from pytorch_lightning.plugins.training_type import DDPPlugin from apex.parallel import DistributedDataParallel class ApexDDPPlugin(DDPPlugin): def _setup_model(self, model: Module): return DistributedDataParallel(module=model, device_ids=self.determine_ddp_device_ids(), **self._ddp_kwargs) @property def lightning_module(self): return self.module.module I'm not sure if apex DistributedDataParallel supports device ids (it seems not??), you may need to remove it. Use it in the trainer: trainer = Trainer(gpus=2, strategy=ApexDDPPlugin(), precision=...) trainer.fit(model)
D_kwDOCqWgoM4AOMiz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10922#discussioncomment-1751106
Confusion about NeptuneLogger.save_dir implementation
In the docstring it says the save_dir is None, but then why does it return a path? Should we change either the docstring, or the implementation here? pytorch-lightning/pytorch_lightning/loggers/neptune.py Lines 516 to 524 in 9d8faec @property def save_dir(self) -> Optional[str]: """Gets the save directory of the experiment which in this case is ``None`` because Neptune does not save locally. Returns: the root directory where experiment logs get saved """ return os.path.join(os.getcwd(), ".neptune")
Hi @daniellepintz Prince Canuma here, a Data Scientist at Neptune.ai. I will let the engineering team know about this. By default, Neptune will create a '.neptune' folder inside the current working directory. In the case of #6867 it changes the model checkpoint path to be the '.neptune' folder when the user doesn't define their own path, for example with the ModelCheckpoint callback. Check this commit: 5ac80ec But still, a bit confused because I thought Neptune doesn't save anything locally. Neptune uses the '.neptune' folder to store metadata temporarily. For example, when you track a run in offline mode, or when there is a network connectivity issue, Neptune automatically switches to offline mode and saves the data to disk. Later you can synchronize the locally stored metadata with the servers using the neptune sync CLI command. Docs: https://docs.neptune.ai/api-reference/command-line-interface#neptune https://docs.neptune.ai/api-reference/command-line-interface#neptune-sync
D_kwDOCqWgoM4AOtvt
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11766#discussioncomment-2274385
Combine outputs in test epochs when using DDP
I'm training a model across two GPUs on patient data (id). In my test steps, I output dictionaries, which contain the id, as well as all the metrics. I store these (a list with a dict per id) at the end of the test epoch, so I can later on statistically evaluate model performances. I'm experiencing a problem with the test step, however. # Test step def test_step(self, batch, batch_idx): # Get new input and predict, then calculate loss x, y, id = batch["input"], batch["target"], batch["id"] # Infer and time inference start = time() y_hat = self.test_inference(x, self, **self.test_inference_params) end = time() # Calculate metrics id = id[0] if len(id) == 1 else tuple(id) # Output dict with duration of inference output = {"id": id, "time": end - start} # Add other metrics to output dict for m, pars in zip(self.metrics, self.metrics_params): metric_value = m(y_hat, y, **pars) if hasattr(metric_value, "item"): metric_value = metric_value.item() output[f"test_{m.__name__}"] = metric_value return output # Test epoch end (= test end) def test_epoch_end(self, outputs): # Go over outputs and gather self.test_results = outputs #self.all_gather(outputs) I hadn't considered this before (as I'm used to training on a single GPU), but the test_results attribute now only contains half of the outputs (one half per process). So when my main script reaches this section, only half the output is effectively stored: log("Evaluating model.") trainer.test(model=model, dataloaders=brats.val_dataloader()) results = model.test_results # Save test results log("Saving results.") np.save(file=join(result_dir, f'{model_name}_v{version}_fold{fold_index}.npy'), arr=results) I have read about the self.all_gather method, but I'm not sure it suits my needs. I want to merge the lists, not reduce anything. Also, they're not Tensors, but dicts. How can I store all dicts across both DDP processes?
all_gather is different from all_reduce: it doesn't apply any math operation. Roughly: all_gather -> collect outputs from all devices; all_reduce -> in general, collect outputs from all devices and reduce them (apply a math op). Is all_gather not working for you?
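If plain all_gather is awkward because the outputs are dicts rather than tensors, a minimal sketch using torch.distributed.all_gather_object (PyTorch >= 1.8), which gathers arbitrary picklable Python objects:

import torch.distributed as dist

def test_epoch_end(self, outputs):
    if dist.is_available() and dist.is_initialized():
        gathered = [None] * dist.get_world_size()
        # collect this rank's list of dicts from every process
        dist.all_gather_object(gathered, outputs)
        # flatten the per-rank lists into one list of dicts
        outputs = [o for rank_outputs in gathered for o in rank_outputs]
    self.test_results = outputs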
D_kwDOCqWgoM4AOSKC
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11086#discussioncomment-1823377
Lighting Module Loaded From Checkpoint Generates Different Output Each Time
I'm trying to gain some confidence in a model that seems to be training fine. As a simple sanity check I'm trying to make sure I can load and then test a checkpoint with the same input, expecting to produce the same output each and every time (I'm using the same input and checkpoint each time, so I expect the output to be the same). Unfortunately, I'm observing different output each time I reload the checkpoint. Here is the essence of what I'm doing: for n in range(2): my_module = MyLightningModule.load_from_checkpoint(ckpt_path) my_dataset = MyDataset() batch = my_dataset.get_sanity_test_batch() # confirmed to be the same batch every time # this output is different every time (???) output = my_module.model.generate(batch, max_length=some_length) It is also probably worth noting that the model trained/loaded by my_module is a Huggingface T5 transformer (T5ForConditionalGeneration). Please help me figure out how to ensure the output is consistent after loading a trained checkpoint.
Turns out that I was doing something a little different in the actual code than: my_module = MyLightningModule.load_from_checkpoint(ckpt_path) When I do this exactly, things work as expected :)
D_kwDOCqWgoM4APFBW
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12397#discussioncomment-2408892
__about__.py "version" field automatically updated (unwanted behavior)
When editing code in VSCode, the pytorch_lightning/__about__.py file keeps getting automatically updated __version__ = "1.5.0dev" -> __version__ = "20210827" like in 5f4f3c5#r697656347 Does anyone know how to stop this from happening? Thanks!
After a few days of investigation, it turned out that it's not from any extensions but from our script in .github/: pytorch-lightning/.github/prepare-nightly_version.py Line 13 in 83ce1bf print(f"prepare init '{_PATH_INFO}' - replace version by {now_date}") I locally confirmed that running $ pytest in the project root directory runs all the scripts matching *.py except those in the excluded directories, so it also runs the above file. I'm not sure why this issue doesn't affect everyone, but I'll submit a fix anyway.
MDEwOkRpc2N1c3Npb24zNTQ0NDA4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9181#discussioncomment-1500203
How set number of epochs
How do I set the number of epochs to train? What have you tried? Looking for documentation. Looking for examples.
Up-to-date link: https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#max-epochs
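For example (model being your LightningModule):

from pytorch_lightning import Trainer

trainer = Trainer(max_epochs=20)  # training stops after 20 epochs
trainer.fit(model)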
MDEwOkRpc2N1c3Npb24yNzkyNTMw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5818#discussioncomment-459751
Proper way to log things when using DDP
Hi, I was wondering what is the proper way of logging metrics when using DDP. I noticed that if I want to print something inside validation_epoch_end it will be printed twice when using 2 GPUs. I was expecting validation_epoch_end to be called only on rank 0 and to receive the outputs from all GPUs, but I am not sure this is correct anymore. Therefore I have several questions: validation_epoch_end(self, outputs) - When using DDP does every subprocess receive the data processed from the current GPU or data processed from all GPUs, i.e. does the input parameter outputs contains the outputs of the entire validation set, from all GPUs? If outputs is GPU/process specific what is the proper way to calculate any metric on the entire validation set in validation_epoch_end when using DDP? I understand that I can solve the printing by checking self.global_rank == 0 and printing/logging only in that case, however I am trying to get a deeper understanding of what I am printing/logging in this case. Here is a code snippet from my use case. I would like to be able to report f1, precision and recall on the entire validation dataset and I am wondering what is the correct way of doing it when using DDP. def _process_epoch_outputs(self, outputs: List[Dict[str, Any]] ) -> Tuple[torch.Tensor, torch.Tensor]: """Creates and returns tensors containing all labels and predictions Goes over the outputs accumulated from every batch, detaches the necessary tensors and stacks them together. Args: outputs (List[Dict]) """ all_labels = [] all_predictions = [] for output in outputs: for labels in output['labels'].detach(): all_labels.append(labels) for predictions in output['predictions'].detach(): all_predictions.append(predictions) all_labels = torch.stack(all_labels).long().cpu() all_predictions = torch.stack(all_predictions).cpu() return all_predictions, all_labels def validation_epoch_end(self, outputs: List[Dict[str, Any]]) -> None: """Logs f1, precision and recall on the validation set.""" if self.global_rank == 0: print(f'Validation Epoch: {self.current_epoch}') predictions, labels = self._process_epoch_outputs(outputs) for i, name in enumerate(self.label_columns): f1, prec, recall, t = metrics.get_f1_prec_recall(predictions[:, i], labels[:, i], threshold=None) self.logger.experiment.add_scalar(f'{name}_f1/Val', f1, self.current_epoch) self.logger.experiment.add_scalar(f'{name}_Precision/Val', prec, self.current_epoch) self.logger.experiment.add_scalar(f'{name}_Recall/Val', recall, self.current_epoch) if self.global_rank == 0: print((f'F1: {f1}, Precision: {prec}, ' f'Recall: {recall}, Threshold {t}'))
Hi all, Sorry we have not got back to you in time, let me try to answer some of your questions: Is validation_epoch_end only called on rank 0? No, it is called by all processes What does the sync_dist flag do: Here is the essential code: pytorch-lightning/pytorch_lightning/core/step_result.py Lines 108 to 115 in a72a799 if sync_dist and isinstance(value, (torch.Tensor, numbers.Number)): is_dist_initialized = torch.distributed.is_available() and torch.distributed.is_initialized() # TODO: Find a way to make the reduction only once, so we don't need to clone. if is_dist_initialized and isinstance(value, torch.Tensor): value = value.clone() else: value = torch.tensor(value, device=device, dtype=torch.float) value = sync_fn(value, group=sync_dist_group, reduce_op=sync_dist_op) If sync_dist=True then it will as default call the sync_ddp function which will sum the value across all processes using torch.distributed.all_reduce pytorch-lightning/pytorch_lightning/utilities/distributed.py Line 120 in a72a799 def sync_ddp( Use this flag if you want to synchronize the value between different processes. How to print stuff in distributed lightning: Recommended is using either the rank_zero_info function. Import as: from pytorch_lightning.utilities import rank_zero_info or use the rank_zero_only decorator (imported from the same module) which can be wrapped around any function such that it only gets called on rank=0. Each logger experiment is decorated with rank_zero_experiment which internally calls rank_zero_only pytorch-lightning/pytorch_lightning/loggers/base.py Lines 31 to 43 in a72a799 def rank_zero_experiment(fn: Callable) -> Callable: """ Returns the real experiment on rank 0 and otherwise the DummyExperiment. """ @wraps(fn) def experiment(self): @rank_zero_only def get_experiment(): return fn(self) return get_experiment() or DummyExperiment() return experiment What about pytorch_lightning.metrics (now known as torchmetrics) Our own metrics have custom synchronization going on. Any metric will automatically synchronize between different processes whenever metric.compute() is called. Metrics calculated this way should therefore not be logged using sync_dist=True. Recommended way of logging: Using self.log in your lightning module Not sure this answers all questions.
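A minimal sketch combining the patterns above (sync_dist for logged values, rank_zero_info for printing):

import torch
import torch.nn.functional as F
from pytorch_lightning.utilities import rank_zero_info

def validation_step(self, batch, batch_idx):
    x, y = batch
    loss = F.cross_entropy(self(x), y)
    # sync_dist=True reduces the value across processes before logging
    self.log("val_loss", loss, sync_dist=True)
    return loss

def validation_epoch_end(self, outputs):
    # runs on every rank, but rank_zero_info only prints on rank 0
    rank_zero_info(f"epoch {self.current_epoch}: "
                   f"local mean val_loss {torch.stack(outputs).mean():.4f}")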
MDEwOkRpc2N1c3Npb24zMjcwMTE3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6501#discussioncomment-553152
When are buffers moved to gpu?
I have an issue with a weighted mse function that I instantiate in the setup, with a buffer as parameter. Something like this: @torch.jit.script def weighted_mse_func(weights, y, y_hat): # weighted regression loss reg_loss = torch.dot(weights,torch.mean(F.mse_loss(y_hat, y, reduction='none'), dim=0)) return reg_loss def weighted_mse(weights): def func(y, y_hat): return weighted_mse_func(weights, y, y_hat) return func class model(pl.LightningModule): def __init__(self, weights): weights = torch.tensor(weights.copy(), dtype=self.dtype, device=self.device) self.register_buffer("weights", weights) def setup(self, stage): super().setup(stage) self.loss = weighted_mse(self.weights) When initializing training on the GPU I get an error because self.weights is on CPU and not in GPU, if after the error I check the device of the buffer it's on GPU. So if I re-run the trainer, it works fine, also works fine if I call model.cuda() before training. What is going on? Why is the buffer not in GPU on the setup where it fails, but it is afterward? Is something asynchronous going on here?
It seems that PyTorch doesn't move buffers in-place (as it does for parameters), so references to a buffer become stale if the module is moved from one device to another. This issue is discussed in pytorch/pytorch#43815.
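One workaround is to look the buffer up at call time instead of capturing it in a closure during setup; a minimal sketch:

import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class Model(pl.LightningModule):
    def __init__(self, weights):
        super().__init__()
        self.register_buffer("weights", torch.as_tensor(weights, dtype=torch.float))

    def loss(self, y, y_hat):
        # reading self.weights here, at call time, always picks up the buffer
        # on whatever device the module currently lives on
        per_dim = F.mse_loss(y_hat, y, reduction="none").mean(dim=0)
        return torch.dot(self.weights, per_dim)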
D_kwDOCqWgoM4AO7Ye
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12207#discussioncomment-2311707
How to have a silent lr_find()
Is there a way to make lr_find not print anything while searching?
Hey @grudloff, I don't believe this is supported. You could either capture the logs or contribute the feature :) Best, T.C
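A rough sketch of the log-capturing option, assuming most of the output comes from Lightning's logger and stdout (the tqdm progress bar writes to stderr and may still show up), with trainer and model being your existing objects:

import io
import logging
from contextlib import redirect_stdout

# silence Lightning's own logger during the search
logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)

# swallow anything printed to stdout while lr_find runs
with redirect_stdout(io.StringIO()):
    lr_finder = trainer.tuner.lr_find(model)

print(lr_finder.suggestion())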
D_kwDOCqWgoM4ANrmA
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9635#discussioncomment-1368974
Loss Module with inner Network
I have a loss module which is part of my LightningModule, with its own inner pretrained VGG network. The problem comes when I try to use the checkpoint (that is saved automatically) to resume training or to test my model: I get an "unexpected key(s) in state_dict" error that points to the keys of the network that is part of the loss function. Is there a way to load my model properly?
Dear @vasl12, You have multiple options: pass strict=False when loading; drop the key before saving the weights (use the on_save_checkpoint hook to access the checkpoint and drop the associated key); or re-create the missing module in your model. Best, T.C
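Minimal sketches of the first two options (MyLitModule and the perceptual_loss.vgg. prefix are placeholders for your own names):

import pytorch_lightning as pl

class MyLitModule(pl.LightningModule):
    # option 2: drop the loss network's weights before the checkpoint is written
    def on_save_checkpoint(self, checkpoint):
        state = checkpoint["state_dict"]
        for key in [k for k in state if k.startswith("perceptual_loss.vgg.")]:
            del state[key]

# option 1: or simply ignore the mismatching keys when loading
model = MyLitModule.load_from_checkpoint("last.ckpt", strict=False)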
MDEwOkRpc2N1c3Npb24zNDYzNjQy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8445#discussioncomment-1020834
Checkpoints not getting saved
I'm using the below callback `checkpoint_callback = pl.callbacks.ModelCheckpoint( dirpath=other_arguments.output_dir, monitor="val_loss", save_top_k=other_arguments.save_top_k, save_last=other_arguments.save_last, mode='min' ) train_params = dict( accumulate_grad_batches=other_arguments.gradient_accumulation_steps, gpus=training_arguments.n_gpu, deterministic=True, max_epochs=other_arguments.num_train_epochs, precision=16 if training_arguments.fp_16 else 32, amp_level=training_arguments.opt_level, gradient_clip_val=training_arguments.max_grad_norm, checkpoint_callback=checkpoint_callback, fast_dev_run=other_arguments.do_fast_dev_run, )` The output dir is empty which means the checkpoints are not getting saved. There is no error as well
Pass checkpoint_callback to the callbacks argument instead. The checkpoint_callback Trainer argument is a boolean-only flag that indicates whether checkpointing should be enabled or not.
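With the question's own variables, a minimal version of the fixed Trainer call would look like this:

import pytorch_lightning as pl

trainer = pl.Trainer(
    max_epochs=other_arguments.num_train_epochs,
    gpus=training_arguments.n_gpu,
    callbacks=[checkpoint_callback],  # the ModelCheckpoint instance goes here
)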
MDEwOkRpc2N1c3Npb24zMzg1NzEw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7745#discussioncomment-795356
How to set Checkpoints to be used in the automatically generated `version_N` directories?
If the TensorBoard logger is set up as shown logger = TensorBoardLogger(name="MyModel") checkpoint_callback = ModelCheckpoint( filename="{epoch}-{step}-{val_loss:.2f}", monitor="val_loss", save_top_k=5, ) trainer = pl.Trainer( default_root_dir=ROOT_DIR, callbacks=[checkpoint_callback], logger=[logger], ) how do we configure the checkpoints to be written to directories that are automatically named version_0, version_1, the way it is if you do not pass a logger to Trainer? trainer = pl.Trainer( default_root_dir=ROOT_DIR, callbacks=[checkpoint_callback], ) If we pass in a logger to Trainer, the checkpoints are written to <root_path>/<experiment_name>/<integer>/checkpoints while the tensorboard logs and hparams.yaml are written to <root_path>/<experiment_name>/version_<integer>/ If we do not pass in a logger to Trainer, then checkpoint files, Tensorboard files and hparams.yaml are all written to the same directory <root_path>/<experiment_name>/version_<integer>/ How can both checkpoints and tensorboard files we written to the same version_<integer> directory?
Hi For this you need to set the "default_root_dir" in the Trainer, and set the save_dir of the Logger to the same. This works for me (latest PL version): from argparse import ArgumentParser import torch from torch.nn import functional as F import pytorch_lightning as pl from pl_examples.basic_examples.mnist_datamodule import MNISTDataModule from pytorch_lightning.callbacks import ModelCheckpoint from pytorch_lightning.loggers import TensorBoardLogger class LitClassifier(pl.LightningModule): def __init__(self, hidden_dim=128, learning_rate=1e-3): super().__init__() self.save_hyperparameters() self.l1 = torch.nn.Linear(28 * 28, self.hparams.hidden_dim) self.l2 = torch.nn.Linear(self.hparams.hidden_dim, 10) def forward(self, x): x = x.view(x.size(0), -1) x = torch.relu(self.l1(x)) x = torch.relu(self.l2(x)) return x def training_step(self, batch, batch_idx): x, y = batch y_hat = self(x) loss = F.cross_entropy(y_hat, y) return loss def validation_step(self, batch, batch_idx): x, y = batch y_hat = self(x) loss = F.cross_entropy(y_hat, y) self.log('valid_loss', loss) def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate) @staticmethod def add_model_specific_args(parent_parser): parser = parent_parser.add_argument_group("LitClassifier") parser.add_argument('--hidden_dim', type=int, default=128) parser.add_argument('--learning_rate', type=float, default=0.0001) return parent_parser def cli_main(): pl.seed_everything(1234) parser = ArgumentParser() parser = pl.Trainer.add_argparse_args(parser) parser = LitClassifier.add_model_specific_args(parser) parser = MNISTDataModule.add_argparse_args(parser) args = parser.parse_args() dm = MNISTDataModule.from_argparse_args(args, num_workers=2) model = LitClassifier(args.hidden_dim, args.learning_rate) ROOT_DIR = "here" mylogger = TensorBoardLogger(name="MyModel", save_dir=ROOT_DIR) ckpt_callback = ModelCheckpoint(monitor="valid_loss", filename="{epoch}-{step}-{valid_loss:.2f}") trainer = pl.Trainer.from_argparse_args(args, default_root_dir=ROOT_DIR, logger=mylogger, callbacks=[ckpt_callback], limit_train_batches=2, limit_val_batches=2) trainer.fit(model, datamodule=dm) if __name__ == '__main__': cli_main()
MDEwOkRpc2N1c3Npb24zMzA1OTk3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6821#discussioncomment-568783
Question about the log system
Hello, I have some questions about the self.log function and batch_size when using the Trainer. If I have two GPUs and I want to train my model with a batch size of 16 per GPU using DDP, what batch_size should I set in the DataModule, and what batch_size should I pass to self.log, if I want my metrics to be calculated correctly?
Hey @exiawsh, it should stay the batch size for a single device only. With DDP, if you set batch_size=7 then each device gets a batch of batch_size=7, and the effective batch size increases with the number of devices. If you want to log metrics accumulated across devices, you need to set sync_dist=True. Check out the section here: https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#automatic-logging
D_kwDOCqWgoM4AOqpi
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11670#discussioncomment-2080711
Why is my gpu-util low?
I use one node and 4 gpus for training. And I use dali dataloader, I don't know why my gpu util is low, and training is also slow. About 1:30 per epoch, I train for 200 epoches, which will cost 5 hours. It's slower than the project mmclassification, which only cost 3.5 hours. Compared to mmclassification project which can only support torch.utils.data.dataloader, I think if I use dali_dataloader, it will accelerate my training. But as you can see, it's the opposite. I don't know why. Could anyone give me some advice? I use cifar10 dataset. And I train on slurm. Here is my code. main.py import pytorch_lightning as pl from pytorch_lightning.callbacks import ModelCheckpoint from net import ResNet18 if __name__ == '__main__': model = ResNet18() trainer = pl.Trainer( max_epochs=200,log_every_n_steps=1, log_gpu_memory='min_max',gpus=4,num_nodes=1,accelerator='ddp', fast_dev_run=False,callbacks=[ModelCheckpoint(monitor='val_accuracy',mode='max')], progress_bar_refresh_rate=1,replace_sampler_ddp=False) trainer.fit(model) net.py import torch import torch.nn as nn import torch.nn.functional as F import pytorch_lightning as pl from dataloader import dali_DataLoader,HybridPipe,dali_CIFAR10 class BasicBlock(nn.Module): expansion = 1 def __init__(self, in_planes, planes, stride=1): super(BasicBlock, self).__init__() self.conv1 = nn.Conv2d( in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False) self.bn1 = nn.BatchNorm2d(planes) self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False) self.bn2 = nn.BatchNorm2d(planes) self.shortcut = nn.Sequential() if stride != 1 or in_planes != self.expansion*planes: self.shortcut = nn.Sequential( nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False), nn.BatchNorm2d(self.expansion*planes) ) def forward(self, x): out = F.relu(self.bn1(self.conv1(x))) out = self.bn2(self.conv2(out)) out += self.shortcut(x) out = F.relu(out) return out class Bottleneck(nn.Module): expansion = 4 def __init__(self, in_planes, planes, stride=1): super(Bottleneck, self).__init__() self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False) self.bn1 = nn.BatchNorm2d(planes) self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False) self.bn2 = nn.BatchNorm2d(planes) self.conv3 = nn.Conv2d(planes, self.expansion * planes, kernel_size=1, bias=False) self.bn3 = nn.BatchNorm2d(self.expansion*planes) self.shortcut = nn.Sequential() if stride != 1 or in_planes != self.expansion*planes: self.shortcut = nn.Sequential( nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False), nn.BatchNorm2d(self.expansion*planes) ) def forward(self, x): out = F.relu(self.bn1(self.conv1(x))) out = F.relu(self.bn2(self.conv2(out))) out = self.bn3(self.conv3(out)) out += self.shortcut(x) out = F.relu(out) return out class ResNet(pl.LightningModule): def __init__(self, block, num_blocks, num_classes=10): super(ResNet, self).__init__() self.in_planes = 64 self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False) self.bn1 = nn.BatchNorm2d(64) self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1) self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2) self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2) self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2) self.linear = nn.Linear(512*block.expansion, num_classes) self.correct = 0 self.total_size = 0 def _make_layer(self, block, planes, num_blocks, stride): 
strides = [stride] + [1]*(num_blocks-1) layers = [] for stride in strides: layers.append(block(self.in_planes, planes, stride)) self.in_planes = planes * block.expansion return nn.Sequential(*layers) def forward(self, x): out = F.relu(self.bn1(self.conv1(x))) out = self.layer1(out) out = self.layer2(out) out = self.layer3(out) out = self.layer4(out) out = F.avg_pool2d(out, 4) out = out.view(out.size(0), -1) out = self.linear(out) return out def training_step(self, batch, batch_idx): x, y = batch x = self(x) loss_fn = nn.CrossEntropyLoss() loss = loss_fn(x,y) predicted = torch.argmax(x, dim=1, keepdim=False) self.correct += (predicted == y).sum().item() self.total_size += y.size(0) self.log('train_loss', loss,prog_bar=True, logger=True) self.log('train_accuracy', self.correct/self.total_size,prog_bar=True, logger=True) return loss def validation_step(self, batch, batch_idx): x, y = batch x = self(x) loss_fn = nn.CrossEntropyLoss() loss = loss_fn(x,y) predicted = torch.argmax(x, dim=1, keepdim=False) self.correct += (predicted == y).sum().item() self.total_size += y.size(0) self.log('val_loss', loss,on_step=False, on_epoch=True,prog_bar=True, logger=True) self.log('val_accuracy', self.correct/self.total_size,prog_bar=True, logger=True) return loss def validation_epoch_end(self,out): self.log('val_accuracy', self.correct/self.total_size,prog_bar=True, logger=True) self.correct=0 self.total_size=0 def train_epoch_end(self,out): self.log('train_accuracy', self.correct/self.total_size,prog_bar=True, logger=True) self.correct=0 self.total_size=0 def configure_optimizers(self): optimizer = torch.optim.SGD(self.parameters(), lr=0.1) scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [100,150], gamma=0.1, last_epoch=-1, verbose=False) return [optimizer],[scheduler] def train_dataloader(self): loader = dali_DataLoader(pipelines=HybridPipe(dali_CIFAR10(root='./data'), batch_size=32, pad_ratio=1.25,num_threads=4, is_distribute=True, crop_size=32,ramdom_flip=True, normalize=dict(mean=[125.307, 122.961, 113.8575],std=[51.5865, 50.847, 51.255]))) return loader def val_dataloader(self): loader = dali_DataLoader(pipelines=HybridPipe(dali_CIFAR10(root='./data',test_mode=True), batch_size=100, normalize=dict(mean=[125.307, 122.961, 113.8575],std=[51.5865, 50.847, 51.255]))) return loader def ResNet18(): return ResNet(BasicBlock, [2, 2, 2, 2]) dataloader.py import os,sys,math,random,pickle import torch import numpy as np import torch.distributed as dist try: from nvidia import dali from nvidia.dali.pipeline import Pipeline import nvidia.dali.types as types import nvidia.dali.fn as fn import nvidia.dali.ops as ops from nvidia.dali.plugin.pytorch import DALIClassificationIterator except: print('Could not import DALI') class dali_DataLoader(): def __init__(self, pipelines, **kwargs): pipelines.build() try: self._dali_iterator = DALIClassificationIterator(pipelines=pipelines, size=len(pipelines.iterator.indices)) self.sampler = pipelines.iterator except: self._dali_iterator = DALIClassificationIterator(pipelines=pipelines, reader_name='Reader') self.sampler = self def set_epoch(self,epoch): pass def __iter__(self): return self def __len__(self): return int(math.ceil(self._dali_iterator._size / self._dali_iterator.batch_size)) def __next__(self): try: data = next(self._dali_iterator) except StopIteration: self._dali_iterator.reset() raise StopIteration # Decode the data output input = data[0]['data'] target = data[0]['label'].squeeze().long() return input,target class identity(): def 
__call__(self,x,*tmp,**kargs): return x class HybridPipe(Pipeline): def __init__(self,dataset, batch_size, file_root=None,filelist_path=None,num_threads=1, pad_ratio=1,is_distribute=True, resize=None,crop_size=[0,0],ramdom_flip=False,normalize=None,random_rotate_degree=None): device_id = torch.cuda.current_device() print("device_id",device_id) super(HybridPipe, self).__init__(batch_size, num_threads, device_id, seed=12 + device_id) if is_distribute: if filelist_path is not None: if file_root is None: raise Exception("if provide filelist_path, then must provide file_root") else: self.input = ops.readers.File(file_root=file_root,file_list=filelist_path,num_shards=dist.get_world_size(),prefetch_queue_depth=num_threads,read_ahead=True,shard_id=dist.get_rank()) self.decode = ops.decoders.Image(device="mixed", output_type=types.RGB) self.use_file=True else: self.iterator = iter(Distribute_Input_Iter(dataset, batch_size)) #self.input = ops.ExternalSource(source=self.iterator, num_outputs=2) self.input = ops.ExternalSource() self.input_label = ops.ExternalSource() self.use_file=False else: if filelist_path is not None: if file_root is None: raise Exception("if provide filelist_path, then must provide file_root") else: self.input = ops.readers.File(file_root=file_root,file_list=filelist_path,num_shards=dist.get_world_size(),prefetch_queue_depth=num_threads,read_ahead=True,shard_id=dist.get_rank()) self.decode = ops.decoders.Image(device="mixed", output_type=types.RGB) self.use_file=True else: self.iterator = iter(Normal_Input_Iter(dataset, batch_size)) self.input = ops.ExternalSource() self.input_label = ops.ExternalSource() self.use_file=False dali_device = "gpu" if isinstance(resize,(tuple,list)) and len(resize)==2: self.resize = ops.Resize(size=tuple(resize)) elif isinstance(resize,(int, float)): self.resize = ops.Resize(size=tuple(resize,resize)) else: self.resize = identity() if normalize is not None and isinstance(normalize,dict): self.mean = normalize.get('mean',0) self.std = normalize.get('std',1) else: self.mean = 0 self.std = 1 if isinstance(crop_size, (int, float)): crop_size = [crop_size,crop_size] if (len(crop_size)==2 and (crop_size[0]==0 or crop_size[1]==0)): self.crop = identity() else: self.crop = ops.Crop(device=dali_device, crop_h=crop_size[0], crop_w=crop_size[1]) if pad_ratio>1: self.pad = ops.Paste(device=dali_device, ratio=pad_ratio, fill_value=0) else: self.pad = identity() self.cmnp = ops.CropMirrorNormalize(device="gpu", dtype=types.FLOAT, output_layout=types.NCHW, mean=self.mean, std=self.std ) if ramdom_flip: self.coin = ops.random.CoinFlip(probability=0.5) else: self.coin = lambda :0 if random_rotate_degree is not None: try: tmp = math.abs(int(random_rotate_degree)) self.degree = ops.random.Uniform(range=(-tmp, tmp)) self.rotate = ops.Rotate() except: self.degree = lambda :0 self.rotate = identity() else: self.degree = lambda :0 self.rotate = identity() def iter_setup(self): if not self.use_file: (images, labels) = self.iterator.__next__() self.feed_input(self.jpegs, images, layout="HWC") self.feed_input(self.labels, labels) def define_graph(self): rng = self.coin() print() if self.use_file: self.jpegs,self.labels = self.input(name="Reader") self.jpegs = self.decode(self.jpegs) else: self.jpegs= self.input() self.labels = self.input_label() output = self.jpegs output = self.resize(output) output = self.rotate(output, angle=self.degree()) output = self.pad(output.gpu()) output = self.crop(output) output = self.cmnp(output, mirror=rng) return [output, self.labels] class 
Distribute_Input_Iter(): def __init__(self,dataset, batch_size, num_replicas=None,rank=None,shuffle=True,seed=0,drop_last=False): if num_replicas is None: if not dist.is_available(): raise RuntimeError("Requires distributed package to be available") num_replicas = dist.get_world_size() #num_replicas = 1 if rank is None: if not dist.is_available(): raise RuntimeError("Requires distributed package to be available") rank = dist.get_rank() #rank = 0 if rank >= num_replicas or rank < 0: raise ValueError( "Invalid rank {}, rank should be in the interval" " [0, {}]".format(rank, num_replicas - 1)) self.dataset = dataset self.batch_size = batch_size self.num_replicas = num_replicas self.rank = rank self.epoch = 0 self.drop_last = drop_last # If the dataset length is evenly divisible by # of replicas, then there # is no need to drop any data, since the dataset will be split equally. if self.drop_last and len(self.dataset) % self.num_replicas != 0: # type: ignore # Split to nearest available length that is evenly divisible. # This is to ensure each rank receives the same amount of data when # using this Sampler. self.num_samples = math.ceil( # `type:ignore` is required because Dataset cannot provide a default __len__ # see NOTE in pytorch/torch/utils/data/sampler.py (len(self.dataset) - self.num_replicas) / self.num_replicas # type: ignore ) else: self.num_samples = math.ceil(len(self.dataset) / self.num_replicas) # type: ignore self.total_size = self.num_samples * self.num_replicas self.shuffle = shuffle self.seed = seed self.epoch=0 indices = list(range(len(self.dataset))) # type: ignore if not self.drop_last: # add extra samples to make it evenly divisible padding_size = self.total_size - len(indices) if padding_size <= len(indices): indices += indices[:padding_size] else: indices += (indices * math.ceil(padding_size / len(indices)))[:padding_size] else: # remove tail of data to make it evenly divisible. indices = indices[:self.total_size] assert len(indices) == self.total_size,'len(indices) != self.total_size' # subsample indices = indices[self.rank:self.total_size:self.num_replicas] assert len(indices) == self.num_samples,'len(indices) != self.num_samples' self.indices = indices def set_epoch(self,epoch): self.epoch = epoch def __iter__(self): self.i = 0 self.n = len(self.indices) return self def __next__(self): batch = [] labels = [] should_shuffle = False for _ in range(self.batch_size): if self.i % self.n == self.n-1: should_shuffle = True img, label = self.dataset.__getitem__(self.indices[self.i]) batch.append(img) labels.append(label) self.i = (self.i + 1) % self.n if should_shuffle: random.shuffle(self.indices) return (batch, labels) class Normal_Input_Iter(): def __init__(self,dataset, batch_size): self.dataset = dataset self.batch_size = batch_size self.indices = list(range(len(self.dataset))) def __iter__(self): self.i = 0 self.n = len(self.dataset) return self def __next__(self): batch = [] labels = [] should_shuffle = False for _ in range(self.batch_size): if self.i % self.n == self.n-1: should_shuffle = True img, label = self.dataset.__getitem__(self.indices[self.i]) batch.append(img) labels.append(label) self.i = (self.i + 1) % self.n if should_shuffle: random.shuffle(self.indices) return (batch, labels)
When you compare the two implementations, make sure to change as few variables as possible. For example, since you train with DDP, run the comparison on only 2 GPUs so that you can be sure it's not bottlenecked by the CPU. I don't know the DALI data loader very well, but I doubt it can guarantee a throughput increase for all use cases.
MDEwOkRpc2N1c3Npb24zMzI2NTUw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7082#discussioncomment-626240
Should the total epoch size be less when using multi-gpu DDP?
If I have 100 training examples and 100 validation examples, and I run on a single GPU with a batch size of 10, the tqdm bar will show 20 epoch iterations. If I run on 2 GPUs with DDP and the same batch size, the tqdm bar still shows 20 epoch iterations, but isn't the effective batch size now 20 instead of 10 because there are 2 GPUs? Shouldn't the total number of iterations be half? Thanks for any clarification.
Hi @jipson7 , First of all: You're right, that's how it should be. We tried to reproduce this, but for us this produced the following (correct) output. Do you have a minimal reproduction example? Epoch 0: 100%|███████████████████████████████████████████████████████| 10/10 [00:00<00:00, 17.23it/s, loss=-43.6, v_num=272] seen train: 5 seen train: 5
MDEwOkRpc2N1c3Npb24zMzMzNzUx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7175#discussioncomment-648416
Patience reset in EarlyStopping once loss has improved
Hey, I'm wondering whether the patience parameter is, or can be, reset once the loss has started improving again. Reading the docs, it sounds like the patience parameter is the absolute number of steps the loss is allowed to not decrease. Is it possible to have the patience counter reset to its original value whenever we have improved?
This is exactly how patience is supposed to work. As you can see here, the counter resets to 0 upon improvement.
MDEwOkRpc2N1c3Npb24zMzk5MTEw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7849#discussioncomment-832492
Error with predict()
Hi @awaelchli, and thanks for your time. As you asked in the pull request, I am pinging you here. For others who see this, it's a discussion about the Trainer.predict method and how it runs BatchNorm layers; the code is below: https://colab.research.google.com/drive/1jujP4F_prSmbRz-F_wGfWPTKGOmY5DPE?usp=sharing What is the problem with my approach?
Predict takes a dataloader, not a tensor. It still "works" because the trainer just iterates through the batch dimension, but then you get an error later because the input lost the batch dimension, and batch norm doesn't work with batch size 1.
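A minimal sketch of wrapping a tensor so it keeps its batch dimension (note that each batch then arrives as a one-element tuple, so unpack it accordingly in forward/predict_step):

import torch
from torch.utils.data import DataLoader, TensorDataset

x = torch.randn(8, 3, 32, 32)                        # hypothetical input batch
loader = DataLoader(TensorDataset(x), batch_size=8)   # preserves the batch dimension
predictions = trainer.predict(model, loader)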
MDEwOkRpc2N1c3Npb24zMzI1MzIx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7068#discussioncomment-623687
Loading checkpoint for LightningModule that defines a system
Systems The style guide encourages to use systems, like this one class LitModel(LightningModule): def __init__(self, encoder: nn.Module = None, decoder: nn.Module = None): super().__init__() self.encoder = encoder self.decoder = decoder I have problems loading the checkpoint for such modules. Below example fails. How can I make it work? Minimal example import os import torch from torch import nn from torch.utils.data import Dataset from pytorch_lightning import Trainer, LightningModule class RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.randn(length, size) def __getitem__(self, index): return self.data[index] def __len__(self): return self.len class SystemModel(LightningModule): def __init__(self, encoder: nn.Module = None, decoder: nn.Module = None, multiplier=10): super().__init__() self.save_hyperparameters('multiplier') self.encoder = encoder self.decoder = decoder self.multiplier = multiplier print("type of hparams", type(self.hparams)) print("class of hparams type", self.hparams.__class__.__name__) def forward(self, x): return self.multiplier * self.decoder(self.encoder(x)) def loss(self, batch, prediction): # An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction)) def step(self, x): x = self.forward(x) out = torch.nn.functional.mse_loss(x, torch.ones_like(x)) return out def training_step(self, batch, batch_idx): output = self.forward(batch) loss = self.loss(batch, output) return {"loss": loss} def training_step_end(self, training_step_outputs): return training_step_outputs def training_epoch_end(self, outputs) -> None: torch.stack([x["loss"] for x in outputs]).mean() def validation_step(self, batch, batch_idx): output = self.forward(batch) loss = self.loss(batch, output) return {"x": loss} def validation_epoch_end(self, outputs) -> None: torch.stack([x['x'] for x in outputs]).mean() def test_step(self, batch, batch_idx): output = self.forward(batch) loss = self.loss(batch, output) return {"y": loss} def test_epoch_end(self, outputs) -> None: torch.stack([x["y"] for x in outputs]).mean() def configure_optimizers(self): optimizer = torch.optim.SGD(self.parameters(), lr=0.1) lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1) return [optimizer], [lr_scheduler] train_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) val_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) test_data = torch.utils.data.DataLoader(RandomDataset(32, 64)) # model model = SystemModel(torch.nn.Linear(32, 16), torch.nn.Linear(16, 2), multiplier=15) trainer = Trainer( default_root_dir=os.getcwd(), limit_train_batches=1, limit_val_batches=1, max_epochs=1, weights_summary=None, ) trainer.fit(model, train_data, val_data) ckpt_path = trainer.checkpoint_callback.best_model_path # Try to load # Loading fails.... model = SystemModel.load_from_checkpoint(ckpt_path) # Fails # Edit: answer by Adrian # the correct way to load is model = SystemModel.load_from_checkpoint(ckpt_path, encoder=torch.nn.Linear(32, 16), decoder=torch.nn.Linear(16, 2)) print("multiplier:", model.multiplier)
When you reload you need to specify the missing arguments for instantiation: model = SystemModel.load_from_checkpoint(ckpt_path, encoder=..., decoder=...)
MDEwOkRpc2N1c3Npb24zMzk1MDQ3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7818#discussioncomment-829954
Enabling dropout during trainer.predict
I want to enable dropout during .predict and tried implementing the following: model.eval() for m in model.modules(): if m.__class__.__name__.startswith('Dropout'): m.train() ... trainer.predict( model, dataloaders=data_loader, return_predictions=True ) It seems like .predict is overriding this, because I get identical predictions with different seeds. Can someone explain how to accomplish this, or point me to the relevant docs? (I couldn't find them and tried looking for a while.) Thank you!
Hey @35ajstern! You can enable this inside predict_step itself. Check this out: https://github.com/PyTorchLightning/pytorch-lightning/blob/f35e2210e240b443fd4dafed8fe2e30ee7d579ea/docs/source/common/production_inference.rst#prediction-api This is part of a PR and will be available in the docs once merged.
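A minimal sketch of re-enabling only the dropout layers inside predict_step (the trainer puts the whole model in eval mode before the predict loop starts):

import torch.nn as nn
import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        # flip only the Dropout layers back to train mode; everything else
        # (e.g. batch norm) stays in eval mode
        for m in self.modules():
            if isinstance(m, nn.Dropout):
                m.train()
        return self(batch)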
D_kwDOCqWgoM4AOsPR
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11710#discussioncomment-2102555
Model with best validation accuracy
Is there a way to save the model with the best validation accuracy when using early stopping? I believe right now, the model weights are the weights from the latest snapshot; but i am looking for a way to access the model with the best performance on validation.
By "on validation", do you mean while calling trainer.validate, or the validation that happens within the trainer.fit call?
D_kwDOCqWgoM4AN6mz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10126#discussioncomment-1531522
How can one use an external optimizer with LightningCLI?
I would like to use Adafactor as my optimizer with LightningCLI. I've tried the method described in the documentation for custom optimizers but it didn't work. Can anybody tell me how they would train a model with this optimizer using LightningCLI?
Hi! I got it to work in the meantime. I added this to the main file where I call the CLI: import transformers from pytorch_lightning.utilities.cli import OPTIMIZER_REGISTRY @OPTIMIZER_REGISTRY class Adafactor(transformers.Adafactor): def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) The main issue was in the config file: apparently one needs to write: optimizer: class_path: __main__.Adafactor instead of: optimizer: class_path: Adafactor Doing the former, I got it to work. By the way, is there a way to have the optimizer registered in a separate file from the one that calls the CLI?
D_kwDOCqWgoM4AOPZa
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11016#discussioncomment-1784783
Custom scheduler
I would like to provide my own learning rate scheduler. What I would love is do is doing this in the lightning style, e.g. implementing some hooks: class MyScheduler(pl.LightningScheduler): ... def on_step(self...): ... def on_epoch(self...): ... Is something like this possible? How do other people handle custom schedulers? PS: I asked this question before on the [deprecated forum.] (https://forums.pytorchlightning.ai/t/custom-scheduler-class/1238)
Can you please elaborate on the usage a bit more? For example, what do you want to do inside the on_step and on_epoch methods?
D_kwDOCqWgoM4ANwn_
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9817#discussioncomment-1428743
on_train_epoch_end() runs in the middle of epochs
Hi, I'm trying to calculate some metrics and generate some images to save at the end of each epoch. I put this code in on_train_epoch_end() (I also tried using a custom callback), but the function seems to be called in the middle of epochs, approximately 3-4 times per epoch. Surely this isn't intended behaviour? Could it be to do with me using a combined loader?
@inigoval thanks for reporting this. Could you provide a code snippet so that we can reproduce it?
MDEwOkRpc2N1c3Npb24zMzUxNTEz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7375#discussioncomment-697922
What's the difference between `on_step` and `on_epoch` of `pl_module.log`
https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html?highlight=on_epoch%20#pytorch_lightning.core.lightning.LightningModule.log.params.on_epoch I'm using horovod to train the model. I wonder if on_step and on_epoch average the metrics across all GPUs automatically. In other words, do we need to explicitly average the metrics in functions like training_epoch_end and validation_epoch_end?
Dear @marsggbo, When using self.log(..., on_step=True), the metric is computed per step locally, as synchronisation adds a performance hit. When using self.log(..., on_step=True, sync_dist=True), the metric is computed per step across GPUs. When using self.log(..., on_epoch=True), the metrics are computed across GPUs and across the epoch's batches automatically. Best, T.C
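For illustration, the three cases side by side in a training_step (metric names are arbitrary):

import torch.nn.functional as F

def training_step(self, batch, batch_idx):
    x, y = batch
    loss = F.cross_entropy(self(x), y)
    # per step, this process only (no synchronisation overhead)
    self.log("loss_step_local", loss, on_step=True)
    # per step, reduced across GPUs
    self.log("loss_step_synced", loss, on_step=True, sync_dist=True)
    # accumulated over the epoch's batches and reduced across devices
    self.log("loss_epoch", loss, on_epoch=True, sync_dist=True)
    return loss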
MDEwOkRpc2N1c3Npb24zNTA5MDI4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8806#discussioncomment-1149287
How to access the strategy of the trainer
Hi, I am trying to make my code invariant to the choice of strategies by being able to compute the global batch size which depends on the strategy. For example, for DDP it is N * batch_size with N being the number of processes. The use case I can think of is using the global batch size to initialize the optimizer. trainer(num_nodes=1, gpus=2, strategy='ddp') # pass the strategy ddp for example class MyLightningModule(pl.LightningModule): @property def global_batch_size(self) -> int: if self.trainer.strategy is None: return self.trainer.datamodule.train.loader.batch_size elif self.trainer.strategy is DDPStrategy: return self.trainer.num_nodes * self.trainer.gpus *\ # There might be a better way to compute the self.trainer.datamodule.train.loader.batch_size # number of processes using the strategy ... def configure_optimizers(self) -> Dict[Any, Any]: optimizer, scheduler = hydra.utils.instantiate( self.hparams.optimizer, model=self, batch_size=self.global_batch_size, _recursive_=False) return { 'optimizer': optimizer, 'lr_scheduler': scheduler } To do so, I would like to retrieve inside my Lightning module the strategy used by my trainer. I tried to find in the trainer code how to access the strategy and I found the property: # in pytorch_lightning.trainer.trainer.py class Trainer(...): ... @property def strategy(self) -> Strategy: return self._accelerator_connector.strategy However self.trainer.strategy in configure_optimizers raises AttributeError: 'Trainer' object has no attribute 'strategy'. Weirdly, self.trainer._accelerator_connector.strategy works and returns the passed strategy in the trainer. Yet, if I understood correctly the _accelerator_connector should resolve the strategy 'ddp' to DDPStrategy in its initialization but it returns 'ddp': # in pytorch_lightning.trainer.connectors.accelerator_connector.py class AcceleratorConnector(...): def __init__(...): ... self.strategy = self.final_strategy() ... Is it possible to access the strategy used for training?
just out of curiosity, what sort of scheduler/optimizer are you initializing using the global_batch_size?
D_kwDOCqWgoM4AOYR-
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11272#discussioncomment-1880949