Input type (MPSFloatType) and weight type (torch.FloatTensor) should be the same
I am trying to run this notebook on Apple M1 (1st gen) running MacOS 12.4, libs freeze: >pip3 freeze anyio @ file:///private/tmp/jupyterlab--anyio-20211211-70040-1yv1wmx/anyio-3.4.0 appnope==0.1.2 argon2-cffi @ file:///private/tmp/jupyterlab--argon2-cffi-20211211-70040-1er07d0/argon2-cffi-21.2.0 argon2-cffi-bindings @ file:///private/tmp/jupyterlab--argon2-cffi-bindings-20211211-70040-o64kwi/argon2-cffi-bindings-21.2.0 asttokens==2.0.5 attrs @ file:///private/tmp/jupyterlab--attrs-20211211-70040-6u3qxt/attrs-21.2.0 Babel==2.9.1 backcall @ file:///private/tmp/jupyterlab--backcall-20211211-70040-acdr42/backcall-0.2.0 beniget==0.4.1 black==21.12b0 bleach==4.1.0 certifi==2022.5.18.1 cffi==1.15.0 charset-normalizer==2.0.12 click==8.0.3 cycler==0.10.0 Cython==0.29.24 debugpy @ file:///private/tmp/jupyterlab--debugpy-20211211-70040-2j9lay/debugpy-1.5.1 decorator==5.1.0 defusedxml @ file:///private/tmp/jupyterlab--defusedxml-20211211-70040-uowur4/defusedxml-0.7.1 entrypoints @ file:///private/tmp/jupyterlab--entrypoints-20211211-70040-1r2y5g4/entrypoints-0.3 et-xmlfile==1.1.0 executing==0.8.2 finnhub-python==2.4.5 gast==0.5.2 GDAL==3.4.0 gensim==4.1.2 graphviz==0.19.1 idna==3.3 imageio==2.13.5 ipykernel==6.6.0 ipython==7.30.1 ipython-genutils==0.2.0 ipywidgets==7.6.5 jedi==0.18.1 Jinja2==3.0.3 joblib==1.1.0 json5==0.9.6 jsonschema @ file:///private/tmp/jupyterlab--jsonschema-20211211-70040-1np642r/jsonschema-4.2.1 jupyter==1.0.0 jupyter-client==7.1.0 jupyter-console==6.4.0 jupyter-core==4.9.1 jupyter-server @ file:///private/tmp/jupyterlab--jupyter-server-20211211-70040-1u7h7vl/jupyter_server-1.13.1 jupyterlab @ file:///private/tmp/jupyterlab-20211211-70040-1ltrjpx/jupyterlab-3.2.5 jupyterlab-pygments==0.1.2 jupyterlab-server @ file:///private/tmp/jupyterlab--jupyterlab-server-20211211-70040-iufjhi/jupyterlab_server-2.8.2 jupyterlab-widgets==1.0.2 kiwisolver==1.3.2 lxml==4.6.3 MarkupSafe==2.0.1 matplotlib==3.4.3 matplotlib-inline==0.1.3 midi @ git+https://github.com/vishnubob/python-midi.git@abb85028c97b433f74621be899a0b399cd100aaa midi-to-dataframe @ git+https://github.com/TaylorPeer/midi-to-dataframe@35347f787f01a2326234ad278d8c40bee3817f1d mido==1.2.10 mistune==0.8.4 multitasking==0.0.9 mypy-extensions==0.4.3 nbclassic @ file:///private/tmp/jupyterlab--nbclassic-20211211-70040-1fah2fe/nbclassic-0.3.4 nbclient @ file:///private/tmp/jupyterlab--nbclient-20211211-70040-ptwp5d/nbclient-0.5.9 nbconvert==6.3.0 nbformat==5.1.3 nest-asyncio @ file:///private/tmp/jupyterlab--nest-asyncio-20211211-70040-72pz5e/nest_asyncio-1.5.4 networkx==2.6.3 notebook==6.4.6 numpy==1.23.0rc1 openpyxl==3.0.9 packaging @ file:///private/tmp/jupyterlab--packaging-20211211-70040-1f14ddt/packaging-21.3 pandas==1.4.2 pandocfilters==1.5.0 parso==0.8.3 pathspec==0.9.0 pexpect==4.8.0 pickleshare==0.7.5 Pillow==9.1.1 platformdirs==2.4.1 ply==3.11 prometheus-client==0.12.0 prompt-toolkit @ file:///private/tmp/jupyterlab--prompt-toolkit-20211211-70040-hcpjwc/prompt_toolkit-3.0.24 ptyprocess @ file:///private/tmp/jupyterlab--ptyprocess-20211211-70040-wjbvpa/ptyprocess-0.7.0 pure-eval==0.2.1 pybind11==2.8.0 pycparser==2.21 Pygments==2.10.0 pyparsing==3.0.6 pyrsistent @ file:///private/tmp/jupyterlab--pyrsistent-20211211-70040-1fnadg/pyrsistent-0.18.0 python-dateutil==2.8.2 pythran==0.10.0 pytz==2022.1 PyWavelets==1.2.0 PyYAML==6.0 pyzmq @ file:///private/tmp/jupyterlab--pyzmq-20211211-70040-2xtuon/pyzmq-22.3.0 qtconsole==5.2.2 QtPy==2.0.0 requests==2.27.1 scikit-image==0.19.1 scikit-learn==1.1.dev0 scipy==1.8.1 seaborn==0.11.2 
Send2Trash==1.8.0 six==1.16.0 smart-open==5.2.1 sniffio @ file:///private/tmp/jupyterlab--sniffio-20211211-70040-wu3dri/sniffio-1.2.0 squarify==0.4.3 stack-data==0.1.4 terminado @ file:///private/tmp/jupyterlab--terminado-20211211-70040-dw1vl6/terminado-0.12.1 testpath @ file:///private/tmp/jupyterlab--testpath-20211211-70040-895z1/testpath-0.5.0 threadpoolctl==3.0.0 tifffile==2021.11.2 tomli==1.2.3 torch==1.13.0.dev20220528 torchaudio==0.11.0 torchsummary==1.5.1 torchtext==0.10.0 torchvision==0.14.0a0+f0f8a3c torchviz==0.0.2 tornado==6.1 tqdm==4.62.3 traitlets @ file:///private/tmp/jupyterlab--traitlets-20211211-70040-ru76xv/traitlets-5.1.1 typing_extensions==4.2.0 urllib3==1.26.9 wcwidth==0.2.5 webencodings==0.5.1 websocket-client==1.2.3 wget==3.2 widgetsnbextension==3.5.2 yfinance==0.1.64 in the code , am setting device = torch.device('mps') at this line: history = [evaluate(model, valid_dl)] am getting runtime error Input type (MPSFloatType) and weight type (torch.FloatTensor) should be the same Trace: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <timed exec> in <module> /opt/homebrew/Cellar/jupyterlab/3.2.5/libexec/lib/python3.9/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) 28 return cast(F, decorate_context) 29 /var/folders/mz/qfpvpvf550s039lrnxg70whh0000gn/T/ipykernel_11483/1143432410.py in evaluate(model, val_loader) 3 def evaluate(model, val_loader): 4 model.eval() ----> 5 outputs = [model.validation_step(batch) for batch in val_loader] 6 return model.validation_epoch_end(outputs) 7 /var/folders/mz/qfpvpvf550s039lrnxg70whh0000gn/T/ipykernel_11483/1143432410.py in <listcomp>(.0) 3 def evaluate(model, val_loader): 4 model.eval() ----> 5 outputs = [model.validation_step(batch) for batch in val_loader] 6 return model.validation_epoch_end(outputs) 7 /var/folders/mz/qfpvpvf550s039lrnxg70whh0000gn/T/ipykernel_11483/446280773.py in validation_step(self, batch) 16 def validation_step(self, batch): 17 images, labels = batch ---> 18 out = self(images) # Generate prediction 19 loss = F.cross_entropy(out, labels) # Calculate loss 20 acc = accuracy(out, labels) # Calculate accuracy /opt/homebrew/Cellar/jupyterlab/3.2.5/libexec/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1129 or _global_forward_hooks or _global_forward_pre_hooks): -> 1130 return forward_call(*input, **kwargs) 1131 # Do not call functions when jit is used 1132 full_backward_hooks, non_full_backward_hooks = [], [] /var/folders/mz/qfpvpvf550s039lrnxg70whh0000gn/T/ipykernel_11483/3789274317.py in forward(self, xb) 29 30 def forward(self, xb): # xb is the loaded batch ---> 31 out = self.conv1(xb) 32 out = self.conv2(out) 33 out = self.res1(out) + out /opt/homebrew/Cellar/jupyterlab/3.2.5/libexec/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1129 or _global_forward_hooks or _global_forward_pre_hooks): -> 1130 return forward_call(*input, **kwargs) 1131 # Do not call functions when jit is used 1132 full_backward_hooks, non_full_backward_hooks = [], [] 
/opt/homebrew/Cellar/jupyterlab/3.2.5/libexec/lib/python3.9/site-packages/torch/nn/modules/container.py in forward(self, input) 137 def forward(self, input): 138 for module in self: --> 139 input = module(input) 140 return input 141 /opt/homebrew/Cellar/jupyterlab/3.2.5/libexec/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1129 or _global_forward_hooks or _global_forward_pre_hooks): -> 1130 return forward_call(*input, **kwargs) 1131 # Do not call functions when jit is used 1132 full_backward_hooks, non_full_backward_hooks = [], [] /opt/homebrew/Cellar/jupyterlab/3.2.5/libexec/lib/python3.9/site-packages/torch/nn/modules/conv.py in forward(self, input) 457 458 def forward(self, input: Tensor) -> Tensor: --> 459 return self._conv_forward(input, self.weight, self.bias) 460 461 class Conv3d(_ConvNd): /opt/homebrew/Cellar/jupyterlab/3.2.5/libexec/lib/python3.9/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias) 453 weight, bias, self.stride, 454 _pair(0), self.dilation, self.groups) --> 455 return F.conv2d(input, weight, bias, self.stride, 456 self.padding, self.dilation, self.groups) 457 RuntimeError: Input type (MPSFloatType) and weight type (torch.FloatTensor) should be the same MPS is still new and am trying to figure out the cause here, any suggestions are welcome, the code runs fine if torch device is set to CPU - just takes so much time. Thanks, Deep Kamal Singh
My guess is that the model has not been placed onto the MPS device. If you place your model onto the MPS device (by calling model.to(device)), does your code work?
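For example, a minimal sketch assuming your notebook defines device = torch.device('mps') and calls the network model (both assumptions about your code):

device = torch.device('mps')
model = model.to(device)                 # move parameters and buffers to MPS
history = [evaluate(model, valid_dl)]

The inputs and the weights must live on the same device, so each batch also has to be moved (e.g. images.to(device), labels.to(device)) if your data-loading wrapper does not already do that.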
https://stackoverflow.com/questions/72421284/
How can I efficiently mask out certain pairs in (2, N) tensor?
I have a torch tensor edge_index of shape (2, N) that represents edges in a graph. For each (x, y) there is also a (y, x), where x and y are node IDs (ints). During the forward pass of my model I need to mask out certain edges. So, for example, I have: n1 = [0, 3, 4] # list of node ids as x n2 = [1, 2, 1] # list of node ids as y edge_index = [[1, 2, 0, 1, 3, 4, 2, 3, 1, 4, 2, 4], # actual edges as (x, y) and (y, x) [2, 1, 1, 0, 4, 3, 3, 2, 4, 1, 4, 2]] # do something that efficiently removes (x, y) and (y, x) edges as formed by n1 and n2 Final edge_index should look like: >>> edge_index [[1, 2, 3, 4, 2, 4], [2, 1, 4, 3, 4, 2]] Preferably we need to efficiently make some kind of boolean mask that I can apply to edge index e.g. as edge_index[:, mask] or something like that. Could also be done in numpy but I'd like to avoid converting back and forth. Edit #1: If that can't be done, then I can think of a way so that, instead of n1 and n2, I have access to the indices of the positions I need to exclude in one tensor e.g. _except=[2, 3, 6, 7, 8, 9] (by making a dict/index once in the beginning). Is there a way to get the desired result by "telling" edge_index to drop the indices in except? edge_index[:, _except] gives me the ones I want to get rid of. I need its complement operation. Edit #2: I managed to do it like this: mask = torch.ones(edge_index.shape[1], dtype=torch.bool) for i in range(len(n1)): mask = mask & ~(torch.tensor([n1[i], n2[i]], dtype=torch.long) == edge_index.T).all(dim=1) & ~(torch.tensor([n2[i], n1[i]], dtype=torch.long) == edge_index.T).all(dim=1) edge_index[:, mask] but it is too slow and I can't use it. How can I speed it up? Edit #3: I managed to solve this Edit#1 efficiently with: mask = torch.ones(edge_index.shape[1], dtype=torch.bool) mask[_except] = False edge_index[:, mask] Still interested in solving the original problem if someone comes up with something...
If you're ok with the way you suggested at Edit#1, you get the complement result by: edge_index[:, [i for i in range(edge_index.shape[1]) if not (i in _except)]] hope this is fast enough for your requirement. Edit 1: from functools import reduce ids = torch.stack([torch.tensor(n1), torch.tensor(n2)], dim=1) ids = torch.cat([ids, ids[:, [1,0]]], dim=0) res = edge_index.unsqueeze(0).repeat(6, 1, 1) == ids.unsqueeze(2).repeat(1, 1, 12) mask = ~reduce(lambda x, y: x | (reduce(lambda p, q: p & q, y)), res, reduce(lambda p, q: p & q, res[0])) edge_index[:, mask] Edit 2: ids = torch.stack([torch.tensor(n1), torch.tensor(n2)], dim=1) ids = torch.cat([ids, ids[:, [1,0]]], dim=0) res = edge_index.unsqueeze(0).repeat(6, 1, 1) == ids.unsqueeze(2).repeat(1, 1, 12) mask = ~(res.sum(1) // 2).sum(0).bool() edge_index[:, mask]
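A fully vectorised variant that avoids the hard-coded repeat sizes (a sketch using broadcasting; n1, n2 and edge_index are the example values from the question):

import torch

n1 = torch.tensor([0, 3, 4])
n2 = torch.tensor([1, 2, 1])
edge_index = torch.tensor([[1, 2, 0, 1, 3, 4, 2, 3, 1, 4, 2, 4],
                           [2, 1, 1, 0, 4, 3, 3, 2, 4, 1, 4, 2]])

# pairs to drop, in both orientations: shape (2, 2*len(n1))
remove = torch.cat([torch.stack([n1, n2]), torch.stack([n2, n1])], dim=1)

# (2, E, 1) == (2, 1, R) broadcasts to (2, E, R); an edge matches a pair when both rows match
matches = (edge_index.unsqueeze(2) == remove.unsqueeze(1)).all(dim=0)   # (E, R)
mask = ~matches.any(dim=1)                                              # (E,)
print(edge_index[:, mask])
# tensor([[1, 2, 3, 4, 2, 4],
#         [2, 1, 4, 3, 4, 2]])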
https://stackoverflow.com/questions/72427054/
Difference between sum and mean in .backward()
I know we convert the tensor to a scalar before applying backward(), but when should I use sum and when mean? some_loss_function.sum().backward() -OR- some_loss_function.mean().backward()
There is no canonical answer to your question. Essentially what you're asking is "should I average or sum my loss?", and as readers we have no knowledge of your problem or of what this loss function corresponds to, so it all depends on your use case. Generally, though, you would prefer averaging over summation because you usually don't want the loss value to scale with the dimensionality of the output: with a sum, a higher-dimensional output produces a larger loss (and larger gradients), whereas a mean stays roughly constant w.r.t. the size of the output tensor. If you sum your loss, you end up scaling the loss value, and the gradients inferred from it, with the output size.
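As a concrete illustration of the scaling, here is a minimal sketch (the factor-of-N difference in the gradients is the only point being made):

import torch

x = torch.ones(4, requires_grad=True)

(2 * x).sum().backward()
print(x.grad)     # tensor([2., 2., 2., 2.])

x.grad = None
(2 * x).mean().backward()
print(x.grad)     # tensor([0.5000, 0.5000, 0.5000, 0.5000])  -> divided by N=4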
https://stackoverflow.com/questions/72429838/
Difference between forward and train_step in Pytorch Lightning?
I have a transfer learning Resnet set up in Pytorch Lightning. the structure is borrowed from this wandb tutorial https://wandb.ai/wandb/wandb-lightning/reports/Image-Classification-using-PyTorch-Lightning--VmlldzoyODk1NzY and from looking at the documentation https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html I am confused about the difference between the def forward () and the def training_step() methods. Initially in the PL documentation, the model is not called in the training step, only in forward. But forward is also not called in the training step. I have been running the model on data and the outputs look sensible (I have an image callback and I can see that the model is learning, and getting a good accuracy result at the end). But I am worried that given the forward method is not being called, the model is somehow not being implemented? Model code is: class TransferLearning(pl.LightningModule): "Works for Resnet at the moment" def __init__(self, model, learning_rate, optimiser = 'Adam', weights = [ 1/2288 , 1/1500], av_type = 'macro' ): super().__init__() self.class_weights = torch.FloatTensor(weights) self.optimiser = optimiser self.thresh = 0.5 self.save_hyperparameters() self.learning_rate = learning_rate #add metrics for tracking self.accuracy = Accuracy() self.loss= nn.CrossEntropyLoss() self.recall = Recall(num_classes=2, threshold=self.thresh, average = av_type) self.prec = Precision( num_classes=2, average = av_type ) self.jacq_ind = JaccardIndex(num_classes=2) # init model backbone = model num_filters = backbone.fc.in_features layers = list(backbone.children())[:-1] self.feature_extractor = nn.Sequential(*layers) # use the pretrained model to classify damage 2 classes num_target_classes = 2 self.classifier = nn.Linear(num_filters, num_target_classes) def forward(self, x): self.feature_extractor.eval() with torch.no_grad(): representations = self.feature_extractor(x).flatten(1) x = self.classifier(representations) return x def training_step(self, batch, batch_idx): x, y = batch logits = self(x) loss = self.loss(logits, y) # training metrics preds = torch.argmax(logits, dim=1) acc = self.accuracy(preds, y) recall = self.recall(preds, y) precision = self.prec(preds, y) jac = self.jacq_ind(preds, y) self.log('train_loss', loss, on_step=True, on_epoch=True, logger=True) self.log('train_acc', acc, on_step=True, on_epoch=True, logger=True) self.log('train_recall', recall, on_step=True, on_epoch=True, logger=True) self.log('train_precision', precision, on_step=True, on_epoch=True, logger=True) self.log('train_jacc', jac, on_step=True, on_epoch=True, logger=True) return loss def validation_step(self, batch, batch_idx): x, y = batch logits = self(x) loss = self.loss(logits, y) # validation metrics preds = torch.argmax(logits, dim=1) acc = self.accuracy(preds, y) recall = self.recall(preds, y) precision = self.prec(preds, y) jac = self.jacq_ind(preds, y) self.log('val_loss', loss, prog_bar=True) self.log('val_acc', acc, prog_bar=True) self.log('val_recall', recall, prog_bar=True) self.log('val_precision', precision, prog_bar=True) self.log('val_jacc', jac, prog_bar=True) return loss def test_step(self, batch, batch_idx): x, y = batch logits = self(x) loss = self.loss(logits, y) # validation metrics preds = torch.argmax(logits, dim=1) acc = self.accuracy(preds, y) recall = self.recall(preds, y) precision = self.prec(preds, y) jac = self.jacq_ind(preds, y) self.log('test_loss', loss, prog_bar=True) self.log('test_acc', acc, prog_bar=True) 
self.log('test_recall', recall, prog_bar=True) self.log('test_precision', precision, prog_bar=True) self.log('test_jacc', jac, prog_bar=True) return loss def configure_optimizers(self,): print('Optimise with {}'.format(self.optimiser) ) # optimizer = self.optimiser_dict[self.optimiser](self.parameters(), lr=self.learning_rate) # Support Adam, SGD, RMSPRop and Adagrad as optimizers. if self.optimiser == "Adam": optimiser = optim.AdamW(self.parameters(), lr = self.learning_rate) elif self.optimiser == "SGD": optimiser = optim.SGD(self.parameters(), lr = self.learning_rate) elif self.optimiser == "Adagrad": optimiser = optim.Adagrad(self.parameters(), lr = self.learning_rate) elif self.optimiser == "RMSProp": optimiser = optim.RMSprop(self.parameters(), lr = self.learning_rate) else: assert False, f"Unknown optimizer: \"{self.optimiser}\"" return optimiser
I am confused about the difference between the def forward() and the def training_step() methods. Quoting from the docs: "In Lightning we suggest separating training from inference. The training_step defines the full training loop. We encourage users to use the forward to define inference actions." So forward() defines your prediction/inference actions. It doesn't even need to be part of your training_step, in which you define your whole training loop. Nonetheless you can choose to have it in your training_step if you want it that way. An example where forward() isn't part of the training_step:

def forward(self, x):
    # in lightning, forward defines the prediction/inference actions
    embedding = self.encoder(x)
    return embedding

def training_step(self, batch, batch_idx):
    # training_step defines the train loop.
    # in this case it is independent of forward
    x, y = batch
    x = x.view(x.size(0), -1)
    z = self.encoder(x)
    x_hat = self.decoder(z)
    loss = F.mse_loss(x_hat, x)
    # Logging to TensorBoard by default
    self.log("train_loss", loss)
    return loss

the model is not called in the training step, only in forward. But forward is also not called in the training step
The fact that forward() is not called in your training_step is because self(x) does it for you: calling self(x) goes through __call__, which dispatches to forward(). You can alternatively call forward() explicitly instead of using self(x).
I am worried that given the forward method is not being called, the model is somehow not being implemented?
As long as the metrics you log with self.log move in the right direction, you will know that your model gets called correctly and is learning.
https://stackoverflow.com/questions/72437583/
How to rescale a pytorch tensor to interval [0,1]?
I would like to rescale a pytorch tensor named outmap to the [0, 1] interval. I tried this: outmap_min = torch.min(outmap, dim=1, keepdim=True) outmap_max = torch.max(outmap, dim=1, keepdim=True) outmap = (outmap - outmap_min) / (outmap_max - outmap_min) And I am getting this error: TypeError: unsupported operand type(s) for -: 'Tensor' and 'torch.return_types.min'
There are many ways to answer the question posed in your title (e.g., min-max normalization, or one of many non-linear functions mapping (-infinity, infinity) to [0, 1]). Based on the context of the post, I'm assuming you just want to implement min-max normalization. torch.min() does not return a tensor; it returns a type akin to a tuple (and in fact the documentation says it's a namedtuple). It's a two-tuple; the first element is the tensor you're looking for (min), and the second element is an indexing tensor specifying the indices of the minimum values (argmin). The same goes for torch.max(). So, suppose your tensor is of size [N, M]. If you're trying to min-max normalize each "row" (dimension 0) based on the min and max of the M elements (columns) in that row, you'd compute the min and max along dimension 1. This would give you N mins and N maxes -- a min and max for each row. You'd then apply the normalization. This is what your code was intended to do, but you have to unpack the return values of torch.min() and torch.max(), discarding the argmin and argmax in this case: outmap_min, _ = torch.min(outmap, dim=1, keepdim=True) outmap_max, _ = torch.max(outmap, dim=1, keepdim=True) outmap = (outmap - outmap_min) / (outmap_max - outmap_min) # Broadcasting rules apply If your tensor is of size [N, ...], where ... indicates an arbitrary number of dimensions each of arbitrary size, and you want to min-max normalize each row as before, then the solution is a bit more tricky. Unfortunately, torch.min() and torch.max() each only accept a single dimension. If you want to compute extrema along multiple dimensions, you either have to call torch.min() / torch.max() once for each dimension, or you have to reshape / flatten your tensor so that all of the targeted dimensions are merged into one. So if your tensor is of size [N, ...], it just has to be reshaped to size [N, R, 1, 1, ...], where R is the product of the sizes of the remaining dimensions, and the 1's preserve the number of dimensions (this makes broadcasting easier later). For instance, if your tensor is of size [N, A, B, C], you should reshape it to size [N, A*B*C, 1, 1]. Following that, you can compute mins / maxes along the R-sized dimension 1, as before: flattened_outmap = outmap.view(outmap.shape[0], -1, 1, 1) # Use 1's to preserve the number of dimensions for broadcasting later, as explained outmap_min, _ = torch.min(flattened_outmap, dim=1, keepdim=True) outmap_max, _ = torch.max(flattened_outmap, dim=1, keepdim=True) outmap = (outmap - outmap_min) / (outmap_max - outmap_min) # Broadcasting rules apply If you wanted to, you could programmatically construct the placeholder 1's as a list whose length is equal to the number of dimensions minus 2 (to account for the [N, R] dimensions)), and then expand the list. e.g.: flattened_outmap = outmap.view(outmap.shape[0], -1, *([1] * (len(outmap.shape) - 2))) In more complicated cases where you want the min / max normalization statistics to be computed across multiple non-contiguous dimensions, you'll probably have to permute the tensor prior to reshaping it. This can't be done with a view, so you'd have to create an extra copy of the tensor. This would surely require more memory than simply calling torch.min() / torch.max() once per dimension, but the tensorization of it might make it faster (it'd probably depend on the number of dimensions; you'd have to benchmark it).
https://stackoverflow.com/questions/72440228/
Convert from tensor to cpu for dictionary values
I have a dictionary which has the following values and I am trying to convert my tensors in 'train_acc' to a list of float values like the rest so that I can use it to plot graph but I have no idea how to do it. defaultdict(list, {'train_acc': [tensor(0.9889, device='cuda:0', dtype=torch.float64), tensor(0.9909, device='cuda:0', dtype=torch.float64), tensor(0.9912, device='cuda:0', dtype=torch.float64)], 'train_loss': [0.049552333343110315, 0.040933397413570306, 0.04100083970214572], 'val_acc': [0.9779669504256384, 0.9779669504256384, 0.9779669504256384], 'val_loss': [0.11118546511442401, 0.11118546511442401, 0.11118546511442401]})
You can move each tensor to the CPU with .cpu() and then extract its Python value with .item(). If the dictionary looks like below (named history here rather than dict, to avoid shadowing the built-in):

history = {
    'train_acc': [tensor(0.9889, device='cuda:0', dtype=torch.float64), tensor(0.9909, device='cuda:0', dtype=torch.float64), tensor(0.9912, device='cuda:0', dtype=torch.float64)],
    'train_loss': [0.049552333343110315, 0.040933397413570306, 0.04100083970214572],
    'val_acc': [0.9779669504256384, 0.9779669504256384, 0.9779669504256384],
    'val_loss': [0.11118546511442401, 0.11118546511442401, 0.11118546511442401]
}

Then the following line converts the entries:

history['train_acc'] = [x.cpu().item() for x in history['train_acc']]
https://stackoverflow.com/questions/72443945/
How to stop updating the parameters of a part of a layer in a CNN model (not the parameters of the whole layer)?
For example, if there are ten parameters (filters) in a CNN layer, how can I update only five of them and keep the rest unchanged?
In PyTorch it is easy to freeze only part of the net thanks to the requires_grad property. Here is a simple script:

def freeze_layers(model, num_of_layers):
    frozen = 0
    for layer in model.children():
        frozen += 1
        if frozen <= num_of_layers:
            for param in layer.parameters():
                param.requires_grad = False

Consider however that every model has a different structure and can have layers nested inside each other; with this code you are only iterating through the first level of layers in the net. I suggest you print the layers first, to understand the network structure specific to your case.
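Note that the script above freezes whole layers, while the question asks about freezing only some filters inside one layer. One way to approximate that (a sketch, with the layer and filter indices chosen arbitrarily for illustration) is to zero out the corresponding slices of the gradient with a hook, so the optimizer never updates them:

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3)
frozen = torch.arange(5)            # assumption: keep the first 5 of 10 filters fixed

def zero_frozen(grad):
    grad = grad.clone()
    grad[frozen] = 0.0              # no gradient -> no update for those filters
    return grad

conv.weight.register_hook(zero_frozen)
conv.bias.register_hook(zero_frozen)

Be aware that optimizers with weight decay or momentum can still move the "frozen" filters slightly even with a zero gradient, so check whether that matters for your use case.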
https://stackoverflow.com/questions/72473409/
Training on GPU produces slightly different results than when trained on CPU
I just tried my new script to train a model on the GPU rather than on the CPU, and the training values (loss, metrics) differ from those obtained when training on the CPU. I was under the impression that running on CUDA vs. on the CPU should not make a difference. Was I wrong, or does it have something to do with my code? Using pytorch=10.1.2 and cudatoolkit=10.1.
How big a difference? Tiny differences are to be expected. Order of commutative operations matters for floating point computations. That is: serialised_total = 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 parallelised_total = (0.1 + 0.1 + 0.1) + (0.1 + 0.1 + 0.1) # No actual parallelisation is performed. The above is just example of how # the serialised summation could be broken up into two separate summations. assert serialised_total != parallelised_total # 0.6 != 0.6000000000000001 The results of each side of the equation are still very very close, they're just not exactly the same. See this answer for why. If you are using the GPU then it will be making use of parallelisation, and so the order of operations will not be the same. For instance, if you sum a series of floating point values then you can speed things up by breaking the list up into chunks and sending each chunk to a different core to be summed. You can then sum the results of each chunk. This will be much quicker, but the order of operations will be different than if you summed the values serially. In the above example, it is the "parallelised" total that is less accurate than the "serialised" total. This is not a rule, and sometimes it will be the "parallelised" total that is more accurate. For example: # n = 8 serialised_total = 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 parallelised_total = (0.1 + 0.1 + 0.1 + 0.1) + (0.1 + 0.1 + 0.1 + 0.1) assert serialised_total != parallelised_total # 0.7999999999999999 != 0.8 Without knowing more about your problem, any answers are just speculation about the issue. Including this one.
https://stackoverflow.com/questions/72477224/
how to transform a posting file to a pytorch tensor
Is there a python package that transforms a postings file to a pytorch tensor? By a posting file I mean a csv file with the following format: "docID" ,"wordID" ,"count" I also have a dictionary.txt which associates each wordID to a word. At the end, my text data consists of postings file and a dictionary and I want to use it with a deep learning model that I have implemented with Pytorch.
No, you have to do it yourself. You can simply convert each element into a PyTorch tensor, or use the PyTorch Dataset API like this:

import csv
import torch
from torch.utils.data import Dataset
from typing import List, NamedTuple

CsvRowItem = NamedTuple("CsvRowItem", [
    ("docId", int),
    ("wordId", int),
    ("count", int)
])

data: List[CsvRowItem] = []
with open("data.csv", mode='r') as file:
    reader = csv.reader(file)
    next(reader)  # Skip the header
    for row in reader:
        data.append(
            CsvRowItem(docId=int(row[0]), wordId=int(row[1]), count=int(row[2]))
        )

class YourDataset(Dataset):
    def __init__(self, data: List[CsvRowItem]):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index: int) -> torch.IntTensor:
        item = self.data[index]
        return torch.IntTensor([item.docId, item.wordId, item.count])

dataset = YourDataset(data=data)
print(f"Length of data: {len(dataset):,}")
print(dataset[0])
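If you then want mini-batches of these (docID, wordID, count) triples for your model, a short sketch:

from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=32, shuffle=True)
for batch in loader:
    print(batch.shape)   # torch.Size([32, 3]), except possibly a smaller final batch
    break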
https://stackoverflow.com/questions/72481831/
What is the purpose of with torch.no_grad():
Consider the following code for Linear Regression implemented using PyTorch: X is the input, Y is the output for the training set, w is the parameter that needs to be optimised import torch X = torch.tensor([1, 2, 3, 4], dtype=torch.float32) Y = torch.tensor([2, 4, 6, 8], dtype=torch.float32) w = torch.tensor(0.0, dtype=torch.float32, requires_grad=True) def forward(x): return w * x def loss(y, y_pred): return ((y_pred - y)**2).mean() print(f'Prediction before training: f(5) = {forward(5).item():.3f}') learning_rate = 0.01 n_iters = 100 for epoch in range(n_iters): # predict = forward pass y_pred = forward(X) # loss l = loss(Y, y_pred) # calculate gradients = backward pass l.backward() # update weights #w.data = w.data - learning_rate * w.grad with torch.no_grad(): w -= learning_rate * w.grad # zero the gradients after updating w.grad.zero_() if epoch % 10 == 0: print(f'epoch {epoch+1}: w = {w.item():.3f}, loss = {l.item():.8f}') What does the 'with' block do? The requires_grad argument for w is already set to True. Why is it then being put under a with torch.no_grad() block?
The requires_grad argument tells PyTorch that we want to be able to calculate the gradients for those values. However, with torch.no_grad() tells PyTorch not to track the operations inside the block in the autograd graph. It is used here (as in most hand-written training loops) because the weight update itself should not be part of the computation we differentiate: without the block, the in-place update w -= learning_rate * w.grad on a leaf tensor that requires grad would raise a RuntimeError, and wrapping the update in no_grad keeps the graph limited to the forward and loss computation.
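A minimal sketch of what goes wrong without the block (same w as in the question):

import torch

w = torch.tensor(0.0, requires_grad=True)
loss = (w * 2 - 4) ** 2
loss.backward()

# w -= 0.1 * w.grad              # RuntimeError: a leaf Variable that requires grad
                                 # is being used in an in-place operation
with torch.no_grad():
    w -= 0.1 * w.grad            # fine: the update itself is not tracked by autograd
w.grad.zero_()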
https://stackoverflow.com/questions/72504734/
How to setup CMake project to use PyTorch C++ API installed via Conda
I have Miniconda3 on a Linux system (Ubuntu 22.04). The environment has Python 3.10 as well as a functioning (in Python) installation of PyTorch (installed following official instructions). I would like to setup a CMake project that uses PyTorch C++ API. The reason is not important and also I am aware that it's beta (the official documentation states that), so instability and major changes are not excluded. Currently I have this very minimal CMakeLists.txt: cmake_minimum_required(VERSION 3.19) # or whatever version you use project(PyTorch_Cpp_HelloWorld CXX) set(PYTORCH_ROOT "/home/$ENV{USER}/miniconda/envs/ML/lib/python3.10/site-packages/torch") list(APPEND CMAKE_PREFIX_PATH "${PYTORCH_ROOT}/share/cmake/Torch/") find_package(Torch REQUIRED CONFIG) ... # Add executable # Link against PyTorch library When I try to configure the project I'm getting error: CMake Error at CMakeLists.txt:21 (message): message called with incorrect number of arguments -- Could NOT find Protobuf (missing: Protobuf_LIBRARIES Protobuf_INCLUDE_DIR) -- Found Threads: TRUE CMake Warning at /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/public/protobuf.cmake:88 (message): Protobuf cannot be found. Depending on whether you are building Caffe2 or a Caffe2 dependent library, the next warning / error will give you more info. Call Stack (most recent call first): /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:56 (include) /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package) CMakeLists.txt:23 (find_package) CMake Error at /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:58 (message): Your installed Caffe2 version uses protobuf but the protobuf library cannot be found. Did you accidentally remove it, or have you set the right CMAKE_PREFIX_PATH? If you do not have protobuf, you will need to install protobuf and set the library path accordingly. Call Stack (most recent call first): /home/USER/miniconda/envs/ML/lib/python3.10/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package) CMakeLists.txt:23 (find_package) I installed libprotobuf (again via conda) but, while I can find the library files, I can't find any *ProtobufConfig.cmake or anything remotely related to protobuf and its CMake setup. Before I go fight against wind mills I would like to ask here what the proper setup would be. I am guessing building from source is always an option, however this will pose a huge overhead on people, who I collaborate with.
Using this conda env: name: pytorch_latest channels: - pytorch - conda-forge - defaults dependencies: - pytorch=1.11.0 - torchvision - torchaudio - cpuonly I copied the small example from here and got it to run. The key was to set the correct library directories (both torch in site-packages, but also the lib folder of the environment). The cmake file is written so that the folders are automatically found : example-app.cpp #include <torch/torch.h> #include <iostream> int main() { torch::Tensor tensor = torch::rand({2, 3}); std::cout << tensor << std::endl; } CMakeLists.txt: cmake_minimum_required(VERSION 3.0 FATAL_ERROR) project(example-app) #Add the torch library directory list(APPEND CMAKE_PREFIX_PATH "$ENV{CONDA_PREFIX}/lib/python3.10/site-packages/torch") #This is needed to be able to find the mkl and other dependent libraries link_directories("$ENV{CONDA_PREFIX}/lib") set(ENV{MKLROOT} "$ENV{CONDA_PREFIX}/lib") find_package(Torch REQUIRED) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}") add_executable(example-app example-app.cpp) #We need to add pthread and omp manually here target_link_libraries(example-app "${TORCH_LIBRARIES}" pthread omp) set_property(TARGET example-app PROPERTY CXX_STANDARD 14) Exact environment (in case there are problems with reproducibility): name: pytorch_latest channels: - pytorch - conda-forge - defaults dependencies: - _libgcc_mutex=0.1=conda_forge - _openmp_mutex=4.5=2_kmp_llvm - blas=2.115=mkl - blas-devel=3.9.0=15_linux64_mkl - brotlipy=0.7.0=py310h5764c6d_1004 - bzip2=1.0.8=h7f98852_4 - ca-certificates=2022.5.18.1=ha878542_0 - certifi=2022.5.18.1=py310hff52083_0 - cffi=1.15.0=py310h0fdd8cc_0 - charset-normalizer=2.0.12=pyhd8ed1ab_0 - cpuonly=2.0=0 - cryptography=37.0.1=py310h9ce1e76_0 - ffmpeg=4.3=hf484d3e_0 - freetype=2.10.4=h0708190_1 - giflib=5.2.1=h36c2ea0_2 - gmp=6.2.1=h58526e2_0 - gnutls=3.6.13=h85f3911_1 - idna=3.3=pyhd8ed1ab_0 - jpeg=9e=h166bdaf_1 - lame=3.100=h7f98852_1001 - lcms2=2.12=hddcbb42_0 - ld_impl_linux-64=2.36.1=hea4e1c9_2 - lerc=3.0=h9c3ff4c_0 - libblas=3.9.0=15_linux64_mkl - libcblas=3.9.0=15_linux64_mkl - libdeflate=1.10=h7f98852_0 - libffi=3.4.2=h7f98852_5 - libgcc-ng=12.1.0=h8d9b700_16 - libgfortran-ng=12.1.0=h69a702a_16 - libgfortran5=12.1.0=hdcd56e2_16 - libiconv=1.17=h166bdaf_0 - liblapack=3.9.0=15_linux64_mkl - liblapacke=3.9.0=15_linux64_mkl - libnsl=2.0.0=h7f98852_0 - libpng=1.6.37=h21135ba_2 - libstdcxx-ng=12.1.0=ha89aaad_16 - libtiff=4.4.0=h0fcbabc_0 - libuuid=2.32.1=h7f98852_1000 - libuv=1.43.0=h7f98852_0 - libwebp=1.2.2=h3452ae3_0 - libwebp-base=1.2.2=h7f98852_1 - libxcb=1.13=h7f98852_1004 - libzlib=1.2.12=h166bdaf_0 - llvm-openmp=14.0.4=he0ac6c6_0 - lz4-c=1.9.3=h9c3ff4c_1 - mkl=2022.1.0=h84fe81f_915 - mkl-devel=2022.1.0=ha770c72_916 - mkl-include=2022.1.0=h84fe81f_915 - ncurses=6.3=h27087fc_1 - nettle=3.6=he412f7d_0 - numpy=1.22.4=py310h4ef5377_0 - openh264=2.1.1=h780b84a_0 - openjpeg=2.4.0=hb52868f_1 - openssl=3.0.3=h166bdaf_0 - pillow=9.1.1=py310he619898_1 - pip=22.1.2=pyhd8ed1ab_0 - pthread-stubs=0.4=h36c2ea0_1001 - pycparser=2.21=pyhd8ed1ab_0 - pyopenssl=22.0.0=pyhd8ed1ab_0 - pysocks=1.7.1=py310hff52083_5 - python=3.10.4=h2660328_0_cpython - python_abi=3.10=2_cp310 - pytorch=1.11.0=py3.10_cpu_0 - pytorch-mutex=1.0=cpu - readline=8.1=h46c0cb4_0 - requests=2.27.1=pyhd8ed1ab_0 - setuptools=62.3.2=py310hff52083_0 - sqlite=3.38.5=h4ff8645_0 - tbb=2021.5.0=h924138e_1 - tk=8.6.12=h27826a3_0 - torchaudio=0.11.0=py310_cpu - torchvision=0.12.0=py310_cpu - typing_extensions=4.2.0=pyha770c72_1 - 
tzdata=2022a=h191b570_0 - urllib3=1.26.9=pyhd8ed1ab_0 - wheel=0.37.1=pyhd8ed1ab_0 - xorg-libxau=1.0.9=h7f98852_0 - xorg-libxdmcp=1.1.3=h7f98852_0 - xz=5.2.5=h516909a_1 - zlib=1.2.12=h166bdaf_0 - zstd=1.5.2=h8a70e8d_1
https://stackoverflow.com/questions/72531611/
Is GradScaler necessary with Mixed precision training with pytorch?
So, going through the AMP: Automatic Mixed Precision Training tutorial for normal networks, I found out that there are two versions, Automatic and GradScaler. I just want to know if it's advisable / necessary to use GradScaler with the training, because it is written in the document that: Gradient scaling helps prevent gradients with small magnitudes from flushing to zero (“underflowing”) when training with mixed precision.

scaler = torch.cuda.amp.GradScaler()
for epoch in range(1):
    for input, target in zip(data, targets):
        with torch.cuda.amp.autocast():
            output = net(input)
            loss = loss_fn(output, target)
        scaler.scale(loss).backward()
        scaler.step(opt)
        scaler.update()
        opt.zero_grad()

Also, looking at the NVIDIA Apex Documentation for PyTorch, they have used it as:

from apex import amp
model, optimizer = amp.initialize(model, optimizer)
loss = criterion(…)
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()

I think this is what GradScaler does too, so I think it is a must. Can someone help me with the query here?
Short answer: yes, your model may fail to converge without GradScaler(). There are three basic problems with using FP16: Weight updates: with half precision, 1 + 0.0001 rounds to 1. autocast() takes care of this one. Vanishing gradients: with half precision, values smaller than roughly 2^-14 (about 6e-5) start underflowing towards 0, as opposed to roughly 2^-126 (about 1e-38) for single precision. GradScaler() takes care of this one. Explosive loss: similar to the above, overflow is also much more likely with half precision. This is also managed by the autocast() context.
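The first two points are easy to see directly (a small sketch; the exact thresholds depend on subnormal handling):

import torch

print(torch.tensor(1.0, dtype=torch.float16) + 0.0001)   # tensor(1., dtype=torch.float16) -> update lost
print(torch.tensor(1e-8, dtype=torch.float16))           # tensor(0., dtype=torch.float16) -> underflow
print(torch.tensor(1e-8, dtype=torch.float32))           # tensor(1.0000e-08) -> fine in fp32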
https://stackoverflow.com/questions/72534859/
module 'torch' has no attribute 'has_mps'
I just followed a YouTube video that teaches how to install the PyTorch nightly build for MacBook to get M1 acceleration. However, I came across a really weird problem. In a Jupyter notebook I can see that torch.has_mps = True, but in a Jupyter notebook inside VS Code it says that module 'torch' has no attribute 'has_mps'. Can anyone kindly tell me why? Really confusing.
Just make sure you installed the nightly build of PyTorch; Apple Silicon support in PyTorch is currently available only in nightly builds. E.g., if you're using conda, try this: conda install pytorch torchvision -c pytorch-nightly or with pip: pip3 install --pre torch torchvision --extra-index-url https://download.pytorch.org/whl/nightly/cpu See more here: https://pytorch.org/get-started/locally/ To verify that you're on the correct version, do what you already wrote: open a Python REPL in the environment where you installed the above and run import torch followed by torch.has_mps, and you should get True. To select the device, use "mps" instead of "cuda" (which is what you see in tutorials): device = "mps" if torch.has_mps else "cpu" print(f'Using device: {device}') P.S. Although the guide suggests installing torchaudio, it will not work, at least with conda environments. P.P.S. Also, try the environment in this GitHub repo: https://github.com/causevic/mlboxm1/blob/main/pytorch_mac_m1.yml
https://stackoverflow.com/questions/72535034/
RuntimeError: torch.nn.functional.binary_cross_entropy and torch.nn.BCELoss are unsafe to autocast
I am trying to implement U^2 Net for Salient Object detection. Since this code is not optimised for training, following this official documentation for AMP, I have made some changes to the original code in my fork to check the effects. I have used the code exactly and when you run my version of training code on colab as : ! git clone https://github.com/deshwalmahesh/U-2-Net %cd ./U-2-Net/ !python u2net_train.py It'll throw you some error. The whole stack is posted in the end. I dug up and found that it is due to the custom loss function as muti_bce_loss_fusion which the authors have used as: bce_loss = nn.BCELoss(size_average=True) def muti_bce_loss_fusion(d0, d1, d2, d3, d4, d5, d6, labels_v): loss0 = bce_loss(d0,labels_v) loss1 = bce_loss(d1,labels_v) loss2 = bce_loss(d2,labels_v) loss3 = bce_loss(d3,labels_v) loss4 = bce_loss(d4,labels_v) loss5 = bce_loss(d5,labels_v) loss6 = bce_loss(d6,labels_v) loss = loss0 + loss1 + loss2 + loss3 + loss4 + loss5 + loss6 return loss0, loss Also, in the last line i.e line 526 of the model definition, the model returns 7 sigmoid values which are passed to the loss function. F.sigmoid(d0), F.sigmoid(d1), F.sigmoid(d2), F.sigmoid(d3), F.sigmoid(d4), F.sigmoid(d5), F.sigmoid(d6) Now what can be done to avoid this error? Error trace /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:780: UserWarning: Note that order of the arguments: ceil_mode and return_indices will changeto match the args list in nn.MaxPool2d in a future release. warnings.warn("Note that order of the arguments: ceil_mode and return_indices will change" /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:3704: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead. warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.") /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:1944: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead. warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.") Traceback (most recent call last): File "u2net_train.py", line 148, in <module> loss2, loss = muti_bce_loss_fusion(d0, d1, d2, d3, d4, d5, d6, labels_v) File "u2net_train.py", line 33, in muti_bce_loss_fusion loss0 = bce_loss(d0,labels_v) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py", line 612, in forward return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction) File "/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py", line 3065, in binary_cross_entropy return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum) RuntimeError: torch.nn.functional.binary_cross_entropy and torch.nn.BCELoss are unsafe to autocast. Many models use a sigmoid layer right before the binary cross entropy layer. In this case, combine the two layers using torch.nn.functional.binary_cross_entropy_with_logits or torch.nn.BCEWithLogitsLoss. binary_cross_entropy_with_logits and BCEWithLogits are safe to autocast.
The main reason is the numerically unstable nature of Sigmoid + BCE under autocast. Following the documentation and the PyTorch community, all I had to do was return the raw logits d0, ... instead of F.sigmoid(d0), ... from the model, and in turn replace nn.BCELoss(size_average=True) with nn.BCEWithLogitsLoss(size_average=True). Now the model is running fine.
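In terms of the snippets from the question, the change amounts to this (a sketch; only the pieces that change are shown):

import torch.nn as nn

bce_loss = nn.BCEWithLogitsLoss(size_average=True)   # replaces nn.BCELoss(size_average=True)

def muti_bce_loss_fusion(d0, d1, d2, d3, d4, d5, d6, labels_v):
    losses = [bce_loss(d, labels_v) for d in (d0, d1, d2, d3, d4, d5, d6)]
    return losses[0], sum(losses)

# and in the model's forward, return d0, d1, ..., d6 directly instead of F.sigmoid(d0), ...;
# apply torch.sigmoid() only at inference time when probabilities are needed.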
https://stackoverflow.com/questions/72536002/
Find the bigger of two PyTorch tensors by size
How do I find the bigger of two PyTorch tensors by size? >>> tensor1 = torch.empty(0) >>> tensor2 = torch.empty(1) >>> tensor1 tensor([]) >>> tensor2 tensor([5.9555e-34]) torch.maximum is returning the empty tensor as the bigger tensor: >>> torch.maximum(tensor1,tensor2) tensor([]) Is there a way to find the bigger of two tensors (mostly 1D), based on the number of elements in the tensor?
Why not compare their first dimension sizes? To do so you can use any of the equivalents x.size(0), x.shape[0], and len(x). To return the tensor with the larger size, you can use the built-in max function with the key argument: >>> max((tensor1, tensor2), key=len)
https://stackoverflow.com/questions/72540912/
Keras to Pytorch -> troubles with layers and shape
I am in the process of converting a Keras model to PyTorch and would need your help. Keras Code: def model(input_shape): input_layer = keras.layers.Input(input_shape) conv1 = keras.layers.Conv1D(filters=16, kernel_size=3, padding="same")(input_layer) conv1 = keras.layers.BatchNormalization()(conv1) conv1 = keras.layers.ReLU()(conv1) global_average_pooling = keras.layers.GlobalAveragePooling1D()(conv1) output_layer = keras.layers.Dense(number_of_classes, activation="softmax")(global_average_pooling ) return keras.models.Model(inputs=input_layer, outputs=output_layer) Summary of Model: My Code Is: class model(nn.Module): def __init__(self): super(CNN, self).__init__() #number_of_classes = data_config.number_of_classes self.conv1 = nn.Conv1d(256,128,1) # PyTorch does not support same padding, self.bn1=nn.BatchNorm1d(128) #self.relu=nn.functional.relu_(16) self.avg = nn.AvgPool1d(1) def forward(self,x): x = self.conv1(x) x = self.bn1(x) #x = self.relu(x) x = self.avg(x) output = F.log_softmax(x) return output Could someone please help me with this? Greets!
My Current Code is: class Net(nn.Module): def __init__(self): super(Net,self).__init__() self.conv1 = nn.Conv1d(256,128,1) self.batch1 = nn.BatchNorm1d(128) self.avgpl1 = nn.AvgPool1d(1, stride=1) self.fc1 = nn.Linear(128,3) def forward(self,x): x = self.conv1(x) x = self.batch1(x) x = F.relu(x) x = self.avgpl1(x) x = torch.flatten(x,1) x = F.log_softmax(self.fc1(x)) return x Keras Model Parameters: And my Parameters currently are:
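Since the parameter-count screenshots did not carry over, here is only a sketch of a closer PyTorch equivalent of the Keras model, assuming the input is shaped (batch, in_channels, length) and there are 3 target classes (both assumptions; adjust to your data):

import torch
import torch.nn as nn

class KerasLikeNet(nn.Module):
    def __init__(self, in_channels, num_classes=3):
        super().__init__()
        # Conv1D(filters=16, kernel_size=3, padding="same") -> padding=1 for kernel_size=3
        self.conv1 = nn.Conv1d(in_channels, 16, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(16)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):                  # x: (batch, in_channels, length)
        x = torch.relu(self.bn1(self.conv1(x)))
        x = x.mean(dim=2)                  # GlobalAveragePooling1D over the time axis
        return torch.log_softmax(self.fc(x), dim=1)

Note that in the Keras layer the 16 is the number of output filters, so the PyTorch out_channels should be 16 (not 128), and the global average pooling reduces over the sequence dimension before the final linear layer.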
https://stackoverflow.com/questions/72547478/
pip install does not work with python3 on windows
I am getting reacquainted with Python development and it's been some time since I wrote Python code. I am setting up my IDE (PyCharm) and Python 3 binaries on Windows 10. I want to start working with the PyTorch library; back in the Python 2 days I was used to just typing pip install and it worked fine. Now it seems pip is not installed by default and has been replaced by something called conda? What's the best way to install the pytorch package from the command line? Here is a screenshot link.
PyTorch Documentation does have a selector that will give you the commands for pip and conda. However, Python3 should have pip installed, it may just be pip3 (that's what it was for me).
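For reference, at the time of writing the selector typically produces a pip command along these lines (treat this as a sketch and prefer whatever the selector gives you for your CUDA/CPU choice):

# run in a terminal where python3 / pip3 point at the interpreter PyCharm uses
pip3 install torch torchvision torchaudio
python3 -m pip --version    # confirms pip is available for that interpreter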
https://stackoverflow.com/questions/72549322/
Vectorised pairwise distance
TLDR: given two tensors t1 and t2 that represent b samples of a tensor with shape c,h,w (i.e, every tensor has shape b,c,h,w), i'm trying to calculate the pairwise distance between t1[i] and t2[j] for all i,j efficiently some more context - I've extracted ResNet18 activations for both my train and test data (CIFAR10) and I'm trying to implement k-nearest-neighbours. A possible pseudo-code might be: for te in test_activations: distances = [] for tr in train_activations: distances.append(||te-tr||) neighbors = k_smallest_elements(distances) prediction(te) = majority_vote(labels(neighbors)) I'm trying to vectorise this process given batches from the test and train activations datasets. I've tried iterating the batches (and not the samples) and using torch.cdist(train_batch,test_batch), but I'm not quite sure how this function handles multi-dimensional tensors, as in the documentation it states torch.cdist(x1, x2,...): If x1 has shape BxPxM and x2 has shape BxRxM then the output will have shape BxPxR Which doesn't seem to handle my case (see below) A minimal example can be found here: b,c,h,w = 1000,128,28,28 # actual dimensions in my problem train_batch = torch.randn(b,c,h,w) test_batch = torch.randn(b,c,h,w) d = torch.cdist(train_batch,test_batch) You can think of test_batch and train_batch as the tensors in the for loop for test_batch in train: for train_batch in test:... EDIT: im adding another example: both t1[i] and t2[j] are tensors shaped (c,h,w), and the distance between them is a scalar d. so for example, if we have t1 = torch.randn(2,128,28,28) t2 = torch.randn(2,128,28,28) the distance matrix would look something like [[d(t1[0],t2[0]), d(t1[0],t2[1])], [d(t1[1],t2[0]), d(t1[1],t2[1])]] and have a shape (2,2) (or (b,b) more generally) where d is the scalar distance between the two tensors t1[i] and t2[j].
It is common to have to reshape your data before feeding it to a builtin PyTorch operator. As you've said, torch.cdist works with two inputs shaped (B, P, M) and (B, R, M) and returns a tensor shaped (B, P, R). Instead, you have two tensors shaped the same way: (b, c, h, w). If we match those dimensions we have: B=b, M=c, while P=h*w (from the 1st tensor) and R=h*w (from the 2nd tensor). This requires flattening the spatial dimensions together and swapping the last two axes. Something like: >>> x1 = train_batch.flatten(2).transpose(1,2) >>> x2 = test_batch.flatten(2).transpose(1,2) >>> d = torch.cdist(x1, x2) Now d contains the distance between all possible pairs (train_batch[b, :, iy, ix], test_batch[b, :, jy, jx]) and is shaped (b, h*w, h*w). You can then apply a kNN by taking the k smallest distances, e.g. with torch.topk(d, k, largest=False), to retrieve the k closest neighbours from one element of the training batch to the test batch.
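If instead you want one scalar distance per pair of full (c, h, w) samples, i.e. the (b, b) matrix described in the question's edit, a sketch would be to flatten each sample into a single vector before calling cdist (k=5 below is just an arbitrary example value):

x_tr = train_batch.flatten(1)             # (b, c*h*w)
x_te = test_batch.flatten(1)              # (b, c*h*w)
d = torch.cdist(x_te, x_tr)               # d[i, j] = ||test_batch[i] - train_batch[j]||_2, shape (b, b)
knn = d.topk(5, dim=1, largest=False)     # values/indices of the 5 nearest training samples per test sample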
https://stackoverflow.com/questions/72551247/
How to prevent NVIDIA from automatically upgrading the driver on Ubuntu?
I was training models last night on my Ubuntu workstation, and then woke up this morning and saw this message: Failed to initialize NVML: Driver/library version mismatch Apparently the NVIDIA system driver automatically updated itself, and now I need to reboot the machine to use my GPUs... How do I prevent automatic updates from NVIDIA?
I think I have had the same issue. It is because of so-called unattended upgrades on Ubuntu. Solution 1: check the changed packages and revert the updates Check the apt history logs less /var/log/apt/history.log Then you can see what packages have changed. Use apt or aptitude to revert the changes. Solution 2: disable unattended upgrades Use this guide to disable unattended upgrades. Please consider if this solution works for you as you have to install security updates manually after this change. Solution 3: hold specific packages Use this guide on how to hold certain packages. Read the apt history as mentioned above to determine which packages you have to put on hold. Probably CUDA related packages such as nvidia-cuda-toolkit. Hard to say since some information is missing from your post. You can see all nvidia related packages like this dpkg -l *nvidia* I hope at least one of my solutions works for you :) P.S. you have to change the title. NVIDIA isn't upgrading anything on your system by itself. Ubuntu is the one causing your trouble ;)
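For solution 3, the commands look roughly like this (a sketch: the exact package names on your system may differ, so check the dpkg -l output first):

# pin the driver / CUDA packages so unattended-upgrades leaves them alone
sudo apt-mark hold nvidia-driver-470 nvidia-utils-470 nvidia-cuda-toolkit
# see what is currently held, or undo it later
apt-mark showhold
sudo apt-mark unhold nvidia-driver-470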
https://stackoverflow.com/questions/72560165/
how to expand the dimensions of a tensor in pytorch
I'm a newcomer to PyTorch. If I have a tensor like this: A = torch.tensor([[1, 2, 3], [4, 5, 6]]), how do I get a tensor with an extra leading dimension of size 2, like: B = Tensor([[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]]])
You can concatenate, as long as A first gets a leading batch dimension: a = A.unsqueeze(0) a tensor([[[1., 2., 3.], [4., 5., 6.]]]) B = torch.cat((a, a)) B tensor([[[1., 2., 3.], [4., 5., 6.]], [[1., 2., 3.], [4., 5., 6.]]])
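A couple of equivalent one-liners (a sketch with the same A as in the question):

B = torch.stack((A, A))              # new leading dimension -> shape (2, 2, 3)
B = A.unsqueeze(0).repeat(2, 1, 1)   # add the dimension, then repeat it twice
B = A.expand(2, 2, 3)                # same shape as a non-copying view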
https://stackoverflow.com/questions/72566000/
FutureWarning: Passing a set as an indexer is deprecated and will raise in a future version
I am building a model to train it for binary classification. While processing the data before feeding it to the model i come across this warning FutureWarning: Passing a set as an indexer is deprecated and will raise in a future version. Use a list instead. Here is my code import torch import torch.nn as nn import matplotlib.pyplot as pyp import numpy as np import pandas as pd from sklearn.metrics import confusion_matrix,accuracy_score from sklearn.metrics import precision_score,recall_score,roc_curve,auc,roc_auc_score from sklearn.model_selection import train_test_split from sklearn.utils import shuffle #loading the dataset path='G:/My Drive/datasets/bank.csv' df=pd.read_csv(path) print(df.head(5)) print(df.shape) #distirbuting the target values print("Distribution of Target Values in Dataset -") df.deposit.value_counts() #check f we have na values in the datset df.isna().sum() #extracting columns whith strings cartegorical_columns=df.select_dtypes(include='object').columns print('cartegprical columns:',list(cartegorical_columns)) #for all cartegorical column if values in(yes/no) convert into a 1/10 flag for col in cartegorical_columns: if df[col].nunique()==2: df[col]=np.where(df[col]=='yes',1,0) print(df.head(5)) #for the remaining cartegorical values that have no binary values #crate one hot encoded version of the dataset new_df=pd.get_dummies(df) #define the target and predictors for the model target='deposit' predictors=set(new_df.columns) - set([target]) print('new_df shape:',new_df.shape) print(new_df[predictors].head()) The specific error FutureWarning: Passing a set as an indexer is deprecated and will raise in a future version. Use a list instead. print(new_df[predictors].head()) What could be raising this error in my code and how can i solve it
You are trying to index new_df with predictors, which is a set; convert it to a list. Example: print(new_df[list(predictors)].head())
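Alternatively, you can build predictors as a list in the first place and avoid the warning entirely (a small sketch reusing the names from your script):

# keep every column except the target, preserving the column order
predictors = [col for col in new_df.columns if col != target]
print(new_df[predictors].head())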
https://stackoverflow.com/questions/72583625/
How to use pytorch to perform gradient descent, and print out the true minimum?
I'm trying to use a gradient method to find a minimizer of a function f, where x(k+1) = x(k) - α∇f(x), and α = 0.1. The function f is f(x) = 2*(x^3+1)^(-1/2), with x(0) = 1. Here is the PyTorch sample:

import torch
import torch.nn as nn
from torch import optim

alpha = 0.1

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.x = 1

    def forward(self, x):
        return 2*pow((pow(x,3)+1),(-1/2))

model = MyModel()
optimizer = optim.SGD(model.parameters(),lr=alpha)
terminationCond = False

while not terminationCond:
    f = model.forward(x)
    f.backward()
    optimizer.step()
    if x.grad < 0.001:
        terminationCond = True
    optimizer.zero_grad()

But I cannot output the correct value of x. How should I modify my code in order to find a minimizer of the function f?
There are a few things that need to be considered: For x to show up in model.parameters(), it should be an nn.Parameter, e.g. nn.Parameter(torch.as_tensor([1.])). Also, what (and why) are you passing as x to the forward method? The value is already stored on the model. The function f(x) = 2*(x^3+1)^(-1/2) is monotonically decreasing for positive x: when x goes up in value, the function goes down, so SGD, by minimizing the value of the function, will keep increasing x. f(0) = 2.0, f(1) = 1.41421, f(2) = 0.66666. Here is a working example of minimizing f(x), i.e. maximizing x:

import torch
import torch.nn as nn
from torch import optim

alpha = 0.1

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.x = nn.Parameter(torch.as_tensor([1.0]))

    def forward(self):
        return 2*pow((pow(self.x, 3)+1), (-1/2))

model = MyModel()
optimizer = optim.SGD(model.parameters(), lr=alpha)

print(model.x)
f = model()
f.backward()
optimizer.step()
optimizer.zero_grad()
print(model.x)
https://stackoverflow.com/questions/72596741/
Pytorch NLP Huggingface: model not loaded on GPU
I have this code that init a class with a model and a tokenizer from Huggingface. On Google Colab this code works fine, it loads the model on the GPU memory without problems. On Google Cloud Platform it does not work, it loads the model on gpu, whatever I try. class OPT: def __init__(self, model_name: str = "facebook/opt-2.7b", use_gpu: bool = False): self.model_name = model_name self.use_gpu = use_gpu and torch.cuda.is_available() print(f"Use gpu:: {self.use_gpu}") if self.use_gpu: print("Using gpu") self.model = AutoModelForCausalLM.from_pretrained( self.model_name, torch_dtype=torch.float16 ).cuda() else: print("Using cpu") self.model = AutoModelForCausalLM.from_pretrained( self.model_name, torch_dtype=torch.float32, low_cpu_mem_usage=True ) # the fast tokenizer currently does not work correctly self.tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) The printed output is correct: Use gpu:: True Using gpu But the nvidia-smi says that there is no process running on the gpu: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 470.82.01 Driver Version: 470.82.01 CUDA Version: 11.4 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla T4 On | 00000000:00:04.0 Off | 0 | | N/A 40C P8 9W / 70W | 0MiB / 15109MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ And with htop I can see that the process is using the cpu ram.
You should use the .to(device) method like this: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = nameofyourmodel.to(device)
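For completeness, a small sketch (my assumption, not from the original answer) of moving the model explicitly and then verifying it actually landed on the GPU:

import torch
from transformers import AutoModelForCausalLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-2.7b", torch_dtype=torch.float16
).to(device)

print(next(model.parameters()).device)                 # expect cuda:0
print(torch.cuda.memory_allocated() / 1e9, "GB used")  # non-zero once the weights are on the GPU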
https://stackoverflow.com/questions/72601714/
How to learn multiple binary classifiers?
I have an input of shape 14 x 10 x 128 x 128, where 14 is batch_size, 10 is the sequence_length and each item in the sequence is of shape 128 x 128. I want to learn to map this input to output of shape 14 x 10 x 128, i.e., for each item in the sequence I want to learn 128-binary classifiers. Does the following model make sense? So, first I reshape my input to 140 x 128 x 128 and then pass it through the model and reshape the output back to 14 x 10 x 128. classifier = nn.Sequential( nn.Conv1d(128, 128, 1), nn.ReLU(), nn.BatchNorm1d(128), nn.Conv1d(128, 128, 1), nn.ReLU(), nn.BatchNorm1d(128), nn.Conv1d(128, 1, 1) ) Thank you.
Not really convinced a 1D convolution will get you anywhere, since it only convolves along a single spatial dimension. In your case, you are dealing with a sequence of 2D elements. Naturally, an nn.Conv2d would seem more appropriate for this kind of task. You are looking to do a one-to-one mapping with your sequence elements and can therefore consider each one of them as an independent instance. Then a straightforward approach is to simply collapse the sequence into the batch axis and use a CNN coupled with a fully-connected layer. Here is a very minimal example with a single layer: model = nn.Sequential(nn.Conv2d(1, 8, 2), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.LazyLinear(128)) This requires you to reshape the tensor before and after to collapse and expand the sequence dimensions: >>> x = torch.rand(14, 10, 128, 128) >>> y = model(x.view(-1,1,128,128)).view(-1,10,128)
https://stackoverflow.com/questions/72623344/
Loss for Multi-label Classification
I am working on a multi-label classification problem. My gt labels are of shape 14 x 10 x 128, where 14 is the batch_size, 10 is the sequence_length, and 128 is the vector with values 1 if the item in sequence belongs to the object and 0 otherwise. My output is also of same shape: 14 x 10 x 128. Since, my input sequence was of varying length I had to pad it to make it of fixed length 10. I'm trying to find the loss of the model as follows: total_loss = 0.0 unpadded_seq_lengths = [3, 4, 5, 7, 9, 3, 2, 8, 5, 3, 5, 7, 7, ...] # true lengths of sequences optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) criterion = nn.BCEWithLogitsLoss() for data in training_dataloader: optimizer.zero_grad() # shape of input 14 x 10 x 128 output = model(data) batch_loss = 0.0 for batch_idx, sequence in enumerate(output): # sequence shape is 10 x 128 true_seq_len = unpadded_seq_lengths[batch_idx] # only keep unpadded gt and predicted labels since we don't want loss to be influenced by padded values predicted_labels = sequence[:true_seq_len, :] # for example, 3 x 128 gt_labels = gt_labels_padded[batch_idx, :true_seq_len, :] # same shape as above, gt_labels_padded has shape 14 x 10 x 128 # loop through unpadded predicted and gt labels and calculate loss for item_idx, predicted_labels_seq_item in enumerate(predicted_labels): # predicted_labels_seq_item and gt_labels_seq_item are 1D vectors of length 128 gt_labels_seq_item = gt_labels[item_idx] current_loss = criterion(predicted_labels_seq_item, gt_labels_seq_item) total_loss += current_loss batch_loss += current_loss batch_loss.backward() optimizer.step() Can anybody please check to see if I'm calculating loss correctly. Thanks Update: Is this the correct approach for calculating accuracy metrics? # batch size: 14 # seq length: 10 for epoch in range(10): TP = FP = TN = FN = 0. for x, y, mask in tr_dl: # mask shape: (10,) out = model(x) # out shape: (14, 10, 128) y_pred = (torch.sigmoid(out) >= 0.5).float().type(torch.int64) # consider all predictions above 0.5 as 1, rest 0 y_pred = y_pred[mask] # y_pred shape: (14, 10, 10, 128) y_labels = y[mask] # y_labels shape: (14, 10, 10, 128) # do I flatten y_pred and y_labels? y_pred = y_pred.flatten() y_labels = y_labels.flatten() for idx, prediction in enumerate(y_pred): if prediction == 1 and y_labels[idx] == 1: # calculate IOU (overlap of prediction and gt bounding box) iou = 0.78 # assume we get this iou value for objects at idx if iou >= 0.5: TP += 1 else: FP += 1 elif prediction == 1 and y_labels[idx] == 0: FP += 1 elif prediction == 0 and y_labels[idx] == 1: FN += 1 else: TN += 1 EPOCH_ACC = (TP + TN) / (TP + TN + FP + FN)
It is usually recommended to stick with batch-wise operations and avoid going into single-element processing steps while in the main training loop. One way to handle this case is to make your dataset return padded inputs and labels with additionally a mask that will come useful for loss computation. In other words, to compute the loss term with sequences of varying sizes, we will use a mask instead of doing individual slices. Dataset The way to proceed is to make sure you build the mask in the dataset and not in the inference loop. Here I am showing a minimal implementation that you should be able to transfer to your dataset without much hassle: class Dataset(data.Dataset): def __init__(self): super().__init__() def __len__(self): return 100 def __getitem__(self, index): i = random.randint(5, SEQ_LEN) # for demo puporse, generate x with random length x = torch.rand(i, EMB_SIZE) y = torch.randint(0, N_CLASSES, (i, EMB_SIZE)) # pad data to fit in batch pad = torch.zeros(SEQ_LEN-len(x), EMB_SIZE) x_padded = torch.cat((pad, x)) y_padded = torch.cat((pad, y)) # construct tensor to mask loss mask = torch.cat((torch.zeros(SEQ_LEN-len(x)), torch.ones(len(x)))) return x_padded, y_padded, mask Essentially in the __getitem__, we not only pad the input x and target y with zero values, we also construct a simple mask containing the positions of the padded values in the currently processed element. Notice how: x_padded, shaped (SEQ_LEN, EMB_SIZE) y_padded, shaped (SEQ_LEN, N_CLASSES) mask, shaped (SEQ_LEN,) are all three tensors which are shape invariant across the dataset, yet mask contains the padding information necessary for us to compute the loss function appropriately. Inference The loss you've used nn.BCEWithLogitsLoss, is the correct one since it's a multi-dimensional loss used for binary classification. In other words, you can use it here in this multi-label classification task, considering each one of the 128 logits as an individual binary prediction. Do not use nn.CrossEntropyLoss) as suggested elsewhere, since the softmax will push a single logit (i.e. class), which is the behaviour required for single-label classification tasks. Therefore, in the training loop, we simply have to apply the mask to our loss. for x, y, mask in dl: y_pred = model(x) loss = mask*bce(y_pred, y) # backpropagation, loss postprocessing, logs, etc.
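One detail worth spelling out (my addition, not in the original answer): for the multiplication by mask to act element-wise, the criterion has to be created with reduction='none'; with the default reduction the loss is already a scalar mean over all positions, padded ones included. A minimal sketch of the masked loss:

criterion = nn.BCEWithLogitsLoss(reduction='none')

for x, y, mask in dl:
    y_pred = model(x)                               # (batch, SEQ_LEN, 128)
    loss = criterion(y_pred, y)                     # per-element loss, same shape as y
    loss = loss * mask.unsqueeze(-1)                # zero out the padded positions
    loss = loss.sum() / (mask.sum() * y.shape[-1])  # average over the real positions only
    loss.backward()
    # optimizer step, logging, etc.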
https://stackoverflow.com/questions/72626063/
AttributeError: module 'torch.distributed' has no attribute 'is_initialized' in pytorch==1.11.X
whenever we are creating the object for TrainingArguments from transformers import Trainer, TrainingArguments batch_size = 64 logging_steps = len(emotions_encoded["train"]) // batch_size model_name = f"{model_ckpt}-finetuned-emotion" training_args = TrainingArguments(output_dir=model_name, num_train_epochs=2, learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, weight_decay=0.01, evaluation_strategy="epoch", disable_tqdm=False, logging_steps=logging_steps, push_to_hub=True, log_level="error") I am getting AttributeError: module 'torch.distributed' has no attribute 'is_initialized'
This happens because Windows and macOS don't support the distributed training facility, so the check fails. To work around it, go to the transformers package where it is installed (in my case Desktop/rajesh/pytorch_env/env/lib/python3.8/site-packages/transformers/training_args.py) and replace line 1024 if torch.distributed.is_initialized() and self.local_rank == -1: with if True and self.local_rank == -1: Then restart your kernel.
https://stackoverflow.com/questions/72641886/
How do I get my conda environment to recognize my GPU?
I'm pretty new to this and I'm wondering how do I get my conda environment to be able to recognize my GPU? Purpose is for me to emulate the Linux environment on my windows 10 device. I'm using the Anaconda prompt. I'm following the instructions in this set up guide : https://medium.com/analytics-vidhya/4-steps-to-install-anaconda-and-pytorch-onwindows-10-5c9cb0c80dfe But when I run the torch.cuda.is_available() it returns "false" Not sure where I'm going wrong, would appreciate any help on how to get it to show "true" instead.
Check your CUDA version. Not every CUDA version is supported by the official PyTorch builds; check the compatibility table on the official PyTorch website and install the matching build.
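A few quick checks (my addition) that usually reveal the culprit, for example a CPU-only build installed into the conda environment:

import torch
print(torch.__version__)          # a "+cpu" suffix means a CPU-only build was installed
print(torch.version.cuda)         # CUDA version the wheel was built against (None on CPU builds)
print(torch.cuda.is_available())  # should become True after installing a matching CUDA build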
https://stackoverflow.com/questions/72641958/
fastest way to calculate edges (derivatives) of a big torch tensor
Given a tensor with shape (b,c,h,w), I want to extract edges of the spatial data, that is, calculate x, y direction derivatives of the (h,w) and calculate the magnitude I=sqrt(|x_amplitude|^2+|y_amplitude|^2) My current implementation is as followed row_mat = np.asarray([[0, 0, 0], [1, 0, -1], [0, 0, 0]]) col_mat = row_mat.T row_mat = row_mat[None, None, :, :] # expand dim to convolve with tensor (batch,channel,width,height) col_mat = col_mat[None, None, :, :] # expand dim to convolve with tensor (batch,channel,width,height) def derivative(batch: torch.Tensor) -> torch.Tensor: """ uses convolution to perform x and y derivatives :param batch: input tensor batch :return: image derivative magnitudes """ x_amplitude = ndimage.convolve(batch, row_mat) y_amplitude = ndimage.convolve(batch, col_mat) magnitude = np.sqrt(np.abs(x_amplitude) ** 2 + np.abs(y_amplitude) ** 2) return torch.tensor(magnitude) I was wondering if there's a faster way, as this approach actually convolves using the definition of a derivative, so there might be downsides to that. PS. to test this you can use the tensor torch.randn(1000,128,28,28), as these are the dimension I'm dealing with
For this specific operation you might be able to speed things up a bit by doing it "manually": import torch import torch.nn.functional as nnf def derivative(batch: torch.Tensor) -> torch.Tensor: # pad the spatial dims by 1 on each side x = nnf.pad(batch, (1, 1, 1, 1), mode='reflect') # central differences along width and height; 1:-1 keeps the original h and w dx2 = (x[..., 1:-1, :-2] - x[..., 1:-1, 2:])**2 dy2 = (x[..., :-2, 1:-1] - x[..., 2:, 1:-1])**2 mag = torch.sqrt(dx2 + dy2) return mag
https://stackoverflow.com/questions/72644166/
How can I Modify pytorch dataset __getitem__ function to return a bag of 10 images?
I have a directory with multiple images separated into folders. Each folder has up to 3000 images. I would like to modify the pytorch dataset getitem function so that it returns bags of images, where each bag contains 10 images. Here is what I have so far: transform = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor() ]) dataset = datasets.ImageFolder('./../BCNB/patches/WSI_1', transform=transform) data_loader = torch.utils.data.DataLoader(dataset, batch_size = 1) My output of DataLoader should be a tensor with a shape of [1, 10, 3, 256, 256]. Any input would be very helpful! Thank you very much in advance!
Why do you need "bags of 10 images"? If you need them as mini batches for training -- don't change the Dataset, but use a DataLoader for that. A DataLoader takes a dataset and does the "batching" for you. Alternatively, you can overload the __getitem__ method and implement your own that returns 10 images instead of just one.
https://stackoverflow.com/questions/72646497/
Deep Convolutional GAN (DCGAN) works really well on one set of data but not another
I am working on using PyTorch to create a DCGAN to use to generate trajectory data for a robot based on a dataset of 20000 other simple paths. The DCGAN works great on the MNIST dataset, but does not work well on my custom dataset. I am trying to tweak/tune the DCGAN to give good results after being trained with my custom dataset. Below is an example of output from the GAN (top), along with example training data for MNIST (bottom) after 20 epochs. The loss for the Generator and Discriminator each plateau at around 0.7. Below is the output for my custom trajectory dataset after a similar number of epochs. The top figure shows the output, and bottom figure shows the training set of the batch. It is clear that the same GAN is much better at making predictions for the MNIST dataset then for my custom dataset. It is interesting to note that the Discriminator and Generator losses also plateau at similar values of about 0.7 for this dataset too. This makes me think that there is some limit to the network of how low the loss can go. Discriminator Code: class Discriminator(nn.Module): def __init__(self, channels_img, features_d): super(Discriminator, self).__init__() self.disc = nn.Sequential( # input: N x channels_img x 64 x 64 nn.Conv2d( channels_img, features_d, kernel_size=4, stride=2, padding=1 ), nn.LeakyReLU(0.2), # _block(in_channels, out_channels, kernel_size, stride, padding) self._block(features_d, features_d * 2, 4, 2, 1), self._block(features_d * 2, features_d * 4, 4, 2, 1), self._block(features_d * 4, features_d * 8, 4, 2, 1), # After all _block img output is 4x4 (Conv2d below makes into 1x1) nn.Conv2d(features_d * 8, 1, kernel_size=4, stride=2, padding=0), nn.Sigmoid(), ) def _block(self, in_channels, out_channels, kernel_size, stride, padding): return nn.Sequential( nn.Conv2d( in_channels, out_channels, kernel_size, stride, padding, bias=False, ), nn.BatchNorm2d(out_channels), nn.LeakyReLU(0.2), ) def forward(self, x): return self.disc(x) Generator Code: class Generator(nn.Module): def __init__(self, channels_noise, channels_img, features_g): super(Generator, self).__init__() self.net = nn.Sequential( # Input: N x channels_noise x 1 x 1 self._block(channels_noise, features_g * 16, 4, 1, 0), # img: 4x4 self._block(features_g * 16, features_g * 8, 4, 2, 1), # img: 8x8 self._block(features_g * 8, features_g * 4, 4, 2, 1), # img: 16x16 self._block(features_g * 4, features_g * 2, 4, 2, 1), # img: 32x32 nn.ConvTranspose2d( features_g * 2, channels_img, kernel_size=4, stride=2, padding=1 ), # Output: N x channels_img x 64 x 64 nn.Tanh(), ) def _block(self, in_channels, out_channels, kernel_size, stride, padding): return nn.Sequential( nn.ConvTranspose2d( in_channels, out_channels, kernel_size, stride, padding, bias=False, ), nn.BatchNorm2d(out_channels), nn.ReLU(), ) def forward(self, x): return self.net(x) Training loop: opt_gen = optim.Adam(gen.parameters(), lr=LEARNING_RATE_GEN, betas=(0.5, 0.999)) opt_disc = optim.Adam(disc.parameters(), lr=LEARNING_RATE_DISC, betas=(0.5, 0.999)) criterion = nn.BCELoss() for epoch in range(NUM_EPOCHS): # Target labels not needed! 
<3 unsupervised # for batch_idx, (real, _) in enumerate(dataloader): for batch_idx, real in enumerate(dataloader): real = real.to(device) noise = torch.randn(BATCH_SIZE, NOISE_DIM, 1, 1).to(device) fake = gen(noise) ### Train Discriminator: max log(D(x)) + log(1 - D(G(z))) disc_real = disc(real.float()).reshape(-1) loss_disc_real = criterion(disc_real, torch.ones_like(disc_real)) disc_fake = disc(fake.detach()).reshape(-1) loss_disc_fake = criterion(disc_fake, torch.zeros_like(disc_fake)) loss_disc = (loss_disc_real + loss_disc_fake) / 2 disc.zero_grad() loss_disc.backward() opt_disc.step() ### Train Generator: min log(1 - D(G(z))) <-> max log(D(G(z)) output = disc(fake).reshape(-1) loss_gen = criterion(output, torch.ones_like(output)) gen.zero_grad() loss_gen.backward() opt_gen.step() # Print losses occasionally and print to tensorboard if batch_idx % 100 == 0: print( f"Epoch [{epoch}/{NUM_EPOCHS}] Batch {batch_idx}/{len(dataloader)} \ Loss D: {loss_disc:.4f}, loss G: {loss_gen:.4f}" ) with torch.no_grad(): fake = gen(fixed_noise) # take out (up to) 32 examples img_grid_real = torchvision.utils.make_grid( real[:BATCH_SIZE], normalize=True ) img_grid_fake = torchvision.utils.make_grid( fake[:BATCH_SIZE], normalize=True ) writer_real.add_image("Real", img_grid_real, global_step=step) writer_fake.add_image("Fake", img_grid_fake, global_step=step) step += 1
(Not enough reputation to comment yet, so I'll reply) Since your code works fine on the MNIST dataset, the main problem here is that the trajectories in your training data are practically imperceptible, even to a human observer. Note that the trajectory lines are much thinner than the digit lines in the MNIST dataset.
https://stackoverflow.com/questions/72651806/
How can I extract features from pytorch fasterrcnn_resnet50_fpn
I tried to extract features from following code. However, it says 'FasterRCNN' object has no attribute 'features' I want to extract features with (36, 2048) shape features when it has 36 classes. Is there any method to extract with pretrained pytorch models. model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).to(device) features = list(model.features) dummy_img = torch.zeros((1, 3, 800, 800)).float() # test image array req_features = [] output = dummy_img.clone().to(device) for feature in features: output = feature(output) if output.size()[2] < 800//16: # 800/16=50 break req_features.append(feature) out_channels = output.size()[1] faster_rcnn_feature_extractor = nn.Sequential(*req_features) output_map = faster_rcnn_feature_extractor(dummy_img ) print(output_map.shape)
The function you are calling returns a FasterRCNN object which is based on GeneralizedRCNN. As you have experienced, this object doesn't indeed have a feature attribute. Looking at its source code, if you want to acquire the feature maps, you can follow L83 and L101: >>> images, _= model.transform(images, None) >>> features = model.backbone(images.tensors)
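Put together, a small end-to-end sketch (my assumption about usage; the exact feature shapes depend on the input and the FPN levels):

import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()
imgs = [torch.rand(3, 800, 800)]                 # the detection API expects a list of images

with torch.no_grad():
    images, _ = model.transform(imgs, None)      # resize + normalize into an ImageList
    features = model.backbone(images.tensors)    # OrderedDict of FPN feature maps

for name, feat in features.items():
    print(name, feat.shape)                      # e.g. keys '0'..'pool', each with 256 channels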
https://stackoverflow.com/questions/72655498/
Find jaccard similarity among a batch of vectors in PyTorch
I'm having a batch of vectors of shape (bs, m, n) (i.e., bs vectors of dimensions mxn). For each batch, I would like to calculate the Jaccard similarity of the first vector with the rest (m-1) of them Example: a = [ [[3, 8, 6, 8, 7], [9, 7, 4, 8, 1], [7, 8, 8, 5, 7], [3, 9, 9, 4, 4]], [[7, 3, 8, 1, 7], [3, 0, 3, 4, 2], [9, 1, 6, 1, 6], [2, 7, 0, 6, 6]] ] Find pairwise jaccard similarity between a[:,0,:] and a[:,1:,:] i.e., [3, 8, 6, 8, 7] with each of [[9, 7, 4, 8, 1], [7, 8, 8, 5, 7], [3, 9, 9, 4, 4]] (3 scores) and [7, 3, 8, 1, 7] with each of [[3, 0, 3, 4, 2], [9, 1, 6, 1, 6], [2, 7, 0, 6, 6]] (3 scores) Here's the Jaccard function I have tried def js(la1, la2): combined = torch.cat((la1, la2)) union, counts = combined.unique(return_counts=True) intersection = union[counts > 1] torch.numel(intersection) / torch.numel(union) While this works with unequal-sized tensors, the problem with this approach is that the number of uniques in each combination (pair of tensors) might be different and since PyTorch doesn't support jagged tensors, I'm unable to process batches of vectors at once. If I'm not able to express the problem with the expected clarity, do let me know. Any help in this regard would be greatly appreciated EDIT: Here's the flow achieved by iterating over the 1st and 2nd dimensions. I wish to have a vectorised version of the below code for batch processing bs = 2 m = 4 n = 5 a = torch.randint(0, 10, (bs, m, n)) print(f"Array is: \n{a}") for bs_idx in range(bs): first = a[bs_idx,0,:] for row in range(1, m): second = a[bs_idx,row,:] idx = js(first, second) print(f'comparing{first} and {second}: {idx}')
I don't know how you could achieve this in pytorch, since AFAIK pytorch doesn't support set operations on tensors. In your js() implementation, union calculation should work, but intersection = union[counts > 1] doesn't give you the right result if one of the tensors contains duplicated values. Numpy on the other hand has built-on support with union1d and intersect1d. You can use numpy vectorization to calculate pairwise jaccard indices without using for-loops: import numpy as np def num_intersection(vec1: np.ndarray, vec2: np.ndarray) -> int: return np.intersect1d(vec1, vec2, assume_unique=False).size def num_union(vec1: np.ndarray, vec2: np.ndarray) -> int: return np.union1d(vec1, vec2).size def jaccard1d(vec1: np.ndarray, vec2: np.ndarray) -> float: assert vec1.ndim == vec2.ndim == 1 and vec1.shape[0] == vec2.shape[0], 'vec1 and vec2 must be 1D arrays of equal length' return num_intersection(vec1, vec2) / num_union(vec1, vec2) jaccard2d = np.vectorize(jaccard1d, signature='(m),(n)->()') def jaccard(vecs1: np.ndarray, vecs2: np.ndarray) -> np.ndarray: """ Return intersection-over-union (Jaccard index) between two sets of vectors. Both sets of vectors are expected to be flattened to 2D, where dim 0 is the batch dimension and dim 1 contains the flattened vectors of length V (jaccard index of an n-dimensional vector and of its flattened 1D-vector is equal). Args: vecs1 (ndarray[N, V]): first set of vectors vecs2 (ndarray[M, V]): second set of vectors Returns: ndarray[N, M]: the NxM matrix containing the pairwise jaccard indices for every vector in vecs1 and vecs2 """ assert vecs1.ndim == vecs2.ndim == 2 and vecs1.shape[1] == vecs2.shape[1], 'vecs1 and vecs2 must be 2D arrays with equal length in axis 1' return jaccard2d(vecs1, vecs2) This is of course suboptimal because the code doesn't run on the GPU. If I run the jaccard function with vecs1 of shape (1, 10) and vecs2 of shape (10_000, 10) I get a mean loop time of 200 ms ± 1.34 ms on my machine, which should probably be fast enough for most use cases. And conversion between pytorch and numpy arrays is very cheap. To apply this function to your problem with array a: a = torch.tensor(a).numpy() # just to demonstrate ious = [jaccard(batch[:1, :], batch[1:, :]) for batch in a] np.array(ious).squeeze() # 2 batches with 3 scores each -> 2x3 matrix # array([[0.28571429, 0.4 , 0.16666667], # [0.14285714, 0.16666667, 0.14285714]]) Use torch.from_numpy() on the result to get a pytorch tensor again if needed. Update: If you need a pytorch version for calculating the Jaccard index, I partially implemented numpy's intersect1d in torch: from torch import Tensor def torch_intersect1d(t1: Tensor, t2: Tensor, assume_unique: bool = False) -> Tensor: if t1.ndim > 1: t1 = t1.flatten() if t2.ndim > 1: t2 = t2.flatten() if not assume_unique: t1 = t1.unique(sorted=True) t2 = t2.unique(sorted=True) # generate a m x n intersection matrix where m is numel(t1) and n is numel(t2) intersect = t1[(t1.view(-1, 1) == t2.view(1, -1)).any(dim=1)] if not assume_unique: intersect = intersect.sort().values return intersect def torch_union1d(t1: Tensor, t2: Tensor) -> Tensor: return torch.cat((t1.flatten(), t2.flatten())).unique() def torch_jaccard1d(t1: Tensor, t2: Tensor) -> float: return torch_intersect1d(t1, t2).numel() / torch_union1d(t1, t2).numel() To vectorize the torch_jaccard1d function, you might want to look into torch.vmap, which lets you vectorize a function over an arbitrary batch dimension (similar to numpy's vectorize). 
The vmap function is a prototype feature and not yet available in the usual pytorch distributions, but you can get it using nightly builds of pytorch. I haven't tested it but this might work.
https://stackoverflow.com/questions/72657212/
Print activations of a neural network in Pytorch
I was doing as suggested in this StackOverflow's questions: Pytorch: why print(model) does not show the activation functions? Basically I want to train a neural network on MNIST, and I want to print the activations. This is the neural networks: class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 128, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(128, 256, 5) self.fc1 = nn.Linear(out_features, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) self.myRelu = nn.ReLU() def forward(self, x): x = self.conv1(x) x = self.myRelu(x) x = self.pool(x) x = self.conv2(x) x = self.myRelu(x) x = self.pool(x) x = torch.flatten(x, 1) # flatten all dimensions except batch x = self.fc1(x) x = self.myRelu(x) x = self.fc2(x) x = self.myRelu(x) x = self.fc3(x) return x However, if I do list(model.myRelu.parameters()) the result is an empty list: []. Any suggestion on how to print the activations?
I think you are mixing up two things: what an activation function is, and what you might actually be trying to show, namely the activations of your intermediate layers. The question you are linking asks why the activation functions are not shown in the summary of the PyTorch model. On one hand, activation functions are non-linear functions, most generally non-parameterized, such as a ReLU function. This is the reason why you are not getting any parameters when looking at the content of model.myRelu.parameters(). It simply doesn't have any learned parameters! On the other hand, you may be looking to inspect the intermediate results of your network. This means printing the output of a given layer, i.e. the activation of that layer. This is of course with respect to an input: for a given input x you are looking at the output of a given intermediate layer. But reading your question again it seems you are just trying to print your model summary showing the activation functions inside it. In your case, this can be done by defining your model as a sequential model: class Net(nn.Sequential): def __init__(self, out_features): super().__init__(nn.Conv2d(1, 128, 5), nn.ReLU(), nn.MaxPool2d(2, 2), nn.Conv2d(128, 256, 5), nn.ReLU(), nn.MaxPool2d(2, 2), nn.Flatten(), nn.Linear(out_features, 120), nn.ReLU(), nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, 10)) Then in the summary, you have: >>> Net(10) Net( (0): Conv2d(1, 128, kernel_size=(5, 5), stride=(1, 1)) (1): ReLU() (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (3): Conv2d(128, 256, kernel_size=(5, 5), stride=(1, 1)) (4): ReLU() (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (6): Flatten(start_dim=1, end_dim=-1) (7): Linear(in_features=10, out_features=120, bias=True) (8): ReLU() (9): Linear(in_features=120, out_features=84, bias=True) (10): ReLU() (11): Linear(in_features=84, out_features=10, bias=True) )
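If instead you want to print the actual activations for a given input, one option (my sketch, not part of the original answer) is to register forward hooks on the layers you care about:

import torch

model = Net(4096)  # 4096 = 256*4*4, the flattened size reached with 28x28 MNIST inputs

def print_activation(module, inputs, output):
    print(module.__class__.__name__, tuple(output.shape))

for layer in model:
    layer.register_forward_hook(print_activation)  # fires on every forward pass

_ = model(torch.rand(1, 1, 28, 28))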
https://stackoverflow.com/questions/72660186/
Does Huggingface's "resume_from_checkpoint" work?
I currently have my trainer set up as: training_args = TrainingArguments( output_dir=f"./results_{model_checkpoint}", evaluation_strategy="epoch", learning_rate=5e-5, per_device_train_batch_size=4, per_device_eval_batch_size=4, num_train_epochs=2, weight_decay=0.01, push_to_hub=True, save_total_limit = 1, resume_from_checkpoint=True, ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_qa["train"], eval_dataset=tokenized_qa["validation"], tokenizer=tokenizer, data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer), compute_metrics=compute_metrics ) After training, in my output_dir I have several files that the trainer saved: ['README.md', 'tokenizer.json', 'training_args.bin', '.git', '.gitignore', 'vocab.txt', 'config.json', 'checkpoint-5000', 'pytorch_model.bin', 'tokenizer_config.json', 'special_tokens_map.json', '.gitattributes'] From the documentation it seems that resume_from_checkpoint will continue training the model from the last checkpoint: resume_from_checkpoint (str or bool, optional) — If a str, local path to a saved checkpoint as saved by a previous instance of Trainer. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of Trainer. If present, training will resume from the model/optimizer/scheduler states loaded here. But when I call trainer.train() it seems to delete the last checkpoint and start a new one: Saving model checkpoint to ./results_distilbert-base-uncased/checkpoint-500 ... Deleting older checkpoint [results_distilbert-base-uncased/checkpoint-5000] due to args.save_total_limit Does it really continue training from the last checkpoint (i.e., 5000) and just starts the count of the new checkpoint at 0 (saves the first after 500 steps -- "checkpoint-500"), or does it simply not continue the training? I haven't found a way to test it and the documentation is not clear on that.
You should also pass the resume_from_checkpoint parameter to trainer.train with the path to the checkpoint: trainer.train(resume_from_checkpoint="<path-where-checkpoints-were-stored>/checkpoint-0000") where 0000 is an example checkpoint number. Don't forget to keep your drive mounted during this whole process.
https://stackoverflow.com/questions/72672281/
How to bin a pytorch tensor into an array of values
Say I have an array of values w = [w1, w2, w3, ...., wn] and this array is sorted in ascending order, all values being equally spaced. I have a pytorch tensor of any arbitrary shape. For the sake of this example, lets say that tensor is: import torch a = torch.rand(2,4) assuming w1=torch.min(a) and wn=torch.max(a), I want to create two separate tensors, amax and amin, both of shape (2,4) such that amax contains values from w that are the nearest maximum value to the elements of a, and vice-versa for amin. As an example, say: a = tensor([[0.7192, 0.6264, 0.5180, 0.8836], [0.1067, 0.1216, 0.6250, 0.7356]]) w = [0.0, 0.33, 0.66, 1] therefore, I would like amax and amin to be, amax = tensor([[1.000, 0.66, 0.66, 1.000], [0.33, 0.33, 0.66, 1.00]]) amin = tensor([[0.66, 0.33, 0.33, 0.66], [0.00, 0.00, 0.33, 0.66]]) What is the fastest way to do this?
You could compute for all points in a, the difference with each bin inside w. For this, you need a little bit of broadcasting: >>> z = a[...,None]-w[None,None] tensor([[[ 0.7192, 0.3892, 0.0592, -0.2808], [ 0.6264, 0.2964, -0.0336, -0.3736], [ 0.5180, 0.1880, -0.1420, -0.4820], [ 0.8836, 0.5536, 0.2236, -0.1164]], [[ 0.1067, -0.2233, -0.5533, -0.8933], [ 0.1216, -0.2084, -0.5384, -0.8784], [ 0.6250, 0.2950, -0.0350, -0.3750], [ 0.7356, 0.4056, 0.0756, -0.2644]]]) We have to identify for each point in a, at which index (visually represented as columns here) the sign change occurs. We can apply the sign operator, then compute difference z[i+1]-z[i] between columns with diff, retrieve the non zero values with nonzero, then finally select and reshape the resulting tensor: >>> index = z.sign().diff(dim=-1).nonzero()[:,2].view(2,4) tensor([[2, 1, 1, 2], [0, 0, 1, 2]]) To get amin, simply index w with index: >>> w[index] tensor([[0.6600, 0.3300, 0.3300, 0.6600], [0.0000, 0.0000, 0.3300, 0.6600]]) And to get amax, we can offset the indices to jump to the upper bound: >>> w[index+1] tensor([[1.0000, 0.6600, 0.6600, 1.0000], [0.3300, 0.3300, 0.6600, 1.0000]])
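A possibly simpler alternative (my addition, not part of the original answer) is torch.bucketize, assuming w is a sorted 1-D tensor (which the indexing above also requires, e.g. w = torch.tensor([0.0, 0.33, 0.66, 1.0])):

idx = torch.bucketize(a, w)  # index of the first bin edge >= each element of a
amax = w[idx]                # nearest bin edge above
amin = w[idx - 1]            # nearest bin edge below (assumes no element equals w[0] exactly)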
https://stackoverflow.com/questions/72673094/
PyTorch running under WSL2 getting "Killed" for Out of memory even though I have a lot of memory left?
I'm on Windows 11, using WSL2 (Windows Subsystem for Linux). I recently upgraded my RAM from 32 GB to 64 GB. While I can make my computer use more than 32 GB of RAM, WSL2 seems to be refusing to use more than 32 GB. For example, if I do >>> import torch >>> a = torch.randn(100000, 100000) # 40 GB tensor Then I see the memory usage go up until it hit's 30-ish GB, at which point, I see "Killed", and the python process gets killed. Checking dmesg, it says that it killed the process because "Out of memory". Any idea what the problem might be, or what the solution is?
According to this blog post, WSL2 is automatically configured to use 50% of the physical RAM of the machine. You'll need to add a memory=48GB (or your preferred setting) to a .wslconfig file that is placed in your Windows home directory (\Users\{username}\). [wsl2] memory=48GB After adding this file, shut down your distribution and wait at least 8 seconds before restarting. Assume that Windows 11 will need quite a bit of overhead to operate, so setting it to use the full 64 GB would cause the Windows OS to run out of memory.
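After restarting, you can confirm the new limit from inside WSL (my addition; psutil is an assumption here, running free -h in the WSL shell works just as well):

import psutil
print(round(psutil.virtual_memory().total / 1e9), "GB visible to WSL")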
https://stackoverflow.com/questions/72693671/
Practically understanding gradient dimensions on backprop
I've wrote this piece of torch code that implements a Linear(1,1) -> Linear(1,1) -> MSE forward/backward pass: l1 = torch.nn.Linear(1, 1, bias=False) l2 = torch.nn.Linear(1, 1, bias=False) l1.weight = torch.nn.Parameter( torch.tensor([[ 0.6279 ]])) l2.weight = torch.nn.Parameter( torch.tensor([[ 0.731 ]])) loss_fn = torch.nn.MSELoss() optimizer_l1 = optim.SGD(l1.parameters(), lr=0.001) optimizer_l2 = optim.SGD(l2.parameters(), lr=0.001) x = torch.tensor([0.3]) y = torch.tensor([1]).float() l1_o = l1(x) l2_o = l2(l1_o) loss = loss_fn(l2_o, y) optimizer_l1.zero_grad() optimizer_l2.zero_grad() loss.backward() which outputs that: l1 weight Parameter containing: tensor([[0.6279]], requires_grad=True) l2 weight Parameter containing: tensor([[0.7310]], requires_grad=True) ====================== x tensor([0.3000]) y tensor([1.]) >>>>>>>>>>>>>>>>>>>>>> l1_o tensor([0.1884], grad_fn=<SqueezeBackward3>) l2_o tensor([0.1377], grad_fn=<SqueezeBackward3>) loss tensor(0.7436, grad_fn=<MseLossBackward>) ---------------------- l1_grad tensor([[-0.3782]]) l2_grad tensor([[-0.3249]]) Now, backwards logic of this makes sense: x = 0.3 l1w = 0.6279 l2w = 0.7310 l1o = 0.1884 -> l2w * x -> 0.6279 * 0.3 l2o = 0.1377 l2_grad = mse_grad(l2o, y, l1o) -> -0.3782 l1_grad = mse_grad(l2o, y, l2w*x) -> -0.3249 However, it stops making sense after we go from 1x1 layers to 2x2: l1 = torch.nn.Linear(2, 2, bias=False) l2 = torch.nn.Linear(2, 2, bias=False) l1.weight = torch.nn.Parameter( torch.tensor([[ -0.6279, -0.4686 ], [ -0.0907, 0.6363 ]])) l2.weight = torch.nn.Parameter( torch.tensor([[ 0.731, 0.6026 ], [ -0.1873, -0.6037 ]])) loss_fn = torch.nn.MSELoss() optimizer_l1 = optim.SGD(l1.parameters(), lr=0.001) optimizer_l2 = optim.SGD(l2.parameters(), lr=0.001) x = torch.tensor([0.3, 0.5]) y = torch.tensor([1, 0]).float() l1_o = l1(x) l2_o = l2(l1_o) loss = loss_fn(l2_o, y) optimizer_l1.zero_grad() optimizer_l2.zero_grad() loss.backward() Which gives this output: l1 weight Parameter containing: tensor([[-0.6279, -0.4686], [-0.0907, 0.6363]], requires_grad=True) l2 weight Parameter containing: tensor([[ 0.7310, 0.6026], [-0.1873, -0.6037]], requires_grad=True) x tensor([0.3000, 0.5000]) y tensor([1., 0.]) l1_o tensor([-0.4227, 0.2909], grad_fn=<SqueezeBackward3>) l2_o tensor([-0.1337, -0.0965], grad_fn=<SqueezeBackward3>) loss tensor(0.6472, grad_fn=<MseLossBackward>) --------------- l1_grad tensor([[-0.2432, -0.4053], [-0.1875, -0.3124]]) l2_grad tensor([[ 0.4792, -0.3298], [ 0.0408, -0.0281]]) When I try to reproduce this from scratch, I can't figure out how to keep gradients shape equal to weights shape without messing with my mse_grad function (I want to take gradient with respect to outputs, not weights, this should be possible since they're proportionate): l1_o = x.dot(l1.T) l2_o = l1_o.dot(l2.T) loss = mse(l2_o, y) l2_o_grad = mse_grad(l2_o, y, l1_o) # shape of l2 (weights) is (2,2), shape of my gradient is (2), since both the layer output and y are of (2) shape My forward pass, mse loss and mse gradient when called standalone give exact same values as torch.
I'm not 100% sure I understood your problem correctly, but the gradient w.r.t. a particular layer's output does have the same dimensionality as that output. The gradient w.r.t. the (linearly applied) weights in that layer is not obtained by backpropagation alone but by multiplying the input activations (those before the weights) with the backpropagated gradient. Hence, input activations of shape [2, 1] multiplied by backpropagated gradients of shape [1, 2] give you a [2, 2] matrix. Note that the input activations are a vector "standing up" and the backpropagated gradients are "lying on the side". I would also recommend you google "The Matrix Cookbook" and familiarize yourself with https://en.wikipedia.org/wiki/Matrix_calculus, not so much for the mathematical insights but rather to make sure that derivatives between matrices, vectors and scalars have the right dimensionality (numerator-layout vs. denominator-layout notation).
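To make the shapes concrete, here is a small check (my sketch) reusing the 2x2 numbers from the question: the gradient w.r.t. a layer's output has the output's shape, and the weight gradient is the outer product of that gradient with the layer's input.

import torch

x = torch.tensor([0.3, 0.5])
y = torch.tensor([1., 0.])
l1w = torch.tensor([[-0.6279, -0.4686], [-0.0907, 0.6363]], requires_grad=True)
l2w = torch.tensor([[ 0.7310,  0.6026], [-0.1873, -0.6037]], requires_grad=True)

l1o = l1w @ x                      # hidden activations, shape (2,)
l2o = l2w @ l1o                    # outputs, shape (2,)
loss = ((l2o - y) ** 2).mean()
loss.backward()

dl2o = (l2o - y).detach()          # d(MSE)/d(l2o): shape (2,), same as the output
# weight gradient = outer product of the output gradient (2,) and the layer input (2,) -> (2, 2)
print(torch.allclose(l2w.grad, torch.outer(dl2o, l1o.detach())))  # True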
https://stackoverflow.com/questions/72693772/
How to select fairseq option `--ddp-backend`
I'm learning how to use fairseq to implement a simple translation model based on Transformer. I would like to use 2 GeForce RTX 3090 GPUs on my lab server. Which option should I select for --ddp-backend of fairseq-train? Furthermore, could you explain about the meaning of all following options for --ddp-backend and when to use them respectively? From fairseq Documentation: Command-line Tools => fairseq-train => distributed_training --ddp-backend: Possible choices: c10d, fully_sharded, legacy_ddp, no_c10d, pytorch_ddp, slowmo DistributedDataParallel backend Default: “pytorch_ddp” I'm new to stack exchange community, sorry if there is any inappropriate action.
I am not too sure, but I found this on GitHub DDP_BACKEND_CHOICES = ChoiceEnum( [ "c10d", # alias for pytorch_ddp "fully_sharded", # FullyShardedDataParallel from fairscale "legacy_ddp", "no_c10d", # alias for legacy_ddp "pytorch_ddp", "slowmo", ] ) Might be helpful, but I am also struggling with this
https://stackoverflow.com/questions/72694641/
What are the PyTorch's model.eval() + no_grad() equivalent in TensorFlow?
I am trying to extract BERT embeddings and reproduce this code using tensorflow instead of pytorch. I know tf.stop_gradient() is the equivalent of torch.no_grad() but what about model.eval() / combination of both ? # Put the model in "evaluation" mode, meaning feed-forward operation. model.eval() # Run the text through BERT, and collect all of the hidden states produced # from all 12 layers. with torch.no_grad(): outputs = model(tokens_tensor, segments_tensors) # Evaluating the model will return a different number of objects based on # how it's configured in the `from_pretrained` call earlier. In this case, # becase we set `output_hidden_states = True`, the third item will be the # hidden states from all layers. See the documentation for more details: # https://huggingface.co/transformers/model_doc/bert.html#bertmodel hidden_states = outputs[2]
TLDR; eval and no_grad are two completely different things, but they are often used in conjunction, primarily for performing fast inference in evaluation/testing loops. The nn.Module.eval function is applied on a PyTorch module and changes its behaviour depending on the stage: training or evaluation. Only a handful of layers are actually affected by this. Layers such as dropout and normalization layers behave differently depending on whether they are in training or evaluation mode. You can read more about it on this thread. The torch.no_grad utility, on the other hand, is a context manager: it changes the way the code contained inside that scope runs. When applied, no_grad prevents gradient computation. In practice, this means no layer activations are cached in memory for backpropagation. It is most commonly used in evaluation and testing loops where no backpropagation is expected after the inference. However, it can also be used during training, for example when running inference on a frozen component through which gradients are not required to flow.
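As a rough TensorFlow counterpart for the snippet in the question (my sketch, not from the original answer): the train/eval distinction maps to the training argument of the model call, and gradients are simply not recorded unless the code runs under a tf.GradientTape.

from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("hello world", return_tensors="tf")
# training=False plays the role of model.eval(); no GradientTape ~ torch.no_grad()
outputs = model(inputs, training=False)
hidden_states = outputs.hidden_states  # tuple: embeddings + one tensor per layer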
https://stackoverflow.com/questions/72716490/
Create custom connection/ non-fully connected layers in Pytorch
As shown in the figure, it is a 3 layer with NN, namely input layer, hidden layer and output layer. I want to design the NN(in PyTorch, just the arch) where the input to hidden layer is fully-connected. However, from hidden layer to output, the first two neurons of the hidden layer should be connected to first neuron of the output layer, second two should be connected to the second in the output layer and so on. How shall this should be designed ? from torch import nn layer1 = nn.Linear(input_size, hidden_size) layer2 = ??????
As @Jan said here, you can overload nn.Linear and provide a point-wise mask to remove the interactions you want to avoid having. Remember that a fully connected layer is merely a matrix multiplication with an optional additive bias, so individual connections can be removed by zeroing the corresponding entries of the weight matrix. Looking at its source code, we can do: class MaskedLinear(nn.Linear): def __init__(self, *args, mask, **kwargs): super().__init__(*args, **kwargs) self.mask = mask def forward(self, input): # mask is (in_features, out_features), weight is (out_features, in_features) return F.linear(input, self.weight*self.mask.T, self.bias) Having F defined as torch.nn.functional. Considering the constraint you have given on the second layer (the first two neurons of the hidden layer should be connected to the first neuron of the output layer, and so on), it seems you are looking for this pattern: tensor([[1., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 1., 0.], [0., 0., 1.], [0., 0., 1.]]) Which can be obtained using torch.block_diag: mask = torch.block_diag(*[torch.ones(2,1),]*output_size) Having this, you can define your network as: net = nn.Sequential(nn.Linear(input_size, hidden_size), MaskedLinear(hidden_size, output_size, mask=mask)) If you feel like it, you can even implement it inside the custom layer: class LocalLinear(nn.Linear): def __init__(self, *args, kernel_size=2, **kwargs): super().__init__(*args, **kwargs) assert self.in_features == kernel_size*self.out_features self.mask = torch.block_diag(*[torch.ones(kernel_size,1),]*self.out_features) def forward(self, input): return F.linear(input, self.weight*self.mask.T, self.bias) And defining it like so: net = nn.Sequential(nn.Linear(input_size, hidden_size), LocalLinear(hidden_size, output_size))
https://stackoverflow.com/questions/72725944/
How convert this Pytorch loss function to Tensorflow?
This Code for a paper I read had a loss function written using Pytorch, I tried to convert it as best as I could but am getting all Zero's as model predictions, so would like to ask the following: Are the methods I used the correct equivalent in Tensorflow? Why is the model predicting only Zero's? Here is the function: #Pytorch class AdjMSELoss1(nn.Module): def __init__(self): super(AdjMSELoss1, self).__init__() def forward(self, outputs, labels): outputs = torch.squeeze(outputs) alpha = 2 loss = (outputs - labels)**2 adj = torch.mul(outputs, labels) adj[adj>0] = 1 / alpha adj[adj<0] = alpha loss = loss * adj return torch.mean(loss) #Tensorflow def custom_loss_function(outputs,labels): outputs = tf.squeeze(outputs) alpha = 2.0 loss = (outputs - labels) ** 2.0 adj = tf.math.multiply(outputs,labels) adj = tf.where(tf.greater(adj, 0.0), tf.constant(1/alpha), adj) adj = tf.where(tf.less(adj, 0.0), tf.constant(alpha), adj) loss = loss * adj return tf.reduce_mean(loss) The function compiles correctly and is being used in the loss and metric parameters, it is outputing results in metrics logs that appear to be correct (Similar to val_loss) but the output of the model after running is just predicting all 0's model.compile( loss= custom_loss_function, optimizer=optimization, metrics = [custom_loss_function] ) MODEL #Simplified for readability model = Sequential() model.add(LSTM(32,input_shape=(SEQ_LEN,feature_number),return_sequences=True,)) model.add(Dropout(0.3)) model.add(LSTM(96, return_sequences = False)) model.add(Dropout(0.3)) model.add(Dense(1)) return model Inputs/Features are pct_change Price for the previous SEQ_LEN days. (Given SEQ_LEN days tries to predict next day: Target) Outputs/Targets are the next day's price pct_change * 100 (Ex: 5 for 5%). (1 value per row) Note: The model predicts normally when RMSE() is set as the loss function, as mentioned when using the custom_loss_function above it's just predicting Zero's
Try this custom_loss: def custom_loss(y_pred, y_true): alpha = 2.0 loss = (y_pred - y_true) ** 2.0 adj = tf.math.multiply(y_pred,y_true) adj = tf.where(tf.greater(adj, 0.0), tf.constant(1/alpha), adj) adj = tf.where(tf.less(adj, 0.0), tf.constant(alpha), adj) loss = loss * adj return tf.reduce_mean(loss) I check with the below code and work correctly (Code for creating a model for learning and predicting the sum of two variables with the custom_loss): from keras.models import Sequential from keras.layers import Dense import tensorflow as tf import numpy as np x = np.random.rand(1000,2) y = x.sum(axis=1) y = y.reshape(-1,1) def custom_loss(y_pred, y_true): alpha = 2.0 loss = (y_pred - y_true) ** 2.0 adj = tf.math.multiply(y_pred,y_true) adj = tf.where(tf.greater(adj, 0.0), tf.constant(1/alpha), adj) adj = tf.where(tf.less(adj, 0.0), tf.constant(alpha), adj) loss = loss * adj return tf.reduce_mean(loss) model = Sequential() model.add(Dense(128, activation='relu', input_dim=2)) model.add(Dense(64, activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(16, activation='relu')) model.add(Dense(1,)) model.compile(optimizer='adam', loss=custom_loss) model.fit(x, y, epochs=200, batch_size=16) for _ in range(10): rnd_num = np.random.randint(50, size=2)[None, :] pred_add = model.predict(rnd_num) print(f'predict sum of {rnd_num[0]} -> {pred_add}') Output: Epoch 1/200 63/63 [==============================] - 1s 2ms/step - loss: 0.2903 Epoch 2/200 63/63 [==============================] - 0s 2ms/step - loss: 0.0084 Epoch 3/200 63/63 [==============================] - 0s 2ms/step - loss: 0.0016 ... Epoch 198/200 63/63 [==============================] - 0s 2ms/step - loss: 3.3231e-07 Epoch 199/200 63/63 [==============================] - 0s 2ms/step - loss: 5.1004e-07 Epoch 200/200 63/63 [==============================] - 0s 2ms/step - loss: 9.8688e-08 predict sum of [43 44] -> [[82.81973]] predict sum of [39 13] -> [[48.97299]] predict sum of [36 46] -> [[78.05187]] predict sum of [46 7] -> [[49.445843]] predict sum of [35 11] -> [[43.311478]] predict sum of [33 1] -> [[31.695848]] predict sum of [6 8] -> [[13.433815]] predict sum of [14 38] -> [[49.54941]] predict sum of [ 1 40] -> [[39.709686]] predict sum of [10 2] -> [[11.325197]]
https://stackoverflow.com/questions/72750343/
undo torch.chunk to rebuild image in right order
I am trying to process a 3D image in chunks (non-overlapping windows). Once this is done I want to put the chunks back together in the right order. I have been chunking the image as below: tens = torch.tensor(range(64)) tens = tens.view((4,4,4)) print(tens) >>>tensor([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]], [[16, 17, 18, 19], [20, 21, 22, 23], [24, 25, 26, 27], [28, 29, 30, 31]], [[32, 33, 34, 35], [36, 37, 38, 39], [40, 41, 42, 43], [44, 45, 46, 47]], [[48, 49, 50, 51], [52, 53, 54, 55], [56, 57, 58, 59], [60, 61, 62, 63]]]) tens = torch.chunk(tens,2, -1) tens = torch.stack(tens) tens = torch.chunk(tens,2, -2) tens = torch.concat(tens) tens = torch.chunk(tens,2, -3) tens = torch.concat(tens) print(torch.shape) >>>torch.Size([8, 2, 2, 2]) Then I want to put it back together in the original order tens = tens.view([4,4,2,2]) tens = tens.view([2,4,4,2]) tens = tens.view([4,4,4]) print(tens) >>>tensor([[[ 0, 1, 4, 5], [16, 17, 20, 21], [ 2, 3, 6, 7], [18, 19, 22, 23]], [[ 8, 9, 12, 13], [24, 25, 28, 29], [10, 11, 14, 15], [26, 27, 30, 31]], [[32, 33, 36, 37], [48, 49, 52, 53], [34, 35, 38, 39], [50, 51, 54, 55]], [[40, 41, 44, 45], [56, 57, 60, 61], [42, 43, 46, 47], [58, 59, 62, 63]]]) and I can't figure out how to get the elements in the right order. I realise I probably missed something in the docs or something else obvious but I can't find it. Any ideas?
Operator torch.chunk doesn't reduce dimensions so its inverse is torch.cat, not torch.stack. Here are the transforms with the corresponding inverse operations: Splitting the last dimension in two: >>> chunks = tens.chunk(chunks=2, dim=-1) >>> torch.cat(chunks, dim=-1) Splitting the second dimension into two: >>> chunks = tens.chunk(chunks=2, dim=-2) >>> torch.cat(chunks, dim=-2) Splitting the first dimension into two: >>> chunks = tens.chunk(chunks=2, dim=-3) >>> torch.cat(chunks, dim=-3) If you want to invert the whole sequence, you just have to keep in mind that torch.cat is the reverse operation of torch.chunk: >>> tens = torch.cat(tens.chunk(2), dim=-3) >>> tens = torch.cat(tens.chunk(2), dim=-2) >>> tens = torch.cat(tens.chunk(2), dim=-1)
https://stackoverflow.com/questions/72750405/
How to flow gradient through parameter updates?
I have a parameter import torch import torch.nn as nn x = nn.Parameter(torch.tensor([1.0])) I need to update the value of this parameter while maintaining the gradient flow (for a meta learning problem). PyTorch does not allow in-place operations on leaf-variable like a = torch.tensor([2.0], requires_grad=True) x.copy_(a) x.backward() >>> RuntimeError: a leaf Variable that requires grad is being used in an in-place operation. In addition, I also tried the following, but while working, the gradient didn't flow: x = nn.Parameter(torch.tensor([1.0])) a = torch.tensor([2.0], requires_grad=True) x = nn.Parameter(a) x.backward() print(a.grad) >>> None How can I update the value of a parameter while maintaining gradient flow?
What you are looking for is torch.Tensor.clone: This function is differentiable, so gradients will flow back from the result of this operation to input. To create a tensor without an autograd relationship to input see detach(). >>> a = torch.tensor([2.0], requires_grad=True) >>> x = a.clone() >>> x.backward() >>> a.grad tensor([1.])
https://stackoverflow.com/questions/72758128/
Fine Tuning Blenderbot
I have been trying to fine-tune a conversational model of HuggingFace: Blendebot. I have tried the conventional method given on the official hugging face website which asks us to do it using the trainer.train() method. I also tried it using the .compile() method. I have tried fine-tuning using PyTorch as well as TensorFlow on my dataset. Both methods seem to fail and give us an error saying that there is no method called compile or train for the Blenderbot model. I have also looked everywhere online to check how Blenderbot could be fine-tuned on my custom data and nowhere does it mention properly that runs without throwing an error. I have gone through Youtube tutorials, blogs, and StackOverflow posts but none answer this question. Hoping someone would respond here and help me out. I am open to using other HuggingFace Conversational Models as well for fine-tuning. Thank you! :)
Here is a link I am using to fine-tune the blenderbot model. Fine-tuning methods: https://huggingface.co/docs/transformers/training Blenderbot: https://huggingface.co/docs/transformers/model_doc/blenderbot from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration mname = "facebook/blenderbot-400M-distill" model = BlenderbotForConditionalGeneration.from_pretrained(mname) tokenizer = BlenderbotTokenizer.from_pretrained(mname) #FOR TRAINING: trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, compute_metrics=compute_metrics, ) trainer.train() #OR model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=tf.metrics.SparseCategoricalAccuracy(), ) model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3) None of these work! :(
https://stackoverflow.com/questions/72774975/
Blenderbot FineTuning
I have been trying to fine-tune a conversational model of HuggingFace: Blendebot. I have tried the conventional method given on the official hugging face website which asks us to do it using the trainer.train() method. I tried it using the .compile() method. I have tried fine-tuning using PyTorch as well as TensorFlow on my dataset. Both methods seem to fail and give us an error saying that there is no method called compile or train for the Blenderbot model. I have even looked everywhere online to check how Blenderbot could be fine-tuned on my custom data and nowhere does it mention properly that runs without throwing an error. I have gone through Youtube tutorials, blogs, and StackOverflow posts but none answer this question. Hoping someone would respond here and help me out. I am open to using other HuggingFace Conversational Models as well for fine-tuning. Here is a link I am using to fine-tune the blenderbot model. Fine-tuning methods: https://huggingface.co/docs/transformers/training Blenderbot: https://huggingface.co/docs/transformers/model_doc/blenderbot from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration mname = "facebook/blenderbot-400M-distill" model = BlenderbotForConditionalGeneration.from_pretrained(mname) tokenizer = BlenderbotTokenizer.from_pretrained(mname) #FOR TRAINING: trainer = Trainer( model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset, compute_metrics=compute_metrics, ) trainer.train() #OR model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=tf.metrics.SparseCategoricalAccuracy(), ) model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3) None of these work.
Maybe try using the TFBlenderbotForConditionalGeneration class for Tensorflow. It has what you need: import tensorflow as tf from transformers import BlenderbotTokenizer, TFBlenderbotForConditionalGeneration mname = "facebook/blenderbot-400M-distill" model = TFBlenderbotForConditionalGeneration.from_pretrained(mname) tokenizer = BlenderbotTokenizer.from_pretrained(mname) model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=tf.metrics.SparseCategoricalAccuracy(), ) .... See the docs for more information.
https://stackoverflow.com/questions/72776834/
why is my loss function only returning NaN values?
below is my code import numpy as np import torch from torch.utils import data import torch.nn as nn import pandas as pd # PREPPING DATA FROM CSV FILE csvFile = pd.read_csv('/Users/ericbeep999/Desktop/Web Development/Projects/Python/pytorch/3. Linear Regression/weather.csv') labels, features = csvFile.iloc[:, 4], csvFile.iloc[:, 5] #labels - min temp #features - max temp labels = torch.tensor(labels, dtype=torch.float32).reshape(-1, 1) features = torch.tensor(features, dtype=torch.float32).reshape(-1,1) # READING DATASET def load_array(data_arrays, batch_size, is_train = True): dataset = data.TensorDataset(*data_arrays) return data.DataLoader(dataset, batch_size, shuffle= is_train) batch_size = 20 data_set = load_array((features, labels), batch_size) #DEFININING MODEL AND PARAMETERS model = nn.Sequential(nn.Linear(1, 1)) model[0].weight.data.normal_(0, 0.1) model[0].bias.data.fill_(0) #DEFINING LOSS FUNCTION AND OPTIMIZATION ALGORITHMN lossFunc = nn.MSELoss() learning_rate = 0.01 gradient = torch.optim.SGD(model.parameters(), learning_rate) #TRAINING MODEL num_epochs = 100 for epoch in range(num_epochs): for X, Y in data_set: loss = lossFunc(model(X), Y) gradient.zero_grad() loss.backward() gradient.step() loss = lossFunc(model(features), labels) print(f'epoch: {epoch + 1}, loss: {loss}') print(f"{model[0].weight.data}, {model[0].bias.data}") the csv file I am importing the data from can be found at https://www.kaggle.com/datasets/smid80/weatherww2?datasetId=3759&searchQuery=pytorch My labels are the min temperature and my features are the max temperature whenever I run the code, the only thing that prints is epoch: 1, loss: nan epoch: 2, loss: nan epoch: 3, loss: nan epoch: 4, loss: nan epoch: 5, loss: nan epoch: 6, loss: nan epoch: 7, loss: nan epoch: 8, loss: nan epoch: 9, loss: nan epoch: 10, loss: nan i don't really understand why it is only printing NaN
I changed your learning rate to 0.001 and it runs without giving NaNs (albeit not learning anything since predicting min temperature from max temperature in that data may not be easily learned). My guess is the issue is with the scale of your input/output data, i.e. they're in the range of something like 0-40 which isn't great for neural networks - it can cause the learning to go unstable more easily. I would suggest you first scale your inputs/outputs to be in a range of [0, 1] or [-1, 1]. I'll direct you to this blog for details on achieving that, they also discuss in more detail why scaling the data is important for learning with neural networks.
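A minimal way to apply that scaling to the question's tensors (my sketch; it reuses the features, labels and load_array defined in the question):

# min-max scale inputs and targets into [0, 1]
f_min, f_max = features.min(), features.max()
l_min, l_max = labels.min(), labels.max()
features_scaled = (features - f_min) / (f_max - f_min)
labels_scaled = (labels - l_min) / (l_max - l_min)

data_set = load_array((features_scaled, labels_scaled), batch_size)
# after training, map predictions back with: pred * (l_max - l_min) + l_min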
https://stackoverflow.com/questions/72778247/
How to set a 3-d tensor to 0 according to the value of a 2-d tensor
Suppose I have a 3-d tensor P shaped (B, N, d) and a 2-d tensor Q shaped (B,N), where values in Q are smaller than d. I want to set some values in P to 0 using the indices from Q: For instance, P = torch.randn(2,3,6) Q = torch.tensor([[0,3,4], [2,1,3]]) How can I set P[0,0,0]=0; P[0,1,3]=0; P[0,2,4]=0; P[1,0,2]=0; P[1,1,1]=0; P[1,2,3]=0 using Q and keep other values in P unchanged?
What you are looking to do is: P[i, j, Q[i, j]] = 0 This is the perfect use case for torch.Tensor.scatter, which has the effect of placing an arbitrary value at designated positions. We first need the index tensor to have the same number of dimensions as the input: >>> Q_ = Q[...,None].expand_as(P) Then apply the scatter function on dim=2, using the value keyword argument to write zeros at the indexed positions (scatter returns a new tensor; use scatter_ for the in-place version): P.scatter(dim=2, index=Q_, value=0) tensor([[[0, 2, 5, 4, 3, 7], [4, 1, 5, 0, 3, 8], [7, 8, 4, 3, 0, 1]], [[7, 1, 0, 1, 1, 3], [5, 0, 5, 3, 0, 5], [6, 3, 5, 0, 7, 2]]])
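An equivalent formulation with advanced indexing (my addition), which writes the zeros directly into P:

B, N, _ = P.shape
bi = torch.arange(B).unsqueeze(1)  # shape (B, 1), broadcasts against Q's (B, N)
ni = torch.arange(N).unsqueeze(0)  # shape (1, N)
P[bi, ni, Q] = 0                   # in-place version of the scatter above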
https://stackoverflow.com/questions/72785645/
PyTorch - How to specify an input layer? Is it included by default?
I am working on a Reinforcement Learning problem in StableBaselines3, but I don't think that really matters for this question. SB3 is based on PyTorch. I have 101 input features, and even though I designed a neural architecture with the first layer having only 64 nodes, the network still works. Below is a screenshot of my model architecture: I am concerned because I thought that the first layer of the neural network needed to have a number of nodes equal to the number of input features. Does PyTorch include an input layer by default, and doesn't display it? If so, how can I know and control what the activation functions etc. are for the input layer? EDIT: Here are my imports and basic code, in response to Michael's comment. import gym from gym import Env import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd from gym import spaces from gym.utils import seeding from stable_baselines3.common.vec_env import DummyVecEnv, SubprocVecEnv from stable_baselines3.common.utils import set_random_seed from stable_baselines3.common.evaluation import evaluate_policy from stable_baselines3.common.env_util import make_vec_env from stable_baselines3 import PPO import math import random import torch as th from sb3_contrib.common.maskable.policies import MaskableActorCriticPolicy from sb3_contrib.common.wrappers import ActionMasker from sb3_contrib.ppo_mask import MaskablePPO from sb3_contrib.common.envs import InvalidActionEnvDiscrete from sb3_contrib.common.maskable.evaluation import evaluate_policy from sb3_contrib.common.maskable.utils import get_action_masks env = MyCustomEnv(....) env = ActionMasker(env, mask_fn) # Wrap to enable masking # Defining custom neural network architecture mynetwork = dict(activation_fn=th.nn.LeakyReLU, net_arch=[dict(pi=[64, 64], vf=[64, 64])]) # Maskable PPO behaves just like regular PPO model = MaskablePPO(MaskableActorCriticPolicy, env, verbose=1, learning_rate=0.0005, gamma=0.975, seed=10, batch_size=256, clip_range=0.2, tensorboard_log="./log1/", policy_kwargs=mynetwork) # To get the screenshot I gave print(model.policy)
We can do a little bit of digging inside Stable Baselines' source code. Looking inside MaskableActorCriticPolicy, we can see it builds an MLP extractor by initializing an instance of MlpExtractor and creating the policy_net sub-network here, whose layers are defined in this loop. Ultimately the layers' feature sizes are dictated by the "pi" and "vf" lists inside net_arch (see here). If you trace this back, you will notice those parameters can be modified via the net_arch argument in the MaskableActorCriticPolicy constructor.
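As an illustration only (not the actual Stable Baselines code), a hypothetical helper mirroring the idea behind that loop: the first nn.Linear is sized from the feature dimension automatically, so there is no separate "input layer" to configure.
import torch.nn as nn

def build_mlp(feature_dim, layer_sizes, activation=nn.LeakyReLU):
    # The first Linear is created from feature_dim (e.g. 101 observations),
    # so the "input layer" is implicit in the first weight matrix.
    layers, last_size = [], feature_dim
    for size in layer_sizes:
        layers += [nn.Linear(last_size, size), activation()]
        last_size = size
    return nn.Sequential(*layers)

pi_net = build_mlp(101, [64, 64])  # roughly what net_arch's "pi": [64, 64] amounts to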
https://stackoverflow.com/questions/72792063/
Creating an image grid of tensors out of a batch using matplotlib/pytorch
I am trying to create a grid of images (e.g. 3 by 3) from a batch of tensors that will be fed into a GAN through a data loader in the next step. With the code below I was able to transform the tensors into images that are displayed in a grid in the right position. The problem is that each one is displayed in a separate figure, as shown here: Figure 1 Figure 2 Figure 5. How can I put them in one grid and just get one figure returned with all 9 images? Maybe I am making it too complicated. :D In the end the tensors out of real_samples have to be transformed and put into a grid. real_samples = next(iter(train_loader)) for i in range(9): plt.figure(figsize=(9, 9)) plt.subplot(330 + i + 1) plt.imshow(np.transpose(vutils.make_grid(real_samples[i].to(device) [:40], padding=1, normalize=True).cpu(),(1,2,0))) plt.show()
And here is how to display a variable number of wonderful CryptoPunks using matplotlib XD: import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import ImageGrid import matplotlib.image as mpimg row_count = 3 col_count = 3 cryptopunks = [ mpimg.imread(f"cryptopunks/{i}.png") for i in range(row_count * col_count) ] fig = plt.figure(figsize=(8., 8.)) grid = ImageGrid(fig, 111, nrows_ncols=(row_count, col_count), axes_pad=0.1) for ax, im in zip(grid, cryptopunks): ax.imshow(im) plt.show() Please note that the code allows you to generate all the images you want, not only 3 times 3. I have a folder called cryptopunks with a lot of images called #.png (e.g., 1.png, ..., 34.png, ...). Just change the row_count and col_count variable values. For instance, for row_count=6 and col_count=8 you get: If your image files do not have that naming pattern above (i.e., just random names), just replace the first lines with the following ones: import os from pathlib import Path import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import ImageGrid import matplotlib.image as mpimg for root, _, filenames in os.walk("cryptopunks/"): cryptopunks = [ mpimg.imread(Path(root, filename)) for filename in filenames ] row_count = 3 col_count = 3 # Here same code as above. (I have downloaded the CryptoPunks dataset from Kaggle.)
https://stackoverflow.com/questions/72803919/
my nn.sigmoid() gradient different with my manual calculation
I'm doing a manual calculation of LSTM backpropagation in Excel and want to compare it to my code, but I'm having trouble with the gradient of the sigmoid in PyTorch. The output is: tensor([[0.8762]], grad_fn=<SigmoidBackward>) tensor([-0.1238]) epoch: 0 loss: 0.13214068 The first line is the sigmoid value and the second line is the gradient of the sigmoid. Why is the value of the sigmoid gradient -0.1238 when the formula for the sigmoid gradient is σ(x)⋅(1−σ(x))? If I calculate the sigmoid gradient manually the value is 0.10845, but in the code the sigmoid gradient is -0.1238. Is the formula for the sigmoid gradient in PyTorch wrong?
I can't reproduce your error. For one thing, your value of 0.10845 is not correct: remember that the derivative can only be computed as z * (1 - z) when z is already the logistic output, i.e. z = logistic(x). In any case, the value I compute agrees with what PyTorch produces. Here's the logistic function: import numpy as np def logistic(z, derivative=False): if not derivative: return 1 / (1 + np.exp(-z)) else: return logistic(z) * (1 - logistic(z)) logistic(0.8762, derivative=True) This produces 0.20754992931590668. Now with PyTorch: import torch t = torch.Tensor([0.8762]) t.requires_grad = True torch.sigmoid(t).backward() t.grad This produces tensor([0.2075]).
https://stackoverflow.com/questions/72809218/
GAN LOSS of Generator and Discriminator Lowest at First Epoch - Is that normal?
I am trying to train a simple GAN and I noticed that the loss for the generator and discriminator is the lowest in the first epoch? How can that be? Did I miss something? Below you find the plot of the Loss over the iterations: Here is the code I was using: I adapted the code according to your suggest @emrejik. It doesn't seem to have changed much though. I couldn't work with torch.ones() at the suggested lines as I was receiving this error message: "argument size (position 1) must be tuple of ints, not Tensor". Any idea how come? I ran through 10 epochs and this came out now: from glob import glob import sys import torch import torch.nn as nn from torch.utils.data import Dataset, DataLoader from torchvision import transforms import torchvision.utils as vutils import torchvision from torch.utils.tensorboard import SummaryWriter from torchinfo import summary from mpl_toolkits.axes_grid1 import ImageGrid from skimage import io import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np from tqdm import trange manual_seed = 999 path = 'Punks' image_paths = glob(path + '/*.png') img_size = 28 batch_size = 32 device = 'cuda' if torch.cuda.is_available() else 'cpu' transform = transforms.Compose( [ transforms.ToPILImage(), transforms.Resize(img_size), transforms.CenterCrop(img_size), transforms.ToTensor(), transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), ] ) class ImageDataset(Dataset): def __init__(self, paths, transform): self.paths = paths self.transform = transform def __len__(self): return len(self.paths) def __getitem__(self, index): image_path = self.paths[index] image = io.imread(image_path) if self.transform: image_tensor = self.transform(image) return image_tensor if __name__ == '__main__': dataset = ImageDataset(image_paths, transform) train_loader = DataLoader( dataset, batch_size=batch_size, num_workers=2, shuffle=True) class Discriminator(nn.Module): def __init__(self): super().__init__() self.model = nn.Sequential( nn.Linear(784*3, 2048), nn.ReLU(), nn.Dropout(0.3), nn.Linear(2048, 1024), nn.ReLU(), nn.Dropout(0.3), nn.Linear(1024, 512), nn.ReLU(), nn.Dropout(0.3), nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.3), nn.Linear(256, 1), nn.Sigmoid(), ) def forward(self, x): x = x.view(x.size(0), 784*3) output = self.model(x) return output discriminator = Discriminator().to(device=device) class Generator(nn.Module): def __init__(self): super().__init__() self.model = nn.Sequential( nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 2048), nn.ReLU(), nn.Linear(2048, 784*3), nn.Tanh(), ) def forward(self, x): output = self.model(x) output = output.view(x.size(0), 3, 28, 28) return output generator = Generator().to(device=device) lr = 0.0001 num_epochs = 10 loss_function = nn.BCELoss() #noise = torch.randn(batch_size, 100, device=device) optimizer_discriminator = torch.optim.Adam( discriminator.parameters(), lr=lr) optimizer_generator = torch.optim.Adam(generator.parameters(), lr=lr) model = Discriminator().to(device=device) summary(model, input_size=(batch_size, 3, 28, 28)) model = Generator().to(device=device) summary(model, input_size=(batch_size, 100)) image_list = [] Dis_losses = [] Gen_losses = [] iters = 0 epochs = 0 for epoch in trange((num_epochs), bar_format='{desc:<5.5}{percentage:3.0f}%|{bar:120}{r_bar}\n'): for n, real_samples in enumerate(train_loader): batch_size = len(real_samples) real_samples = real_samples.to(device=device) real_samples_labels = torch.ones((batch_size, 
1)).to(device=device) latent_space_samples = torch.randn( (batch_size, 100)).to(device=device) fake_samples = generator(latent_space_samples) fake_samples_labels = torch.zeros( (batch_size, 1)).to(device=device) discriminator.zero_grad() output_discriminator_real = discriminator(real_samples) loss_discriminator_real = loss_function( output_discriminator_real, real_samples_labels) output_discriminator_fake = discriminator(fake_samples) loss_discriminator_fake = loss_function( output_discriminator_fake, fake_samples_labels) loss_discriminator = ( loss_discriminator_real + loss_discriminator_fake)/2 loss_discriminator.backward() optimizer_discriminator.step() latent_space_samples = torch.randn( (batch_size, 100)).to(device=device) generator.zero_grad() fake_samples = generator(latent_space_samples) output_discriminator_fake = discriminator(fake_samples) loss_generator = loss_function( output_discriminator_fake, real_samples_labels) loss_generator.backward() optimizer_generator.step() image_list.append(vutils.make_grid( fake_samples_labels, padding=2, normalize=True)) Dis_losses.append(loss_discriminator.item()) Gen_losses.append(loss_generator.item()) iters += 1 epochs += 1 if n == batch_size - 1: print(f"Epoch: {epoch} Loss D.: {loss_discriminator}") print(f"Epoch: {epoch} Loss G.: {loss_generator}") latent_space_samples = torch.randn((batch_size, 100)).to(device=device) plt.figure(figsize=(10, 5)) plt.title("Generator and Discriminator Loss During Training") plt.plot(Dis_losses, label="D") plt.plot(Gen_losses, label="G") plt.xlabel("iterations") plt.ylabel("Loss") plt.legend() plt.show()
I didn't see a proper use of the loss function for the discriminator. You should pass real samples and generated samples to the discriminator separately. I think you should change your code to a form like this: fake = generator(noise) disc_real = disc(real) loss_disc_real = loss_func(disc_real, torch.ones_like(disc_real)) disc_fake = disc(fake) loss_disc_fake = loss_func(disc_fake, torch.zeros_like(disc_fake)) loss_disc = (loss_disc_real + loss_disc_fake)/2 .... loss_generator = loss_func(disc_fake, torch.ones_like(disc_fake)) ... Plot loss_disc and loss_generator; this should work.
https://stackoverflow.com/questions/72815092/
How to reduce the size of Bert model(checkpoint/model_state.bin) using pytorch
I used torch.quantization.quantize_dynamic to reduce the model size, but it is reducing my prediction accuracy score. I'm using that model file inside Flask and doing some real-time predictions; due to the large size I'm facing issues while predicting. So could anyone please help me reduce the BERT model size using PyTorch and guide me on how to do the real-time predictions?
Here is the doc of torch.quantization.quantize_dynamic; by default dtype is set to torch.qint8. So if you don't want your accuracy to decrease drastically, use the syntax below with torch.float16 instead: torch.quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.float16)
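A minimal sketch, assuming model is your loaded BERT-style model; restricting quantization to the Linear layers is the usual choice for transformer models:
import torch

# float16 dynamic quantization keeps more precision than the default qint8,
# usually at the cost of a smaller size reduction.
quantized_model = torch.quantization.quantize_dynamic(
    model, qconfig_spec={torch.nn.Linear}, dtype=torch.float16
)
torch.save(quantized_model.state_dict(), "model_state_quantized.bin")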
https://stackoverflow.com/questions/72818221/
Overfitting LSTM pytorch
I was following the tutorial on CoderzColumn to implement a LSTM for text classification using pytorch. I tried to apply the implementation on the bbc-news Dataset from Kaggle, however, it heavily overfits, achieving a max accuracy of about 60%. See the train/loss curve for example: Is there any advice (I am quite new to RNN/LSTM), to adapt the model to prevent that high overfiting? The model is taken from the above tutorial and looks kind of like this: class LSTMClassifier(nn.Module): def __init__(self, vocab, target_classes, embed_len = 50, hidden_dim=75, n_layers=1): super(LSTMClassifier, self).__init__() self.n_layers = n_layers self.embed_len = embed_len self.hidden_dim = hidden_dim self.embedding_layer = nn.Embedding(num_embeddings=len(vocab), embedding_dim=embed_len) # self.lstm = nn.LSTM(input_size=embed_len, hidden_size=hidden_dim,dropout=0.2, num_layers=n_layers, batch_first=True) self.lstm = nn.LSTM(input_size=embed_len, hidden_size=hidden_dim, num_layers=n_layers, batch_first=True) self.fc = nn.Linear(hidden_dim, len(target_classes)) def forward(self, X_batch): embeddings = self.embedding_layer(X_batch) hidden, carry = torch.randn(self.n_layers, len(X_batch), self.hidden_dim), torch.randn(self.n_layers, len(X_batch), self.hidden_dim) output, (hidden, carry) = self.lstm(embeddings, (hidden, carry)) return self.fc(output[:,-1]) I would be really thankful for any adive how to adapt the version in the tutorial to use it more effectively on other datasets
Have you tried adding an nn.Dropout layer before self.fc? Check what p = 0.1 / 0.2 / 0.3 will do. Another thing you can do is add regularisation to your training via the weight_decay parameter: optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5) Use small values first, then increase by a factor of 10 and see which gets you the best result. Also, it goes without saying, make sure there are no test data points in the train set, and make sure you did not forget to shuffle your train set: train_loader = DataLoader(train_dataset, batch_size=1024, collate_fn=vectorize_batch, shuffle=True)
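A sketch of how the first two suggestions could look on the tutorial's model (the vocabulary size and class count below are placeholders, and the LSTM hidden state is simply left at its zero default):
import torch
import torch.nn as nn

class LSTMClassifierWithDropout(nn.Module):
    def __init__(self, vocab_size, n_classes, embed_len=50, hidden_dim=75, n_layers=1, p_drop=0.2):
        super().__init__()
        self.embedding_layer = nn.Embedding(vocab_size, embed_len)
        self.lstm = nn.LSTM(embed_len, hidden_dim, num_layers=n_layers, batch_first=True)
        self.dropout = nn.Dropout(p=p_drop)   # try 0.1 / 0.2 / 0.3
        self.fc = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        output, _ = self.lstm(self.embedding_layer(x))
        return self.fc(self.dropout(output[:, -1]))

model = LSTMClassifierWithDropout(vocab_size=10000, n_classes=5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)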
https://stackoverflow.com/questions/72854736/
PyTorch: Handling multiple functions and arguments
I have a large number of neural networks (such that I have to use list comprehension to produce them), say [f_1, f_2, f_3 ...], collectively denoted by F. I have the argument X = [x_1, x_2, x_3 ...], such that input tensor x_i is intended for the network f_i. The tensors I have will be big, owing to the data and then the gradients. Is there an elegant (and efficient) way of obtaining the sequence [f_1(x_1), f_2(x_2), f_3(x_3) ...]?
You could again use a list comprehension: out = [f(x) for f,x in zip(F,X)]
https://stackoverflow.com/questions/72855924/
Difficulties in using jacobian of torch.autograd.functional
I am solving PDE, so I need the jacobian matrix of the residual with respect to variables. Here is my code import torch from torch.autograd.functional import jacobian def get_residual (pgnew, swnew): residual_w = 5*(swnew-swold)+T_w*((pgnew[2:,:,:]-pgnew[1:-1,:,:])-(pc[2:,:,:]-pc[1:-1,:,:])) - T_w*((pgnew[1:-1,:,:]-pgnew[0:-2,:,:])-(pc[1:-1,:,:]-pc[0:-2,:,:])) residual_g = 5*((1-swnew)-(1-swold))+T_g*(pgnew[2:,:,:]-pgnew[1:-1,:,:]) - T_g*(pgnew[1:-1,:,:]-pgnew[0:-2,:,:]) residual = torch.ravel(torch.column_stack((residual_w,residual_g))) return residual if __name__ == '__main__': dt = 0.01 T_w = 10 T_g = 12 swnew = torch.zeros(3, 1, 1, requires_grad=True, dtype=torch.float64) swold = torch.ones(3, 1, 1, requires_grad=True, dtype=torch.float64) pgnew = 2*torch.ones(5, 1, 1, requires_grad=True, dtype=torch.float64) pc = 3*torch.ones(5, 1, 1, requires_grad=True, dtype=torch.float64) unknown = torch.ravel(torch.column_stack((pgnew[1:-1,:,:],swnew))) residual = get_residual(pgnew, swnew) print('Check Jacobian \n', jacobian(get_residual, unknown)) I am following this tutorial; however, it shows an error, namely, get_residual() missing 1 required positional argument: 'swnew', so I change it to print('Check Jacobian \n', jacobian(get_residual(pgnew, swnew),unknown) Then it shows 'Tensor' object is not callable' Thank you
You have to give the jacobian a tuple as second argument, containing the arguments of get_residual: print('Check Jacobian \n', jacobian(get_residual, (pgnew, swnew))) (You may need to make sure that they have the shape you want etc.)
https://stackoverflow.com/questions/72862653/
PyTorch Lightning resume training from last epoch and weights
I am using the PyTorch Lightning Trainer for pre-training a large model. I know I can resume training from old weights, but that does not restore the old hyper-parameters (lr, last_epoch, etc.). Is there any automatic way to resume training? Or do I need to overload the checkpoint callback or CSVLogger to search the old CSV logs and get the last epoch number?
Instead of loading from a checkpoint yourself, use resume_from_checkpoint in the Trainer: # resume from a specific checkpoint trainer = Trainer(resume_from_checkpoint="some/path/to/my_checkpoint.ckpt") See more details here https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#resume-from-checkpoint .
https://stackoverflow.com/questions/72870376/
Good way to down-dimension (extract) of a 3D tensor (or same as numpy)
I have some data stored in a certain 3D tensor data1 = torch.ones(3, 3, 3, requires_grad=True, dtype=torch.float64) data2 = torch.zeros(3, 3, 3, requires_grad=True, dtype=torch.float64) When I perform the calculation temp= data1[:,0,0]+data2[:,0,0] I would like to see the result in form of size ([3])tensor instead of ([3,1,1]) So considering the performance, I should extract from the data1, data2 or temp? How to do this?
As @paime said, you can use single element slices instead of single indexes: >>> data1[:,:1,:1] + data2[:,:1,:1] tensor([[[1.]], [[1.]], [[1.]]], dtype=torch.float64, grad_fn=<AddBackward0>) Alternatively, there are different ways of unsqueezing multiple dimensions at the same time: Using two torch.Tensor.unsqueeze >>> temp.unsqueeze(-1).unsqueeze(-1) Reshaping the tensor with a view: >>> temp.view(*temp.shape, 1, 1) Or with fancy indexing: >>> temp[..., None, None]
https://stackoverflow.com/questions/72872462/
Neural Network for Regression using PyTorch
I am trying to implement a Neural Network for predicting the h1_hemoglobin in PyTorch. After creating a model, I kept 1 in the output layer as this is Regression. But I got the error as below. I'm not able to understand the mistake. Keeping a large value like 100 in the output layer removes the error but renders the model useless as I am trying to implement regression. Data: from sklearn.model_selection import train_test_split X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0) ##### Creating Tensors X_train=torch.tensor(X_train) X_test=torch.tensor(X_test) y_train=torch.LongTensor(y_train) y_test=torch.LongTensor(y_test) class ANN_Model(nn.Module): def __init__(self,input_features=4,hidden1=20,hidden2=20,out_features=1): super().__init__() self.f_connected1=nn.Linear(input_features,hidden1) self.f_connected2=nn.Linear(hidden1,hidden2) self.out=nn.Linear(hidden2,out_features) def forward(self,x): x=F.relu(self.f_connected1(x)) x=F.relu(self.f_connected2(x)) x=self.out(x) return x loss_function = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr = 0.01) epochs = 500 final_losses = [] for i in range(epochs): i = i + 1 y_pred = model.forward(X_train.float()) loss=loss_function(y_pred, y_train) final_losses.append(loss.item()) if i%10==1: print("Epoch number: {} and the loss: {}".format(i, loss.item())) optimizer.zero_grad() loss.backward() optimizer.step() Error:
Since you are performing regression, CrossEntropyLoss() is not the right choice: internally it applies the NLLLoss() function. CrossEntropyLoss() expects C logits for C classes in each prediction, but you have specified only one output. NLLLoss() tries to index into the prediction logits based on the ground-truth value. E.g., in your case, the ground-truth is a single value 14: the loss step tries to index into the 14th logit of your predictions to get its corresponding value so that it can compute the negative log likelihood on it, which is essentially -log(probability_k) where k is the index given by the ground truth. Since you have only one logit in your predictions, it throws an index-out-of-bounds error. For regression problems, you should consider using distance-based losses such as MSELoss(). Try replacing your loss function - loss_function = CrossEntropyLoss() - with loss_function = MSELoss()
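A minimal sketch of the change, assuming model, X_train and y_train are as in the question (for MSELoss the targets should be floats shaped like the predictions, not LongTensor class indices):
import numpy as np
import torch
import torch.nn as nn

loss_function = nn.MSELoss()

# Float targets with shape (N, 1) to match the single-output model.
y_train = torch.tensor(np.asarray(y_train), dtype=torch.float32).reshape(-1, 1)

y_pred = model(X_train.float())        # out_features=1 gives shape (N, 1)
loss = loss_function(y_pred, y_train)  # distance-based loss suited to regression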
https://stackoverflow.com/questions/72874433/
How to use fairseq interactive with multiple gpu?
I am trying to generate new prediction for the model, but I found it is not that intuitive to use fairseq. I found fairseq-interactive could help to generate with a good settings of batch_size, however, it seems that it will use 1 GPU at a time, I wonder if it is possible to use multiple GPU? Hope someone can kindly help! Many thanks :)
You cannot do this natively within fairseq. The best way to do this is to shard your data and run fairseq-interactive on each shard in the background. Be sure to set CUDA_VISIBLE_DEVICES for each shard so you put each shard's generation on a different GPU. This advice also applies to fairseq-generate (which will be significantly faster for large inference jobs).
https://stackoverflow.com/questions/72881589/
Using reshape or view in a certain fashion
import torch import numpy as np a = torch.tensor([[1, 4], [2, 5],[3, 6]]) bb=a.detach().numpy() b = a.view(6).detach().numpy() Element b is like: [1 4 2 5 3 6] How do I reshape back to the following: [1 2 3 4 5 6] This is just an example, want some generic answers, even 3D.
If you want to remain in PyTorch, you can view b in a's shape, then apply a transpose and flatten: >>> b.view(-1,2).T.flatten() tensor([1, 2, 3, 4, 5, 6]) In the 3D case, you can perform similar manipulations using torch.transpose which enables you to swap two axes. You get the desired result by combining it with torch.view: First case (extra dimension last): >>> b = a.view(-1, 1).expand(-1,3).flatten() tensor([1, 1, 1, 4, 4, 4, 2, 2, 2, 5, 5, 5, 3, 3, 3, 6, 6, 6]) >>> b.view(-1,2,3).transpose(0,1).flatten() tensor([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6]) Second case (extra dimension first): >>> b = a.view(1,-1).expand(3,-1).flatten() tensor([1, 4, 2, 5, 3, 6, 1, 4, 2, 5, 3, 6, 1, 4, 2, 5, 3, 6]) >>> b.view(3,-1).T.view(-1,2,3).transpose(0,1).flatten() tensor([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6])
https://stackoverflow.com/questions/72888827/
How to understand the code "cc[bb] += aa" in pytorch?
import torch aa=torch.tensor([[1,2,3],[4,5,6]]).T bb=torch.tensor([0,1,1]).T cc = torch.zeros(2, 2) cc[bb] += aa Then the result is cc=tensor([[1., 4.],[3., 6.]]), why?
Let us reason with pseudo code: >>> aa = [[1,2,3], [4,5,6]].T >>> aa [[1,4], [2,5], [3,6]] >>> bb = [0,1,1].T >>> bb [0, 1, 1] >>> cc = zeros(2, 2) >>> cc [[0,0], [0,0]] The next instruction is an assignment which consists in first indexing cc with the values of bb. Here we are picking entire rows from cc using the indices in bb. Since there are three indices in bb, the result would consist of cc[bb[0]], cc[bb[1]], and cc[bb[2]], but bb[1] and bb[2] are equal, so it comes down to cc[0] and cc[1]. The right-hand side operand is aa and consists of three rows: [1,4], [2,5], and [3,6]. Because the repeated index is written twice and only the last write is kept (this in-place += does not accumulate over repeated indices), the final operation performed is equivalent to (row-wise): cc[0] += [1,4] cc[1] += [3,6] Since cc is initialized with zero values, this sums up to: >>> cc[0] = [1,4] >>> cc[1] = [3,6] This means that: >>> cc [[1,4], [3,6]]
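For comparison, if you actually wanted the repeated index to accumulate both rows instead of keeping only the last write, torch.Tensor.index_put_ with accumulate=True does that (values cast to float to match cc):
import torch

aa = torch.tensor([[1, 2, 3], [4, 5, 6]]).T.float()   # [[1,4],[2,5],[3,6]]
bb = torch.tensor([0, 1, 1])
cc = torch.zeros(2, 2)
cc.index_put_((bb,), aa, accumulate=True)
print(cc)   # tensor([[1., 4.], [5., 11.]]) -- row 1 gets [2,5] + [3,6]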
https://stackoverflow.com/questions/72895220/
Count all different pixel values given a set of images
In order to sanity check the masks for a semantic segmentation task, I would like to know how I can find all the different pixel values in a set of images. I tried: l = [] for img in glob.glob('/content/Maschere/*png'): im = Image.open(img) data = torch.from_numpy(np.asarray(im)) v = torch.unique(data) l.append(v) print(set(l)) The aforementioned code displays the unique pixel values per image; instead, I want the unique values for the whole set of images. NOTE: I get this output format: {tensor([ 2, 255], dtype=torch.uint8), tensor([ 2, 255], dtype=torch.uint8), tensor([ 2, 255], dtype=torch.uint8), tensor([ 2, 255], dtype=torch.uint8), tensor([ 3, 255], dtype=torch.uint8), tensor([ 9, 255], dtype=torch.uint8) I would like to get this kind of result instead: tensor([ 2, 3, 9, 255], dtype=torch.uint8)
I didn't test it, but something along the lines of: l = set() for img in glob.glob('/content/Maschere/*png'): im = Image.open(img) data = torch.from_numpy(np.asarray(im)) v = set(torch.unique(data).tolist()) # plain Python values, so the set deduplicates across images l.update(v) print(l) It maintains a single set which you update with any new values you encounter.
https://stackoverflow.com/questions/72923811/
Model training converges to a fixed value of loss with low accuracy
I have been trying to train a simple model on Chinese MNIST Kaggle dataset. However it keeps converging to a CrossEntropyLoss of 2.708050 even with model training setting picked up from pytorch tutorials. The losses change but converge to a high value with low accuracy (everytime < 10%). There is no error in how the dataset is created. I had initially tried custom made models, training loops as well as test loops, but those didn't work. Finally, I switched to tested functions to figure out the problem. Here is how I have defined the dataset function, initialized using a list of filepaths and target values converted to indices using a value_idx dictionary. class Custom_dataset(Dataset): def __init__(self,filepath,value): data = [cv2.imread(fp,-1) for fp in filepath] data = [i/np.max(i) for i in data] data = torch.tensor(data, dtype=torch.double) data = data.view(-1,1,64,64) self.data = data self.size = len(value) assert len(filepath)==len(value), print('length mismatch') self.index = torch.tensor([value_idx[i] for i in value]).reshape(-1,1) self.target = torch.zeros((self.size,15)).scatter_(dim=1 ,index=self.index#.unsqueeze_(dim=1) ,value=1) print('data shape ',self.data.shape) def __getitem__(self,idx): return (self.data[idx,:],self.target[idx]) def __len__(self): return self.size Here is my training loop and test loop (also picked up from the pytorch tutorials page) def train_loop(dataloader, model, loss_fn, optimizer): size = len(dataloader.dataset) for batch, (X, y) in enumerate(dataloader): X,y = X.to(device), y.to(device) # Compute prediction and loss pred = model(X) loss = loss_fn(pred, y) # Backpropagation optimizer.zero_grad() loss.backward() optimizer.step() if batch % 100 == 0: loss, current = loss.item(), batch * len(X) print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]") def test_loop(dataloader, model, loss_fn): size = len(dataloader.dataset) num_batches = len(dataloader) test_loss, correct = 0, 0 with torch.no_grad(): for X, y in dataloader: pred = model(X) test_loss += loss_fn(pred, y).item() correct += (pred.argmax(1) == y).type(torch.float).sum().item() test_loss /= num_batches correct /= size print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n") and model definition class NeuralNetwork(nn.Module): def __init__(self): super(NeuralNetwork, self).__init__() self.flatten = nn.Flatten() self.linear_relu_stack = Sequential(Linear(64*64,64*32) ,ReLU() ,Linear(64*32,32*32) ,ReLU() ,Linear(32*32,16*32) ,ReLU() ,Linear(32*16,16*16) ,ReLU() ,Linear(16*16,15) ,Sigmoid(),) def forward(self, x): x = self.flatten(x) logits = self.linear_relu_stack(x) return logits with the final code being model_mk_47 = NeuralNetwork().double().to(device) NUM_EPOCHS=10 for i in tqdm(range(NUM_EPOCHS)): train_loop(train_dl,model_mk_47, CrossEntropyLoss(), Adam(model_mk_47.parameters()) )
Calculating accuracy with sigmoid is not an issue since you are using argmax; softmax and sigmoid return different values, but in the same order. However, one issue I'm seeing with your code is that you are including the activation within your forward pass. What I would encourage you to do is remove the Sigmoid from your model, because CrossEntropyLoss takes in: "The input is expected to contain raw, unnormalized scores for each class. input has to be a Tensor of size (C) for unbatched input, (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K >= 1 for the K-dimensional case." The last being useful for higher-dimensional inputs, such as computing cross-entropy loss per pixel for 2D images. This means it is expecting logits (the output of the last layer without any activation). If you want to use a sigmoid (I would recommend softmax), use the functional softmax after the loss has been calculated (or include a flag which only applies the activation when self.training == True in the model code). I assume you are measuring the performance of your model on your train data. If this is not improving, it either means your data pipeline is faulty or your approach isn't working. Are you sure that your labels align properly with your images?
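A sketch of the suggested change on the question's model: keep the last Linear layer as the raw output and only apply softmax outside the loss, when probabilities are actually needed.
import torch.nn as nn
import torch.nn.functional as F

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(64 * 64, 64 * 32), nn.ReLU(),
            nn.Linear(64 * 32, 32 * 32), nn.ReLU(),
            nn.Linear(32 * 32, 16 * 32), nn.ReLU(),
            nn.Linear(16 * 32, 16 * 16), nn.ReLU(),
            nn.Linear(16 * 16, 15),   # no Sigmoid: CrossEntropyLoss wants raw logits
        )

    def forward(self, x):
        return self.linear_relu_stack(self.flatten(x))

# At evaluation time, probabilities (if needed) come from softmax on the logits:
# probs = F.softmax(model(X), dim=1)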
https://stackoverflow.com/questions/72930012/
How to solve RuntimeError: CUDA out of memory?
I try to run an inference using a cli to get the predictions from a detection and recognition model. With cuda10.2 it takes 15 mins for the inference to complete but I have cuda11.3 which takes 3 hours, I want to reduce this time. Note : My hardware does not support cuda10.2. hence I have following packages installed, cudatoolkit 11.3.1 h2bc3f7f_2 pytorch 1.10.0 py3.7_cuda11.3_cudnn8.2.0_0 pytorch torchvision 0.11.0 py37_cu113 pytorch I get this error while I run the inference cli, RuntimeError: CUDA out of memory. Tried to allocate 2.05 GiB (GPU 0; 5.81 GiB total capacity; 2.36 GiB already allocated; 1.61 GiB free; 2.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF Tried : To change the batch_size both for detection and recognition Kindly help! Thank you.
I use the following solutions whenever I encounter "CUDA out of memory" error. Here are the solutions, from simple to hard: 1- Try to reduce the batch size. First, train the model on each datum (batch_size=1) to save time. If it works without error, you can try a higher batch size but if it does not work, you should look to find another solution. 2- Try to use a different optimizer since some optimizers require less memory than others. For instance, SGD requires less memory than Adam. 3- Try to use a simpler model with fewer parameters. 4- Try to divide the model into two (or more than two) separate parts. Then update each part's parameters separately in each epoch. Note that whenever you want to compute gradients and update parameters of one part, the parameters of the other part of the model should be frozen. This leads to a lower amount of RAM required in each step which may solve the problem. 5- Lastly, if none of the above solutions work, GPU computation cannot be used. Note that you can use a combination of these solutions which may lead to preventing the error. For instance, perhaps using a smaller batch size and simpler optimizer may work in some situations.
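As a small illustration of points 1 and 2, plus the allocator hint from the error message itself (dataset and model here are placeholders for your own objects, and the environment variable must be set before any CUDA work, or exported in the shell):
import os
import torch
from torch.utils.data import DataLoader

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # allocator hint from the error message

loader = DataLoader(dataset, batch_size=1)                 # 1- smallest possible batch
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)   # 2- SGD keeps less optimizer state than Adam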
https://stackoverflow.com/questions/72950365/
BCEWithLogitsLoss Multi-label Classification
I'm a bit confused about how to accumulate the batch losses to obtain the epoch loss. Two questions: Is #1 (see comments below) correct way to calculate loss with masks) Is #2 correct way to report epoch loss) optimizer = torch.optim.Adam(model.parameters, lr=1e-3, weight_decay=1e-5) criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight) for epoch in range(10): EPOCH_LOSS = 0. for inputs, gt_labels, masks in training_dataloader: optimizer.zero_grad() outputs = model(inputs) #1: Is this the correct way to calculate batch loss? Do I multiply batch_loss with outputs.shape[0[ before adding it to epoch_loss? batch_loss = (masks * criterion(outputs, gt_labels.float())).mean() EPOCH_LOSS += batch_loss loss.backward() optimizer.step() #2: then what do I do here? Do I divide the EPOCH_LOSS with len(training_dataloader)? print(f'EPOCH LOSS: {EPOCH_LOSS/len(training_dataloader)}:.3f')
In your criterion, you have got the default reduction field set (see the docs), so your masking approach won't work. You should use your masking one step earlier (prior to the loss calculation) like so: batch_loss = (criterion(outputs*masks, gt_labels.float()*masks)).mean() OR batch_loss = (criterion(outputs[masks], gt_labels.float()[masks])).mean() But, without seeing your data it might be a different format. You might want to check that this is working as expected. In regards to your actual question, it depends on how you want to represent your data. What I would do is just to sum all of the batches' losses and represent that, but you can choose to divide by the number of batches if you want to represent the AVERAGE loss of each batch in the epoch. Because this is purely an illustrative property of your model, it actually doesn't matter which one you pick, as long as it's consistent between epochs to represent the fact that your model is learning.
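Another common pattern, shown here only as an alternative sketch (assuming masks is a 0/1 float tensor with the same shape as outputs): ask the criterion for per-element losses with reduction='none' and do the masking and averaging yourself.
import torch

criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight, reduction="none")

per_elem = criterion(outputs, gt_labels.float())      # same shape as outputs
batch_loss = (per_elem * masks).sum() / masks.sum()   # mean over unmasked elements only

EPOCH_LOSS += batch_loss.item()                       # .item() so the graph is not kept around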
https://stackoverflow.com/questions/72959174/
unable to load torchaudio even after installing
I'm trying to use torchaudio but I'm unable to import it. I have installed it and it is also visible through the pip list. <ipython-input-6-4cf0a64f61c0> in <module> ----> 1 import torchaudio ModuleNotFoundError: No module named 'torchaudio' pytorch-lightning 1.2.0 torch 1.10.1 torchaudio 0.10.1 torchvision 0.11.2 WARNING: You are using pip version 21.1.2; however, version 21.3.1 is available. You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
Since you are using linux and CPU, you should consider uninstalling every pytorch related packages and re-install them via: pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu as shown here.
https://stackoverflow.com/questions/72962998/
Dataloaders TypeError: __getitem__() takes 1 positional argument but 2 were given
It's my first time approaching PyTorch. I built a dataset class to load tensors with a DataLoader, like this: train_loader = DataLoader(dataset_train, batch_size=6, drop_last=True) But at the following line: for i,train_batch in enumerate(train_loader): I receive this error: TypeError: __getitem__() takes 1 positional argument but 2 were given Any help would be great; I'm stuck on it. My concern is that it could depend on the library versions I'm using: matplotlib 3.5.2 numpy 1.23.0 opencv-python 4.6.0.66 torch 1.12.0 torch-tb-profiler 0.4.0 torchaudio 0.12.0 torchvision 0.13.0 Thank you.
I believe you expected to enumerate your dataloader: for i, train_batch in enumerate(dataloader): # train loop
https://stackoverflow.com/questions/72963448/
How can I delete the background outside the drawn contours?
How can I delete the background outside the drawn contours? My main goal is to measure the size of ONLY the cardboard boxes. I have 2 different pieces of code: the first measures EVERYTHING using an ArUco marker, and the second detects boxes with YOLO (I need this because the measuring code detects everything). Both of them draw contours. Since my measuring code measures everything, I want to remove the background except for the contoured objects. How can I manage this? Please help.
Create a black mask image. For each detection, draw its contour filled on the mask: cv2.drawContours(mask, contours, -1, color=(255, 255, 255), thickness=cv2.FILLED) and afterwards apply bitwise_and between the image and this mask.
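A minimal sketch of that idea, assuming img is the frame and contours already holds the detected box contours:
import cv2
import numpy as np

mask = np.zeros(img.shape[:2], dtype=np.uint8)                         # black single-channel mask
cv2.drawContours(mask, contours, -1, color=255, thickness=cv2.FILLED)  # fill the detected boxes
result = cv2.bitwise_and(img, img, mask=mask)                          # background outside the contours turns black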
https://stackoverflow.com/questions/72973571/
MultiplicativeLR scheduler not working properly when call scheduler.step()
In the PyTorch Lightning framework, I am configuring the optimizers like this: def configure_optimizers(self): opt = torch.optim.Adam(self.model.parameters(), lr=cfg.learning_rate) #modified to fit lightning sch = torch.optim.lr_scheduler.MultiplicativeLR(opt, lr_lambda = 0.95) #decrease of 5% every epoch return [opt], [sch] Then in the training_step, I can either call the lr_scheduler manually or let Lightning do it automatically. The fact is that in either case I get this kind of error: lr_scheduler["scheduler"].step() File "/home/lsa/anaconda3/envs/randla_36/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 152, in step values = self.get_lr() File "/home/lsa/anaconda3/envs/randla_36/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 329, in get_lr for lmbda, group in zip(self.lr_lambdas, self.optimizer.param_groups)] File "/home/lsa/anaconda3/envs/randla_36/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 329, in <listcomp> for lmbda, group in zip(self.lr_lambdas, self.optimizer.param_groups)] TypeError: 'float' object is not callable But if I use any other scheduler, not only does VSCode recognize it as belonging to PyTorch, I also do not get this error. PyTorch version 1.10, Lightning version 1.5
I think you need to change the value of 'lr_lambda': it must be a callable, not a float. Here is the link to the documentation: https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiplicativeLR.html lr_lambda (function or list) – A function which computes a multiplicative factor given an integer parameter epoch, or a list of such functions, one for each group in optimizer.param_groups. So, if you want a decrease of 5% every epoch, you could do the following: def configure_optimizers(self): opt = torch.optim.Adam(self.model.parameters(), lr=cfg.learning_rate) #modified to fit lightning lmbda = lambda epoch: 0.95 sch = torch.optim.lr_scheduler.MultiplicativeLR(opt, lr_lambda = lmbda) #decrease of 5% every epoch return [opt], [sch]
https://stackoverflow.com/questions/72981846/
Upsampling Only the Last Two Dimensions of a 5D Tensor
I have a 5D tensor x (frames of a video) and I want to upsample the spatial size (the last two dimensions) of this tensor but when I use upsampling, the last three dimensions of the tensor are upsampled. For upsampling I use the following class: class Upsample(nn.Module): def __init__(self, scale_factor, mode, align_corners=False): self.interp = interpolate self.scale_factor = scale_factor self.mode = mode self.align_corners=align_corners def forward(self, x): x = self.interp(x, scale_factor=self.scale_factor, mode=self.mode) return x And for example, the main class that I want to upsample a 5D tensor is as follows (I condensed my code): class Main(nn.Module): def __init__(self): super(Main, self).__init__() self.upsample = Upsample(scale_factor=2, mode='trilinear') def forward(self, x): x = self.upsample(x) return x To be clearer, for example by applying upsampling on a tensor of x=(2,4,3,10,20), the outcome based on the aforementioned class is x=(2,4,6,20,40) but I need to have x=(2,4,3,20,40). What is the problem and how can I solve this?
The trilinear mode of PyTorch's interpolate function interpolates all three trailing dimensions of a 5D tensor, which includes your third dimension. If you don't mind reshaping your input tensor, you can fold the time dimension into the channels and apply bicubic interpolation to the last two dimensions only: class Upsample(nn.Module): def __init__(self, scale_factor, mode='bicubic', align_corners=False): super().__init__() self.interp = interpolate self.scale_factor = scale_factor self.mode = mode self.align_corners = align_corners def forward(self, x): B, C, T, W, H = x.size() x = x.reshape(B, C*T, W, H) x = self.interp(x, scale_factor=self.scale_factor, mode=self.mode) x = x.reshape(B, C, T, x.size(-2), x.size(-1)) # the spatial size has changed, so read it back from x return x
https://stackoverflow.com/questions/73004534/
Cannot import name 'ResNet50_Weights' from 'torchvision.models.resnet'
I was previously loading a ResNet model with the ResNet50_Weights parameter successfully, but then suddenly I started getting the following error: Traceback (most recent call last): File "splitting_models.py", line 3, in <module> from torchvision.models.resnet import ResNet50_Weights ImportError: cannot import name 'ResNet50_Weights' from 'torchvision.models.resnet' (/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torchvision/models/resnet.py) Here is the import: from torchvision.models import ResNet50_Weights How could I go about fixing this? PyTorch version: 1.2.0 TorchVision version: 0.4.0 EDIT Upgrading using pip install --upgrade torch torchvision to the following versions fixed the issue: PyTorch version: 1.12.0 TorchVision version: 0.13.0
This problem took me some days to solve. Before running code that loads PyTorch models, make sure you are connected to a stable network. This is because the first time you run a pretrained model such as resnet50, alexnet or resnet18, its weights are downloaded; if the download fails part-way, the broken file stays in the cache and raises this kind of error when you re-run. To solve the problem, locate the cached file, delete it, and re-run the code on a stable network. In my case the cached file was at C:\Users\user\.cache\torch\hub\checkpoints\resnet18-f37072fd.pth. I hope this helps.
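A small helper for finding and clearing that cache from Python, assuming a reasonably recent torch (the exact path varies per machine; torch.hub.get_dir() reports the cache root):
import os
import shutil
import torch

checkpoints = os.path.join(torch.hub.get_dir(), "checkpoints")
print(checkpoints)   # e.g. ~/.cache/torch/hub/checkpoints or C:\Users\<user>\.cache\torch\hub\checkpoints
shutil.rmtree(checkpoints, ignore_errors=True)   # remove the partial download, then re-run on a stable network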
https://stackoverflow.com/questions/73029425/
Pytorch fine tuned CNN model giving always the same prediction in training and validation data
I decided to move from TensorFlow to Pytorch and I am with some issues in understanding how it works. I tried to follow This Tutorial which has a very simple example of Feature Extraction from ImegeNet CNNs for a binary classification problem. In summary, in my code, the network is defined and called as follows #Function for feature extraction def set_parameter_requires_grad(model, feature_extracting): if feature_extracting: for param in model.parameters(): param.requires_grad = False # Initialize these variables which will be set in this if statement. Each of these # variables is model specific. def initialize_model(model_name, num_classes, feature_extract, use_pretrained): model_ft = None input_size = 0 if model_name == "resnet": """ Resnet18 """ if use_pretrained: model_ft = models.resnet18(weights="ResNet18_Weights.IMAGENET1K_V1") else: model_ft = models.resnet18(weights=None) set_parameter_requires_grad(model_ft, feature_extract) num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs, num_classes) input_size = 224 return model_ft, input_size model_ft, input_size = initialize_model("resnet", 2, feature_extract = True, use_pretrained = True) The transforms I do in the data are defined below: data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(input_size), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(input_size), transforms.CenterCrop(input_size), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } I call the training this way: if feature_extract: params_to_update = [] for name, param in model_ft.named_parameters(): if param.requires_grad: params_to_update.append(param) print("\t", name) else: for name, param in model_ft.named_parameters(): if param.requires_grad: print("\t", name) # Observe that all parameters are being optimized optimizer_ft = optim.SGD(params_to_update, lr=0.001, momentum=0.9) criterion = nn.CrossEntropyLoss() model_ft, hist = train_model(model_ft, dataloaders_dict, criterion, optimizer_ft, num_epochs=num_epochs, is_inception=(model_name == "inception")) The model has 94% validation accuracy. After training, I save and load the model again. # Save the model (SIMPLEST WAY) print("save model the easiest way") torch.save(model_ft, "MY_FIRST_TORCH_MODEL") print("Load model the easiest way") model = torch.load("MY_FIRST_TORCH_MODEL") print("Evaluate model") model.eval() So here it goes my problem: I don't know how to process a testing data sample in order to make it be tested by the Pytorch model. I tried to load and test a training/validation sample just to be sure that the same accuracies in the training and test will also remain when I do a manual test. Here is what I did: for name in os.listdir("./hymenoptera_data/train/bees/"): # testing one example image print("Predicting "+ "./hymenoptera_data/train/bees/"+name) img = cv2.imread("./hymenoptera_data/train/bees/"+name) img_to_test= cv2.resize(img, (224,224), interpolation = cv2.INTER_AREA) print(img_to_test.shape) img_to_test = img_to_test.astype('float32') test_x = img_to_test.reshape(1, 3, 224, 224) test_x = torch.from_numpy(test_x) output = model(test_x.cuda()) pred = torch.argmax(output, dim=1) print(output) print(pred) So, no matter which sample I choose to test, the prediction is always the same What am I doing wrong here? Should the model be loaded in a different way? 
Is my testing function correct, especially considering that here I am doing feature extraction instead of fine-tuning? Should I pre-process the data using the same transforms.Compose pipeline used for training and validation? What else should be added to correctly test an image, given the example above?
Be careful: img_to_test is in HWC format. You are reshaping the image when you should be transposing its axes from HWC to CHW. You may want to replace the following: >>> test_x = img_to_test.reshape(1, 3, 224, 224) with a transpose of the axes instead, adding the batch dimension only afterwards: >>> test_x = torch.from_numpy(img_to_test.transpose(2,0,1)).unsqueeze(0)
https://stackoverflow.com/questions/73033021/
Segfault in pytorch on M1: torch.from_numpy(X).float()
I'm using an M1. I'm trying to use pytorch for a conv net. I have a numpy array that I'm trying to turn into a torch tensor. When I call torch.from_numpy(X) pytorch throws an error that it got a double when it expected a float. When I call torch.from_numpy(X).float() on a friends computer, everything is fine. But when I call this command on my computer, I get a segfault. Has anyone seen this / know what might be happening / know how to fix?
What's your PyTorch version? I encountered the same problem on my MacBook Pro M1; my PyTorch version was 1.12.0 at first. Then I downgraded it to version 1.10.0 and the problem was solved. I suspect this has something to do with M1 compatibility in newer torch versions. Concretely, I first uninstalled torch using pip3 uninstall torch and then reinstalled with pip3 install torch==1.10.0 But if you are using torchvision or other affiliated packages, you may need to downgrade them too.
https://stackoverflow.com/questions/73044398/
How do I save custom functions and parameters in PyTorch?
Firstly, the network function is defined: def softmax(X): X_exp=torch.exp(X) partition=X_exp.sum(1,keepdim=True) return X_exp/partition def net(X): return softmax(torch.matmul(X.reshape(-1,W.shape[0]),W)+b) Then update the function parameters by training train(net,train_iter,test_iter,cross_entropy,num_epoches,updater) Finally, the function is saved and loaded for prediction PATH='./net.pth' torch.save(net,PATH) saved_net=torch.load(PATH) predict(saved_net,test_iter,6) The prediction results show that the updated parameters W and b are not saved and loaded. What is the correct way to save custom functions and updated parameters ?
The correct way is to implement your own nn.Module and then use the provided utilities to save and load the model's state (its weights) on demand. You must define two functions: __init__: the class initializer logic, where you define your model's parameters. forward: the function which implements the model's forward pass. A minimal example would be of the form: class LinearSoftmax(nn.Module): def __init__(self, in_feat, out_feat): super().__init__() self.W = nn.Parameter(torch.rand(in_feat, out_feat)) # registered as a parameter so it appears in state_dict self.b = nn.Parameter(torch.rand(out_feat)) def softmax(self, X): X_exp = torch.exp(X) partition = X_exp.sum(1, keepdim=True) return X_exp / partition def forward(self, X): return self.softmax(torch.matmul(X.reshape(-1, self.W.shape[0]), self.W) + self.b) You can initialize a new model by doing: >>> model = LinearSoftmax(10, 3) You can then save and load the weights W and b of a given instance: save the dictionary returned by nn.Module.state_dict with torch.save: >>> torch.save(model.state_dict(), PATH) load the weights into memory with torch.load and mount them on the model with nn.Module.load_state_dict: >>> model.load_state_dict(torch.load(PATH))
https://stackoverflow.com/questions/73046375/
Can I use pytorch .backward function without having created the input forward tensors first?
I have been trying to understand RNNs better and am creating an RNN from scratch myself using numpy. I am at the point where I have calculated a Loss but it was suggested to me that rather than do the gradient descent and weight matrix updates myself, I use pytorch .backward function. I started to read some of the documentation and posts here about how it works and it seems like it will calculate the gradients where a torch tensor has requires_grad=True in the function call. So it seems that unless create a torch tensor, I am not able to use the .backward. When I try to do this on the loss scalar, I get a 'numpy.float64' object has no attribute 'backward' error. I just wanted to confirm. Thank you!
Yes, this will only work on PyTorch tensors. If the tensors are on CPU, they are basically numpy arrays wrapped in the PyTorch tensor API (i.e., running .numpy() on such a tensor returns exactly the underlying data, which can be modified in place, etc.).
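A tiny example of the difference: keep the loss as a torch scalar, built from tensors with requires_grad=True, and .backward() is available, whereas a numpy.float64 has no graph attached.
import numpy as np
import torch

x = torch.tensor(np.array([1.0, 2.0, 3.0]), dtype=torch.float32)  # data coming from numpy
w = torch.zeros(3, requires_grad=True)                            # weights we want gradients for

loss = ((x * w - 1.0) ** 2).mean()   # stays a torch scalar, so backward() works
loss.backward()
print(w.grad)                        # gradients are now populated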
https://stackoverflow.com/questions/73057998/
Split PyTorch tensor into overlapping chunks
Given a batch of images of shape (batch, c, h, w), I want to reshape it into (-1, depth, c, h, w) such that the i-th "chunk" of size d contains frames i -> i+d. Basically, using .view(-1, d, c, h, w) would reshape the tensor into d-size chunks where the index of the first image would be a multiple of d, which isnt what I want. Scalar example: if the original tensor is something like: [1,2,3,4,5,6,7,8,9,10,11,12] and d is 2; view() would return : [[1,2],[3,4],[5,6],[7,8],[9,10],[11,12]]; however, I want to get: [[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,9],[9,10],[10,11],[11,12]] I wrote this function to do so: def chunk_slicing(data, depth): output = [] for i in range(data.shape[0] - depth+1): temp = data[i:i+depth] output.append(temp) return torch.Tensor(np.array([t.numpy() for t in output])) However I need a function that is useable as part of a PyTorch model as this function causes this error : RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
IIUC, You need torch.Tensor.unfold. import torch x = torch.arange(1, 13) x.unfold(dimension = 0,size = 2, step = 1) tensor([[ 1, 2], [ 2, 3], [ 3, 4], [ 4, 5], [ 5, 6], [ 6, 7], [ 7, 8], [ 8, 9], [ 9, 10], [10, 11], [11, 12]]) Another example with size = 3 and step = 2. >>> torch.arange(1, 10).unfold(dimension = 0,size = 3, step = 2) tensor([[1, 2, 3], # window with size = 3 # step : ---1--2--- [3, 4, 5], # 'step = 2' so start from 3 [5, 6, 7], [7, 8, 9]])
https://stackoverflow.com/questions/73061237/
cudnn error when running pytorch on Google Colab
Since yesterday when I try to run Pytorch using GPU on Google Colab I recieve the error provided below. Previously it worked fine. I have tried to install different versions of Pytorch and I have got different errors. # Use PyTorch to check versions, CUDA version and cuDNN import torch print("PyTorch version: ") print(torch.__version__) print("CUDA Version: ") print(torch.version.cuda) print("cuDNN version is: ") print(torch.backends.cudnn.version()) PyTorch version: 1.12.0+cu113 CUDA Version: 11.3 cuDNN version is: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-23-93b5c974c4be> in <module>() 8 print(torch.version.cuda) 9 print("cuDNN version is: ") ---> 10 print(torch.backends.cudnn.version()) 1 frames /usr/local/lib/python3.7/dist-packages/torch/backends/cudnn/__init__.py in version() 48 def version(): 49 """Returns the version of cuDNN""" ---> 50 if not _init(): 51 return None 52 return __cudnn_version /usr/local/lib/python3.7/dist-packages/torch/backends/cudnn/__init__.py in _init() 39 raise RuntimeError( 40 'cuDNN version incompatibility: PyTorch was compiled against {} ' ---> 41 'but linked against {}'.format(compile_version, runtime_version)) 42 return True 43 else: RuntimeError: cuDNN version incompatibility: PyTorch was compiled against (8, 3, 2) but linked against (8, 0, 5)
Use this code to upgrade Python to newer version(3.9), this solved a problem for me. !wget -O mini.sh https://repo.anaconda.com/miniconda/Miniconda3-py39_4.9.2-Linux-x86_64.sh !chmod +x mini.sh !bash ./mini.sh -b -f -p /usr/local !conda install -q -y jupyter !conda install -q -y google-colab -c conda-forge !python -m ipykernel install --name "py39" --user
https://stackoverflow.com/questions/73062764/
In simple terms, what is the relationship between the GPU, Nvidia driver, CUDA and cuDNN in the context for using a deep learning framework?
I have always been doing deep learning on Google Colab or on school clusters that have everything set up nicely. Recently I needed to set up a workstation to do deep learning from scratch and I realized I have very limited understanding of the things that I need to install to run a framework like tensorflow or pytorch on GPU. So can anyone explain in simple term possible, what is purpose of Nvidia driver, CUDA and cuDNN? How do they work together or on top of one another, and why do I need to install all of them for tensorflow/pytorch?
Python code runs on the CPU, not the GPU. That would be rather slow for complex neural network layers like LSTMs or CNNs. Hence, TensorFlow and PyTorch know how to let cuDNN compute those layers. cuDNN requires CUDA, and CUDA requires the NVidia driver. All of the last three components are provided by NVidia; they've simply decided to organize their code that way.
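Once the driver, CUDA and cuDNN are in place, the stack can be sanity-checked from Python, for example:
import torch

print(torch.__version__)                 # PyTorch build
print(torch.version.cuda)                # CUDA version the build was compiled against
print(torch.backends.cudnn.version())    # cuDNN version PyTorch is linked to
print(torch.cuda.is_available())         # whether the NVidia driver/GPU can actually be used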
https://stackoverflow.com/questions/73074402/
nn.Parameter not getting updated not sure about the usage
I have declared two nn.Parameter() variables with requires_grad=True and I am using those in a different function that's being called inside the init method of the class where variables are declared. lparam and rparam are not getting updated My question is am I doing it the right way? if not how it should be done? here is the code example: class LG(BaseNetwork): def __init__(self, opt): super().__init__() self.opt = opt self.lparam = nn.Parameter(torch.zeros(1), requires_grad=True).cuda(device=opt.gpu_ids[0]) self.rparam = nn.Parameter(torch.zeros(1), requires_grad=True).cuda(device=opt.gpu_ids[0]) def foo(self, a, b, k=1.0, lparam=0, rparam=0): t = bar(a, b, k=k, lparam=lparam, rparam=rparam) return t def forward(self, a, b): x = self.foo(a, b, k=self.opt.k, lparam=self.lparam, rparam=self.rparam) return x BaseNetwork is just initializing functions and uses nn.Module def bar(a, b, k=1.0, lparam=0, rparam=0): return n(a) * (b.std() * (k * lparam)) + (b.mean() * (k * rparam)) When I print the named params I can not get lparam and rparam
Thanks, I got the solution here: https://discuss.pytorch.org/t/nn-parameter-not-getting-updated-not-sure-about-the-usage/157226 I had to remove the .cuda(device=opt.gpu_ids[0]) call: .cuda() on an nn.Parameter returns a plain tensor that is no longer registered as a parameter of the module, so it never showed up in named_parameters(); the parameter is supposed to simply follow the device that the model is put on.
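For reference, a sketch of the corrected declaration (the forward computation here is just a placeholder): keep the raw nn.Parameter as the attribute and move the whole module to the GPU afterwards, so the parameter shows up in named_parameters() and gets updated.
import torch
import torch.nn as nn

class LG(nn.Module):
    def __init__(self):
        super().__init__()
        self.lparam = nn.Parameter(torch.zeros(1))   # no .cuda() here
        self.rparam = nn.Parameter(torch.zeros(1))

    def forward(self, a, b):
        return a * self.lparam + b * self.rparam     # placeholder computation

model = LG().cuda()                                    # moves lparam/rparam together with the module
print([name for name, _ in model.named_parameters()])  # ['lparam', 'rparam']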
https://stackoverflow.com/questions/73074622/
How do I convert a 3*1 vector into a 2*2 lower triangular matrix in pytorch
I have created a network in pytorch, whose output is B*N*H*W. I want that N is equal to 3 and then convert the output into a 2*2 lower trangular matrix with a upper zero in the 2nd demension. There may be two ways to achieve that. class Net(nn.Module): def __init__(self, in_ch=3, out_ch=N): super().__init__(): self.net = nn.Sequatial([...]) def forward(self, x): return self.net(x) First, let the network output be B*4*H*W, and then mutiple a mask of [1,0,1,1] in the 2nd dimension to make one channel to be zero. This way can get a 2*2 lower trangular matrix with a upper zero, while I don't know if there will be information lost. And I don't know how to check the loss of information while training. class Net(nn.Module): def __init__(self, in_ch=3, out_ch=4): super().__init__(): self.net = nn.Sequatial([...]) self.mask = torch.FloatTensor([1.,0.,1.,1.]).cuda() def forward(self, x): return torch.einsum('abij,b->abij', self.net(x), self.mask) Second, I think that the direct solution is to output a B*3*H*W tensor and convert the 2nd dimension into a 2*2 lower trangular matrix with a upper zero. But I don't have an effictive approach to implementing this. The transform operation in 2nd dimension may be like: array[2.3, 5.1, 6.3] --> array[[2.3, 0.], [5.1, 6.3]] There are still two questions. How to convert the 2nd dimension of B*3*H*W to a tensor of shape B*2*2*H*W and if the first method will make the information lost while training.
Try this: You can easily calculate N from len(a) as N*(N+1)/2 = len(a) => N Here is the numpy version: a = np.array([2.3, 5.1, 6.3]) N = 2 c = np.zeros((N, N)) c[np.tril_indices(N)] = a Output: c >array([[2.3, 0. ], [5.1, 6.3]]) Here is the pytorch version: a = torch.tensor([2.3, 5.1, 6.3]) c = torch.zeros(N, N) c[torch.tril_indices(N, N, offset=0).tolist()] = a c Output: tensor([[2.3000, 0.0000], [5.1000, 6.3000]])
https://stackoverflow.com/questions/73075280/
Why this operation results in a tensor full of 'nan'?
err = reduce((mu_students - teacher_pred)**2, 'b h w vec -> b h w', 'sum') where mu_students and teacher_pred are two tensors of size (1,14,14,256). The two tensors do not contain NaN values before the reduce. Moreover, this exact same operation works fine with the previous layer of the network, when mu_students and teacher_pred have size (1,28,28,128). The error presents itself only on one kind of image (but it shouldn't be related to size, because images are always resized to 224).
Could you provide more about the function reduce and the input tensors? By assuming that; (1) reduce is from einops and written as from einops import reduce as reduce (2) mu_students and teacher_pred is random tensors, when I try this code on Google Colab, it works well without NaN. !pip install einops from einops import reduce as reduce mu_students = torch.rand(1,14,14,256) teacher_pred = torch.rand(1,14,14,256) err = reduce((mu_students - teacher_pred)**2, 'b h w vec -> b h w', 'sum') The values are, for example, # mu_students tensor([[[[0.4079, 0.5835, 0.6807, ..., 0.6041, 0.7366, 0.9291], [0.0338, 0.9161, 0.7018, ..., 0.6035, 0.1816, 0.4059], [0.5949, 0.9535, 0.1460, ..., 0.4049, 0.5120, 0.0734], ... # teacher_pred tensor([[[[0.7493, 0.2193, 0.6465, ..., 0.6262, 0.0270, 0.5532], [0.5343, 0.9384, 0.9916, ..., 0.2127, 0.0370, 0.6322], [0.3568, 0.7474, 0.5562, ..., 0.0589, 0.1356, 0.6062], ... # err tensor([[[42.6367, 36.5668, 42.7598, 37.0643, 40.7744, 43.6076, 45.5338, 45.3421, 40.0736, 45.2035, 43.3516, 40.8768, 39.1142, 40.8736], [50.8344, 40.1921, 40.7194, 44.3139, 37.7520, 39.2365, ... I wonder if this also works in your environment. Do you use specific tensors as mu_students or teacher_pred?
https://stackoverflow.com/questions/73082317/
AssertionError: If capturable=False, state_steps should not be CUDA tensors
I get this error while loading model weights of a previous epoch on Google Colab. I'm using PyTorch version 1.12.0. I can't downgrade to a lower version because some external libraries I'm using require PyTorch 1.12.0. Thanks!
If you are using PyTorch 1.12.0 with CUDA 11.6/11.7 binaries, then in your shell or command prompt paste the following: pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116 The Adam optimizer regression was fixed in the updated torch version.
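If upgrading really is not an option, a workaround that is commonly suggested for this 1.12.0 regression (a sketch only, not verified for every setup; "optimizer" and the checkpoint layout are assumptions) is to flip the optimizer's capturable flag after loading its state dict, which bypasses the assertion:
# assuming `optimizer` is the torch.optim.Adam instance whose state was just loaded
optimizer.load_state_dict(checkpoint["optimizer"])  # hypothetical checkpoint layout
for group in optimizer.param_groups:
    group["capturable"] = True  # works around "If capturable=False, state_steps should not be CUDA tensors"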
https://stackoverflow.com/questions/73095460/
Error happens when importing torch (PyTorch)
I'm trying to use PyTorch, but when I do import torch I get: --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-2-eb42ca6e4af3> in <module> ----> 1 import torch C:\Big_Data_app\Anaconda3\lib\site-packages\torch\__init__.py in <module> 124 err = ctypes.WinError(last_error) 125 err.strerror += f' Error loading "{dll}" or one of its dependencies.' --> 126 raise err 127 elif res is not None: 128 is_loaded = True OSError: [WinError 182] <no description> Error loading "C:\Big_Data_app\Anaconda3\lib\site-packages\torch\lib\shm.dll" or one of its dependencies. I'm not sure what happened. I installed PyTorch around the end of last year (I think); I don't remember how. I installed it because I wanted to try it, but I don't think I ever ran a script with it after installing. Now that I'm starting to use it, I get this error and have no clue what to check first: I don't know which modules, frameworks, drivers or apps PyTorch depends on, so I don't know where to begin checking whether some other component might be causing this error. If anyone knows how to solve this, or where to start troubleshooting, please let me know. Thank you. My PyTorch version is 1.11.0. OS: Windows 10.
I solved the problem by reinstalling Anaconda. Warning: you will lose your installed libraries. Related solution: Problem with Torch 1.11
https://stackoverflow.com/questions/73098560/
Pytorch-Lightning Misconfiguration Exception; The closure hasn't been executed
I have been trying to train a torch.nn.TransformerEncoderLayer using the standard Pytorch-Lightning Trainer class. Before the first epoch even starts, I face the following error: MisconfigurationException: The closure hasn't been executed. HINT: did you call optimizer_closure() in your optimizer_step hook? It could also happen because the optimizer.step(optimizer_closure) call did not execute it internally. I have very properly defined the configure_optimizers() method in the trainer and it works for every other model (say, LSTM, GRU, MultiHeadAttention). If I replace them with the TransformerEncoder, the aforementioned error pops up. Here is the model code I am using: class PositionalEncoder(nn.Module): def __init__(self, d_model=512, max_seq_len=512): super().__init__() self.d_model = d_model pe = torch.zeros(max_seq_len, d_model) for pos in range(max_seq_len): for i in range(0, d_model, 2): pe[pos, i] = sin(pos / (10000 ** ((2 * i)/d_model))) pe[pos, i+1] = cos(pos / (10000 ** ((2 * (i + 1))/d_model))) pe = pe.unsqueeze(0) self.register_buffer('pe', pe) def forward(self, x): x *= sqrt(self.d_model) x += self.pe[:,:x.size(1)] return x class TRANSFORMER(pl.LightningModule): def __init__(self, input_dim, d_model=512, nhead=8, num_layers=6, dropout=0.5, use_scheduler=True, num_tags=len(TAG2IDX), total_steps=1024, train_dataset=None, val_dataset=None, test_dataset=None): super().__init__() self.crf = CRF(num_tags=num_tags, batch_first=True) self.fc = nn.Linear(d_model, num_tags) self.use_scheduler = use_scheduler self.embedding = nn.Embedding(num_embeddings=input_dim, embedding_dim=d_model, padding_idx=0) self.pos_encoder = PositionalEncoder(d_model=d_model) self.encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, dropout=dropout, activation="gelu", batch_first=True) self.encoder = nn.TransformerEncoder(encoder_layer=self.encoder_layer, num_layers=num_layers) ## Hyperparameters ## self.learning_rate = LEARNING_RATE self.weight_decay = WEIGHT_DECAY self.total_steps = total_steps self.batch_size = BATCH_SIZE ## Datasets ## self.train_dataset = train_dataset self.val_dataset = val_dataset self.test_dataset = test_dataset ## steps ## if self.use_scheduler: self.total_steps = len(train_dataset) // self.batch_size # create the dataloaders # add shuffle only for train_dataloader # make sure num_workers is set appropriately and drop_last is set to False def train_dataloader(self): return DataLoader(self.train_dataset, batch_size=self.batch_size, num_workers=N_JOBS, shuffle=True, drop_last=False) def val_dataloader(self): return DataLoader(self.val_dataset, batch_size=self.batch_size, num_workers=N_JOBS, shuffle=False, drop_last=False) def test_dataloader(self): return DataLoader(self.test_dataset, batch_size=self.batch_size, num_workers=N_JOBS, shuffle=False, drop_last=False) def forward(self, input_ids, masks): out = self.embedding(input_ids) out = self.pos_encoder(out) out = self.encoder(out, src_key_padding_mask=~masks) out = self.fc(out) return out def _shared_evaluation_step(self, batch, batch_idx): ids, masks, lbls = batch emissions = self(ids, masks) loss = -self.crf(emissions, lbls, mask=masks) pred = self.crf.decode(emissions, mask=masks) r, p, f1 = f1score(lbls, pred) return loss, r, p, f1 def training_step(self, batch, batch_idx): loss, r, p, f1 = self._shared_evaluation_step(batch, batch_idx) self.log("train_loss", loss, on_step=False, on_epoch=True, prog_bar=True) self.log("train_recall", r, on_step=False, on_epoch=True, prog_bar=True) self.log("train_precision", p, 
on_step=False, on_epoch=True, prog_bar=True) self.log("train_f1score", f1, on_step=False, on_epoch=True, prog_bar=True) return loss def validation_step(self, batch, batch_idx): loss, r, p, f1 = self._shared_evaluation_step(batch, batch_idx) self.log("val_loss", loss, on_step=False, on_epoch=True, prog_bar=True) self.log("val_recall", r, on_step=False, on_epoch=True, prog_bar=True) self.log("val_precision", p, on_step=False, on_epoch=True, prog_bar=True) self.log("val_f1score", f1, on_step=False, on_epoch=True, prog_bar=True) def test_step(self, batch, batch_idx): loss, r, p, f1 = self._shared_evaluation_step(batch, batch_idx) self.log("test_loss", loss, on_step=False, on_epoch=True, prog_bar=True) self.log("test_recall", r, on_step=False, on_epoch=True, prog_bar=True) self.log("test_precision", p, on_step=False, on_epoch=True, prog_bar=True) self.log("test_f1score", f1, on_step=False, on_epoch=True, prog_bar=True) def predict_step(self, batch, batch_idx, dataloader_idx=0): ids, masks, _ = batch return self.crf.decode(self(ids, masks), mask=masks) def configure_optimizers(self): optimizer = Ranger(self.parameters(), lr=self.learning_rate, weight_decay=self.weight_decay) if self.use_scheduler: scheduler = get_cosine_schedule_with_warmup(optimizer=optimizer, num_warmup_steps=1, num_training_steps=self.total_steps) lr_scheduler = { 'scheduler': scheduler, 'interval': 'epoch', 'frequency': 1 } return [optimizer], [lr_scheduler] else: return [optimizer] and here is how I am using the trainer class: trainer = pl.Trainer(accelerator="gpu", max_epochs=EPOCHS, precision=32, log_every_n_steps=1, callbacks=[earlystopping_callback, checkpoint_callback])
You are right. This happens because the special optimizer you have does not call the closure when passing it to the .step() method. But Lightning relies on this because it calls the step method like this: optimizer.step(training_step_closure) where training_step_closure consists of essentially executing the LightningModule.training_step. It looks like Ranger does not follow the standard contract of calling the closure inside of itself. To overcome this issue, I recommend switching to manual optimization: Set self.automatic_optimization = False in your LightningModule. Modify your training step by inserting manual backward, optimizer step and optionally the lr scheduler call: Like so: def training_step(self, batch, batch_idx): loss, r, p, f1 = self._shared_evaluation_step(batch, batch_idx) # Insert these lines: self.manual_backward(loss) optimizer = self.optimizers() scheduler = self.lr_schedulers() optimizer.step() optimizer.zero_grad() scheduler.step() ... return loss No other changes should be necessary.
https://stackoverflow.com/questions/73111496/
module 'torch' has no attribute 'frombuffer' in Google Colab
data_root = os.path.join(os.getcwd(), "data") transform = transforms.Compose( [ transforms.ToTensor(), transforms.Normalize([0.5], [0.5]), ] ) fashion_mnist_dataset = FashionMNIST(data_root, download = True, train = True, transform = transform) Error message: /usr/local/lib/python3.7/dist-packages/torchvision/datasets/mnist.py in read_sn3_pascalvincent_tensor(path, strict) 524 # we need to reverse the bytes before we can read them with torch.frombuffer(). 525 needs_byte_reversal = sys.byteorder == "little" and num_bytes_per_value > 1 --> 526 parsed = torch.frombuffer(bytearray(data), dtype=torch_type, offset=(4 * (nd + 1))) 527 if needs_byte_reversal: 528 parsed = parsed.flip(0) AttributeError: module 'torch' has no attribute 'frombuffer' What can I do about this error in Colab?
I tried your code in my Google Colab by adding the imports below, and it works without errors. import os from torchvision import transforms from torchvision.datasets import FashionMNIST I used torchvision 0.13.0+cu113, google-colab 1.0.0, and a GPU runtime (it also works when I set the runtime to "None"). Do you still get errors when you use the same code? Are you using different versions?
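One detail worth checking (my assumption, since the question does not show the installed torch version): torch.frombuffer does not exist in older 1.x releases of PyTorch, so this AttributeError usually means the runtime is picking up an old torch that does not match the installed torchvision. A quick check could be:
import torch, torchvision
print(torch.__version__, torchvision.__version__)
# if torch is old, reinstalling a matching pair, e.g.
# !pip install --upgrade torch torchvision
# and restarting the runtime should make torch.frombuffer available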
https://stackoverflow.com/questions/73116818/
How to multiply three arrays with different dimensions in PyTorch
L is an array of dimension (d,a), B is (a,a,N) and R is (a,d). By multiplying these arrays I have to get an array of size (d,d,N). How could I implement this in PyTorch?
A possible and straightforward approach is to apply torch.einsum (read more here): >>> torch.einsum('ij,jkn,kl->iln', L, B, R) Where j and k are the reduced dimensions of L and R respectively. And n is the "batch" dimension of B. The first matrix multiplication will reduce L@B (let this intermediate result be o): ij,jkn->ikn The second matrix multiplication will reduce o@R: ikn,kl->iln Which overall sums up to the following form: ij,jkn,kl->iln
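As a quick self-contained check of the shapes involved (d, a and N below are arbitrary values chosen only for illustration):
import torch

d, a, N = 4, 3, 5
L = torch.randn(d, a)
B = torch.randn(a, a, N)
R = torch.randn(a, d)

out = torch.einsum('ij,jkn,kl->iln', L, B, R)
print(out.shape)  # torch.Size([4, 4, 5]), i.e. (d, d, N)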
https://stackoverflow.com/questions/73119792/
Force a neural network to have 0-sum outputs
I have a pytorch neural net with n-dimensional output which I want to have 0-sum during training (my training data, i.e. the true outputs, have 0 sum). Of course I could just add a line computing the sum s and then subtract s/n from each element of the output. But this way, the network would be driven even less to actually finding outputs with zero sum, as this would get taken care of anyways (I've been getting worse test results with this approach). Also, as the true outputs in the training data have 0 sum, obviously the network converges to having almost 0 sum outputs, but not quite. Hence, I was wondering whether there is a smart way to force the network to have outputs that sum to 0, without just brute-force subtracting the sum in the end (which would corrupt learning outputs to have sum 0)? I.e. some sort of solution directly incorporated in the network? (Probably there isn't, at least I couldn't think of any...)
Your approach of explicitly subtracting the mean is the correct way. It is the same way we use softmax to nicely parametrise distributions: you could complain that "this makes the network learn even less about probability!", but in fact it does learn it, it simply does so in its own, unnormalised space. Same in your case: by subtracting the mean you make sure that you match the target variable while allowing your network to focus on the hard part of the problem, and not waste its capacity on having to learn that the sum is zero. If you do anything else, your network will literally have to learn to compute the mean somewhere and subtract it. There are some potential corner cases where a deep representational reason for the mean to be zero could be argued for, but these cases are rare enough that the chances of this actually happening "magically" in the network are zero (and if you knew it was happening, there would be better ways of targeting it than by enforcing the zero sum at the output).
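A minimal sketch of what this looks like in practice (layer sizes are placeholders): the mean subtraction is just part of the forward pass, so gradients flow through it and every output vector sums to zero by construction.
import torch
import torch.nn as nn

class ZeroSumNet(nn.Module):
    def __init__(self, in_dim=16, out_dim=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, x):
        y = self.body(x)
        # subtract the per-sample mean so each output vector sums to zero
        return y - y.mean(dim=-1, keepdim=True)

net = ZeroSumNet()
out = net(torch.randn(5, 16))
print(out.sum(dim=-1))  # values near zero, up to floating point error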
https://stackoverflow.com/questions/73122031/
Logging in Custom Handler for TorchServe
I have written a custom handler for a DL model using TorchServe and am trying to understand how to add manual log messages to the handler. I know that I can simply print messages and they will show up in the MODEL_LOG logger at level INFO. What if I want to add a custom message at DEBUG or ERROR level? When I try to initialise a logger using logger = logging.getLogger('model_log') and log a message within the handler using logger.error, I see the output on screen but at INFO level. What is the best way to create log messages within a handler at a chosen log level?
I am trying to do something similar. I found example logger usage in base_handler.py, where the logger is initialized on line 23 as: logger = logging.getLogger(__name__) and used in several places in the rest of the source file, typically as: logger.debug() or logger.warning(), etc. I am assuming that these messages end up in the relevant files under the logs/ folder of TorchServe. I hope it is useful for your particular use case.
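A minimal sketch of a custom handler following that pattern (the class name and the specific methods overridden here are illustrative; only the module-level logger mirrors base_handler.py):
import logging
from ts.torch_handler.base_handler import BaseHandler

logger = logging.getLogger(__name__)  # same pattern as TorchServe's base_handler.py

class MyHandler(BaseHandler):
    def preprocess(self, data):
        logger.debug("preprocess received %d request(s)", len(data))
        return super().preprocess(data)

    def postprocess(self, data):
        if data is None:
            logger.error("postprocess got empty output")
        return super().postprocess(data)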
https://stackoverflow.com/questions/73136920/
Implement "same" padding for convolution operations with dilation > 1, in Pytorch
I am using PyTorch 1.8.1, and although I know the newer versions have a padding "same" option, for various reasons I do not want to upgrade. To implement same padding for a CNN with stride 1 and dilation > 1, I set the padding as follows: padding=(dilation*(cnn_kernel_size[0]-1)//2, dilation*(cnn_kernel_size[1]-1)//2) According to the PyTorch documentation, I expected the input and output sizes to be the same, but that did not happen! The PyTorch documentation gives: H_out = floor((H_in + 2*padding[0] - dilation[0]*(kernel_size[0]-1) - 1)/stride[0] + 1) W_out = floor((W_in + 2*padding[1] - dilation[1]*(kernel_size[1]-1) - 1)/stride[1] + 1) The input of torch.nn.Conv2d had shape (1,1,625,513), which according to the Conv2d documentation means batch size = 1, C_in = 1, H_in = 625 and W_in = 513, and I used: 64 filters, kernel size (15,15), stride = (1,1), dilation = 5, padding = (35,35). Putting those values in the formulas above gives: H_out = floor((625 + 2*35 - 5*(15-1) - 1)/1 + 1) = floor(625 + 70 - 70 - 1 + 1) = 625 W_out = floor((513 + 2*35 - 5*(15-1) - 1)/1 + 1) = floor(513 + 70 - 70 - 1 + 1) = 513 However, the output shape given by PyTorch was (1,64,681,569). I understand the value 1 and C_out = 64, but I don't know why H_out and W_out are not the same as H_in and W_in. Does anyone have an explanation that can help?
I figured it out! The reason I ended up with the wrong dimensions was that I didn't pass padding as an actual numeric value. I passed the numeric value for dilation and expected the padding to be computed from it as padding=(dilation*(cnn_kernel_size[0]-1)//2, dilation*(cnn_kernel_size[1]-1)//2). I think PyTorch needs to be given the numeric value of the padding, because when I changed my code to give the network the numeric padding value and computed the dilation from the padding (dilation = 2*padding/(kernel_size - 1)), I got the right output shape.
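For completeness, a small sketch of computing the numeric padding up front and checking that the spatial size is preserved (kernel size, dilation and input shape are taken from the question):
import torch
import torch.nn as nn

kernel_size, dilation, stride = (15, 15), 5, 1
padding = (dilation * (kernel_size[0] - 1) // 2,
           dilation * (kernel_size[1] - 1) // 2)  # -> (35, 35)

conv = nn.Conv2d(1, 64, kernel_size, stride=stride, padding=padding, dilation=dilation)
x = torch.randn(1, 1, 625, 513)
print(conv(x).shape)  # torch.Size([1, 64, 625, 513]) -- same H and W as the input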
https://stackoverflow.com/questions/73149873/
Can I create a color image with a GAN consisting of only FC layers?
I understand that in order to create a color image, the three-channel information of the input data must be maintained inside the network. However, the data must be flattened to pass through a linear layer. If so, can a GAN consisting of only FC layers generate only black-and-white images?
Your fully connected network can generate whatever you want, even three-channel outputs. However, the question is: does it make sense to do so? Flattening your input will inherently lose all kinds of spatial and feature consistency that is naturally available when the data is represented as an RGB map. Remember that an RGB image can be thought of as a 3-element feature vector describing each spatial location of a 2D image. In other words, each of the three channels gives additional information about a given pixel; treating these channels as separate entities is a loss of information.
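To make the first point concrete, here is a minimal sketch (all sizes are arbitrary) of an FC-only generator whose flat output is simply reshaped into a 3-channel image; whether such a generator trains well is a separate question:
import torch
import torch.nn as nn

class FCGenerator(nn.Module):
    def __init__(self, z_dim=100, channels=3, height=32, width=32):
        super().__init__()
        self.shape = (channels, height, width)
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, channels * height * width), nn.Tanh(),
        )

    def forward(self, z):
        # reshape the flat output into an RGB image
        return self.net(z).view(z.size(0), *self.shape)

g = FCGenerator()
print(g(torch.randn(4, 100)).shape)  # torch.Size([4, 3, 32, 32])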
https://stackoverflow.com/questions/73161416/
Deep Neural Network in Python with 3 inputs and 3 outputs
I would like to implement a deep neural network in Python (preferably PyTorch, but TensorFlow is also possible) which predicts the next location and the time of the arrival at that location. For the raw data I have a csv file with a sequence of three values: latitude, longitude, and time: 39.984702,116.318417,2008-10-23,02:53:04 39.984683,116.31845,2008-10-23,02:53:10 39.984686,116.318417,2008-10-23,02:53:15 ... The number of such rows is around 100 000. So, here is my question. How should I split the data, normalize it and transform it, in order to feed it into the DNN (preferably GRU or LSTM, but as I read CNN are also possible) and receive as an output a predicted location and time of arrival? Based on my current research, what should be done is to split the data into sequences (of n length), normalize the values, maybe even change the format of the time (for sure not feeding it as a string), and treat the last value in the sequence as a label during the teaching of the DNN. A simple code would be really helpful, with my problems of understanding the different dimensions of the input and outputs for the NNs.
Just a tip: for the time, I would transform it into a Unix epoch timestamp.
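For example, with pandas the date and time columns could be combined and converted to Unix seconds roughly like this (the column names and file name are assumptions; the question's CSV has no header row):
import pandas as pd

df = pd.read_csv("trajectory.csv", header=None,
                 names=["lat", "lon", "date", "time"])
ts = pd.to_datetime(df["date"] + " " + df["time"])
df["unix_time"] = ts.astype("int64") // 10**9  # seconds since the Unix epoch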
https://stackoverflow.com/questions/73166253/
"PackagesNotFoundError: The following packages are not available from current channels:" While Installing PyTorch in Anaconda
When trying to install PyTorch inside an Anaconda environment with the command conda install pytorch torchvision torchaudio cudatoolkit=11.6 -c pytorch -c conda-forge, I get the error: PackagesNotFoundError: The following packages are not available from current channels: - pytorch - cudatoolkit=11.6 - torchaudio Current channels: - https://conda.anaconda.org/pytorch/win-32 - https://conda.anaconda.org/pytorch/noarch - https://conda.anaconda.org/conda-forge/win-32 - https://conda.anaconda.org/conda-forge/noarch - https://repo.anaconda.com/pkgs/main/win-32 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/win-32 - https://repo.anaconda.com/pkgs/r/noarch - https://repo.anaconda.com/pkgs/msys2/win-32 - https://repo.anaconda.com/pkgs/msys2/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page., I also tried adding conda-forge to my list of channels and got the same error. Can anyone help me?
You have a 32-bit version of Anaconda. You need to uninstall it and install a 64-bit version to have access to these packages.
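A quick way to confirm this (the win-32 entries in the channel URLs already give it away) is to check the Python build inside the environment:
import platform, struct

print(platform.architecture())    # e.g. ('32bit', 'WindowsPE') on a 32-bit install
print(struct.calcsize("P") * 8)   # pointer size in bits: 32 or 64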
https://stackoverflow.com/questions/73212317/
Custom Operations on Multi-dimensional Tensors
I am trying to compute the tensor R (see image) and the only way I could explain what I am trying to compute is by doing it on a paper: o = torch.tensor([[[1, 3, 2], [7, 9, 8], [13, 15, 14], [19, 21, 20], [25, 27, 26]], [[31, 33, 32], [37, 39, 38], [43, 45, 44], [49, 51, 50], [55, 57, 56]]]) p = torch.tensor([[[19, 21, 20], [7, 9, 8], [13, 15, 14], [1, 3, 2], [25, 27, 26]], [[55, 57, 56], [31, 33, 32], [37, 39, 38], [43, 45, 44], [49, 51, 50]]]) # this is O' in image o_prime = torch.tensor([[0.1, 0.2, 0.3, 0.4, 0.5], [0.6, 0.7, 0.8, 0.9, 0.11]]) # this is P' in image p_prime = torch.tensor([[1.1, 1.2, 1.3, 1.4, 1.5], [1.6, 1.7, 1.8, 1.9, 1.11]]) # this is R (this is what I need) r = torch.tensor([[[0, 0, 0, 6.1, 0], [0, 24.2, 0, 0, 0], [0, 0, 42.3, 0, 0], [60.4, 0, 0, 0, 0], [0, 0, 0, 0, 78.5]], [[0, 96.6, 0, 0, 0], [0, 0, 114.7, 0, 0], [0, 0, 0, 132.8, 0], [0, 0, 0, 0, 150.9], [168.11, 0, 0, 0, 0]]]) How do I get R without looping over tensors? correction: In the image, I forgot to add value of p' along with sum(o) + o'
You can construct a helper tensor containing the resulting values sum(o) + o' + p': >>> v = o.sum(2, True) + o_prime[...,None] + p_prime[...,None] tensor([[[ 7.2000], [ 25.4000], [ 43.6000], [ 61.8000], [ 80.0000]], [[ 98.2000], [116.4000], [134.6000], [152.8000], [169.2200]]]) Then you can assemble a mask for the final tensor via broadcasting: >>> eq = o[:,None] == p[:,:,None] Ensuring all three elements on the last dimension match: >>> eq.all(dim=-1) tensor([[[False, False, False, True, False], [False, True, False, False, False], [False, False, True, False, False], [ True, False, False, False, False], [False, False, False, False, True]], [[False, False, False, False, True], [ True, False, False, False, False], [False, True, False, False, False], [False, False, True, False, False], [False, False, False, True, False]]]) Finally, you can simply multiply both tensors and auto-broadcasting will handle the rest: >>> R = eq.all(dim=-1) * v tensor([[[ 0.0000, 0.0000, 0.0000, 7.2000, 0.0000], [ 0.0000, 25.4000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 43.6000, 0.0000, 0.0000], [ 61.8000, 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000, 80.0000]], [[ 0.0000, 0.0000, 0.0000, 0.0000, 98.2000], [116.4000, 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, 134.6000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 152.8000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, 169.2200, 0.0000]]]) "I wanted to know how you visualize such problems and then come up with a solution? Any pointers would be beneficial." I was going to say it depends a lot on the problem at hand, but that wouldn't get you very far! I believe having a toolbox of functions/tricks and scenarios you've come across (i.e. experience) helps greatly. This is true for problem-solving more generally speaking. I can try to explain how I came up with the solution and my thought process behind it. The initial idea for this problem is to perform an outer equality check between o and p. By that I mean we are trying to construct a structure which evaluates every (o[i], p[j]) relation batch-wise. It turns out this is a rather common operation, usually seen as an outer summation or outer product. In fact, this type of operation is also applicable to the equality operator: here we are looking to construct a 5x5 matrix of o[i] == p[j]. Keep in mind that throughout the process we have a trailing dimension containing three elements, but that doesn't change the process; we just need to account for it by checking that all three comparisons are indeed True, hence the all(dim=-1) call. Since the desired result doesn't depend on the column position inside the mask, i.e. result = sum(o) + o' + p' whatever the column index, we can just precompute the results for each row beforehand. The final operation is simply multiplying the mask (which of course only contains ones at the desired locations) with the vector of precomputed values. Intuitively, all columns get multiplied by the same value, but only the 1s allow the value to be set. But most importantly, we have to acknowledge that your figure did all the hard work: that is, in my opinion, the first step before starting with any reasoning or implementation. So to summarize, I would suggest: start with a minimal example, reducing the number of variables while still keeping it relevant for the actual problem it is supposed to solve; then think about how you can solve it step by step, getting closer and closer to the solution iteratively with this minimal setup.
Most importantly, it comes with practice; with time you will find it easier to reason about your problem and use the right tools to manipulate your data.
https://stackoverflow.com/questions/73217240/
Constant training and test accuracy in GCNConv
I am new to pytorch and I'm trying to write a classifier for graph data. I have a dataset of 91 adj matrices (two classes in ratio 50/41, correlation matrices obtained from fMRI data). Currently I am struggling with classification task: my training and test accuracy doesn't change, although loss looks normal (?). Here's some code of my model: from torch.nn import Linear,Sigmoid import torch.nn.functional as F from torch_geometric.nn import GCNConv, BatchNorm, GraphConv from torch_geometric.nn import global_mean_pool class GCN(torch.nn.Module): def __init__(self, hidden_channels): super(GCN, self).__init__() torch.manual_seed(12345) self.conv1 = GCNConv(dataset.num_node_features, hidden_channels) self.conv2 = GCNConv(hidden_channels, hidden_channels) self.conv3 = GCNConv(hidden_channels, hidden_channels) self.lin = Linear(hidden_channels, 1) def forward(self, x, edge_index, edge_weight, batch): # 1. Obtain node embeddings h = self.conv1(x, edge_index, edge_weight) h = h.relu() h = self.conv2(h, edge_index, edge_weight) h = h.relu() h = self.conv3(h, edge_index, edge_weight) # 2. Readout layer h = global_mean_pool(h, batch) # [batch_size, hidden_channels] # 3. Apply a final classifier h = F.dropout(h, p=0.5, training=self.training) h = self.lin(h) return h and training loop: model = GCN(hidden_channels=32) optimizer = torch.optim.Adam(model.parameters(), lr=0.001) # criterion = torch.nn.CrossEntropyLoss() criterion = torch.nn.BCELoss() # criterion = torch.nn.MSELoss() model = model.to(device) def train(): model.train() total_loss = 0 batch_count = 0 for data in train_loader: # Iterate in batches over the training dataset. data = data.to(device) out = model(data.x, data.edge_index, data.edge_weight, data.batch) # Perform a single forward pass. target = data.y target = target.unsqueeze(1) target = target.float() loss = criterion(out, target) # Compute the loss. loss.backward() # Derive gradients. optimizer.step() # Update parameters based on gradients. optimizer.zero_grad() # Clear gradients. total_loss += loss.detach() batch_count += 1 mean_loss = total_loss/batch_count return mean_loss def test(loader): model.eval() correct = 0 for data in loader: # Iterate in batches over the training/test dataset. data = data.to(device) out = model(data.x, data.edge_index, data.edge_weight, data.batch) pred = out.argmax(dim=1) # Use the class with highest probability. correct += int((pred == data.y).sum()) # Check against ground-truth labels. return correct / len(loader.dataset) # Derive ratio of correct predictions. 
test_acc_summ = [] train_acc_summ = [] loss_summ = [] for epoch in range(1, 100): loss = train() train_acc = test(train_loader) test_acc = test(test_loader) test_acc_summ.append(test_acc) train_acc_summ.append(train_acc) loss_summ.append(loss) print(f'Epoch: {epoch:03d}, Train Acc: {train_acc:.4f}, Test Acc: {test_acc:.4f}, Loss: {loss:.4f}') The output I'm getting: Epoch: 001, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 1.0981 Epoch: 002, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 1.0983 Epoch: 003, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 1.1312 Epoch: 004, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 1.0880 Epoch: 005, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 0.8857 Epoch: 006, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 0.9774 Epoch: 007, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 0.8917 Epoch: 008, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 0.8679 Epoch: 009, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 0.9000 Epoch: 010, Train Acc: 0.4568, Test Acc: 0.3000, Loss: 0.8371 Train and Test acc remains constant regardless of epochs number, but loss decreases. Is there some bug i don't see, or there's more complex problem?
Try adding nn.Sigmoid() on top of the self.lin output and removing the dropout.
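My reading of that suggestion applied to the question's model would look roughly like this (a sketch only, not tested on the asker's data; BCELoss expects probabilities in [0, 1], which is what the sigmoid provides):
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class GCN(nn.Module):
    def __init__(self, num_node_features, hidden_channels):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, hidden_channels)
        self.conv3 = GCNConv(hidden_channels, hidden_channels)
        self.lin = nn.Linear(hidden_channels, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, edge_index, edge_weight, batch):
        h = self.conv1(x, edge_index, edge_weight).relu()
        h = self.conv2(h, edge_index, edge_weight).relu()
        h = self.conv3(h, edge_index, edge_weight)
        h = global_mean_pool(h, batch)
        h = self.lin(h)          # dropout removed, as suggested
        return self.sigmoid(h)   # probabilities for BCELoss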
https://stackoverflow.com/questions/73218977/
Unable to train a self-supervised(ssl) model using Lightly CLI
I am unable to train a self-supervised(ssl) model to create image embeddings using the lightly cli: Lightly Platform Link. I intend to select diverse example from my dataset to create an object detection model further downstream and the image embeddings created with the ssl model will help me to perform Active Learning.I have reproduced the error in the Notebook with public access -----> lightly_app_troubleshooting_stackoverflow.ipynb Link. In the notebook shared above this cmd raises an exception: !source /content/venv_1/bin/activate;lightly-magic \ input_dir="/content/Sunflowers" trainer.max_epochs=20 \ token='< your lightly token(free account) >' \ new_dataset_name="sunflowers_dataset" loader.batch_size=64 The exception stack trace produced is as below: /content/venv_1/lib/python3.7/site-packages/hydra/_internal/hydra.py:127: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default. See https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information. configure_logging=with_log_configuration, ########## Starting to train an embedding model. /content/venv_1/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py:23: LightningDeprecationWarning: pytorch_lightning.core.lightning.LightningModule has been deprecated in v1.7 and will be removed in v1.9. Use the equivalent class from the pytorch_lightning.core.module.LightningModule class instead. "pytorch_lightning.core.lightning.LightningModule has been deprecated in v1.7" Error executing job with overrides: ['input_dir=/content/Sunflowers', 'trainer.max_epochs=20', 'token=5bbcf60e3a5c7c266dcd4e0e9056c8301684e0f2f8922bc5', 'new_dataset_name=sunflowers_dataset', 'loader.batch_size=64'] Traceback (most recent call last): File "/content/venv_1/lib/python3.7/site-packages/lightly/cli/lightly_cli.py", line 115, in lightly_cli return _lightly_cli(cfg) File "/content/venv_1/lib/python3.7/site-packages/lightly/cli/lightly_cli.py", line 52, in _lightly_cli checkpoint = _train_cli(cfg, is_cli_call) File "/content/venv_1/lib/python3.7/site-packages/lightly/cli/train_cli.py", line 137, in _train_cli encoder.train_embedding(**cfg['trainer'], strategy=distributed_strategy) File "/content/venv_1/lib/python3.7/site-packages/lightly/embedding/_base.py", line 88, in train_embedding trainer = pl.Trainer(**kwargs, callbacks=[self.checkpoint_callback]) File "/content/venv_1/lib/python3.7/site-packages/pytorch_lightning/utilities/argparse.py", line 345, in insert_env_defaults return fn(self, **kwargs) TypeError: __init__() got an unexpected keyword argument 'weights_summary' Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace. I could not create a new tag - "lightly" as I lack the stackoverflow reputation points to do so.
The error is from an incompatibility with the latest PyTorch Lightning version (version 1.7 at the time of this writing). A quick fix is to use a lower version (e.g. 1.6). We are working on a fix :) Let me know in case that does not work for you!
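Until a fix is released, a likely workaround (my suggestion, not an official lightly instruction) is to pin PyTorch Lightning below 1.7 inside the virtual environment before running lightly-magic, for example with pip install "pytorch-lightning<1.7", and then rerun the command.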
https://stackoverflow.com/questions/73233965/
Given a torch DataLoader, how can I create another DataLoader which is a subset
Given an torch.utils.data.dataloader.DataLoader (let's say stored as variable data_loader_original), how can I create a new torch.utils.data.dataloader.DataLoader which contains a subset of data_loader_original? I've seen this post how to adjust dataloader and make a new dataloader?, but I don't want to take a subset of the dataset directly (i.e. I don't want to use torch.utils.data.Subset). Instead I want something like data_loader_subset = create_subset_data_loader(data_loader_original) or data_loader_subset = data_loader_ogirinal[:size_of_subset] These are just examples to help illustrate what I'm looking for.
I'm not sure exactly how you want to split the subset, but for a simple version the snippet below may help (it wraps the original loader in a generator that stops after a fixed number of batches): import torch from torch.utils.data import DataLoader bs = 50 shuffle = False num_workers = 0 dataset = torch_dataset() data_loader_original = DataLoader(dataset, batch_size=bs, shuffle=shuffle) def create_subset_data_loader(loader, size_of_subset): count = 0 for data in loader: if count == size_of_subset: break yield data count += 1 size_of_subset = 10 for epoch in range(epochs): for data in create_subset_data_loader(data_loader_original, size_of_subset): # processing
https://stackoverflow.com/questions/73243683/
Predict with pytorch lightning when using BCEWithLogitsLoss for training
I'm trying to see how my trained model would predict a single instance of y and have of list of predicted and actual y. It seems I'm missing a few steps and I'm not sure how to implement the predict_step, here is what I currently have: mutag = ptgeom.datasets.TUDataset(root='.', name='MUTAG') train_idx, test_idx = train_test_split(range(len(mutag)), stratify=[m.y[0].item() for m in mutag], test_size=0.25) train_loader = ptgeom.loader.DataLoader(mutag[train_idx], batch_size=32, shuffle=True) test_loader = ptgeom.loader.DataLoader(mutag[test_idx], batch_size=32) class MUTAGClassifier(ptlight.LightningModule): def __init__(self): # The model is just GCNConv --> GCNConv --> graph pooling --> Dropout --> Linear super().__init__() self.gc1 = ptgeom.nn.GCNConv(7, 256) self.gc2 = ptgeom.nn.GCNConv(256, 256) self.linear = torch.nn.Linear(256, 1) def forward(self, x, edge_index=None, batch=None, edge_weight=None): # Note: "edge_weight" is not used for training, but only for the explainability part if edge_index == None: x, edge_index, batch = x.x, x.edge_index, x.batch x = F.relu(self.gc1(x, edge_index, edge_weight)) x = F.relu(self.gc2(x, edge_index, edge_weight)) x = ptgeom.nn.global_mean_pool(x, batch) x = F.dropout(x) x = self.linear(x) return x def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=1e-3) return optimizer def training_step(self, batch, _): y_hat = self.forward(batch.x, batch.edge_index, batch.batch) loss = F.binary_cross_entropy_with_logits(y_hat, batch.y.unsqueeze(1).float()) self.log("train_loss", loss) self.log("train_accuracy", accuracy(y_hat, batch.y.unsqueeze(1)), prog_bar=True, batch_size=32) return loss def validation_step(self, batch, _): x, edge_index, batch_idx = batch.x, batch.edge_index, batch.batch y_hat = self.forward(x, edge_index, batch_idx) self.log("val_accuracy", accuracy(y_hat, batch.y.unsqueeze(1)), prog_bar=True, batch_size=32) checkpoint_callback = ptlight.callbacks.ModelCheckpoint( dirpath='./checkpoints/', filename='gnn-{epoch:02d}', every_n_epochs=50, save_top_k=-1) trainer = ptlight.Trainer(max_epochs=200, callbacks=[checkpoint_callback]) trainer.fit(gnn, train_loader, test_loader)
The crux here is that you use F.binary_cross_entropy_with_logits in your training_step (for numerical stability I suppose). This means that nn.Sigmoid has to be applied to your output both in validation_step and predict_step as the operation is not part of forward(). Check this for more information. Notice that you may also need to round your predicted results depending on which accuracy method you are using in order to get correct metric results. class MUTAGClassifier(ptlight.LightningModule): def __init__(self): # The model is just GCNConv --> GCNConv --> graph pooling --> Dropout --> Linear super().__init__() self.gc1 = ptgeom.nn.GCNConv(7, 256) self.gc2 = ptgeom.nn.GCNConv(256, 256) self.linear = torch.nn.Linear(256, 1) self.s = nn.Sigmoid() def forward(self, x, edge_index=None, batch=None, edge_weight=None): # Note: "edge_weight" is not used for training, but only for the explainability part if edge_index == None: x, edge_index, batch = x.x, x.edge_index, x.batch x = F.relu(self.gc1(x, edge_index, edge_weight)) x = F.relu(self.gc2(x, edge_index, edge_weight)) x = ptgeom.nn.global_mean_pool(x, batch) x = F.dropout(x) x = self.linear(x) return x def configure_optimizers(self): optimizer = torch.optim.Adam(self.parameters(), lr=1e-3) return optimizer def training_step(self, batch, _): y_hat = self.forward(batch.x, batch.edge_index, batch.batch) loss = F.binary_cross_entropy_with_logits(y_hat, batch.y.unsqueeze(1).float()) self.log("train_loss", loss) self.log("train_accuracy", accuracy(y_hat, batch.y.unsqueeze(1)), prog_bar=True, batch_size=32) return loss def validation_step(self, batch, _): x, edge_index, batch_idx = batch.x, batch.edge_index, batch.batch y_hat = self.forward(x, edge_index, batch_idx) y_hat = self.s(y_hat) y_hat = torch.where(y_hat > 0.5, 1, 0) # may be needed self.log("val_accuracy", accuracy(y_hat, batch.y.unsqueeze(1)), prog_bar=True, batch_size=32) def predict_step(self, batch, _): x, edge_index, batch_idx = batch.x, batch.edge_index, batch.batch y_hat = self.forward(x, edge_index, batch_idx) y_hat = self.s(y_hat) y_hat = torch.where(y_hat > 0.5, 1, 0) # may be needed return y_hat You could then do the following in order to get a list of predictions with their corresponding ground truth: batch = next(iter(train_loader)) # get a batch y_hat = trainer.predict(your_model, batch) print(list(y_hat)) print(list(batch.y))
https://stackoverflow.com/questions/73250651/
How to pad the left side of a list of tensors in pytorch to the size of the largest list?
In pytorch, if you have a list of tensors, you can pad the right side using torch.nn.utils.rnn.pad_sequence import torch 'for the collate function, pad the sequences' f = [ [0,1], [0, 3, 4], [4, 3, 2, 4, 3] ] torch.nn.utils.rnn.pad_sequence( [torch.tensor(part) for part in f], batch_first=True ) tensor([[0, 1, 0, 0, 0], [0, 3, 4, 0, 0], [4, 3, 2, 4, 3]]) How would I pad the left side? The desired solution is tensor([[0, 0, 0, 0, 1], [0, 0, 0, 3, 4], [4, 3, 2, 4, 3]])
You can reverse the list, do the padding, and reverse the tensor. Would that be acceptable to you? If yes, you can use the code below. torch.nn.utils.rnn.pad_sequence([ torch.tensor(i[::-1]) for i in f ], # reverse the list and create tensors batch_first=True) # pad .flip(dims=[1]) # reverse/flip the padded tensor in first dimension
https://stackoverflow.com/questions/73256206/
mat1 and mat2 shapes cannot be multiplied Pytorch lightning CNN
I'm working on a CNN for a project using PyTorch Lightning. I don't know why I am getting this error. I've checked the size of the output from the last maxpool layer and it is (-1,10,128,128). The error is for the linear layer. Any help would be appreciated. def __init__(self): super().__init__() self.model = nn.Sequential( nn.Conv2d(3,6,4,padding=2), nn.ReLU(), nn.MaxPool2d(2), nn.Conv2d(6,10,4,padding=2), nn.ReLU(), nn.MaxPool2d(2), nn.Linear(10*128*128,240), nn.ReLU(), nn.Linear(in_features = 240,out_features=101), nn.ReLU() ) My error looks like this: RuntimeError: mat1 and mat2 shapes cannot be multiplied (2560x128 and 163840x240)
You have to match the dimension by putting the view method between the feature extractor and the classifier. And it would be better not to use the relu function in the last part. Code: import torch import torch.nn as nn class M(nn.Module): def __init__(self): super(M, self).__init__() self.feature_extractor = nn.Sequential( nn.Conv2d(3,6,4,padding=2), nn.ReLU(), nn.MaxPool2d(2), nn.Conv2d(6,10,4,padding=2), nn.ReLU(), nn.MaxPool2d(2) ) self.classifier = nn.Sequential( nn.Linear(10*128*128,240), nn.ReLU(), nn.Linear(in_features = 240,out_features=101) ) def forward(self, X): X = self.feature_extractor(X) X = X.view(X.size(0), -1) X = self.classifier(X) return X model = M() # batch size, channel size, height, width X = torch.randn(128, 3, 512, 512) print(model(X))
https://stackoverflow.com/questions/73257506/
Huggingface: How to use bert-large-uncased in hugginface for long text classification?
I am trying to use the bert-large-uncased for long sequence ending, but it's giving the error: Code: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained("bert-large-uncased") text = "Replace me by any text you'd like."*1024 encoded_input = tokenizer(text, truncation=True, max_length=1024, return_tensors='pt') output = model(**encoded_input) It's giving the following error : ~/.local/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 218 if self.position_embedding_type == "absolute": 219 position_embeddings = self.position_embeddings(position_ids) --> 220 embeddings += position_embeddings 221 embeddings = self.LayerNorm(embeddings) 222 embeddings = self.dropout(embeddings) RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 1 I also tried to change the default size of the positional embedding: from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained("bert-large-uncased") model.config.max_position_embeddings = 1024 text = "Replace me by any text you'd like."*1024 encoded_input = tokenizer(text, truncation=True, max_length=1024, return_tensors='pt') output = model(**encoded_input) But still the error is persistent, How to use large model for 1024 length sequences?
I might be wrong, but I think you already have your answers here: How to use Bert for long text classification? Basically you will need some kind of truncation on your text, or you will need to handle it in chunks and stick them back together. Side note: the large model is not called large because of the sequence length; the max sequence length is still 512 tokens (tokens from your tokenizer, not words in your sentence). EDIT: The pretrained model you would like to use is trained on a maximum of 512 tokens. When you download it from Hugging Face, you can see max_position_embeddings in the configuration, which is 512. That means that you cannot really extend it (actually, that is not entirely true). However, you can always tweak your configuration: tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') model = BertModel.from_pretrained( 'bert-large-uncased', max_position_embeddings=1024, ignore_mismatched_sizes=True ) Note that this is very ill-advised, since it will ruin your pretrained model. Maybe it will go rogue, planets will start to collide, or pigs will start to fall out of the sky. No one can really tell. Use it at your own risk.
https://stackoverflow.com/questions/73259489/