Dataset columns: instruction, input, output, source.
Convert unknown labels to Yolov5
I own a dataset of images with an unknown label format, like this: angry_actor_104.jpg 0 28 113 226 141 22.9362 0 It describes an image as follows: image_name face_id_in_image face_box_top face_box_left face_box_right face_box_bottom face_box_confidence expression_label My question is: How can this be converted into the YOLOv5 format? I have been looking this up for a long time and hope someone can help. Thank you very much in advance.
Since the format is unknown you are unlikely to find existing code to completely handle the transformation, but I can share some tips to get started. The annotations file does not have enough info on its own to be converted to YOLO format, because to convert to YOLO you also need to know the dimensions of the images. If all of your images have the same dimensions it is easier, but if the images differ you will need additional code to extract each image's dimensions. I will explain why below. When you are done you will need to get the images and labels into a specific directory structure like this, with one txt file per image: /images/actor1.jpg /images/actor2.jpg /labels/actor1.txt /labels/actor2.txt This is the shape that you want to get the annotation files into: face_id_in_image x_center_image y_center_image width height There is a clear description of what the values mean here https://stackoverflow.com/a/66563144/5183735. Now you need to do some math to calculate the values. width = (face_box_right - face_box_left)/image_width height = (face_box_bottom - face_box_top)/image_height x_center_image = face_box_left/image_width + (width/2) y_center_image = face_box_top/image_height + (height/2) I have some bits of code that may help you with reading the text file and saving the text files here: https://github.com/pylabel-project/pylabel/blob/main/pylabel/exporter.py and https://github.com/pylabel-project/pylabel/blob/main/pylabel/importer.py. If you are able to share your exact files I may be able to identify a shortcut to transform them.
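A rough sketch of the whole conversion under those assumptions (the paths, the use of expression_label as the YOLO class id, and PIL for reading image sizes are my guesses, not part of the original answer):
from pathlib import Path
from PIL import Image

def convert_annotation_line(line, images_dir, labels_dir):
    # image_name face_id top left right bottom confidence expression_label
    name, _face_id, top, left, right, bottom, _conf, expr = line.split()
    top, left, right, bottom = float(top), float(left), float(right), float(bottom)
    img_w, img_h = Image.open(Path(images_dir) / name).size
    w = (right - left) / img_w
    h = (bottom - top) / img_h
    x_c = left / img_w + w / 2
    y_c = top / img_h + h / 2
    out_file = Path(labels_dir) / (Path(name).stem + ".txt")
    with open(out_file, "a") as f:   # append: one line per face in the image
        f.write(f"{expr} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}\n")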
https://stackoverflow.com/questions/70243979/
PyTorch Matrix Product
This is the standard batch matrix multiplication: import torch a = torch.arange(12, dtype=torch.float).view(2,3,2) b = torch.arange(12, dtype=torch.float).view(2,3,2) - 1 c = a.matmul(b.transpose(-1,-2)) a,b,c >> (tensor([[[ 0., 1.], [ 2., 3.], [ 4., 5.]], [[ 6., 7.], [ 8., 9.], [10., 11.]]]), tensor([[[-1., 0.], [ 1., 2.], [ 3., 4.]], [[ 5., 6.], [ 7., 8.], [ 9., 10.]]]), tensor([[[ 0., 2., 4.], [ -2., 8., 18.], [ -4., 14., 32.]], [[ 72., 98., 124.], [ 94., 128., 162.], [116., 158., 200.]]])) This is the one that I have: e = a.view(6,2) f = b.view(6,2) g = e.matmul(f.transpose(-1,-2)) e,f,g >> (tensor([[ 0., 1.], [ 2., 3.], [ 4., 5.], [ 6., 7.], [ 8., 9.], [10., 11.]]), tensor([[-1., 0.], [ 1., 2.], [ 3., 4.], [ 5., 6.], [ 7., 8.], [ 9., 10.]]), tensor([[ 0., 2., 4., 6., 8., 10.], [ -2., 8., 18., 28., 38., 48.], [ -4., 14., 32., 50., 68., 86.], [ -6., 20., 46., 72., 98., 124.], [ -8., 26., 60., 94., 128., 162.], [-10., 32., 74., 116., 158., 200.]])) It's obvious that g covers c. I want to know if there is an efficient way to retrieve/slice c from g. Note that such retrieving/slicing method should generalize well to any shape of a and b.
Got it. We can just slice g with fancy indexing. We just extract the matrix multiplication result within the same batch: g = g.view(2,3,2,3) res = g[range(2),:,range(2),:] res
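A sketch of how this could generalize: with a of shape (B, n, k) and b of shape (B, m, k), the flattened product g has shape (B*n, B*m), and the same indexing trick picks out the per-batch blocks:
B, n, k = a.shape
m = b.shape[1]
e = a.reshape(B * n, k)
f = b.reshape(B * m, k)
g = e.matmul(f.transpose(-1, -2))                      # (B*n, B*m)
res = g.view(B, n, B, m)[range(B), :, range(B), :]     # (B, n, m)
print(torch.allclose(res, a.matmul(b.transpose(-1, -2))))   # True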
https://stackoverflow.com/questions/70256212/
Pytorch running_mean, running_var and num_batches_tracked are updated during training, but I want to fix them
In PyTorch, I want to use a pretrained model and train my own model to add a delta to the pretrained model's result, that is: ╭----- (pretrained model) ------ result ---╮ input------------- (my model) --------- Δresult --+-- final_result Here is what I did: Use load_state_dict to load the pretrained model's parameters Set all pretrained model's parameters to requires_grad = False Create my model and start training But after the training process, when I check result (the output of the pretrained model), I find that it does not match the original pretrained model output. I carefully compared the pretrained model's parameters; the only changes are BatchNorm2d's running_mean, running_var and num_batches_tracked (everything else stayed fixed since I set requires_grad = False), and when I change these three back to the original ones, the result matches the original pretrained model output. I do not want any change in the pretrained model. So is there any way to fix running_mean, running_var and num_batches_tracked?
I stumbled upon the same problem, so I adapted the context manager found in this repo as follows: @contextlib.contextmanager def _disable_tracking_bn_stats(self): def switch_attr(): if not hasattr(self, 'running_stats_modules'): self.running_stats_modules = \ [mod for n, mod in self.model.named_modules() if hasattr(mod, 'track_running_stats')] for mod in self.running_stats_modules: mod.track_running_stats ^= True switch_attr() yield switch_attr() As an alternative, I think you can obtain a similar result by calling eval on the BatchNorm modules: for layer in net.modules(): if isinstance(layer, BatchNorm2d): layer.eval() though the first method is more principled.
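A small runnable sketch of the second option (resnet18 here is only a stand-in for the pretrained part of the model):
import torch.nn as nn
import torchvision

pretrained = torchvision.models.resnet18(pretrained=True)   # stand-in for your pretrained model

def freeze_bn_stats(model):
    # put only the BatchNorm layers in eval mode so running_mean, running_var
    # and num_batches_tracked stop being updated
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.eval()

for p in pretrained.parameters():
    p.requires_grad = False       # freeze weights, as the question already does
freeze_bn_stats(pretrained)       # freeze the running statistics as well
# note: calling .train() on a parent module switches children back to train mode,
# so re-apply freeze_bn_stats after any such call (e.g. at the start of each epoch)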
https://stackoverflow.com/questions/70259900/
How to efficiently implement a non-fully connected Linear Layer in PyTorch?
I made an example diagram of a scaled down version of what I'm trying to implement: So the top two input nodes are only fully connected to the top three output nodes, and the same design applies to the bottom two nodes. So far I've come up with two ways of implementing this in PyTorch, neither of which are optimal. The first would be to create a nn.ModuleList of many smaller Linear Layers, and during the forward pass, iterate the input through them. For the diagram's example, that would look something like this: class Module(nn.Module): def __init__(self): self.layers = nn.Module([nn.Linear(2, 3) for i in range(2)]) def forward(self, input): output = torch.zeros(2, 3) for i in range(2): output[i, :] = self.layers[i](input.view(2, 2)[i, :]) return output.flatten() So this accomplishes the network in the diagram, the main issue is its very slow. I assume this is because PyTorch has to process the for loop sequentially, and can't process the input tensor in parallel. To "vectorize" the module such that PyTorch can run it quicker, I have this implementation: class Module(nn.Module): def __init__(self): self.layer = nn.Linear(4, 6) self.mask = # create mask of ones and zeros to "block" certain layer connections def forward(self, input): prune.custom_from_mask(self.layer, name='weight', mask=self.mask) return self.layer(input) This also accomplishes the diagram's network, by using weight pruning to ensure certain weights in the fully connected layer are always zero (ex. the weight connecting the top input node to the bottom out node will always be zero, so its effectively "disconnected"). This module is much faster than the previous, as there is no for loop. The problem now is this module takes up significantly more memory. This is likely due to the fact that, even though most of the layer's weights will be zero, PyTorch still treats the network as if they are there. This implementation essentially keeps way more weights around than it needs to. Has anyone encountered this issue before and come up with an efficient solution?
If weight sharing is ok, then 1D convolutions should solve the problem: class Module(nn.Module): def __init__(self): super().__init__() self.layers = nn.Conv1d(in_channels=2, out_channels=3, kernel_size=1) self._n_splits = 2 def forward(self, input): B, C = input.shape output = self.layers(input.view(B, C//self._n_splits, -1)) return output.view(B, -1) If weight sharing is NOT ok, then you can use group convolutions: self.layers = nn.Conv1d(in_channels=4, out_channels=4, kernel_size=1, stride=1, groups=2). However, I am not sure if this can implement an arbitrary number of channel splits; you can check the documentation: https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html A 1D convolution with kernel size 1 is a fully connected layer over the channels of the input. A group convolution will divide the channels into groups and perform separate conv operations on them (which is what you want). The implementation will look something like: class Module(nn.Module): def __init__(self): super().__init__() self.layers = nn.Conv1d(in_channels=2, out_channels=4, kernel_size=1, groups=2) def forward(self, input): B, C = input.shape output = self.layers(input.unsqueeze(-1)) return output.squeeze() EDIT: If you need an odd number of output channels you can combine two group convs (the channel counts must be divisible by the group counts). class Module(nn.Module): def __init__(self): super().__init__() self.layers = nn.Sequential( nn.Conv1d(in_channels=2, out_channels=6, kernel_size=1, groups=2), nn.Conv1d(in_channels=6, out_channels=3, kernel_size=1, groups=3)) def forward(self, input): B, C = input.shape output = self.layers(input.unsqueeze(-1)) return output.squeeze() That will effectively define the input channels as required in the diagram and allow for an arbitrary number of output channels. Notice that if the second convolution has groups=1 you will allow mixing of channels and will effectively render the first group conv layer useless. From a theoretical perspective, there is no need for activation functions in between those two convolutions, since we are combining them in a linear manner. However, it is possible that adding an activation function will improve performance.
https://stackoverflow.com/questions/70269663/
Neural networks do not work well in pytorch
I am trying to build a neural network with two inputs and one output in pytorch. However, I get an error and cannot get it to work. python code is below. import torch import numpy as np import os import pandas as pd import glob import torch.optim as optim import torch.nn as nn import torch.nn.functional as F class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.linear0 = nn.Linear(2, 256) self.linear1 = nn.Linear(256, 128) self.linear2 = nn.Linear(128, 64) self.linear3 = nn.Linear(64, 32) self.linear4 = nn.Linear(32, 16) self.linear5 = nn.Linear(16, 8) self.linear6 = nn.Linear(8, 4) # self.linear7 = nn.Linear(4, 1) def forward(self, x): x = self.linear0(x) x = torch.sigmoid(x) x = self.linear1(x) x = torch.sigmoid(x) x = self.linear2(x) x = torch.sigmoid(x) x = self.linear3(x) x = torch.sigmoid(x) x = self.linear4(x) x = torch.sigmoid(x) x = self.linear5(x) x = torch.sigmoid(x) x = self.linear6(x) # x = torch.sigmoid(x) # x = self.linear7 return F.log_softmax(x, dim=1) net = Model() x = torch.tensor(a[0].values) y = torch.tensor(a[1].values) def train(model, optimizer, E, iteration, x, y): losses = [] for i in range(iteration): optimizer.zero_grad() # reset gradients to zero y_pred = model(x) # prediction loss = E(y_pred.reshape(y.shape), y) # compute the loss (align shapes) loss.backward() # compute gradients optimizer.step() # update parameters losses.append(loss.item()) # accumulate loss values print('epoch=', i+1, 'loss=', loss) return model, losses optimizer = optim.RMSprop(net.parameters(), lr=0.01) # use RMSprop as the optimizer E = nn.MSELoss() net, losses = train(model=net, optimizer=optimizer, E=E, iteration=5000, x=x, y=y) y_pred = test(net, X_test) The input data is two-dimensional, like this ↓ and the output data is one-dimensional. The error is as follows. /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1846 if has_torch_function_variadic(input, weight, bias): 1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias) -> 1848 return torch._C._nn.linear(input, weight, bias) 1849 1850 RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x100 and 2x256) What should I do?
You're getting an error at the first layer of your neural network because there is a dimension mismatch. The weights are shape (2,256), so it expects an input of shape (N,2). It looks like you provide 100 training examples, so N=100, but your input is shape (100,1) instead of (100,2). In your code, it looks like a is (100,2), but x = a[0] is (100,1).
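A quick way to check and fix this, reusing the dataframe a from the question (the column selection at the end is only illustrative, since the exact layout of a isn't shown):
import torch

x = torch.tensor(a[0].values, dtype=torch.float32)
print(x.shape)   # should be torch.Size([100, 2]) to match nn.Linear(2, 256)

# if it prints torch.Size([100]) or torch.Size([100, 1]), x holds only one column;
# build it from both feature columns instead, for example (hypothetical column names):
# x = torch.tensor(a[["feat1", "feat2"]].values, dtype=torch.float32)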
https://stackoverflow.com/questions/70284024/
How are a batch's elements processed by PyTorch?
I have a generic network without random element in his structure (e.g. no dropout) so that if I forward a given image input through the network, I put gradient to zero and repeat again the forward with the same image input I get the same result (same gradient vector, output,…) Now let’s say that we have a batch of N elements (data, label) and I perform the following experiment: forward the whole batch and store the gradient vector (using reduction='sum' in my criterion), use backward to generate the corresponding gradient, save it in a second object (that we’ll refer to as Batch_Grad) output = model(data) loss = criterion(output,torch.reshape(label, (-1,))) loss.backward() Batch_Grad= [] for p in model.parameters(): Batch_Grad.append(p.grad.clone()) reset the gradient optimizer.zero_grad() repeat the first point giving in input batch’s elements one by one and collect after each backward the corresponding element’s gradient (resetting the gradient every time after that) for i in range(0, len(label)): #repeat the procedure of point 1. for each data[i] input #... optimizer.zero_grad() Sum up togheter gradient vectors of the previous point corresponding to each element of the given batch in a single object (that we’ll refer to as Single_Grad) compare the objects of point 4. and 1. (Batch_Grad and Single_Grad) Following the above procedure I find that tensor from point 1. and 5. are equal only if the batch size (N) is equal to 1, but they are different for N>1. With the method of point 3. and 4. I'm manually summing gradients associated to single image propagation (which as pointed in the above comment are equals to the ones calculated automatically by SGD, with N=1). Since automatic SGD approach (point 1.)is also expected to perform the same sum: Why do I observe this difference?
The difference you are trying to work out here is between what is called a mini-batch gradient descent vs iterative updates at each training sample. You can refer to this wiki for some background Stochastic_gradient_descent#Iterative_method In the mini-batch method (your point 1), you update the parameters after you have calculated the loss for the whole of the batch (N). This means that you are using the same model weights for computing prediction loss for all the N samples as you wait for the next update. In contrast to the above, for the single sample updates: you keep updating the model parameters for each sample - producing slightly different loss values. These individual differences accumulate to the difference for the entire N sized batch for your case.
https://stackoverflow.com/questions/70288949/
Why does my custom dataset give an attribute error?
my initial data was like this My data is a pandas dataframe with columns 'title' and 'label'. I want to make a custom dataset with this. so I made the dataset like below. I'm working on google colab class newsDataset(torch.utils.data.Dataset): def __init__(self,train=True,transform=None): if train: self.file = ttrain else: self.file= ttest self.text_list = self.file['title'].values.tolist() self.class_list=self.file['label'].values.tolist() def __len__(self): return len(self.text_list) def __getitem__(self,idx): label = self.class_list[idx] text = self.text_list[idx] if self.transform is not None: text=self.transform(text) return label, text and this is how I call the dataloader trainset=newsDataset() train_iter = DataLoader(trainset) iter(train_iter).next() and it gives --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-153-9872744bc8a9> in <module>() ----> 1 iter(train_iter).next() 5 frames /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataset.py in __getattr__(self, attribute_name) 81 return function 82 else: ---> 83 raise AttributeError 84 85 @classmethod AttributeError: There was no exact error message. can anybody help me?
Please add the following missing line to your __init__ function: self.transform = transform
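For reference, a sketch of the corrected constructor (ttrain/ttest are the dataframes from the question; everything else stays the same):
class newsDataset(torch.utils.data.Dataset):
    def __init__(self, train=True, transform=None):
        self.file = ttrain if train else ttest
        self.transform = transform                         # <-- the missing line
        self.text_list = self.file['title'].values.tolist()
        self.class_list = self.file['label'].values.tolist()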
https://stackoverflow.com/questions/70292294/
TensorDataset error with dimensions and 'int' not callable?
I have some numpy arrays that I would like to pass into the TensorDataset from PyTorch, so it can be passed into the DataLoader for training in a neural network. These are the dimension of my train and test feature and targets: Feature train shape: (2338834, 21) Target train shape: (2338834, 3) Feature test shape: (662343, 21) Target test shape: (662343, 3) I am trying to perform this command: train = TensorDataset(input_train, output_train) However, I get this error: assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors), "Size mismatch between tensors" TypeError: 'int' object is not callable However, I am pretty sure the first dimensions of each of the numpy arrays are the same, for the train and test? Here is the code I am trying to run: # Passing numpy array to to DataLoader train = TensorDataset(input_train, output_train) test = TensorDataset(input_test, output_test) train_loader = DataLoader(dataset = train, batch_size = batch_size, shuffle = True) test_loader = DataLoader(dataset = test, batch_size = batch_size, shuffle = True)
I was able to bypass this by converting to tensors first (the underlying issue is that TensorDataset calls .size(0) on its inputs; for a torch tensor .size is a method, but for a numpy array .size is an integer attribute, hence the 'int' object is not callable error): features_train_tensor = torch.tensor(input_train) target_train_tensor = torch.tensor(output_train) features_test_tensor = torch.tensor(input_test) target_test_tensor = torch.tensor(output_test) # Passing tensors to the DataLoader train = TensorDataset(features_train_tensor, target_train_tensor) test = TensorDataset(features_test_tensor, target_test_tensor) train_loader = DataLoader(dataset = train, batch_size = batch_size, shuffle = True) test_loader = DataLoader(dataset = test, batch_size = batch_size, shuffle = True)
https://stackoverflow.com/questions/70293052/
Wandb training kills kernel in jupyter lab
In my jupyter I can train my model on batch_size=8, but when I use wandb always after 9 iterations the process is killed and kernel restarts. What's more weird is that the same code worked on colab, but with my GPU (RTX 3080) I can never finish the process. Does anyone have any idea how to overcome this issue? Edit: I noticed that the kernel dies every time it tries to log the gradients to wandb. Can this be solved? Code with wandb: def train_batch(images, labels, model, optimizer, criterion): images, labels = images.to(device), labels.to(device) # Forward pass ➡ outputs = model(images) loss = criterion(outputs, labels) # Backward pass ⬅ optimizer.zero_grad() loss.backward() # Step with optimizer optimizer.step() size = images.size(0) del images, labels return loss, size from loss import YoloLoss # train the model def train(model, train_dl, criterion, optimizer, config, is_one_batch): # Tell wandb to watch what the model gets up to: gradients, weights, and more! wandb.watch(model, criterion, log="all", log_freq=10) example_ct = 0 # number of examples seen batch_ct = 0 # enumerate epochs for epoch in range(config.epochs): running_loss = 0.0 if not is_one_batch: for i, (inputs, _, targets) in enumerate(train_dl): loss, batch_size = train_batch(inputs, targets, model, optimizer, criterion) running_loss += loss.item() * batch_size else: # for one batch only loss, batch_size = train_batch(train_dl[0], train_dl[2], model, optimizer, criterion) running_loss += loss.item() * batch_size epoch_loss = running_loss / len(train_dl) # loss_values.append(epoch_loss) wandb.log({"epoch": epoch, "avg_batch_loss": epoch_loss}) # wandb.log({"epoch": epoch, "loss": loss}, step=example_ct) print("Average epoch loss {}".format(epoch_loss)) def make(config, is_one_batch, data_predefined=True): optimizers = { "Adam":torch.optim.Adam, "SGD":torch.optim.SGD } if data_predefined: train_dl, test_dl = train_dl_predef, test_dl_predef else: train_dl, test_dl = dataset.prepare_data() if is_one_batch: train_dl = next(iter(train_dl)) test_dl = train_dl # Make the model model = architecture.darknet(config.batch_norm) model.to(device) # Make the loss and optimizer criterion = YoloLoss() optimizer = optimizers[config.optimizer]( model.parameters(), lr=config.learning_rate, momentum=config.momentum ) return model, train_dl, test_dl, criterion, optimizer def model_pipeline(hyp, is_one_batch=False, device=device): with wandb.init(project="YOLO-recreated", entity="bindas1", config=hyp): config = wandb.config # make the model, data, and optimization problem model, train_dl, test_dl, criterion, optimizer = make(config, is_one_batch) # and use them to train the model train(model, train_dl, criterion, optimizer, config, is_one_batch) return model Code without wandb: def train_model(train_dl, model, is_one_batch=False): # define the optimization criterion = YoloLoss() optimizer = SGD(model.parameters(), lr=LEARNING_RATE, momentum=MOMENTUM) # for loss plotting loss_values = [] # enumerate epochs for epoch in tqdm(range(EPOCHS)): if epoch % 10 == 0: print(epoch) running_loss = 0.0 if not is_one_batch: # enumerate mini batches for i, (inputs, _, targets) in enumerate(train_dl): inputs = inputs.to(device) targets = targets.to(device) # clear the gradients optimizer.zero_grad() # compute the model output yhat = model(inputs) # calculate loss loss = criterion(yhat, targets) # credit assignment loss.backward() # print(loss) running_loss =+ loss.item() * inputs.size(0) # update model weights optimizer.step() else: # for one batch only with 
torch.autograd.detect_anomaly(): inputs, targets = train_dl[0].to(device), train_dl[2].to(device) optimizer.zero_grad() # compute the model output yhat = model(inputs) # calculate loss loss = criterion(yhat, targets) # credit assignment loss.backward() print(loss) running_loss =+ loss.item() * inputs.size(0) # update model weights optimizer.step() loss_values.append(running_loss / len(train_dl)) plot_loss(loss_values) model = architecture.darknet() model.to(device) optimizer = SGD(model.parameters(), lr=LEARNING_RATE, momentum=MOMENTUM) train_dl_main, test_dl_main = train_dl_predef, test_dl_predef one_batch = next(iter(train_dl_main)) train_model_wandb(one_batch, model, is_one_batch=True)
Hmm, strange, so in your edit you're saying that it works ok if you remove wandb.watch? To double check, have you tried the original code while on the latest version of wandb (0.12.7)?
https://stackoverflow.com/questions/70297236/
Do I need to apply the Softmax Function ANYWHERE in my multi-class classification Model?
I am currently turning my binary classification model into a multi-class classification model. Bear with me, I am very new to PyTorch and machine learning. Most of what I state here I know from the following video: https://www.youtube.com/watch?v=7q7E91pHoW4&t=654s What I read/know is that CrossEntropyLoss already has the softmax function implemented, thus my output layer is linear. What I then read/saw is that I can just choose my model prediction by taking the torch.max() of my model output (which comes from my last linear layer). This feels weird because I have some negative outputs and I thought I needed to apply the softmax function first, but it seems to work right without it. So now the big confusing question I have is: when would I use the softmax function? Would I only use it when my loss doesn't have it implemented? But then I would choose my prediction based on the outputs of the softmax layer, which wouldn't be the same as with the linear output layer. Thank you guys for every answer this gets.
For calculating the loss with CrossEntropyLoss you do not need softmax, because CrossEntropyLoss already includes it. However, to turn model outputs into probabilities you still need to apply softmax. Let's say you didn't apply softmax at the end of your model and trained it with cross entropy. Then you want to evaluate your model on new data, get outputs and use those outputs for classification. At this point you can manually apply softmax to your outputs, and there will be no problem. This is how it is usually done. Training() MODEL ----> FC LAYER ---> raw outputs ---> CrossEntropy Loss Eval() MODEL ----> FC LAYER ---> raw outputs --> Softmax --> Probabilities
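A minimal sketch of that split (the model, inputs and targets below are placeholders):
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 3)                        # stand-in for your network's last linear layer
x, targets = torch.randn(4, 10), torch.tensor([0, 2, 1, 0])

logits = model(x)                               # raw scores, may be negative
loss = nn.CrossEntropyLoss()(logits, targets)   # softmax/log-softmax handled internally

probs = F.softmax(logits, dim=1)                # only needed if you want probabilities
preds = torch.argmax(logits, dim=1)             # same class indices with or without softmax,
                                                # because softmax preserves the per-row ordering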
https://stackoverflow.com/questions/70303466/
How do I visualize a CNN in PyTorch?
I've just learned a little about PyTorch. I built a CNN to compare the effects of various optimization algorithms following the official PyTorch documents (I've just finished going from SGD to Adagrad). However, most of the official documents and tutorial videos end once the accuracy and time consumption are calculated, and I had no idea at all how to write the code for model visualization. I would like to ask what is used to produce visualizations similar to the following two figures. Is it Matplotlib pyplot or a visualization tool that corresponds to PyTorch?
I can not tell you what library is used to generate the plot you linked to. There are plenty of options, all of which you can use once you have the data. One of these options is matplotlib. Others include using Matlab or pgfplots if you want to include your plots in a LaTeX document. These are the tools I use somewhat frequently. They are purely subjective choices. However, pytorch also supports tensorboard, which is especially useful for live tracking of the training progress. Have a look at this tutorial: https://pytorch.org/tutorials/recipes/recipes/tensorboard_with_pytorch.html
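For completeness, a minimal TensorBoard sketch along the lines of that tutorial (the log directory and loss values are placeholders):
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/optimizer_comparison")   # hypothetical log directory
for epoch in range(10):
    train_loss = 1.0 / (epoch + 1)                    # replace with your real epoch loss
    writer.add_scalar("Loss/train", train_loss, epoch)
writer.close()
# then launch: tensorboard --logdir=runs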
https://stackoverflow.com/questions/70315384/
How to confirm that PyTorch Lightning is using (all) available GPUs and debug if it isn't?
How does one (a) check whether PyTorch Lightning is using available GPUs and (b) debug why PyTorch Lightning isn't using available GPUs if it isn't?
For (a), monitoring, you can use the tool Glances and you shall see whether all your GPUs are used (for GPU support install it as pip install glances[gpu]). To debug the used resources (b), first check that your PyTorch installation can reach your GPU, for example: python -c "import torch; print(torch.cuda.device_count())" then all shall be fine...
https://stackoverflow.com/questions/70318346/
PyTorch Lightning - Is Trainer necessary to use multiple GPUs?
If I want to take advantage of PyTorch Lightning's ability to train using multiple GPUs, do I have to use their Trainer?
If you want to use all the Lightning features (even multi-GPU) such as loggers, metrics tracking, and checkpointing, then you would need to use the Trainer. On the other hand, if you are fine with some limited functionality you can check out the recent LightningLite.
https://stackoverflow.com/questions/70318365/
How to download an older version of PyTorch Geometric in Google Colab?
Question: How can I download an older version of PyTorch geometric in google colab? Context: I am trying to use/load a pytorch-geometric graph and am getting the error message: "RuntimeError: The 'data' object was created by an older version of PyG. If this error occurred while loading an already existing dataset, remove the 'processed/' directory in the dataset's root folder and try again." This graph was generated during summer 2021. I am using PyTorch and the following code to import Pytorch geometric, but am still getting the error when using older versions. #import torch !pip install torch==1.8.0 import torch torch.__version__ !pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.8.0+cpu.html I am not sure whether I am using the correct older version of pytorch-geometric (I don't really know how to check whether this is correct). Any insight to would be greatly appreciated: How to download older version of pytorch geometric? What is causing this error? Thanks in advance.
You may not need to downgrade: If G is a graph data object giving this error you can simply convert it as follows. from torch_geometric.data import Data G = Data(**G.__dict__)
https://stackoverflow.com/questions/70325327/
Forcing NN weights to always be in a certain range
I have a simple model: class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(3, 10) self.fc2 = nn.Linear(10, 30) self.fc3 = nn.Linear(30, 2) def forward(self, x): x = torch.relu(self.fc1(x)) x = torch.relu(self.fc2(x)) x = torch.tanh(self.fc3(x)) return x net = Model() How can I keep the weights to always be between a certain value (eg -1,1)? I tried the following: self.fc1 = torch.tanh(nn.Linear(3, 10)) Which I'm not entirely sure that will always keep them in that value (even if the gradient update is trying to push them farther). But got the following error: TypeError: tanh(): argument 'input' (position 1) must be Tensor, not Linear
According to the discuss.pytorch you can create extra class to clip weights between a given range. Link to the discussion. class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(3, 10) self.fc2 = nn.Linear(10, 30) self.fc3 = nn.Linear(30, 2) def forward(self, x): x = torch.relu(self.fc1(x)) x = torch.relu(self.fc2(x)) x = torch.tanh(self.fc3(x)) return x You should add weight clipper: class WeightClipper(object): def __call__(self, module): # filter the variables to get the ones you want if hasattr(module, 'weight'): w = module.weight.data w = w.clamp(-1,1) module.weight.data = w model = Model() clipper = WeightClipper() model.apply(clipper)
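One detail worth adding: the clamp only happens when model.apply(clipper) is called, so to keep the weights inside [-1, 1] throughout training it is typically re-applied after every optimizer step, roughly like this (the loader, criterion and optimizer names are placeholders):
for x, y in train_loader:            # hypothetical dataloader
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    model.apply(clipper)             # re-clamp all weights into [-1, 1]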
https://stackoverflow.com/questions/70330169/
How to find the optimal learning rate, number of epochs & decay strategy in Torch.optim.adam?
I am working on a model trained on the MNIST dataset. I am using the torch.optim.adam model and have been experimenting with tuning the hyper parameters. After running a lot of tests, I have come to find a combination of hyper parameters that give 90% accuracy. However, I feel like maybe since I am new to this, there might be a more efficient way to find the optimal values of the hyperparameters. The brute force approach seems to depend on trial and error & I was wondering if there is certain strategy to find these values. Example of the code being used is: if __name__ == '__main__': end = time.time() model_ft = Net().to(device) print(model_ft.network) criterion = nn.CrossEntropyLoss() optimizer_ft = optim.Adam(model_ft.parameters(), lr=1e-3) exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=9, gamma=0.5) history, accuracy = train_test(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=15) Here I would like to find the optimal values of:- Learning Rate Step Size Gamma Number of Epochs Any help is much appreciated!
A similar question was already answered in depth, it seems. However, in short, you can use something called grid search. With grid search, you set the values you want to try for each hyperparameter, and then grid search will try every combination. This link shows how to do it with PyTorch. The following Medium post goes more in-depth about other methods and packages to try, but I think you should start with a simple grid search.
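A minimal hand-rolled grid search sketch using the names from the question (Net, device, criterion, train_test); the candidate values are just examples:
import itertools

param_grid = {
    "lr": [1e-2, 1e-3, 1e-4],
    "step_size": [5, 9],
    "gamma": [0.1, 0.5],
    "num_epochs": [10, 15],
}

best_acc, best_params = 0.0, None
for lr, step_size, gamma, num_epochs in itertools.product(*param_grid.values()):
    model = Net().to(device)
    optimizer = optim.Adam(model.parameters(), lr=lr)
    scheduler = lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=gamma)
    _, acc = train_test(model, criterion, optimizer, scheduler, num_epochs=num_epochs)
    if acc > best_acc:
        best_acc, best_params = acc, (lr, step_size, gamma, num_epochs)
print(best_params, best_acc)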
https://stackoverflow.com/questions/70330349/
How to install pytorch with CUDA support with pip in Visual Studio
I am trying to install torch with CUDA enabled in Visual Studio environment. I right clicked on Python Environments in Solution Explorer, uninstalled the existing version of Torch that is not compiled with CUDA and tried to run this pip command from the official Pytorch website. The command is: pip3 install torch==1.10.0+cu102 torchvision==0.11.1+cu102 torchaudio===0.10.0+cu102 -f https://download.pytorch.org/whl/cu102/torch_stable.html Visual Studio reports this error Looking in links: https://download.pytorch.org/whl/cu102/torch_stable.html ERROR: Could not find a version that satisfies the requirement pip3 (from versions: none) ERROR: No matching distribution found for pip3. I have seen similar questions asked on this site but some are circumventing on Conda while others did have unclear answers which were not accepted so I was in doubt whether to follow the answers or not. I have a very important project I need to present and I can't do that unless I install torch with cuda enabled, Please Help me and Thanks.
You can check the PyTorch previous-versions website. First, make sure you have CUDA on your machine by using the nvcc --version command, then install a matching build, for example: pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
https://stackoverflow.com/questions/70340812/
fast.ai not using the GPU
When I run training using fast.ai only the CPU is used even though import torch; print(torch.cuda.is_available()) shows that CUDA is available and some memory on the GPU is occupied by my training process. from main import DefectsImagesDataset from fastai.vision.all import * import numpy as np NUM_ELEMENTS = 1e5 CSV_FILES = { 'events_path': './data/events.csv', 'defects_path': './data/defects2020_all.csv', } defects_dataset = DefectsImagesDataset(CSV_FILES['defects_path'], CSV_FILES['events_path'], NUM_ELEMENTS, window_size=10000) model = models.resnet34 BATCH_SIZE = 16 NUMBER_WORKERS = 8 dls = DataLoaders.from_dsets(defects_dataset, defects_dataset, bs=BATCH_SIZE, num_workers=NUMBER_WORKERS) import torch; print(torch.cuda.is_available()) loss_func = nn.CrossEntropyLoss() learn = cnn_learner(dls, models.resnet34, metrics=error_rate, n_out=30, loss_func=loss_func) learn.fit_one_cycle(1) CUDA-Version: 11.5 Fast.ai-Version: 2.5.3 How can I make fast.ai use the GPU?
I had to specify the device when creating the dataloaders. Instead of dls = DataLoaders.from_dsets( defects_dataset, defects_dataset, bs=BATCH_SIZE, num_workers=NUMBER_WORKERS) I now have dls = DataLoaders.from_dsets( defects_dataset, defects_dataset, bs=BATCH_SIZE, num_workers=NUMBER_WORKERS, device=torch.device('cuda'))
https://stackoverflow.com/questions/70351366/
Multiple parameters recovery using Deep Learning
As a simplified version of my actual research problem, let's say I have a second-order polynomial function y = ax^2 + bx + c and I want to use a deep neural network to predict the parameters a, b and c given the variable x and the value of the function y. The variable x and the parameters a,b,c are exctracted from a uniform distribution in the range [0,1]. When I try to train the network using different architectures, cost functions and hyperparameters combinations among the most used, I always got the same issue: the train and test losses rapidly converge to a value significantly higher than 0, then starts to fluctuate in a strange way and the predictions are not accurate (see figures as a general example, the predictions for b are similar, c is slightly better but still not satisfactory). This happens even if I set higher momentum or lower learning rates. Also, I got the same issue if I try to recover one parameter at a time. As an example, here is the PyTorch code I used for my first test (4 layers, first 3 followed by ReLU, MSELoss, RMSprop optimizer with learning rate = 0.001 and momentum 0.9). class PRNet(nn.Module): def __init__(self, input_size, output_size): super(PRNet, self).__init__() self.input_size = input_size self.fc1 = nn.Linear(self.input_size, 32) self.relu1 = nn.ReLU() self.fc2 = nn.Linear(32, 64) self.relu2 = nn.ReLU() self.fc3 = nn.Linear(64, 64) self.relu3 = nn.ReLU() self.fc4 = nn.Linear(64, output_size) def forward(self, x): output = self.fc1(x) output = self.relu1(output) output = self.fc2(output) output = self.relu2(output) output = self.fc3(output) output = self.relu3(output) output = self.fc4(output) return output device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') var_x = np.random.rand(100000) pars_abc = np.random.rand(3, 100000) func_y = pars[0]*var**2 + pars[1] * var + pars[2] data = np.vstack((var_x, func_y)).T parameters = pars_abc.T X = torch.Tensor(data).to(device).float() y = torch.Tensor(parameters).to(device).float() train_size = int(0.8 * len(data)) batch_size = 100 train_dataset = TensorDataset(X[:train_size], y[:train_size]) train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=False) prnet = PRNet(X.shape[1], 3).to(device) loss_function = nn.MSELoss() optimizer = torch.optim.RMSprop(prnet.parameters(), lr=1e-4, momentum=0.9) num_epochs = 25 for epoch in range(0, num_epochs): print(f'Starting epoch {epoch+1}') current_loss = 0.0 for i, batch in enumerate(train_dataloader, 0): inputs, targets = batch optimizer.zero_grad() outputs = prnet(inputs) test_outputs = prnet(X[train_size:].to(device)) train_loss = loss_function(outputs, targets) test_loss = loss_function(test_outputs, y[train_size:]) train_loss_plot[epoch,i] = train_loss.item() test_loss_plot[epoch,i] = test_loss.item() train_loss.backward() optimizer.step() What could be the cause of this issue? Are the features not representative enough? Do I need a custom loss more suitable for this problem?
During training, when a model's loss starts fluctuating, the most probable cause for such a pattern to show up is that the learning rate is high for the weights to get to the required value. Consider this example. Suppose in your model, a parameter (weight), initialized with a value of 0.1, needs to get to a value of 0.00423 and the learning rate is set to 0.001. Now, let's assume that the parameter has reached a value of 0.004 after a few epochs of training. Gradient descent will try to increase the value in order to make it equal to the target value but since the learning rate is only upto 3 decimal digits, the parameter value will now become 0.005. Since the value has now increased, gradient descent will try to decrease the value which will change the parameter value back to 0.004 and thus starting a fluctuation pattern. To solve this issue, using a small learning rate will not help. Because if you use a small learning rate then the model will learn too slowly and might not converge at all. What you are probably looking for is a way to use a variable learning rate policy in your training. With such a policy, you can begin with a large learning rate initially so that the model learns faster. And later on, when the model parameters get close to the target values, the learning rate should decrease automatically in order to make the parameters reach as close as possible to the target. These policies are called learning rate schedulers. There are several functions in PyTorch that let you use a learning rate scheduler of your choice. You can look for them in their documentation. I'll suggest you to go for the Reduce LR on Plateau scheduler. It will let you set a threshold and a factor. Whenever your model loss does not improve over the specified threshold of number of epochs, it will decrease the learning rate by the factor. https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html#torch.optim.lr_scheduler.ReduceLROnPlateau
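A minimal sketch of wiring that scheduler into the training loop from the question (the factor and patience values are just examples):
optimizer = torch.optim.RMSprop(prnet.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=5)

for epoch in range(num_epochs):
    epoch_loss = 0.0
    for inputs, targets in train_dataloader:
        optimizer.zero_grad()
        loss = loss_function(prnet(inputs), targets)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    scheduler.step(epoch_loss / len(train_dataloader))   # lowers the lr when the loss plateaus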
https://stackoverflow.com/questions/70353293/
A question about applying a neural network on a specified dimension using PyTorch
I'm wondering how to do the following: if I have a torch.Tensor x with shape (4,5,1), how can I apply a neural network on the last dimension using PyTorch? Using the standard procedure, the model flattens the entire tensor into some new tensor of shape (20,1), but this is not actually what I want. Let's say we want some output features of dimension 64; then I would like to obtain a new object of shape (4,5,64).
import torch import torch.nn as nn x = torch.randn(4, 5, 1) print(x.size()) # https://pytorch.org/docs/stable/generated/torch.nn.Linear.html m = nn.Linear(1, 64) y = m(x) print(y.size()) result: torch.Size([4, 5, 1]) torch.Size([4, 5, 64])
https://stackoverflow.com/questions/70365825/
Backpropagating multiple losses in Pytorch
I am building up a cascade of neural networks and I would like to backpropagate the main loss back to the DNNs and also compute an auxillary loss back to each DNN. I am trying to figure out what is the best practice when building such a model and how to make sure that my losses are computed properly. Do I build a single torch.nn.Module and a single optimizer, or do I have to create separate modules and optimizers for each network? Also I am likely to have more than three cascaded DNNs. Approach a) import torch from torch import nn, optim class MasterNetwork(nn.Module): def init(self): super(MasterNetwork, self).__init__() dnn1 = nn.ModuleList() dnn2 = nn.ModuleList() dnn3 = nn.ModuleList() def forward(self, x, z1, z2): out1 = dnn1(x) out2 = dnn2(out1 + z1) out3 = dnn3(out2 + z2) return [out1, out2, out3] def LossFunction(in): # do stuff return loss # loss is a scalar value def ac_loss_1_fn(in): # do stuff return loss # loss is a scalar value def ac_loss_2_fn(in): # do stuff return loss # loss is a scalar value def ac_loss_3_fn(in): # do stuff return loss # loss is a scalar value model = MasterNetwork() optimizer = optim.Adam(model.parameters()) input = torch.tensor() z1 = torch.tensor() z2 = torch.tensor() outputs = model(input, z1, z2) main_loss = LossFunction(outputs[2]) ac1_loss = ac_loss_1_fn(outputs[0]) ac2_loss = ac_loss_2_fn(outputs[1]) ac3_loss = ac_loss_3_fn(outputs[2]) optimizer.zero_grad() ''' This is where I am uncertain about how to backpropagate the AC losses for each DNN in addition to the main loss. ''' optimizer.step() Approach b) This would creating a nn.Module class and optimizer for each DNN and then forwarding the loss to the next DNN. I would prefer to have a solution for approach a) since it is less tedious and I don't have to deal with tuning multiple optimizers. However, I am not sure if this is possible. There was a similar question about backpropagating multiple losses, however, I was not able to understand how combining the losses would work for the distinct components.
The solution you are looking for is likely to use some form of the following: y = torch.stack([main_loss, ac1_loss, ac2_loss, ac3_loss]) y.backward(gradient=torch.tensor([1.0, 1.0, 1.0, 1.0])) (use torch.stack rather than torch.tensor here, since building a brand new tensor from the loss values would detach them from the computation graph). See https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients for confirmation. A similar question exists but uses a different phrasing, and it was the question I found first when hitting this issue. The similar question can be found at Pytorch. Can autograd be used when the final tensor has more than a single value in it?
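As an alternative (a different, very common pattern rather than what the linked tutorial shows), you can simply add the scalar losses and call backward once; the gradient of each term accumulates into the shared parameters, and the weights let you balance the auxiliary terms:
total_loss = main_loss + 0.1 * ac1_loss + 0.1 * ac2_loss + 0.1 * ac3_loss   # example weights
optimizer.zero_grad()
total_loss.backward()
optimizer.step()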
https://stackoverflow.com/questions/70367910/
Finding the mean and std of pixel values for grayscale images in pytorch
I'm trying to normalize this grayscale xray images dataset https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia I have a few doubts 1)I looked up some of the projects done using the same dataset and this one below has three mean values (presumably for the three channels). But since this is a grayscale image dataset how can it have mean pixel values for 3 channels? Shouldn't it just be one number between 0 and 1? (https://www.kaggle.com/brennolins/image-classification-pytorch-transfer-learning) In an attempt to find the mean and std, I attempted to do this - train_loader = DataLoader(dataset = train_set, batch_size=64, shuffle=True) def get_mean_std(loader): channels_sum, channels_square_sum, num_batches= 0, 0, 0 for data, _ in loader: channels_sum += torch.mean(data, dim=[]) channels_square_sum += torch.mean(data**2, dim = [0,2,3]) num_batches += 1 mean=channels_sum/num_batches std= (channels_square_sum/num_batches - mean**2) return mean, std mean, std= get_mean_std(train_loader) print(mean) print(std) It gives me one single value as my pixel mean. I ran this twice and I got two different mean values and a different set of std values. How can this happen? This is the transformation Im trying to apply to my training set - transf_train = tt.Compose([ tt.Resize(60), tt.RandomCrop(54, padding=4, padding_mode='reflect'), tt.ToTensor(), # converts pixels [0-255] to tensors [0-1] tt.Normalize(mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225])]) (I've taken these current values from the pytorch project done above. I wanted to know how I can find these out myself and how there are three mean channels when the images are grayscale) thank you!
You get different numbers because the tt.RandomCrop introduces randomness into the data. You need to go once over the training set and compute mean and std without augmentations.
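A sketch of that pass, assuming an ImageFolder-style train directory (the path is hypothetical); note the transform uses only deterministic steps (no RandomCrop) and loads the images as a single channel, so you get one mean and one std:
import torch
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder
import torchvision.transforms as tt

stat_tf = tt.Compose([tt.Grayscale(num_output_channels=1), tt.Resize((60, 60)), tt.ToTensor()])
stat_set = ImageFolder("chest_xray/train", transform=stat_tf)   # hypothetical path
loader = DataLoader(stat_set, batch_size=64, shuffle=False)

n, s, s2 = 0, 0.0, 0.0
for data, _ in loader:
    n += data.numel()
    s += data.sum().item()
    s2 += (data ** 2).sum().item()
mean = s / n
std = (s2 / n - mean ** 2) ** 0.5
print(mean, std)   # single values, since there is only one channel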
https://stackoverflow.com/questions/70371050/
Printing tensor sometimes returns shape of the tensor in Pytorch
So I have this tensor called bids, and I try to filter some values of it for debugging purposes. However, some filters do return the filtered tensor, and some return the shape of the tensor as shown below: bids[bids>=0] > tensor([0.6249, 0.2195, 0.1606, ..., 0.1114, 0.2826, 0.8744], grad_fn=<IndexBackward>) bids[bids<0] > tensor([], grad_fn=<IndexBackward>) bids[bids=='nan'] >tensor([], size=(0, 1024, 2, 1), grad_fn=<IndexBackward>) Does anybody have any idea why this happens? Thanks in advance!
That's because the result from the masking operation is empty (notice how one of the dimensions is equal to zero). The reason is you have no elements in bids that equal 'nan'. In turn, this makes the mask bids == 'nan' comprised of only zero values. Here is a minimal example: >>> bids = torch.arange(10) >>> bids[bids=='nan'] tensor([], size=(0, 10), dtype=torch.int64)
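As a side note, if the intent of the last line was to find actual NaN entries, comparing against the string 'nan' will not do that; torch.isnan builds the correct mask:
import torch

bids = torch.tensor([0.6249, float('nan'), 0.1606])
print(bids[torch.isnan(bids)])    # tensor([nan])
print(bids[~torch.isnan(bids)])   # tensor([0.6249, 0.1606])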
https://stackoverflow.com/questions/70377880/
Equivalent of tf.linalg.diag_part in PyTorch
As I'm reimplementing some code, I'm wondering if there is any equivalent of tf.linalg.diag_part (docs) in PyTorch ..?
I don't believe there's a direct equivalent. However, you can get away using torch.diag: >>> x = torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8]]) >>> torch.diag(x.flatten()).reshape(-1, 4, 2, 4).sum(-2) tensor([[[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4]], [[5, 0, 0, 0], [0, 6, 0, 0], [0, 0, 7, 0], [0, 0, 0, 8]]])
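Depending on which direction is needed, two built-ins may be more direct: torch.diag_embed constructs the batched diagonal matrices shown above, and torch.diagonal with dim1/dim2 extracts batched diagonals (which is what tf.linalg.diag_part does):
import torch

x = torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8]])
d = torch.diag_embed(x)                        # shape (2, 4, 4), same result as above
back = torch.diagonal(d, dim1=-2, dim2=-1)     # shape (2, 4), recovers x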
https://stackoverflow.com/questions/70381758/
What is the proper way to checkpoint during training when using distributed data parallel (DDP) in PyTorch?
I want (the proper and official - bug free way) to do: resume from a checkpoint to continue training on multiple gpus save checkpoint correctly during training with multiple gpus For that my guess is the following: to do 1 we have all the processes load the checkpoint from the file, then call DDP(mdl) for each process. I assume the checkpoint saved a ddp_mdl.module.state_dict(). to do 2 simply check who is rank = 0 and have that one do the torch.save({'model': ddp_mdl.module.state_dict()}) Approximate code: def save_ckpt(rank, ddp_model, path): if rank == 0: state = {'model': ddp_model.module.state_dict(), 'optimizer': optimizer.state_dict(), } torch.save(state, path) def load_ckpt(path, distributed, map_location=map_location=torch.device('cpu')): # loads to checkpoint = torch.load(path, map_location=map_location) model = Net(...) optimizer = ... model.load_state_dict(checkpoint['model']) optimizer.load_state_dict(checkpoint['optimizer']) if distributed: model = DDP(model, device_ids=[gpu], find_unused_parameters=True) return model Is this correct? One of the reasons that I am asking is that distributed code can go subtly wrong. I want to make sure this does not happen to me. Of course I want to avoid deadlocks but that would be obvious if it happens to me (e.g. perhaps it could happen if all the processes somehow tried to open the same ckpt file at the same time. In that case I'd somehow make sure that only one of them loads it one at a time or have rank 0 only load it and then send it to the rest of the processes). I am also asking because the official docs don't make sense to me. I will paste their code and explanation since links can die sometimes: Save and Load Checkpoints It’s common to use torch.save and torch.load to checkpoint modules during training and recover from checkpoints. See SAVING AND LOADING MODELS for more details. When using DDP, one optimization is to save the model in only one process and then load it to all processes, reducing write overhead. This is correct because all processes start from the same parameters and gradients are synchronized in backward passes, and hence optimizers should keep setting parameters to the same values. If you use this optimization, make sure all processes do not start loading before the saving is finished. Besides, when loading the module, you need to provide an appropriate map_location argument to prevent a process to step into others’ devices. If map_location is missing, torch.load will first load the module to CPU and then copy each parameter to where it was saved, which would result in all processes on the same machine using the same set of devices. For more advanced failure recovery and elasticity support, please refer to TorchElastic. def demo_checkpoint(rank, world_size): print(f"Running DDP checkpoint example on rank {rank}.") setup(rank, world_size) model = ToyModel().to(rank) ddp_model = DDP(model, device_ids=[rank]) loss_fn = nn.MSELoss() optimizer = optim.SGD(ddp_model.parameters(), lr=0.001) CHECKPOINT_PATH = tempfile.gettempdir() + "/model.checkpoint" if rank == 0: # All processes should see same parameters as they all start from same # random parameters and gradients are synchronized in backward passes. # Therefore, saving it in one process is sufficient. torch.save(ddp_model.state_dict(), CHECKPOINT_PATH) # Use a barrier() to make sure that process 1 loads the model after process # 0 saves it. 
dist.barrier() # configure map_location properly map_location = {'cuda:%d' % 0: 'cuda:%d' % rank} ddp_model.load_state_dict( torch.load(CHECKPOINT_PATH, map_location=map_location)) optimizer.zero_grad() outputs = ddp_model(torch.randn(20, 10)) labels = torch.randn(20, 5).to(rank) loss_fn = nn.MSELoss() loss_fn(outputs, labels).backward() optimizer.step() # Not necessary to use a dist.barrier() to guard the file deletion below # as the AllReduce ops in the backward pass of DDP already served as # a synchronization. if rank == 0: os.remove(CHECKPOINT_PATH) cleanup() Related: https://discuss.pytorch.org/t/checkpointing-ddp-module-instead-of-ddp-itself/115714 https://pytorch.org/tutorials/intermediate/ddp_tutorial.html https://discuss.pytorch.org/t/ddp-and-gradient-checkpointing/132244 https://github.com/pytorch/pytorch/issues/23138 https://pytorch.org/tutorials/intermediate/ddp_tutorial.html What is the proper way to checkpoint during training when using distributed data parallel (DDP) in PyTorch? https://discuss.pytorch.org/t/what-is-the-proper-way-to-checkpoint-during-training-when-using-distributed-data-parallel-ddp-in-pytorch/139575
I am looking at the official ImageNet example and here's how they do it. First, they create the model in DDP mode: model = ResNet50(...) model = DDP(model, ...) At checkpoint-saving time, they check if it is the main process and then save the state_dict: import torch.distributed as dist if dist.get_rank() == 0: # check if main process, a simpler way compared to the link torch.save({'state_dict': model.state_dict(), ...}, '/path/to/checkpoint.pth.tar') During loading, they load the model and put it in DDP mode as usual, without the need of checking the rank: checkpoint = torch.load('/path/to/checkpoint.pth.tar') model = ResNet50(...) model.load_state_dict(checkpoint['state_dict']) model = DDP(model, ...) If you want to load it but not in DDP mode, it is a bit tricky, since for some reason they save it with an extra 'module.' prefix. As solved here, you have to do: state_dict = torch.load('/path/to/checkpoint.pth.tar')['state_dict'] from collections import OrderedDict new_state_dict = OrderedDict() for k, v in state_dict.items(): name = k[7:] # remove 'module.' of DataParallel/DistributedDataParallel new_state_dict[name] = v model.load_state_dict(new_state_dict)
https://stackoverflow.com/questions/70386800/
PyTorch Lightning complex-valued CNN training outputs NaN after 1 batch
I have built a complex-valued CNN using ComplexPyTorch, where the layers are wrapped in a torch.ModuleList. When running the network I get through the validation sanity check and 1 batch of the training, then my loss outputs NaNs. Logging gradients in on_after_backward shows NaNs immediately. Does anyone have any suggestions for how I can troubleshoot this? I have a real-valued version of the network where I'm not using ComplexPyTorch and everything works fine so I can't help but feeling that during the network's backward pass there is a problem with my layers being in a torch.ModuleList. Also, I hard-coded the network without a torch.ModuleList and didn't get this issue either.
For anyone interested, I set detect_anomaly=True in Trainer, then was able to trace the torch function outputting NaNs during backpropagation. In my case it was torch.atan2 so I added a tiny epsilon to its denominator and fixed it, but as a general point I've always found denominator epsilons to be really helpful in preventing NaNs from dividing functions!
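For illustration, a tiny sketch of that kind of guard (the variable names and the epsilon value are placeholders, not from the original model):
import torch

real = torch.zeros(3, requires_grad=True)
imag = torch.zeros(3, requires_grad=True)
eps = 1e-8
phase = torch.atan2(imag, real + eps)   # avoids the undefined gradient of atan2 at (0, 0)
phase.sum().backward()                  # real.grad and imag.grad stay finite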
https://stackoverflow.com/questions/70413924/
RuntimeError: ‘lengths’ argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
I am trying to do a text classification using pytorch and torchtext on paperspace. I get RuntimeError: ‘lengths’ argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor My PyTorch version is 1.10.1+cu102
I just had this problem yesterday, in my case the rnn pad sequences wants length to be on the cpu, so just put the lengths to CPU in your function call like this: packed_sequences = nn.utils.rnn.pack_padded_sequence(padded_tensor, valid_frames.to('cpu'), batch_first=True, enforce_sorted=True) This might not be the exact function you're using but I think it will apply to most of the rnn utils functions.
https://stackoverflow.com/questions/70428140/
Cross Entropy for Soft Labeling in Pytorch
I'm trying to define the loss function of a two-class classification problem. However, the target label is not a hard label 0/1, but a float number between 0 and 1. torch.nn.CrossEntropyLoss in PyTorch does not support soft labels, so I'm trying to write a cross entropy function by myself. My function looks like this: def cross_entropy(self, pred, target): loss = -torch.mean(torch.sum(target.flatten() * torch.log(pred.flatten()))) return loss def step(self, batch: Any): x, y = batch logits = self.forward(x) loss = self.criterion(logits, y) preds = logits # torch.argmax(logits, dim=1) return loss, preds, y However, it does not work at all. Can anyone give me a suggestion: is there any mistake in my loss function?
It seems like BCELoss and the more robust version BCEWithLogitsLoss work with fuzzy targets "out of the box". They do not expect target to be binary: any number between zero and one is fine. Please read the doc.
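A minimal sketch with soft targets (the shapes and values are illustrative):
import torch
import torch.nn as nn

logits = torch.randn(8, 1, requires_grad=True)   # raw model outputs, no sigmoid applied
target = torch.rand(8, 1)                        # soft labels anywhere in [0, 1]
loss = nn.BCEWithLogitsLoss()(logits, target)
loss.backward()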
https://stackoverflow.com/questions/70429846/
Pytorch Lightning Tensorboard logger automatically adds "epoch" scalar
As in: How do you prevent the tensorboard logger in pytorch lightning from logging the current epoch? Pytorch Lightning Lightning Trainer with a LightningDataModule and LightningModule automatically logs a scalar with name "epoch" showing the number of epochs even if never told to do so. How do I remove/ control that behavior?
In short, you can disable automatically writing the epoch variable by overriding the TensorBoard logger. from pytorch_lightning import loggers from pytorch_lightning.utilities import rank_zero_only class TBLogger(loggers.TensorBoardLogger): @rank_zero_only def log_metrics(self, metrics, step): metrics.pop('epoch', None) return super().log_metrics(metrics, step) I already answered the question you mentioned, so if you want to see the full, longer version, go to that question.
https://stackoverflow.com/questions/70442096/
How can I concatenate pytorch tensors or lists in a distributed multi-node setup?
I am trying to implement something like this for 2 nodes (each node with 2 GPUs): #### Parallel process initiated with torch.distributed.init_process_group() ### All GPUs work in parallel, and generate lists like : [20, 0, 1, 17] for GPU0 of node A [1, 2, 3, 4] for GPU1 of node A [5, 6, 7, 8] for GPU0 of node B [0, 2, 4, 6] for GPU1 of node B I tried torch.distributed.reduce() to get a sum of these 4: [26, 10, 15, 35] But what I want is a concatenated version like this [[20, 0, 1, 17], [1, 2, 3, 4] , [5, 6, 7, 8] , [0, 2, 4, 6]] Or [20, 0, 1, 17, 1, 2, 3, 4, 5, 6, 7, 8, 0, 2, 4, 6] is also OK with me. Is it possible to achieve this from torch.distributed?
You can use dist.all_gather to do this: import torch import torch.distributed as dist q = torch.tensor([20, 0, 1, 17]) # generated on each gpu (with different values) as you mentioned all_q = [torch.zeros_like(q) for _ in range(world_size)] # world_size is the total number of gpu processes you are running, 4 in your case dist.all_gather(all_q, q) # fills all_q in place all_q would then contain the following: [torch.tensor([20, 0, 1, 17]), torch.tensor([1, 2, 3, 4]), torch.tensor([5, 6, 7, 8]), torch.tensor([0, 2, 4, 6])] You can then use torch.cat to collapse all elements into one array if you like. You can use dist.all_gather_multigpu if you have lists of lists of tensors.
https://stackoverflow.com/questions/70456576/
Get frequency of words using Vocab in pytorch Torchtext
how can i get the frequencies of tokens in a torchtext vocab that is created using build_vocab_from_iterator? link to doc:https://pytorch.org/text/stable/vocab.html#torchtext.vocab.Vocab def build_vocab(data_iter, tokenizer): """Builds vocabulary from iterator""" vocab = build_vocab_from_iterator( map(tokenizer, data_iter), specials=["<unk>"], min_freq=MIN_WORD_FREQUENCY,) vocab.set_default_index(vocab["<unk>"]) return vocab data_iter = get_data_iterator(ds_name, ds_type, data_dir) tokenizer = get_english_tokenizer() if not vocab: vocab = build_vocab(data_iter, tokenizer)
You won't be able to get the frequency after you have built the vocab, since that data is lost during the build. It just checks that the token occurs at least min_freq times and, if so, adds it to the vocabulary. However, you can get the frequency of the tokens before you build the vocabulary. One way to do that is with a Counter (Counter docs): counter = Counter() for text in data_iter: counter.update(tokenizer(text)) You can get the frequency of the tokens from the counter, then build the vocabulary from the counter: vocab = vocab.vocab(counter, min_freq=MIN_WORD_FREQUENCY)
https://stackoverflow.com/questions/70456693/
2-D Tensor calculated by the mean of 3-D Tensor by specific dimension
I have a 3-D tensor, with shape (3000, 20, 5). I want to create a 2-D tensor, of shape (3000, 5), using the mean values of the second dimension of the 3-D tensor. So basically, I want to perform something like: mean_value = torch.mean(3d_tensor[0][:][0]) But getting values for all indices of dimensions one and three. I could do a for loop, e.g.: for j in range(0, 3d_tensor.size()[2]): for i in range(0, len(3d_tensor)): mean_values[j][i] = torch.mean(3d_tensor[i][:][j]) But this takes a long time to process for large amounts of data.
You could simply specify axis along which mean should be taken: mean = torch.mean(tensor, dim=1) This gives you data of shape (3000, 5)
https://stackoverflow.com/questions/70465822/
How do I use slicing as I pass a transformer dataset to Trainer?
In reference to this colab notebook (from Huggingface Transformer course here), if I run tokenized_datasets["train"][:8] the dtype is a dict instead of a Dataset and the slicing would return some data. If I pass the slicing in here, I get a Key error, which I assume has to do with the fact I'm no longer passing a Dataset. from transformers import Trainer trainer = Trainer( model, training_args, train_dataset=tokenized_datasets["train"][:8], eval_dataset=tokenized_datasets["validation"], #data_collator=data_collator, tokenizer=tokenizer, ) trainer.train() ***** Running training ***** Num examples = 7 Num Epochs = 3 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 3 --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-20-3435b262f1ae> in <module>() ----> 1 trainer.train() Is there a simple way to just pass a subset of the Dataset rows for training or validation?
import transformers from datasets import load_dataset datasets = load_dataset('squad') datasets output: DatasetDict({ train: Dataset({ features: ['id', 'title', 'context', 'question', 'answers'], num_rows: 87599 }) validation: Dataset({ features: ['id', 'title', 'context', 'question', 'answers'], num_rows: 10570 }) }) then we can sample partial dataset: train_datasets_sampled = datasets["train"].shuffle(seed=42).select(range(2000)) eval_dataset_sampled = datasets["validation"].shuffle(seed=42).select(range(500)) train_datasets_sampled, eval_dataset_sampled get the following: (Dataset({ features: ['id', 'title', 'context', 'question', 'answers'], num_rows: 2000 }), Dataset({ features: ['id', 'title', 'context', 'question', 'answers'], num_rows: 500 })) source: https://github.com/huggingface/notebooks/blob/master/transformers_doc/training.ipynb
https://stackoverflow.com/questions/70467910/
What happens if optimal training loss is too high
I am training a Transformer. In many of my setups I obtain validation and training loss that look like this: Then, I understand that I should stop training at around epoch 1. But then the training loss is very high. Is this a problem? Does the value of training loss actually mean anything? Thanks
Regarding your first question - it is not necessarily a problem that your training loss is high, since there is no threshold for what is considered as a high training loss. It depends on your dataset, your actual test metrics and your business goals. More specifically, the problems with the value of training loss: The number isn't intuitive, since the loss objective is a metric optimized for gradient descent (i.e. a differentiable function, usually the log version of it). You probably have intuitive business metrics (e.g., precision, recall) oriented towards your end goal, which you should use to decide if your model is good or not. Your train loss is calculated on the training dataset, which is not always representative of a good model, as can be seen in the overfitted model you posted. You shouldn't use this number to make decisions for the goodness of the model. It depends on what you are trying to achieve. Is 80% accuracy high or low? Regarding your second question - Technically, the higher the number the worse the model did in converging, so you should always try to lower it (while taking into consideration overfitting). Comparatively, you can say that one model has a higher loss than another and then try multiple hyperparameters (e.g., dropout, different optimizers) to minimize the point where the validation set diverges.
https://stackoverflow.com/questions/70482540/
torch.nn.CrossEntropyLoss over Multiple Batches
I am currently working with torch.nn.CrossEntropyLoss. As far as I know, it is common to compute the loss batch-wise. However, is there a possibility to compute the loss over multiple batches? More concretely, assume we are given the data import torch features = torch.randn(no_of_batches, batch_size, feature_dim) targets = torch.randint(low=0, high=10, size=(no_of_batches, batch_size)) loss_function = torch.nn.CrossEntropyLoss() Is there a way to compute in one line loss = loss_function(features, targets) # raises RuntimeError: Expected target size [no_of_batches, feature_dim], got [no_of_batches, batch_size] ? Thank you in advance!
You can compute multiple cross-entropy losses but you'll need to do your own reduction. Since cross-entropy loss assumes the feature dim is always the second dimension of the features tensor you will also need to permute it first. loss_function = torch.nn.CrossEntropyLoss(reduction='none') loss = loss_function(features.permute(0,2,1), targets).mean(dim=1) which will result in a loss tensor with no_of_batches entries.
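Putting that together with the dummy tensors from the question (a feature_dim of at least 10 is assumed here so the random targets are valid class indices):
import torch

no_of_batches, batch_size, feature_dim = 4, 8, 10
features = torch.randn(no_of_batches, batch_size, feature_dim)
targets = torch.randint(low=0, high=10, size=(no_of_batches, batch_size))

loss_function = torch.nn.CrossEntropyLoss(reduction='none')
loss = loss_function(features.permute(0, 2, 1), targets).mean(dim=1)
print(loss.shape)  # torch.Size([4]) -- one loss value per batch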
https://stackoverflow.com/questions/70483124/
RuntimeError: Found dtype Long but expected Float when fine-tuning using Trainer API
I'm trying to fine-tune BERT model for sentiment analysis (classifying text as positive/negative) with Huggingface Trainer API. My dataset has two columns, Text and Sentiment, it looks like this. Text Sentiment This was good place 1 This was bad place 0 Here is my code: from datasets import load_dataset from datasets import load_dataset_builder from datasets import Dataset import datasets import transformers from transformers import TrainingArguments from transformers import Trainer dataset = load_dataset('csv', data_files='./train/test.csv', sep=';') tokenizer = transformers.BertTokenizer.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1") model = transformers.BertForSequenceClassification.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1", num_labels=1) def tokenize_function(examples): return tokenizer(examples["Text"], truncation=True, padding='max_length') tokenized_datasets = dataset.map(tokenize_function, batched=True) tokenized_datasets = tokenized_datasets.rename_column('Sentiment', 'label') tokenized_datasets = tokenized_datasets.remove_columns('Text') training_args = TrainingArguments("test_trainer") trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets['train'] ) trainer.train() Running this throws error: Variable._execution_engine.run_backward( RuntimeError: Found dtype Long but expected Float The error may come from dataset itself, but can I fix it with my code somehow? I searched the Internet and this error seems to have been previously solved by "converting tensors to float" but how would I do it with Trainer API? Any advise is very highly appreciated. Some reference: https://discuss.pytorch.org/t/run-backward-expected-dtype-float-but-got-dtype-long/61650/10
Most likely, the problem is with the loss function. This can be fixed if you set up the model correctly, mainly by specifying the correct loss to use. Refer to this code to see the logic for deciding the proper loss. Your problem has binary labels and thus should be framed as a single-label classification problem. As such, the code you have shared will be inferred as a regression problem, which explains the error that it expected float but found long type for target labels. You need to pass the correct problem type. model = transformers.BertForSequenceClassification.from_pretrained( "TurkuNLP/bert-base-finnish-cased-v1", num_labels=1, problem_type = "single_label_classification" ) This will make use of BCE loss. For BCE loss, you need the target to be a float, so you also have to cast the labels to float. I think you can do that with the dataset API. See this. The other way would be to use a multi-class classifier or CE loss. For that, just fixing num_labels should be fine. model = transformers.BertForSequenceClassification.from_pretrained( "TurkuNLP/bert-base-finnish-cased-v1", num_labels=2, )
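If you go the route that needs float labels, a minimal sketch of the cast with the datasets API could be (the column name "label" matches the rename in the question; this is one option, not the only one):
tokenized_datasets = tokenized_datasets.map(lambda example: {"label": float(example["label"])})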
https://stackoverflow.com/questions/70490710/
how effective is transfer learning? keeping only two specific output features without resetting features
I want to keep only two specific output features without resetting features. Resetting features would lose the pre-trained weights. For example, I don't want to do... # https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html?highlight=transfer%20learning%20ant%20bees model_ft = models.resnet18(pretrained=True) num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs, 2) Here is code (following the transfer learning tutorial on Pytorch) I want to do this to see how effective transfer learning is. Even without transfer learning, a model might be effective. Removing 998 out of 1000 categories and leaving only two categories, ant and bee, could be a great categorical model since you are left with only two choices. I do not want to re-train the model, I want to use the weights as it is, otherwise, it will be the same as transfer learning.
You can certainly try this. You can reduce the model output to just the two logits you want to compare with: chosen_cats = torch.Tensor([ant_index, bee_index]).long() with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) outputs = torch.index_select(outputs, 1, chosen_cats) _, preds = torch.max(outputs, 1) loss = criterion(outputs, labels) In this scenario, the preds will be 0 or 1, with 0 predicting ant and 1 predicting bee, so you will need to also modify your labels to reflect this.
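A rough sketch of that label remapping (ant_index and bee_index come from the snippet above; a labels tensor holding the original class indices is assumed):
label_map = {int(ant_index): 0, int(bee_index): 1}
labels = torch.tensor([label_map[int(l)] for l in labels], device=labels.device)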
https://stackoverflow.com/questions/70491744/
How to free GPU memory in PyTorch
I have a list of sentences I'm trying to calculate perplexity for, using several models using this code: from transformers import AutoModelForMaskedLM, AutoTokenizer import torch import numpy as np model_name = 'cointegrated/rubert-tiny' model = AutoModelForMaskedLM.from_pretrained(model_name).cuda() tokenizer = AutoTokenizer.from_pretrained(model_name) def score(model, tokenizer, sentence): tensor_input = tokenizer.encode(sentence, return_tensors='pt') repeat_input = tensor_input.repeat(tensor_input.size(-1)-2, 1) mask = torch.ones(tensor_input.size(-1) - 1).diag(1)[:-2] masked_input = repeat_input.masked_fill(mask == 1, tokenizer.mask_token_id) labels = repeat_input.masked_fill( masked_input != tokenizer.mask_token_id, -100) with torch.inference_mode(): loss = model(masked_input.cuda(), labels=labels.cuda()).loss return np.exp(loss.item()) print(score(sentence='London is the capital of Great Britain.', model=model, tokenizer=tokenizer)) # 4.541251105675365 Most models work well, but some sentences seem to throw an error: RuntimeError: CUDA out of memory. Tried to allocate 10.34 GiB (GPU 0; 23.69 GiB total capacity; 10.97 GiB already allocated; 6.94 GiB free; 14.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF Which makes sense because some are very long. So what I did was to add something like try, except RuntimeError, pass. This seemed to work until around 210 sentences, and then it just outputs the error: CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. I found this which had a lot of discussions and ideas, some were regarding potential faulty GPUs? But I know that my GPU works as this exact code works for other models. There's also talk about batch size here, which is why I thought it potentially relates to freeing up memory. I tried running torch.cuda.empty_cache() to free the memory like in here after every some epochs but it didn't work (threw the same error). Update: I filtered sentences with length over 550 and this seems to remove the CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. error.
You need to apply gc.collect() before torch.cuda.empty_cache() I also pull the model to cpu and then delete that model and its checkpoint. Try what works for you: import gc model.cpu() del model, checkpoint gc.collect() torch.cuda.empty_cache()
https://stackoverflow.com/questions/70508960/
SHAP values with PyTorch - KernelExplainer vs DeepExplainer
I haven't been able to find much in the way of examples on SHAP values with PyTorch. I've used two techniques to generate SHAP values, however, their results don't appear to agree with each other. SHAP KernelExplainer with PyTorch import torch from torch.autograd import Variable import shap import numpy import pandas torch.set_grad_enabled(False) # Get features train_features_df = ... # pandas dataframe test_features_df = ... # pandas dataframe # Define function to wrap model to transform data to tensor f = lambda x: model_list[0]( Variable( torch.from_numpy(x) ) ).detach().numpy() # Convert my pandas dataframe to numpy data = test_features_df.to_numpy(dtype=np.float32) # The explainer doesn't like tensors, hence the f function explainer = shap.KernelExplainer(f, data) # Get the shap values from my test data shap_values = explainer.shap_values(data) # Enable the plots in jupyter shap.initjs() feature_names = test_features_df.columns # Plots #shap.force_plot(explainer.expected_value, shap_values[0], feature_names) #shap.dependence_plot("b1_price_avg", shap_values[0], data, feature_names) shap.summary_plot(shap_values[0], data, feature_names) SHAP DeepExplainer with PyTorch # It wants gradients enabled, and uses the training set torch.set_grad_enabled(True) e = shap.DeepExplainer(model, Variable( torch.from_numpy( train_features_df.to_numpy(dtype=np.float32) ) ) ) # Get the shap values from my test data (this explainer likes tensors) shap_values = e.shap_values( Variable( torch.from_numpy(data) ) ) # Plots #shap.force_plot(explainer.expected_value, shap_values, feature_names) #shap.dependence_plot("b1_price_avg", shap_values, data, feature_names) shap.summary_plot(shap_values, data, feature_names) Comparing results As you can see from the summary plots, the value given to the features from the same PyTorch model, with the same test data, are noticeably different. For example the feature b1_addresses_avg has value one from last with the KernelExplainer. But with the DeepExplainer is ranked third from top. I'm not sure where to go from here.
Shapley values are very difficult to calculate exactly. Kernel SHAP and Deep SHAP are two different approximation methods to calculate the Shapley values efficiently, and so one shouldn't expect them to necessarily agree. You can read the authors' paper for more details. While Kernel SHAP can be used on any model, including deep models, it is natural to ask whether there is a way to leverage extra knowledge about the compositional nature of deep networks to improve computational performance. [...] This motivates our adapting DeepLIFT to become a compositional approximation of SHAP values, leading to Deep SHAP. In section 5, they compare the performance of Kernel SHAP and Deep SHAP. From their example it seems like Kernel SHAP performs better than Deep SHAP. So I guess if you aren't running into computational issues, you can stick with Kernel SHAP. p.s. Just to make sure, you're inputting the exact same trained model to SHAP right? You shouldn't be training separate models, because they'll learn different weights.
https://stackoverflow.com/questions/70510341/
How do I retrieve the resultant image as a matrix (numpy array) from results given back by yolov5 in pytorch?
I have been learning how to implement pretrained yolo using pytorch, and I want to display the output image using openCV's cv2.imshow() method. The output image can be displayed using .show() function and saved using .save() function, I however want to display it using cv2.imshow(), and for that I would need the image in the form of a numpy array. I'm unaware about how we do that or even if that is at all possible. Here's the code for it. import torch model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True) imgs = ['img.png'] # batch of images results = model(imgs) results.print() results.show() # or .save(), shows/saves the same image with bounding boxes around detected objects # Show 'results' using openCV's cv2.imshow() method? results.xyxy[0] # img1 predictions (tensor) print(results.pandas().xyxy[0]) # img1 predictions (pandas) A longer way of solving this problem would be to create bounding boxes ourselves over the detected objects in the image and display it, but consider me lazy :p .
Had the same issue, so I wrote a small method to quickly draw the image without saving it. def drawRectangles(image, dfResults): for index, row in dfResults.iterrows(): print( (row['xmin'], row['ymin'])) image = cv2.rectangle(image, (row['xmin'], row['ymin']), (row['xmax'], row['ymax']), (255, 0, 0), 2) cv2_imshow(image) results = model(image) dfResults = results.pandas().xyxy[0] self.drawRectangles(image, dfResults[['xmin', 'ymin', 'xmax','ymax']].astype(int))
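Depending on your yolov5 version, the hub results object may also expose the annotated frames directly via results.render() -- this is an assumption about the hub API, so verify it on your version, and note the arrays are typically RGB while cv2 expects BGR:
rendered = results.render()                           # list of numpy arrays with boxes drawn
frame = cv2.cvtColor(rendered[0], cv2.COLOR_RGB2BGR)  # convert before displaying with OpenCV
cv2.imshow('detections', frame)
cv2.waitKey(0)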
https://stackoverflow.com/questions/70523588/
What is self referring to in this PyTorch derived nn.Module class method?
I am following this tutorial for Pytorch and there is a line of code that makes no sense to me in the derived class MnistModule method training_step of the nn.Module class. The line is out = self(images) Please can someone explain to me what is happening here? Is this correct or not and if this is convention to follow. Thanks Here's the snippet class MnistModel(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(input_size, num_classes) def forward(self, xb): xb = xb.reshape(-1, 784) out = self.linear(xb) return out def training_step(self, batch): images, labels = batch out = self(images) # Generate predictions loss = F.cross_entropy(out, labels) # Calculate loss print(type(out)) return loss
It refers to an instance of MnistModel, the same as in any other method defined by the class. The only thing odd is that self is called, but that's explained by the fact that nn.Module defines __call__, so all instances of MnistModel are themselves callable. out = self(images) is equivalent to out = self.__call__(images).
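To see this concretely, a small illustration using the MnistModel from the question (assuming input_size=784 as in the tutorial):
model = MnistModel()
images = torch.randn(4, 1, 28, 28)

out1 = model(images)           # nn.Module.__call__ runs registered hooks, then forward()
out2 = model.__call__(images)  # exactly the same call written explicitly
out3 = model.forward(images)   # same computation, but bypasses the hook machinery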
https://stackoverflow.com/questions/70535521/
TRANSFORMERS: Asking to pad but the tokenizer does not have a padding token
In trying to evaluate a bunch of transformers models sequentially with the same dataset to check which one performs better. The list of models is this one: MODELS = [ ('xlm-mlm-enfr-1024' ,"XLMModel"), ('distilbert-base-cased', "DistilBertModel"), ('bert-base-uncased' ,"BertModel"), ('roberta-base' ,"RobertaModel"), ("cardiffnlp/twitter-roberta-base-sentiment","RobertaSentTW"), ('xlnet-base-cased' ,"XLNetModel"), #('ctrl' ,"CTRLModel"), ('transfo-xl-wt103' ,"TransfoXLModel"), ('bert-base-cased' ,"BertModelUncased"), ('xlm-roberta-base' ,"XLMRobertaModel"), ('openai-gpt' ,"OpenAIGPTModel"), ('gpt2' ,"GPT2Model") All of them work fine until 'ctrl' model, which returns this error: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as 'pad_token' '(tokenizer.pad_token = tokenizer.eos_token e.g.)' or add a new pad token via 'tokenizer.add_special_tokens({'pad_token': '[PAD]'})'. When tokenizing the sentences of my dataset. The tokenizing code is SEQ_LEN = MAX_LEN #(50) for pretrained_weights, model_name in MODELS: print("***************** INICIANDO " ,model_name,", weights ",pretrained_weights, "********* ") print("carganzo el tokenizador ()") tokenizer = AutoTokenizer.from_pretrained(pretrained_weights) print("creando el modelo preentrenado") transformer_model = TFAutoModel.from_pretrained(pretrained_weights) print("aplicando el tokenizador al dataset") ##APLICAMOS EL TOKENIZADOR## def tokenize(sentence): tokens = tokenizer.encode_plus(sentence, max_length=MAX_LEN, truncation=True, padding='max_length', add_special_tokens=True, return_attention_mask=True, return_token_type_ids=False, return_tensors='tf') return tokens['input_ids'], tokens['attention_mask'] # initialize two arrays for input tensors Xids = np.zeros((len(df), SEQ_LEN)) Xmask = np.zeros((len(df), SEQ_LEN)) for i, sentence in enumerate(df['tweet']): Xids[i, :], Xmask[i, :] = tokenize(sentence) if i % 10000 == 0: print(i) # do this so we can see some progress arr = df['label'].values # take label column in df as array labels = np.zeros((arr.size, arr.max()+1)) # initialize empty (all zero) label array labels[np.arange(arr.size), arr] = 1 # add ones in indices where we have a value` I have tried to define the padding tokens as the solution tells me, but then this error appears could not broadcast input array from shape (3,) into shape (50,) in line Xids[i, :], Xmask[i, :] = tokenize(sentence) I have also tried this solution and doesn't work neither. If you have managed to read until here, thank you. Any help is needed.
You can add the [PAD] token using add_special_tokens API. tokenizer = AutoTokenizer.from_pretrained(pretrained_weights) if tokenizer.pad_token is None: tokenizer.add_special_tokens({'pad_token': '[PAD]'})
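One caveat worth noting, based on the usual Hugging Face workflow rather than anything stated in the question: after adding a new special token you typically also have to resize the model's token embeddings so the new [PAD] id gets an embedding row:
tokenizer = AutoTokenizer.from_pretrained(pretrained_weights)
transformer_model = TFAutoModel.from_pretrained(pretrained_weights)
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})
    transformer_model.resize_token_embeddings(len(tokenizer))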
https://stackoverflow.com/questions/70544129/
torch dataloader for large csv file - incremental loading
I am trying to write a custom torch data loader so that large CSV files can be loaded incrementally (by chunks). I have a rough idea of how to do that. However, I keep getting some PyTorch error that I do not know how to solve. import numpy as np import pandas as pd import torch from torch.utils.data import Dataset, DataLoader # Create dummy csv data nb_samples = 110 a = np.arange(nb_samples) df = pd.DataFrame(a, columns=['data']) df.to_csv('data.csv', index=False) # Create Dataset class CSVDataset(Dataset): def __init__(self, path, chunksize, nb_samples): self.path = path self.chunksize = chunksize self.len = nb_samples / self.chunksize def __getitem__(self, index): x = next( pd.read_csv( self.path, skiprows=index * self.chunksize + 1, #+1, since we skip the header chunksize=self.chunksize, names=['data'])) x = torch.from_numpy(x.data.values) return x def __len__(self): return self.len dataset = CSVDataset('data.csv', chunksize=10, nb_samples=nb_samples) loader = DataLoader(dataset, batch_size=10, num_workers=1, shuffle=False) for batch_idx, data in enumerate(loader): print('batch: {}\tdata: {}'.format(batch_idx, data)) I get 'float' object cannot be interpreted as an integer error
The error is caused by this line: self.len = nb_samples / self.chunksize When dividing using / the result is always a float. But you can only return an integer in the __len__() function. Therefore you have to round self.len and/or convert it to an integer. For example by simply doing this: self.len = nb_samples // self.chunksize The double slash (//) rounds down and converts to an integer. Edit: You actually CAN return a float in __len__(), but when calling len(dataset) the error will occur. So I guess len(dataset) is called somewhere inside the DataLoader class.
https://stackoverflow.com/questions/70551454/
No attribute 'RRef' when loading .ckpt files on Windows machine?
I generated ckpt files with Pytorch Lightning's ModelCheckpoint(save_last=True) on my cluster which uses linux. On the cluster itself I can load them without problems, but on my Windows machine I cant and get this error: AttributeError: module 'torch.distributed.rpc' has no attribute 'RRef' I really need help, as I have a deadline in 3 hours. There has to be a way right so that my code is reproducible?
It seems not to be possible to load the model on a Windows system when it has been trained on a Linux system. The only workaround I have found is to install Ubuntu as a virtual machine on my Windows system. This is quite easy. https://apps.microsoft.com/store/detail/ubuntu-2204-lts/9PN20MSR04DW
https://stackoverflow.com/questions/70583992/
User warning when exporting Pytorch model to ONNX
I have found some code that directly converts the pytorch model to onnx: import torch.onnx torch.onnx.export( model, input, "model.onnx", export_params=True, opset_version=10 ) But it throws UserWarning most of the time :- /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:2359: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). _verify_batch_size([input.size(0) * input.size(1) // num_groups, num_groups] + list(input.size()[2:])) /usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_opset9.py:1934: UserWarning: ONNX export unsqueeze with negative axis -1 might cause the onnx model to be incorrect. Negative axis is not supported in ONNX. Axis is converted to 1 based on input shape at export time. Passing an tensor of different rank in execution will be incorrect. "Passing an tensor of different rank in execution will be incorrect.") /usr/local/lib/python3.7/dist-packages/torch/onnx/symbolic_opset9.py:1934: UserWarning: ONNX export unsqueeze with negative axis -1 might cause the onnx model to be incorrect. Negative axis is not supported in ONNX. Axis is converted to 2 based on input shape at export time. Passing an tensor of different rank in execution will be incorrect. "Passing an tensor of different rank in execution will be incorrect.") Can you explain why I am getting this error and is this method correct for exporting to onnx or can you suggest any better method for exporting complex pytorch model to onnx ?
The reason is given directly in the warning message. Since PyTorch 1.10, floordiv is deprecated. You need to update input.size(1) // num_groups to torch.div(input.size(1), num_groups, rounding_mode='floor') if you wish to avoid the warning. But it is indeed odd that // is treated as torch.floor_divide only when a torch.Tensor is included as an operand. This might have something to do with the ONNX export logic. Hopefully someone more familiar with the underlying logic can give a deeper answer.
https://stackoverflow.com/questions/70588709/
Is there anything like 'TensorList' in PyTorch?
I would like to put some tensor in a list, and I know if I would like to put nn.Module class into a list, I must use ModuleList to wrap that list. So, Is there anything like 'TensorList’ in pytorch, that I must use to wrap the list containing tensors?
What are these tensors? Are these tensors parameters of your nn.Module? If so, you need to use the proper container. For example, use nn.ParameterList. This way, calling your module's .parameters() method will yield these tensors as well. Otherwise you'll get errors like this one.
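A minimal sketch of registering a list of tensors as trainable parameters (the module and shapes here are made up for illustration). If the tensors are not trainable, a plain Python list is fine, or use register_buffer if they should move with the module:
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self, n, dim):
        super().__init__()
        # properly registered: shows up in .parameters() and moves with .to(device)
        self.weights = nn.ParameterList([nn.Parameter(torch.randn(dim)) for _ in range(n)])

    def forward(self, x):
        return sum(w * x for w in self.weights)

module = MyModule(3, 5)
print(len(list(module.parameters())))  # 3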
https://stackoverflow.com/questions/70594372/
PyTorch RuntimeError t == DeviceType::CUDAINTERNAL ASSERT FAILED
A PyTorch Lightning model works perfectly well on CPU using this Trainer configuration: trainer = Trainer( gpus=0, max_epochs=10, gradient_clip_val=2, callbacks=[pl.callbacks.progress.TQDMProgressBar(refresh_rate=5)], ) trainer.fit(model) But running the exact same model on GPU (by changing gpus=-1 or gpus=1 in the above code) triggers the following error: RuntimeError: t == DeviceType::CUDAINTERNAL ASSERT FAILED at "../c10/cuda/impl/CUDAGuardImpl.h":24, please report a bug to PyTorch. The model is as follows: class TorchModel(LightningModule): def __init__(self): super(TorchModel, self).__init__() self.cat_layers = ModuleList([TorchCatEmbedding(cat) for cat in columns_to_embed]) self.num_layers = ModuleList([LambdaLayer(lambda x: x[:, idx:idx+1]) for _, idx in numeric_columns]) self.ffo = TorchFFO(len(self.num_layers) + sum([embed_dim(l) for l in self.cat_layers]), y.shape[1]) self.softmax = torch.nn.Softmax(dim=1) def forward(self, inputs): cats = [c(inputs) for c in self.cat_layers] nums = [n(inputs) for n in self.num_layers] concat = torch.cat(cats + nums, dim=1) out = self.ffo(concat) out = self.softmax(out) return out def training_step(self, train_batch, batch_idx): x, y = train_batch y_hat = self.forward(x) loss = cce(torch.log(torch.maximum(torch.tensor(1e-8), y_hat)), y.argmax(dim=1)) acc = tm.functional.accuracy(y_hat.argmax(dim=1), y.argmax(dim=1)) self.log("loss", loss) self.log("acc", acc, prog_bar=True) self.log("lr", self.scheduler.get_last_lr()[-1], prog_bar=True) return loss with TorchCatEmbedding and TorchFFO being two sub-models. Is there any way to solve this issue? PyTorch version: >>> torch.__version__ '1.10.1+cu113' Cuda information: +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+
This was due to a torch.tensor() declaration that wasn't transferred to GPU in the training step: def training_step(self, train_batch, batch_idx): x, y = train_batch y_hat = self.forward(x) loss = cce(torch.log(torch.maximum(torch.tensor(1e-8), y_hat)), y.argmax(dim=1)) acc = tm.functional.accuracy(y_hat.argmax(dim=1), y.argmax(dim=1)) self.log("loss", loss) self.log("acc", acc, prog_bar=True) self.log("lr", self.scheduler.get_last_lr()[-1], prog_bar=True) return loss Changing this: loss = cce( torch.log(torch.maximum(torch.tensor(1e-8), y_hat)), y.argmax(dim=1) ) to this by adding .type_as(y_hat): loss = cce( torch.log(torch.maximum(torch.tensor(1e-8).type_as(y_hat), y_hat)), y.argmax(dim=1) ) solved the issue.
https://stackoverflow.com/questions/70594827/
conv1d() received an invalid combination of arguments
I tried to repeat https://github.com/munhouiani/Deep-Packet and came across an error This program uses CNN to classify network traffic. I decided to rewrite the program as I could not run the original on my computer. I am new to neural networks, so I cannot give a detailed description of the problem TypeError: conv1d() received an invalid combination of arguments - got (list, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of: * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (!list!, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int) * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups) didn't match because some of the arguments have invalid types: (!list!, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int) code: import torch from pathlib import Path from torch import nn as nn from torch.nn import functional as F from torch.utils.data import DataLoader from argparse import Namespace from pytorch_lightning import Trainer import pytorch_lightning as pl import numpy as np class CNN(pl.LightningModule): def __init__(self, hparams): super().__init__() # config self.save_hyperparameters(hparams) self.data_path = self.hparams.data_path # two convolution, then one max pool self.conv1 = nn.Sequential( nn.Conv1d( in_channels=1, out_channels=self.hparams.c1_output_dim, kernel_size=self.hparams.c1_kernel_size, stride=self.hparams.c1_stride ), nn.ReLU() ) self.conv2 = nn.Sequential( nn.Conv1d( in_channels=self.hparams.c1_output_dim, out_channels=self.hparams.c2_output_dim, kernel_size=self.hparams.c2_kernel_size, stride=self.hparams.c2_stride ), nn.ReLU() ) self.max_pool = nn.MaxPool1d( kernel_size=2 ) # flatten, calculate the output size of max pool # use a dummy input to calculate dummy_x = torch.rand(1, 1, self.hparams.signal_length, requires_grad=False) dummy_x = self.conv1(dummy_x) dummy_x = self.conv2(dummy_x) dummy_x = self.max_pool(dummy_x) max_pool_out = dummy_x.view(1, -1).shape[1] # followed by 5 dense layers self.fc1 = nn.Sequential( nn.Linear( in_features=max_pool_out, out_features=200 ), nn.Dropout(p=0.05), nn.ReLU() ) self.fc2 = nn.Sequential( nn.Linear( in_features=200, out_features=100 ), nn.Dropout(p=0.05), nn.ReLU() ) self.fc3 = nn.Sequential( nn.Linear( in_features=100, out_features=50 ), nn.Dropout(p=0.05), nn.ReLU() ) # finally, output layer self.out = nn.Linear( in_features=50, out_features=self.hparams.output_dim ) def forward(self, x): # make sure the input is in [batch_size, channel, signal_length] # where channel is 1 # signal_length is 1500 by default #batch_size = x.shape[0] batch_size = 16 # 2 conv 1 max x = self.conv1(x) x = self.conv2(x) x = self.max_pool(x) x = x.reshape(batch_size, -1) # 3 fc x = self.fc1(x) x = self.fc2(x) x = self.fc3(x) # output x = self.out(x) return x def train_dataloader(self): reader = self.data_path dataloader = DataLoader(reader, batch_size=16) return dataloader def configure_optimizers(self): return torch.optim.Adam(self.parameters()) def training_step(self, batch, batch_idx): x = batch y = batch y_hat = self(x) loss = F.cross_entropy(y_hat, y) if (batch_idx % 50) == 0: self.logger.log_metrics(loss, step=batch_idx) return loss num_epochs = 6 num_classes = 10 batch_size = 100 learning_rate = 0.001 train_dataset = "D:\Deep-Packet-master\Deep-Packet-master\processed_data" train_loader = 
DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True) hparams = Namespace(**{ 'c1_kernel_size': 4, 'c1_output_dim': 200, 'c1_stride': 3, 'c2_kernel_size': 5, 'c2_output_dim': 200, 'c2_stride': 1, 'output_dim': 17, 'data_path': train_dataset, 'signal_length': 1500, 'epoch': 6 }) model = CNN(hparams).float() gpus = None trainer = Trainer(val_check_interval=4, max_epochs=1) trainer.fit(model) trainer.save_checkpoint(str(train_dataset.absolute())) Please, help
I'm going to guess that your training_step is incorrect: def training_step(self, batch, batch_idx): x = batch[0] y = batch[1] y_hat = self(x) loss = F.cross_entropy(y_hat, y) if (batch_idx % 50) == 0: self.logger.log_metrics(loss, step=batch_idx) return loss In your code, you set both x and y to batch which should be a tuple or a list, which conv1d's forward cannot interpret.
https://stackoverflow.com/questions/70598830/
CPU utilization when using ray and torch
I use ray and torch in my code and set one CPU core for each ray remote actor to compute gradient(use torch package). But I find the CPU utilization of the actor can go up to 300% in some time, This seems to be impossible since The actor is supposed to use only one CPU core. I want to know if the actor is actually using more CPU resources since torch may open one or more threads to compute gradient. My OS is Win10 and CPU is Ryzen 5600H. Thanks.
Ray currently does not automatically pin the actor to specific CPU cores and prevent it from using other CPU cores. So what you're seeing makes sense. It is possible to use a library like psutil to pin the actor to a specific core and prevent it from using other cores. This can be helpful if you have many parallel tasks/actors that are all multi-threaded and competing with each other for resources (e.g., because they use pytorch or numpy).
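If you want to experiment with pinning, a rough sketch could look like this (the actor class and core_id argument are made up for illustration; psutil's cpu_affinity works on Windows and Linux but not macOS):
import psutil
import torch
import ray

@ray.remote(num_cpus=1)
class GradientWorker:
    def __init__(self, core_id):
        psutil.Process().cpu_affinity([core_id])  # pin this actor's process to one core
        torch.set_num_threads(1)                  # keep torch from spawning extra compute threads

    def compute_gradient(self, x):
        x = x.clone().requires_grad_(True)
        (x ** 2).sum().backward()
        return x.grad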
https://stackoverflow.com/questions/70617054/
How to retain node ordering when converting graph from networkx to pytorch geometric?
Question: How to retain the node ordering/labels when converting a graph from networkx to pytorch geometric? Code: (to be run in Google Colab) import pandas as pd import numpy as np import matplotlib.pyplot as plt import networkx as nx import torch from torch.nn import Linear import torch.nn.functional as F torch.__version__ # install pytorch geometric !pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.10.0+cpu.html from torch_geometric.nn import GCNConv from torch_geometric.utils.convert import to_networkx, from_networkx # Make the networkx graph G = nx.Graph() # Add some cars G.add_nodes_from([ ('Ford', {'y': 0, 'Name': 'Ford'}), ('Lexus', {'y': 1, 'Name': 'Lexus'}), ('Peugot', {'y': 2, 'Name': 'Peugot'}), ('Mitsubushi', {'y': 3, 'Name': 'Mitsubishi'}), ('Mazda', {'y': 4, 'Name': 'Mazda'}), ]) # Relabel the nodes remapping = {x[0]: i for i, x in enumerate(G.nodes(data = True))} G = nx.relabel_nodes(G, remapping, copy=False) # Add some edges --> A = [(0, 1, 0, 1, 1), (1, 0, 1, 1, 0), (0, 1, 0, 0, 1), (1, 1, 0, 0, 0), (1, 0, 1, 0, 0)] as the adjacency matrix G.add_edges_from([ (0, 1), (0, 3), (0, 4), (1, 2), (1, 3), (2, 1), (2, 4), (3, 0), (3, 1), (4, 0), (4, 2) ]) # Convert the graph into PyTorch geometric pyg_graph = from_networkx(G) pyg_graph.edge_index When I print the edge indices in the last line of the code, I get different answers each time I run it. Most importantly, I am looking to consistently get the same (correct) answer whereby each node numbering is retained from networkx: tensor([[0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4, 4], [4, 2, 4, 2, 3, 0, 1, 1, 4, 0, 1, 3]]) The form of this edge index tensor is: the first list contains the node ids of the source node the second list contains the node ids of the target node For the node ids to be retained, we would expect node 0 to appear three times in the first (source) list instead of just twice. Is there any way for me to force PyTorch Geometric to copy over the node ids? Thanks [EDIT] One possible work-around I have is using the following bit of code which is able to produce edge index and weight tensors for PyTorch geometric # Create a dictionary of the mappings from company --> node id mapping_dict = {x: i for i, x in enumerate(list(G.nodes()))} # Get the number of nodes num_nodes = len(mapping_dict) # Now create a source, target, and edge list for PyTorch geometric graph edge_source_list = [] edge_target_list = [] edge_weight_list = [] # iterate through all the edges for e in G.edges(): # first element of tuple is appended to source edge list edge_source_list.append(mapping_dict[e[0]]) # last element of tuple is appended to target edge list edge_target_list.append(mapping_dict[e[1]]) # add the edge weight to the edge weight list edge_weight_list.append(1) # now create full edge lists for pytorch geometric - undirected edges need to be defined in both directions full_source_list = edge_source_list + edge_target_list # full source list full_target_list = edge_target_list + edge_source_list # full target list full_weight_list = edge_weight_list + edge_weight_list # full edge weight list print(len(edge_source_list), len(edge_target_list), len(full_source_list)) # now convert these to torch tensors edge_index_tensor = torch.LongTensor( np.concatenate([ [np.array(full_source_list)], [np.array(full_target_list)]] )) edge_weight_tensor = torch.FloatTensor(np.array(full_weight_list))
It seems this issue was resolved in the comments (the solution proposed by @Sparky05 is to use copy=True, which is the default for nx.relabel_nodes), but below is the explanation for why the node order is changed. When copy=False is passed, nx.relabel_nodes will re-add the nodes to the graph in the order they appear in the set of keys of remapping dict. The relevant lines in the code are here: def _relabel_inplace(G, mapping): old_labels = set(mapping.keys()) new_labels = set(mapping.values()) if len(old_labels & new_labels) > 0: # skip codes for labels sets that overlap else: # non-overlapping label sets nodes = old_labels # skip lines for old in nodes: # this is now in the set order By using set the order of nodes is modified, so to preserve the order the non-overlapping label sets should be treated as: else: # non-overlapping label sets nodes = mapping.keys() A related PR is submitted here.
https://stackoverflow.com/questions/70627421/
Concatenate a tensor to another in PyTorch
I want to do the following thing: I have a tensor x x tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) and a tensor temp=torch.tensor([[298.]]) How can I quickly concatenate it to each of the above vectors obtaining something like x tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 298.], [0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 298.]]) what's the quickest way to do that?
Perhaps not the most straightforward approach, but nonetheless - you can use torch.cat on x.T once you vectorize temp: temp_vec = temp * torch.ones(x.shape[0]) torch.cat((x.T,temp_vec)).T Testing it out for a smaller x (to not clutter the answer): x = torch.Tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]) out: tensor([[ 0., 0., 0., 298.], [ 0., 0., 0., 298.]]) Last note: if I'm not mistaken, the transpose .T is an attribute of every tensor, so using it isn't inefficient per se.
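A more direct variant, as a small sketch with the same x and temp as above, is to expand temp to one column per row and concatenate along the feature dimension, avoiding the double transpose:
temp_col = temp.expand(x.shape[0], -1)  # shape (2, 1): one copy of 298. per row
out = torch.cat((x, temp_col), dim=1)   # shape (2, 113) for the original x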
https://stackoverflow.com/questions/70650820/
Can we install Pytorch CUDA 11.3 when the system has CUDA 11.2
I have cuda 11.2 in my PC and want to install PyTorch. PyTorch has only mentions of CUDA10.2 and 11.3 in it's website Can I install torch==1.10.1+cu113 on my PC? If not, how can I install PyTorch for CUDA11.2 I don't want to change my CUDA version as I have other applications using it.
I tried the one for 11.3 and so far it works fine w/ the GPU: sudo pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
https://stackoverflow.com/questions/70662893/
Can someone please clarify what happens in the __getitem__ function? Thanks
I understand that output contains the all of encodings, token type ids, attention_mask, and corresponding labels as tensors. I would like to understand the inner working of getitem function and the need of getting label lengths with len function. class NewsGroupsDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()} item["labels"] = torch.tensor([self.labels[idx]]) return item def __len__(self): return len(self.labels) # convert our tokenized data into a torch Dataset train_dataset = NewsGroupsDataset(train_encodings, train_labels) valid_dataset = NewsGroupsDataset(valid_encodings, valid_labels)
Python defines many special methods for classes. These methods define the behavior of the class in certain situations. You're probably already familiar with the __init__ special method, that gets invoked when a new instance of the class is created. __getitem__ is another special method that is called when you use subscription on an instance of the class (i.e. square brackets []), and __len__ is called when you use pass an instance of your class to the built-in len function. As for Pytorch, we must implement these methods because that's what Pytorch's DataLoader object expects. It uses these methods to sample your dataset and know when it is done sampling the dataset. Though DataLoader uses many abstractions to support different sampling and multi-process operations, it basically needs __len__ in order to know the maximum index it can query from the dataset, and it uses __getitem__ to sample the indices it needs. For example, when you're using random sampling with 0 workers, the following snippets do effectively the same thing from torch.utils.data import DataLoader train_dataset = NewsGroupsDataset(train_encodings, train_labels) data_loader = DataLoader(train_dataset, batch_size=5, shuffle=True) for items in data_loader: # items now contains batches of samples of size 5 from your dataset and # For demonstration purposes only, do NOT sample your datasets like this (use DataLoader)! import random from torch.utils.data import default_collate def random_batches(dataset, batch_size, shuffle): indices = list(range(len(dataset))) # uses Dataset.__len__ if shuffle: random.shuffle(indices) batch = [] for i in indices: batch.append(dataset[i]) # uses Dataset.__getitem__ if len(batch) == batch_size: yield default_collate(batch) batch = [] if batch: yield default_collate(batch) train_dataset = NewsGroupsDataset(train_encodings, train_labels) for items in random_batches(train_dataset, batch_size=5, shuffle=True): # items now contains batches of samples of size 5 from your dataset Note that default_collate is a function that takes a list of samples and converts them into stacks of batch-sized tensors. The implementation can be found here if you're curious of the details. DataLoader also supports a lot of other cool things, like multiple workers (probably most important), custom sampling schemes, custom data collation, pinned memory, dropping last non-full batch, and more. Pytorch does most of the work for you with this class, you just need to write the dataset object with __len__ and __getitem__ implementations.
https://stackoverflow.com/questions/70669947/
Training loss decreases dramatically after first epoch and validation loss unstable
I am using EfficientNet-B0 as a subnet in Siamese network and contrastive loss as a loss function for an image similarity task. My dataset is quite large (27550 images for training) with 2 classes. After the first epoch, the training loss decreases dramatically while the validation loss is unstable. Can overfitting happen this early? Or is there something wrong with my data that is confusing? Here is the graph I get after training my model with 100 epochs
First, plot the training and validation loss after setting up a lower and variable learning rate; this behavior might happen because of a learning rate that is too high. Secondly, we know that a model overfits when the training loss is much smaller than the validation loss. By using dropout, regularization, and a deeper model (VGG, ResNet) you can improve it.
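A rough sketch of the lower, scheduled learning rate idea (the optimizer choice, starting rate, and the train_one_epoch/evaluate helpers are assumptions, not taken from the question):
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=2)

for epoch in range(num_epochs):
    train_one_epoch(model, optimizer)  # your existing training loop
    val_loss = evaluate(model)         # your existing validation loop
    scheduler.step(val_loss)           # reduce the lr when validation loss plateaus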
https://stackoverflow.com/questions/70679286/
How can I sum the sizes of these tensors?
I have different sizes or shapes for each tensor like torch.Size([1, 12, 1000]) torch.Size([1, 12, 1000]) torch.Size([1, 10, 1000]) torch.Size([1, 11, 1000]) torch.Size([1, 11, 1000]) torch.Size([1, 15, 1000]) torch.Size([1, 10, 1000]) .... and need to be like torch.Size((12+12+10+11+11+15+ .... ),1000) my code is def extract(): for y in sentences: tokenized_text = tokenizer.tokenize(y) indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) tokens_tensor = torch.tensor([indexed_tokens]) model.eval() outputs = model(tokens_tensor) hidden_states = outputs[2] my_tensor = torch.cat([hidden_states[i] for i in [-1,-2,-3,-4]], dim=-1) return my_tensor
Concatenate them: tensors = [t1, t2, t3, ...] result = torch.cat(tensors, dim=1) # result.size(): torch.Size([1, 12+12+10+..., 1000]) If you also want to remove the first dimension, as it has size 1: result = result.squeeze() # result.size(): torch.Size([12+12+10+..., 1000])
https://stackoverflow.com/questions/70687916/
Formulae for calculating the shape of feature maps after convolutions
I know that Pytorch's documentation provides this, but I have difficulties in understanding their notation. Is there any more accessible explanation (maybe also with graphical illustrations)?
I think you are looking for Receptive Field Arithmetics. This webpage provides a detailed explanation of the various factors affecting the size of the receptive field, and the shape of the resulting feature maps.
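For reference, the per-dimension output-size formula from the torch.nn.Conv1d/Conv2d documentation is out = floor((in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1). A small helper to compute it (a sketch, not from the original answer):
import math

def conv_out_size(size, kernel_size, stride=1, padding=0, dilation=1):
    # per spatial dimension, as in the torch.nn.Conv1d/Conv2d docs
    return math.floor((size + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

print(conv_out_size(224, kernel_size=7, stride=2, padding=3))  # 112, e.g. ResNet's first conv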
https://stackoverflow.com/questions/70693039/
Workaround to successfully profile python script using scalene profiler on macOS? Just forget it and use machine with Windows or Linux?
Computational Science SE question How amenable is this 2D Frenkel–Kontorova-like energy minimization problem in Python to the use of a modest PC + GPU? (Heavy reliance on indexing) contains a short example script and note 3 links to my first attempt at profiling using scalene The results were uninformative, so I followed a recommendation that I try the --profile-all option. After running for 30 minutes without finishing on a script that took seconds to run I added a CPU percent limit; what I assume means only things that used at least 2% of the CPU time would be profiled in depth. scalene --html --outfile prof.html --profile-all --cpu-percent-threshold 2 myscript.py I received a two line error and was exited from python in a normal way. Error getting real path: 2 Scalene error: received signal SIGABRT This issue was closed in scalene issue #110 and there are links there to https://github.com/plasma-umass/scalene/commit/6636d95d7ea9a16adedaedbd4ff0926374798f2b Q: How do I use Scalene with PyTorch? A: Scalene works with PyTorch version 1.5.1. There's a bug in newer versions of PyTorch (https://github.com/pytorch/pytorch/issues/57185) that interferes with Scalene (discussion here: https://github.com/plasma-umass/scalene/issues/110). there's more information there as well and Pytorch open issue #57185 Segmentation fault with ITIMER_REAL PyTorch throws SIGSEGV when running alongside timer on MacOS x86 Question: Is there a workaround to successfully profile python script using scalene profiler on macOS? Or should I just move to a machine with Windows or Linux and forget about trying for now?
The message, Error getting real path: 2, seems to have to do with scalene finding your script or a path internal to your script possibly. Ensure you're referencing a full, valid path for myscript.py, taking into account which directory you're at in the terminal. You may need to change directory. The SIGABRT message is different from the SIGSEGV issue listed, though, if you wanted to test it, you could uninstall PyTorch (pip uninstall torch) and reinstall a specific version (pip install 'torch==1.5.1').
https://stackoverflow.com/questions/70704705/
Installing PyTorch on MacOS Big Sur
I am trying to figure out how to go about installing PyTorch on my computer which is a macOS Big Sur laptop (version 11.6.2). So far, I have installed Python 3.10.1 via the Python website, and pip 21.3.1 was installed along with it. At the moment, I’m stuck trying to figure out how to install PyTorch using pip? I’m asking this question because I am partaking in a project that requires PyTorch and I need to install it as soon as possible.
pip3 install torch torchvision torchaudio This command worked fine for me, you can find more information on the official website here
https://stackoverflow.com/questions/70706388/
AssertionError: Torch not compiled with CUDA enabled (depite several reinstallations)
Whenever I try to move a variable to cuda in pytorch (e.g. torch.zeros(1).cuda()), I get the error message "AssertionError: Torch not compiled with CUDA enabled". Besides, torch.cuda.is_available() returns False. I have read several answers about how to approach this error, but for some reason several attempts to reinstall cuda and pytorch didn't change anything. Here are some of the settings I used: conda install pytorch torchvision cudatoolkit=10.2 -c pytorch conda install pytorch torchvision cudatoolkit=11 -c pytorch-nightly conda install pytorch torchvision cudatoolkit=9.0 -c pytorch Yet the same error remains. What could be the issue? Some settings: I'm using Ubuntu 20.04, GPU is RTX 2080, nvidia-smi works fine (NVIDIA-SMI 460.91.03, Driver Version: 460.91.03, (max possible) CUDA Version: 11.2)
Try installing with pip pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html You can go through this thread for detailed explanations Pytorch for cuda 11.2
https://stackoverflow.com/questions/70713037/
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
I get the following error message which I tried to deal with it by throwing .to(self.device) everywhere but it doesn't work. ab = torch.lgamma(torch.tensor(a+b, dtype=torch.float, requires_grad=True).to(device=local_device)) Traceback (most recent call last): File "Script.py", line 923, in <module> average_epoch_loss, out , elbo2 =train(epoch) File "Script.py", line 848, in train loss_dict = net.get_ELBO(X) File "Script.py", line 546, in get_ELBO elbo -= compute_kumar2beta_kld(self.kumar_a[:, k].to(self.device), self.kumar_b[:, k].to(self.device), self.prior, (self.K-1-k)* self.prior).mean().to(self.device) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! Here is a snippet of my script which is related to the error: def compute_kumar2beta_kld(a, b, alpha, beta): SMALL = 1e-16 EULER_GAMMA = 0.5772156649015329 ab = torch.mul(a,b)+ SMALL a_inv = torch.pow(a + SMALL, -1) b_inv = torch.pow(b + SMALL, -1) kl = torch.mul(torch.pow(1+ab,-1), beta_fn(a_inv, b)) for idx in range(10): kl += torch.mul(torch.pow(idx+2+ab,-1), beta_fn(torch.mul(idx+2., a_inv), b)) kl = torch.mul(torch.mul(beta-1,b), kl) psi_b = torch.digamma(b+SMALL) kl += torch.mul(torch.div(a-alpha,a+SMALL), -EULER_GAMMA - psi_b - b_inv) kl += torch.log(ab) + torch.log(beta_fn(alpha, beta) + SMALL) kl += torch.div(-(b-1),b +SMALL) return kl class VAE(GMMVAE): def __init__(self, hyperParams, K, nchannel, base_channels, z_dim, w_dim, hidden_dim, device, img_width, batch_size, include_elbo2): global local_device local_device = device super(VAE, self).__init__(K, nchannel, base_channels, z_dim, w_dim, hidden_dim, device, img_width, batch_size) self.prior = hyperParams['prior'] self.K = hyperParams['K'] self.z_dim = hyperParams['latent_d'] self.hidden_dim = hyperParams['hidden_d'] def get_ELBO(self, X): elbo = torch.tensor(0, dtype=torch.float) if self.include_elbo2: for k in range(self.K-1): elbo -= compute_kumar2beta_kld(self.kumar_a[:, k], self.kumar_b[:, k], self.prior, (self.K-1-k)* self.prior).mean().to(self.device) I appreciate it if someone can suggest how to fix the error.
I am not sure if this is the "only" problem, but one of the device-related problems is this: elbo = torch.tensor(0, dtype=torch.float) <- this will create the elbo tensor on CPU, and when you do elbo -= <some result>, the result is on cuda (or self.device). This will clearly cause a problem. To fix this, just do elbo = torch.tensor(0, dtype=torch.float, device=self.device)
https://stackoverflow.com/questions/70726330/
How can I register a local model.mar to a running torchserve service?
I have a running torchserve service. According to the docs, I can register a new model at port 8081 with the ManagementAPI. When running curl -X OPTIONS http://localhost:8081, the output also states for the post request on /models: ... "post": { "description": "Register a new model in TorchServe.", "operationId": "registerModel", "parameters": [ { "in": "query", "name": "url", "description": "Model archive download url, support local file or HTTP(s) protocol. For S3, consider use pre-signed url.", "required": true, "schema": { "type": "string" } }, ... Now, I do have a local model.mar file and want to use it with the file protocol. I tried: curl -X POST "http://localhost:8081/models?url=file:///path/to/model.mar" But got: { "code": 400, "type": "DownloadArchiveException", "message": "Failed to download archive from: file:///path/to/model.mar" } Is there anything that I am missing? How can I properly register a local model.mar to a running torchserve service?
I just figured it out. Everything I stated was completely correct and under normal circumstances, all of this would have worked. The error arises because I am running the torchserve instance in a docker container and the curl command is sent to this container, which then looks in its local files for the model.mar. I thought, for whatever reason, that I could pass a file path from the machine that the docker container is running on. Instead, I have to copy the file into the docker container first (or mount a directory) and then execute the registering command with the path of the model.mar inside the docker container.
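For reference, a rough sketch of the mount-based route (the image name and the model-store path inside the container are assumptions based on the official torchserve docker image, so adjust them to your setup):
# mount a host directory containing model.mar as the container's model store
docker run --rm -p 8080:8080 -p 8081:8081 \
    -v /host/path/to/model-store:/home/model-server/model-store \
    pytorch/torchserve:latest
# then register it with a path that exists inside the container
curl -X POST "http://localhost:8081/models?url=file:///home/model-server/model-store/model.mar"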
https://stackoverflow.com/questions/70730762/
two pytorch DistributedSampler same seeds different shuffling multiple GPU-s
I am trying to load two versions (the original and a principal component pursuit (PCP) cleaned version) of the very same image data set for training a model using pytorch on a remote machine with multiple GPUs. I would like to ensure the same shuffling order for both the original and the PCP cleaned data. To achieve this, I use torch.utils.data.DistributedSampler(datasetPCP, shuffle=True, seed=42) and torch.utils.data.DistributedSampler(dataset, shuffle=True, seed=42) samplers and pass these to the dataloaders to train my model on the 3 GPUs present on the remote machine I use. As far as I understood, the same seed for the two samplers should ensure the exact same shuffling of the loaded data. However, this is not the case. Could anybody point me in the right direction? Thanks a lot!
DistributedSampler is for distributed data training where we want different data to be sent to different processes so it is not what you need. Regular dataloader will do just fine. Example: import torch from torch.utils.data.dataset import Dataset from torch.utils.data import DataLoader, RandomSampler class ToyDataset(Dataset): def __init__(self, type): self.type = type def __getitem__(self, idx): return f'{self.type}, {idx}' def __len__(self): return 10 def get_sampler(dataset, seed=42): generator = torch.Generator() generator.manual_seed(seed) sampler = RandomSampler(dataset, generator=generator) return sampler original_dataset = ToyDataset('original') pcp_dataset = ToyDataset('pcp') original_loader = DataLoader(original_dataset, batch_size=2, sampler=get_sampler(original_dataset)) pcp_loader = DataLoader(pcp_dataset, batch_size=2, sampler=get_sampler(pcp_dataset)) for data in original_loader: print(data) for data in pcp_loader: print(data) Output: ['original, 2', 'original, 6'] ['original, 1', 'original, 8'] ['original, 4', 'original, 5'] ['original, 0', 'original, 9'] ['original, 3', 'original, 7'] ['pcp, 2', 'pcp, 6'] ['pcp, 1', 'pcp, 8'] ['pcp, 4', 'pcp, 5'] ['pcp, 0', 'pcp, 9'] ['pcp, 3', 'pcp, 7']
https://stackoverflow.com/questions/70734095/
problems on google colab pytorch learning-RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu
I'm very new to pytorch and deep learning, and I got an error while running sample code from a deep learning class. When I run the code attached below, I get an error like this: text = torch.from_numpy(data['text']).long().cuda(0) # feature extraction mel_gt = get_mel(audio) # shift mel spectrogram -> the input of the network mel_shift = torch.cat((torch.zeros_like(mel_gt)[:,:,:1], mel_gt[:,:,:-1]) ,axis=-1) # inference mel_est, attention = model(mel_shift, text) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! I can hardly understand it, because I also check whether the tensors are on cuda with this code, and they all return true: print(mel_shift.is_cuda) print(mel_gt.is_cuda) print(text.is_cuda) Can you figure out what the problem is? Any help would be greatly appreciated.
Check if your model is loaded on cuda by running next(model.parameters()).is_cuda. If it returns False, load the model on CUDA using device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model.to(device)
https://stackoverflow.com/questions/70736440/
onnxruntime inference is way slower than pytorch on GPU
I was comparing the inference times for an input using pytorch and onnxruntime and I find that onnxruntime is actually slower on GPU while being significantly faster on CPU I was tryng this on Windows 10. ONNX Runtime installed from source - ONNX Runtime version: 1.11.0 (onnx version 1.10.1) Python version - 3.8.12 CUDA/cuDNN version - cuda version 11.5, cudnn version 8.2 GPU model and memory - Quadro M2000M, 4 GB Relevant code - import torch from torchvision import models import onnxruntime # to inference ONNX models, we use the ONNX Runtime import onnx import os import time batch_size = 1 total_samples = 1000 device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') def convert_to_onnx(resnet): resnet.eval() dummy_input = (torch.randn(batch_size, 3, 224, 224, device=device)).to(device=device) input_names = [ 'input' ] output_names = [ 'output' ] torch.onnx.export(resnet, dummy_input, "resnet18.onnx", verbose=True, opset_version=13, input_names=input_names, output_names=output_names, export_params=True, do_constant_folding=True, dynamic_axes={ 'input': {0: 'batch_size'}, # variable length axes 'output': {0: 'batch_size'}} ) def infer_pytorch(resnet): print('Pytorch Inference') print('==========================') print() x = torch.randn((batch_size, 3, 224, 224)) x = x.to(device=device) latency = [] for i in range(total_samples): t0 = time.time() resnet.eval() with torch.no_grad(): out = resnet(x) latency.append(time.time() - t0) print('Number of runs:', len(latency)) print("Average PyTorch {} Inference time = {} ms".format(device.type, format(sum(latency) * 1000 / len(latency), '.2f'))) def to_numpy(tensor): return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy() def infer_onnxruntime(): print('Onnxruntime Inference') print('==========================') print() onnx_model = onnx.load("resnet18.onnx") onnx.checker.check_model(onnx_model) # Input x = torch.randn((batch_size, 3, 224, 224)) x = x.to(device=device) x = to_numpy(x) so = onnxruntime.SessionOptions() so.execution_mode = onnxruntime.ExecutionMode.ORT_SEQUENTIAL so.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL exproviders = ['CUDAExecutionProvider', 'CPUExecutionProvider'] model_onnx_path = os.path.join(".", "resnet18.onnx") ort_session = onnxruntime.InferenceSession(model_onnx_path, so, providers=exproviders) options = ort_session.get_provider_options() cuda_options = options['CUDAExecutionProvider'] cuda_options['cudnn_conv_use_max_workspace'] = '1' ort_session.set_providers(['CUDAExecutionProvider'], [cuda_options]) #IOBinding input_names = ort_session.get_inputs()[0].name output_names = ort_session.get_outputs()[0].name io_binding = ort_session.io_binding() io_binding.bind_cpu_input(input_names, x) io_binding.bind_output(output_names, device) #warm up run ort_session.run_with_iobinding(io_binding) ort_outs = io_binding.copy_outputs_to_cpu() latency = [] for i in range(total_samples): t0 = time.time() ort_session.run_with_iobinding(io_binding) latency.append(time.time() - t0) ort_outs = io_binding.copy_outputs_to_cpu() print('Number of runs:', len(latency)) print("Average onnxruntime {} Inference time = {} ms".format(device.type, format(sum(latency) * 1000 / len(latency), '.2f'))) if __name__ == '__main__': torch.cuda.empty_cache() resnet = (models.resnet18(pretrained=True)).to(device=device) convert_to_onnx(resnet) infer_onnxruntime() infer_pytorch(resnet) Output If run on CPU, Average onnxruntime cpu Inference time = 18.48 ms Average PyTorch cpu 
Inference time = 51.74 ms but, if run on GPU, I see Average onnxruntime cuda Inference time = 47.89 ms Average PyTorch cuda Inference time = 8.94 ms If I change graph optimizations to onnxruntime.GraphOptimizationLevel.ORT_DISABLE_ALL, I see some improvements in inference time on GPU, but it's still slower than Pytorch. I use io binding for the input tensor numpy array and the nodes of the model are on GPU. Further, during the processing for onnxruntime, I print device usage stats and I see this - Using device: cuda:0 GPU Device name: Quadro M2000M Memory Usage: Allocated: 0.1 GB Cached: 0.1 GB So, the GPU device is being used. Further, I have used the resnet18.onnx model from the ModelZoo to see if it is a converted model issue, but I get the same results. What am I doing wrong or missing here?
When calculating inference time, exclude all code that should run only once, like resnet.eval(), from the loop. Please also include the imports in your example: import torch from torchvision import models import onnxruntime # to inference ONNX models, we use the ONNX Runtime import onnx import os import time After running your example on GPU only, I found that the times differ only by about 2x, so the speed difference may be caused by framework characteristics. For more details, explore onnx conversion optimization. Onnxruntime Inference ========================== Number of runs: 1000 Average onnxruntime cuda Inference time = 4.76 ms Pytorch Inference ========================== Number of runs: 1000 Average PyTorch cuda Inference time = 2.27 ms
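For anyone adapting the snippet, a minimal sketch of a timed loop with the one-time work pulled out (the explicit torch.cuda.synchronize() calls are an extra precaution so asynchronous CUDA kernels don't distort the timings; names are placeholders):
import time
import torch

def time_model(model, x, runs=1000):
    model.eval()                      # set eval mode once, outside the timed loop
    with torch.no_grad():
        model(x)                      # warm-up run
        torch.cuda.synchronize()
        latency = []
        for _ in range(runs):
            t0 = time.time()
            model(x)
            torch.cuda.synchronize()  # wait for the GPU to finish before stopping the clock
            latency.append(time.time() - t0)
    return sum(latency) * 1000 / len(latency)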
https://stackoverflow.com/questions/70740287/
Is there a way to convert a python dictionary with some values on GPU memory to have everything on main memory?
I have a deep learning model (using PyTorch) whose output is given in dictionary format. The dictionary has multiple arrays as values and all these arrays are on GPU memory (torch.tensors with device = 'cuda'). Is there any way to move every array in the dictionary to main memory in one go? My current way of going about this is to write a loop and re-write the GPU arrays into variables and use those, but that seems quite inefficient. My essential goal is to further process the outputs in a multi-threaded manner, but since these arrays are on GPU memory multiple threads cannot access them at once. Would appreciate any sort of help or suggestions for this!
You can use a dictionary comprehension on the output dictionary out: out = {k: v.to(device='cpu', non_blocking=True) for k, v in out.items()} If out has some elements that are not tensors, you can use: out = {k: v.to(device='cpu', non_blocking=True) if hasattr(v, 'to') else v for k, v in out.items()}
https://stackoverflow.com/questions/70743308/
Is it possible to save a file from test_step() function?
I am trying to implement MNIST digits using PyTorch Lightning. The train function is like the below one def train(epochs, train_loader, test_loader, model): early_stopping = EarlyStopping('train_loss', mode='min', patience=5) model_checkpoint = ModelCheckpoint(dirpath=model_path/'mnist_{epoch}-{train_loss:.2f}',monitor='train_loss', mode='min', save_top_k=3) trainer = pl.Trainer(max_epochs=epochs, profiler=False, callbacks = [model_checkpoint],default_root_dir=model_path) trainer.fit(model, train_dataloader=train_loader) trainer.test(test_dataloaders=test_loader, ckpt_path=None) The test_step function is like the below one def test_step(self, test_batch): x, y = test_batch logits = self.forward(x) loss = self.mean_squared_error_loss(logits.squeeze(-1), y.float()) # I want to calculate R2, MAPE, etc and want to save in a pandas df and # need to return to the train function self.log('test_loss', loss) return {'test_loss': loss} I can do calculate R2, MAPE, etc using TorchMetrics. But, I am not sure how (or is it possible) to save them in a pandas df (or maybe in a list) for the whole test dataset. I have gone through this post but not sure how should I try! Any suggestions are appreciated.
You can aggregate test result in test_epoch_end: def test_step(self, test_batch): x, y = test_batch logits = self.forward(x) loss = self.mean_squared_error_loss(logits.squeeze(-1), y.float()) self.log('test_loss', loss) return {'test_loss': loss, "logits":logits, "labels": y} def test_epoch_end(self, outputs): all_preds, all_labels = [], [] for output in outputs: probs = list(output['logits'].cpu().detach().numpy()) # predicted values labels = list(output['labels'].flatten().cpu().detach().numpy()) all_preds.extend(probs) all_labels.extend(labels) # you can calculate R2 here or save results as file r2 = ... Note that this only works on a single GPU. If you are using multiple GPUs, you need some function to gather results from different GPUs. To get model predictions, you need to add a predict_step() in the model class. def predict_step(self, test_batch): x, y = test_batch logits = self.forward(x) return {'logits': logits, 'labels':y} And run: outputs = trainer.predict(model, test_loader, return_predictions=True)
https://stackoverflow.com/questions/70748858/
Is there any reason for using the word "column" in the context of one-dimensional tensor?
Consider the following statements from the chapter named Tensors: Multidimensional arrays from the textbook titled Deep Learning with PyTorch by Eli Stevens et al. Let’s construct our first PyTorch tensor and see what it looks like. It won’t be a particularly meaningful tensor for now, just three ones in a column: # In[4]: import torch a = torch.ones(3) a In general, the notion of the column is used in the case of at least two dimensions. The tensor initialized is of a single dimension. So, I am guessing that it is immaterial if we use either row or column. Am I true? Else, Is there any reason behind using the word "column" in this context?
Often, in linear algebra theory, an n-dimensional vector is considered as an n x 1 matrix, called a column vector. Indeed, the behavior of a tensor t with shape (n,) is very similar to that of a tensor u of shape (n, 1). In mathematical terms, you can think of a vector t in R^n and a vector u in R^{n x 1}. In conclusion, the author, perhaps, is suggesting to treat the tensor as a mathematical column vector.
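As a small illustrative sketch of the shapes involved (not from the book):
import torch

t = torch.ones(3)      # shape (3,): the one-dimensional "column of three ones"
u = t.reshape(3, 1)    # shape (3, 1): an explicit n x 1 column vector

print(t.shape)  # torch.Size([3])
print(u.shape)  # torch.Size([3, 1])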
https://stackoverflow.com/questions/70751163/
Vector functions in pytorch to apply autograd to them
If I have a tensor x. Can I define a vector function, say f(x) = (3x, x+2), and obtain its derivative df/dx? In a nutshell: I want a way to define such a vector function, from which I can get its gradient.
You can do so using torch.autograd.functional.jacobian, providing the function and input: >>> jacobian(lambda x: (3*x, x+2), inputs=torch.tensor([3.])) In this case the result is df/dx = (3, 1) for all x.
https://stackoverflow.com/questions/70781477/
How to build a vector of marginal probabilities, given a tensor in PyTorch
How to build a vector of marginal probabilities, given a tensor in PyTorch I have a tensor 'A' of shape [ Dim1: <128>, Dim2: <64>], each element in Dim1 is drawn from a unknown distribution and I need to check if the Dim2 vector has appeared before in the other 128 samples. If it has, the marginal probability of this element is increased by 1 and recorded in another tensor 'B' which is of shape [DimB: <128>]. When the iteration is complete I divide all elements in B by 128 (the number of possibilities) to achieve the weighted delta, therefore the aim is to approximate the true distribution as Dim1 increases in size. How can this be achieved directly in PyTorch? I attempted it with ordered dictionaries but it's too slow. I'm assuming a method exists to do it straight in PyTorch my crude way to do it with ordered dictionaries if we have a tensor T1 of shape [Dim1: <6>, Dim2: <3>]: from collections import OrderedDict od = OrderedDict() T1 = torch.Tensor([[2.4, 5.5,1], [3.44,5.43,1], [2.4, 5.5,1], [3.44,8.43,1], [3.44,5.43,9], [3.44,5.43,1], ]) print ('T1 shape',T1.shape) # -> T1 shape torch.Size([6, 3]) for i in range(T1.shape[0]): key = ''.join([ str(int(j)) for j in T1[i].tolist()]) # creates a unique identifier (is there a better way to do this?) if key in od: od[key] +=1 key_place_holder = key + str(od[key]) # unique identifier if we found duplicate to keep a 0 in the final tensor od[key_place_holder] = 0 else: od[key] = 1 print ('len od',len(od)) # -> len od 6 list_values = [j/len(od) for i,j in od.items()] T1_marginal_probabilities = torch.Tensor(list_values) print ('Marginal Probs',T1_marginal_probabilities) # -> Marginal Probs tensor([0.3333, 0.3333, 0.0000, 0.1667, 0.1667, 0.0000]) The final output is as expected, as the probability of [2.4, 5.5,1] and [3.44,5.43,1] are both 2/6, since we have [2.4, 5.5,1] repeated 2 times at position 0 and 2. while [3.44,5.43,1] is repeated in position 1 and 5.
You can use torch.unique and torch.nonzero: T1 = ... values, inverse, counts = T1.unique(dim=0, return_inverse=True, return_counts=True) ps = torch.zeros(inverse.numel()) for i, (v, c) in enumerate(zip(values, counts)): first_occurence = torch.nonzero(inverse == i)[0].item() ps[first_occurence] = c ps /= ps.sum()
https://stackoverflow.com/questions/70787110/
Output of vgg16 layer doesn't make sense
I have a vgg16 network without the last max pooling, fully connected and softmax layers. The network summary says that the last layer's output is going to have a size of (batchsize, 512, 14, 14). Putting an image into the network gives me an output of (batchsize, 512, 15, 15). How do I fix this? import torch import torch.nn as nn from torchsummary import summary vgg16 = torch.hub.load('pytorch/vision:v0.10.0', 'vgg16', pretrained=True) vgg16withoutLastFewLayers = nn.Sequential(*list(vgg16.children())[:-2][0][0:30]).cuda() image = torch.zeros((1,3,244,244)).cuda() output = vgg16withoutLastFewLayers(image) summary(vgg16withoutLastFewLayers, (3,224,224)) print(output.shape) ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 [-1, 64, 224, 224] 1,792 ReLU-2 [-1, 64, 224, 224] 0 Conv2d-3 [-1, 64, 224, 224] 36,928 ReLU-4 [-1, 64, 224, 224] 0 MaxPool2d-5 [-1, 64, 112, 112] 0 Conv2d-6 [-1, 128, 112, 112] 73,856 ReLU-7 [-1, 128, 112, 112] 0 Conv2d-8 [-1, 128, 112, 112] 147,584 ReLU-9 [-1, 128, 112, 112] 0 MaxPool2d-10 [-1, 128, 56, 56] 0 Conv2d-11 [-1, 256, 56, 56] 295,168 ReLU-12 [-1, 256, 56, 56] 0 Conv2d-13 [-1, 256, 56, 56] 590,080 ReLU-14 [-1, 256, 56, 56] 0 Conv2d-15 [-1, 256, 56, 56] 590,080 ReLU-16 [-1, 256, 56, 56] 0 MaxPool2d-17 [-1, 256, 28, 28] 0 Conv2d-18 [-1, 512, 28, 28] 1,180,160 ReLU-19 [-1, 512, 28, 28] 0 Conv2d-20 [-1, 512, 28, 28] 2,359,808 ReLU-21 [-1, 512, 28, 28] 0 Conv2d-22 [-1, 512, 28, 28] 2,359,808 ReLU-23 [-1, 512, 28, 28] 0 MaxPool2d-24 [-1, 512, 14, 14] 0 Conv2d-25 [-1, 512, 14, 14] 2,359,808 ReLU-26 [-1, 512, 14, 14] 0 Conv2d-27 [-1, 512, 14, 14] 2,359,808 ReLU-28 [-1, 512, 14, 14] 0 Conv2d-29 [-1, 512, 14, 14] 2,359,808 ReLU-30 [-1, 512, 14, 14] 0 ================================================================ torch.Size([1, 512, 15, 15])
The output shape should be [512, 14, 14], assuming that the input image is [3, 224, 224]. Your input image size is [3, 244, 244]. For example, image = torch.zeros((1,3,224,224)) # torch.Size([1, 512, 14, 14]) output = vgg16withoutLastFewLayers(image) Therefore, by increasing the image size, the spatial size [W, H] of your output tensor also increases.
https://stackoverflow.com/questions/70838701/
Cast C++ PyTorch Tensor to Python PyTorch Tensor
For a project that I am working on, I need to call from C++ a Python function, which has as input a PyTorch Tensor. While searching for a way to achieve this, I found that using a function named THPVariable_Wrap (Information I have found link 1 and link 2) could transform a C++ Pytorch Tensor to a PyObject, which can be used as input for the call to the Python function. However, I have tried importing this function by including the header file directly in my code, but this will always return the error LNK2019, when calling the function, with the following description: Severity Code Description Project File Line Suppression State Error LNK2019 unresolved external symbol "__declspec(dllimport) struct _object * __cdecl THPVariable_Wrap(class at::TensorBase)" (_imp?THPVariable_Wrap@@YAPEAU_object@@VTensorBase@at@@@Z) referenced in function main pythonCppTorchExp C:\Users\MyName\source\repos\pythonCppTorchExp\pythonCppTorchExp\example-app.obj 1 I believe the problem is in how I import the THPVariable_Wrap function in my C++ file. However, I am still not that skilled with C++ and the information on this is limited. Besides Pytorch, I am also using Boost for calling Python and I am using Microsoft Visual Studio 2019 (v142), with C++ 14. I posted the code I used below. C++ File #include <iostream> #include <iterator> #include <algorithm> #include <boost/python.hpp> #include <Python.h> #include <string.h> #include <fstream> #include <boost/filesystem.hpp> #include <torch/torch.h> #include <torch/csrc/autograd/python_variable.h> /* The header file where */ namespace python = boost::python; namespace fs = boost::filesystem; using namespace std; int main() { string module_path = "Path/to/python/folder"; Py_Initialize(); torch::Tensor cppTensor = torch::ones({ 100 }); PyRun_SimpleString(("import sys\nsys.path.append(\"" + module_path + "\")").c_str()); python::object module = python::import("tensor_test_file"); python::object python_function = module.attr("tensor_equal"); PyObject* castedTensor = THPVariable_Wrap(cppTensor) /* This function call creates the error.*/; python::handle<> boostHandle(castedTensor); python::object inputTensor(boostHandle); python::object result = python_function(inputTensor); bool succes = python::extract<bool>(result); if (succes) { cout << "The tensors match" << endl; } else { cout << "The tensors do not match" << endl; } } Python File import torch def tensor_equal(cppTensor): pyTensor = torch.ones(100) areEqual = cppTensor.equal(pyTensor) return areEqual
This is a linker problem. You probably have to link libtorch_python.so. It can be located in a place like /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so, or wherever you have your libtorch installed.
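If you happen to build with CMake rather than Visual Studio project files, a hypothetical sketch of the extra link step could look like this (TORCH_INSTALL_PREFIX is set by TorchConfig.cmake; my_app is a placeholder target):
find_package(Torch REQUIRED)
# libtorch_python is not part of TORCH_LIBRARIES, so locate and link it explicitly
find_library(TORCH_PYTHON_LIBRARY torch_python PATHS "${TORCH_INSTALL_PREFIX}/lib")
target_link_libraries(my_app "${TORCH_LIBRARIES}" "${TORCH_PYTHON_LIBRARY}")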
https://stackoverflow.com/questions/70848137/
How do I fix a Pytorch install error on a windows virtual environment with an error that says Pytorch could not be found from a pip command?
Thank you for taking the time to look at this thread. I am running windows 11 and created a virtual environment that was setup with Python 3.10.2. I installed jupyter notebook, tensorflow, CUDA 11.6 toolkit, and cuDNN 8.3.2. I went to the PyTorch website and clicked the long term stable version of PyTorch for windows using Pip on the CUDA 11.1 option. This produced the following Pip command: pip3 install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html I ran this command inside of my terminal as well as the jupyter notebook whose kernel was connected to my virtual environment and received the following error: ERROR: Could not find a version that satisfies the requirement torch==1.8.2+cu111 (from versions: none) ERROR: No matching distribution found for torch==1.8.2+cu111 Does anybody have any advice that could help me solve this issue? I am running a machine with a GTX 2060 and an AMD Ryzen processor. To my knowledge, I should be able to run CUDA with my GPU. I attempted to troubleshoot the issue by using the stable version of PyTorch as well but still received the same error. Thank you.
OK, well now I feel silly. I went back to the PyTorch website and saw that PyTorch only works up to Python 3.9 as of today, in case anyone else runs into a similar issue.
https://stackoverflow.com/questions/70855354/
import torch.fx ModuleNotFoundError: No module named 'torch.fx'
After enabling torch and CUDA on my system according to my GPU's compatibility, this error comes up whenever I try to run any program that needs to run on the GPU. I could not find any solution for this. I read that creating another environment would solve the error, but that did not work for me. Please find the details of my system below; for reference, I am using Python 3.7.2. The attached images include the torch and CUDA versions and the device name, details of the NVIDIA CUDA compiler driver, and the nvidia-smi output. Can anyone help to solve this problem? Thank you
torch.fx was added in PyTorch 1.8.0 (check the release post). You're probably using an older version. Upgrade pytorch from the website.
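For example, a command along these lines should pull a newer build (this is just a sketch; the exact command and CUDA-specific index URL are best taken from the selector on the install page):
pip install --upgrade torch torchvision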
https://stackoverflow.com/questions/70857998/
PyTorch Inference High CPU Usage on Kubernetes
Problem We are trying to create an inference API that loads a PyTorch ResNet-101 model on AWS EKS. Apparently, it always gets killed (OOM) due to high CPU and memory usage. Our logs show we need a CPU resource limit of around 900m. Note that we only tested it using one 1.8Mb image. Our DevOps team didn't really like it. What we have tried Currently we are using the standard PyTorch model loading module. We also clean the model state dict to reduce memory usage. Is there any method to reduce the CPU usage when loading a PyTorch model?
Have you tried limiting the CPU available to the pods? - name: pytorch-ml-model image: pytorch-cpu-hog-model-haha resources: limits: memory: "128Mi" cpu: "1000m" # Replace this with a CPU amount your devops team will be happy about If your error is OOM, you might want to consider allocating more memory per pod. As outsiders we have no idea how much memory you would require to execute your models; I would suggest using debugging tools like the PyTorch profiler to understand how much memory you need for your inferencing use-case. You might also want to consider using memory-optimized worker nodes and applying deployment-node affinity through labels to ensure that inferencing pods are allocated to memory-optimized nodes in your EKS clusters.
https://stackoverflow.com/questions/70858397/
Model name 'bert-base-uncased' was not found in tokenizers
My code that loads a pre-trained BERT model has been working alright until today I moved it to another, new server. I set up the environment properly, then when loading the 'bert-base-uncased' model, I got this error Traceback (most recent call last): File "/jmain02/home/J2AD003/txk64/zzz70-txk64/.conda/envs/tensorflow-gpu/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/jmain02/home/J2AD003/txk64/zzz70-txk64/.conda/envs/tensorflow-gpu/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/jmain02/home/J2AD003/txk64/zzz70-txk64/wop_bert/code/python/src/exp/run_exp_bert_apply.py", line 74, in <module> input_text_fields) File "/jmain02/home/J2AD003/txk64/zzz70-txk64/wop_bert/code/python/src/classifier/classifier_bert_.py", line 556, in fit_bert_trainonly tokenizer = BertTokenizer.from_pretrained(bert_model, do_lower_case=True) File "/jmain02/home/J2AD003/txk64/zzz70-txk64/.conda/envs/tensorflow-gpu/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1140, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/jmain02/home/J2AD003/txk64/zzz70-txk64/.conda/envs/tensorflow-gpu/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1246, in _from_pretrained list(cls.vocab_files_names.values()), OSError: Model name 'bert-base-uncased' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). We assumed 'bert-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. And the line that triggered this error (classifier_bert_.py line 556) is very simple: tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) Please can I have some help on how to solve this issue? Thanks
You have to download it and put it in the same directory. You can download it from here: https://huggingface.co/bert-base-uncased
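A small sketch of what loading from the local copy could look like, assuming the downloaded files (vocab.txt etc.) sit in a folder called bert-base-uncased next to your script:
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("./bert-base-uncased", do_lower_case=True)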
https://stackoverflow.com/questions/70867550/
How to get accuracy during/after training for Huggingface RobertaForMaskedLM model?
I am using HuggingFace Trainer to train a Roberta Masked LM. I am passing the following function for compute_metrics as other discussion threads suggest: metric = load_metric("accuracy") def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) I am loading my dataset from text files using load_dataset, and after applying a tokenizer, it returns attention_id, input_ids but no labels. I am training on this dataset using Trainer. Still, I cannot find the accuracy of my model during training after passing the compute_metrics function above or cannot evaluate my model after training on test data. How to get accuracy for this model during training and evaluate it as we could do in Keras models using model.evaluate()? How is it measured?
If you are using one of the example scripts, have you checked the --evaluation_strategy parameter? The default value is None, but it can be set to steps or epoch.
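If you are building the Trainer yourself instead of using an example script, a rough sketch of wiring this up could look like the following (model and dataset names are placeholders, and for a masked LM the labels come from the data collator rather than the raw dataset):
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",   # run evaluation (and compute_metrics) at the end of each epoch
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,     # evaluation needs a labelled eval set
    compute_metrics=compute_metrics,
)
trainer.train()
metrics = trainer.evaluate()       # roughly the equivalent of Keras model.evaluate()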
https://stackoverflow.com/questions/70887159/
Pytorch/Tensorflow: Compute gradient of Mixture of Gaussians log density
I have a mixture of three Gaussians and would like to compute the gradient of the log-density using Pytorch or Tensorflow. How can I do that? from numpy import eye, log from scipy.stats import multivariate_normal as MVN μs = [[0, 0], [2, 0], [0, 2]] # Means Σs = [eye(2), eye(2), eye(2)] # Covariance Matrices cs = [1 / 3] * 3 # Mixture coefficients MVNs = [MVN(μ, Σ) for (μ, Σ) in zip(μs, Σs)] # List of Gaussians log_density = lambda x: log((sum([c * MVN.pdf(x) for (c, MVN) in zip(cs, MVNs)]))) Essentially I would like to compute the gradient of log_density. I tried using autograd.grad but it fails because of the array assignment. Attempted Pytorch Solution from torch import tensor, eye, sqrt, zeros, log, exp from torch.distributions import MultivariateNormal as MVN μs = [tensor([0, 0]), tensor([2, 0]), tensor([0, 2])] # Means Σs = [eye(2), eye(2), eye(2)] # Covariance Matrices cs = [1 / 3] * 3 # Mixture coefficients MVNs = [MVN(μ, Σ) for (μ, Σ) in zip(μs, Σs)] # List of Gaussians log_density = lambda x: log((sum([c * exp(MVN.log_prob(x)) for (c, MVN) in zip(cs, MVNs)]))) Attempted Autograd Solution (won't work) from numpy import eye, log, zeros from scipy.stats import multivariate_normal as MVN from autograd import grad μs = [[0, 0], [2, 0], [0, 2]] # Means Σs = [eye(2), eye(2), eye(2)] # Covariance Matrices cs = [1 / 3] * 3 # Mixture coefficients MVNs = [MVN(μ, Σ) for (μ, Σ) in zip(μs, Σs)] # List of Gaussians log_density = lambda x: log((sum([c * MVN.pdf(x) for (c, MVN) in zip(cs, MVNs)]))) gradient = grad(log_density) # If you try using this gradient function you get an error gradient(zeros(2)) The error I get is ValueError: setting an array element with a sequence. Naive Autograd Solution There is, of course, a bad Autograd solution that won't scale well. For instance from autograd.numpy import log, eye, zeros, array from autograd.scipy.stats import multivariate_normal as MVN from autograd import grad μs = [[0, 0], [2, 0], [0, 2]] # Means Σs = [eye(2), eye(2), eye(2)] # Covariance Matrices cs = [1 / 3] * 3 # Mixture coefficients def log_density(x): return log((1/3) * MVN.pdf(x, zeros(2), eye(2)) + (1/3) * MVN.pdf(x, array([2, 0]), eye(2)) + (1/3) * MVN.pdf(x, array([0, 2]), eye(2))) grad(log_density)(zeros(2)) # Works!
You can do from torch import tensor, eye, sqrt, zeros, log, exp from torch.distributions import MultivariateNormal as MVN μs = [tensor([0, 0]), tensor([2, 0]), tensor([0, 2])] # Means Σs = [eye(2), eye(2), eye(2)] # Covariance Matrices cs = [1 / 3] * 3 # Mixture coefficients MVNs = [MVN(μ, Σ) for (μ, Σ) in zip(μs, Σs)] # List of Gaussians x = tensor((0.0,0.0), requires_grad=True) log_density = log((sum([c * exp(MVN.log_prob(x)) for (c, MVN) in zip(cs, MVNs)]))) log_density.backward() print(x.grad) which will print the gradient at (0.0,0.0). However as pytorch is not generating a static computation graph, I could not find an easy way to calculate the gradient at another point without rebuilding the computation graph. You could try to use tensorflow, which gives you more control on the computation graphs and allows you to construct a graph for the gradient computation. Edit With tensorflow you could do something like import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorflow as tf import tensorflow_probability as tfp @tf.function def mygrad(x): print("building graph") us = tf.stack([tf.constant([0.0, 0.0]), tf.constant([2., 0.]), tf.constant([0., 2.])]) covs = tf.stack([tf.eye(2), tf.eye(2), tf.eye(2)]) cs = tf.constant([1 / 3] * 3) with tf.GradientTape() as gt: gt.watch(x) log_density = tf.math.log(tf.math.reduce_sum(tfp.distributions.MultivariateNormalTriL(us,covs).prob(x) * cs) ) return gt.gradient(log_density,x) print(mygrad(tf.constant([0.0,0.0])).numpy()) #gradient at 0.0,0.0 print(mygrad(tf.constant([1.0,0.0])).numpy()) #gradient at 1.0,0.0 Essentially you do automatic differentiation with the tf.GradientTape and capture the computation graph in a tf.function. There is more background information on the very extensive Tensorflow API documentation.
https://stackoverflow.com/questions/70893986/
Error downloading celebA dataset using torchvision
Using the torchvision module datasets, I can't download the celebA image dataset. I am pretty sure that I am doing everything right. dataset = datasets.CelebA( root='../datasets/celebA/train_images', split='train', target_type='identity', transform=transforms.Compose([transforms.ToTensor()]), download=True) The error: BadZipFile: File is not a zip file
It is a known issue that has already been reported in #1920, and it seems it was fixed in #4109, but the commit is not yet included in a stable release. In the meantime you can do the following: Look at the source code of datasets.CelebA, search for file_list and take note of the files in it. Download those files from here, and copy them into a new directory called celeba. Unzip img_align_celeba.zip in the same directory. Finally call datasets.CelebA() with download=False.
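Putting the steps together, the final call could look roughly like this (root must be the parent directory of the manually created celeba folder):
from torchvision import datasets, transforms

dataset = datasets.CelebA(
    root='../datasets/celebA/train_images',   # '<root>/celeba' holds the downloaded files
    split='train',
    target_type='identity',
    transform=transforms.Compose([transforms.ToTensor()]),
    download=False,                            # skip the broken download, use local files
)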
https://stackoverflow.com/questions/70896841/
TypeError: new(): data must be a sequence (got numpy.float64)
I do not know what to do with this problem. I am running a model training. The following part is what I got mean_train = torch.Tensor(np.mean(train_vertices, axis=0)) TypeError: new(): data must be a sequence (got numpy.float64) My code is: mean_train = torch.Tensor(np.mean(train_vertices, axis=0)) std_train = torch.Tensor(np.std(train_vertices, axis=0))
You have a numpy array and you want to create a pytorch tensor from it. You can use torch.from_numpy to achieve this. Note that torch.from_numpy expects an np.ndarray, not an np.float64, so you'll need to figure out your shapes. However, if you don't need numpy, you can just use pytorch from the start; pytorch will likely have the functions you need from numpy anyway.
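A small sketch of the pure-pytorch route (assuming train_vertices is a list or array of vertex coordinates; note torch.std uses the unbiased estimator by default, unlike np.std):
import torch

train_vertices_t = torch.as_tensor(train_vertices, dtype=torch.float32)
mean_train = train_vertices_t.mean(dim=0)  # replaces np.mean(train_vertices, axis=0)
std_train = train_vertices_t.std(dim=0)    # replaces np.std(train_vertices, axis=0)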
https://stackoverflow.com/questions/70900282/
detectron2 - CUDA is not available
I am trying out detectron2 and want to train the sample model. When running the following code I get (<class 'RuntimeError'>, RuntimeError('No CUDA GPUs are available'), <traceback object at 0x7f42b094ebc0>). Find below the code: import detectron2 from detectron2.utils.logger import setup_logger setup_logger() # import some common libraries import matplotlib.pyplot as plt import numpy as np import cv2 # import some common detectron2 utilities from detectron2.engine import DefaultPredictor from detectron2.config import get_cfg from detectron2.utils.visualizer import Visualizer from detectron2.data import MetadataCatalog, DatasetCatalog from detectron2.data.datasets import register_coco_instances import random from detectron2.engine import DefaultTrainer from detectron2.config import get_cfg import os # To verify the data loading is correct, let's visualize the annotations of randomly selected samples in the training set: register_coco_instances("fruits_nuts", {}, "../data/trainval.json", "../data/images") fruits_nuts_metadata = MetadataCatalog.get("fruits_nuts") dataset_dicts = DatasetCatalog.get("fruits_nuts") ''' for d in random.sample(dataset_dicts, 3): img = cv2.imread(d["file_name"]) visualizer = Visualizer(img[:, :, ::-1], metadata=fruits_nuts_metadata, scale=0.5) vis = visualizer.draw_dataset_dict(d) cv2.imshow('new', vis.get_image()[:, :, ::-1]) cv2.waitKey(0) ''' # train model cfg = get_cfg() cfg.merge_from_file("../detectron2_repo/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") cfg.DATASETS.TRAIN = ("fruits_nuts",) cfg.DATASETS.TEST = () # no metrics implemented for this dataset cfg.DATALOADER.NUM_WORKERS = 2 cfg.MODEL.WEIGHTS = "detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl" # initialize from model zoo cfg.SOLVER.IMS_PER_BATCH = 2 cfg.SOLVER.BASE_LR = 0.02 cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough, but you can certainly train longer cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3 # 3 classes (data, fig, hazelnut) os.makedirs(cfg.OUTPUT_DIR, exist_ok=True) trainer = DefaultTrainer(cfg) trainer.resume_or_load(resume=False) trainer.train() I ran the script collect_env.py from torch: /home/project/.venv/bin/python /home/project/src/collect_env.py Collecting environment information... PyTorch version: 1.10.2+cu102 Is debug build: False CUDA used to build PyTorch: 10.2 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0] (64-bit runtime) Python platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.29 Is CUDA available: False CUDA runtime version: 10.1.243 GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.22.1 [pip3] torch==1.10.2 [pip3] torchvision==0.11.3 [conda] Could not collect Process finished with exit code 0 I am having on the system a RTX3080 graphic card. However, it seems to me that its not found. Any suggestions why? Is there a way to run the training without CUDA? I appreciate your replies!
I'm not sure if this works for you, but let's look at it from a Windows user's perspective. I'm using Detectron2 on Windows 10 with an RTX3060 Laptop GPU with CUDA enabled. The first thing you should check is CUDA. You can check it by using the command: nvcc -V It should show this message: C:\Users\User>nvcc -V nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Mon_May__3_19:41:42_Pacific_Daylight_Time_2021 Cuda compilation tools, release 11.3, V11.3.109 Build cuda_11.3.r11.3/compiler.29920130_0 And to check if your Pytorch is installed with CUDA enabled, use this command (reference from their website): import torch torch.cuda.is_available() Based on the system info shared in this question, you haven't installed CUDA on your system, and your system doesn't detect any GPU (driver). As far as I know, they recommend installing Pytorch with CUDA to run Detectron2 on an (Nvidia) GPU (you can check the Pytorch website and the Detectron2 GitHub repo for more details). Or, you can use this option: add this line of code to your python program (as referenced in issue #300): cfg.MODEL.DEVICE = "cpu" I hope it helps. Cheers.
https://stackoverflow.com/questions/70910160/
LSTM always predicts the same class
I’m trying to solve an nlp classification problem with a LSTM. The code for the model is defined here: class LSTM(nn.Module): def __init__(self, hidden_size, embedding_size=66 ): super().__init__() self.lstm = nn.LSTM(embedding_size, hidden_size, batch_first = True, bidirectional = True) self.fc = nn.Linear(2*hidden_size,2) def forward(self, input_seq): output, (hidden_state, cell_state) = self.lstm(input_seq) hidden_state = torch.cat((hidden_state[-1,:], hidden_state[-2,:]), -1) logits = self.fc(hidden_state) return nn.LogSoftmax(dim=1)(logits) And the function I’m using to train this model is here: def train_loop(dataloader, model, loss_fn, optimizer): loss_fn = loss_fn size = len(dataloader.dataset) model.train() zeros = 0 for batch, (X, y) in enumerate(dataloader): # Transform string into tensor tensor = torch.zeros(1,len(X[0]),66) for i in range(len(X[0])): tensor[0][i][ctoi[X[0][i]]] = 1 pred = model(tensor) target = torch.zeros(2, dtype=torch.long) target[y] = 1 if batch % 100 == 0: print(pred.squeeze(), target) loss = loss_fn(pred.squeeze(), target) # Backpropagation optimizer.zero_grad() loss.backward() optimizer.step() if pred.squeeze().argmax() == 0: zeros += 1 if batch % 100 == 0: loss, current = loss.item(), batch * len(X) print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]") print(f'In trainning predicted {zeros} zeroes out of {size} samples') The X’s are still strings, that’s why I need to convert them to tensors before running it through the model. The y’s are either a 0 or 1 (since its a binary classification problem), that I need to convert to a tensor of shape (2,) to run through the loss function. For some reason I keep getting the same class predicted for every input. The classes are not even that unbalanced (~45% to 55%), and I’ve tried changing the weights of the classes in the loss function with no improvements, it either converges to predicting always a 0 or always a 1. Most of the time it it converges to predicting always a 0, which makes even less sense because what happens usually is that the class 0 has less samples than class 1.
Since you're training a binary classification model, your output dim should be 1 (corresponding to a single probability P(y|x)). This means that the y you're retrieving from your dataloader should be the y used in your loss function (assuming a cross-entropy loss). The predicted class is therefore y_hat = round(pred) (i.e., is the prediction >= 0.5). As a point of clarity, it would be much easier to follow your logic if the one-hot encoding happened within your dataset (either in __getitem__ or __iter__). It's also worth noting that you don't use embeddings, so the code of your classifier is a bit misleading.
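A minimal sketch of that change, reusing the names from the question (shapes and the exact loss choice are assumptions):
import torch
import torch.nn as nn

fc = nn.Linear(2 * hidden_size, 1)                # one logit per sequence instead of two
logits = fc(hidden_state).squeeze(-1)             # shape: (batch,)
loss = nn.BCEWithLogitsLoss()(logits, y.float())  # y holds the 0/1 labels straight from the dataloader
y_hat = (torch.sigmoid(logits) >= 0.5).long()     # predicted class per sample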
https://stackoverflow.com/questions/70916841/
Using older torch version in conda environment not working
Can anyone please help me? I am trying to run a .py script for which I need an older pytorch version, because a function I am using is deprecated in later torch versions. But I seem not to be able to install it correctly. I installed torch into my virtual environment using conda create -n my_env python=3.6.2 source activate my_env conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.2 -c pytorch Then I have a python file (myfile.py) that I start using a shell file (start.sh). The files are on a SLURM-cluster, so I start start.sh with sbatch start.sh. start.sh source activate my_env srun --unbuffered python myfile.py myfile.py import torch print(torch.__version__) The print command from myfile.py still returns "1.8.0", but using conda list in my environment shows "1.7.0" for pytorch version. Even when I type python -c "import torch; print(torch.__version__)" directly into terminal, it will return "1.8.0" (rather than "1.7.0" from conda list) Am I doing something very obvious wrong possibly? Did I install in a wrong way, or is somehow my environment not properly loaded in the python file? Best regards and thanks a lot in advance
It turned out that installing the environment as described added a link to another python installation to my PYTHONPATH (a link to /.local/python), and that directory was added to PYTHONPATH with higher precedence than the python used in my environment (/anaconda/env/my_env/python/...). Therefore, the local version of python was used instead. I could not delete it from PYTHONPATH either, but changing the directory name to /.local/_python did the trick. It's not pretty, but it works. Thanks everyone for the contributions!
https://stackoverflow.com/questions/70918758/
Pytorch/YOLOv5 - Compare detected Object if it's the same
I am trying to use Pytorch and YOLOv5 to detect objects in multiple images and count them. My problem is that at a frame rate of, for example, 15fps, the same objects are recognized again in consecutive frames, but either slightly further forward in the image (with other coordinates) or with the same coordinates as before. Currently, only how many objects are detected in an image is counted. How can I exclude these objects, or compare whether the objects have already been detected? My code so far: counts = {"cars" : 0 , "trucks" : 0} class_mappings = {2.0: "cars", 7.0: "trucks"} def predict(): img = Image.open("test.jpeg") result = model(img) labels = dict(Counter(result[:, -1].tolist())) for k, v in class_mappings.items(): counts[v] += labels.get(k, 0) The code above extracts the labels of detected objects from the tensor and counts them in a counter variable.
Most of the sorting algorithms/models will work out for you like a charm. I.e., what you need is to track each box step by step after inferencing on each frame, assigning ids/counts to them based on some distance function so you can determine an object's id after it has moved. This is commonly referred to as MOT (Multiple Object Tracking). You will come across two flavours of MOT: a purely statistical approach, and DL with a statistical layer on top. The DL version is of course more useful if you're working in environments with a lot of noise, but it comes with a downside - you have to run a feature extractor in real time. I've worked with statistical approaches (based on a Kalman filter) in a very demanding production setting, and after some tweaking it worked unbelievably well on a very dense MOT task. You can try out: https://github.com/wmuron/motpy It'll be easy to integrate it with YOLOv5.
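A rough sketch of how the pieces could fit together (the Detection/MultiObjectTracker usage follows motpy's README and the .xyxy result access is the YOLOv5 hub API, so double-check both against the current docs):
import torch
from motpy import Detection, MultiObjectTracker

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
tracker = MultiObjectTracker(dt=1 / 15)            # dt = seconds per frame at 15 fps
seen_ids = set()

for frame in frames:                               # frames: your sequence of images
    det = model(frame).xyxy[0]                     # rows of (x1, y1, x2, y2, conf, class)
    detections = [Detection(box=row[:4].tolist(), score=float(row[4])) for row in det]
    tracker.step(detections=detections)
    for track in tracker.active_tracks():
        seen_ids.add(track.id)                     # one id per physical object across frames

print(len(seen_ids), 'unique objects seen so far')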
https://stackoverflow.com/questions/70923142/
Transfer OpenGL image on GPU from C++ to Python for deep learning
I built a simulator in C++ with a pybind11 interface to run deep learning in Python using PyTorch. At each time step, I draw certain things from the simulator's scene using the SFML library (wrapper around openGL). I draw that on a texture, then get the pixels from that texture as follows: glBindTexture(GL_TEXTURE_2D, imageTexture.getTexture().getNativeHandle()); glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, img.data()); I then move the pixels vector img from C++ to Python using the pybind11 interface. Problem is that this GPU-to-CPU operation is very slow. Since the vector in Python is then transferred back to the GPU for fast deep learning (CNN) operations, I was wondering how I could avoid that step. My best guess so far is that I should at each step bind the texture in C++ (as in the code above), then right after that in the Python get the bound texture using CUDA, while keeping it on the GPU. However I couldn't figure out how to do that, I don't know much about GPUs and how CUDA/OpenGL work. Pointers to the right direction would be very appreciated!
You should use a PBO (Pixel Buffer Object) for this operation. Data transfers are very fast using PBOs: https://www.khronos.org/opengl/wiki/Pixel_Buffer_Object GLuint w_pbo[2]; // Create the pbo objects and then // do your drawings. int w_readIndex = 0; int w_writeIndex = 1; glReadBuffer(GL_COLOR_ATTACHMENT0); w_writeIndex = (w_writeIndex + 1) % 2; w_readIndex = (w_readIndex + 1) % 2; glBindBuffer(GL_PIXEL_PACK_BUFFER, w_pbo[w_writeIndex]); // copy from framebuffer to PBO asynchronously. it will be ready in the NEXT frame glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, nullptr); // now read the other PBO which should already be in CPU memory glBindBuffer(GL_PIXEL_PACK_BUFFER, w_pbo[w_readIndex]); unsigned char* downsampleData = (unsigned char*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY); Now you can use unsigned char* downsampleData to build the texture memory.
https://stackoverflow.com/questions/70925117/
AssertionError in torch_geometric.nn.GATConv
I am trying to use graph attention network (GAT) module in torch_geometric but keep running into AssertionError: Static graphs not supported in 'GATConv' with the following code. class GraphConv_sum(nn.Module): def __init__(self, in_ch, out_ch, num_layers, block, adj): super(GraphConv_sum, self).__init__() adj_coo = coo_matrix(adj) # convert the adjacency matrix to COO format for Pytorch Geometric self.edge_index = torch.tensor([adj_coo.row, adj_coo.col], dtype=torch.long) self.g_conv = nn.ModuleList() self.act = nn.LeakyReLU() for n in range(num_layers): if n == 0: self.g_conv.append(block(in_ch, 16)) elif n > 0 and n < num_layers - 1: self.g_conv.append(block(16, 16)) else: self.g_conv.append(block(16, out_ch)) def forward(self, x): for layer in self.g_conv: x = layer(x=x, edge_index=self.edge_index) x = self.act(x) print(x.shape) return x[:, 0, :] When I replace block with GATConv followed by a standard training loop, this error happens (other conv layers such as GCNConv or SAGEConv didn't have any problems). I checked the documentation and made sure the input shape was correct (same for other conv layers). In the source code, there is this assert x.dim() == 2, "Static graphs not supported in 'GATConv'" part in the forward method but apparently the batch dimension will come into play in the forward pass and x.dim() would be 3. The input shape with batch dimension is [1024, 6, 200]. However, if I manually change the assert condition to x.dim() == 3 the same error will still be raised as if the condition is not satisfied. I only have a high-level grasp of GAT so there might be something I am missing. Anyways, I have a few questions from this Are there any implementation errors possible from my side that caused this error? What is this assertion condition for? What is the static graph in this case? I would appreciate any insights and help!! Thanks!
It turns out that, due to the attention weight calculation, GATConv doesn't support multiple feature matrices with a single edge_index. More info: https://github.com/pyg-team/pytorch_geometric/issues/2844
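One possible workaround (just a sketch, and not particularly fast) is to apply the layer per sample when x is batched as (batch, nodes, features) with one shared edge_index:
# inside forward(), instead of x = layer(x=x, edge_index=self.edge_index)
x = torch.stack([layer(x=x[i], edge_index=self.edge_index) for i in range(x.size(0))])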
https://stackoverflow.com/questions/70950706/
Applying transformation to data set in pytorch and add them to the data
I want to load fashion-mnist (or any other data set) using torchvision.datasets.FashionMNIST(data_dir, train=True, download=True), then apply some image transformations such as cropping or adding noise, etc., and finally add the transformed data to the original data set. The only way I found is torchvision.transforms, but it changes the original dataset and does not augment the data set. How can I augment the data set?
As @Ivan already pointed out in the comments, when accessing an image, PyTorch always loads its original dataset version. Then, transform applies online your transformation of choice to the data. In general, setting a transform to augment the data without touching the original dataset is the common practice when training neural models. That said, if you need to mix an augmented dataset with the original one you can, for example, stack two datasets with torch.utils.data.ConcatDataset, as follows: dset1 = torchvision.datasets.FashionMNIST(data_dir, train=True, download=True) dset2 = torchvision.datasets.FashionMNIST(data_dir, train=True, transform=my_transform) dset = torch.utils.data.ConcatDataset([dset1, dset2]) Have a look at this page for more alternatives. Finally, if you need, you can also save your dataset for future use (thus freezing the random transform applied to your data): for idx, (img, target) in enumerate(dset): torch.save(img, f"mydset/{fname}_img_{idx}.pt") torch.save(target, f"mydset/{fname}_tgt_{idx}.pt")
https://stackoverflow.com/questions/70953156/
How to change the threshold of a prediction of multi-label classification using FASTAI library
I have a multi-label dataset that I'm using to train my model using fast-ai library for Python, using as metrics an accuracy function such as: def accuracy_multi1(inp, targ, thresh=0.5, sigmoid=True): "Compute accuracy when 'inp' and 'targ' are the same size" if sigmoid: inp=inp.sigmoid() return ((inp>thresh) == targ.bool()).float().mean() And my learner is like: learn = cnn_learner(dls, resnet50, metrics=partial(accuracy_multi1,thresh=0.1)) learn.fine_tune(2,base_lr=3e-2,freeze_epochs=2) After training my model, I want to predict an image considering the threshold I used a argument, but the method learn.predict('img.jpg') only considers the default thres=0.5. In the following example, my predidction should return True for 'red, 'shirt' and 'shoes', as their probabilities are above 0.1 (but shoes is below 0.5, so it is not considered as True): def printclasses(prediction,classes): print('Prediction:',prediction[0]) for i in range(len(classes)): print(classes[i],':',bool(prediction[1][i]),'|',float(prediction[2][i])) printclasses(learn.predict('rose.jpg'),dls.vocab) Output: Prediction: ['red', 'shirt'] black : False | 0.007274294272065163 blue : False | 0.0019288889598101377 brown : False | 0.005750810727477074 dress : False | 0.0028723080176860094 green : False | 0.005523672327399254 hoodie : False | 0.1325301229953766 pants : False | 0.009496113285422325 pink : False | 0.0037188702262938023 red : True | 0.9839697480201721 shirt : True | 0.5762518644332886 shoes : False | 0.2752271890640259 shorts : False | 0.0020902694668620825 silver : False | 0.0009014935349114239 skirt : False | 0.0030087409541010857 suit : False | 0.0006510693347081542 white : False | 0.001247694599442184 yellow : False | 0.0015280473744496703 Is there a way to impose the threshold when I'm doing a prediction of a image I'm referencing? Something that would look like: learn.predict('img.jpg',thresh=0.1)
I've encountered the same problem. I remain interested in a better solution, but since accuracy_mult only seems to provide user-friendly evaluation of the model during the training process (and is not involved in the prediction), I created a work-around for my data. The basic idea is to take the tensor with the actual predictions (which is the third entry in the 3-tuple returned by the predict() function), apply the threshold & get the corresponding labels from the vocab. def predict_labels(x, model, thresh=0.5): ''' function to predict multi-labels in text (x) arguments: ---------- x: the text to predict model: the trained learner thresh: thresh to indicate which labels should be included, fastai default is 0.5 return: ------- (str) predictions separated by blankspace ''' # getting categories according to threshold preds = model.predict(x)[2] > thresh labels = model.dls.multi_categorize.vocab[preds] return ' '.join(labels)
https://stackoverflow.com/questions/70954526/
Shall I use grad.zero_() in PyTorch with or without gradient tracking?
I'm quite new to PyTorch, and I have a question about zeroing the gradients after an epoch. Suppose I have the following training loop: for epoch in range(n_iters): y_hat = forward(X) l = loss(y, y_hat) with torch.no_grad(): l.backward() w -= lr * w.grad It is clear that in order not to have the gradients accumulated I need to zero the .grad attribute of w. However, I'm unsure about where to call w.grad.zero_(). I found tutorials on the internet where it was called inside the no_grad() section as well as tutorials where it was called outside of it. I tested both and they worked fine for a simple linear regression. Is there any difference between the two? If so, which one is better to use?
In your snippet that doesn't really matter. The underscore in the name of zero_() means it is an in-place function, and since w.grad.requires_grad == False we know that there won't be any gradient computation with respect to w.grad happening anyway. The only important thing is that the zeroing happens before the next backward() call. I would recommend, though, using different names for your loss function and the actual loss tensor it computes, otherwise you end up overwriting one with the other.
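For reference, a minimal sketch of the loop with the zeroing in place (the backward call is written outside the no_grad block here only for clarity; forward, loss, X, y, w and lr are the names from the question):

for epoch in range(n_iters):
    y_hat = forward(X)
    l = loss(y, y_hat)
    l.backward()                 # accumulates gradients into w.grad
    with torch.no_grad():
        w -= lr * w.grad         # parameter update without building a graph
        w.grad.zero_()           # reset before the next backward(); outside the block works too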
https://stackoverflow.com/questions/70956960/
Why is a 1x1 pytorch convolution changing the data?
I am debugging an issue I have using torch::nn:Conv2d. Here is a simple script which demonstrates the unexpected behaviour: import torch use_cuda = torch.cuda.is_available() device = torch.device("cuda:0" if use_cuda else "cpu") t = torch.ones([1,1,3,3]).to(device) print(t) kernel_size=[1,1] t2 = torch.nn.Conv2d(in_channels=1,out_channels=1,kernel_size=kernel_size, stride=kernel_size, padding=0).to(device)(t) print(t2) Running this results in: tensor([[[[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]]], device='cuda:0') tensor([[[[-1.7190, -1.7190, -1.7190], [-1.7190, -1.7190, -1.7190], [-1.7190, -1.7190, -1.7190]]]], device='cuda:0', grad_fn=<AddBackward0>) Why is t2 different from t? I would expect a 1x1 convolution to leave the input unchanged.
Thanks to Michael Szczesny's comment, I replaced the Conv2d with: t2 = torch.nn.AvgPool2d(1, stride=1)(t) And all is well: tensor([[[[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]]], device='cuda:0') tensor([[[[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]]], device='cuda:0') Basically, I was using the wrong function.
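For context (a sketch that is not part of the original answer): a freshly constructed nn.Conv2d has randomly initialized weight and bias, so even a 1x1 kernel scales and shifts the input by those random values. Forcing the weight to 1 and the bias to 0 makes the 1x1 convolution behave as an identity, reusing t and device from the question:

conv = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=1, stride=1, padding=0).to(device)
with torch.no_grad():
    conv.weight.fill_(1.)   # 1x1 kernel with value 1
    conv.bias.fill_(0.)     # no shift
print(conv(t))              # now equal to the all-ones input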
https://stackoverflow.com/questions/70971618/
How to retrieve PyTorch tensor from queue in multiprocessing
I am simply trying to retrieve a tensor that I put into a queue in another process, but I get a 'Connection Refused' error whenever I do. Please point me to any documentation that may help or give me some suggestions please. import torch import torch.multiprocessing as mp def test(q): x = torch.normal(mean=0.0, std=1.0, size=(2, 3)) x.share_memory_() q.put(x) if __name__ == "__main__": mp.set_start_method("spawn") q = mp.Queue() p = mp.Process(target=test, args=(q,)) p.start() p.join() print(q.get())
You should use Manager() to get rid of this error, so a working code example looks like the one below: import torch import torch.multiprocessing as mp def test(q): x = torch.normal(mean=0.0, std=1.0, size=(2, 3)) x.share_memory_() q.put(x) if __name__ == "__main__": #mp.set_start_method("spawn") manager = mp.Manager() q = manager.Queue() p = mp.Process(target=test, args=(q,)) p.start() p.join() print(q.get())
https://stackoverflow.com/questions/70980471/
Correct way of freezing layers
I have a model M and I am cloning it with M.clone(). Now, I want to freeze certain layers of M.clone(). When I set requires_grad=False, it results in this error: RuntimeError: you can only change requires_grad flags of leaf variables. If you want to use a computed variable in a subgraph that doesn't require differentiation use var_no_grad = var.detach(). How do I freeze the layers of M.clone() in that case? I want to ensure that when I backpropagate using the loss computed on a batch with M.clone(), I compute the gradients of M. A small script: model = ResNet() optimizer = Adam(model.parameters()) cloned_model = model.clone() for p in cloned_model.features.parameters(): p.require_grad = False error = loss(cloned_model(data), labels) error.backward() optimizer.step() P.S. I am not sure if I can use .detach() as I don't want to break the graph. Do correct me if I am wrong. Thank you!
You can use the in-place requires_grad_ function either on an nn.Module or on a torch.Tensor directly. Here you could do: cloned_model = copy.deepcopy(model) cloned_model.requires_grad_(False) where deepcopy comes from the standard copy module (import copy). You should also make the optimizer point at cloned_model's parameters, otherwise the optimizer will keep updating model, not cloned_model... resulting in no changes at all since you are not back-propagating through model.
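A minimal sketch of the full workflow (not from the original answer; ResNet, loss, data and labels are the names from the question's pseudo-code, and features is the attribute used there for the feature extractor):

import copy
from torch.optim import Adam

model = ResNet()
cloned_model = copy.deepcopy(model)
cloned_model.features.requires_grad_(False)   # freeze only the feature extractor of the clone

# optimize only the parameters that still require gradients
optimizer = Adam(p for p in cloned_model.parameters() if p.requires_grad)

error = loss(cloned_model(data), labels)
error.backward()
optimizer.step()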
https://stackoverflow.com/questions/70981269/
How to save and load only particular layers of a neural network with PyTorch?
Bare Problem Statement: I have trained a Model A that consists of a feature extractor FE and a classification head ACH. I want to train a model B that uses A's feature extractor FE and retrains its own classification head BCH. So far it's easy. Now I don't want to save the entire model B since the FE part of it is already saved in the model A. I only want to dump the BCH, and during inference Load model A - do its prediction Load B's classification head BCH. Swap the classification head ACH with BCH Run prediction using this swapped state. Reading PyTorch's documentation, it only talks about saving entire models. How can I achieve this? End of problem statement More details on the motivation of the problem: I have a dataset of images that I want to classify, and these images can have several classes given to them. For example the same image can have the class of "Land Vehicle" (supercategory) and a class of "Car" (category) or a "Truck". Another image might have the class "Aerial Vehicle" and it can be a "Helicopter" or a "Plane". Since the images and therefore most of the features should be the same, I wish to train one classifier for the supercategories, then freeze its feature extractor, and sort of transfer-learn the same model for the categories using the pretrained feature extractor. Since the weights of the feature-extracting backbone are the same, I only want to save the weights of the classification head of the categories model, and thus save some precious computational resources.
In general, it is quite common to only want access to the backbone of a model in order to reuse it for other purposes. There are several ways to do this. But mostly, keeping in mind that saving a model checkpoint and loading it later means saving weights and biases and being able to load them correctly into the corresponding layers, you first need to know which part of your model you want to save. When you get the state of a model, you will obtain a dictionary. The keys will be the layer names and the values will be the weights and the biases. Let's see an example with an EfficientNet classifier on how to only save the backbone of a model. Basically, an EfficientNet, as in your example, is a backbone plus a fully connected layer as a head; if you only want the backbone, you want every single layer except the head, which you'll fine-tune later. import torch import torch.nn as nn from efficientnet_pytorch import EfficientNet model = EfficientNet.from_name("efficientnet-b0") print(model) It will print the model layers and some features, basic stuff. EfficientNet( (_conv_stem): Conv2dStaticSamePadding( 3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False (static_padding): ZeroPad2d(padding=(0, 1, 0, 1), value=0.0) ) (_bn0): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True) (_blocks): ModuleList( (0): MBConvBlock( (_depthwise_conv): Conv2dStaticSamePadding( 32, 32, kernel_size=(3, 3), stride=[1, 1], groups=32, bias=False (static_padding): ZeroPad2d(padding=(1, 1, 1, 1), value=0.0) ) (_bn1): BatchNorm2d(32, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True) (_se_reduce): Conv2dStaticSamePadding( 32, 8, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) (_se_expand): Conv2dStaticSamePadding( 8, 32, kernel_size=(1, 1), stride=(1, 1) (static_padding): Identity() ) ... Now what is interesting are the final layers of this model: ... (_bn1): BatchNorm2d(1280, eps=0.001, momentum=0.010000000000000009, affine=True, track_running_stats=True) (_avg_pooling): AdaptiveAvgPool2d(output_size=1) (_dropout): Dropout(p=0.2, inplace=False) (_fc): Linear(in_features=1280, out_features=1000, bias=True) (_swish): MemoryEfficientSwish() Let's say we want to reuse this model backbone, except _fc, since we would like to use the weights on another model having the same backbone but a different head, not pre-trained. In this example I'll take the same backbone and add 3 heads: class ThreeHeadEfficientNet(torch.nn.Module): def __init__(self,nbClasses1,nbClasses2,nbClasses3,model="efficientnet-b0",dropout_p=0.2): super(ThreeHeadEfficientNet, self).__init__() self.NBC1 = nbClasses1 self.NBC2 = nbClasses2 self.NBC3 = nbClasses3 self.dropout_p = dropout_p self._dropout_layer = torch.nn.Dropout(p=self.dropout_p) self._head1 = torch.nn.Linear(1280,self.NBC1) self._head2 = torch.nn.Linear(1280,self.NBC2) self._head3 = torch.nn.Linear(1280,self.NBC3) self.model = EfficientNet.from_name(model,include_top=False) #you can notice here that I'm not loading the head, only the backbone def forward(self,x): features = self.model(x) res = features.flatten(start_dim=1) res = self._dropout_layer(res) res1 = self._head1(res) res2 = self._head2(res) res3 = self._head3(res) return res1,res2,res3 You'll notice now, if you print the layers of this ThreeHeadEfficientNet, that the layer names have slightly changed from _conv_stem.weight to model._conv_stem.weight, since the backbone is now stored in an attribute called model.
We thus have to handle that, otherwise the keys will mismatch: create a new state dictionary that matches the expected keys of this new model and contains the pretrained weights and biases: new_model = ThreeHeadEfficientNet(nbClasses1, nbClasses2, nbClasses3) #instantiate the new three-head model with your class counts (added here for completeness) pretrained_dict = model.state_dict() #pretrained model keys model_dict = new_model.state_dict() #new model keys processed_dict = {} for k in model_dict.keys(): decomposed_key = k.split(".") if("model" in decomposed_key): pretrained_key = ".".join(decomposed_key[1:]) processed_dict[k] = pretrained_dict[pretrained_key] #Here we are creating the new state dict so that our new model is able to load the pretrained parameters without the head. new_model.load_state_dict(processed_dict, strict=False) #strict=False is important here since the head layers are missing from the state dict; we don't want this line to raise an error but to load the present keys anyway. And finally, in new_model you have your new model with a pretrained backbone and heads to fine-tune. Now you should be able to fix your issues :) For more PyTorch information, please also check the forum.
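As a quick sanity check (a sketch that is not part of the original answer), you can verify that a backbone weight was actually copied over, using the attribute names shown in the printout above:

# the stem convolution of the new model's backbone should now hold the pretrained weights
assert torch.equal(new_model.model._conv_stem.weight, pretrained_dict["_conv_stem.weight"])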
https://stackoverflow.com/questions/70986805/
Shall I use transformations on PIL Image or rather on PyTorch tensor?
My inputs are PIL images. Suppose I have the following transformation composition: transforms.Compose([ transforms.RandomResizedCrop(size=224), transforms.RandomHorizontalFlip(), transforms.ToTensor()]) As most of the transforms in PyTorch can work on both PIL images and tensors, I wonder in which order I should use them. Shall I first apply all the transformations possible on PIL images and then convert to a tensor, or shall I first convert to a tensor and then apply the other transformations on tensors? Is one more efficient than the other?
There's no real advantage in general to changing the order. However, there can be advantages to moving the ToTensor out of the transforms chain. Specifically, you cannot JIT-script transformations operating on PIL images, which may have an impact on optimization. For this reason, it may be better to convert PIL images to tensors in your data loading code and then transform as needed. I refer you to the documentation to read more about this.
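For illustration, a sketch (not from the original answer) of the pattern described above, adapted from the torchvision docs: ToTensor happens at load time, and the remaining tensor-only transforms are chained with nn.Sequential so that they can be scripted:

import torch
import torchvision.transforms as T

# tensor-only transforms; lambdas and PIL-only operations would not be scriptable
transforms = torch.nn.Sequential(
    T.RandomResizedCrop(size=224),
    T.RandomHorizontalFlip(),
)
scripted_transforms = torch.jit.script(transforms)

# assumes the dataset already yields image tensors (e.g. via ToTensor or torchvision.io.read_image)
img = torch.rand(3, 256, 256)
out = scripted_transforms(img)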
https://stackoverflow.com/questions/70989146/
How do I specify model.learn() to end within a certain number of episodes in Stable Baselines 3?
I know that total_timesteps= is a required parameter, but how do I end model.learn() within a certain number of episodes? Forgive me, I'm still new to stable_baselines3 and PyTorch and not sure how to implement this in code. import gym import numpy as np from stable_baselines3 import DDPG from stable_baselines3.common.noise import NormalActionNoise env = gym.make('NeuralTraffic-v1') n_actions = env.action_space.shape[-1] action_noise = NormalActionNoise(mean=np.zeros(n_actions), sigma=0.1 * np.ones(n_actions)) model = DDPG("MlpPolicy", env, action_noise=action_noise, verbose=1) model.learn(total_timesteps=60, log_interval=1) model.save("ddpg") env = model.get_env() I wanted training to end at 60, but instead my rollout was: ---------------------------------- | rollout/ | | | ep_len_mean | 94 | | ep_rew_mean | -2.36e+04 | | time/ | | | episodes | 1 | | fps | 0 | | time_elapsed | 452 | | total_timesteps | 94 | ---------------------------------- I don't understand why it is only 1 episode? I'd like to learn how to restrict learning to a specified number of episodes.
Generic Box2D and classic-control environments have 1000 timesteps within one episode, but this is not constant, as the agent can do something odd at the beginning and the environment can reset itself (resulting in uneven timesteps per episode). So it's the norm to keep a specific number of timesteps in mind while benchmarking (1e6 in most research papers in model-free RL), as opposed to specifying a certain number of episodes. As you can see in the SB3 docs for the DDPG.learn method, they don't provide a specific argument to set the number of episodes, and it is actually best to keep a specific number of timesteps in mind. I see that you have written 60 in place of total_timesteps. That's way too little to train an RL agent. Try something like 1e5 or 1e6 and you might see good results. Good luck!
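Concretely, the call from the question would just become something like the line below (a sketch; 1e5 is an arbitrary but more realistic budget, not a value prescribed by the original answer):

model.learn(total_timesteps=int(1e5), log_interval=10)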
https://stackoverflow.com/questions/70998678/
No module named 'torchvision.models.utils'
When I use the environment of pytorch=1.10.0, torchvision=0.11.1 to run the code, I run to the statement from torchvision.models.utils import load_state_dict_from_url. The following error will appear when: >>> from torchvision.models.utils import load_state_dict_from_url Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'torchvision.models.utils'
After consulting torchvision's code repository, there is a solution. Note that this syntax is only for newer versions of PyTorch/torchvision: the original from .utils import load_state_dict_from_url is no longer applicable, because you cannot import load_state_dict_from_url from .utils anymore. Changing .utils to torch.hub fixes the problem: from torch.hub import load_state_dict_from_url This worked for me.
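If the same code needs to run on both older and newer torchvision versions, a common pattern (a sketch, not part of the original answer) is an import fallback:

try:
    from torchvision.models.utils import load_state_dict_from_url   # older torchvision
except ImportError:
    from torch.hub import load_state_dict_from_url                  # newer torchvision / PyTorch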
https://stackoverflow.com/questions/70998767/
different method of running pytorch on gpu
See the code block below (the source of the code can be found here, also you don't need to read the whole block, I will explain and highlight the important part) def train(data_loader, model, optimizer, scheduler, total_epochs, save_interval, save_folder, sets): # settings batches_per_epoch = len(data_loader) # log.info('{} epochs in total, {} batches per epoch'.format(total_epochs, batches_per_epoch)) loss_seg = nn.CrossEntropyLoss(ignore_index=-1) print("Current setting is:") print(sets) print("\n\n") if not sets.no_cuda: loss_seg = loss_seg.cuda() # change model in training mode, enable batch normalization etc model.train() # record train time train_time_sp = time.time() # loop to train the model for epoch in range(total_epochs): log.info('Start epoch {}'.format(epoch)) scheduler.step() log.info('lr = {}'.format(scheduler.get_lr())) for batch_id, batch_data in enumerate(data_loader): # getting data batch batch_id_sp = epoch * batches_per_epoch volumes, label_masks = batch_data if not sets.no_cuda: volumes = volumes.cuda() optimizer.zero_grad() out_masks = model(volumes) # resize label [n, _, d, h, w] = out_masks.shape new_label_masks = np.zeros([n, d, h, w]) for label_id in range(n): label_mask = label_masks[label_id] [ori_c, ori_d, ori_h, ori_w] = label_mask.shape label_mask = np.reshape(label_mask, [ori_d, ori_h, ori_w]) scale = [d*1.0/ori_d, h*1.0/ori_h, w*1.0/ori_w] label_mask = ndimage.interpolation.zoom(label_mask, scale, order=0) new_label_masks[label_id] = label_mask new_label_masks = torch.tensor(new_label_masks).to(torch.int64) if not sets.no_cuda: new_label_masks = new_label_masks.cuda() # calculating loss loss_value_seg = loss_seg(out_masks, new_label_masks) loss = loss_value_seg loss.backward() optimizer.step() avg_batch_time = (time.time() - train_time_sp) / (1 + batch_id_sp) log.info( 'Batch: {}-{} ({}), loss = {:.3f}, loss_seg = {:.3f}, avg_batch_time = {:.3f}'\ .format(epoch, batch_id, batch_id_sp, loss.item(), loss_value_seg.item(), avg_batch_time)) if not sets.ci_test: # save model if batch_id == 0 and batch_id_sp != 0 and batch_id_sp % save_interval == 0: #if batch_id_sp != 0 and batch_id_sp % save_interval == 0: model_save_path = '{}_epoch_{}_batch_{}.pth.tar'.format(save_folder, epoch, batch_id) model_save_dir = os.path.dirname(model_save_path) if not os.path.exists(model_save_dir): os.makedirs(model_save_dir) log.info('Save checkpoints: epoch = {}, batch_id = {}'.format(epoch, batch_id)) torch.save({ 'ecpoch': epoch, 'batch_id': batch_id, 'state_dict': model.state_dict(), 'optimizer': optimizer.state_dict()}, model_save_path) print('Finished training') if sets.ci_test: exit() This is a customised training function; the author achieves GPU execution with the line loss_seg = loss_seg.cuda(), where loss_seg is an instance of nn.CrossEntropyLoss. This part confused me, because according to the official PyTorch documentation, moving only my model and my input data to the GPU should be enough. I want to know why the author of the code above also moves the loss to the GPU, and I would like more details about running PyTorch on the GPU.
In general, in order to harness the full power of GPUs, every stateful Module should be sent to a cuda device before the forward step. A stateful Module has an internal state, e.g. a Parameter (weights). This is not usually the case for a loss, which in general just applies a functional that is already implemented for cuda devices. In conclusion, if the loss is stateful, then it makes sense to send it to a cuda device; otherwise, it is not necessary. Have also a look at this question in the PyTorch forum.
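For instance (a sketch, not from the original answer), nn.CrossEntropyLoss becomes stateful when you pass per-class weights, because that weight tensor is registered as a buffer and has to live on the same device as the inputs:

import torch
import torch.nn as nn

class_weights = torch.tensor([1.0, 2.0, 0.5])
loss_seg = nn.CrossEntropyLoss(weight=class_weights, ignore_index=-1)
loss_seg = loss_seg.cuda()   # moves the registered weight buffer to the GPU

logits = torch.randn(4, 3, device="cuda")
targets = torch.randint(0, 3, (4,), device="cuda")
loss = loss_seg(logits, targets)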
https://stackoverflow.com/questions/71004885/
indices in MaxPool2d in pytorch
I am studying the documentation at https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html. In the parameters section, it states return_indices – if True, will return the max indices along with the outputs. Useful for torch.nn.MaxUnpool2d later Could someone explain to me what max indices mean here? I believe it is the indices corresponding to the maximal value. If the maximal value is unique, does that mean only 1 index will be returned?
I assume you already know how max pooling works. Then, let's print some results to get more insights. import torch import torch.nn as nn pool = nn.MaxPool2d(kernel_size=2, return_indices=True) input = torch.zeros(1, 1, 4, 4) input[..., 0, 1] = input[..., 1, 3] = input[..., 2, 2] = input[..., 3, 0] = 1. print(input) output tensor([[[[0., 1., 0., 0.], [0., 0., 0., 1.], [0., 0., 1., 0.], [1., 0., 0., 0.]]]]) output, indices = pool(input) print(output) output tensor([[[[1., 1.], [1., 1.]]]]) print(indices) output tensor([[[[ 1, 7], [12, 10]]]]) If you stretch the input tensor and make it 1d, you can see that indices contains the positions of each 1 value (the maximum for each window of MaxPool2d). As written in the documentation of torch.nn.MaxPool2d, indices is required for the torch.nn.MaxUnpool2d module: MaxUnpool2d takes in as input the output of MaxPool2d including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.
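To see how those indices are then consumed, here is a small continuation of the example above (a sketch, not from the original answer):

unpool = nn.MaxUnpool2d(kernel_size=2)
recovered = unpool(output, indices)
print(recovered)
# tensor([[[[0., 1., 0., 0.],
#           [0., 0., 0., 1.],
#           [0., 0., 1., 0.],
#           [1., 0., 0., 0.]]]])
# each maximum is placed back at the position given by its index; all other values are zero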
https://stackoverflow.com/questions/71025321/
how to create an empty parameter in a pytorch tensor variable
import torch from torch import FloatTensor def new_parameter(*size): #1024 out = torch.nn.Parameter(FloatTensor(*size), requires_grad=True) torch.nn.init.xavier_normal_(out) return out at = new_parameter(1024, 1) The output is Parameter containing: tensor([[ 0.0203], [-0.0043], [-0.0386], ..., [-0.0084], [-0.0289], [-0.0188]], requires_grad=True) In a similar way we can create bt=torch.randn((1024,1),requires_grad=True) whose output looks similar: tensor([[-1.5478], [ 1.5060], [ 0.1580], ..., [ 0.9754], [ 0.1699], [ 0.2062]], requires_grad=True) Are there any differences between the tensors created in these two ways? Please explain the above code simply.
The first method allocates an uninitialized float tensor (FloatTensor(*size) does not fill it with meaningful values), then wraps it with nn.Parameter, which is generally used to register that tensor as a parameter of an nn.Module (not seen here). The utility function nn.init.xavier_normal_ is then applied in-place on that parameter to initialize its values. The second method only initializes a random float tensor drawn from a standard normal distribution.
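A small sketch (not from the original answer) of the practical difference: when assigned as an attribute of an nn.Module, an nn.Parameter is automatically registered and returned by parameters() (and therefore picked up by optimizers), while a plain tensor with requires_grad=True is not:

import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self):
        super().__init__()
        self.at = nn.Parameter(torch.empty(1024, 1))        # registered as a parameter
        nn.init.xavier_normal_(self.at)
        self.bt = torch.randn(1024, 1, requires_grad=True)  # NOT registered

m = Demo()
print([name for name, _ in m.named_parameters()])  # ['at']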
https://stackoverflow.com/questions/71030889/
Pytorch CNN script training, but not getting results
I’m just getting started with pytorch. I am trying to do a simple binary classification project with the cats and dogs dataset. After much fumbling around, I was able to get the model to train, but I’m not getting the expected results. First, the loss starts out way too low. To me, that seems to indicate I’m not measuring loss correctly. Second, the model just predicts everything as 0. I’m sure there are many mistakes here, but I would appreciate it if someone could take a look and let me know what I’m doing wrong. Thank you! import torch import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt import torch.nn as nn import torch.nn.functional as F from torchvision.io import read_image from torch.utils.data import Dataset, DataLoader from torchvision.utils import make_grid from torchvision.utils import save_image from sklearn.model_selection import train_test_split import os import numpy as np from sklearn import preprocessing import glob import cv2 device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') print(device) IMAGE_SIZE = 64 DATA_DIR = "C:\\Users\\user\\source\\repos\\pytorch-youtube\\data\\catsdogs\\PetImages\\" LABELS = ('cat', 'dog') # custom dataset class # expects the root folder to have sub folders with class names # and pictures of classes inside folder class CustomImageDataset(Dataset): def __init__(self): self.imgs_path = DATA_DIR file_list = glob.glob(self.imgs_path + "*") self.data = [] for class_path in file_list: class_name = class_path.split("\\")[-1] for img_path in glob.glob(class_path + "\\*.jpg"): self.data.append([img_path, class_name]) self.class_map = {"Dog": 0, "Cat": 1} self.img_dim = (IMAGE_SIZE, IMAGE_SIZE) def __len__(self): return len(self.data) def __getitem__(self, idx): img_path, class_name = self.data[idx] # this is to handle corrupt images in the dataset # could probably be handled better try: img = cv2.imread(img_path) img = cv2.resize(img, self.img_dim) except: img_path, class_name = self.data[idx+1] img = cv2.imread(img_path) img = cv2.resize(img, self.img_dim) class_id = self.class_map[class_name] img_tensor = torch.from_numpy(img) img_tensor = img_tensor.permute(2, 0, 1) # not exactly sure what/why for this line class_id = torch.tensor([class_id]) return img_tensor, class_id # as is, we aren't using these transform = transforms.Compose( [transforms.Resize((64, 64)), transforms.ConvertImageDtype(torch.float32), transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5)), ] ) dataset = CustomImageDataset() dataloader = DataLoader(dataset, batch_size=4, shuffle=True) dataiter = iter(dataloader) train_features, train_labels = dataiter.next() class Net(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 6, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(2704, 128) # only way I got input size was by running code self.fc2 = nn.Linear(128, 64) self.fc3 = nn.Linear(64, 2) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = torch.flatten(x, 1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() # net.to(device) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(net.parameters(), lr=0.001) for epoch in range(2): running_loss = 0.0 for i, data in enumerate(dataloader, 0): inputs, labels = data # this is the fix for "expected scalar type Byte but found Float" # this seems to completely destroy the features in the image to just white inputs = inputs.float() 
optimizer.zero_grad() outputs = net(inputs) loss = criterion(outputs, torch.max(labels,1)[1]) loss.backward() optimizer.step() running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.10f}') running_loss = 0.0 print("finished") # save the model PATH = './custom_trained_model_dogs_cats.pth' torch.save(net.state_dict(), PATH)
It seems I was passing in the wrong thing to my loss function. I changed this line loss = criterion(outputs, torch.max(labels,1)[1]) to this loss = criterion(outputs, torch.max(labels,1)[0]) and everything seems to be working. I'm able to correctly classify the cats and dogs. The reason is that labels has shape [batch_size, 1], so torch.max(labels, 1) returns a (values, indices) pair in which the indices (element [1]) are always 0, while the values (element [0]) are the actual class ids.
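A quick illustration of the difference (a sketch, not part of the original answer):

import torch

labels = torch.tensor([[1], [0], [1]])    # shape [batch_size, 1], as produced by the dataset
print(torch.max(labels, 1)[1])            # tensor([0, 0, 0]) -> argmax along dim 1, always 0
print(torch.max(labels, 1)[0])            # tensor([1, 0, 1]) -> the actual class ids
# labels.squeeze(1) would give the same result as torch.max(labels, 1)[0] here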
https://stackoverflow.com/questions/71040931/
randomized data transformation in pytorch
I want to rotate all the images in my Dataset with a random degree between [0,180]. If I compose a transformation function and pass my images to this function in the __getitem__ function of my Dataset class. Does this mean: every single image is randomly rotated? images in each batch get rotated with an identical degree but this degree randomly changes across batches (calls)? I would appreciate it if you could clarify this for me.
In mapped datasets, __getitem__ is used to select a single element from the dataset. The way random transformations work in PyTorch/Torchvision is that they apply a unique random transformation each time the transform is called. This means: Every single image in your dataset is indeed randomly rotated, but not by the same amount. Additionally, images in a batch get different transformations. In other words, elements in the batch won't share the same transformation parameters. Here is a minimal example with a dummy dataset: from random import randint import torchvision.transforms as T from torch.utils.data import Dataset, DataLoader class D(Dataset): def __init__(self, n): super().__init__() self.n = n self.transforms = T.Lambda(lambda x: x*randint(0,10)) def __len__(self): return self.n def __getitem__(self, index): x = self.transforms(index) return x Here you can see that the random transform differs both within and across batches: >>> dl = DataLoader(D(10), batch_size=2) >>> for i, x in enumerate(dl): ... print(f'batch {i}: elements {2*i} and {2*i+1} = {x.tolist()}') batch 0: elements 0 and 1 = [0, 2] batch 1: elements 2 and 3 = [14, 27] batch 2: elements 4 and 5 = [32, 40] batch 3: elements 6 and 7 = [60, 0] batch 4: elements 8 and 9 = [80, 27]
https://stackoverflow.com/questions/71048425/