PyTorch: Comparing predicted label and target label to compute accuracy
I'm trying to implement this loop to get the accuracy of my PyTorch CNN (the complete code for it is here). My version of the loop so far:

    correct = 0
    test_total = 0
    for itera, testdata2 in enumerate(test_loader, 0):
        test_images2, test_labels2 = testdata2
        if use_gpu:
            test_images2 = Variable(test_images2.cuda())
        else:
            test_images2 = Variable(test_images2)
        outputs = model(test_images2)
        _, predicted = torch.max(outputs.data, 1)
        test_total += test_labels2.size(0)
        test_labels2 = test_labels2.type_as(predicted)
        correct += (predicted == test_labels2[0]).sum()
    print('Accuracy of the network on all the test images: %d %%' % (
        100 * correct / test_total))

If I run it like this, I get:

    Traceback (most recent call last):
      File "c:/python_code/Customized-DataLoader-master_two/multi_label_classifier_for2classes.py", line 186, in <module>
        main()
      File "c:/python_code/Customized-DataLoader-master_two/multi_label_classifier_for2classes.py", line 177, in main
        correct += (predicted == test_labels2[0]).sum()
      File "C:\anaconda\envs\pytorch_cuda\lib\site-packages\torch\tensor.py", line 360, in __eq__
        return self.eq(other)
    RuntimeError: invalid argument 3: sizes do not match at c:\anaconda2\conda-bld\pytorch_1519501749874\work\torch\lib\thc\generated\../THCTensorMathCompareT.cuh:65

I used test_labels2 = test_labels2.type_as(predicted) to have both tensors as LongTensors, which seems to work fine to avert the "Expected this... but got..." errors. They look like this now:

    test_labels2 after conversion:
     0  1
     1  0
     1  0
    [torch.cuda.LongTensor of size 3x2 (GPU 0)]

    predicted:
     1
     1
     1
    [torch.cuda.LongTensor of size 3 (GPU 0)]

I suppose the problem now is that test_labels2[0] is returning a row rather than a column. How do I get this to work?
Indexing in PyTorch works mostly like indexing in NumPy. To index all rows of a certain column j, use:

    tensor[:, j]

Alternatively, the select() function from PyTorch can be used.
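Applied to the loop from the question, the comparison becomes the following (a minimal sketch; taking column 0 of the 3x2 label tensor is an assumption, and which column you actually want depends on your label layout):

    correct += (predicted == test_labels2[:, 0]).sum()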
https://stackoverflow.com/questions/49304817/
Pytorch model weight type conversion
I'm trying to do inference with the FlowNet2-C model loaded from file. However, I ran into a data type problem. How can I resolve it? (Links: source code, FlowNet2-C pre-trained model.)

    $ python main.py
    Initializing Datasets
    [0.000s] Loading checkpoint '/notebooks/data/model/FlowNet2-C_checkpoint.pth.tar'
    [1.293s] Loaded checkpoint '/notebooks/data/model/FlowNet2-C_checkpoint.pth.tar' (at epoch 0)
    (1L, 6L, 384L, 512L) <class 'torch.autograd.variable.Variable'>
    [1.642s] Operation failed
    Traceback (most recent call last):
      File "main.py", line 102, in <module>
        main()
      File "main.py", line 98, in main
        summary(input_size, model)
      File "main.py", line 61, in summary
        model(x)
      File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/notebooks/data/vinet/FlowNetC.py", line 75, in forward
        out_conv1a = self.conv1(x1)
      File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/container.py", line 67, in forward
        input = module(input)
      File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/conv.py", line 282, in forward
        self.padding, self.dilation, self.groups)
      File "/usr/local/lib/python2.7/dist-packages/torch/nn/functional.py", line 90, in conv2d
        return f(input, weight, bias)
    RuntimeError: Input type (CUDAFloatTensor) and weight type (CPUFloatTensor) should be the same
That is probably because your model and the input x have different data types: the traceback shows the input is a CUDA tensor, while the model's weights are still on the CPU. You can call model.cuda() after line 94, which will move the model's parameters to the GPU. Then the error should disappear.
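A minimal sketch of the fix (the constructor call is hypothetical; build the model however you already do from the checkpoint):

    model = build_flownet2c()  # hypothetical; however the checkpoint-loaded model is created
    model.cuda()               # move all parameters and buffers to the GPU
    out = model(x)             # x is already a CUDA tensor, so the types now match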
https://stackoverflow.com/questions/49313974/
math operator difference *= or +=
I found a weird thing when I used the operators *= and +=. The code:

    aa = Variable(torch.FloatTensor([[1,2],[3,4]]))
    bb = aa
    bb = bb*2
    print(bb)
    print(aa)

    cc = Variable(torch.FloatTensor([[1,2],[3,4]]))
    dd = cc
    dd *= 2
    print(cc)
    print(dd)

The results:

    Variable containing:
     2  4
     6  8
    [torch.FloatTensor of size 2x2]

    Variable containing:
     1  2
     3  4
    [torch.FloatTensor of size 2x2]

    Variable containing:
     2  4
     6  8
    [torch.FloatTensor of size 2x2]

    Variable containing:
     2  4
     6  8
    [torch.FloatTensor of size 2x2]

As you can see, when I used bb = bb*2, aa was not affected. However, with dd *= 2, dd seems to point towards (share) the same address as cc, and cc is changed. The respective preceding lines are the same: bb = aa and dd = cc. It seems that the *= operator changed the original deep copy to a shallow copy, and the change was made after the copy line itself. I am wondering if this is a bug. If it is, it is important, since it affects basic mathematical operations. Generally, I thought just using the built-in operation functions, e.g. torch.add(), is a good solution.

OS: Mac OS X; PyTorch version: 3.0; How you installed PyTorch: conda; Python version: 3.6; CUDA/cuDNN version: None; GPU models and configuration: None

I understand dd *= 2 is in-place multiplication, but how did the value of dd transfer into cc, while with dd = dd * 2 the new values did not transfer to cc? There is no difference in the preceding lines dd = cc and bb = aa. BTW, in plain Python (not a PyTorch Variable or Tensor), neither dd *= 2 nor dd = dd * 2 will affect cc's value.
When you do dd = cc, both dd and cc are now references to the same object (same for bb = aa). Nothing is being copied! When you do bb = bb * 2, the * operator creates a new object and bb now refers to that object. No existing object is changed. When you do dd *= 2, the object that dd refers to (and which cc also refers to) is changed. So the difference is that * creates a new object and = makes a variable refer to a new object (rather than changing the object in any way), whereas *= changes the object. It may be counter-intuitive that x *= y behaves differently than x = x * y, but those are the semantics of the language, not a bug.
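A short sketch demonstrating the aliasing (written against a recent PyTorch, so no Variable wrapper is needed):

    import torch

    a = torch.tensor([1., 2.])
    b = a                      # b and a now reference the same tensor object
    b = b * 2                  # '*' allocates a new tensor and rebinds b
    print(a)                   # tensor([1., 2.]) -- a is untouched

    c = torch.tensor([1., 2.])
    d = c
    d *= 2                     # in-place: mutates the single shared tensor
    print(c)                   # tensor([2., 4.]) -- c sees the change
    print(c.data_ptr() == d.data_ptr())  # True: same underlying storage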
https://stackoverflow.com/questions/49321725/
How to take shape [1,1,256] from [1,4,256] cuda.FloatTensor?
I have a batch of output hidden vectors from a GRU. Its shape is [1, 4, 256]:

    ( 0 ,.,.) =
    -0.9944  1.0000  0.0000  ...  -1.0000  0.0000 -1.0000
    -1.0000  1.0000  0.0000  ...  -1.0000  0.0000 -1.0000
    -1.0000  1.0000  0.0000  ...  -1.0000  0.0000 -1.0000
    -1.0000  1.0000  0.0000  ...  -1.0000  0.0000 -1.0000
    [torch.cuda.FloatTensor of size (1,4,256) (GPU 0)]

I need a shape of [1, 1, 256] to pass to another model. How can I take it? With this line I can only get a shape of [1, 256]:

    decoder_hidden = encoder_hidden[:, index]

Resizing and wrapping a new FloatTensor didn't work.
You can unsqueeze() in dimension 1 to achieve this:

    encoder_hidden = torch.randn(1, 4, 256)
    print(encoder_hidden.size())
    for idx in range(encoder_hidden.size(1)):
        decoder_hidden = encoder_hidden[:, idx, :].unsqueeze(1)
        print(decoder_hidden.size())

It prints:

    torch.Size([1, 4, 256])
    torch.Size([1, 1, 256])
    torch.Size([1, 1, 256])
    torch.Size([1, 1, 256])
    torch.Size([1, 1, 256])
https://stackoverflow.com/questions/49334900/
Understanding PyTorch CNN Channels
I'm a bit confused about how CNNs and channels work. Specifically, why are these two implementations not equal? Isn't the number of output channels just applying that many filters?

    self.conv1 = nn.Conv2d(1, 10, kernel_size=(3, self.embeds_size))
    self.conv2 = nn.ModuleList([nn.Conv2d(1, 1, kernel_size=(3, self.embeds_size)) for f in range(10)])
    ...
    conv1s = self.conv1(x)
    conv2s = [conv(x) for conv in self.conv2]
    conv2s = torch.stack(conv2s, 1).squeeze(2)
    print(torch.equal(conv1s, conv2s))
Check the state dicts of the different modules. Unless you're doing something fancy that you didn't tell us about, PyTorch initializes the weights randomly, so the two variants start from different parameters. Specifically, try this:

    print(self.conv1.state_dict()["weight"][0])
    print(self.conv2[0].state_dict()["weight"][0])

They will be different.
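To confirm that the initialization is the only difference, one can copy conv1's filters into the ten single-channel convolutions; the outputs then agree. A sketch under that assumption (the shapes are illustrative):

    import torch
    import torch.nn as nn

    conv1 = nn.Conv2d(1, 10, kernel_size=3)
    convs = nn.ModuleList([nn.Conv2d(1, 1, kernel_size=3) for _ in range(10)])
    with torch.no_grad():
        for i, c in enumerate(convs):
            c.weight.copy_(conv1.weight[i:i+1])  # i-th filter of conv1
            c.bias.copy_(conv1.bias[i:i+1])

    x = torch.randn(2, 1, 8, 8)
    out1 = conv1(x)
    out2 = torch.cat([c(x) for c in convs], dim=1)  # stack the 10 single-channel outputs
    print(torch.allclose(out1, out2))  # True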
https://stackoverflow.com/questions/49362616/
Non linear mapping to vector of higher dimension
I am learning Keras and need help with the following. I currently have sequences of floats in lists X and Y. What I need is a non-linear mapping of each element to a vector of higher dimension, following the equation below:

    pos(i) = tanh(W . concat(X[i], Y[i]))

where W is a learnable weight matrix, concat performs concatenation, and pos(i) is a 16x1 vector. (I'm trying to create 16-channel inputs for a CNN.) I found that a PyTorch implementation of the above is:

    m = nn.Linear(2, 16)
    input = torch.cat((X[i], Y[i]))
    torch.nn.functional.tanh(m(input))

Currently I've tried the concat and tanh in numpy, and it seems that is not what I want here. Can you help me implement the above in Keras?
Based on what you have there, this is what I would do in Keras. I'm going to assume that you just want to concatenate your inputs before feeding them into the model, so we'll do that with numpy. Note something like:

    import numpy as np
    from keras.layers import Dense, Input
    from keras.models import Model

    X = np.random.rand(100, 1)
    Y = np.random.rand(100, 1)
    y = np.random.rand(100, 16)

    # concatenate along the features in numpy
    XY = np.concatenate([X, Y], axis=1)

    # build the model ('in' is a reserved word in Python, so use 'inp')
    inp = Input(shape=(2,))
    out = Dense(16, activation='tanh')(inp)  # out has shape (?, 16)
    model = Model(inp, out)
    model.compile(loss='mse', optimizer='adam')
    model.fit(XY, y)
https://stackoverflow.com/questions/49375984/
Cross Entropy in PyTorch
The cross-entropy formula is H(p, q) = -sum_x p(x) log q(x). But why does the following give loss = 0.7437 instead of loss = 0 (since 1*log(1) = 0)?

    import torch
    import torch.nn as nn
    from torch.autograd import Variable

    output = Variable(torch.FloatTensor([0, 0, 0, 1])).view(1, -1)
    target = Variable(torch.LongTensor([3]))

    criterion = nn.CrossEntropyLoss()
    loss = criterion(output, target)
    print(loss)
In your example you are treating the output [0, 0, 0, 1] as probabilities, as required by the mathematical definition of cross entropy. But PyTorch treats it as raw scores that don't need to sum to 1, and first converts it into probabilities using the softmax function. So H(p, q) becomes:

    H(p, softmax(output))

Translating the output [0, 0, 0, 1] into probabilities:

    softmax([0, 0, 0, 1]) = [0.1749, 0.1749, 0.1749, 0.4754]

whence:

    -log(0.4754) = 0.7437
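A short snippet verifying the arithmetic (modern PyTorch API; the values in the comments are rounded):

    import torch
    import torch.nn.functional as F

    output = torch.tensor([[0., 0., 0., 1.]])
    target = torch.tensor([3])

    probs = F.softmax(output, dim=1)
    print(probs)                             # [[0.1749, 0.1749, 0.1749, 0.4754]]
    print(-torch.log(probs[0, 3]))           # 0.7437
    print(F.cross_entropy(output, target))   # 0.7437 -- the same value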
https://stackoverflow.com/questions/49390842/
How do I re-use trained fastai models?
How do I load a pretrained model using the fastai implementation over PyTorch? As in scikit-learn, where I can use pickle to dump a model to file, then load and use it later. I've used the .load() method after declaring the learn instance as below to load previously saved weights:

    arch = resnet34
    data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
    learn = ConvLearner.pretrained(arch, data, precompute=False)
    learn.load('resnet34_test')

Then, to predict the class of an image:

    trn_tfms, val_tfms = tfms_from_model(arch, 100)
    img = open_image('circle/14.png')
    im = val_tfms(img)
    preds = learn.predict_array(im[None])
    print(np.argmax(preds))

But it gives me the error:

    ValueError: Expected more than 1 value per channel when training, got input size [1, 1024]

This code works if I use learn.fit(0.01, 3) instead of learn.load(). What I really want is to avoid the training step in my application.
This error occurs whenever a batch of your data contains a single element: batch normalization cannot compute statistics over a batch of size 1 while the model is in training mode.

Solution 1: call learn.predict() after learn.load('resnet34_test'), which runs the model in evaluation mode.

Solution 2: remove one data point from your training set.

See the related PyTorch issue and the fastai forum thread for details.
https://stackoverflow.com/questions/49398255/
RuntimeError: Expected object of type torch.DoubleTensor but found type torch.FloatTensor for argument #2 'weight'
My input tensor is of type torch.DoubleTensor, but I got the RuntimeError below:

    RuntimeError: Expected object of type torch.DoubleTensor but found type torch.FloatTensor for argument #2 'weight'

I didn't specify the type of the weights explicitly (i.e., I did not initialize the weights myself; they are created by PyTorch). What influences the type of the weights in the forward process? Thanks a lot!
The default type for weights and biases is torch.FloatTensor, so you'll need to either cast your model to torch.DoubleTensor or cast your inputs to torch.FloatTensor. To cast your inputs:

    X = X.float()

or cast your complete model to DoubleTensor:

    model = model.double()

You can also set the default type for all tensors using:

    torch.set_default_tensor_type('torch.DoubleTensor')

It is better to convert your inputs to float rather than your model to double, because mathematical computations in double precision are considerably slower on the GPU.
https://stackoverflow.com/questions/49407303/
Replace diagonal elements with vector in PyTorch
I have been searching everywhere for a PyTorch equivalent of the following, but I cannot find anything:

    L_1 = np.tril(np.random.normal(scale=1., size=(D, D)), k=0)
    L_1[np.diag_indices_from(L_1)] = np.exp(np.diagonal(L_1))

I guess there is no way to replace the diagonal elements in such an elegant way using PyTorch.
I do not think such functionality is implemented as of now. But you can implement the same thing using a mask, as follows:

    # Assuming v is the vector and a is the tensor whose diagonal is to be replaced
    mask = torch.diag(torch.ones_like(v))
    out = mask * torch.diag(v) + (1. - mask) * a

So your implementation will be something like:

    L_1 = torch.tril(torch.randn((D, D)))
    v = torch.exp(torch.diag(L_1))
    mask = torch.diag(torch.ones_like(v))
    L_1 = mask * torch.diag(v) + (1. - mask) * L_1

Not as elegant as numpy, but not too bad either.
https://stackoverflow.com/questions/49429147/
How do I initialize weights in PyTorch?
How do I initialize weights and biases of a network (via e.g. He or Xavier initialization)?
Single layer

To initialize the weights of a single layer, use a function from torch.nn.init. For instance:

    conv1 = torch.nn.Conv2d(...)
    torch.nn.init.xavier_uniform(conv1.weight)

Alternatively, you can modify the parameters by writing to conv1.weight.data (which is a torch.Tensor). Example:

    conv1.weight.data.fill_(0.01)

The same applies for biases:

    conv1.bias.data.fill_(0.01)

nn.Sequential or custom nn.Module

Pass an initialization function to torch.nn.Module.apply. It will initialize the weights in the entire nn.Module recursively.

apply(fn): Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch-nn-init).

Example:

    def init_weights(m):
        if isinstance(m, nn.Linear):
            torch.nn.init.xavier_uniform(m.weight)
            m.bias.data.fill_(0.01)

    net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
    net.apply(init_weights)
https://stackoverflow.com/questions/49433936/
Understanding the code in PyTorch
I am having problems with understanding the following part of the code from ResNet architecture. The full code is available at https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/02-intermediate/deep_residual_network/main-gpu.py . I am not very familiar with Python. # Residual Block class ResidualBlock(nn.Module): def __init__(self, in_channels, out_channels, stride=1, downsample=None): super(ResidualBlock, self).__init__() self.conv1 = conv3x3(in_channels, out_channels, stride) self.bn1 = nn.BatchNorm2d(out_channels) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3(out_channels, out_channels) self.bn2 = nn.BatchNorm2d(out_channels) self.downsample = downsample def forward(self, x): residual = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) if self.downsample: residual = self.downsample(x) out += residual out = self.relu(out) return out # ResNet Module class ResNet(nn.Module): def __init__(self, block, layers, num_classes=10): super(ResNet, self).__init__() self.in_channels = 16 self.conv = conv3x3(3, 16) self.bn = nn.BatchNorm2d(16) self.relu = nn.ReLU(inplace=True) self.layer1 = self.make_layer(block, 16, layers[0]) self.layer2 = self.make_layer(block, 32, layers[0], 2) self.layer3 = self.make_layer(block, 64, layers[1], 2) self.avg_pool = nn.AvgPool2d(8) self.fc = nn.Linear(64, num_classes) def make_layer(self, block, out_channels, blocks, stride=1): downsample = None if (stride != 1) or (self.in_channels != out_channels): downsample = nn.Sequential( conv3x3(self.in_channels, out_channels, stride=stride), nn.BatchNorm2d(out_channels)) layers = [] layers.append(block(self.in_channels, out_channels, stride, downsample)) self.in_channels = out_channels for i in range(1, blocks): layers.append(block(out_channels, out_channels)) return nn.Sequential(*layers) def forward(self, x): out = self.conv(x) out = self.bn(out) out = self.relu(out) out = self.layer1(out) out = self.layer2(out) out = self.layer3(out) out = self.avg_pool(out) out = out.view(out.size(0), -1) out = self.fc(out) return out resnet = ResNet(ResidualBlock, [3, 3, 3]) My main question is why should we pass 'block' every time? In the function def make_layer(self, block, out_channels, blocks, stride=1): instead of passing 'block' why cant we create an instance of 'ResidualBlock' and append it with layers as follows? block = ResidualBlock(self.in_channels, out_channels, stride, downsample) layers.append(block)
The ResNet module is designed to be generic, so that it can create networks with arbitrary blocks. So, if you do not pass the block which you want to create you'll have to write the name of the block explicitly like below. # Residual Block class ResidualBlock(nn.Module): def __init__(self, in_channels, out_channels, stride=1, downsample=None): super(ResidualBlock, self).__init__() self.conv1 = conv3x3(in_channels, out_channels, stride) self.bn1 = nn.BatchNorm2d(out_channels) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3(out_channels, out_channels) self.bn2 = nn.BatchNorm2d(out_channels) self.downsample = downsample def forward(self, x): residual = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) if self.downsample: residual = self.downsample(x) out += residual out = self.relu(out) return out # ResNet Module class ResNet(nn.Module): def __init__(self, layers, num_classes=10): super(ResNet, self).__init__() self.in_channels = 16 self.conv = conv3x3(3, 16) self.bn = nn.BatchNorm2d(16) self.relu = nn.ReLU(inplace=True) self.layer1 = self.make_layer(16, layers[0]) self.layer2 = self.make_layer(32, layers[0], 2) self.layer3 = self.make_layer(64, layers[1], 2) self.avg_pool = nn.AvgPool2d(8) self.fc = nn.Linear(64, num_classes) def make_layer(self, out_channels, blocks, stride=1): downsample = None if (stride != 1) or (self.in_channels != out_channels): downsample = nn.Sequential( conv3x3(self.in_channels, out_channels, stride=stride), nn.BatchNorm2d(out_channels)) layers = [] layers.append(ResidualBlock(self.in_channels, out_channels, stride, downsample)) # Major change here self.in_channels = out_channels for i in range(1, blocks): layers.append(ResidualBlock(out_channels, out_channels)) # Major change here return nn.Sequential(*layers) def forward(self, x): out = self.conv(x) out = self.bn(out) out = self.relu(out) out = self.layer1(out) out = self.layer2(out) out = self.layer3(out) out = self.avg_pool(out) out = out.view(out.size(0), -1) out = self.fc(out) return out resnet = ResNet([3, 3, 3]) This reduces the capability of your ResNet module and binds it with only the ResidualBlock. Now, if you create some other type of block (say ResidualBlock2), you will need to create another Resnet2 module specifically for that. So, it's better to create a generic ResNet module which takes in the block parameter, so that it can be used with different types of blocks. A trivial python example to clarify Suppose you want to create a function that can apply a mathematical operation on a list and returns its output. So, you might create something like below def exp(inp_list): out_list = [] for num in inp_list: out_list.append(math.exp(num)) return out_list def floor(inp_list): out_list = [] for num in inp_list: out_list.append(math.floor(num)) return out_list Here, we are doing an exponent and a floor operation on some input list. But, we can do a better job by defining a generic function to do the same as def apply_func(fn, inp_list): out_list = [] for num in inp_list: out_list.append(fn(num)) return out_list and now call this apply_func as apply_func(math.exp, inp_list) for exponential and as apply_func(math.floor, inp_list) for floor function. Also this opens up possibility for any kind of operation. Note: It's not a practical example as you can always use map or list comprehension for achieving the same thing. But, it demonstrates the use clearly.
https://stackoverflow.com/questions/49445701/
How can I update the parameters of a neural network in PyTorch?
Let's say I wanted to multiply all parameters of a neural network in PyTorch (an instance of a class inheriting from torch.nn.Module) by 0.9. How would I do that?
Let net be an instance of a neural network nn.Module. Then, to multiply all parameters by 0.9:

    state_dict = net.state_dict()
    for name, param in state_dict.items():
        # Transform the parameter as required.
        transformed_param = param * 0.9
        # Update the parameter in place.
        param.copy_(transformed_param)

If you want to update only the weights instead of every parameter:

    state_dict = net.state_dict()
    for name, param in state_dict.items():
        # Don't update if this is not a weight.
        if not "weight" in name:
            continue
        # Transform the parameter as required.
        transformed_param = param * 0.9
        # Update the parameter in place.
        param.copy_(transformed_param)
https://stackoverflow.com/questions/49446785/
PyTorch forward pass using weights trained by Theano
I've trained a small size CNN binary classifier in Theano. To have a simpler code, I wanted to port the trained weights to PyTorch or numpy forward pass for predictions. The predictions by original Theano program are satisfying but the PyTorch forward pass predicted all the examples to one class. Here is how I save trained weights in Theano using h5py: layer0_w = layer0.W.get_value(borrow=True) layer0_b = layer0.b.get_value(borrow=True) layer1_w = layer1.W.get_value(borrow=True) layer1_b = layer1.b.get_value(borrow=True) layer2_w = layer2.W.get_value(borrow=True) layer2_b = layer2.b.get_value(borrow=True) sm_w = layer_softmax.W.get_value(borrow=True) sm_b = layer_softmax.b.get_value(borrow=True) h5_l0w = h5py.File('./model/layer0_w.h5', 'w') h5_l0w.create_dataset('layer0_w', data=layer0_w) h5_l0b = h5py.File('./model/layer0_b.h5', 'w') h5_l0b.create_dataset('layer0_b', data=layer0_b) h5_l1w = h5py.File('./model/layer1_w.h5', 'w') h5_l1w.create_dataset('layer1_w', data=layer1_w) h5_l1b = h5py.File('./model/layer1_b.h5', 'w') h5_l1b.create_dataset('layer1_b', data=layer1_b) h5_l2w = h5py.File('./model/layer2_w.h5', 'w') h5_l2w.create_dataset('layer2_w', data=layer2_w) h5_l2b = h5py.File('./model/layer2_b.h5', 'w') h5_l2b.create_dataset('layer2_b', data=layer2_b) h5_smw = h5py.File('./model/softmax_w.h5', 'w') h5_smw.create_dataset('softmax_w', data=sm_w) h5_smb = h5py.File('./model/softmax_b.h5', 'w') h5_smb.create_dataset('softmax_b', data=sm_b) Then load the weights to build a forward pass using Pytorch and Numpy: import torch import numpy as np import torch.nn.functional as F def model(data): conv0_out = F.conv2d(input=np2var(data), weight=np2var(layer0_w), bias=np2var(layer0_b) ) layer0_out = relu(var2np(conv0_out)) conv1_out = F.conv2d(input=np2var(layer0_out), weight=np2var(layer1_w), bias=np2var(layer1_b) ) layer1_out = np.max(relu(var2np(conv1_out)), axis=2) dense_out=relu(np.matmul(layer1_out, layer2_w) + layer2_b) softmax_out = softmax(np.matmul(dense_out, softmax_w) + softmax_b) return softmax_out def relu(x): return x * (x > 0) def np2var(x): return torch.autograd.Variable(torch.from_numpy(x)) def var2np(x): return x.data.numpy() def softmax(x): e_x = np.exp(x - np.max(x)) return e_x / e_x.sum() The input and kernel shapes for conv2d functions are the same for Theano and PyTorch, and network structures in two frameworks are the same. I couldn't detect any errors step by step. What could go wrong here?
Theano uses convolutions (by default, filter_flip=True) while PyTorch uses cross-correlation. So, for every convolutional layer, you need to flip the weights before using them in PyTorch. You can use convert_kernel function from Keras to achieve this result.
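A minimal sketch of the flip, assuming the h5py-loaded kernels are numpy arrays in (out_channels, in_channels, kH, kW) layout like the question's layer0_w:

    import numpy as np

    # Reverse both spatial axes; ascontiguousarray avoids negative strides,
    # which torch.from_numpy cannot handle.
    layer0_w_flipped = np.ascontiguousarray(layer0_w[:, :, ::-1, ::-1])
    # ...then wrap with np2var() and pass to F.conv2d exactly as before.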
https://stackoverflow.com/questions/49447270/
How to correctly give inputs to Embedding, LSTM and Linear layers in PyTorch?
I need some clarity on how to correctly prepare inputs for batch-training using different components of the torch.nn module. Specifically, I'm looking to create an encoder-decoder network for a seq2seq model. Suppose I have a module with these three layers, in order: nn.Embedding nn.LSTM nn.Linear nn.Embedding Input: batch_size * seq_length Output: batch_size * seq_length * embedding_dimension I don't have any problems here, I just want to be explicit about the expected shape of the input and output. nn.LSTM Input: seq_length * batch_size * input_size (embedding_dimension in this case) Output: seq_length * batch_size * hidden_size last_hidden_state: batch_size * hidden_size last_cell_state: batch_size * hidden_size To use the output of the Embedding layer as input for the LSTM layer, I need to transpose axis 1 and 2. Many examples I've found online do something like x = embeds.view(len(sentence), self.batch_size , -1), but that confuses me. How does this view ensure that elements of the same batch remain in the same batch? What happens when len(sentence) and self.batch size are of same size? nn.Linear Input: batch_size x input_size (hidden_size of LSTM in this case or ??) Output: batch_size x output_size If I only need the last_hidden_state of LSTM, then I can give it as input to nn.Linear. But if I want to make use of Output (which contains all intermediate hidden states as well) then I need to change nn.Linear's input size to seq_length * hidden_size and to use Output as input to Linear module I need to transpose axis 1 and 2 of output and then I can view with Output_transposed(batch_size, -1). Is my understanding here correct? How do I carry out these transpose operations in tensors (tensor.transpose(0, 1))?
Your understanding of most of the concepts is accurate, but there are some missing points here and there.

Interfacing embedding to LSTM (or any other recurrent unit)

You have embedding output in the shape of (batch_size, seq_len, embedding_size). Now, there are various ways through which you can pass this to the LSTM.
* You can pass this directly to the LSTM if the LSTM accepts input as batch_first. So, while creating your LSTM, pass the argument batch_first=True.
* Or, you can pass input in the shape of (seq_len, batch_size, embedding_size). To convert your embedding output to this shape, you'll need to transpose the first and second dimensions using torch.transpose(tensor_name, 0, 1), like you mentioned.

Q. I see many examples online which do something like x = embeds.view(len(sentence), self.batch_size, -1), which confuses me.
A. This is wrong. It will mix up batches and you will be trying to learn a hopeless learning task. Wherever you see this, you can tell the author to change this statement and use transpose instead.

There is an argument in favor of not using batch_first, which states that the underlying API provided by Nvidia CUDA runs considerably faster with batch as the second dimension.

Using context size

You are directly feeding the embedding output to the LSTM; this fixes the input size of the LSTM to a context size of 1. This means that if your inputs are words, you will always be giving the LSTM one word at a time. But this is not what we want all the time. So, you need to expand the context size. This can be done as follows:

    # Assuming that embeds is the embedding output and context_size is a defined variable
    embeds = embeds.unfold(1, context_size, 1)  # Keeping the step size to be 1
    embeds = embeds.view(embeds.size(0), embeds.size(1), -1)

(See the unfold documentation.) Now you can proceed as mentioned above to feed this to the LSTM; just remember that seq_len is now changed to seq_len - context_size + 1 and embedding_size (which is the input size of the LSTM) is now changed to context_size * embedding_size.

Using variable sequence lengths

The input size of different instances in a batch will not always be the same. For example, some of your sentences might be 10 words long, some 15, and some 1000. So, you definitely want variable-length sequence input to your recurrent unit. To do this, some additional steps need to be performed before you can feed your input to the network. You can follow these steps:
1. Sort your batch from largest sequence to the smallest.
2. Create a seq_lengths array that defines the length of each sequence in the batch. (This can be a simple python list.)
3. Pad all the sequences to be of equal length to the largest sequence.
4. Create a LongTensor Variable of this batch.
5. Now, after passing the above variable through embedding and creating the proper context-size input, you'll need to pack your sequence as follows:

    # Assuming embeds to be the proper input to the LSTM
    lstm_input = nn.utils.rnn.pack_padded_sequence(embeds, [x - context_size + 1 for x in seq_lengths], batch_first=False)

Understanding the output of the LSTM

Now, once you have prepared your lstm_input according to your needs, you can call the lstm as:

    lstm_outs, (h_t, h_c) = lstm(lstm_input, (h_t, h_c))

Here, (h_t, h_c) needs to be provided as the initial hidden state, and the final hidden state is output. You can see why packing variable-length sequences is required; otherwise the LSTM will run over the unneeded padded words as well.

Now, lstm_outs will be a packed sequence, which is the output of the lstm at every step, and (h_t, h_c) are the final output and the final cell state respectively. h_t and h_c will be of shape (batch_size, lstm_size). You can use these directly for further input, but if you want to use the intermediate outputs as well, you'll need to unpack lstm_outs first, as below:

    lstm_outs, _ = nn.utils.rnn.pad_packed_sequence(lstm_outs)

Now your lstm_outs will be of shape (max_seq_len - context_size + 1, batch_size, lstm_size), and you can extract the intermediate outputs of the lstm according to your need.

Remember that the unpacked output will have 0s after the length of each sequence, which is just padding to match the length of the largest sequence (which is always the first one, as we sorted the input from largest to smallest). Also note that h_t will always be equal to the last element of each sequence's output.

Interfacing the lstm to a linear layer

Now, if you want to use just the final output of the lstm, you can directly feed h_t to your linear layer and it will work. But if you want to use the intermediate outputs as well, then you'll need to figure out how you are going to input these to the linear layer (through some attention network or some pooling). You do not want to input the complete sequence to the linear layer, as different sequences will be of different lengths and you can't fix the input size of the linear layer. And yes, you'll need to transpose the output of the lstm to be further used (again, you cannot use view here).

Ending note: I have purposefully left out some points, such as using bidirectional recurrent cells, using step size in unfold, and interfacing attention, as they can get quite cumbersome and would be out of the scope of this answer.
https://stackoverflow.com/questions/49466894/
How do you use next_functions[0][0] on grad_fn correctly in pytorch?
I was given this nn structure in the official PyTorch tutorial:

    input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
          -> view -> linear -> relu -> linear -> relu -> linear
          -> MSELoss
          -> loss

and an example of how to follow the gradient backwards using the built-in .grad_fn of a Variable:

    print(loss.grad_fn)  # MSELoss
    print(loss.grad_fn.next_functions[0][0])  # Linear
    print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU

So I thought I could reach the grad object for Conv2d by chaining next_functions[0][0] nine times, based on the given example, but I got a "tuple index out of range" error. How can I index these backprop objects correctly?
In the PyTorch CNN tutorial, after running the following from the tutorial:

    output = net(input)
    target = torch.randn(10)      # a dummy target, for example
    target = target.view(1, -1)   # make it the same shape as output
    criterion = nn.MSELoss()

    loss = criterion(output, target)
    print(loss)

the following code snippet will print the full graph:

    def print_graph(g, level=0):
        if g == None: return
        print('*'*level*4, g)
        for subg in g.next_functions:
            print_graph(subg[0], level+1)

    print_graph(loss.grad_fn, 0)
https://stackoverflow.com/questions/49478784/
Pytorch save embeddings as part of encoder class or not
So I'm using pytorch for the first time. I'm trying to save weights to a file. I'm using a Encoder class that has a GRU and a embedding component. I want to make sure when I save the Encoder values that I will get the embedding values. Initially my code uses state_dict() to copy values to a dictionary of my own which I pass to torch.save(). Should I be looking for some way to save this embedding component or is it part of the larger encoder? The Encoder is a subclass of nn.Module . here's a link: http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#sphx-glr-intermediate-seq2seq-translation-tutorial-py def make_state(self, converted=False): if not converted: z = [ { 'epoch':0, 'arch': None, 'state_dict': self.model_1.state_dict(), 'best_prec1': None, 'optimizer': self.opt_1.state_dict(), 'best_loss': self.best_loss }, { 'epoch':0, 'arch':None, 'state_dict':self.model_2.state_dict(), 'best_prec1':None, 'optimizer': self.opt_2.state_dict(), 'best_loss': self.best_loss } ] else: z = [ { 'epoch': 0, 'arch': None, 'state_dict': self.model_1.state_dict(), 'best_prec1': None, 'optimizer': None , # self.opt_1.state_dict(), 'best_loss': self.best_loss }, { 'epoch': 0, 'arch': None, 'state_dict': self.model_2.state_dict(), 'best_prec1': None, 'optimizer': None, # self.opt_2.state_dict(), 'best_loss': self.best_loss } ] #print(z) return z pass def save_checkpoint(self, state=None, is_best=True, num=0, converted=False): if state is None: state = self.make_state(converted=converted) if converted: print(converted, 'is converted.') basename = hparams['save_dir'] + hparams['base_filename'] torch.save(state, basename + '.' + str(num)+ '.pth.tar') if is_best: os.system('cp '+ basename + '.' + str(num) + '.pth.tar' + ' ' + basename + '.best.pth.tar') https://discuss.pytorch.org/t/saving-and-loading-a-model-in-pytorch/2610/3 Here is another link
No, you do not need to save the embedding values explicitly. Saving a model's state_dict will save all the variables pertaining to that model, including the embedding weights. You can see what a state dict contains by looping over it:

    for var_name in model.state_dict():
        print(var_name)
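A minimal sketch of the usual save/load round trip (the file name is arbitrary):

    # saving: the embedding weights travel inside the state_dict
    torch.save(encoder.state_dict(), 'encoder.pth.tar')

    # loading into a freshly constructed Encoder with the same architecture
    encoder.load_state_dict(torch.load('encoder.pth.tar'))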
https://stackoverflow.com/questions/49500089/
Masking diagonal to a specific value with PyTorch tensors
How do I fill the diagonal with a value in torch? In numpy you can do:

    a = np.zeros((3, 3), int)
    np.fill_diagonal(a, 5)

    array([[5, 0, 0],
           [0, 5, 0],
           [0, 0, 5]])

I know that torch.diag() returns the diagonal, but how to use this as a mask to assign new values is beyond me. I haven't been able to find the answer here or in the PyTorch documentation.
You can do this in PyTorch using fill_diagonal_:

    >>> a = torch.zeros(3, 3)
    >>> a.fill_diagonal_(5)
    tensor([[5., 0., 0.],
            [0., 5., 0.],
            [0., 0., 5.]])
https://stackoverflow.com/questions/49512313/
Exploding loss in pyTorch
I am trying to train a latent space model in pytorch. The model is relatively simple and just requires me to minimize my loss function but I am getting an odd error. After running for a short while the loss suddenly explodes upwards. import numpy as np import scipy.sparse.csgraph as csg import torch from torch.autograd import Variable import torch.autograd as autograd import matplotlib.pyplot as plt %matplotlib inline def cmdscale(D): # Number of points n = len(D) # Centering matrix H = np.eye(n) - np.ones((n, n))/n # YY^T B = -H.dot(D**2).dot(H)/2 # Diagonalize evals, evecs = np.linalg.eigh(B) # Sort by eigenvalue in descending order idx = np.argsort(evals)[::-1] evals = evals[idx] evecs = evecs[:,idx] # Compute the coordinates using positive-eigenvalued components only w, = np.where(evals > 0) L = np.diag(np.sqrt(evals[w])) V = evecs[:,w] Y = V.dot(L) return Y, evals Y = np.array([[0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 1., 1., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0.], [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.], [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [0., 1., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1.], [0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.], [1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 1.], [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.], [0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.], [0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 1.], [0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0.], [0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0.], [0., 0., 0., 0., 0., 0., 1., 0., 1., 0., 0., 1., 0., 0., 0.]]) temp = Y[~np.all(Y == 0, axis=1)] temp = temp[:,~np.all(Y == 0, axis=1)] Y = temp n = np.shape(Y)[0] k = 2 D = csg.shortest_path(Y, directed=True) Z = cmdscale(D)[0][:,0:k] Z = Z - Z.mean(axis=0, keepdims=True) tZ = autograd.Variable(torch.Tensor(Z), requires_grad=True) B = autograd.Variable(torch.Tensor([0]), requires_grad=True) tY = torch.autograd.Variable(torch.Tensor(Y), requires_grad=False) #calculating pairwise euclidean distance def distMatrix(m): n = m.size(0) d = m.size(1) x = m.unsqueeze(1).expand(n, n, d) y = m.unsqueeze(0).expand(n, n, d) return torch.sqrt(torch.pow(x - y, 2).sum(2) + 1e-4) def loss(tY): d = -distMatrix(tZ)+B sigmoidD = torch.sigmoid(d) #removing diagonal reduce = tY*torch.log(sigmoidD)+(1-tY)*torch.log(1-sigmoidD) reduce[torch.eye(n).byte()] = 0 return -reduce.sum() losses = [] learning_rate = 1e-4 l = loss(tY) stepSize = 1000 for i in range(stepSize): l.backward(retain_graph=True) losses.append(float(loss(tY))) tZ.data = tZ.data - learning_rate * tZ.grad.data B.data = B.data - learning_rate * B.grad.data tZ.grad.data.zero_() B.grad.data.zero_() plt.subplot(122) plt.plot(losses) plt.title('Loss') plt.xlabel('Iteration') plt.ylabel('loss') plt.show() shouldnt the loss keep going down? or atleast converge to some point? I must've done something wrong, I am new to pytorch, any hints or nudges in the right direction would be highly appreciated!
The issue was that I defined my loss l = loss(tY) outside of the loop that ran and updated my gradients. Moving the loss computation inside the loop solved the problem, and the loss then decreased as expected. (With the definition outside the loop, every l.backward(retain_graph=True) call backpropagates the loss of the very first iteration, so the gradients never reflect the updated parameters and the updates keep stepping in a fixed direction until the loss blows up.)
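A sketch of the repaired loop: the loss (and therefore the gradient) is recomputed from the current tZ and B on every iteration, so retain_graph is no longer needed:

    for i in range(stepSize):
        l = loss(tY)                      # fresh graph from the current parameters
        l.backward()
        losses.append(float(l))
        tZ.data -= learning_rate * tZ.grad.data
        B.data -= learning_rate * B.grad.data
        tZ.grad.data.zero_()
        B.grad.data.zero_()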
https://stackoverflow.com/questions/49518666/
Force GPU memory limit in PyTorch
Is there a way to force a maximum value for the amount of GPU memory that I want to be available for a particular Pytorch instance? For example, my GPU may have 12Gb available, but I'd like to assign 4Gb max to a particular process.
Update (04-MAR-2021): it is now available in the stable 1.8.0 version of PyTorch (see the docs). Original answer follows.

This feature request has been merged into the PyTorch master branch, but has not yet appeared in a stable release. It is exposed as set_per_process_memory_fraction:

Set memory fraction for a process. The fraction is used to limit the caching allocator's allocated memory on a CUDA device. The allowed value equals the total visible memory multiplied by the fraction. Trying to allocate more than the allowed value in a process will raise an out-of-memory error in the allocator.

You can check the tests for usage examples.
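A usage sketch (PyTorch >= 1.8), capping the current process at roughly 4 GB of a 12 GB card:

    import torch

    torch.cuda.set_per_process_memory_fraction(4.0 / 12.0, device=0)
    # Allocations beyond the cap now raise a CUDA out-of-memory error, e.g.:
    # torch.empty(int(5e9), dtype=torch.uint8, device='cuda')  # would fail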
https://stackoverflow.com/questions/49529372/
Pytorch errors when given numpy integer types only in python 3 (not in python 2)
For example, the torch.randn function, among others, gets mad when given a numpy.int64 type: Python 3.5.5 |Anaconda custom (64-bit)| (default, Mar 12 2018, 23:12:44) [GCC 7.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> import numpy >>> torch.randn(numpy.int64(4)) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: torch.randn received an invalid combination of arguments - got (numpy.int64), but expected one of: * (int ... size) didn't match because some of the arguments have invalid types: (numpy.int64) * (torch.Size size) didn't match because some of the arguments have invalid types: (numpy.int64) * (torch.Generator generator, int ... size) * (torch.Generator generator, torch.Size size) But in python 2, this works just fine: Python 2.7.14 |Anaconda, Inc.| (default, Dec 7 2017, 17:05:42) [GCC 7.2.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> import numpy >>> torch.randn(numpy.int64(3)) -2.0513 0.5409 -0.0814 [torch.FloatTensor of size 3] I couldn't find anyone else running into this issue. Is this known? Is this something about my setup? Is there any way to work around this without completely dropping numpy? I'm using version 0.3.1 of pytorch and version 1.14.2 of numpy.
On Python 2, on an OS where a C long is 64-bit, numpy.int64 is a subclass of int, so most things that want ints will accept numpy.int64 even if they're not written to handle int-like types. On Python 3, that doesn't happen any more. If you need to use a library that wants real ints, call int:

    torch.randn(int(some_numpy_integer))
https://stackoverflow.com/questions/49545988/
PyTorch: Variable data has to be a tensor -- data is already tensors
I am trying to load data using PyTorch's Dataset and DataLoader classes. I use torch.from_numpy to convert each array to a tensor in the torch Dataset, and from looking into the data, each X and y is indeed a tensor:

    # At this point dataset is {'X': numpy array of arrays, 'y': numpy array of arrays}
    class TorchDataset(torch.utils.data.Dataset):
        def __init__(self, dataset):
            self.X_train = torch.from_numpy(dataset['X'])
            self.y_train = torch.from_numpy(dataset['y'])

        def __len__(self):
            return len(self.X_train)

        def __getitem__(self, index):
            return {'X': self.X_train[index], 'y': self.y_train[index]}

    torch_dataset = TorchDataset(dataset)
    dataloader = DataLoader(torch_dataset, batch_size=4, shuffle=True, num_workers=4)

    for epoch in range(num_epochs):
        for X, y in enumerate(dataloader):
            features = Variable(X)
            labels = Variable(y)
            ....

However, on features = Variable(X) I get:

    RuntimeError: Variable data has to be a tensor, but got int

An example of an X and y in the dataset:

    In [1]: torch_dataset[1]
    Out[1]:
    {'X':
    -2.5908 -3.1123 -2.9460  ...  -3.9898 -4.0000 -3.9975
    -3.0867 -2.9992 -2.5254  ...  -4.0000 -4.0000 -4.0000
    -2.7665 -2.5318 -2.7035  ...  -4.0000 -4.0000 -4.0000
               ...            ⋱            ...
    -2.4784 -2.6061 -1.6280  ...  -4.0000 -4.0000 -4.0000
    -2.2046 -2.1778 -1.5626  ...  -3.9597 -3.9366 -3.9497
    -1.9623 -1.9468 -1.5352  ...  -3.8485 -3.8474 -3.8474
    [torch.DoubleTensor of size 1024x1024],
    'y': 107 [torch.LongTensor of size 1]}

which is why it is very confusing to me that torch thinks X is an int. Any help would be much appreciated - thanks!
The error is in your use of enumerate: the first return value of enumerate is the batch index, not the actual data. There are two ways you can make your script work.

First way

Since your X and y do not need any special processing, you can just return a sample directly. Change your __getitem__ method to:

    def __getitem__(self, index):
        return self.X_train[index], self.y_train[index]

Also, change your training loop a little bit:

    for epoch in range(num_epochs):
        for batch_id, (x, y) in enumerate(dataloader):
            x = Variable(x)
            y = Variable(y)
            # then do whatever you want to do

Second way

You can return a dict in the __getitem__ method and extract the actual data in the training loop. In this case, you do not need to change the __getitem__ method. Just change your training loop:

    for epoch in range(num_epochs):
        for batch_id, data in enumerate(dataloader):
            # data will be a dict
            x = Variable(data['X'])
            y = Variable(data['y'])
            # then do whatever you want to do
https://stackoverflow.com/questions/49583041/
How does PyTorch module do the back prop
While following the instructions on extending PyTorch - adding a module, I noticed while extending Module, we don't really have to implement the backward function. The only thing we need is to apply the Function instance in the forward function and PyTorch can automatically call the backward one in the Function instance when doing the back prop. This seems like magic to me as we didn't even register the Function instance we used. I looked into the source code but didn't find anything related. Could anyone kindly point me out a place that all those actually happened?
Not having to implement backward() is the reason PyTorch, or any other DL framework, is so valuable. In fact, implementing backward() should only be done in very specific cases where you need to mess with the network's gradient (or when you create a custom Function that can't be expressed using PyTorch's built-in functions).

PyTorch computes backward gradients using a computational graph which keeps track of what operations have been done during your forward pass. Any operation done on a Variable implicitly gets registered here. Then it's a matter of traversing the graph backward from the variable where backward was called, and applying the derivative chain rule to compute the gradients.

PyTorch's About page has a nice visualization of the graph and how it generally works. I'd also recommend looking up compute graphs and the autograd mechanism on Google if you want more details.

EDIT: The source code where all this happens would be in the C part of PyTorch's codebase, where the actual graph is implemented. After some digging around, I found this:

    /// Evaluates the function on the given inputs and returns the result of the
    /// function call.
    variable_list operator()(const variable_list& inputs) {
        profiler::RecordFunction rec(this);
        if (jit::tracer::isTracingVar(inputs)) {
            return traced_apply(inputs);
        }
        return apply(inputs);
    }

So in each Function, PyTorch first checks if its inputs need tracing, and performs traced_apply() as implemented here. You can see the node being created and appended to the graph:

    // Insert a CppOp in the trace.
    auto& graph = state->graph;
    std::vector<VariableFlags> var_flags;
    for (auto& input : inputs) {
        var_flags.push_back(VariableFlags::of(input));
    }
    auto* this_node = graph->createCppOp(get_shared_ptr(), std::move(var_flags));
    // ...
    for (auto& input : inputs) {
        this_node->addInput(tracer::getValueTrace(state, input));
    }
    graph->appendNode(this_node);

My best guess here is that every Function object registers itself and its inputs (if needed) upon execution. Every non-functional call (e.g. variable.dot()) simply defers to the corresponding function, so this still applies.

NOTE: I don't take part in PyTorch's development and am in no way an expert on its architecture. Any corrections or additions would be welcome.
https://stackoverflow.com/questions/49594858/
Find a GPU with enough memory
I want to programmatically find out the available GPUs and their current memory usage, and use one of the GPUs based on memory availability. I want to do this in PyTorch. I have seen the following solution in this post:

    import torch.cuda as cutorch

    for i in range(cutorch.device_count()):
        if cutorch.getMemoryUsage(i) > MEM:
            opts.gpuID = i
            break

but it is not working in PyTorch 0.3.1 (there is no function called getMemoryUsage). I am interested in a PyTorch-based solution (using the library functions). Any help would be appreciated.
On the page you link to, there is already an answer:

    #!/usr/bin/env python
    # encoding: utf-8

    import subprocess


    def get_gpu_memory_map():
        """Get the current gpu usage.

        Returns
        -------
        usage: dict
            Keys are device ids as integers.
            Values are memory usage as integers in MB.
        """
        result = subprocess.check_output(
            [
                'nvidia-smi', '--query-gpu=memory.used',
                '--format=csv,nounits,noheader'
            ]).decode('utf-8')
        # Convert lines into a dictionary
        gpu_memory = [int(x) for x in result.strip().split('\n')]
        gpu_memory_map = dict(zip(range(len(gpu_memory)), gpu_memory))
        return gpu_memory_map

    print(get_gpu_memory_map())
https://stackoverflow.com/questions/49595663/
How to resolve runtime error due to size mismatch in PyTorch?
I am trying to implement a simple autoencoder using PyTorch. My dataset consists of 256 x 256 x 3 images. I have built a torch.utils.data.dataloader.DataLoader object which has the image stored as tensor. When I run the autoencoder, I get a runtime error: size mismatch, m1: [76800 x 256], m2: [784 x 128] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1518371252923/work/torch/lib/TH/generic/THTensorMath.c:1434 These are my hyperparameters: batch_size=100, learning_rate = 1e-3, num_epochs = 100 Following is the architecture of my auto-encoder: class autoencoder(nn.Module): def __init__(self): super(autoencoder, self).__init__() self.encoder = nn.Sequential( nn.Linear(3*256*256, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU(True), nn.Linear(64, 12), nn.ReLU(True), nn.Linear(12, 3)) self.decoder = nn.Sequential( nn.Linear(3, 12), nn.ReLU(True), nn.Linear(12, 64), nn.ReLU(True), nn.Linear(64, 128), nn.Linear(128, 3*256*256), nn.ReLU()) def forward(self, x): x = self.encoder(x) #x = self.decoder(x) return x This is the code I used to run the model: for epoch in range(num_epochs): for data in dataloader: img = data['image'] img = Variable(img) # ===================forward===================== output = model(img) loss = criterion(output, img) # ===================backward==================== optimizer.zero_grad() loss.backward() optimizer.step() # ===================log======================== print('epoch [{}/{}], loss:{:.4f}' .format(epoch+1, num_epochs, loss.data[0])) if epoch % 10 == 0: pic = show_img(output.cpu().data) save_image(pic, './dc_img/image_{}.jpg'.format(epoch))
If your input is 3 x 256 x 256, then you need to convert it to B x N to pass it through the linear layer nn.Linear(3*256*256, 128), where B is the batch_size and N is the linear layer's input size. If you are giving one image at a time, you can convert your input tensor of shape 3 x 256 x 256 to 1 x (3*256*256) as follows:

    img = img.view(1, -1)   # converts [3 x 256 x 256] to [1 x 196608]
    output = model(img)
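For a whole batch rather than a single image, the usual idiom keeps the batch dimension and flattens the rest:

    img = img.view(img.size(0), -1)   # [B, 3, 256, 256] -> [B, 196608]
    output = model(img)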
https://stackoverflow.com/questions/49606482/
Pytorch, pipenv and numpy support?
I am using pytorch and installed it follow these instructions: pipenv install git+https://github.com/pytorch/pytorch#egg=pytorch pipenv install git+https://github.com/pytorch/vision#egg=torchvision in order to install pytorch with pipenv. This runs some of the examples, but for instance, this MNIST code: from __future__ import print_function import argparse import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms from torch.autograd import Variable # Training settings parser = argparse.ArgumentParser(description='PyTorch MNIST Example') parser.add_argument('--batch-size', type=int, default=64, metavar='N', help='input batch size for training (default: 64)') parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N', help='input batch size for testing (default: 1000)') parser.add_argument('--epochs', type=int, default=10, metavar='N', help='number of epochs to train (default: 10)') parser.add_argument('--lr', type=float, default=0.01, metavar='LR', help='learning rate (default: 0.01)') parser.add_argument('--momentum', type=float, default=0.5, metavar='M', help='SGD momentum (default: 0.5)') parser.add_argument('--no-cuda', action='store_true', default=False, help='disables CUDA training') parser.add_argument('--seed', type=int, default=1, metavar='S', help='random seed (default: 1)') parser.add_argument('--log-interval', type=int, default=10, metavar='N', help='how many batches to wait before logging training status') args = parser.parse_args() args.cuda = not args.no_cuda and torch.cuda.is_available() torch.manual_seed(args.seed) if args.cuda: torch.cuda.manual_seed(args.seed) kwargs = {'num_workers': 1, 'pin_memory': True} if args.cuda else {} train_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=True, download=True, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args.batch_size, shuffle=True, **kwargs) test_loader = torch.utils.data.DataLoader( datasets.MNIST('../data', train=False, transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=args.test_batch_size, shuffle=True, **kwargs) class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x, dim=1) model = Net() if args.cuda: model.cuda() optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum) def train(epoch): model.train() for batch_idx, (data, target) in enumerate(train_loader): if args.cuda: data, target = data.cuda(), target.cuda() data, target = Variable(data), Variable(target) optimizer.zero_grad() output = model(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % args.log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. 
* batch_idx / len(train_loader), loss.data[0])) def test(): model.eval() test_loss = 0 correct = 0 for data, target in test_loader: if args.cuda: data, target = data.cuda(), target.cuda() data, target = Variable(data, volatile=True), Variable(target) output = model(data) test_loss += F.nll_loss(output, target, size_average=False).data[0] # sum up batch loss pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability correct += pred.eq(target.data.view_as(pred)).long().cpu().sum() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) for epoch in range(1, args.epochs + 1): train(epoch) test() Does not run giving the error: RuntimeError: PyTorch was compiled without NumPy support Running: pipenv install numpy does not resolve the error. Is there a way to use pipenv and install pytorch with numpy support?
I managed to get it working. I slightly changed:

    pipenv install git+https://github.com/pytorch/pytorch#egg=pytorch
    pipenv install git+https://github.com/pytorch/vision#egg=torchvision

to:

    pipenv install http://download.pytorch.org/whl/cu91/torch-0.3.1-cp36-cp36m-linux_x86_64.whl
    pipenv install git+https://github.com/pytorch/vision#egg=torchvision

The first command is what pytorch.org recommends when I select Linux, pip, Python 3.6, and CUDA 9.1. For some reason, pipenv install torchvision doesn't work. For reference, here is the resulting Pipfile:

    [[source]]
    url = "https://pypi.python.org/simple"
    verify_ssl = true
    name = "pypi"

    [packages]
    "633048b" = {file = "http://download.pytorch.org/whl/cu91/torch-0.3.1-cp36-cp36m-linux_x86_64.whl"}
    torchvision = {git = "https://github.com/pytorch/vision"}

    [dev-packages]
https://stackoverflow.com/questions/49619952/
What's the difference between reshape and view in pytorch?
In numpy, we use ndarray.reshape() for reshaping an array. I noticed that in pytorch, people use torch.view(...) for the same purpose, but at the same time, there is also a torch.reshape(...) existing. So I am wondering what the differences are between them and when I should use either of them?
torch.view has existed for a long time. It will return a tensor with the new shape. The returned tensor will share the underlying data with the original tensor. See the documentation here. On the other hand, it seems that torch.reshape has been introduced recently in version 0.4. According to the document, this method will Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior. It means that torch.reshape may return a copy or a view of the original tensor. You cannot count on it to return a view or a copy. According to the developer: if you need a copy use clone() if you need the same storage use view(). The semantics of reshape() are that it may or may not share the storage and you don't know beforehand. Another difference is that reshape() can operate on both contiguous and non-contiguous tensors while view() can only operate on contiguous tensors. Also see here about the meaning of contiguous.
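A quick sketch of the practical difference (assuming PyTorch >= 0.4, where reshape exists):
import torch
x = torch.arange(6).reshape(2, 3)
t = x.t()                        # the transpose is non-contiguous
print(t.reshape(6))              # works: reshape silently copies when it has to
print(t.contiguous().view(6))    # the explicit view route
# t.view(6) by itself raises a RuntimeError because t is non-contiguous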
https://stackoverflow.com/questions/49643225/
Downloading pretrained TimeNet RNN?
I've been looking for the pretrained net behind TimeNet - a resnet equivalent for time series analysis via RNNs. There are some articles and several publications of the same paper, but the weights and architecture themselves are not easy to find. Does anyone know how to find it? Is it available to the public?
It looks like someone answered this in the top response to this Quora question: Why are there no pre-trained RNN models? It's not available to the public just yet.
https://stackoverflow.com/questions/49661028/
pytorch - net.cuda() doesn't seem to work
I wrote a cnn module to do digit recognition using pytorch, then try to train the network with gpu but got following error. Traceback (most recent call last): File "main.py", line 51, in <module> outputs = cnn(inputs) File "/home/daniel/anaconda3/envs/pytorch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "/home/daniel/Code/kaggle-competitions/digit-recognizer/Net.py", line 40, in forward x = self.pool(F.relu(self.conv[i](x))) File "/home/daniel/anaconda3/envs/pytorch/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in __call__ result = self.forward(*input, **kwargs) File "/home/daniel/anaconda3/envs/pytorch/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 282, in forward self.padding, self.dilation, self.groups) File "/home/daniel/anaconda3/envs/pytorch/lib/python3.5/site-packages/torch/nn/functional.py", line 90, in conv2d return f(input, weight, bias) RuntimeError: Input type (CUDAFloatTensor) and weight type (CPUFloatTensor) should be the same here is my source code It seems that cnn.cuda() didn't work properly because I got the same error after removing it. But I have no idea how to fix it.
Daniel's answer to his own question seems to be correct. The problem is indeed that modules are not recognized if they are appended to a list. However, Pytorch also provides built-in solutions to this problem: nn.ModuleList and nn.ModuleDict are two container types that keep track of the added content and their parameters. Both have the same functionality as their Python equivalents, but the dictionary uses named arguments and can be used to keep track of for example task-specific layers. A working example would be: self.conv = torch.nn.ModuleList() self.conv.append(nn.Conv2d(1, channels[0], kernel_sizes[0])) self.conv_img_size = math.floor((self.conv_img_size - (kernel_sizes[0]-1))/2) for i in range(1, self.conv_layer_size): self.conv.append(nn.Conv2d(channels[i-1], channels[i], kernel_sizes[i])) self.conv_img_size = math.floor((self.conv_img_size - (kernel_sizes[i]-1))/2) # Modules are automatically added to the model parameters
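Since the answer mentions nn.ModuleDict (available in later PyTorch releases), a minimal sketch of the dictionary variant with hypothetical task names:
import torch
import torch.nn as nn
features = torch.randn(4, 64)
heads = nn.ModuleDict({
    'digits': nn.Linear(64, 10),   # hypothetical task-specific layers
    'parity': nn.Linear(64, 2),
})
# parameters of both layers are registered automatically; lookup is by name:
out = heads['digits'](features)
print(out.shape)                   # torch.Size([4, 10])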
https://stackoverflow.com/questions/49675499/
Resolved package not found
When I try to execute conda env create -f virtual_platform_windows.yml it shows ResolvePackageNotFound: - pytorch==0.1.12=py35_0.1.12cu80 I tried installing pytorch for windows and the error still comes. How do I solve this?
Open: virtual_platform_windows.yml in Notepad Delete: - pytorch=0.1.12=py35_0.1.12cu80 Delete: - torch==0.1.12 Save Using Anaconda prompt: Execute the code: conda env create -f virtual_platform_windows.yml Activate virtual environment: source activate virtual_platform Install Pytorch separately with conda install -c peterjc123 pytorch cuda80 Copy haas files into script directory
https://stackoverflow.com/questions/49680427/
PyTorch - WSD using LSTM
I'm trying to replicate Google's research paper on WSD with neural models using PyTorch. I'm having some issues traying to overfit the model before training on large datasets. Using this training set: The film was also intended to be the first in a trilogy. this model definition: class WordGuesser(nn.Module): def __init__(self, hidden_dim, context_dim, embedding_dim, vocabulary_dim, batch_dim, window_dim): super(WordGuesser, self).__init__() self.hidden_dim = hidden_dim self.batch_dim = batch_dim self.window_dim = window_dim self.word_embeddings = nn.Embedding(vocabulary_dim, embedding_dim) self.lstm = nn.LSTM(embedding_dim, hidden_dim) #self.extract_context = nn.Linear((2 * window_dim + 1) * hidden_dim, context_dim) self.extract_context = nn.Linear(hidden_dim, context_dim) self.predict = nn.Linear(context_dim, vocabulary_dim) self.hidden = self.init_hidden() def init_hidden(self): return (autograd.Variable(torch.zeros(1, self.batch_dim, self.hidden_dim).cuda()), autograd.Variable(torch.zeros(1, self.batch_dim, self.hidden_dim).cuda())) def forward(self, sentence, hidden): embeddings = self.word_embeddings(sentence) out, self.hidden = self.lstm(embeddings.permute(1, 0, 2), hidden) lstm_out = out[-1] context = self.extract_context(lstm_out) prediction = self.predict(context) return prediction, context and this training routine: num_epoch = 100 hidden_units = 512 embedding_dim = 256 context_dim = 256 def mytrain(): lines = open('training/overfit.txt').readlines() sentences = data.split_to_sentences(lines) #uses spaCy to detect sentences from each line word2idx=dict() #dictionary is built from the training set idx2word =dict() i = 0 for s in sentences: for t in s.split(' '): if t in word2idx: continue word2idx[t] = i idx2word[i] = t i += 1 word2idx['$'] = i #the token to guess the missing word in a sentence idx2word[i] = '$' X = list() Y = list() for sentence in sentences: sentence = sentence.split(' ') for i in range(len(sentence)): newsentence = list(sentence) newsentence[i] = '$' if not sentence[i] in word2idx: continue indices = [word2idx[w] for w in newsentence] label = word2idx[sentence[i]] X.append(indices) Y.append(label) model = WordGuesser(hidden_units, context_dim, embedding_dim, len(word2idx), len(X), len(X[0])) model.train() model.cuda() input = torch.LongTensor(X).cuda() output = torch.LongTensor(Y).cuda() criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.01) model.hidden = model.init_hidden() for epoch in range(num_epoch): model.hidden = model.init_hidden() model.zero_grad() input_tensor = autograd.Variable(input) target_tensor = autograd.Variable(output) predictions, context = model(input_tensor, model.hidden) for i, prediction in enumerate(predictions): sorted_val = sorted(enumerate(np.array(prediction.data)), key=lambda x : x[1], reverse=True) print([(idx2word[x[0]], x[1]) for x in sorted_val[:5]], idx2word[Y[i]]) loss = criterion(predictions, target_tensor) loss.backward() optimizer.step() print(epoch, loss.data[0]) torch.save(model, "train2.pt") during the training it seems that the model is able to overfit just after the 21st epoch, as you can see from the following scores (top 5 words from the predictions and the last word in a line is the label for that sentence): [('The', 11.362326), ('film', 11.356865), ('also', 7.5573149), ('to', 5.3518314), ('intended', 4.3520432)] The [('film', 11.073805), ('The', 10.451499), ('also', 7.5498624), ('was', 4.9684553), ('be', 4.0730805)] film [('was', 11.232123), ('also', 9.9741745), ('the', 6.0156212), 
('be', 4.9949703), ('The', 4.5516477)] was [('also', 9.6998224), ('was', 9.6202812), ('The', 6.345758), ('film', 4.9122157), ('be', 2.6727715)] also [('intended', 18.344809), ('to', 16.410078), ('film', 10.147289), ('The', 9.8423424), ('$', 9.6181822)] intended [('to', 12.442947), ('intended', 10.900065), ('film', 8.2598763), ('The', 8.0493736), ('$', 4.4901967)] to [('be', 12.189278), ('also', 7.7172523), ('was', 7.5415096), ('the', 5.2521734), ('The', 4.1723843)] be [('the', 15.59604), ('be', 9.3750105), ('first', 8.9820032), ('was', 8.6859236), ('also', 5.0665498)] the [('first', 10.191225), ('the', 5.1829329), ('in', 3.6020348), ('be', 3.4108081), ('a', 1.5569853)] first [('in', 14.731103), ('first', 9.3131113), ('a', 5.982264), ('trilogy', 4.2928643), ('be', 0.49548936)] in [('a', 14.357709), ('in', 8.3088198), ('trilogy', 6.3918238), ('first', 6.2178354), ('intended', 0.95656234)] a [('trilogy', 14.351434), ('a', 4.5073452), ('in', 4.2348137), ('$', 3.7552347), ('intended', 3.5101018)] trilogy [('.', 18.152126), ('$', 12.028764), ('to', 9.6003456), ('intended', 8.1202478), ('The', 4.9225812)] . When running another Python script which loads the model and queries it for the following words (using the same code to print out the scores during the training): The film was also intended to $ the first in a trilogy. be The film $ also intended to be the first in a trilogy. was $ film was also intended to be the first in a trilogy. The I'm getting these scores: [('film', 24.066889), ('$', 20.107487), ('was', 16.855488), ('a', 12.969441), ('in', 8.1248817)] be [('film', 24.089062), ('$', 20.116539), ('was', 16.891994), ('a', 12.982826), ('in', 8.1167336)] was [('film', 23.993624), ('$', 20.108011), ('was', 16.891005), ('a', 12.960193), ('in', 8.1577587)] The I've also tried setting to False the model.train() mode, using model.eval() as well as calling topk on the LSTM scores, but the results aren't satifying,
Solved it by saving only the model's state_dict() via torch.save() and then loading it back in the evaluation phase using model.load_state_dict(). Furthermore, I wrapped the sentence querying loop in another loop, acting as a warm-up (got it from here) and once it was at its last time looping, I set model.eval() and printed the scores, which turned out to be correct.
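For reference, a minimal sketch of that pattern, reusing the names from the question:
torch.save(model.state_dict(), "train2.pt")    # save the weights only, not the pickled class
model = WordGuesser(hidden_units, context_dim, embedding_dim, len(word2idx), len(X), len(X[0]))
model.load_state_dict(torch.load("train2.pt"))
model.eval()                                   # disable dropout before querying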
https://stackoverflow.com/questions/49707613/
PyTorch / Gensim - How do I load pre-trained word embeddings?
I want to load a pre-trained word2vec embedding with gensim into a PyTorch embedding layer. How do I get the embedding weights loaded by gensim into the PyTorch embedding layer?
I just wanted to report my findings about loading a gensim embedding with PyTorch. Solution for PyTorch 0.4.0 and newer: From v0.4.0 there is a new function from_pretrained() which makes loading an embedding very comfortable. Here is an example from the documentation. import torch import torch.nn as nn # FloatTensor containing pretrained weights weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]]) embedding = nn.Embedding.from_pretrained(weight) # Get embeddings for index 1 input = torch.LongTensor([1]) embedding(input) The weights from gensim can easily be obtained by: import gensim model = gensim.models.KeyedVectors.load_word2vec_format('path/to/file') weights = torch.FloatTensor(model.vectors) # formerly syn0, which is soon deprecated As noted by @Guglie: in newer gensim versions the weights can be obtained from model.wv: weights = torch.FloatTensor(model.wv.vectors) Solution for PyTorch version 0.3.1 and older: I'm using version 0.3.1 and from_pretrained() isn't available in this version. Therefore I created my own from_pretrained so I can also use it with 0.3.1. Code for from_pretrained for PyTorch versions 0.3.1 or lower: def from_pretrained(embeddings, freeze=True): assert embeddings.dim() == 2, \ 'Embeddings parameter is expected to be 2-dimensional' rows, cols = embeddings.shape embedding = torch.nn.Embedding(num_embeddings=rows, embedding_dim=cols) embedding.weight = torch.nn.Parameter(embeddings) embedding.weight.requires_grad = not freeze return embedding The embedding can then be loaded just like this: embedding = from_pretrained(weights) I hope this is helpful for someone.
https://stackoverflow.com/questions/49710537/
Error Utilizing Pytorch Transforms and Custom Dataset
This question mainly concerns the return value of __getitem__ in a pytorch Dataset which I've seen as both a tuple and a dict in the source code. I have been following this tutorial for creating a dataset class within my code, which is following this tutorial on transfer learning. It has the following definition of a dataset. class FaceLandmarksDataset(Dataset): """Face Landmarks dataset.""" def __init__(self, csv_file, root_dir, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.landmarks_frame = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.landmarks_frame) def __getitem__(self, idx): img_name = os.path.join(self.root_dir, self.landmarks_frame.iloc[idx, 0]) image = io.imread(img_name) landmarks = self.landmarks_frame.iloc[idx, 1:].as_matrix() landmarks = landmarks.astype('float').reshape(-1, 2) sample = {'image': image, 'landmarks': landmarks} if self.transform: sample = self.transform(sample) return sample As you can see, __getitem__ returns a dictionary with two entries. In the transfer learning tutorial, the following calls are made to transform a dataset: data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } data_dir = 'hymenoptera_data' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} class_names = image_datasets['train'].classes use_gpu = torch.cuda.is_available() inputs, classes = next(iter(dataloaders['train'])) That last line of code causes an error in my code by attempting to run transform on a sample in my custom dataset. 'dict' object has no attribute 'size' But if the tutorial dataset is implemented correctly, shouldn't it function correctly with a transform? My own hybrid implementation is below: import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler from torch.autograd import Variable import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time import os import copy from torch.utils.data import * from skimage import io, transform plt.ion() class NumsDataset(Dataset): """Face Landmarks dataset.""" def __init__(self, root_dir, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. 
""" self.docs = [] for file in os.listdir(root_dir): #print(file) if file.endswith(".txt"): path = os.path.join(root_dir, file) with open(path, 'r') as f: self.docs.append( ( file , list(f.read()) ) ) #tup containing file, image values pairs self.root_dir = root_dir self.transform = transform def __len__(self): #returns number of images i = 0 for j in self.docs: i += len(j[1]) return i def len2(self): #returns number of batches return len(self.docs) def __getitem__(self, idx): idx1 = idx // self.len2() idx2 = idx % self.len2() imglabel = self.docs[idx1][0] #label with filename for batch error calculation later imgdir = os.path.join(self.root_dir, self.docs[idx1][0].strip(".txt")) img = None l = idx2 for file in os.listdir(imgdir): file = os.path.join(imgdir, file) if(l == 0): img = io.imread(file) l -= 1 sample = (img , imglabel) sample ={'image': img, 'label': imglabel} if self.transform: sample = self.transform(sample) return sample data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } data_dir = "images" image_datasets = {x: NumsDataset(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=5) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} class_names = ["one", "two", "four"] use_gpu = torch.cuda.is_available() # Get a batch of training data inputs, classes = next(iter(dataloaders['train'])) directory structure: images /train /file1 *.jpg /file2... *.jpg file1.txt file2.txt... /val /file1 *.jpg /file2... *.jpg file1.txt file2.txt... Is the sample I'm returning formatted incorrectly?
The particular way the tutorial on dataloading uses the custom dataset is with self-defined transforms. Because __getitem__ returns a dict, the samples are not compatible with the built-in torchvision transforms, which expect a PIL Image or tensor rather than a dict; that is where the 'dict' object has no attribute 'size' error comes from. As such, either the dataset must output a sample compatible with the library transform functions (for example, return the image alone or an (image, label) tuple and apply the transform to the image only), or transforms must be defined for the particular sample case. Choosing the latter, among other things, resulted in completely functional code.
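A minimal sketch of the second option, a dict-aware transform in the style of the dataloading tutorial (it assumes the 'image' entry is a HxWxC numpy array, as io.imread returns):
import torch
class SampleToTensor(object):
    # converts only the 'image' entry, leaving the label untouched
    def __call__(self, sample):
        image, label = sample['image'], sample['label']
        image = image.transpose((2, 0, 1))   # numpy HWC -> torch CHW
        return {'image': torch.from_numpy(image), 'label': label}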
https://stackoverflow.com/questions/49717876/
pytorch doesn't give expected output
Firstly, a bunch of data is classified by the CNN model. Then, I'm trying to make prediction on correctly classified data from first step, which is expected to give an accuracy of 100%. However, I found the result is unstable, sometimes 99+%, but not 100%. Is there anybody know what is the problem with my code? Thank you very much in advance, it has troubled me several days ~ ~ torch.version '0.3.1.post2' import numpy as np import torch import torch.nn as nn from torch.autograd import Variable n = 2000 data = np.random.randn(n, 1, 10, 10) label = np.random.randint(2, size=(n, )) def test_pred(model, data_test, label_test): data_batch = data_test labels_batch = label_test images = torch.autograd.Variable(torch.FloatTensor(data_batch)) labels = torch.autograd.Variable(torch.FloatTensor(labels_batch)) outputs = model(images) _, predicted = torch.max(outputs.data, 1) correct = (np.array(predicted) == labels_batch).sum() label_pred = np.array(predicted) acc = correct/len(label_test) print(" acc:", acc) return acc, label_pred class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2)) self.layer2 = nn.Sequential( nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2)) self.fc = nn.Linear(128, 2) def forward(self, x): out = self.layer1(x) out = self.layer2(out) out = out.view(out.size(0), -1) out = self.fc(out) return out cnn = CNN() [_, label_pred] = test_pred(cnn, data, label) print("Acc:", np.mean(label_pred==label)) # Given the correctly classified data in previous step, expect to get 100% accuracy # Why it sometimes doesn't give a 100% accuracy ? print("Using selected data size {}:".format(data[label_pred==label].shape)) _, _ = test_pred(cnn, data[label_pred==label], label[label_pred==label]) output: acc: 0.482 Acc: 0.482 Using selected data size (964, 1, 10, 10): acc: 0.9979253112033195
It seems like you did not set the network to evaluation mode, which might be causing some problems, especially with the BatchNorm layers. Do cnn = CNN() cnn.eval() and it should work.
https://stackoverflow.com/questions/49723034/
How are PyTorch's tensors implemented?
I am building my own Tensor class in Rust, and I am trying to make it like PyTorch's implementation. What is the most efficient way to store tensors programmatically, but, specifically, in a strongly typed language like Rust? Are there any resources that provide good insights into how this is done? I am currently building a contiguous array, so that, given dimensions of 3 x 3 x 3, my array would just have 3^3 elements in it, which would represent the tensor. However, this does make some of the mathematical operations and manipulations of the array harder. The dimension of the tensor should be dynamic, so that I could have a tensor with n dimensions.
Contiguous array The commonly used way to store such data is in a single array that is laid out as a single, contiguous block within memory. More concretely, a 3x3x3 tensor would be stored simply as a single array of 27 values, one after the other. The only place where the dimensions are used is to calculate the mapping between the (many) coordinates and the offset within that array. For example, to fetch the item [3, 1, 1] you would need to know if it is a 3x3x3 matrix, a 9x3x1 matrix, or a 27x1x1 matrix - in all cases the "storage" would be 27 items long, but the interpretation of "coordinates" would be different. If you use zero-based indexing, the calculation is trivial, but you need to know the length of each dimension. This does mean that resizing and similar operations may require copying the whole array, but that's ok, you trade off the performance of those (rare) operations to gain performance for the much more common operations, e.g. sequential reads.
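To make the coordinate-to-offset mapping concrete, here is a minimal sketch in Python of row-major ("C order") indexing over a flat storage array (a stride-based version would precompute per-dimension multipliers instead):
def offset(coords, dims):
    off = 0
    for c, d in zip(coords, dims):
        off = off * d + c                 # Horner-style accumulation over dimensions
    return off

storage = list(range(27))                 # the flat storage of a 3x3x3 tensor
print(offset([2, 1, 0], [3, 3, 3]))       # 21 -> storage[21] holds element [2, 1, 0]
print(offset([2, 1, 0], [9, 3, 1]))       # 7: the same storage interpreted as 9x3x1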
https://stackoverflow.com/questions/49724954/
Get all 2D diagonals of a 3D tensor in numpy
I have a 3D tensor A x B x C. For each matrix B x C, I want to extract the leading diagonal. Is there a vectorized way of doing this in numpy or pytorch instead of looping over A?
You can use numpy.diagonal() np.diagonal(a, axis1=1, axis2=2) Example: In [10]: a = np.arange(3*4*5).reshape(3,4,5) In [11]: a Out[11]: array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]], [[20, 21, 22, 23, 24], [25, 26, 27, 28, 29], [30, 31, 32, 33, 34], [35, 36, 37, 38, 39]], [[40, 41, 42, 43, 44], [45, 46, 47, 48, 49], [50, 51, 52, 53, 54], [55, 56, 57, 58, 59]]]) In [12]: np.diagonal(a, axis1=1, axis2=2) Out[12]: array([[ 0, 6, 12, 18], [20, 26, 32, 38], [40, 46, 52, 58]])
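For completeness, newer PyTorch versions also provide a direct equivalent, whose keyword arguments mirror numpy's:
import torch
a = torch.arange(3 * 4 * 5).reshape(3, 4, 5)
print(torch.diagonal(a, dim1=1, dim2=2))   # shape (3, 4): one diagonal per matrix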
https://stackoverflow.com/questions/49731792/
How to form a sequence of consecutive numbers in Pytorch?
How to convert the Matlab code v = [1: n] to pytorch? Writing a whole loop for that seems inefficient.
You can directly use the arange method from Pytorch. Note that, like Python's range, arange excludes its end point, while Matlab's 1:n is inclusive, so to match it you need torch_v = torch.arange(1, n + 1) Reference: https://pytorch.org/docs/master/torch.html?highlight=arange#torch.arange
https://stackoverflow.com/questions/49742625/
What is the benefit of random image crop on Convolutional Network?
I'm studying transfer learning with the pytorch tutorial. I found the pytorch tutorial author uses a different approach for the train set and the validation set. data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } As above, the training pipeline uses randomly cropped images. But I found that after that transformation, some of the images have the target object completely cut off, leaving no useful feature for detection. I am curious why the author has chosen that method, even though it is more difficult to define different pipelines for the train and validation datasets. Is there any benefit to the random cropping?
Couple of ideas behind random cropping: In short: Extending the amount of data for training Making NN more robust More detail: The semantics of the image are preserved but the activation values of the conv net are different. The conv net learns to associate a broader range of spatial activations with a certain class label and improves the robustness of the feature detectors in conv nets. Look at this excerpt https://link.springer.com/chapter/10.1007/978-3-319-31293-4_25
https://stackoverflow.com/questions/49748787/
Pytorch tensor to numpy array
I have a pytorch Tensor of shape [4, 3, 966, 1296]. I want to convert it to numpy array using the following code: imgs = imgs.numpy()[:, ::-1, :, :] How does that code work?
There are 4 dimensions in the tensor you want to convert. In [:, ::-1, :, :], the bare : means that the first dimension should be copied as it is; the same goes for the third and fourth dimensions. ::-1 means that the order of elements along the second axis is reversed (here the channel axis, e.g. turning RGB into BGR).
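A tiny sketch of what the slice does (storing each channel's index as its value, just for illustration):
import torch
t = torch.zeros(1, 3, 2, 2)
t[0, 0], t[0, 1], t[0, 2] = 0, 1, 2            # channel 0 holds 0s, channel 1 holds 1s, ...
arr = t.numpy()[:, ::-1, :, :]
print(arr[0, :, 0, 0])                         # [2. 1. 0.], the channel order is reversed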
https://stackoverflow.com/questions/49768306/
Pytorch hidden state LSTM
Why do we need to initialize the hidden state h0 of an LSTM in pytorch, as h0 will be calculated and overwritten anyway? Isn't it like int a; a = 0; a = 4; where even if we do not do a = 0, it should be fine?
The point is that you are able to supply the initial state; it is a feature. They could have implemented it as a default, but by letting you control the allocation of the tensor you can save some memory (allocate once, zero on every invocation). Why would you need to set h? Sequence-to-sequence models require this (compress the input to one vector, use this vector as the hidden state for the decoder), or you might want to make the initial state learnable.
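A minimal sketch of the learnable-initial-state idea mentioned above (sizes are hypothetical; in a real model you would register the parameters on your nn.Module):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=64, num_layers=1, batch_first=True)
h0 = nn.Parameter(torch.zeros(1, 1, 64))   # (num_layers, 1, hidden); trained like any weight
c0 = nn.Parameter(torch.zeros(1, 1, 64))
x = torch.randn(8, 5, 32)                  # (batch, seq, features)
hx = (h0.expand(-1, 8, -1).contiguous(), c0.expand(-1, 8, -1).contiguous())
out, (hn, cn) = lstm(x, hx)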
https://stackoverflow.com/questions/49778001/
How do the print and view functions work in pytorch?
This is a convolutional neural network which I found on the web class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(500, 50) self.fc2 = nn.Linear(50, 64) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 500) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x) and its summary print(net) Net( (conv1): Conv2d(3, 10, kernel_size=(5, 5), stride=(1, 1)) (conv2): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1)) (conv2_drop): Dropout2d(p=0.5) (fc1): Linear(in_features=500, out_features=50, bias=True) (fc2): Linear(in_features=50, out_features=64, bias=True) ) What does x.view do? Is it similar to the Flatten function in keras? The other query is regarding how pytorch prints the summary of a model. Even though the model uses two dropouts, nn.Dropout2d() and F.dropout, when printing the model we can see only one, (conv2_drop): Dropout2d(p=0.5). Why? The last question is why pytorch doesn't print the F.max_pool2d layer?
1) x.view can do more than just flatten: It will keep the same data while reshaping the dimension. So using x.view(batch_size, -1) will be equivalent to Flatten 2) In the __repr__ function of nn.Module, the elements that are printed are the modules in self._modules.items() which are its children. F.dropout and F.max_pool2d are functions and not children of nn.Module, thus they are not layers and will not be printed. For pooling and dropout however, there is a module in torch.nn which you already used for the first dropout.
https://stackoverflow.com/questions/49808467/
How does one manually compute the error of the whole data set in pytorch?
I was trying to track the error of the whole data set and compute the error of the whole data set in pytorch. I wrote the following (reproducible example and fully contained) in cifar10 pytorch 0.3.1: import torch from torch.autograd import Variable import torch.optim as optim import torchvision import torchvision.transforms as transforms from math import inf from pdb import set_trace as st def error_criterion(outputs,labels): max_vals, max_indices = torch.max(outputs,1) train_error = (max_indices != labels).sum().data[0]/max_indices.size()[0] return train_error def evalaute_mdl_data_set(loss,error,net,dataloader,enable_cuda,iterations=inf): ''' Evaluate the error of the model under some loss and error with a specific data set. ''' running_loss,running_error = 0,0 for i,data in enumerate(dataloader): if i >= iterations: break inputs, labels = extract_data(enable_cuda,data,wrap_in_variable=True) outputs = net(inputs) running_loss += loss(outputs,labels).data[0] running_error += error(outputs,labels) return running_loss/(i+1),running_error/(i+1) def extract_data(enable_cuda,data,wrap_in_variable=False): inputs, labels = data if enable_cuda: inputs, labels = inputs.cuda(), labels.cuda() #TODO potential speed up? if wrap_in_variable: inputs, labels = Variable(inputs), Variable(labels) return inputs, labels def train_and_track_stats(enable_cuda, nb_epochs, trainloader,testloader, net,optimizer,criterion,error_criterion, iterations=inf): ''' Add stats before training ''' train_loss_epoch, train_error_epoch = evalaute_mdl_data_set(criterion, error_criterion, net, trainloader, enable_cuda, iterations) test_loss_epoch, test_error_epoch = evalaute_mdl_data_set(criterion, error_criterion, net, testloader, enable_cuda, iterations) print(f'[-1, -1], (train_loss: {train_loss_epoch}, train error: {train_error_epoch}) , (test loss: {test_loss_epoch}, test error: {test_error_epoch})') ## ''' Start training ''' print('about to start training') for epoch in range(nb_epochs): # loop over the dataset multiple times running_train_loss,running_train_error = 0.0,0.0 for i,data_train in enumerate(trainloader): ''' zero the parameter gradients ''' optimizer.zero_grad() ''' train step = forward + backward + optimize ''' inputs, labels = extract_data(enable_cuda,data_train,wrap_in_variable=True) outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_train_loss += loss.data[0] running_train_error += error_criterion(outputs,labels) ''' End of Epoch: collect stats''' train_loss_epoch, train_error_epoch = running_train_loss/(i+1), running_train_error/(i+1) #train_loss_epoch, train_error_epoch = evalaute_mdl_data_set(criterion,error_criterion,net,trainloader,enable_cuda,iterations) test_loss_epoch, test_error_epoch = evalaute_mdl_data_set(criterion,error_criterion,net,testloader,enable_cuda,iterations) print(f'[{epoch}, {i+1}], (train_loss: {train_loss_epoch}, train error: {train_error_epoch}) , (test loss: {test_loss_epoch}, test error: {test_error_epoch})') return train_loss_epoch, train_error_epoch, test_loss_epoch, test_error_epoch class Flatten(torch.nn.Module): def forward(self, input): return input.view(input.size(0), -1) def main(): enable_cuda = True print('running main') num_workers = 0 ''' Get Data set ''' batch_size_test = 10000 batch_size_train = 10000 data_path = './data' transform = [transforms.ToTensor(),transforms.Normalize( (0.5, 0.5, 0.5), (0.5, 0.5, 0.5) )] transform = transforms.Compose(transform) trainset = torchvision.datasets.CIFAR10(root=data_path, 
train=True,download=False, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size_train,shuffle=True, num_workers=num_workers) testset = torchvision.datasets.CIFAR10(root=data_path, train=False,download=False, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size_test,shuffle=False, num_workers=num_workers) ''' Get model ''' net = torch.nn.Sequential( torch.nn.Conv2d(3,13,5), #(in_channels, out_channels, kernel_size), Flatten(), torch.nn.Linear(28*28*13, 13), torch.nn.Linear(13, 10) ) net.cuda() ''' Train ''' nb_epochs = 10 lr = 0.1 err_criterion = error_criterion criterion = torch.nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.0) train_and_track_stats(enable_cuda, nb_epochs, trainloader,testloader, net,optimizer,criterion,err_criterion, iterations=inf) ''' Done ''' print('Done') if __name__ == '__main__': main() When I run it I get the following error: python my_cifar10.py running main [-1, -1], (train_loss: 2.3172860145568848, train error: 0.0054) , (test loss: 2.317185878753662, test error: 0.0038) about to start training [0, 5], (train_loss: 2.22599835395813, train error: 0.015160000000000002) , (test loss: 2.0623881816864014, test error: 0.0066) [1, 5], (train_loss: 2.014406657218933, train error: 0.00896) , (test loss: 1.9619578123092651, test error: 0.0195) [2, 5], (train_loss: 1.9428715705871582, train error: 0.01402) , (test loss: 1.918603539466858, test error: 0.0047) [3, 5], (train_loss: 1.9434458494186402, train error: 0.01192) , (test loss: 1.9194672107696533, test error: 0.0125) [4, 5], (train_loss: 1.8804980754852294, train error: 0.00794) , (test loss: 1.8549214601516724, test error: 0.004) [5, 5], (train_loss: 1.8573726177215577, train error: 0.010159999999999999) , (test loss: 1.8625996112823486, test error: 0.0158) [6, 5], (train_loss: 1.8454653739929199, train error: 0.01524) , (test loss: 1.8155865669250488, test error: 0.0122) [7, 5], (train_loss: 1.8140610456466675, train error: 0.01066) , (test loss: 1.808283805847168, test error: 0.0101) [8, 5], (train_loss: 1.8036894083023072, train error: 0.00832) , (test loss: 1.799634575843811, test error: 0.007) [9, 5], (train_loss: 1.8023016452789307, train error: 0.0077399999999999995) , (test loss: 1.8030155897140503, test error: 0.0114) Done Clearly it has to be wrong cuz the test error is nearly zero with a model that is ridiculously small and simple (1 conv 2 fcs). the code seems so simple that I can't figure out what is going wrong. I've been doing stuff and changing things for a few days now. Any new suggestions what to try?
If your batch size is too large, then with your code the values of (max_indices == labels).sum() and (max_indices != labels).sum() do not add up to the batch size. This is due to the fact that the comparison produces a torch.ByteTensor, which will overflow for values > 255 when summing. Using (max_indices != labels).int().sum() will resolve the issue by casting the Tensor to int before summing.
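A quick sketch of the overflow the answer describes (this is the behavior of older PyTorch releases such as the 0.3.1 used here; newer versions promote the sum to a wider dtype):
import torch
mask = torch.ones(300).byte()    # pretend 300 predictions were correct
print(mask.sum())                # 44 on 0.3.x, since the uint8 accumulator wraps: 300 % 256
print(mask.int().sum())          # 300, correct after the cast, on any version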
https://stackoverflow.com/questions/49809032/
what is the default weight initializer for conv in pytorch?
The question How to initialize weights in PyTorch? shows how to initialize the weights in Pytorch. However, what is the default weight initializer for Conv and Dense in Pytorch? What distribution does Pytorch use?
Each pytorch layer implements the method reset_parameters which is called at the end of the layer initialization to initialize the weights. You can find the implementation of the layers here. For the dense layer which in pytorch is called linear for example, weights are initialized uniformly stdv = 1. / math.sqrt(self.weight.size(1)) self.weight.data.uniform_(-stdv, stdv) where self.weight.size(1) is the number of inputs. This is done to keep the variance of the distributions of each layer relatively similar at the beginning of training by normalizing it to one. You can read a more detailed explanation here. For the convolutional layer the initialization is basically the same. You just compute the number of inputs by multiplying the number of channels with the kernel size.
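A runnable sketch of what the convolutional reset_parameters does in these PyTorch versions, with the fan-in expanded as channels times kernel area:
import math
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=5)
n = conv.in_channels
for k in conv.kernel_size:
    n *= k                                   # fan-in: 3 * 5 * 5 = 75
stdv = 1. / math.sqrt(n)
conv.weight.data.uniform_(-stdv, stdv)       # mirrors the layer's own initialization
if conv.bias is not None:
    conv.bias.data.uniform_(-stdv, stdv)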
https://stackoverflow.com/questions/49816627/
PyTorch Getting Custom Loss Function Running
I'm trying to use a custom loss function by extending nn.Module, but I can't get past the error element 0 of variables does not require grad and does not have a grad_fn Note: my labels are lists of size: num_samples, but each batch will have the same labels throughout the batch, so we shrink labels for the whole batch to be a single label by calling .diag() My code is as follows and is based on the transfer learning tutorial: def train_model(model, criterion, optimizer, scheduler, num_epochs=25): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'val']: if phase == 'train': scheduler.step() model.train(True) # Set model to training mode else: model.train(False) # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. for data in dataloaders[phase]: # get the inputs inputs, labels = data inputs = inputs.float() # wrap them in Variable if use_gpu: inputs = Variable(inputs.cuda()) labels = Variable(labels.cuda()) else: inputs = Variable(inputs) labels = Variable(labels) # zero the parameter gradients optimizer.zero_grad() # forward outputs = model(inputs) #outputs = nn.functional.sigmoid(outputs).round() _, preds = torch.max(outputs, 1) label = labels.diag().float() preds = preds.float() loss = criterion(preds, label) # backward + optimize only if in training phase if phase == 'train': loss.backward() optimizer.step() # statistics running_loss += loss.data[0] * inputs.size(0) running_corrects += torch.sum(pred == label.data) epoch_loss = running_loss / dataset_sizes[phase] epoch_acc = running_corrects / dataset_sizes[phase] print('{} Loss: {:.4f} Acc: {:.4f}'.format( phase, epoch_loss, epoch_acc)) # deep copy the model if phase == 'val' and epoch_acc > best_acc: best_acc = epoch_acc best_model_wts = copy.deepcopy(model.state_dict()) print() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) print('Best val Acc: {:4f}'.format(best_acc)) # load best model weights model.load_state_dict(best_model_wts) return model and my loss function is defined below: class CustLoss(nn.Module): def __init__(self): super(CustLoss, self).__init__() def forward(self, outputs, labels): return cust_loss(outputs, labels) def cust_loss(pred, targets): '''preds are arrays of size classes with floats in them''' '''targets are arrays of all the classes from the batch''' '''we sum the classes from the batch and find the num correct''' r = torch.sum(pred == targets) return r Then I run the following to run the model: model_ft = models.resnet18(pretrained=True) for param in model_ft.parameters(): param.requires_grad = False num_ftrs = model_ft.fc.in_features model_ft.fc = nn.Linear(num_ftrs, 3) if use_gpu: model_ft = model_ft.cuda() criterion = CustLoss() # Observe that all parameters are being optimized optimizer_ft = optim.SGD(model_ft.fc.parameters(), lr=0.001, momentum=0.9) # Decay LR by a factor of 0.1 every 7 epochs exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1) model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,num_epochs=25) I tried getting it to work with other loss functions to no avail. I always get the same error when loss.backward() is called. 
It was my understanding that I wouldn't need a custom implementation of loss.backward if I extend nn.Module.
You are subclassing nn.Module to define a function, in your case a loss function. So, when you compute loss.backward(), it tries to store the gradients in the loss itself, instead of the model, and there is no variable in the loss for which to store the gradients. Your loss needs to be a function and not a module. See Extending autograd. You have two options here - The easiest one is to directly pass the cust_loss function as the criterion parameter to train_model. You can extend torch.autograd.Function to define the custom loss (and if you wish, the backward function as well). P.S. - It is mentioned that you need to implement the backward of the custom loss functions. This is not always the case. It is required only when your loss function is non-differentiable at some point. But I do not think you'll need to do that.
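A minimal sketch of the second option, using the static-method Function API. Note that the count-of-correct-predictions loss in the question has zero gradient almost everywhere, so this sketch substitutes a differentiable squared-error loss purely to show the mechanics:
import torch
from torch.autograd import Function

class MyLoss(Function):
    @staticmethod
    def forward(ctx, pred, target):
        ctx.save_for_backward(pred, target)
        return (pred - target).pow(2).mean()

    @staticmethod
    def backward(ctx, grad_output):
        pred, target = ctx.saved_tensors          # ctx.saved_variables on PyTorch <= 0.3
        grad_pred = 2.0 * (pred - target) / pred.numel()
        return grad_output * grad_pred, None      # no gradient w.r.t. the targets

criterion = MyLoss.apply   # this can be passed as the criterion to train_model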
https://stackoverflow.com/questions/49821111/
Variable size input for LSTM in Pytorch
I am using features of variable length videos to train one layer LSTM. Video sizes are changing from 10 to 35 frames. I am using batch size of 1. I have the following code: lstm_model = LSTMModel(4096, 4096, 1, 64) for step, (video_features, label) in enumerate(data_loader): bx = Variable(score.view(-1, len(video_features), len(video_features[0]))) #examples = 1x12x4096, 1x5x4096 output = lstm_model(bx) Lstm model is; class LSTMModel(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(LSTMModel, self).__init__() self.l1 = nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers, batch_first=True) self.out = nn.Linear(hidden_size, num_classes) def forward(self, x): r_out, (h_n, h_c) = self.l1(x, None) #None represents zero initial hidden state out = self.out(r_out[:, -1, :]) return out I just want to ask; am I doing the right for training LSTM with variable size input. The code works okay and loss decreases but I am not sure if I am doing the right thing. Because I haven't used LSTMs in Pytorch before.
Yes, your code is correct and will always work for a batch size of 1. But, if you want to use a batch size other than 1, you'll need to pack your variable size input into a sequence, and then unpack after the LSTM. You can find more details in my answer to a similar question. P.S. - You should post such questions to codereview
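A minimal sketch of the pack/unpack pattern with hypothetical sizes (older PyTorch also requires the lengths to be sorted in decreasing order):
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

padded = torch.zeros(3, 35, 4096)     # (batch, max_len, features), zero-padded
lengths = [35, 20, 12]                # true frame counts, sorted descending
packed = pack_padded_sequence(padded, lengths, batch_first=True)

lstm = torch.nn.LSTM(input_size=4096, hidden_size=512, batch_first=True)
packed_out, (h_n, h_c) = lstm(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
# h_n[-1] already holds each sequence's last valid hidden state, ignoring the padding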
https://stackoverflow.com/questions/49832739/
What is volatile variable in Pytorch
What is volatile attribute of a Variable in Pytorch? Here's a sample code for defining a variable in PyTorch. datatensor = Variable(data, volatile=True)
Basically, set the input to a network to volatile if you are doing inference only and won't be running backpropagation in order to conserve memory. From the docs: Volatile is recommended for purely inference mode, when you’re sure you won’t be even calling .backward(). It’s more efficient than any other autograd setting - it will use the absolute minimal amount of memory to evaluate the model. volatile also determines that requires_grad is False. Edit: The volatile keyword has been deprecated as of pytorch version 0.4.0
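For PyTorch 0.4.0 and newer, the replacement is the torch.no_grad() context manager (model stands for your network, datatensor for the input from the question):
import torch
with torch.no_grad():            # replaces Variable(data, volatile=True)
    output = model(datatensor)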
https://stackoverflow.com/questions/49837638/
How to convert a PyTorch Tensor from (mathematically) a column vector to a column matrix?
I am working with tensors in pytorch. How can I convert a tensor corresponding to a column vector to a tensor corresponding to its transpose? import numpy as np coef = torch.from_numpy(np.arange(1.0, 5.0)).float() print(coef) print(coef.size()) Currently the size of coef is [4] but I want it to be [4, 1] with the same content.
It is easy to achieve in PyTorch. You can use the view() method. coef = coef.view(4, 1) print(coef.size()) # now the shape will be [4, 1]
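If you don't want to hard-code the length, both of these do the same thing:
coef = coef.view(-1, 1)      # -1 lets PyTorch infer the first dimension
coef = coef.unsqueeze(1)     # inserts a new axis of size 1 at position 1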
https://stackoverflow.com/questions/49847984/
Tensorflow : What is actually tf.nn.dropout output_keep_prob?
I am trying to understand the concept of output_keep_prob. So if my example is a simple RNN: with tf.variable_scope('encoder') as scope: cells = rnn.LSTMCell(num_units=500) cell = rnn.DropoutWrapper(cell=cells, output_keep_prob=0.5) model = tf.nn.bidirectional_dynamic_rnn(cell, cell, inputs=embedding_lookup, sequence_length=sequence_le, dtype=tf.float32) My confusion is: if I am giving output_keep_prob=0.5, what does it actually mean? I know that it makes the model less prone to overfitting (called regularizing) by adding dropout. It randomly turns off activations of neurons during training; ok, I got this point, but I am confused when I give output_keep_prob=0.5 and my no_of_nodes = 500. Does 0.5 mean it will randomly turn off 50% of the nodes at each iteration, or does it mean it will keep only those connections which have a probability of 0.5 or more (keep layers whose probability >= 0.5), or turn off 50% of the node units randomly at each iteration? I tried to understand the concept from this stackoverflow answer, but there is the same confusion there about what 0.5 actually means: should it drop 50% of the nodes at each iteration, or keep only those nodes which have probability 0.5 or more? If the answer is the second (keep only those nodes which have probability 0.5 or more), then it means: suppose I have 500 node units and only 30 nodes have 0.5 probability, so it will turn off the remaining 470 nodes and will use only 30 nodes for incoming and outgoing connections? Because this answer says: Suppose you have 10 units in the layer and set the keep_prob to 0.1, Then the activation of 9 randomly chosen units out of 10 will be set to 0, and the remaining one will be scaled by a factor of 10. I think a more precise description is that you only keep the activation of 10 percent of the nodes. While on the other side this answer by @mrry says: it means that each connection between layers (in this case between the last densely connected layer and the readout layer) will only be used with probability 0.5 when training. Can anyone give a clear explanation of which one is correct and what this value in keep_prob actually represents?
keep_prob means the probability of any given neuron's output to be preserved (as opposed to dropped, that is zeroed out.) In other words, keep_prob = 1 - drop_prob. The tf.nn.dropout() description states that By default, each element is kept or dropped independently. So if you think about it, if you have a large number of neurons, like 10,000 in a layer, and the keep_prob is, let's say, 0.3, then 3,000 is the expected value of the number of neurons kept. So it's more or less the same thing to say that a keep_prob of 0.3 means to keep the value of 3,000 randomly chosen ones of the 10,000 neurons. But not exactly, because the actual number might vary a bit from 3,000. Scaling comes into the picture because if you drop a certain number of neurons, then the expected sum of the layer will be reduced. So the remaining ones are multiplied to feed forward the same magnitude of values as they would otherwise. This is especially important if you load a pretrained network and want to continue training but with a different keep_prob value now. (Please note, you can decide to introduce non-independence into the drop probabilities with the noise_shape argument, please see the tf.nn.dropout() description, but that is outside of the scope of this question.) The random decision to drop a neuron or not is recalculated for each invocation of the network, so you will have a different set of neurons dropped on every iteration. The idea behind dropout is that subsequent layers cannot overfit and learn to watch for arbitrary constellations of certain activations. You ruin the "secret plan of lazy neurons to overfit" by always changing which previous activations are available.
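A short numpy sketch of this "inverted dropout" scheme (independent keep decisions plus the 1/keep_prob scaling):
import numpy as np
keep_prob = 0.3
x = np.ones(10000)
mask = np.random.rand(10000) < keep_prob    # keep each element independently
y = np.where(mask, x / keep_prob, 0.0)      # survivors scaled by 1/keep_prob
print(mask.sum())    # close to 3000, but varies from run to run
print(y.sum())       # close to 10000: the expected sum is preserved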
https://stackoverflow.com/questions/49864214/
Running through a dataloader in Pytorch using Google Colab
I am trying to use Pytorch to run classification on a dataset of images of cats and dogs. In my code I am so far downloading the data and going into the folder train which has two folders in it called "cats" and "dogs." I am then trying to load this data into a dataloader and iterate through batches, but it is giving me some error I don't understand in the iteration step. Since it is Google Colabs I have code in there for downloading data and installing libraries. Any other advice on my code so far would be appreciated as well. !pip install torch !pip install torchvision from __future__ import print_function, division import os import torch import pandas as pd import numpy as np # For showing and formatting images import matplotlib.pyplot as plt import matplotlib.image as mpimg # For importing datasets into pytorch import torchvision.datasets as dataset # Used for dataloaders import torch.utils.data as data # For pretrained resnet34 model import torchvision.models as models # For optimisation function import torch.nn as nn import torch.optim as optim !wget http://files.fast.ai/data/dogscats.zip !unzip dogscats.zip batch_size = 256 train_raw = dataset.ImageFolder(PATH+"train", transform=transforms.ToTensor()) train_loader = data.DataLoader(train_raw, batch_size=batch_size, shuffle=True) for batch_idx, (data, target) in enumerate(train_loader): print("Data: ", batch_idx) The error comes up on the last lines and is below: RuntimeErrorTraceback (most recent call last) <ipython-input-66-c32dd0c1b880> in <module>() ----> 1 for batch_idx, (data, target) in enumerate(train_loader): 2 print("Data: ", batch_idx) 3 /usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.pyc in __next__(self) 257 if self.num_workers == 0: # same-process loading 258 indices = next(self.sample_iter) # may raise StopIteration --> 259 batch = self.collate_fn([self.dataset[i] for i in indices]) 260 if self.pin_memory: 261 batch = pin_memory_batch(batch) /usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.pyc in default_collate(batch) 133 elif isinstance(batch[0], collections.Sequence): 134 transposed = zip(*batch) --> 135 return [default_collate(samples) for samples in transposed] 136 137 raise TypeError((error_msg.format(type(batch[0])))) /usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.pyc in default_collate(batch) 110 storage = batch[0].storage()._new_shared(numel) 111 out = batch[0].new(storage) --> 112 return torch.stack(batch, 0, out=out) 113 elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \ 114 and elem_type.__name__ != 'string_': /usr/local/lib/python2.7/dist-packages/torch/functional.pyc in stack(sequence, dim, out) 62 inputs = [t.unsqueeze(dim) for t in sequence] 63 if out is None: ---> 64 return torch.cat(inputs, dim) 65 else: 66 return torch.cat(inputs, dim, out=out) RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 400 and 487 in dimension 2 at /pytorch/torch/lib/TH/generic/THTensorMath.c:2897 Thanks
I think the main problem was the images being of different sizes. I may have understood ImageFolder in another way, but I think you don't need labels for the images if the directory structure is as specified in pytorch, and pytorch will figure out the labels for you. I would also add more things to your transform so that it automatically resizes every image from the folder, such as: normalize = transforms.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) transform = transforms.Compose( [transforms.Resize((224, 224)), transforms.ToTensor(), normalize]) # Resize operates on PIL images, so it must come before ToTensor Also you can use other tricks to make your DataLoader much faster, such as adding batch_size and the number of cpu workers: testloader = DataLoader(testset, batch_size=16, shuffle=False, num_workers=4) I think this will make your pipeline much faster.
https://stackoverflow.com/questions/49878836/
Pytorch function name demystification: gels for least squares estimation
What does "gels" stand for in Pytorch? It solves least squares, but what does the name stand for? It is hard to get comfortable with a function without getting its name and it is surprising that these are not explained in the documentation.
gels is actually a function from LAPACK (Linear Algebra Package): GE is the LAPACK prefix for a GEneral matrix and LS stands for Least Squares, meaning that it works on general matrices: General matrix A general real or complex m by n matrix is represented by a real or complex matrix of size (m, n).
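For reference, a small usage sketch under the 0.4-era API (the function was later renamed, first to torch.lstsq and then torch.linalg.lstsq); note that the right-hand side comes first:
import torch
A = torch.Tensor([[1., 1.], [1., 2.], [1., 3.]])   # overdetermined 3x2 system
b = torch.Tensor([[1.], [2.], [2.]])
X, _ = torch.gels(b, A)    # solves min ||Ax - b||
print(X[:2])               # the first 2 rows hold the least-squares solution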
https://stackoverflow.com/questions/49882518/
KL Divergence for two probability distributions in PyTorch
I have two probability distributions. How should I find the KL-divergence between them in PyTorch? The regular cross entropy only accepts integer labels.
Yes, PyTorch has a method named kl_div under torch.nn.functional to directly compute KL-divergence between tensors. Suppose you have tensors a and b of the same shape. Note that the first argument is expected to contain log-probabilities while the target contains probabilities, so if a holds a probability distribution you need to pass its log: import torch.nn.functional as F out = F.kl_div(a.log(), b) For more details, see the above method documentation.
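A small sketch with two explicit distributions. One caveat under the default settings: the result is averaged over all elements rather than summed, so rescale it to get the usual KL value (newer versions also accept reduction='sum'):
import torch
import torch.nn.functional as F
p = torch.Tensor([0.36, 0.48, 0.16])    # target distribution
q = torch.Tensor([1/3., 1/3., 1/3.])    # approximating distribution
out = F.kl_div(q.log(), p)              # elementwise-averaged KL(p || q)
print(out * p.numel())                  # rescaled to the summed KL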
https://stackoverflow.com/questions/49886369/
PyTorch replace torch.nn.Conv2d with torch.nn.functional.conv2d
So I have this MNIST example for PyTorch. I wanted to replace conv2d with the functional method, but got an unexpected error. I replaced self.conv1 = nn.Conv2d(1, 32, 5, padding=2) with self.w_conv1 = Variable(torch.randn(1, 32, 5)) In the forward method I replaced x = F.max_pool2d(F.relu(self.conv1(x)), 2) with x = F.max_pool2d(F.relu(F.conv2d(x, self.w_conv1, padding=2),2)) And then it gives me an error: Expected 4-dimensional input for 4-dimensional weight [1, 32, 5], but got input of size [50, 1, 28, 28] instead The code worked before, and I thought I'd replace the class with its functional equivalent.
albanD answered the question in https://discuss.pytorch.org/t/pytorch-replace-torch-nn-conv2d-with-torch-nn-functional-conv2d/16596 Hi, The error message is not very clear I'm afraid because it comes from deep within the C backend. The problem here is that when you do a convolution on a 2D image with size (batch, in_chan, width, height), and you want an output of size (batch, out_chan, width', height'), your weights for the convolution should be (out_chan, in_chan, width_kern_size, height_kern_size). Basically, when you use a kernel size of 5 for the Conv2d function, it is the same as having a kernel of width 5 and height 5. Thus you should have self.w_conv1 = Variable(torch.randn(32, 1, 5, 5)). See the doc for more details.
https://stackoverflow.com/questions/49896987/
Is there analog of theano.tensor.switch in pytorch?
I'd like to force to zero all elements of a vector which are below a certain threshold. And I'd like to do it so that I can still propagate gradient through non-zero ones. For example, in theano I could write: B = theano.tensor.switch(A < .1, 0, A) Is there a solution for that in pytorch?
As of pytorch 0.4+, you can do it easily with torch.where (see doc, merged PR) It is as easy as in Theano. See for yourself with an example: import torch from torch.autograd import Variable x = Variable(torch.arange(0,4), requires_grad=True) # x = [0 1 2 3] zeros = Variable(torch.zeros(*x.shape)) # zeros = [0 0 0 0] y = x**2 # y = [0 1 4 9] z = torch.where(y < 5, zeros, y) # z = [0 0 0 9] # dz/dx = (dz/dy)(dy/dx) = (y < 5)(0) + (y ≥ 5)(2x) = 2x(x**2 ≥ 5) z.backward(torch.ones_like(z)) # the upstream gradient must match z's shape x.grad # (dz/dx) = [0 0 0 6]
https://stackoverflow.com/questions/49931756/
How to calculate the distance between a mini batch and a set of filters in PyTorch
I have a mini-batch of size NxDxWxH, where N is the size of the mini-batch, D is the dimension, and W and H are the width and height respectively. Assume that I have a set of filters F, each with dimension Dx1x1. I need to calculate the pairwise distance between the mini-batch and the filters. The size of the output should be NxFxWxH. input: NxDxWxH filters: FxDx1x1 output: NxFxWxH Let's assume a is a vector of size D extracted at the location (x,y) of the input and f is a filter of size Dx1x1. Each value in the output should be \sum_{d=1}^{D} (a_d - f_d)^2 In other words, instead of convolution I am trying to find the pair-wise L2 distances. How can I do this in pytorch?
You can do this by expanding the input and filters for proper automatic shape casting. # Assuming that input.size() is (N, D, W, H) and filters.size() is (F, D, 1, 1) input.unsqueeze_(1) filters.unsqueeze_(0) output = torch.sum((input - filters)**2, dim=2)
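A quick shape check of the broadcasting, with random data and small hypothetical sizes:
import torch
input = torch.randn(2, 3, 4, 5)      # N=2, D=3, W=4, H=5
filters = torch.randn(6, 3, 1, 1)    # F=6
out = torch.sum((input.unsqueeze(1) - filters.unsqueeze(0)) ** 2, dim=2)
print(out.shape)                     # torch.Size([2, 6, 4, 5]) = (N, F, W, H)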
https://stackoverflow.com/questions/49934624/
pytorch multiply 4*1 matrix and 1 size variable occur error
import torch from torch.autograd import Variable import numpy as np x = np.transpose(np.array([[1, 2, 3, 4]])) a = Variable(torch.rand(1), requires_grad=True) print(a * x) # error! I want a result like x = [[2][4][6][8]] if a = 2. Is there any solution?
What you are looking for is plain scalar multiplication of the matrix. Try: x = np.transpose(np.array([[1, 2, 3, 4]])) a = 2 x.dot(a) This outputs the matrix [[2][4][6][8]] (numpy's dot with a scalar simply scales every element).
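If you want to stay inside PyTorch so that gradients can still flow through a, a minimal sketch is to wrap the numpy array in a tensor first; the original error comes from multiplying a Variable by a raw numpy array:
import numpy as np
import torch
from torch.autograd import Variable

x = np.transpose(np.array([[1.0, 2.0, 3.0, 4.0]]))   # 4x1 column
a = Variable(torch.rand(1), requires_grad=True)
x_t = Variable(torch.from_numpy(x).float())          # wrap the numpy array first
print(a * x_t)                                       # broadcasts to a 4x1 result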
https://stackoverflow.com/questions/49937990/
AttributeError: 'collections.OrderedDict' object has no attribute 'eval'
I have a model file which looks like this OrderedDict([('inp.conv1.conv.weight', (0 ,0 ,0 ,.,.) = -1.5073e-01 6.4760e-02 1.9156e-01 1.2175e-01 3.5886e-02 1.3992e-01 -1.5903e-01 8.2055e-02 1.7820e-01 (0 ,0 ,1 ,.,.) = 1.0604e-01 -1.3653e-01 1.4803e-01 6.0276e-02 -1.4674e-02 2.3059e-06 -6.2192e-02 -5.1061e-03 -7.4145e-03 (0 ,0 ,2 ,.,.) = -5.5632e-02 3.5326e-02 6.5108e-02 1.1411e-01 -4.4160e-02 8.2610e-02 8.9979e-02 -3.5454e-02 4.2549e-02 (1 ,0 ,0 ,.,.) = 4.8523e-02 -4.3961e-02 5.3614e-02 -1.2644e-01 1.2777e-01 8.9547e-02 3.8392e-02 2.7016e-02 -1.4552e-01 (1 ,0 ,1 ,.,.) = 9.5537e-02 2.8748e-02 3.9772e-02 -6.2410e-02 1.1264e-01 7.8663e-02 -2.6374e-02 1.4401e-01 -1.7109e-01 (1 ,0 ,2 ,.,.) = 5.1791e-02 -1.6388e-01 -1.7605e-01 3.5028e-02 7.7164e-02 -1.4499e-01 -2.9189e-02 2.7064e-03 -2.3228e-02 (2 ,0 ,0 ,.,.) = -7.4446e-03 -9.7202e-02 -1.4704e-01 -1.0019e-02 8.1780e-02 -5.3530e-02 -1.8412e-01 1.5988e-01 -1.3450e-01 (2 ,0 ,1 ,.,.) = -1.1075e-01 -5.2478e-02 6.0658e-02 1.6739e-01 -2.9360e-02 1.2621e-01 2.0686e-02 1.1468e-01 1.2282e-01 I want to do inference on this model, but when i do model.eval() i get, AttributeError: 'collections.OrderedDict' object has no attribute 'eval
It is not a model file; instead, this is a state file. In a model file, the complete model is stored, whereas in a state file only the parameters are stored. So, your OrderedDict just contains the parameter values for your model. You will need to create the model and then load these values into it. So, the process will be something in the form of import torch import torch.nn as nn class TempModel(nn.Module): def __init__(self): super(TempModel, self).__init__() # don't forget to initialize the base class self.conv1 = nn.Conv2d(3, 5, (3, 3)) def forward(self, inp): return self.conv1(inp) model = TempModel() model.load_state_dict(torch.load(file_path)) model.eval() You'll need to define your model properly. The one given in the example above is just a dummy. If you construct your model yourself, you might need to update the keys of the saved dict file as mentioned here. The best course of action is to define your model in exactly the same way as when the state_dict was saved and then directly executing model.load_state_dict will work.
https://stackoverflow.com/questions/49941426/
Installing PyTorch via Conda
Objective: Create a conda environment with pytorch and torchvision. Anaconda Navigator 1.8.3, python 3.6, MacOS 10.13.4. What I've tried: In Navigator, created a new environment. Tried to install pytorch and torchvision but could not because the UI search for packages does not find any packages available matching pytorch, torch, torchvision, or similar strings. conda install pytorch torchvision -c pytorch conda update --all pytorch 0.3.1, torch 0.3.1, and torchvision 0.2.0 now appear as installed in the root environment. However, the root environment is no longer cloneable; the clone button is gray/disabled (it used be enabled/cloneable). I could use the root environment as a fallback but the main point of conda is to be able to create separate and disposable environments. What am I missing? UPDATE ----------------- Running conda install -c pytorch pytorch yields: # All requested packages already installed. But if I activate the pytorch environment and list the packages therein, there is no package containing the word "torch". If I then do conda search pytorch I get PackagesNotFoundError: The following packages are not available from current channels: - pytorch. If I activate the base environment and then do conda list then pytorch is in the package list for base. So how does one create a separate environment containing pytorch?
You seem to have installed PyTorch in your base environment; you therefore cannot use it from your other "pytorch" env. Either: directly create a new environment (let's call it pytorch_env) with PyTorch: conda create -n pytorch_env -c pytorch pytorch torchvision or switch to the pytorch environment you have already created with: source activate pytorch_env and then install PyTorch in it: conda install -c pytorch pytorch torchvision
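Once that finishes, a quick sanity check from inside the activated environment (a hypothetical session; your version number will differ):
source activate pytorch_env
python -c "import torch, torchvision; print(torch.__version__)"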
https://stackoverflow.com/questions/49951846/
Is it possible to use different L1 / L2 regularization parameters for different sets of weights in chainer or pytorch?
(As an example) When implementing a simple linear model for noutput target values as a neural network in pytorch: l1=L.Linear(ninput, noutput) (call) y = self.l1(x) return y Adding this hook will do L2 regularization on all weights, imposing the same alpha=0.01 everywhere: optimizer.add_hook(optimizer.WeightDecay(rate=0.01)) Is it possible to use a different alpha for each set of weights leading from all ninput input units to one of the noutput output units?
Since we are working in PyTorch it is possible to add other scalar terms to the loss function yourself. So assume the loss from your classifier is L (assume it is a cross-entropy loss) and you have a linear layer defined as: l1 = nn.Linear(in,out) Now if you want to have different regularization on each set of weights then all you have to do is gather the weights (i.e. select them by index) and add them to the final loss: loss = L (cross-entropy loss) + sum( alpha * norm(l1.weight[k]) ) where alpha is the hyper-parameter, norm is mostly the L2 norm (in PyTorch it is just torch.norm(l1.weight)), and the index k selects the set of weights you want to regularize. Finally, you don't need to do the global regularization as you have done in the code.
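A minimal sketch of what this looks like in practice (the per-output alphas and the row-wise grouping here are hypothetical — adapt them to your own weight sets):
import torch
import torch.nn as nn

l1 = nn.Linear(10, 3)                 # ninput=10, noutput=3
alphas = [0.01, 0.05, 0.1]            # assumed: one strength per output unit

ce_loss = torch.tensor(0.0)           # stand-in for your classifier loss L
# l1.weight[k] is the weight row feeding output unit k
reg = sum(a * torch.norm(l1.weight[k], p=2) for k, a in enumerate(alphas))
loss = ce_loss + reg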
https://stackoverflow.com/questions/49965727/
PyTorch 3 reshaping error
When training a CNN using PyTorch in Python, I get the following error: RuntimeError: invalid argument 2: size '[-3 x 3136]' is invalid for input with 160000 elements at /opt/conda/conda-bld/pytorch-cpu_1515613813020/work/torch/lib/TH/THStorage.c:41 This is related to the x.view line in the model below: class Net(nn.Module): def __init__(self): super(Net,self).__init__() self.conv1 = nn.Conv2d(3,32,5,padding=2) # 1 input, 32 out, filter size = 5x5, 2 block outer padding self.conv2 = nn.Conv2d(32,64,5,padding=2) # 32 input, 64 out, filter size = 5x5, 2 block padding self.fc1 = nn.Linear(64*7*7,1024) # Fully connected layer self.fc2 = nn.Linear(1024,2) #Fully connected layer 2 out. def forward(self,x): x = F.max_pool2d(F.relu(self.conv1(x)), 2) # Max pool over convolution with 2x2 pooling x = F.max_pool2d(F.relu(self.conv2(x)), 2) # Max pool over convolution with 2x2 pooling x = x.view(-1,64*7*7) # tensor.view() reshapes the tensor x = F.relu(self.fc1(x)) # Activation function after passing through fully connected layer x = F.dropout(x, training=True) #Dropout regularisation x = self.fc2(x) # Pass through final fully connected layer return F.log_softmax(x) # Give results using softmax model = Net() print(model) I'm not sure if this is a result of the images having 3 channels or something else entirely. I understand that this command should reshape the images into single dimensional arrays ready for the fully connected layer so I'm not sure how to fix this issue when the error is claiming an input of 160000 elements.
I will assume your input images are probably of size 200x200px (by size I mean here height x width, not taking the number of channels into account). While your nn.Conv2d layers are defined to output tensors of the same size (with 32 channels for conv1 and 64 channels for conv2), the F.max_pool2d are defined in such a way they divide height and width by 2. So after 2 max-pooling operations, your tensors are of size 200 / (2 * 2) x 200 / (2 * 2) = 50x50px. With the 64 channels from conv2, you get 64 * 50 * 50 = 160000 elements. Now, you need to adapt your view() so that it converts those inputs of shape (batch_size, 64, 50, 50) into (batch_size, 64 * 50 * 50) (to preserve the number of elements). You need to similarly adapt your 1st fully-connected layer. import torch import torch.nn as nn import torch.nn.functional as F import numpy as np class Net(nn.Module): def __init__(self): super(Net,self).__init__() self.conv1 = nn.Conv2d(3,32,5,padding=2) # 1 input, 32 out, filter size = 5x5, 2 block outer padding self.conv2 = nn.Conv2d(32,64,5,padding=2) # 32 input, 64 out, filter size = 5x5, 2 block padding self.fc1 = nn.Linear(64*50*50,1024) # Fully connected layer self.fc2 = nn.Linear(1024,2) #Fully connected layer 2 out. def forward(self,x): x = F.max_pool2d(F.relu(self.conv1(x)), 2) # Max pool over convolution with 2x2 pooling x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2) # Max pool over convolution with 2x2 pooling x = x.view(-1,64*50*50) # tensor.view() reshapes the tensor x = F.relu(self.fc1(x)) # Activation function after passing through fully connected layer x = F.dropout(x, training=True) #Dropout regularisation x = self.fc2(x) # Pass through final fully connected layer return F.log_softmax(x) # Give results using softmax model = Net() print(model) x = np.ones((1, 3, 200, 200)) x = torch.tensor(x).float() # cast to float32 to match the model's weights x = model.forward(x) print(x)
https://stackoverflow.com/questions/49980801/
Getting diagonal "stripe" from matrix in NumPy or PyTorch
I need to get the diagonal "stripe" of a matrix. Say I have a matrix of size KxN (K>N): [[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9 10 11]] From it I need to extract a diagonal stripe, in this case, a matrix MxV size that is created by truncating the original one: [[ 0 x x] [ 3 4 x] [ x 7 8] [ x x 11]] So the result matrix is: [[ 0 4 8] [ 3 7 11]] I could define a boolean mask like so: import numpy as np X=np.arange(12).reshape(4,3) mask=np.asarray([ [ True, False, False], [ True, True, False], [ False, True, True], [ False, False, True] ]) >>> X array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) >>> X.T[mask.T].reshape(3,2).T array([[ 0, 4, 8], [ 3, 7, 11]]) But I don't see how such a mask could be automatically generated for an arbitrary KxN matrix (e.g. 39x9, 360x96) Is there a function that does this automatically either in numpy, scipy or pytorch? Additional question: is it possible to get a "reverse stripe" instead? i.e. [[ x x 2] [ x 4 5] [ 6 7 x] [ 9 x x]]
stride_tricks do the trick: >>> import numpy as np >>> >>> def stripe(a): ... a = np.asanyarray(a) ... *sh, i, j = a.shape ... assert i >= j ... *st, k, m = a.strides ... return np.lib.stride_tricks.as_strided(a, (*sh, i-j+1, j), (*st, k, k+m)) ... >>> a = np.arange(24).reshape(6, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]) >>> stripe(a) array([[ 0, 5, 10, 15], [ 4, 9, 14, 19], [ 8, 13, 18, 23]]) If a is an array this creates a writable view, meaning that if you feel so inclined you can do things like >>> stripe(a)[...] *= 10 >>> a array([[ 0, 1, 2, 3], [ 40, 50, 6, 7], [ 80, 90, 100, 11], [ 12, 130, 140, 150], [ 16, 17, 180, 190], [ 20, 21, 22, 230]]) UPDATE: bottom-left to top-right stripes can be obtained in the same spirit. Only minor complication: It is not based at the same address as the original array. >>> def reverse_stripe(a): ... a = np.asanyarray(a) ... *sh, i, j = a.shape ... assert i >= j ... *st, k, m = a.strides ... return np.lib.stride_tricks.as_strided(a[..., j-1:, :], (*sh, i-j+1, j), (*st, k, m-k)) ... >>> a = np.arange(24).reshape(6, 4) >>> reverse_stripe(a) array([[12, 9, 6, 3], [16, 13, 10, 7], [20, 17, 14, 11]])
https://stackoverflow.com/questions/49982746/
Expected 4D tensor as input, got 2D tensor instead
I'm trying to build a neural network using the pre-trained network VGG16 on Pytorch. I understand that I need to adjust the classifier part of the network, so I have frozen the parameters to prevent backpropagation through them. Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import numpy as np import time import torch from torch import nn from torch import optim import torch.nn.functional as F from torch.autograd import Variable from torchvision import datasets, transforms import torchvision.models as models from collections import OrderedDict data_dir = 'flowers' train_dir = data_dir + '/train' valid_dir = data_dir + '/valid' test_dir = data_dir + '/test' train_transforms = transforms.Compose([transforms.Resize(224), transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) validn_transforms = transforms.Compose([transforms.Resize(224), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))]) test_transforms = transforms.Compose([ transforms.Resize(224), transforms.RandomResizedCrop(224), transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))]) train_data = datasets.ImageFolder(train_dir, transform=train_transforms) validn_data = datasets.ImageFolder(valid_dir, transform=validn_transforms) test_data = datasets.ImageFolder(test_dir, transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True) validnloader = torch.utils.data.DataLoader(validn_data, batch_size=32, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32, shuffle=True) model = models.vgg16(pretrained=True) model for param in model.parameters(): param.requires_grad = False classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(3*224*224, 10000)), ('relu', nn.ReLU()), ('fc2', nn.Linear(10000, 5000)), ('relu', nn.ReLU()), ('fc3', nn.Linear(5000, 102)), ('output', nn.LogSoftmax(dim=1)) ])) model.classifier = classifier classifier criterion = nn.NLLLoss() optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) model.cuda() epochs = 1 steps = 0 training_loss = 0 print_every = 300 for e in range(epochs): model.train() for images, labels in iter(trainloader): steps == 1 images.resize_(32,3*224*224) inputs = Variable(images.cuda()) targets = Variable(labels.cuda()) optimizer.zero_grad() output = model.forward(inputs) loss = criterion(output, targets) loss.backward() optimizer.step() training_loss += loss.data[0] if steps % print_every == 0: print("Epoch: {}/{}... ".format(e+1, epochs), "Loss: {:.4f}".format(training_loss/print_every)) running_loss = 0 Traceback ValueError Traceback (most recent call last) <ipython-input-17-30552f4b46e8> in <module>() 15 optimizer.zero_grad() 16 ---> 17 output = model.forward(inputs) 18 loss = criterion(output, targets) 19 loss.backward() /opt/conda/lib/python3.6/site-packages/torchvision-0.2.0-py3.6.egg/torchvision/models/vgg.py in forward(self, x) 39 40 def forward(self, x): ---> 41 x = self.features(x) 42 x = x.view(x.size(0), -1) 43 x = self.classifier(x) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 323 for hook in self._forward_pre_hooks.values(): 324 hook(self, input) --> 325 result = self.forward(*input, **kwargs) 326 for hook in self._forward_hooks.values(): 327 hook_result = hook(self, input, result) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input) 65 def forward(self, input): 66 for module in self._modules.values(): ---> 67 input = module(input) 68 return input 69 /opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 323 for hook in self._forward_pre_hooks.values(): 324 hook(self, input) --> 325 result = self.forward(*input, **kwargs) 326 for hook in self._forward_hooks.values(): 327 hook_result = hook(self, input, result) /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input) 275 def forward(self, input): 276 return F.conv2d(input, self.weight, self.bias, self.stride, --> 277 self.padding, self.dilation, self.groups) 278 279 /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py in conv2d(input, weight, bias, stride, padding, dilation, groups) 83 """ 84 if input is not None and input.dim() != 4: ---> 85 raise ValueError("Expected 4D tensor as input, got {}D tensor instead.".format(input.dim())) 86 87 f = _ConvNd(_pair(stride), _pair(padding), _pair(dilation), False, ValueError: Expected 4D tensor as input, got 2D tensor instead. Could it be because I am using Linear operation in my layer definitions?
There are two problems with your network - You created your own classifier whose first layer accepts input of size (3*224*224), but this is not the output size of the features part of vgg16. Features output a tensor of size (25088) You are resizing your input to be a tensor of shape (3*224*224) (for each batch) but the features part of vgg16 expects an input of (3, 224, 224). Your custom classifier comes after the features, so you need to prepare your input for features not for classifier. Solution To solve the first you need to change your definition of classifier to - classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(25088, 10000)), ('relu', nn.ReLU()), ('fc2', nn.Linear(10000, 5000)), ('relu', nn.ReLU()), ('fc3', nn.Linear(5000, 102)), ('output', nn.LogSoftmax(dim=1)) ])) To solve the second problem, change images.resize_(32,3*224*224) to images.resize_(32, 3, 224, 224). P.S. - A word of advice - Your classifier's first layer output of 10000 units is very large. You should try to keep it around 4000 as done in the original classifier (Even better if you use the original weights for the first layer only, as those have proven to be good features as well over the time)
https://stackoverflow.com/questions/49993776/
What are transforms in PyTorch used for?
I am new with Pytorch and not very expert in CNN. I have done a successful classifier with the tutorial that they provide Tutorial Pytorch, but I don't really understand what I am doing when loading the data. They do some data augmentation and normalisation for training, but when I try to modify the parameters, the code does not work. # Data augmentation and normalization for training # Just normalization for validation data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } Am I extending my training dataset? I don't see the data augmentation. Why if I modify the value of transforms.RandomResizedCrop(224) the data loading stop working? Do I need to transform as well the test dataset? I am a bit confused with this data transformation that they do.
transforms.Compose just clubs all the transforms provided to it. So, all the transforms in the transforms.Compose are applied to the input one by one. Train transforms transforms.RandomResizedCrop(224): This will extract a patch of size (224, 224) from your input image randomly. So, it might pick this patch from top-left, bottom-right or anywhere in between. So, you are doing data augmentation in this part. Also, changing this value won't play nice with the fully-connected layers in your model, so it is not advised to change this. transforms.RandomHorizontalFlip(): Once we have our image of size (224, 224), we can choose to flip it. This is another part of data augmentation. transforms.ToTensor(): This just converts your input image to a PyTorch tensor. transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]): This is just input data scaling and these values (mean and std) must have been precomputed for your dataset. Changing these values is also not advised. Validation transforms transforms.Resize(256): First your input image is resized to be of size (256, 256) transforms.CenterCrop(224): Crops the center part of the image of shape (224, 224) The rest are the same as for train. P.S.: You can read more about these transformations in the official docs
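To see the pipeline in action, here is a tiny sketch that pushes one image through the train transforms (the file name is hypothetical; data_transforms is the dict from the question):
from PIL import Image

img = Image.open('flower.jpg')            # hypothetical input image
out = data_transforms['train'](img)       # apply the whole Compose pipeline
print(out.size())                         # torch.Size([3, 224, 224])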
https://stackoverflow.com/questions/50002543/
Quickly find indices that have values larger than a threshold in Numpy/PyTorch
Task Given a numpy or pytorch matrix, find the indices of cells that have values that are larger than a given threshold. My implementation #abs_cosine is the matrix #sim_vec is the wanted sim_vec = [] for m in range(abs_cosine.shape[0]): for n in range(abs_cosine.shape[1]): # exclude diagonal cells if m != n and abs_cosine[m][n] >= threshold: sim_vec.append((m, n)) Concerns Speed. All other computations are built on Pytorch, using numpy is already a compromise, because it has moved computations from GPU to CPU. Pure python for loops will make the whole process even worse (already 5 times slower for a small data set). I was wondering if we can move the whole computation to Numpy (or pytorch) without invoking any for loops? An improvement I can think of (but got stuck...) bool_cosine = abs_cosine > threshold which returns a boolean matrix of True and False. But I cannot find a way to quickly retrieve the indices of the True cells.
The following is for PyTorch (fully on GPU) # abs_cosine should be a Tensor of shape (m, m) mask = torch.ones(abs_cosine.size()[0]) mask = 1 - mask.diag() sim_vec = torch.nonzero((abs_cosine >= threshold)*mask) # sim_vec is a tensor of shape (?, 2) where the first column is the row index and second is the column index The following works in numpy mask = 1 - np.diag(np.ones(abs_cosine.shape[0])) sim_vec = np.nonzero((abs_cosine >= 0.2)*mask) # sim_vec is a 2-array tuple where the first array is the row index and the second array is column index
https://stackoverflow.com/questions/50045202/
How do you load MNIST images into Pytorch DataLoader?
The pytorch tutorial for data loading and processing is quite specific to one example, could someone help me with what the function should look like for a more generic simple loading of images? Tutorial: http://pytorch.org/tutorials/beginner/data_loading_tutorial.html My Data: I have the MNIST dataset as jpg's in the following folder structure. (I know I can just use the dataset class, but this is purely to see how to load simple images into pytorch without csv's or complex features). The folder name is the label and the images are 28x28 png's in greyscale, no transformations required. data train 0 3.png 5.png 13.png 23.png ... 1 3.png 10.png 11.png ... 2 4.png 13.png ... 3 8.png ... 4 ... 5 ... 6 ... 7 ... 8 ... 9 ...
Here's what I did for pytorch 0.4.1 (should still work in 1.3) def load_dataset(): data_path = 'data/train/' train_dataset = torchvision.datasets.ImageFolder( root=data_path, transform=torchvision.transforms.ToTensor() ) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=64, num_workers=0, shuffle=True ) return train_loader for batch_idx, (data, target) in enumerate(load_dataset()): #train network
https://stackoverflow.com/questions/50052295/
Lack of Sparse Solution with L1 Regularization in Pytorch
I am trying to implement L1 regularization onto the first layer of a simple neural network (1 hidden layer). I looked into some other posts on StackOverflow that apply l1 regularization using Pytorch to figure out how it should be done (references: Adding L1/L2 regularization in PyTorch?, In Pytorch, how to add L1 regularizer to activations?). No matter how high I increase lambda (the l1 regularization strength parameter) I do not get true zeros in the first weight matrix. Why would this be? (Code is below) import torch import torch.nn as nn import torch.nn.functional as F import numpy as np class Network(nn.Module): def __init__(self,nf,nh,nc): super(Network,self).__init__() self.lin1=nn.Linear(nf,nh) self.lin2=nn.Linear(nh,nc) def forward(self,x): l1out=F.relu(self.lin1(x)) out=F.softmax(self.lin2(l1out)) return out, l1out def l1loss(layer): return torch.norm(layer.weight.data, p=1) nf=10 nc=2 nh=6 learningrate=0.02 lmbda=10. batchsize=50 net=Network(nf,nh,nc) crit=nn.MSELoss() optimizer=torch.optim.Adagrad(net.parameters(),lr=learningrate) xtr=torch.Tensor(xtr) ytr=torch.Tensor(ytr) #ytr=torch.LongTensor(ytr) xte=torch.Tensor(xte) yte=torch.LongTensor(yte) #cyte=torch.Tensor(yte) it=200 for epoch in range(it): per=torch.randperm(len(xtr)) for i in range(0,len(xtr),batchsize): ind=per[i:i+batchsize] bx,by=xtr[ind],ytr[ind] optimizer.zero_grad() output, l1out=net(bx) # l1reg=l1loss(net.lin1) loss=crit(output,by)+lmbda*l1loss(net.lin1) loss.backward() optimizer.step() print('Epoch [%i/%i], Loss: %.4f' %(epoch+1,it, np.float32(loss.data.numpy()))) corr=0 tot=0 for x,y in list(zip(xte,yte)): output,_=net(x) _,pred=torch.max(output,-1) tot+=1 #y.size(0) corr+=(pred==y).sum() print(corr) Note: The data has 10 features (2 classes and 800 training samples) and only the first 2 are relevant (by design) so one would assume true zeros should be easy enough to learn.
Your usage of layer.weight.data removes the parameter (which is a PyTorch variable) from its automatic differentiation context, making it a constant when the optimiser takes the gradients. This results in zero gradients for the L1 term, so the penalty never actually affects the weights. If you remove the .data, the norm is computed on the PyTorch variable and the gradients will be correct. For more information on PyTorch's automatic differentiation mechanics, see this docs article or this tutorial.
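Concretely, the fix is a one-line change to the regulariser from the question (a sketch; the rest of the training loop stays the same):
def l1loss(layer):
    # no .data here -> the norm stays in the autograd graph, so gradients flow
    return torch.norm(layer.weight, p=1)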
https://stackoverflow.com/questions/50054049/
Load a single layer weights from a pretrained model
I want to specifically add the pretrained model parameters of some layers to my new network. For a Linear layer I just did: model_enc.linear_3d.weight = model_trained.linear_3d.weight model_enc.linear_3d.bias = model_trained.linear_3d.bias Will this suffice, are there any other parameters that I need to load, or is there an easier way than this? My module is already trained and I just want to load the params for a few layers. Thank you
Your solution should work and seems easy enough to me. From the source code on https://pytorch.org/docs/master/_modules/torch/nn/modules/linear.html#Linear you can see that the nn.Linear module has the attributes in_features, out_features, weight and bias: def __init__(self, in_features, out_features, bias=True): super(Linear, self).__init__() self.in_features = in_features self.out_features = out_features self.weight = Parameter(torch.Tensor(out_features, in_features)) if bias: self.bias = Parameter(torch.Tensor(out_features)) else: self.register_parameter('bias', None) self.reset_parameters() Therefore, as long as your in_features and out_features are identical you can just replace the weights and bias as you did. Alternatively, you can replace the entire Linear module in one network with the module of the other if you stored it as an attribute.
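If you prefer not to reassign the Parameter objects themselves, copying the values in place also works — a small sketch (module names taken from the question; assumes matching shapes):
model_enc.linear_3d.weight.data.copy_(model_trained.linear_3d.weight.data)
model_enc.linear_3d.bias.data.copy_(model_trained.linear_3d.bias.data)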
https://stackoverflow.com/questions/50059592/
Load a single image in a pretrained pytorch net
Total newbie here, I'm using this pytorch SegNet implementation with a '.pth' file containing weights from a 50 epochs training. How can I load a single test image and see the net prediction? I know this may sound like a stupid question but I'm stuck. What I've got is: from segnet import SegNet import torch model = SegNet(2) model.load_state_dict(torch.load('./model_segnet_epoch50.pth')) How do I "use" the net on a single test picture?
output = model(image). Note that the image should be a Variable object and that the output will be as well. If your image is, for example, a Numpy array, you can convert it like so: var_image = Variable(torch.Tensor(image))
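Putting it together, a minimal end-to-end sketch (the file name, scaling and channel layout are assumptions — match whatever preprocessing the 50-epoch training used):
import numpy as np
import torch
from torch.autograd import Variable
from PIL import Image

img = Image.open('test.png')                       # hypothetical test image
arr = np.asarray(img, dtype=np.float32) / 255.0    # assumed scaling
arr = arr.transpose(2, 0, 1)                       # HWC -> CHW

model.eval()
x = Variable(torch.from_numpy(arr).unsqueeze(0))   # add batch dim -> (1, C, H, W)
output = model(x)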
https://stackoverflow.com/questions/50063514/
AttributeError: 'torch.FloatTensor' object has no attribute 'item'
Here are the codes: from __future__ import print_function from itertools import count import torch import torch.autograd import torch.nn.functional as F POLY_DEGREE = 4 W_target = torch.randn(POLY_DEGREE, 1) * 5 b_target = torch.randn(1) * 5 def make_features(x): x = x.unsqueeze(1) return torch.cat([x ** i for i in range(1, POLY_DEGREE+1)], 1) def f(x): return x.mm(W_target) + b_target.item() This resulted in the following error message: AttributeError: 'torch.FloatTensor' object has no attribute 'item' How can I solve this please?
The function item() is new from PyTorch 0.4.0. When using earlier versions of PyTorch you will get this error. So you can upgrade your PyTorch version to solve this. Edit: I went through your example again. What do you want to achieve with item()? In your case item() should just give you the (python) float value in the tensor. Why do you want to use this? You can just leave out item(). So: def f(x): return x.mm(W_target) + b_target instead of: def f(x): return x.mm(W_target) + b_target.item() This should work for you; in PyTorch 0.4.0 there is no difference. It is also more efficient to leave out item().
https://stackoverflow.com/questions/50086577/
How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0?
Pytorch 0.4.0 introduced the merging on the Tensor and Variable classes. Before this version, when I wanted to create a Variable with autograd from a numpy array I would do the following (where x is a numpy array): x = Variable(torch.from_numpy(x).float(), requires_grad=True) With PyTorch version 0.4.0, the migration guide shows how we can create Tensors with autograd enabled, examples show you can do things such as x = torch.ones(3, 4, requires_grad=True) and also set requires_grad to existing tensors existing_tensor.requires_grad_() I have tried the following three things to try and create a Tensor with requires_grad=True which give errors (where x is a numpy array): The first is x = FloatTensor(x, requires_grad=True) which gives the error TypeError: new() received an invalid combination of arguments - got (numpy.ndarray, requires_grad=bool), but expected one of: * (torch.device device) * (tuple of ints size, torch.device device) didn't match because some of the keywords were incorrect: requires_grad * (torch.Storage storage) * (Tensor other) * (object data, torch.device device) didn't match because some of the keywords were incorrect: requires_grad The second is to do x = FloatTensor(x) x.requires_grad() And the third is x = torch.from_numpy(x).single() x.requires_grad() Which both throw the following error on the second line: TypeError: 'bool' object is not callable These errors give me little hint at what I'm doing wrong, and since the latest version is so new its hard to find content online to help. How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0, preferably in a single line?
How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0, preferably in a single line? If x is your numpy array this line should do the trick: torch.tensor(x, requires_grad=True) Here is a full example tested with PyTorch 0.4.0: import numpy as np import torch x = np.array([1.3, 0.5, 1.9, 2.45]) print('np.array:', x) t = torch.tensor(x, requires_grad=True) print('tensor:', t) print('requires_grad:', t.requires_grad) This gives the following output: np.array: [1.3 0.5 1.9 2.45] tensor: tensor([ 1.3000, 0.5000, 1.9000, 2.4500], dtype=torch.float64) requires_grad: True Edit: dtype should be determined by the given dtype of your numpy array x. I hope this helps.
https://stackoverflow.com/questions/50087252/
How to apply a custom function to specific columns in a matrix in PyTorch
I have a tensor of size [150, 182, 91]; the first part is just the batch size while the matrix I am interested in is the 182x91 one. I need to run a function on the 182x91 matrix for each of the 150 batch entries separately. I need to get a diagonal matrix stripe of the 182x91 matrix, and the function I am using is the following one (based on my previous question: Getting diagonal matrix stripe automatically in numpy or pytorch): def stripe(a): i, j = a.size() assert (i >= j) out = torch.zeros((i - j + 1, j)) for diag in range(0, i - j + 1): out[diag] = torch.diag(a, -diag) return out The stripe function expects a matrix of size IxJ and can't deal with the 3rd dimension. So when I run this: some_matrix = x # <class 'torch.autograd.variable.Variable'> torch.Size([150, 182, 91]) get_diag = stripe(some_matrix) I get this Error: ValueError: too many values to unpack (expected 2) If I just try to skip the first dimension by doing x, i, j = a.size(), I get this: RuntimeError: invalid argument 1: expected a matrix or a vector at I am still on PyTorch 0.3.1. Any help is appreciated!
You can map the stripe function over the first dimension of your tensor using torch.unbind as In [1]: import torch In [2]: def strip(a): ...: i, j = a.size() ...: assert(i >= j) ...: out = torch.zeros((i - j + 1, j)) ...: for diag in range(0, i - j + 1): ...: out[diag] = torch.diag(a, -diag) ...: return out ...: ...: In [3]: a = torch.randn((182, 91)).cuda() In [5]: output = strip(a) In [6]: output.size() Out[6]: torch.Size([92, 91]) In [7]: a = torch.randn((150, 182, 91)) In [8]: output = list(map(strip, torch.unbind(a, 0))) In [9]: output = torch.stack(output, 0) In [10]: output.size() Out[10]: torch.Size([150, 92, 91])
https://stackoverflow.com/questions/50090821/
training a RNN in Pytorch
I want to have an RNN model and teach it to learn generating "ihello" from "hihell". I am new in Pytorch and following the instruction in a video to write the code. I have written two python files named train.py and model.py. this is model.py: #----------------- model for teach rnn hihell to ihello #----------------- OUR MODEL --------------------- import torch import torch.nn as nn from torch import autograd class Model(nn.Module): def __init__(self): super(Model,self).__init__() self.rnn=nn.RNN(input_size=input_size,hidden_size=hidden_size,batch_first=True) def forward(self,x,hidden): #Reshape input in (batch_size,sequence_length,input_size) x=x.view(batch_size,sequence_length,input_size) #Propagate input through RNN #Input:(batch,seq+len,input_size) out,hidden=self.rnn(x,hidden) out=out.view(-1,num_classes) return hidden,out def init_hidden(self): #Initialize hidden and cell states #(num_layers*num_directions,batch,hidden_size) return autograd.Variable(torch.zeros(num_layers,batch_size,hidden_size)) and this is train.py: """----------------------train for teach rnn to hihell to ihello--------------------------""" #----------------- DATA PREPARATION --------------------- #Import import torch import torch.nn as nn from torch import autograd from model import Model import sys idx2char=['h','i','e','l','o'] #Teach hihell->ihello x_data=[0,1,0,2,3,3]#hihell y_data=[1,0,2,3,3,4]#ihello one_hot_lookup=[[1,0,0,0,0],#0 [0,1,0,0,0],#1 [0,0,1,0,0],#2 [0,0,0,1,0],#3 [0,0,0,0,1]]#4 x_one_hot=[one_hot_lookup[x] for x in x_data] inputs=autograd.Variable(torch.Tensor(x_one_hot)) labels=autograd.Variable(torch.LongTensor(y_data)) """ ----------- Parameters Initialization------------""" num_classes = 5 input_size = 5 # one hot size hidden_size = 5 # output from LSTM to directly predict onr-hot batch_size = 1 # one sequence sequence_length = 1 # let's do one by one num_layers = 1 # one layer RNN """----------------- LOSS AND TRAINING ---------------------""" #Instantiate RNN model model=Model() #Set loss and optimizer function #CrossEntropyLoss=LogSoftmax+NLLLOSS criterion=torch.nn.CrossEntropyLoss() optimizer=torch.optim.Adam(model.parameters(),lr=0.1) """----------------Train the model-------------------""" for epoch in range(100): optimizer.zero_grad() loss=0 hidden=model.init_hidden() sys.stdout.write("Predicted String:") for input,label in zip(inputs,labels): #print(input.size(),label.size()) hidden,output=model(input,hidden) val,idx=output.max(1) sys.stdout.write(idx2char[idx.data[0]]) loss+=criterion(output,label) print(",epoch:%d,loss:%1.3f"%(epoch+1,loss.data[0])) loss.backward() optimizer.step() when I run train.py, I receive this error: self.rnn=nn.RNN(input_size=input_size,hidden_size=hidden_size,batch_first=True) NameError: name 'input_size' is not defined I don't know why I receive this error because I have input_size=5 in the above lines of my code. could anybody help me? thanks.
The scope of the variables defined in train.py (num_classes, input_size, ...) is the train.py itself. They are only visible in this file. The model.py is oblivious to these. I suggest including these arguments in the constructor: class Model(nn.Module): def __init__(self, hidden_size, input_size): # same and then call the Model as: model = Model(hidden_size, input_size) Similarly, for other variables that you defined in train.py (and want to use them in model.py) you have to pass them as arguments to either their respective functions, or to the constructor and store them as attributes.
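Spelled out, a sketch of the constructor wiring (model.py then no longer references any globals from train.py):
# model.py
import torch
import torch.nn as nn
from torch import autograd

class Model(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers,
                 num_classes, batch_size, sequence_length):
        super(Model, self).__init__()
        # store what forward() and init_hidden() need
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.num_classes = num_classes
        self.batch_size = batch_size
        self.sequence_length = sequence_length
        self.rnn = nn.RNN(input_size=input_size,
                          hidden_size=hidden_size, batch_first=True)

    def init_hidden(self):
        # uses the stored attributes instead of train.py globals
        return autograd.Variable(
            torch.zeros(self.num_layers, self.batch_size, self.hidden_size))

# train.py then passes its own constants in:
# model = Model(input_size, hidden_size, num_layers,
#               num_classes, batch_size, sequence_length)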
https://stackoverflow.com/questions/50149049/
How does GPU utilization work in the context of neural network training?
I am using an AWS p3.2xlarge instance with the Deep Learning AMI (DLAMI). This instance has a single Tesla V100 (640 Tensor Cores and 5,120 CUDA Cores). When I run the PyTorch Seq2Seq Jupyter Notebook, I noticed that only 25% of the GPU is used. I monitor the GPU usage with the following command watch -n 1 nvidia-smi. My question is, what determines GPU usage? Or, why is the GPU usage not 100%? The reason behind this question is related not only to inefficiency that may be a result of code but also cost ($3.06/hour). I am wondering if there is anything more that I can do to maximize the GPU usage. Of course, this is a deep learning model that is being learned, and the training code sends one sample at a time through the network for learning. I am thinking that mini-batch learning may not be appropriate (e.g. sending a couple of samples through before backpropagating). I am also wondering if the network architecture (the number of layers, their parameters, their input tensor dimensions, etc.) constrains how GPU is being used. For example, if I add more layers or add more hidden nodes, should I expect GPU usage to go up?
The power of GPUs over CPUs is to run many operations at the same time. However achieving this high level of parallelization is not always easy. Frameworks like Tensorflow or PyTorch do their best to optimise everything for GPU and parallelisation, but this is not possible for every case. Computations in LSTMs and RNNs in general can be only parallelized to a very limited degree. The problem lies in their sequential structure: LSTMs and RNNs process only one input at a time, and they need to process everything in chronological order (to compute n+1 you always need to compute n before) - otherwise it wouldn't make sense. So the natural way of processing data in RNNs is completely the opposite of parallelization; using mini-batching does help a lot, but it does not solve the fundamental problem of LSTMs. If you want a high amount of parallelization you need to use architectures like the "Transformer" proposed in the paper "Attention is all you need" by Google. Summary The degree of parallelization, and hence the GPU acceleration of your model, depends to a large extent on the architecture of the model itself. With some architectures like RNNs parallelization is only possible to a limited degree. Edit: For example, if I add more layers or add more hidden nodes, should I expect GPU usage to go up? When increasing the number of units within a layer you should expect the GPU usage to go up, since matrix operations like passing an input to a hidden layer can be well parallelized. Adding layers is different: there you have the same problem that causes RNNs to be slow on GPU. To compute the next layer you need to have the result of the previous layer already. So you need to compute one layer after another; it's not possible to compute all at the same time. This is the theory - in practice you might see some minor differences in GPU usage, depending on the actual implementation of the framework.
https://stackoverflow.com/questions/50164417/
Does a clean and extendable LSTM implementation exists in PyTorch?
I would like to create an LSTM class by myself, however, I don't want to rewrite the classic LSTM functions from scratch again. Digging in the code of PyTorch, I only find a dirty implementation involving at least 3-4 classes with inheritance: https://github.com/pytorch/pytorch/blob/98c24fae6b6400a7d1e13610b20aa05f86f77070/torch/nn/modules/rnn.py#L323 https://github.com/pytorch/pytorch/blob/98c24fae6b6400a7d1e13610b20aa05f86f77070/torch/nn/modules/rnn.py#L12 https://github.com/pytorch/pytorch/blob/98c24fae6b6400a7d1e13610b20aa05f86f77070/torch/nn/_functions/rnn.py#L297 Does a clean PyTorch implementation of an LSTM exist somewhere? Any links would help. For example, I know that clean implementations of an LSTM exist in TensorFlow, but I would need to derive a PyTorch one; what I'm searching for is an implementation just as clean, but in PyTorch.
The best implementation I found is here https://github.com/pytorch/benchmark/blob/master/rnns/benchmarks/lstm_variants/lstm.py It even implements four different variants of recurrent dropout, which is very useful! If you take the dropout parts away you get import math import torch as th import torch.nn as nn class LSTM(nn.Module): def __init__(self, input_size, hidden_size, bias=True): super(LSTM, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.bias = bias self.i2h = nn.Linear(input_size, 4 * hidden_size, bias=bias) self.h2h = nn.Linear(hidden_size, 4 * hidden_size, bias=bias) self.reset_parameters() def reset_parameters(self): std = 1.0 / math.sqrt(self.hidden_size) for w in self.parameters(): w.data.uniform_(-std, std) def forward(self, x, hidden): h, c = hidden h = h.view(h.size(1), -1) c = c.view(c.size(1), -1) x = x.view(x.size(1), -1) # Linear mappings preact = self.i2h(x) + self.h2h(h) # activations gates = preact[:, :3 * self.hidden_size].sigmoid() g_t = preact[:, 3 * self.hidden_size:].tanh() i_t = gates[:, :self.hidden_size] f_t = gates[:, self.hidden_size:2 * self.hidden_size] o_t = gates[:, -self.hidden_size:] c_t = th.mul(c, f_t) + th.mul(i_t, g_t) h_t = th.mul(o_t, c_t.tanh()) h_t = h_t.view(1, h_t.size(0), -1) c_t = c_t.view(1, c_t.size(0), -1) return h_t, (h_t, c_t) PS: The repository contains many more variants of LSTM and other RNNs: https://github.com/pytorch/benchmark/tree/master/rnns/benchmarks. Check it out, maybe the extension you had in mind is already there! EDIT: As mentioned in the comments, you can wrap the LSTM cell above to process sequential output: import math import torch as th import torch.nn as nn class LSTMCell(nn.Module): def __init__(self, input_size, hidden_size, bias=True): # As before def reset_parameters(self): # As before def forward(self, x, hidden): if hidden is None: hidden = self._init_hidden(x) # Rest as before @staticmethod def _init_hidden(input_): h = th.zeros_like(input_.view(1, input_.size(1), -1)) c = th.zeros_like(input_.view(1, input_.size(1), -1)) return h, c class LSTM(nn.Module): def __init__(self, input_size, hidden_size, bias=True): super().__init__() self.lstm_cell = LSTMCell(input_size, hidden_size, bias) def forward(self, input_, hidden=None): # input_ is of dimensionality (1, time, input_size, ...) outputs = [] for x in th.unbind(input_, dim=1): out, hidden = self.lstm_cell(x, hidden) # the cell returns (h_t, (h_t, c_t)) outputs.append(out.clone()) return th.stack(outputs, dim=1) I haven't tested the code since I'm working with a convLSTM implementation. Please let me know if something is wrong. UPDATE: Fixed links.
https://stackoverflow.com/questions/50168224/
Adapting pytorch softmax function
I am currently looking into the softmax function and I would like to adapt the original implementation for some small tests. I have been to the docs but there wasn't that much useful information about the function. This is the pytorch python implementation: def __init__(self, dim=None): super(Softmax, self).__init__() self.dim = dim def __setstate__(self, state): self.__dict__.update(state) if not hasattr(self, 'dim'): self.dim = None def forward(self, input): return F.softmax(input, self.dim, _stacklevel=5) Where can I find the F.softmax implementation? One of the things I want to try, for instance, is the soft-margin softmax described here: Soft-Margin Softmax for Deep Classification Where would be the best place to start? Thanks in advance!
Softmax Implementation in PyTorch and Numpy A Softmax function is defined as follows: softmax(x_i) = exp(x_i) / sum_j exp(x_j) A direct implementation of the above formula is as follows: def softmax(x): return np.exp(x) / np.exp(x).sum(axis=0) The above implementation can run into arithmetic overflow because of np.exp(x). To avoid the overflow, we can divide the numerator and denominator in the softmax equation with a constant C. Then the softmax function becomes the following: softmax(x_i) = exp(x_i + log(C)) / sum_j exp(x_j + log(C)) The above approach is implemented in PyTorch and we take log(C) as -max(x). Below is the PyTorch implementation: def softmax_torch(x): # Assuming x has at least 2 dimensions maxes = torch.max(x, 1, keepdim=True)[0] x_exp = torch.exp(x-maxes) x_exp_sum = torch.sum(x_exp, 1, keepdim=True) probs = x_exp/x_exp_sum return probs A corresponding Numpy equivalent is as follows (note that np.max returns the values directly, so unlike torch.max no [0] indexing is needed): def softmax_np(x): maxes = np.max(x, axis=1, keepdims=True) x_exp = np.exp(x-maxes) x_exp_sum = np.sum(x_exp, 1, keepdims=True) probs = x_exp/x_exp_sum return probs We can compare the results with the PyTorch implementation - torch.nn.functional.softmax - using the snippet below: import torch import numpy as np if __name__ == "__main__": x = torch.randn(1, 3, 5, 10) std_pytorch_softmax = torch.nn.functional.softmax(x) pytorch_impl = softmax_torch(x) numpy_impl = softmax_np(x.detach().cpu().numpy()) print("Shapes: x --> {}, std --> {}, pytorch impl --> {}, numpy impl --> {}".format(x.shape, std_pytorch_softmax.shape, pytorch_impl.shape, numpy_impl.shape)) print("Std and torch implementation are same?", torch.allclose(std_pytorch_softmax, pytorch_impl)) print("Std and numpy implementation are same?", torch.allclose(std_pytorch_softmax, torch.from_numpy(numpy_impl))) References: Softmax discussion Discussion on Softmax implementation at PyTorch forum An SO thread on implementation of Softmax in Python
https://stackoverflow.com/questions/50170011/
Implementing word dropout in pytorch
I want to add word dropout to my network so that I can have sufficient training examples for training the embedding of the "unk" token. As far as I'm aware, this is standard practice. Let's assume the index of the unk token is 0, and the index for padding is 1 (we can switch them if that's more convenient). This is a simple CNN network which implements word dropout the way I would have expected it to work: class Classifier(nn.Module): def __init__(self, params): super(Classifier, self).__init__() self.params = params self.word_dropout = nn.Dropout(params["word_dropout"]) self.pad = torch.nn.ConstantPad1d(max(params["window_sizes"])-1, 1) self.embedding = nn.Embedding(params["vocab_size"], params["word_dim"], padding_idx=1) self.convs = nn.ModuleList([nn.Conv1d(1, params["feature_num"], params["word_dim"] * window_size, stride=params["word_dim"], bias=False) for window_size in params["window_sizes"]]) self.dropout = nn.Dropout(params["dropout"]) self.fc = nn.Linear(params["feature_num"] * len(params["window_sizes"]), params["num_classes"]) def forward(self, x, l): x = self.word_dropout(x) x = self.pad(x) embedded_x = self.embedding(x) embedded_x = embedded_x.view(-1, 1, x.size()[1] * self.params["word_dim"]) # [batch_size, 1, seq_len * word_dim] features = [F.relu(conv(embedded_x)) for conv in self.convs] pooled = [F.max_pool1d(feat, feat.size()[2]).view(-1, params["feature_num"]) for feat in features] pooled = torch.cat(pooled, 1) pooled = self.dropout(pooled) logit = self.fc(pooled) return logit Don't mind the padding - pytorch doesn't have an easy way of using non zero padding in CNNs, much less trainable non-zero padding, so I'm doing it manually. Dropout also doesn't allow me to use non zero dropout, and I want to separate the padding token from the unk token. I'm keeping it in my example because it's the reason for this question's existence. This doesn't work because dropout wants Float Tensors so that it can scale them properly, while my input is Long Tensors that don't need to be scaled. Is there an easy way of doing this in pytorch? I essentially want to use LongTensor-friendly dropout (bonus: better if it will let me specify a dropout constant that isn't 0, so that I could use zero padding).
Actually I would do it outside of your model, before converting your input into a LongTensor. This would look like this: import random def add_unk(input_token_id, p): #random.random() gives you a value between 0 and 1 #to avoid switching your padding to 0 we add 'input_token_id > 1' if random.random() < p and input_token_id > 1: return 0 else: return input_token_id #then you have your input token_id #for this example I take just a random number, let's say 127 input_token_id = 127 #let p be your probability for UNK p = 0.01 your_input_tensor = torch.LongTensor([add_unk(input_token_id, p)]) Edit: So there are two options which come to my mind which are actually GPU-friendly. In general both solutions should be much more efficient. Option one - Doing computation directly in forward(): If you're not using torch.utils and don't have plans to use it later this is probably the way to go. Instead of doing the computation before, we just do it in the forward() method of the main PyTorch class. However I see no (simple) way of doing this in torch 0.3.1, so you would need to upgrade to version 0.4.0: So imagine x is your input vector: >>> x = torch.tensor(range(10)) >>> x tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) probs is a vector containing uniform probabilities for dropout so we can check later against our probability for dropout: >>> probs = torch.empty(10).uniform_(0, 1) >>> probs tensor([ 0.9793, 0.1742, 0.0904, 0.8735, 0.4774, 0.2329, 0.0074, 0.5398, 0.4681, 0.5314]) Now we apply the dropout probabilities probs on our input x: >>> torch.where(probs > 0.2, x, torch.zeros(10, dtype=torch.int64)) tensor([ 0, 0, 0, 3, 4, 5, 0, 7, 8, 9]) Note: To see some effect I chose a dropout probability of 0.2 here. In reality you probably want it to be smaller. You can pick for this any token / id you like, here is an example with 42 as unknown token id: >>> unk_token = 42 >>> torch.where(probs > 0.2, x, torch.empty(10, dtype=torch.int64).fill_(unk_token)) tensor([ 0, 42, 42, 3, 4, 5, 42, 7, 8, 9]) torch.where comes with PyTorch 0.4.0: https://pytorch.org/docs/master/torch.html#torch.where I don't know about the shapes of your network, but your forward() should look something like this then (when using mini-batching you need to flatten the input before applying dropout): def forward_train(self, x, l): # probabilities probs = torch.empty(x.size(0)).uniform_(0, 1) # applying word dropout x = torch.where(probs > 0.02, x, torch.zeros(x.size(0), dtype=torch.int64)) # continue like before ... x = self.pad(x) embedded_x = self.embedding(x) embedded_x = embedded_x.view(-1, 1, x.size()[1] * self.params["word_dim"]) # [batch_size, 1, seq_len * word_dim] features = [F.relu(conv(embedded_x)) for conv in self.convs] pooled = [F.max_pool1d(feat, feat.size()[2]).view(-1, params["feature_num"]) for feat in features] pooled = torch.cat(pooled, 1) pooled = self.dropout(pooled) logit = self.fc(pooled) return logit Note: I named the function forward_train() so you should use another forward() without dropout for evaluation / predicting. But you could also use some if conditions with train(). Option two: using torch.utils.data.Dataset: If you're using Dataset provided by torch.utils it is very easy to do this kind of pre-processing efficiently. Dataset uses strong multi-processing acceleration by default so the code sample above just has to be executed in the __getitem__ method of your Dataset class. This could look like this: def __getitem__(self, index): 'Generates one sample of data' # Select sample ID = self.input_tokens[index] # Load data and get label # using the add_unk function from the code above X = torch.LongTensor(add_unk(ID, p=0.01)) y = self.targets[index] return X, y This is a bit out of context and doesn't look very elegant but I think you get the idea. According to this blog post of Shervine Amidi at Stanford it should be no problem to do more complex pre-processing steps in this function: Since our code [Dataset is meant] is designed to be multicore-friendly, note that you can do more complex operations instead (e.g. computations from source files) without worrying that data generation becomes a bottleneck in the training process. The linked blog post - "A detailed example of how to generate your data in parallel with PyTorch" - also provides a good guide for implementing the data generation with Dataset and DataLoader. I guess you'll prefer option one - only two lines and it should be very efficient. :) Good luck!
https://stackoverflow.com/questions/50174230/
AttributeError: 'module' object has no attribute 'float32'
I am trying to use OpenNMT-py with python 2.7. OpenNMT-py requires torchtext, so I installed it but now when I am running my program, I am getting the following error message. Traceback (most recent call last): File "examples/StackPointerParser.py", line 23, in <module> from neuronlp2.io import get_logger, conllx_stacked_data File "./neuronlp2/__init__.py", line 7, in <module> from . import models File "./neuronlp2/models/__init__.py", line 4, in <module> from .parsing import * File "./neuronlp2/models/parsing.py", line 15, in <module> from onmt.modules import LayerNorm, Transformer File "/home/wasiahmad/software/anaconda2/lib/python2.7/site-packages/onmt/__init__.py", line 1, in <module> import onmt.io File "/home/wasiahmad/software/anaconda2/lib/python2.7/site-packages/onmt/io/__init__.py", line 1, in <module> from onmt.io.IO import collect_feature_vocabs, make_features, \ File "/home/wasiahmad/software/anaconda2/lib/python2.7/site-packages/onmt/io/IO.py", line 8, in <module> import torchtext.data File "/home/wasiahmad/software/anaconda2/lib/python2.7/site-packages/torchtext/__init__.py", line 1, in <module> from . import data File "/home/wasiahmad/software/anaconda2/lib/python2.7/site-packages/torchtext/data/__init__.py", line 4, in <module> from .field import RawField, Field, ReversibleField, SubwordField, NestedField, LabelField File "/home/wasiahmad/software/anaconda2/lib/python2.7/site-packages/torchtext/data/field.py", line 61, in <module> class Field(RawField): File "/home/wasiahmad/software/anaconda2/lib/python2.7/site-packages/torchtext/data/field.py", line 115, in Field torch.float32: float, AttributeError: 'module' object has no attribute 'float32' I tried to look for a solution to resolve this issue but couldn't find any. Any help would be appreciated.
This is more of a guess, as you have not given information about your versions. But it seems to me that your torchtext version is not compatible with your PyTorch version. Probably when you installed torchtext you got the newer version already made for PyTorch 0.4.0, while your installed PyTorch version is still older than 0.4.0 (version 0.3.1 or so). If that is the case you have two options: downgrading torchtext to a version compatible with yours (probably the version before), or upgrading PyTorch to version 0.4.0. I hope this helps.
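To see which situation you are in, check your torch version and, if needed, pin torchtext (the exact version number below is an assumption — check the torchtext release notes for the one matching your PyTorch):
python -c "import torch; print(torch.__version__)"
pip install torchtext==0.2.3   # hypothetical older release for torch < 0.4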
https://stackoverflow.com/questions/50186348/
What is the state of the art way of doing regression with probability in pytorch
All regression examples I find are examples where you predict a real number, and unlike with classification you don't get the confidence the model had when predicting that number. In reinforcement learning I have done it another way: the output is instead the mean and std, and then you sample from that distribution. Then you know how confident the model is at predicting every value. Now I can't find how to do this using supervised learning in PyTorch. The problem is that I don't understand how to sample from the distribution to get the actual value while training, or what sort of loss function I should use; I'm not sure how, for example, MSE or SmoothL1 would work. Is there any example out there where this is done in PyTorch in a robust and state-of-the-art way?
The key point is that you do not need to sample from the NN-produced distribution. All you need is to optimize the likelihood of the target value under the NN distribution. There is an example in the official PyTorch example on VAE (https://github.com/pytorch/examples/tree/master/vae), though for multidimensional Bernoulli distribution. Since PyTorch 0.4, you can use torch.distributions: instantiate distribution distro with outputs of your NN and then optimize -distro.log_prob(target). EDIT: As requested in a comment, a complete example of using the torch.distributions module. First, we create a heteroscedastic dataset: import numpy as np import torch X = np.random.uniform(size=300) Y = X + 0.25*X*np.random.normal(size=X.shape[0]) We build a trivial model, which is perfectly able to match the generative process of our data: class Model(torch.nn.Module): def __init__(self): super().__init__() self.mean_coeff = torch.nn.Parameter(torch.Tensor([0])) self.var_coeff = torch.nn.Parameter(torch.Tensor([1])) def forward(self, x): return torch.distributions.Normal(self.mean_coeff * x, self.var_coeff * x) mdl = Model() optim = torch.optim.SGD(mdl.parameters(), lr=1e-3) Initialization of the model makes it always produce a standard normal, which is a poor fit for our data, so we train (note it is a very stupid batch training, but demonstrates that you can output a set of distributions for your batch at once): for _ in range(2000): # epochs dist = mdl(torch.from_numpy(X).float()) obj = -dist.log_prob(torch.from_numpy(Y).float()).mean() optim.zero_grad() obj.backward() optim.step() Eventually, the learned parameters should match the values we used to construct the Y. print(mdl.mean_coeff, mdl.var_coeff) # tensor(1.0150) tensor(0.2597)
https://stackoverflow.com/questions/50196212/
torch.nn.embedding has run time error
I want to use torch.nn.Embedding. I have followed the code in the documentation of the embedding command. Here is the code: # an Embedding module containing 10 tensors of size 3 embedding = nn.Embedding(10, 3) # a batch of 2 samples of 4 indices each input = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) embedding(input) The documentation says that you will receive this output: tensor([[[-0.0251, -1.6902, 0.7172], [-0.6431, 0.0748, 0.6969], [ 1.4970, 1.3448, -0.9685], [-0.3677, -2.7265, -0.1685]], [[ 1.4970, 1.3448, -0.9685], [ 0.4362, -0.4004, 0.9400], [-0.6431, 0.0748, 0.6969], [ 0.9124, -2.3616, 1.1151]]]) but I don't receive this output. Instead I receive this error: Traceback (most recent call last): File "/home/mahsa/PycharmProjects/PyTorch_env_project/PyTorchZeroToAll-master/temporary.py", line 12, in <module> embedding(input) File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__ result = self.forward(*input, **kwargs) File "/home/mahsa/anaconda3/envs/pytorch_env/lib/python3.5/site-packages/torch/nn/modules/sparse.py", line 94, in forward self.scale_grad_by_freq, self.sparse RuntimeError: save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition Can anybody explain this error, and how torch.nn.Embedding works? Thanks.
If we replace this line: input = torch.LongTensor([[1,2,4,5],[4,3,2,9]]) with this: input = autograd.Variable(torch.LongTensor([[1,2,4,5],[4,3,2,9]])) the problem is solved! In PyTorch versions before 0.4, modules expected their inputs to be wrapped in torch.autograd.Variable; since 0.4, Variable and Tensor have been merged and the original code works as-is.
https://stackoverflow.com/questions/50196608/
How can I downgrade pytorch from version 0.4 to 0.3.1 with anaconda?
I'm using anaconda now, and due to a code error I need to downgrade pytorch from version 0.4 to version 0.3.1. However, as I use anaconda as my python package management tool, I used the instruction below to downgrade it and encountered the following error: conda install pytorch=0.31 cuda80 -c soumith PackagesNotFoundError: The following packages are not available from current channels: pytorch=0.31 I have also tried the usual methods of reverting a package to a previous version in anaconda, but they failed as well. Could anyone tell me how to downgrade it? Thank you so much!
conda install pytorch=0.3.1 cuda80 -c soumith
Note that conda needs the full three-part version string (0.3.1, not 0.31), which is why the original command could not find the package.
https://stackoverflow.com/questions/50229857/
Loading FITS images with PyTorch
I'm trying to create a CNN using PyTorch, but my images need to be imported from the FITS format rather than conventional .png or .jpeg etc. Is there a way to accomplish this easily using torch.utils.data.DataLoader, or is there a place in the source code where I can put in a clause which will handle FITS files while loading? I have looked in the documentation and the most relevant thing I've found is the ToPILImage transformer, which converts a tensor or ndarray into a PIL Image. Currently I'm using an image loading routine as follows:
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision
batch_size = 4
transform = transforms.Compose( [transforms.Resize((32,32)), transforms.ToTensor(), ])
trainset = dset.ImageFolder(root="Documents/Image_data",transform=transform)
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,shuffle=True)
Astropy: http://www.astropy.org/ Pytorch: https://pytorch.org/ torch.utils: https://pytorch.org/docs/master/data.html UPDATE: Perhaps using torchvision.datasets.DatasetFolder instead of DataLoader, and inserting my own FITS handler, would work? When trying to use this class I get the following error: AttributeError: module 'torchvision.datasets' has no attribute 'DatasetFolder' Is DatasetFolder actually supported by torchvision at this point in time?
From reading some combination of the docs and the code, I don't think you necessarily want to be using ImageFolder since it doesn't know anything about FITS. Instead you should try using the more generic DatasetFolder class (which in fact is the parent class of ImageFolder). You would pass it a list of extensions it should handle (i.e. ['.fits']) and a "loader" function that takes a FITS file and, it seems, should return a PIL.Image. You could even make your own subclass following the example of ImageFolder. E.g.
from astropy.io import fits
from PIL import Image
from torchvision.datasets import DatasetFolder

class FitsFolder(DatasetFolder):

    EXTENSIONS = ['.fits']

    def __init__(self, root, transform=None, target_transform=None,
                 loader=None):
        if loader is None:
            loader = self.__fits_loader

        super(FitsFolder, self).__init__(root, loader, self.EXTENSIONS,
                                         transform=transform,
                                         target_transform=target_transform)

    @staticmethod
    def __fits_loader(filename):
        data = fits.getdata(filename)
        return Image.fromarray(data)
The exact details of __fits_loader may depend on the details of your FITS files. This basic example just uses the high-level fits.getdata() function which returns the first image array in the FITS file (some FITS files may have many extensions with many images, or have tables etc.). So that part would be up to you. As for the AttributeError in your update: DatasetFolder was only added to torchvision fairly recently, so if your installed version does not have it, upgrade torchvision (or copy the class from the torchvision source).
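For completeness, a hedged usage sketch of the class above (the root path and transform are placeholders, not taken from the original question):
import torch
import torchvision.transforms as transforms

transform = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])
trainset = FitsFolder(root="Documents/Image_data", transform=transform)
train_loader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)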
https://stackoverflow.com/questions/50231298/
Pytorch Forward Pass Changes Each Time?
I am learning pytorch and running a toy regression problem. I am baffled by the fact that it appears that each time I run a tensor through a model, the prediction changes. Clearly this can't be the case, but what am I missing? Pytorch version: 0.4.0 I am running here without GPU to eliminate that potential issue. Code:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import torch
import torch.utils.data as utils_data
from torch.autograd import Variable
from torch import optim, nn
from torch.utils.data import Dataset
import torch.nn.functional as F
from torch.nn.init import xavier_uniform_, xavier_normal_,uniform_
from sklearn.datasets import load_boston
from sklearn.metrics import mean_squared_error

cuda=False #set to true uses GPU

#load boston data from scikit
boston = load_boston()
x=boston.data
y=boston.target
y=y.reshape(y.shape[0],1)

#change to tensors
x = torch.from_numpy(x)
y = torch.from_numpy(y)

#create dataset and use data loader
training_samples = utils_data.TensorDataset(x, y)
data_loader = utils_data.DataLoader(training_samples, batch_size=64)

#simple model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        #all the layers
        self.fc1 = nn.Linear(x.shape[1], 20)
        xavier_uniform_(self.fc1.weight.data) #this is how you can change the weight init
        self.drop = nn.Dropout(p=0.5)
        self.fc2 = nn.Linear(20, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x= self.drop(x)
        x = self.fc2(x)
        return x

net=Net()
if cuda:
    net.cuda()

# create a stochastic gradient descent optimizer
optimizer = optim.Adam(net.parameters())
# create a loss function (mse)
loss = nn.MSELoss(size_average=True)

# run the main training loop
epochs =50
hold_loss=[]

for epoch in range(epochs):
    cum_loss=0.
    for batch_idx, (data, target) in enumerate(data_loader):
        tr_x, tr_y = data.float(), target.float()
        if cuda:
            tr_x, tr_y = tr_x.cuda(), tr_y.cuda()

        # Reset gradient
        optimizer.zero_grad()
        # Forward pass
        fx = net(tr_x)
        output = loss(fx, tr_y) #loss for this batch
        cum_loss += output.item() #accumulate the loss
        # Backward
        output.backward()
        # Update parameters based on backprop
        optimizer.step()
    hold_loss.append(cum_loss/len(training_samples))

#training loss
plt.plot(np.array(hold_loss))
This part, if re-run, will return different predictions each time; the actuals don't change, so the order of the data is not changing!
#score the training set
for batch_idx, (data, target) in enumerate(data_loader):
    tr_x, tr_y = data.float(), target.float()
    if batch_idx ==0:
        hold_pred=net(tr_x).data.numpy()
        hold_actual=tr_y.data.numpy().reshape(tr_y.data.numpy().shape[0],1)
    else:
        hold_pred =np.row_stack([hold_pred,net(tr_x).data.numpy()])
        hold_actual=np.row_stack([hold_actual,tr_y.data.numpy().reshape(tr_y.data.numpy().shape[0],1)])

#view the first few predictions
print(hold_pred[0:10])
print(hold_actual[0:10])
Your network has a Dropout layer, whose purpose is to randomly zero out (with probability p=0.5 here) part of the data it receives during training (net.train() called before inference). See the docs for more information (usage, purpose). This layer is short-circuited during evaluation (net.eval() called before inference), which makes the forward pass deterministic again.
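To make this concrete, here is a minimal sketch (the model and input are illustrative stand-ins, not the code from the question) showing how eval() makes the forward pass deterministic while train() keeps dropout active:
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(13, 20), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(20, 1))
x = torch.randn(4, 13)

net.train()   # dropout active: two identical calls give different outputs
print(net(x).squeeze())
print(net(x).squeeze())

net.eval()    # dropout disabled: two identical calls give the same output
print(net(x).squeeze())
print(net(x).squeeze())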
https://stackoverflow.com/questions/50233272/
In pytorch, how to use the weight parameter in F.cross_entropy()?
I'm trying to write some code like below: x = Variable(torch.Tensor([[1.0,2.0,3.0]])) y = Variable(torch.LongTensor([1])) w = torch.Tensor([1.0,1.0,1.0]) F.cross_entropy(x,y,w) w = torch.Tensor([1.0,10.0,1.0]) F.cross_entropy(x,y,w) However, the output of cross entropy loss is always 1.4076 whatever w is. What is behind the weight parameter for F.cross_entropy()? How to use it correctly? I'm using pytorch 0.3
The weight parameter is used to compute a weighted result for all inputs based on their target class: each sample's loss is multiplied by the weight of its target class, and with the default averaging the sum is divided by the total weight, so a single input (or inputs that all share one target class) yields the same loss whatever w is, which is why you always see 1.4076. See the difference however with 2 inputs of different target classes:
import torch
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.Tensor([[1.0,2.0,3.0], [1.0,2.0,3.0]]))
y = Variable(torch.LongTensor([1, 2]))
w = torch.Tensor([1.0,1.0,1.0])
res = F.cross_entropy(x,y,w) # 0.9076
w = torch.Tensor([1.0,10.0,1.0])
res = F.cross_entropy(x,y,w) # 1.3167
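To see where these numbers come from, the weighted average can be reproduced by hand; a sketch assuming the default averaging reduction:
import torch
import torch.nn.functional as F

x = torch.Tensor([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
y = torch.LongTensor([1, 2])
w = torch.Tensor([1.0, 10.0, 1.0])

logp = F.log_softmax(x, dim=1)                           # per-class log-probabilities
per_sample = -logp.gather(1, y.unsqueeze(1)).squeeze(1)  # unweighted per-sample losses
wi = w[y]                                                # weight of each sample's target class
manual = (wi * per_sample).sum() / wi.sum()              # weighted mean
print(manual)  # 1.3167, matching F.cross_entropy(x, y, w)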
https://stackoverflow.com/questions/50248029/
How to train a Pytorch net
I'm using this Pytorch implementation of Segnet with pretrained values I found for object segmentation, and it works fine. Now I want to resume the training from the values I have, using a new dataset with similar images. How can I do that? I guess I have to use the "train.py" file found in the repository, but I don't know what to write in order to replace the "fill the batch" comment. Here is that portion of the code: def train(epoch): model.train() # update learning rate lr = args.lr * (0.1 ** (epoch // 30)) for param_group in optimizer.param_groups: param_group['lr'] = lr # define a weighted loss (0 weight for 0 label) weights_list = [0]+[1 for i in range(17)] weights = np.asarray(weights_list) weigthtorch = torch.Tensor(weights_list) if(USE_CUDA): loss = nn.CrossEntropyLoss(weight=weigthtorch).cuda() else: loss = nn.CrossEntropyLoss(weight=weigthtorch) total_loss = 0 # iteration over the batches batches = [] for batch_idx,batch_files in enumerate(tqdm(batches)): # containers batch = np.zeros((args.batch_size,input_nbr, imsize, imsize), dtype=float) batch_labels = np.zeros((args.batch_size,imsize, imsize), dtype=int) # fill the batch # ... # What should I write here? batch_th = Variable(torch.Tensor(batch)) target_th = Variable(torch.LongTensor(batch_labels)) if USE_CUDA: batch_th =batch_th.cuda() target_th = target_th.cuda() # initilize gradients optimizer.zero_grad() # predictions output = model(batch_th) # Loss output = output.view(output.size(0),output.size(1), -1) output = torch.transpose(output,1,2).contiguous() output = output.view(-1,output.size(2)) target = target.view(-1) l_ = loss(output.cuda(), target) total_loss += l_.cpu().data.numpy() l_.cuda() l_.backward() optimizer.step() return total_loss/len(files)
If I had to guess, he probably made some data-feeding class that extended the PyTorch Dataset class. See https://pytorch.org/tutorials/beginner/data_loading_tutorial.html Near the bottom of the page you can see an example in which they loop over their data loader
for i_batch, sample_batched in enumerate(dataloader):
What this would look like for images, for example, is:
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=False, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batchSize,
                                          shuffle=True, num_workers=2)

for batch_idx, (inputs, targets) in enumerate(trainloader):
    # Using the pytorch data loader the inputs and targets are given
    # automatically
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()
    inputs, targets = Variable(inputs), Variable(targets)
How exactly the author loads his files I don't know. You could follow the procedure from https://pytorch.org/tutorials/beginner/data_loading_tutorial.html to make your own Dataset though.
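As a hedged sketch of what such a Dataset could look like for the segmentation case (the file lists, image size, and loading details are assumptions, not taken from the repository in question):
import numpy as np
from PIL import Image
import torch
from torch.utils.data import Dataset

class SegmentationDataset(Dataset):
    # takes parallel lists of image paths and label-mask paths
    def __init__(self, image_paths, label_paths, imsize=224):
        self.image_paths = image_paths
        self.label_paths = label_paths
        self.imsize = imsize

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img = Image.open(self.image_paths[idx]).resize((self.imsize, self.imsize))
        # nearest-neighbor resampling so label values are not interpolated
        lbl = Image.open(self.label_paths[idx]).resize((self.imsize, self.imsize), Image.NEAREST)
        img = torch.from_numpy(np.array(img, dtype=np.float32)).permute(2, 0, 1)  # HWC -> CHW
        lbl = torch.from_numpy(np.array(lbl, dtype=np.int64))  # class index per pixel
        return img, lbl

# The DataLoader then fills the batches for you:
# loader = torch.utils.data.DataLoader(SegmentationDataset(imgs, lbls), batch_size=4, shuffle=True)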
https://stackoverflow.com/questions/50249658/
How do I convert a Pandas dataframe to a PyTorch tensor?
How do I train a simple neural network with PyTorch on a pandas dataframe df? The column df["Target"] is the target (e.g. labels) of the network. This doesn't work: import pandas as pd import torch.utils.data as data_utils target = pd.DataFrame(df['Target']) train = data_utils.TensorDataset(df, target) train_loader = data_utils.DataLoader(train, batch_size=10, shuffle=True)
I'm referring to the question in the title as you haven't really specified anything else in the text, so just converting the DataFrame into a PyTorch tensor. Without information about your data, I'm just taking float values as example targets here. Convert Pandas dataframe to PyTorch tensor? import pandas as pd import torch import random # creating dummy targets (float values) targets_data = [random.random() for i in range(10)] # creating DataFrame from targets_data targets_df = pd.DataFrame(data=targets_data) targets_df.columns = ['targets'] # creating tensor from targets_df torch_tensor = torch.tensor(targets_df['targets'].values) # printing out result print(torch_tensor) Output: tensor([ 0.5827, 0.5881, 0.1543, 0.6815, 0.9400, 0.8683, 0.4289, 0.5940, 0.6438, 0.7514], dtype=torch.float64) Tested with Pytorch 0.4.0. I hope this helps, if you have any further questions - just ask. :)
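To tie this back to the snippet in the question, a sketch assuming df holds only numeric feature columns plus the 'Target' column (the tiny stand-in DataFrame below is just for illustration):
import pandas as pd
import torch
import torch.utils.data as data_utils

# stand-in for your real DataFrame
df = pd.DataFrame({'A': [1.0, 2.0, 3.0], 'B': [4.0, 5.0, 6.0], 'Target': [0.0, 1.0, 0.0]})

features = torch.tensor(df.drop('Target', axis=1).values).float()
target = torch.tensor(df['Target'].values).float()

train = data_utils.TensorDataset(features, target)
train_loader = data_utils.DataLoader(train, batch_size=10, shuffle=True)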
https://stackoverflow.com/questions/50307707/
Change values inside a Pytorch 3D tensor
I have a 224x224 binary image in a tensor (1, 224, 224), with 0-pixels representing background and 1-pixels representing foreground. I want to reshape it into a tensor (2, 224, 224), such that the first "layer" gt[0] has 1-pixels where there were 0-pixels in the original image and vice versa. This way one layer should show 1s where there is background and the other one will have 1s on the foreground (basically I need to have two complementary binary images in this tensor). This is my code:
# gt is a tensor (1, 224, 224)
gt = gt.expand((2, 224, 224))
backgr = gt[0]
foregr = gt[1]
backgr[backgr == 0] = 2 # swap all 0s and 1s
backgr[backgr == 1] = 0
backgr[backgr == 2] = 1
gt[0] = backgr
print(gt[0])
print(gt[1])
The problem is both layers are modified with this code and I can't figure out how to keep one of the two constant and change only gt[0].
Found a solution! gt = gt.repeat(2, 1, 1) The problem was that expand() does not allocate new memory, so the two "layers" were actually views of the same underlying data, and modifying one modified the other. repeat() makes a real copy, after which the in-place swap only affects gt[0].
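For reference, a hedged alternative that sidesteps the in-place swapping entirely (assuming gt is a 0/1 binary tensor of shape (1, 224, 224) as described): since the background layer is just the complement of the foreground, both layers can be built in one call:
import torch

gt = (torch.rand(1, 224, 224) > 0.5).float()  # stand-in binary mask
gt2 = torch.cat((1 - gt, gt), dim=0)          # shape (2, 224, 224): [background, foreground]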
https://stackoverflow.com/questions/50327342/
PyTorch LSTM - using word embeddings instead of nn.Embedding()
Is the nn.Embedding() essential for learning for an LSTM? I am using an LSTM in PyTorch to predict NER - an example of a similar task is here - https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html Code-wise, I am using code almost identical to the code in the tutorial above. The only detail is: I am using word2Vec instead of nn.Embedding(). So I removed the nn.Embedding() layer and provide the forward function with the features from word2Vec directly. The RNN does not learn. Hence, is the nn.Embedding() essential for learning for an LSTM?
nn.Embedding provides an embedding layer for you. This means that the layer takes your word token ids and converts these to word vectors. You can learn the weights for your nn.Embedding layer during the training process, or you can alternatively load pre-trained embedding weights. When you want to use a pre-trained word2vec (embedding) model, you just load the pre-trained weights into the nn.Embedding layer. You can take a look here on how to load a word2vec embedding layer using the gensim library. I hope this helps.
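Since PyTorch 0.4 there is a convenient constructor for exactly this; a minimal sketch (the weight matrix here is a random stand-in for your real word2vec vectors):
import torch
import torch.nn as nn

# stand-in for a (vocab_size, embedding_dim) word2vec matrix
weights = torch.randn(10, 3)

embedding = nn.Embedding.from_pretrained(weights, freeze=True)  # freeze=False to fine-tune
word_ids = torch.LongTensor([1, 2, 4])
vectors = embedding(word_ids)  # (3, 3) tensor of word vectors
print(vectors.shape)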
https://stackoverflow.com/questions/50340016/
How to perform finetuning on a Pytorch net
I'm using this implementation of SegNet in Pytorch, and I want to finetune it. I've read online and I've found this method (basically freezing all layers except the last one in your net). My problem is that SegNet has more than 100 layers and I'm looking for a simpler way to do it, rather than writing 100 lines of code. Do you think this could work? Or is this utter nonsense? import torch.optim as optim model = SegNet() for name, param in model.named_modules(): if name != 'conv11d': # the last layer should remain active param.requires_grad = False optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5) def train(): ... How can I check if this is working as intended?
This process is called finetuning and setting requires_grad to False is a good way to do this. From the pytorch docs: Every Tensor has a flag: requires_grad that allows for fine grained exclusion of subgraphs from gradient computation and can increase efficiency. ... If there’s a single input to an operation that requires gradient, its output will also require gradient. Conversely, only if all inputs don’t require gradient, the output also won’t require it. Backward computation is never performed in the subgraphs, where all Tensors didn’t require gradients. See this pytorch tutorial for a relevant example. Note, however, that in your snippet you iterate over named_modules() and set requires_grad on the module objects; requires_grad is a flag on parameters (tensors), so you should iterate over named_parameters() instead. One simple way of checking that this is working is to look at the initial error rates. Assuming the task is similar to the task the net was originally trained on, they should be much lower than for a randomly initialized net.
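A hedged sketch of how the snippet from the question could be adjusted (SegNet and the layer name 'conv11d' are taken from the question; the optimizer is also restricted to the still-trainable parameters):
import torch.optim as optim

model = SegNet()
for name, param in model.named_parameters():
    if not name.startswith('conv11d'):  # freeze everything but the last layer
        param.requires_grad = False

optimizer = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()),
                      lr=0.01, momentum=0.5)

# sanity check: only the conv11d parameters should print True
for name, param in model.named_parameters():
    print(name, param.requires_grad)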
https://stackoverflow.com/questions/50353176/
How to use PyTorch to print out the prediction accuracy of every class?
I am trying to use PyTorch to print out the prediction accuracy of every class, based on the official tutorial link. But things seem to go wrong. The code I wrote to do this is as follows:
for epoch in range(num_epochs):
    # Each epoch has a training and validation phase
    for phase in ['train', 'val']:
        ... (this is given by the tutorial)
(my code)
class_correct = list(0. for i in range(3))
class_total = list(0. for i in range(3))
for data in dataloaders['val']:
    images, labels = data
    outputs = model(inputs)
    _, predicted = torch.max(outputs.data, 1)
    c = (predicted == labels.data).squeeze()
    for i in range(4):
        label = labels.data[i]
        class_correct[label] += c[i]
        class_total[label] += 1

for i in range(3):
    print('Accuracy of {} : {} / {} = {:.4f} %'.format(i, class_correct[i], class_total[i], 100 * class_correct[i].item() / class_total[i]))
    print(file = f)
print()
For example, the output of epoch 1/1 is: I think the following equation should be satisfied: running_corrects := 2 + 2 But things do not happen as I expect. What's wrong there? I hope someone can point out my fault and teach me how to do this correctly. Thanks!
Finally, I solved this problem. First, I compared the two models' parameters and found out they were the same, so I confirmed that the model was the same. Then I checked the two inputs and, to my surprise, found they were different. Reviewing the two models' inputs carefully, the answer was that the argument passed to the second model was never updated: the loop unpacks into images, but the model is called on the stale variable inputs. Code:
for data in dataloaders['val']:
    images, labels = data
    outputs = model(inputs)
Change to:
for data in dataloaders['val']:
    inputs, labels = data
    outputs = model(inputs)
Done!
https://stackoverflow.com/questions/50355859/
DLL files error in using pytorch
I added pytorch via pip and now I'm trying to use it, but I get this DLL error: Traceback (most recent call last): File "F:/Python/Projects/1.py", line 2, in <module> import torch File "C:\Users\Saeed\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\__init__.py", line 78, in <module> from torch._C import * ImportError: DLL load failed: The specified module could not be found. I installed the msvcp71 and msvcr71 DLL files, but it didn't work!
You can use Dependency Walker to find out which dependency of that DLL might be missing. Use it to open the Python extension file that's failing to load. The file name should be something like: C:\Users\Saeed\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\_C.pyd Another common cause is a DLL for Python 64-bit while using Python 32-bit or vice-versa. But you installed with pip so it should be OK. Nevertheless, it's a good idea to verify this is not the case.
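A quick, standard-library way to verify the interpreter's bitness (no external tools needed):
import struct
print(struct.calcsize("P") * 8)  # prints 64 for a 64-bit Python, 32 for a 32-bit one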
https://stackoverflow.com/questions/50368390/
Pytorch: What's the difference between defining a layer in __init__() and using it directly in forward()?
What is the difference between defining layers in the __init__() function and calling them in forward() later, versus directly using layers in the forward() function? Should I define every layer of my compute graph in the constructor (e.g. __init__) before I write my compute graph, or can I directly define and use them in forward()?
Everything which contains weights which you want to be trained during the training process should be defined in your __init__ method. You don't need to define activation functions like softmax, ReLU or sigmoid in your __init__; you can just call them in forward. Dropout layers, for example, also don't need to be defined in __init__; they can just be called in forward too. However, defining them in your __init__ has the advantage that they can be switched off more easily during evaluation (by calling eval() on your model). You can see an example of both versions here. Hope this is clear. Just ask if you have any further questions.
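A small sketch contrasting the two styles (layer sizes are arbitrary): both networks compute the same function; only the trainable Linear layers must live in __init__ so their weights are registered with the module:
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetA(nn.Module):               # activations as modules in __init__
    def __init__(self):
        super(NetA, self).__init__()
        self.fc = nn.Linear(10, 5)
        self.relu = nn.ReLU()
    def forward(self, x):
        return self.relu(self.fc(x))

class NetB(nn.Module):               # activations called functionally in forward
    def __init__(self):
        super(NetB, self).__init__()
        self.fc = nn.Linear(10, 5)   # trainable weights: must be defined here
    def forward(self, x):
        return F.relu(self.fc(x))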
https://stackoverflow.com/questions/50376463/
Reshaping Pytorch tensor
I have a tensor of size (24, 2, 224, 224) in Pytorch. 24 = batch size 2 = matrices representing foreground and background 224 = image height dimension 224 = image width dimension This is the output of a CNN that performs binary segmentation. In each cell of the 2 matrices is stored the probability for that pixel to be foreground or background: [n][0][h][w] + [n][1][h][w] = 1 for every coordinate I want to reshape it into a tensor of size (24, 1, 224, 224). The values in the new layer should be 0 or 1 according to the matrix in which the probability was higher. How can I do that? Which function should I use?
Using torch.argmax() (for PyTorch 0.4+):
prediction = torch.argmax(tensor, dim=1) # with 'dim' the considered dimension
prediction = prediction.unsqueeze(1) # to reshape from (24, 224, 224) to (24, 1, 224, 224)
If the PyTorch version is below 0.4.0, one can use tensor.max() which returns both the max values and their indices (but which isn't differentiable over the index values):
_, prediction = tensor.max(dim=1)
prediction = prediction.unsqueeze(1) # to reshape from (24, 224, 224) to (24, 1, 224, 224)
https://stackoverflow.com/questions/50391703/
Pytorch - Stack dimension must be exactly the same?
In pytorch, given the tensors a of shape (1X11) and b of shape (1X11), torch.stack((a,b),0) would give me a tensor of shape (2X11). However, when a is of shape (2X11) and b is of shape (1X11), torch.stack((a,b),0) will raise an error, cf. "the two tensor size must exactly be the same". Because the two tensors are the output of a model (gradients included), I can't convert them to numpy to use np.stack() or np.vstack(). Is there any possible solution with minimal GPU memory usage?
It seems you want to use torch.cat() (concatenate tensors along an existing dimension) and not torch.stack() (concatenate/stack tensors along a new dimension): import torch a = torch.randn(1, 42, 1, 1) b = torch.randn(1, 42, 1, 1) ab = torch.stack((a, b), 0) print(ab.shape) # torch.Size([2, 1, 42, 1, 1]) ab = torch.cat((a, b), 0) print(ab.shape) # torch.Size([2, 42, 1, 1]) aab = torch.cat((a, ab), 0) print(aab.shape) # torch.Size([3, 42, 1, 1])
https://stackoverflow.com/questions/50394505/
Use torch.eq() only for some value in Pytorch
Is there a way to use torch.eq() or a similar function to compute element-wise equality, but only for some elements? Let's say I need to know how many 1s are equal in the two tensors, but I don't care about other numbers. Any idea how to do this?
Let's say we have 2 tensors A and B filled with random elements and possibly some 1s somewhere. The tensor C below is the result you aim for:
A = torch.rand((2, 3, 3))
B = torch.rand((2, 3, 3))
# fill A and B with some 1s ...
C = (A == 1) * (B == 1)
Since you want to know how many 1s match, you can then simply count them with C.sum(). Using the following tensors we get:
(A) [[[ 0.6151, 1.0000, 0.6515], [ 0.3337, 0.4262, 0.0731], [ 0.4571, 0.2380, 1.0000]], [[ 1.0000, 0.1114, 0.8183], [ 0.9178, 1.0000, 1.0000], [ 0.8180, 0.8112, 0.2972]]]
(B) [[[ 0.4305, 1.0000, 0.5378], [ 0.4171, 0.4365, 0.2805], [ 0.1076, 0.1259, 0.9695]], [[ 1.0000, 0.0911, 1.0000], [ 0.6757, 0.5095, 0.4499], [ 0.5787, 1.0000, 1.0000]]]
(C) [[[ 0, 1, 0], [ 0, 0, 0], [ 0, 0, 0]], [[ 1, 0, 0], [ 0, 0, 0], [ 0, 0, 0]]]
https://stackoverflow.com/questions/50405832/
How to compute the cosine_similarity in pytorch for all rows in a matrix with respect to all rows in another matrix
In PyTorch, given that I have 2 matrices, how would I compute the cosine similarity of all rows in each with all rows in the other? For example, given the input
matrix_1 = [a b] [c d]
matrix_2 = [e f] [g h]
I would like the output to be
output = [cosine_sim([a b] [e f]) cosine_sim([a b] [g h])] [cosine_sim([c d] [e f]) cosine_sim([c d] [g h])]
At the moment I am using torch.nn.functional.cosine_similarity(matrix_1, matrix_2), which returns the cosine of each row with only the corresponding row in the other matrix. In my example I have only 2 rows, but I would like a solution which works for many rows. I would even like to handle the case where the number of rows in each matrix is different. I realize that I could use expand; however, I want to do it without such a large memory footprint.
By manually computing the similarity and playing with matrix multiplication + transposition:
import torch
from scipy import spatial
import numpy as np

a = torch.randn(2, 2)
b = torch.randn(3, 2) # different row number, for the fun

# Given that cos_sim(u, v) = dot(u, v) / (norm(u) * norm(v))
#                          = dot(u / norm(u), v / norm(v))
# We first normalize the rows, before computing their dot products via transposition:
a_norm = a / a.norm(dim=1)[:, None]
b_norm = b / b.norm(dim=1)[:, None]
res = torch.mm(a_norm, b_norm.transpose(0,1))
print(res)
#  0.9978 -0.9986 -0.9985
# -0.8629  0.9172  0.9172

# -------
# Let's verify with numpy/scipy if our computations are correct:
a_n = a.numpy()
b_n = b.numpy()
res_n = np.zeros((2, 3))
for i in range(2):
    for j in range(3):
        # cos_sim(u, v) = 1 - cos_dist(u, v)
        res_n[i, j] = 1 - spatial.distance.cosine(a_n[i], b_n[j])
print(res_n)
# [[ 0.9978022  -0.99855876 -0.99854881]
#  [-0.86285472  0.91716063  0.9172349 ]]
https://stackoverflow.com/questions/50411191/
Reproducable Pytorch Results & Random Seeds
I have a simple toy NN with Pytorch. I am setting all the seeds I can find in the docs as well as numpy random. If I run the code below from top to bottom, the results appear to be reproducible. BUT, if I run block 1 only once and then each time run block 2, the result changes (sometimes dramatically). I am unsure why this happens since the network is being re-initialized and optimizer reset each time. I am using version 0.4.0 BLOCK #1 from __future__ import division import numpy as np import matplotlib.pyplot as plt import pandas as pd import torch import torch.utils.data as utils_data from torch.autograd import Variable from torch import optim, nn from torch.utils.data import Dataset import torch.nn.functional as F from torch.nn.init import xavier_uniform_, xavier_normal_,uniform_ torch.manual_seed(123) import random random.seed(123) from sklearn.datasets import load_boston from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split %matplotlib inline cuda=True #set to true uses GPU if cuda: torch.cuda.manual_seed(123) #load boston data from scikit boston = load_boston() x=boston.data y=boston.target y=y.reshape(y.shape[0],1) #train and test x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.3, random_state=123, shuffle=False) #change to tensors x_train = torch.from_numpy(x_train) y_train = torch.from_numpy(y_train) #create dataset and use data loader training_samples = utils_data.TensorDataset(x_train, y_train) data_loader_trn = utils_data.DataLoader(training_samples, batch_size=64,drop_last=False) #change to tensors x_test = torch.from_numpy(x_test) y_test = torch.from_numpy(y_test) #create dataset and use data loader testing_samples = utils_data.TensorDataset(x_test, y_test) data_loader_test = utils_data.DataLoader(testing_samples, batch_size=64,drop_last=False) #simple model class Net(nn.Module): def __init__(self): super(Net, self).__init__() #all the layers self.fc1 = nn.Linear(x.shape[1], 20) xavier_uniform_(self.fc1.weight.data) #this is how you can change the weight init self.drop = nn.Dropout(p=0.5) self.fc2 = nn.Linear(20, 1) def forward(self, x): x = F.relu(self.fc1(x)) x= self.drop(x) x = self.fc2(x) return x BLOCK #2 net=Net() if cuda: net.cuda() # create a stochastic gradient descent optimizer optimizer = optim.Adam(net.parameters()) # create a loss function (mse) loss = nn.MSELoss(size_average=False) # run the main training loop epochs =20 hold_loss=[] for epoch in range(epochs): cum_loss=0. cum_records_epoch =0 for batch_idx, (data, target) in enumerate(data_loader_trn): tr_x, tr_y = data.float(), target.float() if cuda: tr_x, tr_y = tr_x.cuda(), tr_y.cuda() # Reset gradient optimizer.zero_grad() # Forward pass fx = net(tr_x) output = loss(fx, tr_y) #loss for this batch cum_loss += output.item() #accumulate the loss # Backward output.backward() # Update parameters based on backprop optimizer.step() cum_records_epoch +=len(tr_x) if batch_idx % 1 == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, cum_records_epoch, len(data_loader_trn.dataset), 100. * (batch_idx+1) / len(data_loader_trn), output.item())) print('Epoch average loss: {:.6f}'.format(cum_loss/cum_records_epoch)) hold_loss.append(cum_loss/cum_records_epoch) #training loss plt.plot(np.array(hold_loss)) plt.show()
Possible Reason Not knowing what the "sometimes dramatic differences" are, it is hard to answer for sure; but having different results when running [block_1 x1; block_2 x1] xN (read "run block_1 then block_2 once; repeat both operations N times") and [block_1 x1; block_2 xN] x1 makes sense, given how pseudo-random number generators (PRNGs) and seeds work. In the first case, you are re-initializing the PRNGs in block_1 before each run of block_2, so each of the N instances of block_2 will access the same sequence of pseudo-random numbers, seeded by the block_1 run just before it. In the second case, the PRNGs are initialized only once, by the single block_1 run. So each instance of block_2 will have different random values. (For more on PRNGs and seeds, you could check: random.seed(): What does it do?) Simplified Example Let's suppose numpy/CUDA/pytorch are actually using a really poor PRNG, which only returns incremented values (i.e. PRNG(x_n) = PRNG(x_(n-1)) + 1, with x_0 = seed). If you seed this generator with 0, it will thus return 1 on the first random() call, 2 on the second call, etc. Now let's also simplify your blocks for the sake of the example:
def block_1():
    seed = 0
    print("seed: {}".format(seed))
    prng.seed(seed)
--
def block_2():
    res = "random results:"
    for i in range(4):
        res += " {}".format(prng.random())
    print(res)
Let's compare [block_1 x1; block_2 x1] xN and [block_1 x1; block_2 xN] x1 with N=3:
for i in range(3):
    block_1()
    block_2()
# > seed: 0
# > random results: 1 2 3 4
# > seed: 0
# > random results: 1 2 3 4
# > seed: 0
# > random results: 1 2 3 4

block_1()
for i in range(3):
    block_2()
# > seed: 0
# > random results: 1 2 3 4
# > random results: 5 6 7 8
# > random results: 9 10 11 12
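A practical takeaway, as a hedged sketch: if you want each run of block 2 to be reproducible on its own, re-seed everything at the top of block 2 (for example via a helper like the one below), rather than relying on the seeds set once in block 1. Note that full GPU determinism can additionally depend on cuDNN settings:
import random
import numpy as np
import torch

def seed_everything(seed=123):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)

seed_everything(123)  # call this at the start of block 2, before creating Net()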
https://stackoverflow.com/questions/50412235/
Concatenating two tensors with different dimensions in Pytorch
Is it possible to concatenate two tensors with different dimensions without using a for loop? e.g. Tensor 1 has dimensions (15, 200, 2048) and Tensor 2 has dimensions (1, 200, 2048). Is it possible to concatenate the 2nd tensor with the 1st tensor along all the 15 indices of the 1st dimension in the 1st tensor (broadcast the 2nd tensor along the 1st dimension of Tensor 1 while concatenating along the 3rd dimension of the 1st tensor)? The resulting tensor should have dimensions (15, 200, 4096). Is it possible to accomplish this without a for loop?
You could do the broadcasting manually (using Tensor.expand()) before the concatenation (using torch.cat()): import torch a = torch.randn(15, 200, 2048) b = torch.randn(1, 200, 2048) repeat_vals = [a.shape[0] // b.shape[0]] + [-1] * (len(b.shape) - 1) # or directly repeat_vals = (15, -1, -1) or (15, 200, 2048) if shapes are known and fixed... res = torch.cat((a, b.expand(*repeat_vals)), dim=-1) print(res.shape) # torch.Size([15, 200, 4096])
https://stackoverflow.com/questions/50424167/
can't import torch mac
I'm trying to import torch and I'm getting the following problem: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/torch/__init__.py", line 66, in <module> import torch._dl as _dl_flags ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/torch/_dl.so, 2): no suitable image found. Did find: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/torch/_dl.so: mach-o, but wrong architecture /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/torch/_dl.so: mach-o, but wrong architecture Does anyone know how I can solve this? Thanks :)
Try it like this:
mkdir test_torch
cd test_torch
python3 -m venv .venv
source .venv/bin/activate
pip install torch torchvision
python3
>>> import torch
Works for me. MacOS 10.13.4, Python 3.6.4 Or like this:
mkdir test_torch
cd test_torch
virtualenv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install torch torchvision
python2
>>> import torch
Works for me. MacOS 10.13.4, Python 2.7.10 If you don't need to use torch then you can install only torchvision
pip install --no-deps torchvision
https://stackoverflow.com/questions/50425739/
Byte Embedding in mLSTM Conceptual Struggle
I am trying to follow the OpenAI "Sentiment Neuron" experiment by reading through the PyTorch code posted on Github for training the model from scratch. One thing I am not understanding is the byte-level embedding used in the code. I understood that the LSTM outputs a probability distribution for the value of the next byte and I assumed the "embedding" would just be a one-hot encoding of the byte value. Looking at the code, I see that the model's input goes through a (trainable) dense embedding before going into the model. Confusingly, the output of the loss is computed between the model output and the upcoming byte value, which is not embedded. My questions are: 1. How is the cross entropy loss computed? Does nn.CrossEntropyLoss take the softmax of its input and expand the target into a one-hot vector "under the hood"? 2. If we want to generate byte strings from this LSTM, how do we embed the output to feed back into the model for the next step? Do we embed the highest likelihood or take a softmax of the output and use some sort of weighted embedding? I'm new to LSTM and I'm trying to learn but I just don't get it! I appreciate any help!
Even though the same symbols are being used for input and output, it's perfectly acceptable to have different representations used at each end. Cross entropy is a function of two probability distributions. In this case, the two distributions are the softmax distribution given by the model, and a point mass on the "correct" byte. For question 1, yes that is what is being done in terms of inputs and outputs (although the implementation might be optimized). To answer question 2, the most common thing is to form the softmax distribution at each step, then sample from it.
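A minimal sketch of the generation loop implied by the answer (the sizes and the LSTM cell below are illustrative stand-ins, not the OpenAI code): at each step, softmax the logits, sample a byte id, and feed that id back through the embedding:
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-in for the trained byte model: embedding -> LSTM cell -> logits.
embedding = nn.Embedding(256, 64)   # 256 byte values, embedding dim 64
lstm = nn.LSTMCell(64, 128)
to_logits = nn.Linear(128, 256)

h, c = torch.zeros(1, 128), torch.zeros(1, 128)
byte_id = torch.LongTensor([65])    # start from some byte, e.g. ord('A')
generated = [byte_id.item()]
for _ in range(100):
    h, c = lstm(embedding(byte_id), (h, c))
    probs = F.softmax(to_logits(h), dim=-1)        # distribution over the next byte
    byte_id = torch.multinomial(probs, 1).view(1)  # sample, then feed back as an id
    generated.append(byte_id.item())

print(bytes(generated))  # gibberish here, since the weights are untrained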
https://stackoverflow.com/questions/50438738/
Pytorch, can't run backward() on even the most simple network without getting an error
I am new to pytorch and I can't run backward() on even the simplest network without generating an error. For example: (Linear(6, 6)(Variable(torch.zeros([10, 6]))) - Variable(torch.zeros([10, 6]))).backward() This throws the following error: {RuntimeError}element 0 of variables does not require grad and does not have a grad_fn What have I done wrong in the code to create this issue?
Try adding a grad_output of matching shape as a parameter to backward: (Linear(6, 6)(Variable(torch.zeros([10, 6]))) - Variable(torch.zeros([10, 6]))).backward(torch.zeros([10, 6])) The following answer has more details: Why should be the function backward be called only on 1 element tensor or with gradients w.r.t to Variable?
https://stackoverflow.com/questions/50446675/
RuntimeError: inconsistent tensor sizes at /pytorch/torch/lib/TH/generic/THTensorMath.c:2864
I'm trying to build a dataloader. This is what it looks like:
class WhaleData(Dataset):
    def __init__(self, data_file, root_dir , transform = None):
        self.csv_file = pd.read_csv(data_file)
        self.root_dir = root_dir
        self.transform = transforms.Resize(224)

    def __len__(self):
        return len(os.listdir(self.root_dir))

    def __getitem__(self, index):
        image = os.path.join(self.root_dir, self.csv_file['Image'][index])
        image = Image.open(image)
        image = self.transform(image)
        image = np.array(image)
        label = self.csv_file['Image'][index]
        sample = {'image': image, 'label':label}
        return sample

trainset = WhaleData(data_file = '/mnt/55-91e8-b2383e89165f/Ryan/1234/train.csv', root_dir = '/mnt/4d55-91e8-b2383e89165f/Ryan/1234/train')
train_loader = torch.utils.data.DataLoader(trainset , batch_size = 4, shuffle =True,num_workers= 2)

for i, batch in enumerate(train_loader):
    (i, batch)
When I try running this block of code, I get this error. I do understand the nature of the error: all my images may not be of the same shape, and indeed they are not. But if I'm not wrong, the error should only arise when I feed them to the network, because the images are all of different shapes. So why is it throwing an error here? Any suggestions on where I might have gone wrong will be extremely helpful. I would be happy to provide any extra information if needed. Thanks
RuntimeError: Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 42, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 116, in default_collate return {key: default_collate([d[key] for d in batch]) for key in batch[0]} File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 116, in <dictcomp> return {key: default_collate([d[key] for d in batch]) for key in batch[0]} File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 105, in default_collate return torch.stack([torch.from_numpy(b) for b in batch], 0) File "/usr/local/lib/python3.5/dist-packages/torch/functional.py", line 64, in stack return torch.cat(inputs, dim) RuntimeError: inconsistent tensor sizes at /pytorch/torch/lib/TH/generic/THTensorMath.c:2864
The error appears when PyTorch tries to stack the images together into a single batch tensor (cf. torch.stack([torch.from_numpy(b) for b in batch], 0) from your trace). As you mentioned, since the images have different shapes, the stacking fails (i.e. a tensor (B, H, W) can only be created by stacking B tensors if all these tensors have shape (H, W)). Note: I'm not fully sure, but setting batch_size=1 for torch.utils.data.DataLoader(...) may remove this particular error, as it probably won't need to call torch.stack() anymore.
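A hedged note on the underlying fix for the code in the question: transforms.Resize(224) with an integer only rescales the smaller edge to 224 and keeps the aspect ratio, so non-square images still end up with different shapes. Passing a (height, width) tuple forces every image to the same fixed size, making the samples stackable:
import torchvision.transforms as transforms

# Use this in WhaleData.__init__ instead of transforms.Resize(224):
transform = transforms.Resize((224, 224))  # every image becomes exactly 224x224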
https://stackoverflow.com/questions/50454645/