Iterating over Torchtext.data.BucketIterator object throws AttributeError 'Field' object has no attribute 'vocab'
When I try to look into a batch, by printing the next iteration of the BucketIterator object, the AttributeError is thrown.

tv_datafields = [("Tweet", TEXT), ("Anger", LABEL), ("Fear", LABEL),
                 ("Joy", LABEL), ("Sadness", LABEL)]
train, vld = data.TabularDataset.splits(path="./data/", train="train.csv",
                                        validation="test.csv", format="csv",
                                        fields=tv_datafields)
train_iter, val_iter = BucketIterator.splits(
    (train, vld),
    batch_sizes=(64, 64),
    device=-1,
    sort_key=lambda x: len(x.Tweet),
    sort_within_batch=False,
    repeat=False
)
print(next(iter(train_dl)))
I am not sure about the specific error you are getting, but in this case you can iterate over a batch by using the following code:

for i in train_iter:
    print(i.Tweet)
    print(i.Anger)
    print(i.Fear)
    print(i.Joy)
    print(i.Sadness)

i.Tweet (likewise the others) is a tensor of shape (input_data_length, batch_size). So, to view the data for a single element of the batch (let's say element 0), you can do print(i.Tweet[:, 0]). The same goes for val_iter (and test_iter, if needed).
https://stackoverflow.com/questions/51231852/
torchtext BucketIterator minimum padding
I'm trying to use the BucketIterator.splits function in torchtext to load data from csv files for use in a CNN. Everything works fine unless I have a batch in which the longest sentence is shorter than the biggest filter size. In my example I have filters of sizes 3, 4, and 5, so if the longest sentence doesn't have at least 5 words I get an error. Is there a way to let the BucketIterator dynamically set the padding for batches, but also set a minimum padding length? This is the code I am using for my BucketIterator:

train_iter, val_iter, test_iter = BucketIterator.splits(
    (train, val, test), sort_key=lambda x: len(x.text),
    batch_size=batch_size, repeat=False, device=device)

I'm hoping there is a way to set a minimum length on the sort_key or something like that? I tried this but it doesn't work:

FILTER_SIZES = [3, 4, 5]
train_iter, val_iter, test_iter = BucketIterator.splits(
    (train, val, test),
    sort_key=lambda x: len(x.text) if len(x.text) >= FILTER_SIZES[-1] else FILTER_SIZES[-1],
    batch_size=batch_size, repeat=False, device=device)
I looked through the torchtext source code to better understand what the sort_key was doing, and saw why my original idea wouldn't work. I'm not sure if it is the best solution or not, but I have come up with a solution that works. I created a tokenizer function that pads the text if it is shorter than the longest filter length, then create the BucketIterator from there.

FILTER_SIZES = [3, 4, 5]
spacy_en = spacy.load('en')

def tokenizer(text):
    token = [t.text for t in spacy_en.tokenizer(text)]
    if len(token) < FILTER_SIZES[-1]:
        for i in range(0, FILTER_SIZES[-1] - len(token)):
            token.append('<PAD>')
    return token

TEXT = Field(sequential=True, tokenize=tokenizer, lower=True,
             tensor_type=torch.cuda.LongTensor)

train_iter, val_iter, test_iter = BucketIterator.splits(
    (train, val, test), sort_key=lambda x: len(x.text),
    batch_size=batch_size, repeat=False, device=device)
https://stackoverflow.com/questions/51252221/
Parallel Cholesky decomposition in PyTorch GPU
To get a differentiable term containing the determinant of a D-dimensional positive-definite matrix C (differential entropy of a multivariate Gaussian in my case), I can use: torch.log2(torch.potrf(C).diag()).sum() + D / 2.0 * (np.log2(2 * np.pi * np.e)) potrf(C) performs Cholesky decomposition, whose diagonal elements' log values sum to the log determinant divided by 2. I want to call potrf on a mini-batch of matrices, so that calling potrf on a Tensor of shape (N, D, D) produces N different Cholesky decompositions. At the moment I can only call potrf() repeatedly in a Python loop, which is a poor use of the GPU's parallel compute capability and as a result runs about 3 times slower than CPU. Is it possible to launch Cholesky decomposition on GPU in parallel with PyTorch?
Batch Cholesky decomposition is now available in PyTorch, along with batch inverse() and friends. For older versions of PyTorch: you are looking for a batch Cholesky decomposition. It was not implemented in PyTorch at the time, but there was an open issue and a plan to add it in the future. I only know of a batch LU factorization available in PyTorch 0.4. You could use it to get something similar: det(D) = det(P) det(L) det(U), where the determinant of P is (-1)^t and the determinants of L and U are the products of their diagonal elements.
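On recent PyTorch versions this batched path is available directly; a minimal sketch (assuming a version with the torch.linalg API), computing batched log-determinants via Cholesky:

```python
import torch

N, D = 8, 5
A = torch.randn(N, D, D)
C = A @ A.transpose(-1, -2) + D * torch.eye(D)  # batch of positive-definite matrices

L = torch.linalg.cholesky(C)                    # one decomposition per batch element: (N, D, D)
logdet = 2.0 * L.diagonal(dim1=-2, dim2=-1).log().sum(-1)

print(torch.allclose(logdet, torch.logdet(C), atol=1e-4))  # True
```

Moving C to the GPU with C.cuda() runs all N decompositions in one batched call instead of a Python loop.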
https://stackoverflow.com/questions/51257780/
Product of PyTorch tensors along arbitrary axes à la NumPy's `tensordot`
NumPy provides the very useful tensordot function. It allows you to compute the product of two ndarrays along any axes (whose sizes match). I'm having a hard time finding anything similar in PyTorch. mm works only with 2D arrays, and matmul has some undesirable broadcasting properties. Am I missing something? Am I really meant to reshape the arrays to mimic the products I want using mm?
As mentioned by @McLawrence, this feature is currently being discussed (issue thread). In the meantime, you could consider torch.einsum(), e.g.:

import torch
import numpy as np

a = np.arange(36.).reshape(3, 4, 3)
b = np.arange(24.).reshape(4, 3, 2)
c = np.tensordot(a, b, axes=([1, 0], [0, 1]))
print(c)
# [[ 2640.  2838.]
#  [ 2772.  2982.]
#  [ 2904.  3126.]]

a = torch.from_numpy(a)
b = torch.from_numpy(b)
c = torch.einsum("ijk,jil->kl", (a, b))
print(c)
# tensor([[ 2640.,  2838.],
#         [ 2772.,  2982.],
#         [ 2904.,  3126.]], dtype=torch.float64)
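Newer PyTorch releases also ship torch.tensordot directly; a minimal sketch, assuming a version where it exists:

```python
import torch

a = torch.arange(36.).reshape(3, 4, 3)
b = torch.arange(24.).reshape(4, 3, 2)

# contract dims (1, 0) of a with dims (0, 1) of b, as in the numpy call above
c = torch.tensordot(a, b, dims=([1, 0], [0, 1]))
print(c)  # same (3, 2) result as np.tensordot / torch.einsum above
```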
https://stackoverflow.com/questions/51266507/
How to get probabilities from Resnet using pytorch?
I am fine-tuning ResNet on my dataset, which has multiple labels. I would like to convert the 'scores' of the classification layer to probabilities and use those probabilities to calculate the loss during training. Could you give example code for this? Can I use it like this:

P = net.forward(x)
p = torch.nn.functional.softmax(P, dim=1)
loss = torch.nn.functional.cross_entropy(P, y)

I am unclear whether this is the correct way or not, as I am passing probabilities as the input to the cross-entropy loss.
So, you are training a model, i.e. ResNet, with cross-entropy in PyTorch. Your loss calculation would look like this:

logit = model(x)
loss = torch.nn.functional.cross_entropy(logit, y)

(Note that F.cross_entropy takes raw logits and the targets as positional arguments.) In this case, you can calculate the probabilities of all classes by doing:

logit = model(x)
p = torch.nn.functional.softmax(logit, dim=1)

# to calculate loss using probabilities you can do below
loss = torch.nn.functional.nll_loss(torch.log(p), y)

Note that if you use probabilities you will have to manually take the log, which is bad for numerical reasons. Instead, use log_softmax or cross_entropy, in which case you end up computing the loss with cross-entropy and computing the probabilities separately.
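A small self-contained check of the equivalence described above (shapes are illustrative only):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)                           # batch of 4 samples, 10 classes
y = torch.randint(0, 10, (4,))

loss_a = F.cross_entropy(logits, y)                   # takes raw logits
loss_b = F.nll_loss(F.log_softmax(logits, dim=1), y)  # numerically stable pair
probs = F.softmax(logits, dim=1)                      # probabilities, for reporting only

print(torch.allclose(loss_a, loss_b))                 # True
```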
https://stackoverflow.com/questions/51291353/
Equivalent of Keras's binary_crossentropy in PyTorch?
I want to port some code from Keras to PyTorch, but I can't find an equivalent of Keras's binary_crossentropy in PyTorch. PyTorch's binary_cross_entropy behaves differently from Keras's.

import torch
import torch.nn.functional as F

input = torch.tensor([[ 0.6845, 0.2454],
                      [ 0.7186, 0.3710],
                      [ 0.3480, 0.3374]])
target = torch.tensor([[ 0., 1.],
                       [ 1., 1.],
                       [ 1., 1.]])

F.binary_cross_entropy(input, target, reduce=False)
# tensor([[ 1.1536, 1.4049],
#         [ 0.3305, 0.9916],
#         [ 1.0556, 1.0865]])

import keras.backend as K

K.eval(K.binary_crossentropy(K.variable(input.detach().numpy()),
                             K.variable(target.detach().numpy())))
# [[11.032836 12.030124]
#  [ 4.486187 10.02776 ]
#  [10.394435 10.563424]]

Does anyone know why these two results are different? Thanks!
Keras binary_crossentropy takes (y_true, y_pred), while PyTorch's binary_cross_entropy takes them in the opposite order, (input, target); therefore you need to change the Keras line to

K.eval(K.binary_crossentropy(
    K.variable(target.detach().numpy()),
    K.variable(input.detach().numpy())))

In this way you get the correct output:

array([[ 1.15359652,  1.40486574],
       [ 0.33045045,  0.99155325],
       [ 1.05555284,  1.0864861 ]], dtype=float32)
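To see that PyTorch's argument order really is (input, target), a quick hedged check against the closed-form definition (using the newer reduction= keyword in place of the deprecated reduce=False):

```python
import torch
import torch.nn.functional as F

pred = torch.tensor([0.6845, 0.2454])   # predicted probabilities
target = torch.tensor([0., 1.])         # ground-truth labels

auto = F.binary_cross_entropy(pred, target, reduction='none')
manual = -(target * pred.log() + (1 - target) * (1 - pred).log())

print(torch.allclose(auto, manual))     # True
```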
https://stackoverflow.com/questions/51299900/
PyTorch error loading saved nn.Module: object has no attribute 'to'
I am using PyTorch 0.4. I defined a PyTorch model MyModel by inheriting from nn.Module, and saved an instance of it by calling

torch.save(my_model, my_path)

Then, when loading it again with torch.load(my_path), my program crashed with the following error:

AttributeError: 'MyModel' object has no attribute 'to'

But my program was able to run it in previous stages. What went wrong?
I already found it out and just wanted to quickly post about it, since Google didn't give an obvious clue. It turned out that, although I saved the model from a computer with 0.4, I was trying to load it from a different computer that still had an older (<0.4) PyTorch version installed. pip install --upgrade torch fixed it. I figured it out because the my_model.train() and .eval() methods were indeed working, so I remembered that the .to() method was introduced in 0.4. Useful references: https://pytorch.org/2018/04/22/0_4_0-migration-guide.html https://discuss.pytorch.org/t/loading-pytorch-model-without-a-code/12469
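A common way to sidestep this whole class of problems is to save only the state dict rather than pickling the whole module; a minimal sketch (my_model, MyModel and my_path are the placeholders from the question):

```python
import torch

# saving: only tensors go into the file, not the class definition
torch.save(my_model.state_dict(), my_path)

# loading: the class must be importable, but version coupling is much looser
model = MyModel()
model.load_state_dict(torch.load(my_path))
model.eval()
```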
https://stackoverflow.com/questions/51306585/
How can I generate and display a grid of images in PyTorch with plt.imshow and torchvision.utils.make_grid?
I am trying to understand how torchvision interacts with matplotlib to produce a grid of images. It's easy to generate images and display them iteratively:

import torch
import torchvision
import matplotlib.pyplot as plt

w = torch.randn(10, 3, 640, 640)
for i in range(0, 10):
    z = w[i]
    plt.imshow(z.permute(1, 2, 0))
    plt.show()

However, displaying these images in a grid does not seem to be as straightforward.

w = torch.randn(10, 3, 640, 640)
grid = torchvision.utils.make_grid(w, nrow=5)
plt.imshow(grid)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-61-1601915e10f3> in <module>()
      1 w = torch.randn(10,3,640,640)
      2 grid = torchvision.utils.make_grid(w, nrow=5)
----> 3 plt.imshow(grid)

/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, hold, data, **kwargs)
   3203                         filternorm=filternorm, filterrad=filterrad,
   3204                         imlim=imlim, resample=resample, url=url, data=data,
-> 3205                         **kwargs)
   3206     finally:
   3207         ax._hold = washold

/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py in inner(ax, *args, **kwargs)
   1853                         "the Matplotlib list!)" % (label_namer, func.__name__),
   1854                         RuntimeWarning, stacklevel=2)
-> 1855             return func(ax, *args, **kwargs)
   1856
   1857         inner.__doc__ = _add_data_doc(inner.__doc__,

/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py in imshow(self, X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, shape, filternorm, filterrad, imlim, resample, url, **kwargs)
   5485                               resample=resample, **kwargs)
   5486
-> 5487         im.set_data(X)
   5488         im.set_alpha(alpha)
   5489         if im.get_clip_path() is None:

/anaconda3/lib/python3.6/site-packages/matplotlib/image.py in set_data(self, A)
    651         if not (self._A.ndim == 2
    652                 or self._A.ndim == 3 and self._A.shape[-1] in [3, 4]):
--> 653             raise TypeError("Invalid dimensions for image data")
    654
    655         if self._A.ndim == 3:

TypeError: Invalid dimensions for image data

Even though PyTorch's documentation indicates that w is the correct shape, Python says that it isn't. So I tried to permute the indices of my tensor:

w = torch.randn(10, 3, 640, 640)
grid = torchvision.utils.make_grid(w.permute(0, 2, 3, 1), nrow=5)
plt.imshow(grid)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-62-6f2dc6313e29> in <module>()
      1 w = torch.randn(10,3,640,640)
----> 2 grid = torchvision.utils.make_grid(w.permute(0,2,3,1), nrow=5)
      3 plt.imshow(grid)

/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/utils.py in make_grid(tensor, nrow, padding, normalize, range, scale_each, pad_value)
     83             grid.narrow(1, y * height + padding, height - padding)\
     84                 .narrow(2, x * width + padding, width - padding)\
---> 85                 .copy_(tensor[k])
     86             k = k + 1
     87     return grid

RuntimeError: The expanded size of the tensor (3) must match the existing size (640) at non-singleton dimension 0

What's happening here? How can I place a bunch of randomly generated images into a grid and display them?
There's a small mistake in your code. torchvision.utils.make_grid() returns a tensor which contains the grid of images. But the channel dimension has to be moved to the end, since that's what matplotlib recognizes. Below is code that works fine:

In [107]: import torchvision

# sample input (10 RGB images containing just Gaussian noise)
In [108]: batch_tensor = torch.randn(*(10, 3, 256, 256))  # (N, C, H, W)

# make grid (2 rows and 5 columns) to display our 10 images
In [109]: grid_img = torchvision.utils.make_grid(batch_tensor, nrow=5)

# check shape
In [110]: grid_img.shape
Out[110]: torch.Size([3, 518, 1292])

# permute and plot (because matplotlib needs the channel as the last dimension)
In [111]: plt.imshow(grid_img.permute(1, 2, 0))
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Out[111]: <matplotlib.image.AxesImage at 0x7f62081ef080>

which shows the output as a 2x5 grid of noise images.
https://stackoverflow.com/questions/51329159/
Pytorch Where Does resNet add values?
I am working on ResNet and I have found an implementation that does the skip connections with a plus sign, like the following (the conv arguments below are abbreviated; a kernel size is required in practice):

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Conv2d(128, 128, kernel_size=3, padding=1)

    def forward(self, x):
        out = self.conv(x)  # line 1
        x = out + x         # skip connection, line 2

Now I have debugged and printed the values before and after line 1. The output was the following:

after line 1:  x = [1, 128, 32, 32],  out = [1, 128, 32, 32]
after line 2:  x = [1, 128, 32, 32]   # still

Reference link: https://github.com/kuangliu/pytorch-cifar/blob/bf78d3b8b358c4be7a25f9f9438c842d837801fd/models/resnet.py#L62

My question is: where did it add the value? I mean, after the x = out + x operation, where has the value been added?

PS: The tensor format is [batch, channel, height, width].
As mentioned in the comments by @UmangGupta, what you are printing seems to be the shape of your tensors (i.e. the "shape" of a 3x3 matrix is [3, 3]), not their content. In your case, you are dealing with 1x128x32x32 tensors. An example to hopefully clarify the difference between shape and content:

import torch

out = torch.ones((3, 3))
x = torch.eye(3, 3)
res = out + x

print(out.shape)
# torch.Size([3, 3])
print(out)
# tensor([[ 1.,  1.,  1.],
#         [ 1.,  1.,  1.],
#         [ 1.,  1.,  1.]])

print(x.shape)
# torch.Size([3, 3])
print(x)
# tensor([[ 1.,  0.,  0.],
#         [ 0.,  1.,  0.],
#         [ 0.,  0.,  1.]])

print(res.shape)
# torch.Size([3, 3])
print(res)
# tensor([[ 2.,  1.,  1.],
#         [ 1.,  2.,  1.],
#         [ 1.,  1.,  2.]])
https://stackoverflow.com/questions/51332533/
Pytorch DataLoader - Choose Class STL10 Dataset
Is it possible to pull only samples where class = 0 from the STL10 dataset in PyTorch torchvision? I am able to check them in a loop, but I need to receive batches of class-0 images:

# STL10 dataset
train_dataset = torchvision.datasets.STL10(root='./data/',
                                           transform=transforms.Compose([
                                               transforms.Grayscale(),
                                               transforms.ToTensor()
                                           ]),
                                           split='train',
                                           download=True)

# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

for i, (images, labels) in enumerate(train_loader):
    if labels[0] == 0:
        ...

Edit based on iacolippo's answer - this is now working:

# Set params
batch_size = 25
label_class = 0  # only airplane images

# Return only images of a certain class (e.g. airplanes = class 0)
def get_same_index(target, label):
    label_indices = []
    for i in range(len(target)):
        if target[i] == label:
            label_indices.append(i)
    return label_indices

# STL10 dataset
train_dataset = torchvision.datasets.STL10(root='./data/',
                                           transform=transforms.Compose([
                                               transforms.Grayscale(),
                                               transforms.ToTensor()
                                           ]),
                                           split='train',
                                           download=True)

# Get indices of label_class
train_indices = get_same_index(train_dataset.labels, label_class)

# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           sampler=torch.utils.data.sampler.SubsetRandomSampler(train_indices))
If you only want samples from one class, you can get the indices of samples with the same class from the Dataset instance with something like

def get_same_index(target, label):
    label_indices = []
    for i in range(len(target)):
        if target[i] == label:
            label_indices.append(i)
    return label_indices

then you can use SubsetRandomSampler to draw samples only from the list of indices of one class:

torch.utils.data.sampler.SubsetRandomSampler(indices)
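On newer PyTorch versions the index list can also be built without the Python loop; a hedged sketch (assuming the dataset exposes its targets as an array-like .labels, as STL10 does):

```python
import torch

labels = torch.as_tensor(train_dataset.labels)
train_indices = (labels == label_class).nonzero(as_tuple=True)[0].tolist()

sampler = torch.utils.data.SubsetRandomSampler(train_indices)
```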
https://stackoverflow.com/questions/51334858/
Why can GPU do matrix multiplication faster than CPU?
I've been using GPU for a while without questioning it but now I'm curious. Why can GPU do matrix multiplication much faster than CPU? Is it because of parallel processing? But I didn't write any parallel processing code. Does it do it automatically by itself? Any intuition / high-level explanation will be appreciated!
How do you parallelize the computations?

GPUs are able to do a lot of parallel computations, a lot more than a CPU can. Consider this example of vector addition with, say, 1M elements.

Using a CPU, suppose you can run a maximum of 100 threads (real numbers differ, but let's assume this for a while). In a typical multi-threading setup you would parallelize the additions across all threads. Here is what I mean by that:

c[0] = a[0] + b[0]        # let's do it on thread 0
c[1] = a[1] + b[1]        # let's do it on thread 1
...
c[101] = a[101] + b[101]  # let's do it on thread 1 (second pass)

We are able to do this because the value of c[0] doesn't depend on any values other than a[0] and b[0], so each addition is independent of the others. Hence, we can easily parallelize the task. As you see in the above example, the additions of 100 different elements take place simultaneously, saving you time. This way it takes 1M/100 = 10,000 steps to add all the elements.

How efficiently does a GPU parallelize?

Now consider a modern GPU with about 2048 threads: all threads can independently do 2048 different operations in constant time, giving a further boost. In your case of matrix multiplication, you can parallelize the computations because the GPU has many more threads, organized into blocks of threads, so a lot of the computation is parallelized, resulting in quick computation.

But I didn't write any parallel processing for my GTX 1080! Does it do it by itself?

Almost all frameworks for machine learning use parallelized implementations of all the possible operations. This is achieved by CUDA programming, NVIDIA's API for doing parallel computations on NVIDIA GPUs. You don't write it explicitly; it's all done at a low level, and you don't even get to know about it. That doesn't mean a C++ program you wrote will automatically be parallelized just because you have a GPU. No, you need to write it using CUDA; only then will it be parallelized. But most programming frameworks have this built in, so it is not required on your end.
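A rough benchmark sketch to see the effect yourself (timings vary a lot by hardware; note the explicit synchronization, since CUDA kernel launches return before the work finishes):

```python
import time
import torch

a = torch.randn(4096, 4096)

t0 = time.time()
_ = a @ a                        # matrix multiplication on the CPU
print('cpu:', time.time() - t0)

if torch.cuda.is_available():
    a_gpu = a.cuda()
    _ = a_gpu @ a_gpu            # warm-up launch (CUDA init is slow the first time)
    torch.cuda.synchronize()
    t0 = time.time()
    _ = a_gpu @ a_gpu
    torch.cuda.synchronize()     # wait for the kernel before stopping the clock
    print('gpu:', time.time() - t0)
```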
https://stackoverflow.com/questions/51344018/
Dimension out of range when applying l2 normalization in Pytorch
I'm getting a runtime error:

RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

and can't figure out how to fix it. The error appears to refer to the line:

i_enc = F.normalize(input=i_batch, p=2, dim=1, eps=1e-12)  # (batch, K, feat_dim)

I'm trying to encode image features (batch x 36 x 2048) by applying an L2 norm. Below is the full code for the section.

def forward(self, q_batch, i_batch):
    # batch size = 512
    # q -> 512 (batch) x 14 (length)
    # i -> 512 (batch) x 36 (K) x 2048 (f_dim)
    # one-hot -> glove
    emb = self.embed(q_batch)
    output, hn = self.gru(emb.permute(1, 0, 2))
    q_enc = hn.view(-1, self.h_dim)

    # image encoding with l2 norm
    i_enc = F.normalize(input=i_batch, p=2, dim=1, eps=1e-12)  # (batch, K, feat_dim)

    q_enc_copy = q_enc.repeat(1, self.K).view(-1, self.K, self.h_dim)

    q_i_concat = torch.cat((i_enc, q_enc_copy), -1)
    q_i_concat = self.non_linear(q_i_concat, self.td_W, self.td_W2)  # 512 x 36 x 512
    i_attention = self.att_w(q_i_concat)  # 512 x 36 x 1
    i_attention = F.softmax(i_attention.squeeze(), 1)

    # weighted sum
    i_enc = torch.bmm(i_attention.unsqueeze(1), i_enc).squeeze()  # (batch, feat_dim)

    # element-wise multiplication
    q = self.non_linear(q_enc, self.q_W, self.q_W2)
    i = self.non_linear(i_enc, self.i_W, self.i_W2)
    h = torch.mul(q, i)  # (batch, hid_dim)

    # output classifier
    # BCE with logits loss
    score = self.c_Wo(self.non_linear(h, self.c_W, self.c_W2))
    return score

I would appreciate any help. Thanks.
I would suggest checking the shape of i_batch (e.g. print(i_batch.shape)), as I suspect i_batch has only 1 dimension (e.g. of shape [N]). This would explain why PyTorch is complaining that you can normalize only over dimension #0, while you are asking for the operation to be done over dimension #1 (c.f. dim=1).
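A minimal reproduction of the failure mode being described:

```python
import torch
import torch.nn.functional as F

v = torch.randn(5)                        # 1-D tensor: only dim 0 (or -1) exists
# F.normalize(v, p=2, dim=1)              # raises: Dimension out of range
print(F.normalize(v, p=2, dim=0).shape)   # works: torch.Size([5])

m = torch.randn(4, 36, 2048)              # the intended (batch, K, feat_dim) shape
print(F.normalize(m, p=2, dim=1).shape)   # works: torch.Size([4, 36, 2048])
```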
https://stackoverflow.com/questions/51348833/
what does colon wrapped class mean in python comment?
What does a colon-wrapped class (:class:) mean in a Python comment? For example,

class Optimizer(object):
    r"""Base class for all optimizers.

    Arguments:
        params (iterable): an iterable of :class:`torch.Tensor` s or
            :class:`dict` s. Specifies what Tensors should be optimized.
        defaults: (dict): a dict containing default values of optimization
            options (used when a parameter group doesn't specify them).
    """

    def __init__(self, params, defaults):
        self.defaults = defaults

Is it a pytorch-specific syntax, or what? Source: https://github.com/pytorch/pytorch/blob/master/torch/optim/optimizer.py
It's nothing Python- or Torch-specific. It's syntax for a documentation tool, most likely Sphinx. The syntax indicates a cross-reference to the documentation for another class. When the documentation tool generates hyperlinked output such as HTML, such references automatically become links to the documentation page for the named class. For Sphinx, see the Cross-referencing syntax documentation; there you'd see py:class: as a class reference, but we can assume the default domain is set to Python, so :class: is valid too. The PyTorch project indeed uses Sphinx to generate the documentation. You can compare the source you found with the resulting generated documentation; note how both dict and torch.Tensor are hyperlinks to more documentation.
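For illustration, a small sketch of how such cross-references look inside an ordinary docstring (the function itself is hypothetical):

```python
def double(t):
    """Multiply a tensor by two.

    Arguments:
        t (:class:`torch.Tensor`): the input tensor. References can point
            at functions or modules too, e.g. :func:`torch.mul` or
            :mod:`torch.optim`.
    """
    return t * 2
```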
https://stackoverflow.com/questions/51353066/
Pixel RNN Pytorch Implementation
I am trying to implement PixelRNN in PyTorch, but I cannot seem to find any documentation on this. The main parts of PixelRNN are the Row LSTM and the BiDiagonal LSTM, so I am looking for some code for these algorithms to better understand what they are doing. Specifically, I am confused as to how these algorithms compute one row and one diagonal at once, respectively. Any help would be much appreciated.
Summary

Here is an in-progress partial implementation: https://github.com/carpedm20/pixel-rnn-tensorflow

Here is a description of Row LSTM and BiDiagonal LSTM at Google DeepMind: https://towardsdatascience.com/summary-of-pixelrnn-by-google-deepmind-7-min-read-938d9871d6d9

Row LSTM

From the linked DeepMind blog: the hidden state of a pixel, red in the image below, is based on the "memory" of the triangular three pixels before it. Because they are in a "row", we can compute in parallel, speeding up computation. We sacrifice some context information (using more history or memory) for the ability to do this parallel computation and speed up training.

The actual implementation relies on several other optimizations and is quite involved. From the original paper:

"The computation proceeds as follows. An LSTM layer has an input-to-state component and a recurrent state-to-state component that together determine the four gates inside the LSTM core. To enhance parallelization in the Row LSTM the input-to-state component is first computed for the entire two-dimensional input map; for this a k × 1 convolution is used to follow the row-wise orientation of the LSTM itself. The convolution is masked to include only the valid context (see Section 3.4) and produces a tensor of size 4h × n × n, representing the four gate vectors for each position in the input map, where h is the number of output feature maps. To compute one step of the state-to-state component of the LSTM layer, one is given the previous hidden and cell states h_{i-1} and c_{i-1}, each of size h × n × 1. The new hidden and cell states h_i, c_i are obtained as follows:

[gate equations given as a figure in the paper; omitted in the source]

where x_i of size h × n × 1 is row i of the input map, ⊛ represents the convolution operation, and ⊙ the elementwise multiplication. The weights K_ss and K_is are the kernel weights for the state-to-state and the input-to-state components, where the latter is precomputed as described above. In the case of the output, forget and input gates o_i, f_i and i_i, the activation σ is the logistic sigmoid function, whereas for the content gate g_i, σ is the tanh function. Each step computes at once the new state for an entire row of the input map."

Diagonal BLSTM

Diagonal BLSTMs were developed to leverage the speedup of parallelization without sacrificing as much context information. A node in a DBLSTM looks to its left and above it; since those nodes have also looked to the left and above, the conditional probability of a given node depends, in some sense, on all of its ancestors. Otherwise, the architectures are very similar.

From the DeepMind blog: [figure omitted in the source]
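To make the quoted input-to-state precomputation concrete, a minimal hedged sketch (the masking is omitted and the sizes are illustrative; this is not the paper's full implementation):

```python
import torch
import torch.nn as nn

h, n, k = 16, 8, 3                 # feature maps, image side, kernel width
x = torch.randn(1, 1, n, n)        # (batch, channels, height, width)

# one row-wise, k-wide convolution precomputes all four gate pre-activations
# for every position at once: the output has 4h channels over the n x n map
input_to_state = nn.Conv2d(1, 4 * h, kernel_size=(1, k), padding=(0, k // 2))
gates = input_to_state(x)
print(gates.shape)                 # torch.Size([1, 64, 8, 8])
```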
https://stackoverflow.com/questions/51364273/
CNN weights getting stuck
This is a slightly theoretical question. Below is a graph that plots the loss as the CNN is being trained; the Y axis is MSE and the X axis is the number of epochs.

Description of the CNN:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv1d(in_channels=1, out_channels=5, kernel_size=9)  # .double
        self.pool1 = nn.MaxPool1d(3)
        self.fc1 = nn.Linear(5 * 30, 200)
        # self.dropout = nn.Dropout(p=0.5)
        self.fc2 = nn.Linear(200, 99)

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))
        x = x.view(-1, 5 * 30)
        # x = self.dropout(F.relu(self.fc1(x)))
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.xavier_uniform_(m.weight)
        m.bias.data.fill_(0.01)

net = Net()
net.apply(init_weights)
criterion = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.01)  # lr varied between 0.01 and 0.0001, depending on the run

Both input and output are arrays of numbers; it is a multi-regression output problem. This issue, where the loss/weights get stuck in an incorrect place, doesn't happen as much if I use a lower learning rate. However, it still happens. In some sense that means that the hyper-dimensional space created by the parameters of the CNN is jagged, with a lot of local minima. This could be true because the CNN's inputs are very similar. Would increasing the layers of the CNN (both conv layers and fully connected linear layers) help solve this problem, as the hyper-dimensional space might be smoother? Or is this intuition completely incorrect? A broader question: when should you be inclined to add more convolutional layers? I know that in practice you should almost never start from scratch and should instead reuse another model's first few layers. However, the inputs I am using are very different from anything I have found online, and therefore I cannot do this.
Is this a multiclass classification problem? If so you could try using cross entropy loss. And a softmax layer before output maybe? I'm not sure because I don't know what's the model's input and output.
https://stackoverflow.com/questions/51364416/
I am trying to classify flowers with a pretrained network, but for some reason it does not train
I am currently trying to classify flowers from this dataset, using PyTorch. First of all, I started to transform my data for the training, validation and testing sets:

data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'

train_transforms = transforms.Compose([transforms.RandomRotation(30),
                                       transforms.RandomResizedCrop(224),
                                       transforms.RandomHorizontalFlip(),
                                       transforms.ToTensor(),
                                       transforms.Normalize([0.485, 0.456, 0.406],
                                                            [0.229, 0.224, 0.225])])

test_transforms = transforms.Compose([transforms.Resize(224),
                                      transforms.CenterCrop(224),
                                      transforms.ToTensor(),
                                      transforms.Normalize([0.485, 0.456, 0.406],
                                                           [0.229, 0.224, 0.225])])

Afterwards I loaded the data with ImageFolder:

trainset = datasets.ImageFolder(train_dir, transform=train_transforms)
testset = datasets.ImageFolder(test_dir, transform=test_transforms)
validationset = datasets.ImageFolder(valid_dir, transform=test_transforms)

Then I defined my DataLoaders:

trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(testset, batch_size=32)
validationloader = torch.utils.data.DataLoader(validationset, batch_size=32)

I chose VGG to be my pretrained model:

model = models.vgg16(pretrained=True)

And defined a new classifier:

for param in model.parameters():
    param.requires_grad = False

classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 4096)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(4096, 4096)),
    ('relu', nn.ReLU()),
    ('fc3', nn.Linear(4096, 102)),
    ('output', nn.Softmax(dim=1))
]))

model.classifier = classifier

This is the code to actually train my NN (on the GPU):

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.005)
epochs = 9
print_every = 10
steps = 0

model.to('cuda')

for e in range(epochs):
    running_loss = 0
    for ii, (inputs, labels) in enumerate(trainloader):
        steps += 1
        inputs, labels = inputs.to('cuda'), labels.to('cuda')
        optimizer.zero_grad()

        # Forward and backward
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

        if steps % print_every == 0:
            print("Epoch: {}/{}... ".format(e + 1, epochs),
                  "Loss: {:.4f}".format(running_loss / print_every))
            running_loss = 0

But when I run my model, the loss is random and I am not sure why. Thank you for any kind of help in advance, and greetings from Germany!
Here are some tips, in the order in which I think they will help:

Try doing some hyper-parameter optimization (i.e. try 10 learning rates over a domain like 1e-2 to 1e-6). More info on what that is: http://cs231n.github.io/neural-networks-3/#hyper

Code and print an accuracy metric (print it with your loss), because you may be surprised how high a pre-trained model's accuracy can be.

Try switching to model = models.vgg16_bn(pretrained=True), and perhaps bigger networks like VGG-19 or ResNet-34.

Can you include your accuracy and loss per epoch? Let me know if any of those tips helped! (Hello from the USA.)
https://stackoverflow.com/questions/51366521/
How does pytorch broadcasting work?
torch.add(torch.ones(4, 1), torch.randn(4)) produces a tensor of size torch.Size([4, 4]). Can someone explain the logic behind this?
PyTorch broadcasting is based on numpy broadcasting semantics, which can be understood by reading the numpy broadcasting rules or the PyTorch broadcasting guide. Expounding the concept with an example is an intuitive way to understand it better. So, please see the example below:

In [27]: t_rand
Out[27]: tensor([ 0.23451,  0.34562,  0.45673])

In [28]: t_ones
Out[28]:
tensor([[ 1.],
        [ 1.],
        [ 1.],
        [ 1.]])

Now for torch.add(t_rand, t_ones), visualize it like this: each 1. in the (4, 1) tensor is paired with all three entries of the (3,) tensor:

# t_rand, shape (3,):  [ 0.23451,  0.34562,  0.45673]
# t_ones, shape (4, 1):
#   [ 1.]  ->  1 + 0.23451   1 + 0.34562   1 + 0.45673
#   [ 1.]  ->  1 + 0.23451   1 + 0.34562   1 + 0.45673
#   [ 1.]  ->  1 + 0.23451   1 + 0.34562   1 + 0.45673
#   [ 1.]  ->  1 + 0.23451   1 + 0.34562   1 + 0.45673

which should give the output tensor of shape (4, 3) as:

In [33]: torch.add(t_rand, t_ones)
Out[33]:
tensor([[ 1.23451,  1.34562,  1.45673],
        [ 1.23451,  1.34562,  1.45673],
        [ 1.23451,  1.34562,  1.45673],
        [ 1.23451,  1.34562,  1.45673]])

Also, note that we get exactly the same result even if we pass the arguments in reverse order compared to the previous one:

In [34]: torch.add(t_ones, t_rand)
Out[34]:
tensor([[ 1.23451,  1.34562,  1.45673],
        [ 1.23451,  1.34562,  1.45673],
        [ 1.23451,  1.34562,  1.45673],
        [ 1.23451,  1.34562,  1.45673]])

Anyway, I prefer the former way of understanding it for its more straightforward intuitiveness.

For pictorial understanding, I culled out more examples, enumerated below:

Example 1: [figure omitted in the source]

Example 2: [figure omitted in the source] T and F stand for True and False respectively and indicate along which dimensions we allow broadcasting (source: Theano).

Example 3: [figure omitted in the source] Here are some shapes where the array b is broadcast appropriately to attempt to match the shape of the array a. As shown above, the broadcast b may still not match the shape of a, and so the operation a + b will fail whenever the final broadcast shapes do not match.
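Applying those rules to the original question: the (4,) tensor is aligned from the right against (4, 1), so it is treated as shape (1, 4); both 1-sized dimensions are then stretched, giving a (4, 4) result. A minimal check:

```python
import torch

a = torch.ones(4, 1)   # shape (4, 1)
b = torch.randn(4)     # shape (4,) -> aligned as (1, 4)

c = torch.add(a, b)    # (4, 1) + (1, 4) broadcasts to (4, 4)
print(c.shape)         # torch.Size([4, 4])
```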
https://stackoverflow.com/questions/51371070/
What exactly is label for image segmentation task in computer vision
I have been working on some image segmentation tasks lately and would like to implement one from scratch. Segmentation, as I understand it, is the per-pixel prediction of what each pixel belongs to: an object instance ("things") or a background segment instance ("stuff"). As per the COCO dataset, on which the latest algorithm, Mask R-CNN, is based: things are countable objects such as people, animals, tools. Stuff classes are amorphous regions of similar texture or material, such as grass, sky, road. As per the Mask R-CNN paper, the final classification is a binary cross-entropy loss function taking a per-pixel sigmoid (to avoid competition among classes). This pipeline is built on top of the Faster R-CNN object detection pipeline, from which it gets the regions of interest (RoI) and passes them through a RoIAlign layer to keep the spatial information intact.

What I'm confused about is the following. Given the very simple code snippet below, applying binary cross-entropy loss to 3 separate fully connected layers (a random experiment with scales):

class ModelMain(nn.Module):
    def __init__(self, config, is_training=True):
        super(ModelMain, self).__init__()
        self.fc_1 = torch.nn.Linear(incoming_size_1, outgoing_size_1)
        self.fc_2 = torch.nn.Linear(incoming_size_2, outgoing_size_2)
        self.fc_3 = torch.nn.Linear(incoming_size_3, outgoing_size_3)

    def forward(self, x):
        y_1 = F.sigmoid(self.fc_1(x))
        y_2 = F.sigmoid(self.fc_2(x))
        y_3 = F.sigmoid(self.fc_3(x))
        return y_1, y_2, y_3

model = ModelMain()
criterion = torch.nn.BCELoss(size_average=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def run_epoch():
    batchsize = 10
    for epoch in range(batchsize):
        # Find image segment predicted by running forward pass:
        y_predicted_1, y_predicted_2, y_predicted_3 = model(batch_data_x)

        # Compute and print loss:
        loss_1 = criterion(y_predicted_1, batch_data_y)
        loss_2 = criterion(y_predicted_2, batch_data_y)
        loss_3 = criterion(y_predicted_3, batch_data_y)
        print("Epoch ", epoch, "Loss : ", loss_1, loss_2, loss_3)

        # Perform backward pass:
        optimizer.zero_grad()
        loss_1.backward()
        loss_2.backward()
        loss_3.backward()
        optimizer.step()

... what exactly do we provide here as the label?
From the dataset : Formatted JSON Data image : { "license":2, "file_name":"000000000139.jpg", "coco_url":"http://images.cocodataset.org/val2017/000000000139.jpg", "height":426, "width":640, "date_captured":"2013-11-21 01:34:01", "flickr_url":"http://farm9.staticflickr.com/8035/8024364858_9c41dc1666_z.jpg", "id":139 } Segment info : { "segments_info":[ { "id":3226956, "category_id":1, "iscrowd":0, "bbox":[ 413, 158, 53, 138 ], "area":2840 }, { "id":6979964, "category_id":1, "iscrowd":0, "bbox":[ 384, 172, 16, 36 ], "area":439 }, { "id":3103374, "category_id":62, "iscrowd":0, "bbox":[ 413, 223, 30, 81 ], "area":1250 }, { "id":2831194, "category_id":62, "iscrowd":0, "bbox":[ 291, 218, 62, 98 ], "area":1848 }, { "id":3496593, "category_id":62, "iscrowd":0, "bbox":[ 412, 219, 10, 13 ], "area":90 }, { "id":2633066, "category_id":62, "iscrowd":0, "bbox":[ 317, 219, 22, 12 ], "area":212 }, { "id":3165572, "category_id":62, "iscrowd":0, "bbox":[ 359, 218, 56, 103 ], "area":2251 }, { "id":8824489, "category_id":64, "iscrowd":0, "bbox":[ 237, 149, 24, 62 ], "area":369 }, { "id":3032951, "category_id":67, "iscrowd":0, "bbox":[ 321, 231, 126, 89 ], "area":2134 }, { "id":2038814, "category_id":72, "iscrowd":0, "bbox":[ 7, 168, 149, 95 ], "area":13247 }, { "id":3289671, "category_id":72, "iscrowd":0, "bbox":[ 557, 209, 82, 79 ], "area":5846 }, { "id":2437710, "category_id":78, "iscrowd":0, "bbox":[ 512, 206, 15, 16 ], "area":224 }, { "id":4159376, "category_id":82, "iscrowd":0, "bbox":[ 493, 174, 20, 108 ], "area":2056 }, { "id":3423599, "category_id":84, "iscrowd":0, "bbox":[ 613, 308, 13, 46 ], "area":324 }, { "id":3094634, "category_id":84, "iscrowd":0, "bbox":[ 605, 306, 14, 45 ], "area":331 }, { "id":3296100, "category_id":85, "iscrowd":0, "bbox":[ 448, 121, 14, 22 ], "area":227 }, { "id":6054280, "category_id":86, "iscrowd":0, "bbox":[ 241, 195, 14, 18 ], "area":187 }, { "id":5942189, "category_id":86, "iscrowd":0, "bbox":[ 549, 309, 36, 90 ], "area":2171 }, { "id":4086154, "category_id":86, "iscrowd":0, "bbox":[ 351, 209, 11, 22 ], "area":178 }, { "id":7438777, "category_id":86, "iscrowd":0, "bbox":[ 337, 200, 10, 16 ], "area":120 }, { "id":3031159, "category_id":118, "iscrowd":0, "bbox":[ 0, 269, 564, 157 ], "area":49754 }, { "id":9284267, "category_id":119, "iscrowd":0, "bbox":[ 338, 166, 29, 50 ], "area":842 }, { "id":6068135, "category_id":130, "iscrowd":0, "bbox":[ 212, 11, 321, 127 ], "area":3391 }, { "id":2567230, "category_id":156, "iscrowd":0, "bbox":[ 129, 168, 351, 162 ], "area":5699 }, { "id":10334639, "category_id":181, "iscrowd":0, "bbox":[ 204, 63, 234, 174 ], "area":15587 }, { "id":6266027, "category_id":186, "iscrowd":0, "bbox":[ 136, 0, 473, 116 ], "area":20106 }, { "id":5274512, "category_id":188, "iscrowd":0, "bbox":[ 0, 38, 549, 297 ], "area":25483 }, { "id":7238567, "category_id":189, "iscrowd":0, "bbox":[ 457, 350, 183, 76 ], "area":9421 }, { "id":4224910, "category_id":199, "iscrowd":0, "bbox":[ 0, 0, 640, 358 ], "area":83201 }, { "id":6391959, "category_id":200, "iscrowd":0, "bbox":[ 135, 359, 336, 67 ], "area":12618 } ], "file_name":"000000000139.png", "image_id":139 } The Mask image : The Original image : For the object detection task we have bounding box, but for image segmentation I need to calculate loss with the mask provided. So what should be the value for the batch_data_y in the above code. Will it be the vector for the mask image. But doesn't that train my network as to what color some segment is ? Or am I missing some other segment annotation ?
The intuition of @Aldream was correct, but explicitly, for the COCO dataset, they provide binary masks. The documentation is not so great on their website:

"Interface for manipulating masks stored in RLE format. RLE is a simple yet efficient format for storing binary masks. RLE first divides a vector (or vectorized image) into a series of piecewise constant regions and then for each piece simply stores the length of that piece. For example, given M=[0 0 1 1 1 0 1] the RLE counts would be [2 3 1 1], or for M=[1 1 1 1 1 1 0] the counts would be [0 6 1] (note that the odd counts are always the numbers of zeros). Instead of storing the counts directly, additional compression is achieved with a variable bitrate representation based on a common scheme called LEB128."

source: link

Though I did write my own custom function for average binary cross-entropy loss:

def l_cross_entropy2d(input, target, weight=None, size_average=True):
    n, c, h, w = input.size()
    nt, ct, ht, wt = target.size()

    # Handle inconsistent size between input and target
    if h > ht and w > wt:  # upsample labels
        target = target.unsqueeze(1)
        target = F.upsample(target, size=(h, w), mode='nearest')
        target = target.squeeze(1)
    elif h < ht and w < wt:  # upsample images
        input = F.upsample(input, size=(ht, wt), mode='bilinear')
    elif h != ht and w != wt:
        raise Exception("Only support upsampling")

    # take per-pixel sigmoid
    sigm = F.sigmoid(input)
    # change dimensions to create a 2d matrix where rows -> pixels and columns -> classes
    # takes input tensor <n x c x h x w> and outputs tensor <n*h*w x c>
    sigm = sigm.transpose(1, 2).transpose(2, 3).contiguous().view(-1, c)

    # change target to a column tensor for calculating cross-entropy and repeat it number-of-classes times
    # Get all values from the sigmoid tensor >= 0 (all pixels that have a value)
    sigm = sigm[target.view(-1, 1).repeat(1, c) >= 0]
    sigm = sigm.view(-1, c)

    mask = target >= 0
    target = target[mask]

    loss = F.nll_loss(sigm, target, ignore_index=250,
                      weight=weight, size_average=False)
    if size_average:
        loss /= mask.data.sum()
    return loss
https://stackoverflow.com/questions/51371624/
The purpose of introducing nn.Parameter in pytorch
I am new to PyTorch and I am confused about the difference between nn.Parameter and autograd.Variable. I know that the former is a subclass of Variable and has gradients. But I really don't understand why we introduce Parameter and when we should use it.

SUMMARY: Thanks to iacolippo's explanation, I finally understand the difference between Parameter and Variable. In summary, a Variable in PyTorch is NOT the same as a variable in TensorFlow: the former is not attached to the model's trainable parameters, while the latter is. Being attached to the model means that model.parameters() will return that parameter to you, which is useful in the training phase for specifying which variables need to be trained. A plain Variable is more helpful as a cache in some networks.
From the documentation: Parameters are Tensor subclasses, that have a very special property when used with Modules - when they’re assigned as Module attributes they are automatically added to the list of its parameters, and will appear e.g. in parameters() iterator. Assigning a Tensor doesn’t have such effect. This is because one might want to cache some temporary state, like last hidden state of the RNN, in the model. If there was no such class as Parameter, these temporaries would get registered too. Think for example when you initialize an optimizer: optim.SGD(model.parameters(), lr=1e-3) The optimizer will update only registered Parameters of the model. Variables are still present in Pytorch 0.4 but they are deprecated. From the docs: The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with requires_grad set to True. Pytorch pre-0.4 In Pytorch before version 0.4 one needed to wrap a Tensor in a torch.autograd.Variable in order to keep track of the operations applied to it and perform differentiation. From the docs of Variable in 0.3: Wraps a tensor and records the operations applied to it. Variable is a thin wrapper around a Tensor object, that also holds the gradient w.r.t. to it, and a reference to a function that created it. This reference allows retracing the whole chain of operations that created the data. If the Variable has been created by the user, its grad_fn will be None and we call such objects leaf Variables. Since autograd only supports scalar valued function differentiation, grad size always matches the data size. Also, grad is normally only allocated for leaf variables, and will be always zero otherwise. The difference wrt Parameter was more or less the same. From the docs of Parameters in 0.3: A kind of Variable that is to be considered a module parameter. Parameters are Variable subclasses, that have a very special property when used with Modules - when they’re assigned as Module attributes they are automatically added to the list of its parameters, and will appear e.g. in parameters() iterator. Assigning a Variable doesn’t have such effect. This is because one might want to cache some temporary state, like last hidden state of the RNN, in the model. If there was no such class as Parameter, these temporaries would get registered too. Another difference is that parameters can’t be volatile and that they require gradient by default.
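A short sketch of the registration behavior described above:

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.randn(3))  # registered: visible to the optimizer
        self.state = torch.randn(3)            # plain tensor: NOT registered

m = M()
print([name for name, _ in m.named_parameters()])  # ['w']
```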
https://stackoverflow.com/questions/51373919/
How to read a ckpt file with python3, while it is saved using python2?
I am trying to read a checkpoint file with PyTorch:

checkpoint = torch.load("xxx.ckpt")

The file was generated by a program written in Python 2.7. I try to read the file using Python 3.6 but get the following error:

UnicodeDecodeError: 'ascii' codec can't decode byte 0x8c in position 16: ordinal not in range(128)

Is it possible to read the file without downgrading Python?
Eventually I solved the issue by:

1) creating a Python 2 environment using Anaconda;

2) reading the checkpoint file using PyTorch, and then saving it using pickle:

checkpoint = torch.load("xxx.ckpt")
with open("xxx.pkl", "wb") as outfile:
    pickle.dump(checkpoint, outfile)

3) going back to the Python 3 environment, reading the file using pickle, and saving it using PyTorch:

pkl_file = open("xxx.pkl", "rb")
data = pickle.load(pkl_file, encoding="latin1")
torch.save(data, "xxx.ckpt")
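On reasonably recent PyTorch versions, the round trip can often be avoided, since torch.load forwards extra keyword arguments to pickle (hedged: check your version's documentation):

```python
import torch

# the encoding argument is passed through to the underlying unpickler
checkpoint = torch.load("xxx.ckpt", encoding="latin1")
```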
https://stackoverflow.com/questions/51376673/
Understanding PyTorch Bernoulli distribution from the documention
So I was reading the PyTorch documentation, trying to learn and understand some things (because I'm new to machine learning), and I found torch.bernoulli(). I understood (I misunderstood it) that it rounds tensors whose values are between 0 and 1 to 0 or 1 depending on the value (like classic school rounding: less than 0.5 = 0, greater than or equal to 0.5 = 1). After some experimentation on my own, it did seem to work as expected:

>>> x = torch.Tensor([0.500])
>>> x
 0.5000
[torch.FloatTensor of size 1]
>>> torch.bernoulli(x)
 1
[torch.FloatTensor of size 1]

But when I looked at the documentation, something seemed a bit weird:

>>> a = torch.Tensor(3, 3).uniform_(0, 1)  # generate a uniform random matrix with range [0, 1]
>>> a
 0.7544  0.8140  0.9842
 0.5282  0.0595  0.6445
 0.1925  0.9553  0.9732
[torch.FloatTensor of size 3x3]
>>> torch.bernoulli(a)
 1  1  1
 0  0  1
 0  1  1
[torch.FloatTensor of size 3x3]

In the example, the 0.5282 got mapped to 0. How did that happen? Or is it a fault in the documentation? Because I tried it, and 0.5282 got mapped to 1, as I expected.
Well, Bernoulli is a probability distribution. Specifically, torch.distributions.Bernoulli() samples from the distribution and returns a binary value (i.e. either 0 or 1). Here, it returns 1 with probability p and 0 with probability 1-p. The example below will make this clear:

In [141]: m = torch.distributions.Bernoulli(torch.tensor([0.63]))

In [142]: m.sample()  # 63% chance 1; 37% chance 0
Out[142]: tensor([ 0.])

In [143]: m.sample()  # 63% chance 1; 37% chance 0
Out[143]: tensor([ 1.])

In [144]: m.sample()  # 63% chance 1; 37% chance 0
Out[144]: tensor([ 0.])

In [145]: m.sample()  # 63% chance 1; 37% chance 0
Out[145]: tensor([ 0.])

In [146]: m.sample()  # 63% chance 1; 37% chance 0
Out[146]: tensor([ 1.])

In [147]: m.sample()  # 63% chance 1; 37% chance 0
Out[147]: tensor([ 1.])

In [148]: m.sample()  # 63% chance 1; 37% chance 0
Out[148]: tensor([ 1.])

In [149]: m.sample()  # 63% chance 1; 37% chance 0
Out[149]: tensor([ 1.])

In [150]: m.sample()  # 63% chance 1; 37% chance 0
Out[150]: tensor([ 1.])

In [151]: m.sample()  # 63% chance 1; 37% chance 0
Out[151]: tensor([ 1.])

So, we sampled it 10 times, and got 1s seven times, which is approximately 63%. We would need to sample a sufficiently large number of times to get exact proportions of 37% and 63% for the 0s and 1s respectively; this is because of the Law of Large Numbers.
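Sampling many draws at once makes the law-of-large-numbers point concrete:

```python
import torch

m = torch.distributions.Bernoulli(torch.tensor([0.63]))
samples = m.sample((10000,))   # 10,000 independent draws
print(samples.mean())          # approximately 0.63
```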
https://stackoverflow.com/questions/51392046/
Using torch.nn.DataParallel with a custom CUDA extension
To my understanding, the built-in PyTorch operations all automatically handle batches through implicit vectorization, allowing parallelism across multiple GPUs. However, when writing a custom operation in CUDA as per the documentation, the LLTM example given performs operations that are batch invariant, for example computing the gradient of the sigmoid function elementwise. However, I have a use case that is not batch-element invariant and not vectorizable. Running on a single GPU, I currently (inefficiently) loop over each element in the batch, performing a kernel launch for each, like so (written in the browser, just to demonstrate):

std::vector<at::Tensor> op_cuda_forward(at::Tensor input,
                                        at::Tensor elementSpecificParam) {
    auto output = at::zeros(torch::CUDA(/* TYPE */), {/* DIMENSIONS */});

    const size_t blockDim = //
    const size_t gridDim = //
    const size_t numBatches = //

    for (size_t i = 0; i < numBatches; i++) {
        op_cuda_forward_kernel<T><<<gridDim, blockDim>>>(input[i],
                                                         elementSpecificParam[i],
                                                         output[i]);
    }

    return {output};
}

However, I wish to split this operation over multiple GPUs by batch element. How would the allocation of the output Tensor work in a multi-GPU scenario? Of course, one may create intermediate Tensors on each GPU before launching the appropriate kernel, but the overhead of copying the input data to each GPU and back again would be problematic. Is there a simpler way to launch the kernels without first probing the environment for GPU information (number of GPUs, etc.)? The end goal is to have a CUDA operation that works with torch.nn.DataParallel.
In order to use torch.nn.DataParallel with a custom CUDA extension, you can follow these steps:

1) Define your custom CUDA extension in a subclass of torch.autograd.Function, and implement the forward() and backward() methods for the forward and backward passes, respectively.

2) In the forward() method, create a new output Tensor and allocate it on the same device that the input Tensor is on, using output.to(input.device).

3) In the backward() method, create a new gradient input Tensor and allocate it on the same device that the gradient output Tensor is on, using grad_input.to(grad_output.device).

4) In your main PyTorch code, use torch.nn.DataParallel to wrap your model, and make sure to call the to() method on the input data to move it to the same device that your model is on.

Here's an example of how this could look:

# Define your custom CUDA extension as a subclass of torch.autograd.Function
class MyCustomCudaExtension(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, element_specific_param):
        # Create a new output Tensor and allocate it on the same device as the input Tensor
        output = torch.zeros_like(input).to(input.device)

        # Perform the forward pass using your custom CUDA kernel
        # ...

        # Save any necessary information for the backward pass
        ctx.save_for_backward(output)

        return output

    @staticmethod
    def backward(ctx, grad_output):
        # Retrieve the saved information from the forward pass
        output = ctx.saved_tensors[0]

        # Create a new gradient input Tensor and allocate it on the same device as the gradient output Tensor
        grad_input = torch.zeros_like(output).to(grad_output.device)

        # Perform the backward pass using your custom CUDA kernel
        # ...

        return grad_input, None

# In your main PyTorch code, wrap your model in torch.nn.DataParallel
model = torch.nn.DataParallel(MyModel())

# Move the input data to the same device as the model's parameters
# (nn.Module has no .device attribute; querying a parameter's device is a reliable way to find it)
input = input.to(next(model.parameters()).device)

# Perform the forward pass
output = model(input)

This way, torch.nn.DataParallel will automatically handle the parallelism across multiple GPUs, and your custom CUDA extension will be executed on the correct device. Note that in this example, we are assuming that the input and output Tensors are on the same device. If this is not the case, you can use the input.device and grad_output.device attributes to retrieve the devices of the input and gradient output Tensors, respectively, and allocate the output and gradient input Tensors on those devices.
https://stackoverflow.com/questions/51400618/
Is Seq2Seq the right model for my data?
I'm trying to train a model to predict design patterns from web pages. I'm using the coordinates of bounding rects given a bunch of element groupings. Patterns look like [[elementId, width, height, x, y]], so my target would be the [[x, y]] given [[elementId, width, height]]. Concretely:

[[5, 1.0, 1.0], [4, 1.0, 1.0], [2, 175.0, 65.0], [2, 1.0, 1.0], [4, 1.0, 1.0]] -> [[0.0, 0.0], [0.0, 10.0], [3.0, 0.0], [0.0, 68.0], [0.0, 10.0]]

[[2, 14.0, 14.0], [2, 14.0, 14.0], [2, 14.0, 14.0]] -> [[0.0, 3.0], [0.0, 3.0], [0.0, 3.0]]

Patterns vary in size, so I've padded them with [[0, 0, 0]]. I currently have about 15k of them, but can get more. I was told that seq2seq with attention is the right model for this job. I've started with https://machinelearningmastery.com/develop-encoder-decoder-model-sequence-sequence-prediction-keras/ and achieved horrendous results. Every seq2seq example I can find (searching for Keras or PyTorch) is used for translation, which is categorical, and I'm struggling to find a good regression-based example. So my questions are: 1) Is this the right model (encoder/decoder LSTM) for what I'm trying to do? 2) If so, are there any examples? Thanks so much in advance. I don't expect anyone to solve my problem, but any help would be much appreciated!
Seq2Seq/LSTM models are used when input and output have variable lengths. Your input is of size 3 and your output is of size 2 (at least for the given examples). So you can use a simple one- or two-hidden-layer feed-forward model with an L2/L1 loss (for regression). Any optimizer (SGD/Adam) should be fine; however, Adam works well in practice. Also, I think you should not use the coordinates as they are: you can scale them so that the highest coordinate is 1, and hence the input/output range would be between 0 and 1. As an added advantage, this would help you generalize to different screen sizes intuitively.
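A minimal sketch of the suggested feed-forward regressor (layer widths are placeholders, not recommendations):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3, 64),    # input: [elementId, width, height]
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 2),    # output: [x, y]
)

loss_fn = nn.MSELoss()   # L2 loss; nn.L1Loss() is the L1 alternative
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```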
https://stackoverflow.com/questions/51412473/
How do i add ctc beam search decoder in crnn model (pytorch)
I am following the CRNN implementation of https://github.com/meijieru/crnn.pytorch, but it seems it is not using beam search for decoding the words. Can someone tell me how to add beam search decoding to the same model? In TensorFlow, by comparison, there is a built-in tf.nn.ctc_beam_search_decoder.
I know it's not a great idea, but I did it using TensorFlow inside PyTorch:

if beam:
    decodes, _ = tf.nn.ctc_beam_search_decoder(inputs=preds_.cpu().detach().numpy(),
                                               sequence_length=25 * np.ones(1),
                                               merge_repeated=False)
    with tf.Session(config=tf.ConfigProto(device_count={'GPU': 0})) as sess:
        t_ = sess.run(decodes)[0].values
        char_list = []
        for i in range(len(sess.run(decodes)[0].values)):
            if t_[i] != 0 and (not (i > 0 and t_[i - 1] == t_[i])):
                char_list.append(alphabet[t_[i] - 1])
        sim_pred = ''.join(char_list)
else:
    raw_pred = converter.decode(preds.data, preds_size.data, raw=True)
    sim_pred = converter.decode(preds.data, preds_size.data, raw=False)
https://stackoverflow.com/questions/51422776/
Using Precision and Recall in training of skewed dataset
I have a skewed dataset (5,000,000 positive examples and only 8000 negative [binary classified]) and thus, I know, accuracy is not a useful model evaluation metric. I know how to calculate precision and recall mathematically but I am unsure how to implement them in python code. When I train the model on all the data I get 99% accuracy overall but 0% accuracy on the negative examples (ie. classifying everything as positive). I have built my current model in Pytorch with the criterion = nn.CrossEntropyLoss() and optimiser = optim.Adam(). So, my question is, how do I implement precision and recall into my training to produce the best model possible? Thanks in advance
The implementations of precision, recall, F1 score and other metrics are usually imported from the scikit-learn library in Python: http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics

Regarding your classification task: the number of positive training samples simply eclipses the negative samples. Try training with a reduced number of positive samples, or generating more negative samples. I am not sure deep neural networks can provide you with an optimal result considering the class skewness. Minority-class samples (here, negatives) can be generated using the Synthetic Minority Over-sampling Technique (SMOTE); this link is a good place to start: https://www.analyticsvidhya.com/blog/2017/03/imbalanced-classification-problem/

Try using simple models such as logistic regression or random forest first, and check if there is any improvement in the F1 score of the model.
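A hedged sketch of computing these metrics from model predictions, plus one common in-loss mitigation (the weight value is only an illustration of up-weighting the rare class):

```python
import torch
from sklearn.metrics import precision_score, recall_score, f1_score

# placeholder predictions from a trained binary classifier
logits = torch.randn(100, 2)
y_true = torch.randint(0, 2, (100,))
y_pred = logits.argmax(dim=1)

print(precision_score(y_true.numpy(), y_pred.numpy()),
      recall_score(y_true.numpy(), y_pred.numpy()),
      f1_score(y_true.numpy(), y_pred.numpy()))

# weight the rare class in the loss, e.g. by the inverse class frequency
weights = torch.tensor([1.0, 5_000_000 / 8_000])
criterion = torch.nn.CrossEntropyLoss(weight=weights)
```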
https://stackoverflow.com/questions/51425436/
What does model.train() do in PyTorch?
Does it call forward() in nn.Module? I thought that when we call the model, the forward method is used. Why do we need to specify train()?
model.train() tells your model that you are training it. This helps inform layers such as Dropout and BatchNorm, which are designed to behave differently during training and evaluation. For instance, in training mode, BatchNorm updates a moving average on each new batch, whereas in evaluation mode these updates are frozen. More details: model.train() sets the mode to train (see source code). You can call either model.eval() or model.train(mode=False) to tell the model that you are testing. It is somewhat intuitive to expect the train function to train the model, but it does not do that. It just sets the mode.
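A minimal illustration of the effect (the tiny network here is just a stand-in containing a Dropout layer):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5))
x = torch.ones(1, 10)

net.train()   # dropout is active: repeated calls give different outputs
print(net(x))
print(net(x))

net.eval()    # dropout is disabled: repeated calls give identical outputs
print(net(x))
print(net(x))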
https://stackoverflow.com/questions/51433378/
Rearranging a 3-D array using indices from sorting?
I have a 3-D array of random numbers of size [channels = 3, height = 10, width = 10]. Then I sorted it along the columns using the sort command from PyTorch and obtained the indices as well. Now, I would like to return to the original matrix using these indices. I currently use for loops to do this (without considering batches). The code is:

import torch

torch.manual_seed(1)
ch = 3
h = 10
w = 10
inp_unf = torch.randn(ch, h, w)
inp_sort, indices = torch.sort(inp_unf, 1)
resort = torch.zeros(inp_sort.shape)
for i in range(ch):
    for j in range(inp_sort.shape[1]):
        for k in range(inp_sort.shape[2]):
            temp = inp_sort[i, j, k]
            resort[i, indices[i, j, k], k] = temp

I would like it to be vectorized, considering batches as well, i.e. with an input size of [batch, channel, height, width].
Using Tensor.scatter_() You can directly scatter the sorted tensor back into its original state using the indices provided by sort():

torch.zeros(ch, h, w).scatter_(dim=1, index=indices, src=inp_sort)

The intuition is based on the previous answer below. As scatter() is basically the reverse of gather(), inp_reunf = inp_sort.gather(dim=1, index=reverse_indices) is the same as inp_reunf.scatter_(dim=1, index=indices, src=inp_sort).

Previous answer Note: while correct, this is probably less performant, as it calls the sort() operation a second time. You need to obtain the sorting "reverse indices", which can be done by "sorting the indices returned by sort()". In other words, given x_sort, indices = x.sort(), you have x[indices] -> x_sort, while what you want is reverse_indices such that x_sort[reverse_indices] -> x. This can be obtained as follows: _, reverse_indices = indices.sort().

import torch

torch.manual_seed(1)
ch, h, w = 3, 10, 10
inp_unf = torch.randn(ch, h, w)
inp_sort, indices = inp_unf.sort(dim=1)
_, reverse_indices = indices.sort(dim=1)
inp_reunf = inp_sort.gather(dim=1, index=reverse_indices)
print(torch.equal(inp_unf, inp_reunf))  # True
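The same approach extends unchanged to the batched 4-D case the question asks about, since scatter_() only needs the index tensor to match the source tensor's shape. A quick sketch (now sorting along the height dimension, which is dim 2 in (batch, channel, height, width) layout):

import torch

inp = torch.randn(2, 3, 10, 10)    # (batch, channel, height, width)
inp_sort, indices = inp.sort(dim=2)
resort = torch.zeros_like(inp).scatter_(dim=2, index=indices, src=inp_sort)
print(torch.equal(inp, resort))    # True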
https://stackoverflow.com/questions/51433741/
How can I create a PyCUDA GPUArray from a gpu memory address?
I'm working with PyTorch and want to do some arithmetic on Tensor data with the help of PyCUDA. I can get a memory address of a cuda tensor t via t.data_ptr(). Can I somehow use this address and my knowledge of the size and data type to initialize a GPUArray? I am hoping to avoid copying the data, but that would also be an alternative.
It turns out this is possible. We need a pointer to the data, which needs some additional capabilities:

class Holder(PointerHolderBase):

    def __init__(self, tensor):
        super().__init__()
        self.tensor = tensor
        self.gpudata = tensor.data_ptr()

    def get_pointer(self):
        return self.tensor.data_ptr()

    def __int__(self):
        return self.__index__()

    # without an __index__ method, arithmetic calls to the GPUArray backed by
    # this pointer fail; not sure why, but this apparently needs to return
    # some integer
    def __index__(self):
        return self.gpudata

We can then use this class to instantiate GPUArrays. The code uses Reikna arrays, which are a subclass, but should work with pycuda arrays as well.

def tensor_to_gpuarray(tensor, context=pycuda.autoinit.context):
    '''Convert a :class:`torch.Tensor` to a :class:`pycuda.gpuarray.GPUArray`.
    The underlying storage will be shared, so that modifications to the array
    will reflect in the tensor object.

    Parameters
    ----------
    tensor  :   torch.Tensor

    Returns
    -------
    pycuda.gpuarray.GPUArray

    Raises
    ------
    ValueError
        If the ``tensor`` does not live on the gpu
    '''
    if not tensor.is_cuda:
        raise ValueError('Cannot convert CPU tensor to GPUArray (call `cuda()` on it)')
    else:
        thread = cuda.cuda_api().Thread(context)
        return reikna.cluda.cuda.Array(thread, tensor.shape,
                                       dtype=torch_dtype_to_numpy(tensor.dtype),
                                       base_data=Holder(tensor))

We can go back with this code. I have not found a way to do this without copying the data.

def gpuarray_to_tensor(gpuarray, context=pycuda.autoinit.context):
    '''Convert a :class:`pycuda.gpuarray.GPUArray` to a :class:`torch.Tensor`.
    The underlying storage will NOT be shared, since a new copy must be
    allocated.

    Parameters
    ----------
    gpuarray  :   pycuda.gpuarray.GPUArray

    Returns
    -------
    torch.Tensor
    '''
    shape = gpuarray.shape
    dtype = gpuarray.dtype
    out_dtype = numpy_dtype_to_torch(dtype)
    out = torch.zeros(shape, dtype=out_dtype).cuda()
    gpuarray_copy = tensor_to_gpuarray(out, context=context)
    byte_size = gpuarray.itemsize * gpuarray.size
    pycuda.driver.memcpy_dtod(gpuarray_copy.gpudata, gpuarray.gpudata, byte_size)
    return out

Old answer

from pycuda.gpuarray import GPUArray

def torch_dtype_to_numpy(dtype):
    dtype_name = str(dtype)[6:]   # remove 'torch.'
    return getattr(np, dtype_name)

def tensor_to_gpuarray(tensor):
    if not tensor.is_cuda:
        raise ValueError('Cannot convert CPU tensor to GPUArray (call `cuda()` on it)')
    else:
        array = GPUArray(tensor.shape, dtype=torch_dtype_to_numpy(tensor.dtype),
                         gpudata=tensor.data_ptr())
        return array.copy()

Unfortunately, passing an int as the gpudata keyword (or a subtype of pycuda.driver.PointerHolderBase, as was suggested in the pytorch forum) seems to work on the surface, but many operations fail with seemingly unrelated errors. Copying the array seems to transform it into a usable format, though. I think it is related to the fact that the gpudata member should be a pycuda.driver.DeviceAllocation object, which, it seems, cannot be instantiated from Python. Now how to go back from the raw data to a Tensor is another matter.
https://stackoverflow.com/questions/51438232/
How to iterate over two dataloaders simultaneously using pytorch?
I am trying to implement a Siamese network that takes in two images. I load these images and create two separate dataloaders. In my loop I want to go through both dataloaders simultaneously so that I can train the network on both images.

for i, data in enumerate(zip(dataloaders1, dataloaders2)):
    # get the inputs
    inputs1 = data[0][0].cuda(async=True)
    labels1 = data[0][1].cuda(async=True)
    inputs2 = data[1][0].cuda(async=True)
    labels2 = data[1][1].cuda(async=True)

    labels1 = labels1.view(batchSize, 1)
    labels2 = labels2.view(batchSize, 1)

    # zero the parameter gradients
    optimizer.zero_grad()

    # forward + backward + optimize
    outputs1 = alexnet(inputs1)
    outputs2 = alexnet(inputs2)

The return value of the dataloader is a tuple. However, when I try to use zip to iterate over them, I get the following error:

OSError: [Errno 24] Too many open files
Exception NameError: "global name 'FileNotFoundError' is not defined" in <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f2d3c00c190>> ignored

Shouldn't zip work on all iterable items? But it seems I can't use it on dataloaders here. Is there any other way to pursue this? Or am I approaching the implementation of a Siamese network incorrectly?
I see you are struggling to write the right dataloader function. I would do:

class Siamese(Dataset):

    def __init__(self, transform=None):
        pass  # init data here

    def __len__(self):
        return 0  # length of the data

    def __getitem__(self, idx):
        # get images and labels here
        # returned images must be tensors
        # labels should be ints
        return img1, img2, label1, label2
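A fuller sketch of this combined-dataset idea, wrapping the two datasets you already have so that a single DataLoader yields both images at once (dataset1 and dataset2 stand for your two existing image datasets and are assumed to have equal length):

import torch
from torch.utils.data import Dataset, DataLoader

class Siamese(Dataset):
    def __init__(self, dataset1, dataset2):
        assert len(dataset1) == len(dataset2)
        self.dataset1 = dataset1
        self.dataset2 = dataset2

    def __len__(self):
        return len(self.dataset1)

    def __getitem__(self, idx):
        img1, label1 = self.dataset1[idx]
        img2, label2 = self.dataset2[idx]
        return img1, img2, label1, label2

loader = DataLoader(Siamese(dataset1, dataset2), batch_size=64, shuffle=True)
for img1, img2, label1, label2 in loader:
    outputs1 = alexnet(img1.cuda())   # alexnet as in the question
    outputs2 = alexnet(img2.cuda())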
https://stackoverflow.com/questions/51444059/
In Pytorch F.nll_loss() Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'target'
Why does this error occur? I am trying to write a custom loss function that finally has a negative log likelihood. As per my understanding, the NLL is calculated between two probability values?

>>> loss = F.nll_loss(sigm, trg_, ignore_index=250, weight=None, size_average=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home//lib/python3.5/site-packages/torch/nn/functional.py", line 1332, in nll_loss
    return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'target'

The inputs here are the following:

>>> sigm.size()
torch.Size([151414, 80])
>>> sigm
tensor([[ 0.3283,  0.6472,  0.8278,  ...,  0.6756,  0.2168,  0.5659],
        [ 0.6603,  0.5957,  0.8375,  ...,  0.2274,  0.4523,  0.4665],
        [ 0.5262,  0.4223,  0.5009,  ...,  0.5734,  0.3151,  0.2076],
        ...,
        [ 0.4083,  0.2479,  0.5996,  ...,  0.8355,  0.6681,  0.7900],
        [ 0.6373,  0.3771,  0.6568,  ...,  0.4356,  0.8143,  0.4704],
        [ 0.5888,  0.4365,  0.8587,  ...,  0.2233,  0.8264,  0.5411]])

And my target tensor is:

>>> trg_.size()
torch.Size([151414])
>>> trg_
tensor([-7.4693e-01,  3.5152e+00,  2.9679e-02,  ...,  1.6316e-01,  3.6594e+00,  1.3366e-01])

If I convert this to long, I lose all the data:

>>> sigm.long()
tensor([[ 0,  0,  0,  ...,  0,  0,  0],
        [ 0,  0,  0,  ...,  0,  0,  0],
        [ 0,  0,  0,  ...,  0,  0,  0],
        ...,
        [ 0,  0,  0,  ...,  0,  0,  0],
        [ 0,  0,  0,  ...,  0,  0,  0],
        [ 0,  0,  0,  ...,  0,  0,  0]])
>>> trg_.long()
tensor([ 0,  3,  0,  ...,  0,  3,  0])

If I convert the raw values of the target tensor to sigmoid too:

>>> F.sigmoid(trg_)
tensor([ 0.3215,  0.9711,  0.5074,  ...,  0.5407,  0.9749,  0.5334])
>>> loss = F.nll_loss(sigm, F.sigmoid(trg_), ignore_index=250, weight=None, size_average=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/lib/python3.5/site-packages/torch/nn/functional.py", line 1332, in nll_loss
    return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'target'

This does calculate the loss happily, but it is again just make-believe, as I have lost data in the long conversion:

>>> loss = F.nll_loss(sigm, F.sigmoid(trg_).long(), ignore_index=250, weight=None, size_average=True)
>>> loss
tensor(-0.5010)
>>> F.sigmoid(trg_).long()
tensor([ 0,  0,  0,  ...,  0,  0,  0])
"As per my understanding, the NLL is calculated between two probability values?" No, NLL is not calculated between two probability values. As per the pytorch docs (See shape section), It is usually used to implement cross entropy loss. It takes input which is expected to be log-probability and is of size (N, C) when N is data size and C is the number of classes. Target is a long tensor of size (N,) which tells the true class of the sample. Since in your case, target for sure is not the true class, you might have to implement your own version of loss and you may not be able to use NLLLoss. If you add more details about what loss you want to code up, I can help/explain more on how to do that (if possible by using existing function in torch).
https://stackoverflow.com/questions/51448897/
How do I get words from an embedded vector?
When my generator produces word vectors, how can I convert them back into their original words? I used the nn.Embedding module built into PyTorch to embed the words.
Since you didn't provide any code, I am using the code below, with comments, to answer your query. Feel free to add more information for your particular use case.

import torch

# declare embeddings: a vocab of 5 words, each mapped to a 10-dim vector
embed = torch.nn.Embedding(5, 10)

# generate the embedding vector for word [4] in the vocab
word = torch.tensor([4])
vector = embed(word).squeeze(0)

# search function for finding the nearest row in the embedding matrix
def search(vector, distance_fun):
    weights = embed.weight
    min_dist = float('inf')
    idx = -1
    v, e = weights.shape
    # each row of the embedding matrix corresponds to one word in the vocab;
    # compare the query vector against every row using a distance function
    for i in range(v):
        dist = distance_fun(vector, weights[i])
        if dist < min_dist:
            min_dist = dist
            idx = i
    return idx

# searching with squared distance
print(search(vector, lambda x, y: ((x - y) ** 2).sum()))
https://stackoverflow.com/questions/51452907/
PyTorch nn.Embedding error
I was reading the PyTorch documentation on word embeddings.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(5)
word_to_ix = {"hello": 0, "world": 1, "how": 2, "are": 3, "you": 4}
embeds = nn.Embedding(2, 5)  # 2 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor(word_to_ix["hello"], dtype=torch.long)
hello_embed = embeds(lookup_tensor)
print(hello_embed)

Output:

tensor([-0.4868, -0.6038, -0.5581,  0.6675, -0.1974])

This looks good, but if I replace the lookup_tensor line with

lookup_tensor = torch.tensor(word_to_ix["how"], dtype=torch.long)

I get the error:

RuntimeError: index out of range at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:343

I don't understand why it gives a RuntimeError on the line hello_embed = embeds(lookup_tensor).
When you declare embeds = nn.Embedding(2, 5), the vocab size is 2 and the embedding size is 5, i.e. each word is represented by a vector of size 5 and there are only 2 words in the vocab. With lookup_tensor = torch.tensor(word_to_ix["how"], dtype=torch.long), embeds tries to look up the vector corresponding to the third word in the vocab, but the embedding has a vocab size of 2, and that is why you get the error. If you declare embeds = nn.Embedding(5, 5), it should work fine.
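To verify, a quick check with a vocab size that matches the dictionary:

import torch
import torch.nn as nn

torch.manual_seed(5)
word_to_ix = {"hello": 0, "world": 1, "how": 2, "are": 3, "you": 4}
embeds = nn.Embedding(len(word_to_ix), 5)  # 5 words in vocab, 5 dimensional embeddings
lookup_tensor = torch.tensor(word_to_ix["how"], dtype=torch.long)
print(embeds(lookup_tensor))  # no more index-out-of-range error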
https://stackoverflow.com/questions/51456059/
PyTorch Tensors of Inputs and Labels in LSTM
I am new to PyTorch, and I'm working on a simple project to generate text, in order to get my hands on PyTorch. I am using the concept of this code and converting it to PyTorch: https://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/ I have 10 timesteps and 990 samples. For each of these 990 samples, there are 10 values, corresponding to the (scaled) indexes of the sequence. My output samples are the rest of the letters (not including the first letter). For example, if my sample is "Hello Worl", my corresponding output is "ello World". My input size (features) is 1, since I want to feed in one letter at a time. Therefore, my final input shape is (990, 10, 1). I then convert the output tensor into one-hot vectors, so the final shape is (9900, 42), where 42 is the number of elements in the one-hot vector. When I run the model, I get an output of shape (9900, 42), so this is the output of all my timesteps, where each one contains the corresponding one-hot vector. But when I calculate the loss, there is an error: multi-target not supported Can I understand what I did wrong? Thanks. Below is my code:

#The file contains 163780 characters
#The file contains 1000 characters
#There are 42 unique characters
char2int = {char: value for (value, char) in enumerate(unique)}
int2char = {value: char for (value, char) in enumerate(unique)}

learning_rate = 0.01
num_epochs = 5
input_size = 1              # The number of input neurons (features) to our RNN
units = 100
num_layers = 2
num_classes = len(unique)   # The number of output neurons
timesteps = 10

datax = []
datay = []
for index in range(0, len(file) - timesteps, 1):
    prev_letters = file[index:index + timesteps]
    output = file[index + 1: index + timesteps + 1]
    # Convert the 10 previous characters to their integers and put in a list.
    # Append that list to the dataset
    datax.append([char2int[c] for c in prev_letters])
    datay.append([char2int[c] for c in output])
print('There are {} Sequences in the dataset'.format(len(datax)))
#There are 990 Sequences in the dataset

x = np.array(datax)
x = x / float(len(unique))
x = torch.FloatTensor(x)
x = x.view(x.size(0), timesteps, input_size)
print(x.shape)  # torch.Size([990, 10, 1])

y = torch.LongTensor(datay)
print(y.shape)  # torch.Size([990, 10])

y_one_hot = torch.zeros(y.shape[0] * y.shape[1], num_classes)
index = y.long()
index = index.view(-1, 1)   # The expected shape for the scatter function
y_one_hot.scatter_(1, index, 1)  # (dim (1 for along rows and 0 for along cols), index, number to insert)
y_one_hot = y_one_hot.view(-1, num_classes)  # Make the tensor of shape (rows, cols)
y_one_hot = y_one_hot.long()
print(y_one_hot.shape)  # torch.Size([9900, 42])

inputs = Variable(x)
labels = Variable(y_one_hot)

class TextGenerator(nn.Module):
    def __init__(self, input_size, units, num_layers, num_classes, timesteps):
        super(TextGenerator, self).__init__()
        self.units = units
        self.num_layers = num_layers
        self.timesteps = timesteps
        self.input_size = input_size
        # When batch_first=True, inputs are of shape (batch_size/samples, sequence_length, input_dimension)
        self.lstm = nn.LSTM(input_size=input_size, hidden_size=units, num_layers=num_layers, batch_first=True)
        # The output layer
        self.fc = nn.Linear(units, num_classes)

    def forward(self, x):
        # Initialize the hidden state
        h0 = Variable(torch.zeros(self.num_layers, x.size(0), self.units))
        # Initialize the cell state
        c0 = Variable(torch.zeros(self.num_layers, x.size(0), self.units))
        out, _ = self.lstm(x, (h0, c0))
        # Reshape the output from (samples, timesteps, output_features) to a shape appropriate for the FC layer
        out = out.contiguous().view(-1, self.units)
        out = self.fc(out)
        return out

net = TextGenerator(input_size, units, num_layers, num_classes, timesteps)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)

out = net(inputs)
out.shape  # (9900, 42)
loss_fn(out, labels)
In PyTorch, when using CrossEntropyLoss, you need to give the target labels as class indices in [0..n_classes-1] instead of as one-hot vectors. Right now PyTorch thinks you are trying to predict multiple outputs per sample.
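Applied to the question's shapes, this means the one-hot encoding step is unnecessary: the targets should simply be the flattened class indices. A quick illustration (y here plays the role of the question's (990, 10) index tensor):

import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
out = torch.randn(9900, 42)    # model output: one row of logits per timestep
y = torch.randint(0, 42, (990, 10), dtype=torch.long)
labels = y.view(-1)            # class indices of shape (9900,), NOT one-hot
print(loss_fn(out, labels))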
https://stackoverflow.com/questions/51461970/
Highlighting important words in a sentence using Deep Learning
I am trying to highlight the words in the IMDB dataset that contributed most to the final sentiment analysis prediction. The dataset is like: X_train - a review as a string. Y_train - 0 or 1 Now, after using GloVe embeddings to embed the X_train values, I can feed them to a neural net. Now my question is: how can I highlight the most important words, probability-wise, just like deepmoji.mit.edu does? What have I tried: I tried splitting the input sentences into bi-grams and using a 1D CNN to train on them. Later, when we want to find the important words of X_test, we just split X_test into bigrams and find their probabilities. It works, but it is not accurate. I tried using prebuilt Hierarchical Attention Networks and succeeded. I got what I wanted, but I couldn't figure out every line and concept from the code. It's like a black box to me. I know how a neural net works, and I can code one using numpy, with manual backpropagation, from scratch. I have detailed knowledge of how an LSTM works and what the forget, update, and output gates actually output. But I still couldn't figure out how to extract the attention weights and how to arrange the data as a 3D array (what is the timestep in our 2D data?). So, any type of guidance is welcome.
Here is a version with Attention (not Hierarchical), but you should be able to figure out how to make it work with hierarchy too; if not, I can help out as well. The trick is to define 2 models and use 1 for the training (model) and the other one to extract attention values (model_with_attention_output):

# Tensorflow 1.9; Keras 2.2.0 (latest versions)
# should be backwards compatible upto Keras 2.0.9 and tf 1.5
from keras.models import Model
from keras.layers import *
from keras import backend as K  # needed for the Lambda layer below
import numpy as np

dictionary_size = 1000

def create_models():
    # Get a sequence of indexes of words as input:
    # Keras supports dynamic input lengths if you provide (None,) as the
    # input shape
    inp = Input((None,))

    # Embed words into vectors of size 10 each:
    # Output shape is (None, 10)
    embs = Embedding(dictionary_size, 10)(inp)

    # Run LSTM on these vectors and return output on each timestep
    # Output shape is (None, 5)
    lstm = LSTM(5, return_sequences=True)(embs)

    ## Attention Block
    # Transform each timestep into 1 value (attention_value)
    # Output shape is (None, 1)
    attention = TimeDistributed(Dense(1))(lstm)

    # By running softmax on axis 1 we force attention_values
    # to sum up to 1. We are effectively assigning a "weight" to each timestep
    # Output shape is still (None, 1) but each value changes
    attention_vals = Softmax(axis=1)(attention)

    # Multiply the encoded timestep by the respective weight
    # I.e. we are scaling each timestep based on its weight
    # Output shape is (None, 5): (None, 5) * (None, 1) = (None, 5)
    scaled_vecs = Multiply()([lstm, attention_vals])

    # Sum up all scaled timesteps into 1 vector,
    # i.e. obtain a weighted sum of timesteps
    # Output shape is (5,): observe the time dimension got collapsed
    context_vector = Lambda(lambda x: K.sum(x, axis=1))(scaled_vecs)
    ## Attention Block over

    # Get the output out
    out = Dense(1, activation='sigmoid')(context_vector)

    model = Model(inp, out)
    model_with_attention_output = Model(inp, [out, attention_vals])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model, model_with_attention_output

model, model_with_attention_output = create_models()
model.fit(np.array([[1, 2, 3]]), [1], batch_size=1)
print('Attention over each word: ',
      model_with_attention_output.predict(np.array([[1, 2, 3]]), batch_size=1)[1])

The output will be a numpy array with the attention value of each word; the higher the value, the more important the word was.

EDIT: You might want to replace lstm in the multiplication with embs to get better interpretations, but it will lead to worse performance...
https://stackoverflow.com/questions/51477977/
Initialising weights and bias with PyTorch - how to correct dimensions?
Using this model, I'm attempting to initialise my network with my predefined weights and bias:

dimensions_input = 10
hidden_layer_nodes = 5
output_dimension = 10

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(dimensions_input, hidden_layer_nodes)
        self.linear2 = torch.nn.Linear(hidden_layer_nodes, output_dimension)

        self.linear.weight = torch.nn.Parameter(torch.zeros(dimensions_input, hidden_layer_nodes))
        self.linear.bias = torch.nn.Parameter(torch.ones(hidden_layer_nodes))
        self.linear2.weight = torch.nn.Parameter(torch.zeros(dimensions_input, hidden_layer_nodes))
        self.linear2.bias = torch.nn.Parameter(torch.ones(hidden_layer_nodes))

    def forward(self, x):
        l_out1 = self.linear(x)
        y_pred = self.linear2(l_out1)
        return y_pred

model = Model()

criterion = torch.nn.MSELoss(size_average=False)
optim = torch.optim.SGD(model.parameters(), lr=0.00001)

def train_model():
    y_data = x_data.clone()
    for i in range(10000):
        y_pred = model(x_data)
        loss = criterion(y_pred, y_data)
        if i % 5000 == 0:
            print(loss)
        optim.zero_grad()
        loss.backward()
        optim.step()

RuntimeError: The expanded size of the tensor (10) must match the existing size (5) at non-singleton dimension 1

My dimensions appear correct, as they match the corresponding linear layers?
The code provided doesn't run, due to the fact that x_data isn't defined, so I can't be sure this is the issue, but one thing that strikes me is that you should replace

self.linear2.weight = torch.nn.Parameter(torch.zeros(dimensions_input, hidden_layer_nodes))
self.linear2.bias = torch.nn.Parameter(torch.ones(hidden_layer_nodes))

with

self.linear2.weight = torch.nn.Parameter(torch.zeros(output_dimension, hidden_layer_nodes))
self.linear2.bias = torch.nn.Parameter(torch.ones(output_dimension))

since nn.Linear stores its weight with shape (out_features, in_features). For the same reason, self.linear's weight should have shape (hidden_layer_nodes, dimensions_input).
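A quick way to double-check the expected shapes in an interpreter:

import torch

layer = torch.nn.Linear(5, 10)  # in_features=5, out_features=10
print(layer.weight.shape)       # torch.Size([10, 5]), i.e. (out_features, in_features)
print(layer.bias.shape)         # torch.Size([10])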
https://stackoverflow.com/questions/51484793/
Pytorch Torch.save FileNotFoundError
When I try to call torch.save to save my model to a "tmp_file", it raises a FileNotFoundError. The traceback is as follows:

Traceback (most recent call last):
  File "C:/Users/Haoran/Documents/GitHub/dose-response/python/simulations/hdr.py", line 234, in <module>
    test_hdr_continuous()
  File "C:/Users/Haoran/Documents/GitHub/dose-response/python/simulations/hdr.py", line 195, in test_hdr_continuous
    model = fit_mdn(X[:split], y[:split], nepochs=20)
  File "C:\Users\Haoran\Documents\GitHub\dose-response\python\simulations\continuous.py", line 192, in fit_mdn
    torch.save(model, tmp_file)
  File "C:\Users\Haoran\Documents\GitHub\dose-response\python\venv\lib\site-packages\torch\serialization.py", line 161, in save
    return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
  File "C:\Users\Haoran\Documents\GitHub\dose-response\python\venv\lib\site-packages\torch\serialization.py", line 116, in _with_file_like
    f = open(f, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp_file_4358f298-a1d9-4c81-9e44-db4d8f1b4319'

It is weird that everything works perfectly on my Mac, but I get this error on my Windows desktop.
As shmee observed, you are trying to write to /tmp/[...] on a Windows machine. Therefore you get FileNotFoundError. To make your code OS agnostic, you may find python's tempfile package useful, especially NamedTemporaryFile: this function creates a temporary file and returns its name, so you can access/use it in your program.
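An OS-agnostic sketch using tempfile (model stands for your trained model; delete=False keeps the file around after the handle closes so it can be loaded again later):

import tempfile
import torch

with tempfile.NamedTemporaryFile(suffix='.pt', delete=False) as f:
    torch.save(model.state_dict(), f)   # torch.save also accepts file-like objects
    print('Saved checkpoint to', f.name)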
https://stackoverflow.com/questions/51490965/
How to extract fc7 features from AlexNet in pytorch as numpy array?
I want to extract the 4096-dimensional feature vector from the fc7 layer of my finetuned AlexNet. My goal is to use this layer for clustering later on. This is how I extract it:

alexnet = models.alexnet(pretrained=True)
fc7 = alexnet.classifier[6]

However, when I print it, fc7 is a Linear object:

Linear(in_features=4096, out_features=1000, bias=True)

What I am looking for is how to turn this layer's 4096-dimensional input into a numpy array so that I can do further manipulations on it. What I am thinking of is to call its forward(self, input) method, but I am not sure which input to provide. Should I provide the input image, or should I provide the output of the fc6 layer? And I want the 4096-dim input array and to get rid of the 1000-dim output array (presumably, since I don't think it will help me with clustering).
This could be done by creating a new model with all the same layers (and associated parameters) as alexnet, except for the last layer.

new_model = models.alexnet(pretrained=True)
new_classifier = nn.Sequential(*list(new_model.classifier.children())[:-1])
new_model.classifier = new_classifier

You should now be able to provide the input image to new_model and extract a 4096-dimensional feature vector. If you do need a particular layer as a numpy array for some reason, you could do the following: fc7.weight.data.numpy() (on PyTorch 0.4.0).
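Putting it together, a sketch of extracting the feature vector for one image ('image.jpg' is a placeholder path; the normalization constants are the standard ImageNet values):

import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

new_model = models.alexnet(pretrained=True)
new_model.classifier = nn.Sequential(*list(new_model.classifier.children())[:-1])
new_model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open('image.jpg')).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    features = new_model(img)                           # shape (1, 4096)
feature_vector = features.squeeze(0).numpy()            # 4096-dim numpy array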
https://stackoverflow.com/questions/51501828/
Calculate the accuracy every epoch in PyTorch
I am working on a neural network problem, to classify data as 1 or 0. I am using binary cross entropy loss to do this. The loss is fine; however, the accuracy is very low and isn't improving. I am assuming I made a mistake in the accuracy calculation. After every epoch, I am calculating the correct predictions after thresholding the output, and dividing that number by the total size of the dataset. Is there anything wrong with my accuracy calculation? And why isn't it improving, but instead getting worse? This is my code:

net = Model()
criterion = torch.nn.BCELoss(size_average=True)
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

num_epochs = 100
for epoch in range(num_epochs):
    for i, (inputs, labels) in enumerate(train_loader):
        inputs = Variable(inputs.float())
        labels = Variable(labels.float())
        output = net(inputs)
        optimizer.zero_grad()
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()
    # Accuracy
    output = (output > 0.5).float()
    correct = (output == labels).float().sum()
    print("Epoch {}/{}, Loss: {:.3f}, Accuracy: {:.3f}".format(epoch + 1, num_epochs, loss.data[0], correct / x.shape[0]))

And this is the strange output I get:

Epoch 1/100, Loss: 0.389, Accuracy: 0.035
Epoch 2/100, Loss: 0.370, Accuracy: 0.036
Epoch 3/100, Loss: 0.514, Accuracy: 0.030
Epoch 4/100, Loss: 0.539, Accuracy: 0.030
Epoch 5/100, Loss: 0.583, Accuracy: 0.029
Epoch 6/100, Loss: 0.439, Accuracy: 0.031
Epoch 7/100, Loss: 0.429, Accuracy: 0.034
Epoch 8/100, Loss: 0.408, Accuracy: 0.035
Epoch 9/100, Loss: 0.316, Accuracy: 0.035
Epoch 10/100, Loss: 0.436, Accuracy: 0.035
Epoch 11/100, Loss: 0.365, Accuracy: 0.034
Epoch 12/100, Loss: 0.485, Accuracy: 0.031
Epoch 13/100, Loss: 0.392, Accuracy: 0.033
Epoch 14/100, Loss: 0.494, Accuracy: 0.030
Epoch 15/100, Loss: 0.369, Accuracy: 0.035
Epoch 16/100, Loss: 0.495, Accuracy: 0.029
Epoch 17/100, Loss: 0.415, Accuracy: 0.034
Epoch 18/100, Loss: 0.410, Accuracy: 0.035
Epoch 19/100, Loss: 0.282, Accuracy: 0.038
Epoch 20/100, Loss: 0.499, Accuracy: 0.031
Epoch 21/100, Loss: 0.446, Accuracy: 0.030
Epoch 22/100, Loss: 0.585, Accuracy: 0.026
Epoch 23/100, Loss: 0.419, Accuracy: 0.035
Epoch 24/100, Loss: 0.492, Accuracy: 0.031
Epoch 25/100, Loss: 0.537, Accuracy: 0.031
Epoch 26/100, Loss: 0.439, Accuracy: 0.033
Epoch 27/100, Loss: 0.421, Accuracy: 0.035
Epoch 28/100, Loss: 0.532, Accuracy: 0.034
Epoch 29/100, Loss: 0.234, Accuracy: 0.038
Epoch 30/100, Loss: 0.492, Accuracy: 0.027
Epoch 31/100, Loss: 0.407, Accuracy: 0.035
Epoch 32/100, Loss: 0.305, Accuracy: 0.038
Epoch 33/100, Loss: 0.663, Accuracy: 0.025
Epoch 34/100, Loss: 0.588, Accuracy: 0.031
Epoch 35/100, Loss: 0.329, Accuracy: 0.035
Epoch 36/100, Loss: 0.474, Accuracy: 0.033
Epoch 37/100, Loss: 0.535, Accuracy: 0.031
Epoch 38/100, Loss: 0.406, Accuracy: 0.033
Epoch 39/100, Loss: 0.513, Accuracy: 0.030
Epoch 40/100, Loss: 0.593, Accuracy: 0.030
Epoch 41/100, Loss: 0.265, Accuracy: 0.036
Epoch 42/100, Loss: 0.576, Accuracy: 0.031
Epoch 43/100, Loss: 0.565, Accuracy: 0.027
Epoch 44/100, Loss: 0.576, Accuracy: 0.030
Epoch 45/100, Loss: 0.396, Accuracy: 0.035
Epoch 46/100, Loss: 0.423, Accuracy: 0.034
Epoch 47/100, Loss: 0.489, Accuracy: 0.033
Epoch 48/100, Loss: 0.591, Accuracy: 0.029
Epoch 49/100, Loss: 0.415, Accuracy: 0.034
Epoch 50/100, Loss: 0.291, Accuracy: 0.039
Epoch 51/100, Loss: 0.395, Accuracy: 0.033
Epoch 52/100, Loss: 0.540, Accuracy: 0.026
Epoch 53/100, Loss: 0.436, Accuracy: 0.033
Epoch 54/100, Loss: 0.346, Accuracy: 0.036
Epoch 55/100, Loss: 0.519, Accuracy: 0.029
Epoch 56/100, Loss: 0.456, Accuracy: 0.031
Epoch 57/100, Loss: 0.425, Accuracy: 0.035
Epoch 58/100, Loss: 0.311, Accuracy: 0.039
Epoch 59/100, Loss: 0.406, Accuracy: 0.034
Epoch 60/100, Loss: 0.360, Accuracy: 0.035
Epoch 61/100, Loss: 0.476, Accuracy: 0.030
Epoch 62/100, Loss: 0.404, Accuracy: 0.034
Epoch 63/100, Loss: 0.382, Accuracy: 0.036
Epoch 64/100, Loss: 0.538, Accuracy: 0.031
Epoch 65/100, Loss: 0.392, Accuracy: 0.034
Epoch 66/100, Loss: 0.434, Accuracy: 0.033
Epoch 67/100, Loss: 0.479, Accuracy: 0.031
Epoch 68/100, Loss: 0.494, Accuracy: 0.031
Epoch 69/100, Loss: 0.415, Accuracy: 0.034
Epoch 70/100, Loss: 0.390, Accuracy: 0.036
Epoch 71/100, Loss: 0.330, Accuracy: 0.038
Epoch 72/100, Loss: 0.449, Accuracy: 0.030
Epoch 73/100, Loss: 0.315, Accuracy: 0.039
Epoch 74/100, Loss: 0.450, Accuracy: 0.031
Epoch 75/100, Loss: 0.562, Accuracy: 0.030
Epoch 76/100, Loss: 0.447, Accuracy: 0.031
Epoch 77/100, Loss: 0.408, Accuracy: 0.038
Epoch 78/100, Loss: 0.359, Accuracy: 0.034
Epoch 79/100, Loss: 0.372, Accuracy: 0.035
Epoch 80/100, Loss: 0.452, Accuracy: 0.034
Epoch 81/100, Loss: 0.360, Accuracy: 0.035
Epoch 82/100, Loss: 0.453, Accuracy: 0.031
Epoch 83/100, Loss: 0.578, Accuracy: 0.030
Epoch 84/100, Loss: 0.537, Accuracy: 0.030
Epoch 85/100, Loss: 0.483, Accuracy: 0.035
Epoch 86/100, Loss: 0.343, Accuracy: 0.036
Epoch 87/100, Loss: 0.439, Accuracy: 0.034
Epoch 88/100, Loss: 0.686, Accuracy: 0.023
Epoch 89/100, Loss: 0.265, Accuracy: 0.039
Epoch 90/100, Loss: 0.369, Accuracy: 0.035
Epoch 91/100, Loss: 0.521, Accuracy: 0.027
Epoch 92/100, Loss: 0.662, Accuracy: 0.027
Epoch 93/100, Loss: 0.581, Accuracy: 0.029
Epoch 94/100, Loss: 0.322, Accuracy: 0.034
Epoch 95/100, Loss: 0.375, Accuracy: 0.035
Epoch 96/100, Loss: 0.575, Accuracy: 0.031
Epoch 97/100, Loss: 0.489, Accuracy: 0.030
Epoch 98/100, Loss: 0.435, Accuracy: 0.033
Epoch 99/100, Loss: 0.440, Accuracy: 0.031
Epoch 100/100, Loss: 0.444, Accuracy: 0.033
Is x the entire input dataset? If so, you might be dividing by the size of the entire input dataset in correct/x.shape[0] (as opposed to the size of the mini-batch). Try changing this to correct/output.shape[0]
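Also note that the snippet only measures accuracy on the last mini-batch of each epoch. A sketch of accumulating it over the whole epoch instead (net and train_loader are the objects from the question):

correct = 0
total = 0
for inputs, labels in train_loader:
    output = net(Variable(inputs.float()))
    predicted = (output > 0.5).float()
    correct += (predicted == labels.float()).sum().item()
    total += labels.size(0)
print("Accuracy: {:.3f}".format(correct / total))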
https://stackoverflow.com/questions/51503851/
Accessing reduced dimensionality of trained autoencoder
Here is an autoencoder trained on MNIST using PyTorch:

import torch
import torchvision
import torch.nn as nn
from torch.autograd import Variable

cuda = torch.cuda.is_available()  # True if cuda is available, False otherwise
FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
print('Training on %s' % ('GPU' if cuda else 'CPU'))

# Loading the MNIST data set
transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor(),
                                            torchvision.transforms.Normalize((0.1307,), (0.3081,))])
mnist = torchvision.datasets.MNIST(root='../data/', train=True, transform=transform, download=True)

# Loader to feed the data batch by batch during training.
batch = 100
data_loader = torch.utils.data.DataLoader(mnist, batch_size=batch, shuffle=True)

autoencoder = nn.Sequential(
    # Encoder
    nn.Linear(28 * 28, 512),
    nn.PReLU(512),
    nn.BatchNorm1d(512),

    # Low-dimensional representation
    nn.Linear(512, 128),
    nn.PReLU(128),
    nn.BatchNorm1d(128),

    # Decoder
    nn.Linear(128, 512),
    nn.PReLU(512),
    nn.BatchNorm1d(512),
    nn.Linear(512, 28 * 28))

autoencoder = autoencoder.type(FloatTensor)

optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.005)
epochs = 10
data_size = int(mnist.train_labels.size()[0])

for i in range(epochs):
    for j, (images, _) in enumerate(data_loader):
        images = images.view(images.size(0), -1)  # from (batch, 1, 28, 28) to (batch, 28*28)
        images = Variable(images).type(FloatTensor)

        autoencoder.zero_grad()
        reconstructions = autoencoder(images)
        loss = torch.dist(images, reconstructions)
        loss.backward()
        optimizer.step()
    print('Epoch %i/%i loss %.2f' % (i + 1, epochs, loss.data[0]))

print('Optimization finished.')

I'm attempting to compare the lower-dimensional representation of each image. Printing the dimensionality of each layer:

for l in autoencoder.parameters():
    print(l.shape)

displays:

torch.Size([512, 784])
torch.Size([512])
torch.Size([512])
torch.Size([512])
torch.Size([512])
torch.Size([128, 512])
torch.Size([128])
torch.Size([128])
torch.Size([128])
torch.Size([128])
torch.Size([512, 128])
torch.Size([512])
torch.Size([512])
torch.Size([512])
torch.Size([512])
torch.Size([784, 512])
torch.Size([784])

So it appears the reduced representation is not stored in the learned parameters? In other words, if I have 10000 images, each containing 100 pixels, executing the above autoencoder, which reduces the dimensionality to 10 pixels, should allow access to the 10-pixel representation of all 10000 images?
I'm not very familiar with pyTorch, but splitting the autoencoder into an encoder and a decoder model seems to work (I changed the size of the hidden layer from 512 to 64, and the dimension of the encoded image from 128 to 4, to make the example run faster):

import torch
import torchvision
import torch.nn as nn
from torch.autograd import Variable

cuda = torch.cuda.is_available()  # True if cuda is available, False otherwise
FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
print('Training on %s' % ('GPU' if cuda else 'CPU'))

# Loading the MNIST data set
transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor(),
                                            torchvision.transforms.Normalize((0.1307,), (0.3081,))])
mnist = torchvision.datasets.MNIST(root='../data/', train=True, transform=transform, download=True)

# Loader to feed the data batch by batch during training.
batch = 100
data_loader = torch.utils.data.DataLoader(mnist, batch_size=batch, shuffle=True)

encoder = nn.Sequential(
    # Encoder
    nn.Linear(28 * 28, 64),
    nn.PReLU(64),
    nn.BatchNorm1d(64),

    # Low-dimensional representation
    nn.Linear(64, 4),
    nn.PReLU(4),
    nn.BatchNorm1d(4))

decoder = nn.Sequential(
    # Decoder
    nn.Linear(4, 64),
    nn.PReLU(64),
    nn.BatchNorm1d(64),
    nn.Linear(64, 28 * 28))

autoencoder = nn.Sequential(encoder, decoder)

encoder = encoder.type(FloatTensor)
decoder = decoder.type(FloatTensor)
autoencoder = autoencoder.type(FloatTensor)

optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.005)
epochs = 10
data_size = int(mnist.train_labels.size()[0])

for i in range(epochs):
    for j, (images, _) in enumerate(data_loader):
        images = images.view(images.size(0), -1)  # from (batch, 1, 28, 28) to (batch, 28*28)
        images = Variable(images).type(FloatTensor)

        autoencoder.zero_grad()
        reconstructions = autoencoder(images)
        loss = torch.dist(images, reconstructions)
        loss.backward()
        optimizer.step()
    print('Epoch %i/%i loss %.2f' % (i + 1, epochs, loss.data[0]))

print('Optimization finished.')

# Get the encoded images here
encoded_images = []
for j, (images, _) in enumerate(data_loader):
    images = images.view(images.size(0), -1)
    images = Variable(images).type(FloatTensor)
    encoded_images.append(encoder(images))
https://stackoverflow.com/questions/51515819/
Custom Loss in Pytorch where object does not have attribute backward()
I am new to PyTorch and I tried creating my own custom loss. This has been really challenging. Below is what I have for my loss.

class CustomLoss(nn.Module):

    def __init__(self, size_average=True, reduce=True):
        """
        Args:
            size_average (bool, optional): By default, the losses are averaged
               over observations for each minibatch. However, if the field
               size_average is set to ``False``, the losses are instead summed
               for each minibatch. Only applies when reduce is ``True``. Default: ``True``
            reduce (bool, optional): By default, the losses are averaged
               over observations for each minibatch, or summed, depending on
               size_average. When reduce is ``False``, returns a loss per
               input/target element instead and ignores size_average. Default: ``True``
        """
        super(CustomLoss, self).__init__()

    def forward(self, S, N, M, type='softmax'):
        return self.loss_cal(S, N, M, type)

    ### new loss cal
    def loss_cal(self, S, N, M, type="softmax"):
        """ calculate loss with similarity matrix(S) eq.(6) (7)
        :type: "softmax" or "contrast"
        :return: loss
        """
        self.A = torch.cat([S[i * M:(i + 1) * M, i:(i + 1)] for i in range(N)], dim=0)
        self.A = torch.autograd.Variable(self.A)

        if type == "softmax":
            self.B = torch.log(torch.sum(torch.exp(S.float()), dim=1, keepdim=True) + 1e-8)
            self.B = torch.autograd.Variable(self.B)
            total = torch.abs(torch.sum(self.A - self.B))
        else:
            raise AssertionError("loss type should be softmax or contrast !")
        return total

When I run the following:

loss = CustomLoss()
loss.loss_cal(S=S, N=N, M=M)
loss.backward()

I get the following error:

AttributeError                            Traceback (most recent call last)
C:\Program Files\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py in run_cell_magic(self, magic_name, line, cell)
   2113             magic_arg_s = self.var_expand(line, stack_depth)
   2114             with self.builtin_trap:
-> 2115                 result = fn(magic_arg_s, cell)
   2116             return result
   2117

<decorator-gen-60> in time(self, line, cell, local_ns)

C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magic.py in <lambda>(f, *a, **k)
    186     # but it's overkill for just that one bit of state.
    187     def magic_deco(arg):
--> 188         call = lambda f, *a, **k: f(*a, **k)
    189
    190         if callable(arg):

C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magics\execution.py in time(self, line, cell, local_ns)
   1178         else:
   1179             st = clock2()
-> 1180             exec(code, glob, local_ns)
   1181             end = clock2()
   1182             out = None

<timed exec> in <module>()

C:\Program Files\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __getattr__(self, name)
    530                 return modules[name]
    531         raise AttributeError("'{}' object has no attribute '{}'".format(
--> 532             type(self).__name__, name))
    533
    534     def __setattr__(self, name, value):

AttributeError: 'CustomLoss' object has no attribute 'backward'

Why am I getting this error? I did not face this error with TF. My understanding is that it has to do with autograd? If someone can explain why I am facing this error, I can figure out the rest.
Hi! The problem is that you try to call the backward function on the module, not on the variable (as you probably want to). As you have not implemented a backward function on the module, the interpreter cannot find one. So what you want to do instead is:

loss_func = CustomLoss()
loss = loss_func.loss_cal(S=S, N=N, M=M)
loss.backward()

As a general remark: you are using an nn.Module without it actually having parameters. While that works, this is not what nn.Modules are there for, and it should therefore be avoided. Instead, simply make a pure function; after all, the function you have there is static anyway. If you really want to go for the class, think of the type of class you want to create: a loss. Losses, however, can have special pytorch properties. So you should read up on the discussion here.
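Following that remark, a sketch of the same computation written as a plain function (S, N, and M as in the question; no module or Variable wrapping is needed, since there are no learnable parameters):

import torch

def custom_loss(S, N, M, loss_type='softmax'):
    A = torch.cat([S[i * M:(i + 1) * M, i:(i + 1)] for i in range(N)], dim=0)
    if loss_type == 'softmax':
        B = torch.log(torch.sum(torch.exp(S.float()), dim=1, keepdim=True) + 1e-8)
        return torch.abs(torch.sum(A - B))
    raise ValueError('loss type should be softmax or contrast!')

loss = custom_loss(S, N, M)
loss.backward()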
https://stackoverflow.com/questions/51521361/
Why is PyTorch called PyTorch?
I have been looking into deep learning frameworks lately and have been wondering about the origin of the name of PyTorch. With Keras, their home page nicely explains the name's origin, and with something like TensorFlow, the reasoning behind the name seems rather clear. For PyTorch, however, I cannot seem to come across why it is so named. Of course, I understand the "Py-" prefix and also know that PyTorch is a successor in some sense of Torch. But I am still wondering: what is the original idea behind the "-Torch" part? Is it known what the origin of the name is?
Here is a short answer, formed as another question: Torch, SMORCH??? PyTorch developed from Torch7. A precursor to the original Torch was a library called SVM-Torch, which was developed around 2001. The SVM stands for Support Vector Machines. SVM-Torch is a decomposition algorithm similar to SVM-Light, but adapted to regression problems, according to this paper. Also around this time, G. W. Flake described the sequential minimal optimization algorithm (SMO), which could be used to train SVMs on sparse data sets, and this was incorporated into NODElib. Interestingly, this was called the SMORCH algorithm. You can find out more about SMORCH in the NODElib docs. Optimization of the SVMs is: performed by a variation of John Platt's sequential minimal optimization (SMO) algorithm. This version of SMO is generalized for regression, uses kernel caching, and incorporates several heuristics; for these reasons, we refer to the optimization algorithm as SMORCH. So SMORCH = Sequential Minimal Optimization Regression Caching Heuristics. I can't answer definitively, but my thinking is that "Torch" is a riff on, or evolution of, "Light" from SVM-Light, combined with a large helping of SMORCHiness. You'd need to check in with the authors of SVM-Torch and SVM-Light to confirm that this is indeed what "sparked" the name. It is reasonable to assume that the "TO" of Torch stands for some other optimization, rather than SMO, such as Tensor Optimization, but I haven't found any direct reference... yet.
https://stackoverflow.com/questions/51530778/
Is "input" a keyword that causes errors when used as a parameter name (in PyTorch)?
So I have a line of code:

packed_embeddings = pack_padded_sequence(input=embeddings, lengths=lengths, batch_first=True)

That throws me this error:

File "/Users/kwj/anaconda3/lib/python3.6/site-packages/torch/onnx/__init__.py", line 130, in might_trace
    first_arg = args[0]
IndexError: tuple index out of range

But it magically fixes itself if I take out the "input":

packed_embeddings = pack_padded_sequence(embeddings, lengths=lengths, batch_first=True)

Here is the function specification in the PyTorch docs: https://pytorch.org/docs/stable/_modules/torch/nn/utils/rnn.html#pack_padded_sequence I'm using Python 3 and PyTorch 0.4. Am I missing something really basic? Not sure if this is my issue or a PyTorch-specific issue... pretty confused here. Thanks
What's happening here is that pack_padded_sequence is decorated to return a partially applied function, and within the decorating code there is a function that accepts arguments as *args, **kwargs. This function passes args on to another function, which inspects the first arg. When you pass all the arguments to pack_padded_sequence as keyword arguments, args is empty, and so args[0] raises an IndexError. If you pass input as a positional argument, args is not empty, and the IndexError is not raised. This example code demonstrates the behaviour (the PyTorch code is not easy to read):

def decorator(func):
    def wrapper(*args, **kwargs):
        print('Args:', repr(args))
        print('Kwargs:', repr(kwargs))
        return func(*args, **kwargs)
    return wrapper

@decorator
def f(a, b=0, c=0):
    return a, b, c

if __name__ == '__main__':
    print('Positional argument...')
    print(f(1, b=2, c=3))
    print('All keyword arguments...')
    print(f(a=1, b=2, c=3))

The code produces this output:

Positional argument...
Args: (1,)   <- Args is populated
Kwargs: {'b': 2, 'c': 3}
(1, 2, 3)
All keyword arguments...
Args: ()   <- Args is empty
Kwargs: {'a': 1, 'b': 2, 'c': 3}
(1, 2, 3)
https://stackoverflow.com/questions/51531007/
Which part of Pytorch tensor represents channels?
Surprisingly I have not found an answer to this question after looking around the internet. I am specifically interested in a 3d tensor. From doing my own experiments, I have found that when I create a tensor: h=torch.randn(5,12,5) And then put a convolutional layer on it defined as follows: conv=torch.nn.Conv1d(12,48,3,padding=1) The output is a (5,48,5) tensor. So, am I correct in assuming that for a 3d tensor in pytorch the middle number represents the number of channels? Edit: It seems that when running a conv2d, the input dimension is the first entry in the tensor, and I need to make it a 4d tensor (1,48,5,5) for example. Now I am very confused... Any help is much appreciated!
For a Conv2d, the input should be in (N, C, H, W) format. N is the number of samples/batch size, C is the number of channels, and H and W are the height and width respectively. See the shape documentation at https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d For a Conv1d, the input should be (N, C, L); see the documentation at https://pytorch.org/docs/stable/nn.html#conv1d So yes, for a 3-D tensor fed to Conv1d, the middle dimension is the channels.
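A quick check of both layouts, matching the sizes from the question:

import torch

conv1d = torch.nn.Conv1d(12, 48, 3, padding=1)
x1 = torch.randn(5, 12, 5)       # (N, C, L): 12 input channels in the middle
print(conv1d(x1).shape)          # torch.Size([5, 48, 5])

conv2d = torch.nn.Conv2d(48, 16, 3, padding=1)
x2 = torch.randn(1, 48, 5, 5)    # (N, C, H, W): batch dimension added in front
print(conv2d(x2).shape)          # torch.Size([1, 16, 5, 5])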
https://stackoverflow.com/questions/51541532/
Implementing a custom dataset with PyTorch
I'm attempting to modify this feedforward network taken from https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/01-basics/feedforward_neural_network/main.py to utilize my own dataset. I define a custom dataset of two 1-dim arrays as input and two scalars as the corresponding output:

x = torch.tensor([[5.5, 3,3,4] , [1 , 2,3,4], [9 , 2,3,4]])
print(x)
y = torch.tensor([1,2,3])
print(y)

import torch.utils.data as data_utils
my_train = data_utils.TensorDataset(x, y)
my_train_loader = data_utils.DataLoader(my_train, batch_size=50, shuffle=True)

I've updated the hyperparameters to match the new input_size (2) and num_classes (3). I've also changed images = images.reshape(-1, 28*28).to(device) to images = images.reshape(-1, 4).to(device). As the training set is minimal, I've changed the batch_size to 1. Upon making these modifications, I receive an error when attempting to train:

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
     51
     52         # Forward pass
---> 53         outputs = model(images)
     54         loss = criterion(outputs, labels)
     55

/home/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    489             result = self._slow_forward(*input, **kwargs)
    490         else:
--> 491             result = self.forward(*input, **kwargs)
    492         for hook in self._forward_hooks.values():
    493             hook_result = hook(self, input, result)

<ipython-input> in forward(self, x)
     31
     32     def forward(self, x):
---> 33         out = self.fc1(x)
     34         out = self.relu(out)
     35         out = self.fc2(out)

/home/.local/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    489             result = self._slow_forward(*input, **kwargs)
    490         else:
--> 491             result = self.forward(*input, **kwargs)
    492         for hook in self._forward_hooks.values():
    493             hook_result = hook(self, input, result)

/home/.local/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input)
     53
     54     def forward(self, input):
---> 55         return F.linear(input, self.weight, self.bias)
     56
     57     def extra_repr(self):

/home/.local/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias)
    990     if input.dim() == 2 and bias is not None:
    991         # fused op is marginally faster
--> 992         return torch.addmm(bias, input, weight.t())
    993
    994     output = input.matmul(weight.t())

RuntimeError: size mismatch, m1: [3 x 4], m2: [2 x 3] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:249

How can I amend the code to match the expected dimensionality? I'm unsure what to change, as I've already changed all the parameters that seemed to require updating.
Source prior to changes:

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
input_size = 784
hidden_size = 500
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)

test_dataset = torchvision.datasets.MNIST(root='../../data',
                                          train=False,
                                          transform=transforms.ToTensor())

# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

# Fully connected neural network with one hidden layer
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = NeuralNet(input_size, hidden_size, num_classes).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Move tensors to the configured device
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))

# Test the model
# In test phase, we don't need to compute gradients (for memory efficiency)
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))

# Save the model checkpoint
torch.save(model.state_dict(), 'model.ckpt')

Source post changes:

x = torch.tensor([[5.5, 3,3,4] , [1 , 2,3,4], [9 , 2,3,4]])
print(x)
y = torch.tensor([1,2,3])
print(y)

import torch.utils.data as data_utils
my_train = data_utils.TensorDataset(x, y)
my_train_loader = data_utils.DataLoader(my_train, batch_size=50, shuffle=True)
print(my_train)
print(my_train_loader)

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
input_size = 2
hidden_size = 3
num_classes = 3
num_epochs = 5
batch_size = 1
learning_rate = 0.001

# MNIST dataset
train_dataset = my_train

# Data loader
train_loader = my_train_loader

# Fully connected neural network with one hidden layer
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = NeuralNet(input_size, hidden_size, num_classes).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Move tensors to the configured device
        images = images.reshape(-1, 4).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))

# Test the model
# In test phase, we don't need to compute gradients (for memory efficiency)
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, 4).to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))

# Save the model checkpoint
torch.save(model.state_dict(), 'model.ckpt')
You need to change input_size to 4 (2*2), and not 2 as your modified code currently shows. If you compare it to the original MNIST example, you'll see that input_size is set to 784 (28*28) and not just to 28.
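To see why, note that each sample in x has 4 features (2*2), so the first Linear layer must accept 4 inputs. A quick standalone check:

import torch
import torch.nn as nn

fc1 = nn.Linear(4, 3)    # input_size=4 matches the 4 features per sample
x = torch.randn(3, 4)    # a batch of 3 samples, as produced by reshape(-1, 4)
print(fc1(x).shape)      # torch.Size([3, 3]): no more size mismatch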
https://stackoverflow.com/questions/51545026/
How to make a class in pytorch use GPU
So I am running some code and getting the following error in PyTorch: "RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same" From what I understand, this means that my model may not be pushed to the GPU, while the input data already is on the GPU. I can share my code if that would help (I am refraining from doing so right now since it is longer than a small code snippet). I am aware that I can do something like

myModel = Model()
myModel.cuda()

However, I am making a class as part of a custom module that will be added to a Sequential wrapper. So, I can't really make an object out of it (I'm not good with OOP terminology, so I apologize for any technical writing mistakes). I was wondering if there is a way to get around this issue and make the class always use the GPU, even though I never explicitly define an object? If this was not clear enough, I can post my code, but as previously warned it may take some time to go through (not too long, but not very convenient either). Any help is much appreciated.

Edit: Here is the code. I presume the issue is in the RLSTM class, since there was no error before I added it.

class VGG(nn.Module):
    '''
    VGG model
    '''
    def __init__(self, features):  # features represents the layers array
        super(VGG, self).__init__()
        self.features = features
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(512, 512),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(512, 512),
            nn.ReLU(True),
            nn.Linear(512, 10),
        )
        # Initialize weights
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
                m.bias.data.zero_()

    def forward(self, x):  # x is the image, we run x through the layers
        print(x.size())
        x = self.features(x)  # runs through all features, where each feature is a function
        x = x.view(x.size(0), -1)
        # after running through features, does sequential steps to finally classify
        x = self.classifier(x)
        # print(x)
        return x

def make_layers(cfg, batch_norm=False):
    # print("Making layers!")
    layers = []
    in_channels = 3
    for v in cfg:
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
    layers += [RLSTM()]
    return nn.Sequential(*layers)

class RLSTM(nn.Module):
    def __init__(self):
        super(RLSTM, self).__init__()

    def forward(self, image):
        print("going in rowlstm")
        global current
        global _layer
        global isgates
        size = image.size()
        b = size[0]
        indvs = list(image.split(1, 0))  # split up the batch into individual images
        # print(indvs[0].size())
        tensor_array = []
        for i in range(b):
            current = 0
            _layer = []
            isgates = []
            tensor_array.append(self.RowLSTM(indvs[i]))
        seq = tuple(tensor_array)
        trans = torch.cat(seq, 0)
        return trans.cuda()  # trying to make floattensor error go away

    def RowLSTM(self, image):
        global current
        global _layer
        global isgates

        # input-to-state (K_is * x_i): 3x1 convolution. generate 4h x n x n tensor.
        # 4hxnxn tensor contains all i -> s info

        # the input to state convolution should only be computed one time
        if current == 0:
            n = image.size()[2]
            ch = image.size()[1]
            input_to_state = torch.nn.Conv2d(ch, 4 * ch, kernel_size=(1, 3), padding=(0, 1))
            isgates = self.splitIS(input_to_state(image))  # convolve, then split into gates (4 per row)

            cell = RowLSTMCell(0, torch.randn(ch, n, 1), torch.randn(ch, n, 1), torch.randn(ch, n, 1),
                               torch.randn(ch, n, 1), torch.randn(ch, n, 1), torch.randn(ch, n, 1))
            # now have dummy, learnable variables for first row
            _layer.append(cell)
        else:
            Cell_prev = _layer[current - 1]  # access previous row
            hidPrev = Cell_prev.getHiddenState()
            ch = image.size()[1]
            # print("about to apply conv1d")
            state_to_state = torch.nn.Conv2d(ch, 4 * ch, kernel_size=(1, 3), padding=(0, 1))
            # error is here: hidPrev is an array - not a valid number of input channels
            # print("applied conv1d")
            prevHid = Cell_prev.getHiddenState()
            ssgates = self.splitSS(state_to_state(prevHid.unsqueeze(0)))
            # need to unsqueeze (Ex: currently 16x5, need to make 1x16x5)
            gates = self.addGates(isgates, ssgates, current)
            # split gates
            ig, og, fg, gg = gates[0], gates[1], gates[2], gates[3]  # into four, ADD SIGMOID!
            cell = RowLSTMCell(Cell_prev, ig, og, fg, gg, 0, 0)
            cell.compute()
            _layer.append(cell)

        # attempting to eliminate requirement of getting size
        # print(current)
        try:
            current += 1
            y = (isgates[0][0][1][current])
            return self.RowLSTM(image)
        except Exception as error:
            concats = []
            for cell in _layer:
                tensor = torch.unsqueeze(cell.h, 0)
                concats.append(tensor)
            seq = tuple(concats)
            tensor = torch.cat(seq, 3)
            return tensor

    def splitIS(tensor):
        # always going to be splitting into 4 pieces, so no need to add extra parameters
        inputStateGates = {}
        size = tensor.size()  # 1 x 4h x n x n
        out_ft = size[1]      # get 4h for the nxnx4h tensor
        num = size[2]         # get n for the nxn image
        hh = out_ft / 4       # we want to split the tensor into 4, for the gates
        tensor = torch.squeeze(tensor)  # 4h x n x n

        # First, split by row: creates n tensors of 4h x n x 1
        rows = list(tensor.split(1, 2))

        for i in range(num):
            # Each row is a tensor of 4h x n x 1, split it into 4 of h x n x 1
            row = rows[i]
            inputStateGates[i] = list(row.split(hh, 0))

        return inputStateGates

    def splitSS(tensor):
        # 1 x 4h x n x 1, create 4 of 1 x h x n x 1
        size = tensor.size()
        out_ft = size[1]  # get 4h for the 1x4hxn tensor
        num = size[2]     # get n for the 1xhxn row
        hh = out_ft / 4   # we want to split the tensor into 4, for the gates
        tensor = tensor.squeeze(0)  # 4h x n x 1
        splitted = list(tensor.split(hh, 0))
        return splitted

    def addGates(i2s, s2s, key):
        """ these dictionaries are of form {key : [[i], [o], [f], [g]]}
        we want to add pairwise elements """
        # i2s is of form key: [[i], [o], [f], [g]] where each gate is hxn
        # s2s is of form [[h,n],[h,n],[h,n],[h,n]]
        gateSum = []
        for i in range(4):  # always of length 4, representing the gates
            gateSum.append(torch.sigmoid(i2s[key][i] + s2s[i]))
        return gateSum
You have to define the child modules inside the __init__ function so that they can be registered as parameters of the module. If they are not registered, .cuda() will not be called on them when you call .cuda() on the parent. If you really need dynamic parameter/module declaration, take a look here. The key is apaszke's answer.
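For illustration, a minimal sketch of the idea applied to the RLSTM class from the question (assuming the channel count ch is passed in at construction time; the forward body is trimmed to the relevant call):

```python
import torch.nn as nn

class RLSTM(nn.Module):
    def __init__(self, ch):
        super(RLSTM, self).__init__()
        # declared in __init__, so both convolutions are registered as
        # submodules and .cuda() on the parent moves their weights too
        self.input_to_state = nn.Conv2d(ch, 4 * ch, kernel_size=(1, 3), padding=(0, 1))
        self.state_to_state = nn.Conv2d(ch, 4 * ch, kernel_size=(1, 3), padding=(0, 1))

    def forward(self, image):
        return self.input_to_state(image)  # use the registered module here
```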
https://stackoverflow.com/questions/51549461/
How to progressively grow a neural network in pytorch?
I am trying to make a progressive autoencoder and I have thought of a couple of ways of growing my network during training. However, I am always stuck on this one part where I don't know if changing the input (encoder) and output (decoder) channel would affect my network. See the example below. X = torch.randn( 8, 1, 4, 4,) # A batch of 8 grayscale images of 4x4 pixels in size Encoder = nn.Sequential( Conv2D( 1, 16, 3, 1, 1 ), nn.ReLU() ) # starting setup 16 3x3 kernels If I print the above weights from the network I would get a size of [ 1, 16, 3, 3 ], 16 kernels each of size 3x3. If I want to grow the network I would need to save those weights, because hopefully they are already well trained on those 4x4 image inputs. X = torch.randn( 8, 1, 8, 8) # increase the image size from 4x4 to 8x8 ... new_model = nn.Sequential() then do... # copy the previous layer and its weights from the original encoder # BTW my issue starts here. # Add/grow the new_model with new layers concatenated with the old layer, also modify the input channels so they can link correctly # Final result would be something like below. new_model = nn.Sequential( Conv2D( **1**, 8, 3, 1, 1 ), nn.ReLU(), Conv2D( **8**, 16, 3, 1, 1 ), nn.ReLU() ) Encoder = new_model # Repeat process Everything looks good, however because I change the input channel the size of the weights changed as well, and this is the issue that I have been stuck on for a while now. You can simply check this by running: foo_1 = nn.Conv2d(1, 1, 3, 1, 1) # You can think of this as the starting Conv2D from the starting encoder foo_2 = nn.Conv2d(3, 1, 3, 1, 1) # You can think of this as the modified starting Conv2D with an outer layer outputting 3 channels connecting to it print(foo_1.weight.size()) # torch.Size([1, 1, 3, 3]) print(foo_2.weight.size()) # torch.Size([1, 3, 3, 3]) Initially, I thought foo_1 and foo_2 would both have the same weight size, as in both would only use one 3x3 kernel, but that doesn't seem to be the case. I hope you can see my dilemma now: after x amount of epochs I need to grow another convolution, and I have to mess with the input size to make the new layers chain properly, but if I change the input size the shape of the weight is different and I don't know how pasting the old state would work. I have been looking at ProGAN implementations in PyTorch and IMO they are not easy to read. How can I build more intuition on how to properly progressively grow a network?
By progressive autoencoder I assume you are referring to something like Pioneer Networks: Progressively Growing Generative Autoencoder, which builds on Progressive Growing of GANs for Improved Quality, Stability, and Variation. First of all, don't use nn.Sequential. It is great for modeling simple and direct network structures, which is definitely not the case here. You should use plain nn.Conv2d modules and F.relu calls instead of building an nn.Sequential object. Second, this isn't really about implementation but theory. You cannot magically convert a convolution layer from accepting 1 channel to 8 channels. There are a number of ways to expand your convolution filter, like appending random weights, but I think that is not what you wanted. From the second paper (it is a GAN, but the idea is the same), the network does not expand any filters. Instead, the filters maintain their shape throughout the entire training process. Meaning that you would have a Conv2D(8, 16, 3, 1, 1) from the very beginning (assuming you only have those two layers). An obvious problem pops up -- your grayscale image is a 1-channel input, but your convolution requires an 8-channel input in the first stage of training. In the second paper, it uses an extra 1x1 convolution layer to map RGB <-> feature maps. In your case, that would be Conv2D(1, 8, 1), which maps the 1-channel input to an 8-channel output. This can be thrown away after you are done with the first stage. There are other techniques, like gradually fading in the new layers using a weight term, as stated in the paper. I suggest you read them, especially the second one.
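A hypothetical sketch of that adapter idea (the names from_gray and conv1 are made up here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

from_gray = nn.Conv2d(1, 8, kernel_size=1)  # throwaway 1x1 adapter, stage 1 only
conv1 = nn.Conv2d(8, 16, 3, 1, 1)           # keeps this shape for all of training

x = torch.randn(8, 1, 4, 4)                 # batch of 8 grayscale 4x4 images
h = F.relu(conv1(from_gray(x)))             # stage-1 forward pass
print(h.shape)                              # torch.Size([8, 16, 4, 4])
```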
https://stackoverflow.com/questions/51549878/
Issues with using cuda and float tensor
I have some code, and when I run it, I get the following error: Expected object of type torch.cuda.FloatTensor but found type torch.FloatTensor for argument #2 'other' From this error message, I assume there a problem with pushing my models to the GPU. However, I am not sure precisely where the problem lies. I will place the code wherein I think the problem may lie at the end of this question. Could someone please describe what the error exactly means and how to fix it? Any help is much appreciated. class VGG(nn.Module): ''' VGG model ''' def __init__(self, features): # features represents the layers array super(VGG, self).__init__() self.features = features self.classifier = nn.Sequential( nn.Dropout(), nn.Linear(512,512), nn.ReLU(True), nn.Dropout(), nn.Linear(512, 512), nn.ReLU(True), nn.Linear(512, 10), ) # Initialize weights for m in self.modules(): if isinstance(m, nn.Conv2d): n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels m.weight.data.normal_(0, math.sqrt(2. / n)) m.bias.data.zero_() def forward(self, x): # x is the image, we run x through the layers print("Running through features") x = self.features(x) # runs through all features, where each feature is a function print("Finsihed features, going to classifier") x = x.view(x.size(0), -1) # after running through features, does sequential steps to finally classify x = self.classifier(x) return x def make_layers(cfg, batch_norm=False): # print("Making layers!") layers = [] in_channels = 3 for v in cfg: if v == 'M': layers += [nn.MaxPool2d(kernel_size=2, stride=2)] else: conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1) if batch_norm: layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)] else: layers += [conv2d, nn.ReLU(inplace=True)] in_channels = v rlstm =RLSTM(v) rlstm.input_to_state = torch.nn.DataParallel(rlstm.input_to_state) rlstm.state_to_state = torch.nn.DataParallel(rlstm.state_to_state) rlstm=rlstm.cuda() layers+=[rlstm] return nn.Sequential(*layers) class RLSTM(nn.Module): def __init__(self,ch): # torch.set_default_tensor_type('torch.cuda.FloatTensor') super(RLSTM,self).__init__() self.ch=ch self.input_to_state = torch.nn.Conv2d(self.ch,4*self.ch,kernel_size=(1,3),padding=(0,1)) self.state_to_state = torch.nn.Conv2d(self.ch,4*self.ch,kernel_size=(1,3),padding=(0,1)) # error is here: hidPrev is an array - not a valid number of input channel # self.input_to_state = self.input_to_state.cuda() #self.state_to_state = self.state_to_state.cuda() def forward(self, image): # print("going in row forward") global current global _layer global isgates size = image.size() print("size: "+str(size)) b = size[0] indvs = list(image.split(1,0)) # split up the batch into individual images #print(indvs[0].size()) tensor_array = [] for i in range(b): current = 0 _layer = [] isgates = [] print(len(tensor_array)) tensor_array.append(self.RowLSTM(indvs[i])) seq=tuple(tensor_array) trans = torch.cat(seq,0) print(trans.size()) return trans.cuda() # trying to make floattensor error go away def RowLSTM(self, image): # print("going in rowlstm") global current global _layer global isgates # input-to-state (K_is * x_i) : 3x1 convolution. generate 4h x n x n tensor. 
4hxnxn tensor contains all i -> s info # the input to state convolution should only be computed one time if current==0: n = image.size()[2] ch=image.size()[1] # input_to_state = torch.nn.Conv2d(ch,4*ch,kernel_size=(1,3),padding=(0,1)) # print("about to do convolution") isgates = self.splitIS(self.input_to_state(image)) # convolve, then split into gates (4 per row) cell=RowLSTMCell(0,torch.randn(ch,n,1),torch.randn(ch,n,1),torch.randn(ch,n,1),torch.randn(ch,n,1),torch.randn(ch,n,1),torch.randn(ch,n,1)) # now have dummy, learnable variables for first row _layer.append(cell) print("layeres: "+str(len(_layer))) else: Cell_prev = _layer[current-1] # access previous row hidPrev = Cell_prev.getHiddenState() ch = image.size()[1] # print("about to apply conv1d") # state_to_state = torch.nn.Conv2d(ch,4*ch,kernel_size=(1,3),padding=(0,1)) # error is here: hidPrev is an array - not a valid number of input channel # print("applied conv1d") prevHid=Cell_prev.getHiddenState() ssgates = self.splitSS(self.state_to_state(prevHid.unsqueeze(0))) #need to unsqueeze (Ex: currently 16x5, need to make 1x16x5) gates = self.addGates(isgates,ssgates,current) # split gates ig, og, fg, gg = gates[0], gates[1], gates[2], gates[3] # into four, ADD SIGMOID! cell = RowLSTMCell(Cell_prev,ig,og,fg,gg,0,0) cell.compute() _layer.append(cell) # attempting to eliminate requirement of getting size #print(current) try: print("adding one to current") current+=1 y=(isgates[0][0][1][current]) return self.RowLSTM(image) #expecting floattensor, but gets cuda floattensor except Exception as error: print(error) concats=[] print(len(_layer)) for cell in _layer: tensor=torch.unsqueeze(cell.h,0) concats.append(tensor) seq=tuple(concats) print("non catted tensor: "+str(tensor.size())) tense=torch.cat(seq,3) print("catted lstm tensor "+str(tense.size())) return tensor The code runs, but when trying to go through the try/except block, the error is thrown. I am guessing the mistake lies somewhere here? Edit: Using print statements to see where the program exactly terminates, it seems that there is a mistake in code that I have note posted yet! I will post that now, it looks like the error is in the compute() function, since the statement "finished computing" never gets printed. class RowLSTMCell(): #inherit torch.nn.LSTM? def __init__(self,prev_row, i, o, f, g, c, h): #super(RowLSTMCell,self).__init__() self.c=c #self.c = self.c.cuda() self.h=h # self.h = self.h.cuda() self.i=i self.i = self.i.cuda() self.o=o self.o = self.o.cuda() self.g=g self.g = self.g.cuda() self.f=f self.f = self.f.cuda() self.prev_row=prev_row def getStateSize(self): return self._state_size def getOutputSize(self): return self._output_size def compute(self): print("computing") c_prev = self.prev_row.getCellState() h_prev = self.prev_row.getHiddenState() self.c = self.f * c_prev + self.i * self.g self.h = torch.tanh(self.c) * self.o print("finished computing") def getHiddenState(self): return self.h def getCellState(self): return self.c
It turned out that self.c and self.h were not on the GPU! I guess you really have to make sure that every tensor is on CUDA. I just put .cuda() at the end of self.c and self.h's computation in the compute() method.
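For reference, a sketch of the patched method (assuming a single default GPU; this would replace RowLSTMCell.compute from the question):

```python
import torch

def compute(self):
    c_prev = self.prev_row.getCellState()
    # move both results onto the GPU so later ops see matching tensor types
    self.c = (self.f * c_prev + self.i * self.g).cuda()
    self.h = (torch.tanh(self.c) * self.o).cuda()
```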
https://stackoverflow.com/questions/51562323/
How do I load custom image based datasets into Pytorch for use with a CNN?
I have searched for hours on the internet to find a good solution to my issue. Here is some relevant background information to help you answer my question. This is my first ever deep learning project and I have no idea what I am doing. I know the theory but not the practical elements. The data that I am using can be found on kaggle at this link: (https://www.kaggle.com/alxmamaev/flowers-recognition) I am aiming to classify flowers based on the images provided in the dataset using a CNN. Here is some sample code I have tried to use to load data in so far; this is my best attempt, but as I mentioned I am clueless and the Pytorch docs didn't offer much help that I could understand at my level. (https://pastebin.com/fNLVW1UW) # Loads the images for use with the CNN. def load_images(image_size=32, batch_size=64, root="../images"): transform = transforms.Compose([ transforms.Resize(32), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) train_set = datasets.ImageFolder(root=root, train=True, transform=transform) train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=2) return train_loader # Defining variables for use with the CNN. classes = ('daisy', 'dandelion', 'rose', 'sunflower', 'tulip') train_loader_data = load_images() # Training samples. n_training_samples = 3394 train_sampler = SubsetRandomSampler(np.arange(n_training_samples, dtype=np.int64)) # Validation samples. n_val_samples = 424 val_sampler = SubsetRandomSampler(np.arange(n_training_samples, n_training_samples + n_val_samples, dtype=np.int64)) # Test samples. n_test_samples = 424 test_sampler = SubsetRandomSampler(np.arange(n_test_samples, dtype=np.int64)) Here are my direct questions that I require answers to: How do I fix my code to load in the dataset in an 80/10/10 split for training/test/validation? How do I create the required labels/classes for these images, which are already divided by folders in /images?
Looking at the data from Kaggle and your code, there are problems in your data loading. The data should be in a different folder per class label for PyTorch ImageFolder to load it correctly. In your case, since all the training data is in the same folder, PyTorch is loading it as one train set. You can correct this by using a folder structure like - train/daisy, train/dandelion, test/daisy, test/dandelion and then passing the train and the test folder to the train and test ImageFolder respectively. Just change the folder structure and you should be good. Take a look at the official documentation of torchvision.datasets.Imagefolder which has a similar example. As you said, these images which are already divided by folders in /images. PyTorch ImageFolder assumes that images are organized in the following way. But this folder structure is only correct if you are using all the images for train set: ``` /images/daisy/100080576_f52e8ee070_n.jpg /images/daisy/10140303196_b88d3d6cec.jpg . . . /images/dandelion/10043234166_e6dd915111_n.jpg /images/dandelion/10200780773_c6051a7d71_n.jpg ``` where 'daisy', 'dandelion' etc. are class labels. The correct folder structure if you want to split the dataset into train and test set in your case (note that I know you want to split the dataset into train, validation, and test set, but it doesn't matters as this is just an example to get the idea out): ``` /images/train/daisy/100080576_f52e8ee070_n.jpg /images/train/daisy/10140303196_b88d3d6cec.jpg . . /images/train/dandelion/10043234166_e6dd915111_n.jpg /images/train/dandelion/10200780773_c6051a7d71_n.jpg . . /images/test/daisy/300080576_f52e8ee070_n.jpg /images/test/daisy/95140303196_b88d3d6cec.jpg . . /images/test/dandelion/32143234166_e6dd915111_n.jpg /images/test/dandelion/65200780773_c6051a7d71_n.jpg ``` Then, you can refer to the following full code example on how to write a dataloader: import os import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable import torch.utils.data as data import torchvision from torchvision import transforms EPOCHS = 2 BATCH_SIZE = 10 LEARNING_RATE = 0.003 TRAIN_DATA_PATH = "./images/train/" TEST_DATA_PATH = "./images/test/" TRANSFORM_IMG = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(256), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) train_data = torchvision.datasets.ImageFolder(root=TRAIN_DATA_PATH, transform=TRANSFORM_IMG) train_data_loader = data.DataLoader(train_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) test_data = torchvision.datasets.ImageFolder(root=TEST_DATA_PATH, transform=TRANSFORM_IMG) test_data_loader = data.DataLoader(test_data, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) class CNN(nn.Module): # omitted... 
if __name__ == '__main__': print("Number of train samples: ", len(train_data)) print("Number of test samples: ", len(test_data)) print("Detected Classes are: ", train_data.class_to_idx) # classes are detected by folder structure model = CNN() optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE) loss_func = nn.CrossEntropyLoss() # Training and Testing for epoch in range(EPOCHS): for step, (x, y) in enumerate(train_data_loader): b_x = Variable(x) # batch x (image) b_y = Variable(y) # batch y (target) output = model(b_x)[0] loss = loss_func(output, b_y) optimizer.zero_grad() loss.backward() optimizer.step() if step % 50 == 0: test_x, test_y = next(iter(test_data_loader)) # draw one test batch test_output, last_layer = model(Variable(test_x)) pred_y = torch.max(test_output, 1)[1].data.squeeze() accuracy = (pred_y == test_y).sum().item() / float(test_y.size(0)) print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.2f' % accuracy)
https://stackoverflow.com/questions/51577282/
Data loading with variable batch size?
I am currently working on patch based super-resolution. Most of the papers divide an image into smaller patches and then use the patches as input to the models. I was able to create patches using a custom dataloader. The code is given below: import torch.utils.data as data from torchvision.transforms import CenterCrop, ToTensor, Compose, ToPILImage, Resize, RandomHorizontalFlip, RandomVerticalFlip from os import listdir from os.path import join from PIL import Image import random import os import numpy as np import torch def is_image_file(filename): return any(filename.endswith(extension) for extension in [".png", ".jpg", ".jpeg", ".bmp"]) class TrainDatasetFromFolder(data.Dataset): def __init__(self, dataset_dir, patch_size, is_gray, stride): super(TrainDatasetFromFolder, self).__init__() self.imageHrfilenames = [] self.imageHrfilenames.extend(join(dataset_dir, x) for x in sorted(listdir(dataset_dir)) if is_image_file(x)) self.is_gray = is_gray self.patchSize = patch_size self.stride = stride def _load_file(self, index): filename = self.imageHrfilenames[index] hr = Image.open(self.imageHrfilenames[index]) downsizes = (1, 0.7, 0.45) downsize = 2 w_ = int(hr.width * downsizes[downsize]) h_ = int(hr.height * downsizes[downsize]) aug = Compose([Resize([h_, w_], interpolation=Image.BICUBIC), RandomHorizontalFlip(), RandomVerticalFlip()]) hr = aug(hr) rv = random.randint(0, 4) hr = hr.rotate(90*rv, expand=1) filename = os.path.splitext(os.path.split(filename)[-1])[0] return hr, filename def _patching(self, img): img = ToTensor()(img) LR_ = Compose([ToPILImage(), Resize(self.patchSize//2, interpolation=Image.BICUBIC), ToTensor()]) HR_p, LR_p = [], [] for i in range(0, img.shape[1] - self.patchSize, self.stride): for j in range(0, img.shape[2] - self.patchSize, self.stride): temp = img[:, i:i + self.patchSize, j:j + self.patchSize] HR_p += [temp] LR_p += [LR_(temp)] return torch.stack(LR_p),torch.stack(HR_p) def __getitem__(self, index): HR_, filename = self._load_file(index) LR_p, HR_p = self._patching(HR_) return LR_p, HR_p def __len__(self): return len(self.imageHrfilenames) Suppose the batch size is 1: it takes an image and gives an output of size [x,3,patchsize,patchsize]. When the batch size is 2, I will have two different outputs of size [x,3,patchsize,patchsize] (for example image 1 may give [50,3,patchsize,patchsize], image 2 may give [75,3,patchsize,patchsize]). To handle this a custom collate function was required that stacks these two outputs along dimension 0. The collate function is given below: def my_collate(batch): data = torch.cat([item[0] for item in batch],dim = 0) target = torch.cat([item[1] for item in batch],dim = 0) return [data, target] This collate function concatenates along x (from the above example, I finally get [125,3,patchsize,patchsize]). For training purposes, I need to train the model using a minibatch size of, say, 25. Is there any method or function I can use to get an output of size [25, 3, patchsize, patchsize] directly from the dataloader, using the necessary number of images as input to the Dataloader?
The following code snippet works for your purpose. First, we define a ToyDataset which takes in a list of tensors (tensors) of variable length in dimension 0. This is similar to the samples returned by your dataset. import torch from torch.utils.data import Dataset from torch.utils.data.sampler import RandomSampler class ToyDataset(Dataset): def __init__(self, tensors): self.tensors = tensors def __getitem__(self, index): return self.tensors[index] def __len__(self): return len(tensors) Secondly, we define a custom data loader. The usual Pytorch dichotomy to create datasets and data loaders is roughly the following: There is an indexed dataset, to which you can pass an index and it returns the associated sample from the dataset. There is a sampler which yields an index, there are different strategies to draw indices which give rise to different samplers. The sampler is used by a batch_sampler to draw multiple indices at once (as many as specified by batch_size). There is a dataloader which combines sampler and dataset to let you iterate over a dataset, importantly the data loader also owns a function (collate_fn) which specifies how the multiple samples retrieved from the dataset using the indices from the batch_sampler should be combined. For your use case, the usual PyTorch dichotomy does not work well, because instead of drawing a fixed number of indices, we need to draw indices until the objects associated with the indices exceed the cumulative size we desire. This means we need immediate inspection of the objects and use this knowledge to decide whether to return a batch or keep drawing indices. This is what the custom data loader below does: class CustomLoader(object): def __init__(self, dataset, my_bsz, drop_last=True): self.ds = dataset self.my_bsz = my_bsz self.drop_last = drop_last self.sampler = RandomSampler(dataset) def __iter__(self): batch = torch.Tensor() for idx in self.sampler: batch = torch.cat([batch, self.ds[idx]]) while batch.size(0) >= self.my_bsz: if batch.size(0) == self.my_bsz: yield batch batch = torch.Tensor() else: return_batch, batch = batch.split([self.my_bsz,batch.size(0)-self.my_bsz]) yield return_batch if batch.size(0) > 0 and not self.drop_last: yield batch Here we iterate over the dataset, after drawing an index and loading the associated object, we concatenate it to the tensors we drew before (batch). We keep doing this until we reach the desired size, such that we can cut out and yield a batch. We retain the rows in batch, which we did not yield. Because it may be the case that a single instance exceeds the desired batch_size, we use a while loop. You could modify this minimal CustomDataloader to add more features in the style of PyTorch's dataloader. There is also no need to use a RandomSampler to draw in indices, others would work equally well. It would also be possible to avoid repeated concats, in case your data is large by using for example a list and keeping track of the cumulative length of its tensors. Here is an example, that demonstrates it works: patch_size = 5 channels = 3 dim0sizes = torch.LongTensor(100).random_(1, 100) data = torch.randn(size=(dim0sizes.sum(), channels, patch_size, patch_size)) tensors = torch.split(data, list(dim0sizes)) ds = ToyDataset(tensors) dl = CustomLoader(ds, my_bsz=250, drop_last=False) for i in dl: print(i.size(0))
https://stackoverflow.com/questions/51585298/
Linear regression with pytorch
I tried to run linear regression on the ForestFires dataset. The dataset is available on Kaggle and the gist of my attempt is here: https://gist.github.com/Chandrak1907/747b1a6045bb64898d5f9140f4cf9a37 I am facing two problems: 1. The output from prediction is of shape 32x1 and the target data shape is 32, which raises: input and target shapes do not match: input [32 x 1], target [32] Using view I reshaped the predictions tensor: y_pred = y_pred.view(inputs.shape[0]) Why is there a mismatch in the shapes of the predicted tensor and the actual tensor? 2. SGD in pytorch never converges. I tried to compute the MSE manually using print(torch.mean((y_pred - labels)**2)) This value does not match loss = criterion(y_pred,labels) Can someone highlight where the mistake is in my code? Thank you.
Problem 1 This is the reference for MSELoss from the Pytorch docs: https://pytorch.org/docs/stable/nn.html#torch.nn.MSELoss Shape: - Input: (N, *) where * means any number of additional dimensions - Target: (N, *), same shape as the input So, you need to expand the dims of labels: (32) -> (32,1), by using: torch.unsqueeze(labels, 1) or labels.view(-1,1) https://pytorch.org/docs/stable/torch.html#torch.unsqueeze torch.unsqueeze(input, dim, out=None) → Tensor Returns a new tensor with a dimension of size one inserted at the specified position. The returned tensor shares the same underlying data with this tensor. Problem 2 After reviewing your code, I realized that you have added the size_average param to MSELoss: criterion = torch.nn.MSELoss(size_average=False) size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True That's why the two computed values don't match. Here is some sample code: import torch import torch.nn as nn loss1 = nn.MSELoss() loss2 = nn.MSELoss(size_average=False) inputs = torch.randn(32, 1, requires_grad=True) targets = torch.randn(32, 1) output1 = loss1(inputs, targets) output2 = loss2(inputs, targets) output3 = torch.mean((inputs - targets) ** 2) print(output1) # tensor(1.0907) print(output2) # tensor(34.9021) print(output3) # tensor(1.0907)
https://stackoverflow.com/questions/51586680/
Why doesn't my simple pytorch network work on GPU device?
I built a simple network from a tutorial and I got this error: RuntimeError: Expected object of type torch.cuda.FloatTensor but found type torch.FloatTensor for argument #4 'mat1' Any help? Thank you! import torch import torchvision device = torch.device("cuda:0") root = '.data/' dataset = torchvision.datasets.MNIST(root, transform=torchvision.transforms.ToTensor(), download=True) dataloader = torch.utils.data.DataLoader(dataset, batch_size=4) class Net(torch.nn.Module): def __init__(self): super(Net, self).__init__() self.out = torch.nn.Linear(28*28, 10) def forward(self, x): x = x.view(x.size(0), -1) x = self.out(x) return x net = Net() net.to(device) for i, (inputs, labels) in enumerate(dataloader): inputs.to(device) out = net(inputs)
TL;DR This is the fix inputs = inputs.to(device) Why?! There is a slight difference between torch.nn.Module.to() and torch.Tensor.to(): while Module.to() is an in-place operator, Tensor.to() is not. Therefore net.to(device) Changes net itself and moves it to device. On the other hand inputs.to(device) does not change inputs, but rather returns a copy of inputs that resides on device. To use that "on device" copy, you need to assign it into a variable, hence inputs = inputs.to(device)
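Applied to the loop from the question:

```python
for i, (inputs, labels) in enumerate(dataloader):
    inputs = inputs.to(device)  # assign the returned copy
    labels = labels.to(device)  # needed too, once you compute a loss
    out = net(inputs)
```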
https://stackoverflow.com/questions/51605893/
Understanding PyTorch prediction
For my trained model this code: model(x[0].reshape(1,784).cuda()) returns: tensor([[-1.9903, -4.0458, -4.1143, -4.0074, -3.5510, 7.1074]], device='cuda:0') My network model is defined as: # Hyper-parameters input_size = 784 hidden_size = 50 num_classes = 6 num_epochs = 5000 batch_size = 1 learning_rate = 0.0001 criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) class NeuralNet(nn.Module): def __init__(self, input_size, hidden_size, num_classes): super(NeuralNet, self).__init__() self.fc1 = nn.Linear(input_size, hidden_size) self.relu = nn.ReLU() self.fc2 = nn.Linear(hidden_size, num_classes) def forward(self, x): out = self.fc1(x) out = self.relu(out) out = self.fc2(out) return out I'm attempting to understand the returned value: tensor([[-1.9903, -4.0458, -4.1143, -4.0074, -3.5510, 7.1074]], device='cuda:0') Is the value 7.1074 the most probable, as it is the maximum value in the tensor array? As 7.1074 is at position 5, is the significance here that the associated output value being predicted for input x[0] is 5? If so, what is the intuition behind this?
Disclaimer: I don't really know PyTorch, but I'm guessing based on other libraries and general standard practice as I know it. I believe those are the outputs of the last layer, which would be that fc2 linear transformation. So, the predicted category would be category 5, as it has the highest value. You could think of the outputs as a sort of 'raw', probability-distribution-esque range that is commonly then squashed into the range (0, 1) via softmax.
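As a small sketch, here is one way to turn those raw scores into a class prediction and a proper probability distribution, reusing the model from the question:

```python
import torch
import torch.nn.functional as F

logits = model(x[0].reshape(1, 784).cuda())  # the raw scores shown above
probs = F.softmax(logits, dim=1)             # values in (0, 1) that sum to 1
pred = torch.argmax(logits, dim=1)           # tensor([5]) for this example
```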
https://stackoverflow.com/questions/51620964/
PyTorch: passing numpy array for weight initialization
I'd like to initialize the parameters of an RNN with np arrays. In the following example, I want to pass w to the parameters of rnn. I know pytorch provides many initialization methods like Xavier, uniform, etc., but is there a way to initialize the parameters by passing numpy arrays? import numpy as np import torch.nn as nn rng = np.random.RandomState(313) w = rng.randn(input_size, hidden_size).astype(np.float32) rnn = nn.RNN(input_size, hidden_size, num_layers)
First, let's note that nn.RNN has more than one weight variable, cf. the documentation: Variables: weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size * input_size) for k = 0. Otherwise, the shape is (hidden_size * hidden_size) weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size * hidden_size) bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size) bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size) Now, each of these variables (Parameter instances) is an attribute of your nn.RNN instance. You can access them, and edit them, in two ways, as shown below: Solution 1: Accessing all the RNN Parameter attributes by name (rnn.weight_hh_lK, rnn.weight_ih_lK, etc.): import torch from torch import nn import numpy as np input_size, hidden_size, num_layers = 3, 4, 2 use_bias = True rng = np.random.RandomState(313) rnn = nn.RNN(input_size, hidden_size, num_layers, bias=use_bias) def set_nn_parameter_data(layer, parameter_name, new_data): param = getattr(layer, parameter_name) param.data = new_data for i in range(num_layers): weights_hh_layer_i = rng.randn(hidden_size, hidden_size).astype(np.float32) weights_ih_layer_i = rng.randn(hidden_size, hidden_size).astype(np.float32) set_nn_parameter_data(rnn, "weight_hh_l{}".format(i), torch.from_numpy(weights_hh_layer_i)) set_nn_parameter_data(rnn, "weight_ih_l{}".format(i), torch.from_numpy(weights_ih_layer_i)) if use_bias: bias_hh_layer_i = rng.randn(hidden_size).astype(np.float32) bias_ih_layer_i = rng.randn(hidden_size).astype(np.float32) set_nn_parameter_data(rnn, "bias_hh_l{}".format(i), torch.from_numpy(bias_hh_layer_i)) set_nn_parameter_data(rnn, "bias_ih_l{}".format(i), torch.from_numpy(bias_ih_layer_i)) Solution 2: Accessing all the RNN Parameter attributes through the rnn.all_weights list attribute: import torch from torch import nn import numpy as np input_size, hidden_size, num_layers = 3, 4, 2 use_bias = True rng = np.random.RandomState(313) rnn = nn.RNN(input_size, hidden_size, num_layers, bias=use_bias) for i in range(num_layers): weights_hh_layer_i = rng.randn(hidden_size, hidden_size).astype(np.float32) weights_ih_layer_i = rng.randn(hidden_size, hidden_size).astype(np.float32) rnn.all_weights[i][0].data = torch.from_numpy(weights_ih_layer_i) rnn.all_weights[i][1].data = torch.from_numpy(weights_hh_layer_i) if use_bias: bias_hh_layer_i = rng.randn(hidden_size).astype(np.float32) bias_ih_layer_i = rng.randn(hidden_size).astype(np.float32) rnn.all_weights[i][2].data = torch.from_numpy(bias_ih_layer_i) rnn.all_weights[i][3].data = torch.from_numpy(bias_hh_layer_i)
https://stackoverflow.com/questions/51628607/
Pytorch: NN function approximator, 2 in 1 out
[Please be aware of the Edit History below, as the major problem statement has changed.] We are trying to implement a neural network in pytorch, that approximates a function f(x,y)=z. So there are two real numbers as input and one as ouput, we therefore want 2 nodes in the input layer and one in the output layer. We constructed a test set of 5050 samples and had pretty good results for that task in Keras with Tensorflow backend, with 3 hidden layers with a configuration of the nodes like: 2(in) - 4 - 16 - 4 - 1(out); and ReLU activation functions on all hidden layers, linear on in- and output. Now in Pytorch we tried to implement a similar network but our loss function still literally explodes: It changes in the first few steps and converges then to some value around 10^7. In Keras we had an error around 10 percent. We already tried different network configurations without any improvement. Maybe someone could have a look on our code and suggest any change? To explain: tr_data is a list, containing 5050 2*1 numpy arrays which are the inputs for the network. tr_labels is a list, containing 5050 numbers which are the outputs we want to learn. loadData() just load those two lists. import torch.nn as nn import torch.nn.functional as F BATCH_SIZE = 5050 DIM_IN = 2 DIM_HIDDEN_1 = 4 DIM_HIDDEN_2 = 16 DIM_HIDDEN_3 = 4 DIM_OUT = 1 LEARN_RATE = 1e-4 EPOCH_NUM = 500 class Net(nn.Module): def __init__(self): #super(Net, self).__init__() super().__init__() self.hidden1 = nn.Linear(DIM_IN, DIM_HIDDEN_1) self.hidden2 = nn.Linear(DIM_HIDDEN_1, DIM_HIDDEN_2) self.hidden3 = nn.Linear(DIM_HIDDEN_2, DIM_HIDDEN_3) self.out = nn.Linear(DIM_HIDDEN_3, DIM_OUT) def forward(self, x): x = F.relu(self.hidden1(x)) x = F.tanh(self.hidden2(x)) x = F.tanh(self.hidden3(x)) x = self.out(x) return x model = Net() loss_fn = nn.MSELoss(size_average=False) optimizer = torch.optim.Adam(model.parameters(), lr=LEARN_RATE) tr_data,tr_labels = loadData() tr_data_torch = torch.zeros(BATCH_SIZE, DIM_IN) tr_labels_torch = torch.zeros(BATCH_SIZE, DIM_OUT) for i in range(BATCH_SIZE): tr_data_torch[i] = torch.from_numpy(tr_data[i]) tr_labels_torch[i] = tr_labels[i] for t in range(EPOCH_NUM): labels_pred = model(tr_data_torch) loss = loss_fn(labels_pred, tr_labels_torch) #print(t, loss.item()) optimizer.zero_grad() loss.backward() optimizer.step() I have to say, those are our first steps in Pytorch, so please forgive me if there are some obvious, dumb mistakes. I appreciate any help or hint, Thank you! EDIT 1 ------------------------------------------------------------------ Following the comments and answers, we improved our code. The Loss function has now for the first time reasonable values, around 250. Our new class definition looks like: class Net(nn.Module): def __init__(self): super(Net, self).__init__() #super().__init__() self.hidden1 = nn.Sequential(nn.Linear(DIM_IN, DIM_HIDDEN_1), nn.ReLU()) self.hidden2 = nn.Sequential(nn.Linear(DIM_HIDDEN_1, DIM_HIDDEN_2), nn.ReLU()) self.hidden3 = nn.Sequential(nn.Linear(DIM_HIDDEN_2, DIM_HIDDEN_3), nn.ReLU()) self.out = nn.Linear(DIM_HIDDEN_3, DIM_OUT) def forward(self, x): x = self.hidden1(x) x = self.hidden2(x) x = self.hidden3(x) x = self.out(x) return x and the loss function: loss_fn = nn.MSELoss(size_average=True, reduce=True) As we stated before, we already had far more satisfying results in keras with tensorflow backend. The loss function was around 30, with a similar network configuration. I share the essential parts(!) 
of our keras code here: model = Sequential() model.add(Dense(4, activation="linear", input_shape=(2,))) model.add(Dense(16, activation="relu")) model.add(Dense(4, activation="relu")) model.add(Dense(1, activation="linear" )) model.summary() model.compile ( loss="mean_squared_error", optimizer="adam", metrics=["mse"] ) history=model.fit ( np.array(tr_data), np.array(tr_labels), \ validation_data = ( np.array(val_data), np.array(val_labels) ), batch_size=50, epochs=200, callbacks = [ cbk ] ) Thank you already for all the help! If anybody still has suggestions to improve the network, we would be happy to hear them. As somebody already asked for the data, we want to share a pickle file here: https://mega.nz/#!RDYxSYLY!P4a9mEDtZ7A5Bl7ZRjRk8EzLXQt2gyURa3wN3NCWFPA together with the code to access it: import pickle f=open("data.pcl","rb") tr_data=pickle.load ( f ) tr_labels=pickle.load ( f ) val_data=pickle.load ( f ) val_labels=pickle.load ( f ) f.close()
It may be worth looking at the differences between torch.nn and torch.nn.functional (see here). Essentially, it might be that your backpropagation graph is not executed 100% correctly due to a different specification. As pointed out by previous commenters, I would suggest defining your layers including the activations. My personal favorite way is to use nn.Sequential(), which allows you to specify multiple operations chained together, like so: self.hidden1 = nn.Sequential(nn.Linear(DIM_IN, DIM_HIDDEN1), nn.ReLU()) and then simply calling self.hidden1 later (without wrapping it in F.relu()). May I also ask why you do not call the commented-out super(Net, self).__init__() (which is the generally recommended way)? Additionally, if that does not fix the problem, can you maybe also share the code for Keras in comparison?
https://stackoverflow.com/questions/51631155/
Model taking long time to train
I have added an LSTM layer after a convolution in the VGG-16 model using PyTorch. Over time, the model learns just fine. However, after adding just one LSTM layer, which consists of 32 LSTM cells, the process of training and evaluating takes about 10x longer. I added the LSTM layer to a VGG framework as follows def make_layers(cfg, batch_norm=False): # print("Making layers!") layers = [] in_channels = 3 count=0 for v in cfg: count+=1 if v == 'M': layers += [nn.MaxPool2d(kernel_size=2, stride=2)] else: conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1) if batch_norm: layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)] else: layers += [conv2d, nn.ReLU(inplace=True)] in_channels=v if count==5: rlstm =RLSTM(v) rlstm=rlstm.cuda() layers+=[rlstm] RLSTM is my custom class, which implements RowLSTM, from Google's Pixel RNN paper. Is this a common issue? Do LSTM layers just take long to train in general?
Yes, since LSTMs (and many other RNNs) rely on sequential feeding of information, you lose a big portion of the parallelization speed-ups you generally have with CNNs. There are other types of RNNs you can explore that leverage more parallelizable algorithms, but the verdict on their predictive performance compared to LSTM/GRU is still out.
https://stackoverflow.com/questions/51637854/
How do you change require_grad to false for each parameters in your model in pytorch?
My code is below which I thought would do what I want but the output shows require_grad didn't change to false. import torch import torch.nn as nn encoder = nn.Sequential( nn.Conv2d(1, 4, 1), nn.Sigmoid()) for params in encoder.parameters(): params.require_grad = False print(params.requires_grad) # prints two True statements? What am I doing wrong?
You just have a typo :) Simply add an s at the end of grad in params.require_grad = False Change this to params.requires_grad = False (note the added s) Typos can be hard to catch sometimes ;)
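With the typo fixed, the loop behaves as expected:

```python
for params in encoder.parameters():
    params.requires_grad = False  # note the trailing "s"
    print(params.requires_grad)   # now prints False twice
```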
https://stackoverflow.com/questions/51638932/
Cloud Storage Buckets for PyTorch
For a particular task I'm working on I have a dataset that is about 25 GB. I'm still experimenting with several methods of preprocessing and definitely don't have my data in its final form yet. I'm not sure what the common workflow is for this sort of problem, so here is what I'm thinking: Copy dataset from bucket storage to Compute Engine machine SSD (maybe use around 50 GB SSD) using gcsfuse. Apply various preprocessing operations as an experiment. Run training with PyTorch on the data stored on the local disk (SSD). Copy newly processed data back to the storage bucket with gcsfuse if it was successful. Upload results and delete the persistent disk that was used during training. The alternative approach is this: Run the processing operations on the data within the Cloud Bucket itself using the mounted directory with gcsfuse. Run training with PyTorch directly on the mounted gcsfuse Bucket directory, using a compute engine instance with very limited storage. Upload results and delete the Compute Engine instance. Which of these approaches is suggested? Which will incur fewer charges and is used most often when running these kinds of operations? Is there a different workflow that I'm not seeing here?
On the billing side, the charges would be the same, as the fuse operations are charged like any other Cloud Storage interface according to the documentation. In your use case I don’t know how you are going to train the data, but if you do more than one operation to files it would be better to have them downloaded, trained locally and then the final result uploaded, which would be 2 object operations. If you do, for example, more than one change or read to a file during the training, every operation would be an object operation. On the workflow side, the proposed one looks good to me.
https://stackoverflow.com/questions/51639141/
Can a Neural Network learn a simple interpolation?
I've tried to train a 2 layer neural network on a simple linear interpolation for a discrete function, I've tried lots of different learning rates as well as different activation functions, and it seems like nothing is being learned! I've literally spent the last 6 hours trying to debug the following code, but it seems like there's no bug! What's the explanation? from torch.utils.data import Dataset import os import torch import numpy as np import torch.nn as nn import torch.optim as optim import random LOW_X=255 MID_X=40000 HIGH_X=200000 LOW_Y=torch.Tensor([0,0,1]) MID_Y=torch.Tensor([0.2,0.5,0.3]) HIGH_Y=torch.Tensor([1,0,0]) BATCH_SIZE=4 def x_to_tensor(x): if x<=MID_X: return LOW_Y+(x-LOW_X)*(MID_Y-LOW_Y)/(MID_X-LOW_X) if x<=HIGH_X: return MID_Y+(x-MID_X)*(HIGH_Y-MID_Y)/(HIGH_X-MID_X) return HIGH_Y class XYDataset(Dataset): LENGTH=10000 def __len__(self): return self.LENGTH def __getitem__(self, idx): x=random.randint(LOW_X,HIGH_X) y=x_to_tensor(x) return x,y class Interpolate(nn.Module): def __init__(self, num_outputs,hidden_size=10): super(Interpolate, self).__init__() self.hidden_size=hidden_size self.x_to_hidden = nn.Linear(1, hidden_size) self.hidden_to_out = nn.Linear(hidden_size,num_outputs) self.activation = nn.Tanh() #I have tried Sigmoid and Relu activations as well self.softmax=torch.nn.Softmax(dim=1) def forward(self, x): out = self.x_to_hidden(x) out = self.activation(out) out = self.hidden_to_out(out) out = self.softmax(out) return out dataset=XYDataset() trainloader = torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4) criterion= nn.MSELoss() def train_net(net,epochs=10,lr=5.137871216190041e-05,l2_regularization=2.181622809797563e-12): optimizer= optim.Adam(net.parameters(),lr=lr,weight_decay=l2_regularization) net.train(True) running_loss=0.0 for epoch in range(epochs): for i,data in enumerate(trainloader): inputs,targets=data inputs,targets=torch.FloatTensor(inputs.float()).view(-1,1),torch.FloatTensor(targets.float()) optimizer.zero_grad() outputs=net(inputs) loss=criterion(outputs,targets) loss.backward() optimizer.step() running_loss+=loss.item() if (len(trainloader)*epoch+i)%200==199: running_loss=running_loss/(200*BATCH_SIZE) print('[%d,%5d] loss: %.6f ' % (epoch+1,i+1,running_loss)) running_loss=0.0 for i in range(-11,3): net=Interpolate(num_outputs=3) train_net(net,lr=10**i,epochs=1) print('for learning rate {} net output on low x is {}'.format(i,net(torch.Tensor([255]).view(-1,1))))
Although your problem is quite simple, it is poorly scaled: x ranges from 255 to 200K. This poor scaling leads to numerical instability and overall makes the training process unnecessarily unstable. To overcome this technical issue, you simply need to scale your inputs to [-1, 1] (or [0, 1]) range. Note that this scaling is quite ubiquitous in deep-learning: images are scaled to [-1, 1] range (see, e.g., torchvision.transforms.Normalize). To understand better the importance of scaled responses, you can look into the mathematical analysis done in this paper.
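A minimal sketch of that scaling using the constants from the question (scale_x is a made-up helper name):

```python
def scale_x(x):
    # map [LOW_X, HIGH_X] linearly onto [-1, 1]
    return 2.0 * (x - LOW_X) / (HIGH_X - LOW_X) - 1.0

# e.g. in XYDataset.__getitem__:
#     x = random.randint(LOW_X, HIGH_X)
#     return scale_x(x), x_to_tensor(x)
```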
https://stackoverflow.com/questions/51640064/
pytorch how to remove cuda() from tensor
I got TypeError: expected torch.LongTensor (got torch.cuda.FloatTensor). How do I convert torch.cuda.FloatTensor to torch.LongTensor? Traceback (most recent call last): File "train_v2.py", line 110, in <module> main() File "train_v2.py", line 81, in main model.update(batch) File "/home/Desktop/squad_vteam/src/model.py", line 131, in update loss_adv = self.adversarial_loss(batch, loss, self.network.lexicon_encoder.embedding.weight, y) File "/home/Desktop/squad_vteam/src/model.py", line 94, in adversarial_loss adv_embedding = torch.LongTensor(adv_embedding) TypeError: expected torch.LongTensor (got torch.cuda.FloatTensor)
If you have a float tensor f and want to convert it to long, you do long_tensor = f.long() If you have a cuda tensor, i.e. the data is on the GPU, and want to move it to the CPU, you can do cuda_tensor.cpu(). So to convert a torch.cuda.FloatTensor A to a torch.LongTensor, do A.long().cpu()
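Put together (this assumes a CUDA device is available):

```python
import torch

A = torch.randn(3).cuda()  # torch.cuda.FloatTensor
B = A.long().cpu()         # torch.LongTensor, back on the CPU
print(B.type())            # torch.LongTensor
```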
https://stackoverflow.com/questions/51664192/
Data Augmentation in PyTorch
I am a little bit confused about the data augmentation performed in PyTorch. Now, as far as I know, when we are performing data augmentation, we are KEEPING our original dataset and then adding other versions of it (flipping, cropping, etc.). But that doesn't seem to be happening in PyTorch. As far as I understood from the references, when we use data.transforms in PyTorch, then it applies them one by one. So for example: data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } Here, for the training, we are first randomly cropping the image and resizing it to shape (224,224). Then we are taking these (224,224) images and horizontally flipping them. Therefore, our dataset now contains ONLY the horizontally flipped images, so our original images are lost in this case. Am I right? Is this understanding correct? If not, then where do we tell PyTorch in this code above (taken from the official documentation) to keep the original images and resize them to the expected shape (224,224)? Thanks
The transforms operations are applied to your original images at every batch generation. So your dataset is left unchanged, only the batch images are copied and transformed every iteration. The confusion may come from the fact that often, like in your example, transforms are used both for data preparation (resizing/cropping to expected dimensions, normalizing values, etc.) and for data augmentation (randomizing the resizing/cropping, randomly flipping the images, etc.). What your data_transforms['train'] does is: Randomly resize the provided image and randomly crop it to obtain a (224, 224) patch Apply or not a random horizontal flip to this patch, with a 50/50 chance Convert it to a Tensor Normalize the resulting Tensor, given the mean and deviation values you provided What your data_transforms['val'] does is: Resize your image to (256, 256) Center crop the resized image to obtain a (224, 224) patch Convert it to a Tensor Normalize the resulting Tensor, given the mean and deviation values you provided (i.e. the random resizing/cropping for the training data is replaced by a fixed operation for the validation one, to have reliable validation results) If you don't want your training images to be horizontally flipped with a 50/50 chance, just remove the transforms.RandomHorizontalFlip() line. Similarly, if you want your images to always be center-cropped, replace transforms.RandomResizedCrop by transforms.Resize and transforms.CenterCrop, as done for data_transforms['val'].
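One way to convince yourself that the originals are kept: ask the dataset for the same sample twice, and you will almost always get two different random crops/flips of the same source image. This sketch assumes an ImageFolder dataset built with data_transforms['train'] (the path is hypothetical):

```python
import torch
from torchvision import datasets

train_set = datasets.ImageFolder('data/train', data_transforms['train'])
img1, _ = train_set[0]  # transforms are re-run on every access
img2, _ = train_set[0]
print(torch.equal(img1, img2))  # usually False
```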
https://stackoverflow.com/questions/51677788/
AttributeError: 'Image' object has no attribute 'new' occurs when trying to use Pytorchs AlexNet Lighting preprocessing
I tried to train my model on ImageNet using inception and Alexnet like preprocessing. I used Fast-ai imagenet training script provided script. Pytorch has support for inception like preprocessing but for AlexNets Lighting, we have to implement it ourselves : __imagenet_pca = { 'eigval': torch.Tensor([0.2175, 0.0188, 0.0045]), 'eigvec': torch.Tensor([ [-0.5675, 0.7192, 0.4009], [-0.5808, -0.0045, -0.8140], [-0.5836, -0.6948, 0.4203], ]) } # Lighting data augmentation taken from here - https://github.com/eladhoffer/convNet.pytorch/blob/master/preprocess.py class Lighting(object): """Lighting noise(AlexNet - style PCA - based noise)""" def __init__(self, alphastd, eigval, eigvec): self.alphastd = alphastd self.eigval = eigval self.eigvec = eigvec def __call__(self, img): if self.alphastd == 0: return img alpha = img.new().resize_(3).normal_(0, self.alphastd) rgb = self.eigvec.type_as(img).clone()\ .mul(alpha.view(1, 3).expand(3, 3))\ .mul(self.eigval.view(1, 3).expand(3, 3))\ .sum(1).squeeze() return img.add(rgb.view(3, 1, 1).expand_as(img)) which is finally used like this : train_tfms = transforms.Compose([ transforms.RandomResizedCrop(size), transforms.RandomHorizontalFlip(), transforms.ColorJitter(.4,.4,.4), transforms.ToTensor(), Lighting(0.1, __imagenet_pca['eigval'], __imagenet_pca['eigvec']), normalize, ]) train_dataset = datasets.ImageFolder(traindir, train_tfms) train_sampler = (torch.utils.data.distributed.DistributedSampler(train_dataset) if args.distributed else None) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None), num_workers=args.workers, pin_memory=True, sampler=train_sampler) However, the problem is, whenever I run the script I get the: 'AttributeError: 'Image' object has no attribute 'new'' Which complains about this line: alpha = img.new().resize_(3).normal_(0, self.alphastd) I am clueless as to why this is happening. I'm using Pytorch 0.4 by the way.
Thanks to @iacolippo's comment, I finally found the cause! Unlike the example I wrote here, in my actual script I had used transforms.ToTensor() after the Lighting() transform. Doing so resulted in a PIL image being sent as the input to Lighting(), which expects a Tensor, and that's why the error occurs. So basically the snippet I posted in the question is correct, and ToTensor has to be used prior to calling Lighting().
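So the working order is the one already shown in the snippet from the question, with ToTensor() before Lighting():

```python
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(size),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(.4, .4, .4),
    transforms.ToTensor(),  # PIL Image -> Tensor, *before* Lighting
    Lighting(0.1, __imagenet_pca['eigval'], __imagenet_pca['eigvec']),
    normalize,
])
```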
https://stackoverflow.com/questions/51685753/
CUDA runtime error (59) : device-side assert triggered
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu line=265 error=59 : device-side assert triggered Traceback (most recent call last): File "main.py", line 109, in <module> train(loader_train, model, criterion, optimizer) File "main.py", line 54, in train optimizer.step() File "/usr/local/anaconda35/lib/python3.6/site-packages/torch/optim/sgd.py", line 93, in step d_p.add_(weight_decay, p.data) RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu:265 How do I resolve this error?
In general, when encountering cuda runtime errors, it is advisable to run your program again with the CUDA_LAUNCH_BLOCKING=1 flag to obtain an accurate stack trace. In your specific case, the targets of your data were too high (or low) for the specified number of classes.
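Concretely (the assert is a hypothetical sanity check; num_classes is whatever your final layer outputs):

```python
# Re-run with synchronous kernel launches to get an accurate stack trace:
#     CUDA_LAUNCH_BLOCKING=1 python main.py

# Then check the targets before computing the loss:
assert labels.min().item() >= 0
assert labels.max().item() < num_classes
```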
https://stackoverflow.com/questions/51691563/
Best way to bound outputs from neural networks on reinforcement learning
I am training a neural network (feedforward, Tanh hidden layers) that receives states as inputs and gives actions as outputs. I am following the REINFORCE algorithm for policy-gradient reinforcement learning. However, I need my control actions to be bounded (let us say from 0-5). Currently the way I am doing this is by using a sigmoid output function and multiplying the output by 5. Although my algorithm has a moderate performance, I find the following drawback from using this "bounding scheme" for the output: I know for regression (hence I guess for reinforcement learning) a linear output is best, and although the sigmoid has a linear part I am afraid the network has not been able to capture this linear output behaviour correctly, or it captures it way too slowly (as its best performance is for classification, therefore polarizing the output). I am wondering what other alternatives there are, and maybe some heuristics on the matter.
Have you considered using nn.ReLU6()? This is a bounded version of the rectified linear unit, whose output is defined as out = min(max(x, 0), 6)
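If you need the exact [0, 5] range from the question, one option is to rescale its output; a minimal sketch:

```python
import torch
import torch.nn as nn

bounded = nn.ReLU6()
x = torch.randn(4) * 10
actions = bounded(x) * (5.0 / 6.0)  # [0, 6] rescaled to [0, 5]
```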
https://stackoverflow.com/questions/51693567/
How to construct a network with two inputs in PyTorch
Suppose I want to have the general neural network architecture: Input1 --> CNNLayer \ ---> FCLayer ---> Output / Input2 --> FCLayer Input1 is image data, input2 is non-image data. I have implemented this architecture in Tensorflow. All pytorch examples I have found are one input go through each layer. How can I define forward func to process 2 inputs separately then combine them in a middle layer?
By "combine them" I assume you mean to concatenate the two inputs. Assuming you concat along the second dimension: import torch from torch import nn class TwoInputsNet(nn.Module): def __init__(self): super(TwoInputsNet, self).__init__() self.conv = nn.Conv2d( ... ) # set up your layer here self.fc1 = nn.Linear( ... ) # set up first FC layer self.fc2 = nn.Linear( ... ) # set up the other FC layer def forward(self, input1, input2): c = self.conv(input1) f = self.fc1(input2) # now we can reshape `c` and `f` to 2D and concat them combined = torch.cat((c.view(c.size(0), -1), f.view(f.size(0), -1)), dim=1) out = self.fc2(combined) return out Note that when you define the number of inputs to self.fc2 you need to take into account both out_channels of self.conv as well as the output spatial dimensions of c.
https://stackoverflow.com/questions/51700729/
Issue with running a single prediction with PyTorch
I have a trained model using PyTorch; now I want to simply run it on one example >>> model nn.Sequential { [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> output] (0): nn.SpatialConvolutionMap (1): nn.Tanh (2): nn.SpatialMaxPooling(2x2, 2, 2) (3): nn.SpatialConvolutionMap (4): nn.Tanh (5): nn.SpatialMaxPooling(2x2, 2, 2) (6): nn.Reshape(6400) (7): nn.Linear(6400 -> 128) (8): nn.Tanh (9): nn.Linear(128 -> 5) (10): nn.LogSoftMax } Then I load an image from my test set: image = cv2.imread('image.png',cv2.IMREAD_GRAYSCALE) transformation = transforms.Compose([transforms.ToTensor()]) image_tensor = transformation(image).float() inp = Variable(image_tensor) and finally try to run the network output = model(inp) But I get the error TypeError: 'Sequential' object is not callable
It seems like your model is not nn.Sequential (pytorch Sequential), but rather torch.legacy.nn.Sequential (a legacy Lua torch model). Try calling this model's forward() explicitly: output = model.forward(inp[None, ...]) # don't forget to add "batch" dimension
https://stackoverflow.com/questions/51701908/
Make a personal DataLoader with PyTorch
I'm trying to create a personal dataloader with a specific format using the PyTorch library; does anyone have an idea how I can do it? I have followed the PyTorch tutorial but I didn't find my answer! I need a DataLoader that yields tuples of the following format: (Bx3xHxW FloatTensor x, BxHxW LongTensor y, BxN LongTensor y_cls) where x - batch of input images, y - batch of ground truth seg maps, y_cls - batch of 1D tensors of dimensionality N: N total number of classes, y_cls[i, T] = 1 if class T is present in image i, 0 otherwise I hope that someone can unlock the problem .. :) Thanks !
You simply need to have a dataset derived from torch.utils.data.Dataset, where __getitem__(index) returns a tuple (x, y, y_cls) of the types you want; pytorch will take care of everything else. from torch.utils import data class MyTupleDataset(data.Dataset): def __init__(self): super(MyTupleDataset, self).__init__() # init your dataset here... def __getitem__(self, index): x = torch.Tensor(3, H, W) # batch dim is handled by the data loader y = torch.Tensor(H, W).to(torch.long) y_cls = torch.Tensor(N).to(torch.long) return x, y, y_cls That's it. Provide pytorch's torch.utils.data.DataLoader with MyTupleDataset and you are done.
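A minimal usage sketch (the batch size of 4 is arbitrary; note that the dataset must also implement __len__ for the default sampler to work):

from torch.utils.data import DataLoader

loader = DataLoader(MyTupleDataset(), batch_size=4, shuffle=True)

for x, y, y_cls in loader:  # the batch dimension B is added by the loader
    # x: Bx3xHxW float, y: BxHxW long, y_cls: BxN long
    pass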
https://stackoverflow.com/questions/51702669/
Pytorch tensor - How to get the indexes by a specific tensor
I have a tensor t = torch.tensor([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]]) and a query tensor q = torch.tensor([1, 0, 0, 0]) Is there a way to get the indexes of q like indexes = t.index(q) # get back [0, 3] in pytorch?
How about In [1]: torch.nonzero((t == q).sum(dim=1) == t.size(1)) Out[1]: tensor([[ 0], [ 3]]) Comparing t == q performs an element-wise comparison between t and q; since you are looking for an entire row match, you need to .sum(dim=1) along the rows and see which rows are a perfect match == t.size(1). As of v0.4.1, torch.all() supports the dim argument: torch.all(t==q, dim=1)
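Building on that last note, a sketch that yields a flat index tensor like [0, 3] (assuming v0.4.1 or later, where torch.all() accepts dim):

import torch

t = torch.tensor([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])
q = torch.tensor([1, 0, 0, 0])

# all() reduces each row to a single match flag; nonzero() returns an
# n x 1 tensor of row indices, squeezed here to 1D.
indexes = torch.nonzero(torch.all(t == q, dim=1)).squeeze(1)
print(indexes)  # tensor([0, 3])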
https://stackoverflow.com/questions/51703981/
Bidirectional RNN cells - shared or not?
Should I use the same weights to compute forward and backward passes in a bidirectional RNN, or should those weights be learned independently?
They should be learned independently as they learn different patterns, unless you have palindromes. In fact that is the default in the Bidirectional wrapper in Keras: self.forward_layer = copy.copy(layer) config = layer.get_config() config['go_backwards'] = not config['go_backwards'] self.backward_layer = layer.__class__.from_config(config) In the above source code the opposite direction is a copy with independent weights from the original direction.
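The same holds in PyTorch: a bidirectional RNN keeps an independent parameter set per direction, which a quick sketch can verify:

import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, bidirectional=True)

# The reverse direction gets its own weights, suffixed '_reverse':
# e.g. 'weight_ih_l0' vs. 'weight_ih_l0_reverse'.
print([name for name, _ in rnn.named_parameters()])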
https://stackoverflow.com/questions/51714462/
Why is it in Pytorch when I make a COPY of a network's weight it would be automatically updated after back-propagation?
I wrote the following code as a test because in my original network I use a ModuleDict, and depending on what index I feed it, it would slice and train only parts of that network. I wanted to make sure that only the sliced layers would update their weights, so I wrote some test code to double check. Well, I am getting some weird results. Say my model has 2 layers: layer1 is an FC and layer2 is a Conv2d. If I slice the network and ONLY use layer2, I would expect layer1's weights to be unchanged because they are unused, and layer2's weights will get updated after 1 epoch. So my plan was to use a for loop to grab all the weights from the network BEFORE training, then do it again AFTER 1 optimizer.step(). Both times I would store those weights completely separately in 2 Python lists so I can compare their results later. Well, for some reason the two lists are completely the same if I compare them with torch.equal(). I thought it's because maybe there is still some sort of hidden link in memory? So I tried to use .detach() on the weights when I grab them from the loop and the result is still the same. Layer2's weights should be different in this case because they should contain weights from the network before training. Note that in the code below I am actually using layer1 and ignoring layer2. Full code: class mymodel(nn.Module): def __init__(self): super().__init__() self.layer1 = nn.Linear(10, 5) self.layer2 = nn.Conv2d(1, 5, 4, 2, 1) self.act = nn.Sigmoid() def forward(self, x): x = self.layer1(x) #only layer1 and act are used, layer2 is ignored so only layer1 and act's weights should be updated x = self.act(x) return x model = mymodel() weights = [] for param in model.parameters(): # loop the weights in the model before updating and store them print(param.size()) weights.append(param) criterion = nn.BCELoss() #criterion and optimizer setup optimizer = optim.Adam(model.parameters(), lr = 0.001) foo = torch.randn(3, 10) #fake input target = torch.randn(3, 5) #fake target result = model(foo) #predictions and comparison and backprop loss = criterion(result, target) optimizer.zero_grad() loss.backward() optimizer.step() weights_after_backprop = [] # weights after backprop for param in model.parameters(): weights_after_backprop.append(param) # only layer1's weight should update, layer2 is not used for i in zip(weights, weights_after_backprop): print(torch.equal(i[0], i[1])) # **prints all Trues when "layer1" and "act" should be different; I have also tried to call param.detach in the loop but I got the same result.
You have to clone the parameters, otherwise you just copy the reference. weights = [] for param in model.parameters(): weights.append(param.clone()) criterion = nn.BCELoss() # criterion and optimizer setup optimizer = optim.Adam(model.parameters(), lr=0.001) foo = torch.randn(3, 10) # fake input target = torch.randn(3, 5) # fake target result = model(foo) # predictions and comparison and backprop loss = criterion(result, target) optimizer.zero_grad() loss.backward() optimizer.step() weights_after_backprop = [] # weights after backprop for param in model.parameters(): weights_after_backprop.append(param.clone()) # only layer1's weight should update, layer2 is not used for i in zip(weights, weights_after_backprop): print(torch.equal(i[0], i[1])) which gives False False True True
https://stackoverflow.com/questions/51717874/
Skorch training object from scratch
I'm trying to use the skorch class to execute GridSearch on a classifier. I tried running with the vanilla NeuralNetClassifier object, but I haven't found a way to pass the Adam optimizer only the trainable weights (I'm using pre-trained embeddings and I would like to keep them frozen). It's doable if a module is initialized and those weights are then passed with the optimizer__params option, but module needs an uninitialized model. Is there a way around this? net = NeuralNetClassifier(module=RNN, module__vocab_size=vocab_size, module__hidden_size=hidden_size, module__embedding_dim=embedding_dim, module__pad_id=pad_id, module__dataset=ClaimsDataset, lr=lr, criterion=nn.CrossEntropyLoss, optimizer=torch.optim.Adam, optimizer__weight_decay=35e-3, device='cuda', max_epochs=nb_epochs, warm_start=True) The code above works. However, with the batch_size set at 64, I've got to run the model for the specified number of epochs on every batch! That is not the behavior I'm seeking. I'd be grateful if someone could suggest a nicer way to do this. My other issue is with subclassing skorch.NeuralNet. I run into a similar issue: figuring out a way to pass only the trainable weights to the Adam optimizer. The code below is what I've got so far. class Train(skorch.NeuralNet): def __init__(self, module, lr, norm, *args, **kwargs): self.module = module self.lr = lr self.norm = norm self.params = [p for p in self.module.parameters(self) if p.requires_grad] super(Train, self).__init__(*args, **kwargs) def initialize_optimizer(self): self.optimizer = torch.optim.Adam(params=self.params, lr=self.lr, weight_decay=35e-3, amsgrad=True) def train_step(self, Xi, yi, **fit_params): self.module.train() self.optimizer.zero_grad() yi = variable(yi) output = self.module(Xi) loss = self.criterion(output, yi) loss.backward() nn.utils.clip_grad_norm_(self.params, max_norm=self.norm) self.optimizer.step() def score(self, y_t, y_p): return accuracy_score(y_t, y_p) Initializing the class gives the error: Traceback (most recent call last): File "/snap/pycharm-community/74/helpers/pydev/pydevd.py", line 1664, in <module> main() File "/snap/pycharm-community/74/helpers/pydev/pydevd.py", line 1658, in main globals = debugger.run(setup['file'], None, None, is_module) File "/snap/pycharm-community/74/helpers/pydev/pydevd.py", line 1068, in run pydev_imports.execfile(file, globals, locals) # execute the script File "/snap/pycharm-community/74/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/home/l/Documents/Bsrc/cv.py", line 115, in <module> main() File "/home/l/B/src/cv.py", line 86, in main trainer = Train(module=RNN, criterion=nn.CrossEntropyLoss, lr=lr, norm=max_norm) File "/home/l/B/src/cv.py", line 22, in __init__ self.params = [p for p in self.module.parameters(self) if p.requires_grad] File "/home/l/B/src/cv.py", line 22, in <listcomp> self.params = [p for p in self.module.parameters(self) if p.requires_grad] File "/home/l/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 739, in parameters for name, param in self.named_parameters(): AttributeError: 'Train' object has no attribute 'named_parameters'
but module needs an uninitialized model That is not correct, you can pass an initialized model as well. The documentation of the model parameter states: It is, however, also possible to pass an instantiated module, e.g. a PyTorch Sequential instance. The problem is that when passing an initialized model you cannot pass any module__ parameters to the NeuralNet, as this would require the module to be re-initialized. But of course that's problematic if you want to do a grid search over module parameters. A solution for this would be to overwrite initialize_module and, after creating a new instance, load and freeze the parameters (by setting the parameter's requires_grad attribute to False): def _load_embedding_weights(self): return torch.randn(1, 100) def initialize_module(self): kwargs = self._get_params_for('module') self.module_ = self.module(**kwargs) # load weights self.module_.embedding0.weight.data = self._load_embedding_weights() # freeze layer self.module_.embedding0.weight.requires_grad = False return self
https://stackoverflow.com/questions/51730294/
Where do I get a CPU-only version of PyTorch?
I'm trying to get a basic app running with Flask + PyTorch, and host it on Heroku. However, I run into the issue that the maximum slug size is 500mb on the free version, and PyTorch itself is ~500mb. After some google searching, someone wrote about finding a cpu-only version of PyTorch, and using that, which is much smaller here. However, I'm pretty lost as to how this is done, and the person didn't document this at all. Any advice is appreciated, thanks. EDIT: To be more specific about my problem, I tried installing torch by (as far as I understand) including a requirements.txt which listed torch as a dependency. Currently I have: torch==0.4.1. However this doesn't work because of size. My question is, do you know what I could write in the requirements file to get the cpu-only version of torch that is smaller, or alternatively, if the requirements.txt doesn't work for this, what I would do instead to get the cpu version.
Per the Pytorch website, you can install pytorch-cpu with conda install pytorch-cpu torchvision-cpu -c pytorch You can see from the files on Anaconda cloud that the size varies between 26 and 56MB depending on the OS where you want to install it. You can get the wheel from http://download.pytorch.org/whl/cpu/. The wheel is 87MB. You can set up the installation by putting the link to the wheel in the requirements.txt file. If you use Python 3.6 on Heroku: http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl otherwise, for Python 2.7: http://download.pytorch.org/whl/cpu/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl For example, if your requirements are pytorch-cpu, numpy and scipy and you're using Python 3.6, the requirements.txt would look like: http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl numpy scipy
https://stackoverflow.com/questions/51730880/
How to include batch size in pytorch basic example?
I am new to pytorch. The following is the basic example of using nn module to train a simple one-layer model with some random data (from here) import torch N, D_in, H, D_out = 64, 1000, 100, 10 x = torch.randn(N, D_in) y = torch.randn(N, D_out) model = torch.nn.Sequential( torch.nn.Linear(D_in, H), torch.nn.ReLU(), torch.nn.Linear(H, D_out), ) loss_fn = torch.nn.MSELoss(reduction='sum') optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) for t in range(500): y_pred = model(x) loss = loss_fn(y_pred, y) print(t, loss.item()) optimizer.zero_grad() loss.backward() optimizer.step() As far as I understand, the batch size is equal to 1 in the example, in other words, a single point (out of 64) is used to calculate gradients and update parameters. My question is: how to modify this example to train the model with the batch size greater than one?
In fact, N is the batch size, so you just need to modify N; currently it is set to 64. So in every training batch you have 64 vectors of size/dim D_in. I checked the link you posted; you can also take a look at the comments there - there is some explanation too :) # -*- coding: utf-8 -*- import numpy as np # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. N, D_in, H, D_out = 64, 1000, 100, 10 # Create random input and output data x = np.random.randn(N, D_in) y = np.random.randn(N, D_out) # Randomly initialize weights w1 = np.random.randn(D_in, H) w2 = np.random.randn(H, D_out) learning_rate = 1e-6 for t in range(500): # Forward pass: compute predicted y h = x.dot(w1) h_relu = np.maximum(h, 0) y_pred = h_relu.dot(w2) # Compute and print loss loss = np.square(y_pred - y).sum() print(t, loss) # Backprop to compute gradients of w1 and w2 with respect to loss grad_y_pred = 2.0 * (y_pred - y) grad_w2 = h_relu.T.dot(grad_y_pred) grad_h_relu = grad_y_pred.dot(w2.T) grad_h = grad_h_relu.copy() grad_h[h < 0] = 0 grad_w1 = x.T.dot(grad_h) # Update weights w1 -= learning_rate * grad_w1 w2 -= learning_rate * grad_w2
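If you instead want to iterate over smaller mini-batches of the N samples in the PyTorch version, a sketch using TensorDataset and DataLoader (the batch size of 16 is arbitrary; model, loss_fn and optimizer are the objects defined in the question):

import torch
from torch.utils.data import TensorDataset, DataLoader

N, D_in, D_out = 64, 1000, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

loader = DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)

for xb, yb in loader:           # xb: 16 x D_in, yb: 16 x D_out
    y_pred = model(xb)
    loss = loss_fn(y_pred, yb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()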
https://stackoverflow.com/questions/51735001/
Is .data still useful in pytorch?
I'm new to PyTorch. I have read a lot of PyTorch code that heavily uses a tensor's .data member, but when I search for .data in the official documentation and on Google I find little. I guess .data contains the data in the tensor, but I don't know when we need it and when we don't.
.data was an attribute of Variable (object representing Tensor with history tracking e.g. for automatic update), not Tensor. Actually, .data was giving access to the Variable's underlying Tensor. However, since PyTorch version 0.4.0, Variable and Tensor have been merged (into an updated Tensor structure), so .data disappeared along the previous Variable object (well Variable is still there for backward-compatibility, but is deprecated). Paragraph from Release Notes for version 0.4.0 (I recommend reading the whole section about Variable/Tensor updates): What about .data? .data was the primary way to get the underlying Tensor from a Variable. After this merge, calling y = x.data still has similar semantics. So y will be a Tensor that shares the same data with x, is unrelated with the computation history of x, and has requires_grad=False. However, .data can be unsafe in some cases. Any changes on x.data wouldn't be tracked by autograd, and the computed gradients would be incorrect if x is needed in a backward pass. A safer alternative is to use x.detach(), which also returns a Tensor that shares data with requires_grad=False, but will have its in-place changes reported by autograd if x is needed in backward.
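A short sketch contrasting the two (illustrating the unsafe behaviour described above):

import torch

x = torch.tensor([2.0], requires_grad=True)
y = x.data       # shares storage, requires_grad=False, NOT tracked by autograd
z = x.detach()   # shares storage, requires_grad=False, in-place changes tracked

y.zero_()        # silently modifies x; a backward pass needing x would be wrong
print(x)         # tensor([0.], requires_grad=True)
# z.zero_() would also modify x, but autograd would raise an error
# if x were needed in a backward pass.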
https://stackoverflow.com/questions/51743214/
pytorch how to set .requires_grad False
I want to set some of my model frozen. Following the official docs: with torch.no_grad(): linear = nn.Linear(1, 1) linear.eval() print(linear.weight.requires_grad) But it prints True instead of False. If I want to set the model in eval mode, what should I do?
requires_grad=False If you want to freeze part of your model and train the rest, you can set requires_grad of the parameters you want to freeze to False. For example, if you only want to keep the convolutional part of VGG16 fixed: model = torchvision.models.vgg16(pretrained=True) for param in model.features.parameters(): param.requires_grad = False By switching the requires_grad flags to False, no intermediate buffers will be saved, until the computation gets to some point where one of the inputs of the operation requires the gradient. torch.no_grad() Using the context manager torch.no_grad is a different way to achieve that goal: in the no_grad context, all the results of the computations will have requires_grad=False, even if the inputs have requires_grad=True. Notice that you won't be able to backpropagate the gradient to layers before the no_grad. For example: x = torch.randn(2, 2) x.requires_grad = True lin0 = nn.Linear(2, 2) lin1 = nn.Linear(2, 2) lin2 = nn.Linear(2, 2) x1 = lin0(x) with torch.no_grad(): x2 = lin1(x1) x3 = lin2(x2) x3.sum().backward() print(lin0.weight.grad, lin1.weight.grad, lin2.weight.grad) outputs: (None, None, tensor([[-1.4481, -1.1789], [-1.4481, -1.1789]])) Here lin1.weight.requires_grad was True, but the gradient wasn't computed because the operation was done in the no_grad context. model.eval() If your goal is not to finetune, but to set your model in inference mode, the most convenient way is to use the torch.no_grad context manager. In this case you also have to set your model to evaluation mode; this is achieved by calling eval() on the nn.Module, for example: model = torchvision.models.vgg16(pretrained=True) model.eval() This operation sets the attribute self.training of the layers to False; in practice this will change the behavior of operations like Dropout or BatchNorm that must behave differently at training and test time.
https://stackoverflow.com/questions/51748138/
PyTorch - Effect of normal() initialization on gradients
Suppose I have a neural network where I use a normal distribution for initialization and I want to use the mean value which is used for initialization as a parameter of the network. I have a small example: import torch parameter_vector = torch.tensor(range(10), dtype=torch.float, requires_grad=True) sigma = torch.ones(parameter_vector.size(0), dtype=torch.float)*0.1 init_result = torch.normal(parameter_vector, sigma) print('requires_grad:', init_result.requires_grad) print('result: ', init_result) This results in: requires_grad: True result: tensor([ 0.1026, 0.9183, 1.9586, 3.1778, 4.0538, 4.8056, 5.9561, 6.9501, 7.7653, 8.9583]) So the requires_grad flag was obviously taken over from the mean value tensor resp. parameter_vector. But does this automatically mean that the parameter_vector will be updated through backward() in a larger network where init_result does affect the end result? Especially as normal() does not really seem like a normal operation, because it involves randomness.
Thanks to @iacolippo (see comments below the question) the problem is solved now. I just wanted to supplement this by posting what code I am using now, so this may help anyone else. As presumed in the question and also stated by @iacolippo, the code posted in the question cannot be backpropagated through: import torch parameter_vector = torch.tensor(range(5), dtype=torch.float, requires_grad=True) print('- initial parameter weights:', parameter_vector) sigma = torch.ones(parameter_vector.size(0), dtype=torch.float)*0.1 init_result = torch.normal(parameter_vector, sigma) print('- normal init result requires_grad:', init_result.requires_grad) print('- normal init vector', init_result) #print('result: ', init_result) sum_result = init_result.sum() sum_result.backward() print('- summed dummy-loss:', sum_result) optimizer = torch.optim.SGD([parameter_vector], lr = 0.01, momentum=0.9) optimizer.step() print() print('- parameter weights after update:', parameter_vector) Out: - initial parameter weights: tensor([0., 1., 2., 3., 4.], requires_grad=True) - normal init result requires_grad: True - normal init vector tensor([-0.0909, 1.1136, 2.1143, 2.8838, 3.9340], grad_fn=<NormalBackward3>) - summed dummy-loss: tensor(9.9548, grad_fn=<SumBackward0>) - parameter weights after update: tensor([0., 1., 2., 3., 4.], requires_grad=True) As you can see, calling backward() does not raise an error (see linked issue in comments above), but the parameters won't get updated by the SGD step either. Working Example 1 One solution is to use the formula/trick given here: https://stats.stackexchange.com/a/342815/133099 x = μ + σ · sample(N(0,1)) To achieve this: sigma = torch.ones(parameter_vector.size(0), dtype=torch.float)*0.1 init_result = torch.normal(parameter_vector, sigma) Changes to: dim = parameter_vector.size(0) sigma = 0.1 init_result = parameter_vector + sigma*torch.normal(torch.zeros(dim), torch.ones(dim)) After changing these lines the code becomes backpropagable and the parameter vector gets updated after calling backward() and the SGD step. Output with changed lines: - initial parameter weights: tensor([0., 1., 2., 3., 4.], requires_grad=True) - normal init result requires_grad: True - normal init vector tensor([-0.1802, 0.9261, 1.9482, 3.0817, 3.9773], grad_fn=<ThAddBackward>) - summed dummy-loss: tensor(9.7532, grad_fn=<SumBackward0>) - parameter weights after update: tensor([-0.0100, 0.9900, 1.9900, 2.9900, 3.9900], requires_grad=True) Working Example 2 Another way would be using torch.distributions (Documentation Link). To do so, the respective lines in the code above have to be replaced by: i = torch.ones(parameter_vector.size(0)) sigma = 0.1 m = torch.distributions.Normal(parameter_vector, sigma*i) init_result = m.rsample() Output with changed lines: - initial parameter weights: tensor([0., 1., 2., 3., 4.], requires_grad=True) - normal init result requires_grad: True - normal init vector tensor([-0.0767, 0.9971, 2.0448, 2.9408, 4.1321], grad_fn=<ThAddBackward>) - summed dummy-loss: tensor(10.0381, grad_fn=<SumBackward0>) - parameter weights after update: tensor([-0.0100, 0.9900, 1.9900, 2.9900, 3.9900], requires_grad=True) As can be seen in the output above, using torch.distributions also yields backpropagable code where the parameter vector gets updated after calling backward() and the SGD step. I hope this is helpful for someone.
https://stackoverflow.com/questions/51751231/
How to re-use old weights in a slightly modified model?
I have a CNN network built like this for a particular task. class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv11 = nn.Conv2d(1, 128, kernel_size=3, padding=1) self.conv12 = nn.Conv2d(128, 256, kernel_size=3, padding=1) self.conv13 = nn.Conv2d(256, 2, kernel_size=3, padding=1) def forward(self, x): in_size = x.size(0) x = F.relu(self.conv11(x)) x = F.relu(self.conv12(x)) x = F.relu(self.conv13(x)) x = F.softmax(x, 2) return x The model is stored using the torch built-in method like this. net = Net() optimizer = optim.SGD(net.parameters(), lr=1e-3) state = { 'state_dict': net.state_dict(), 'opt': optimizer.state_dict() } torch.save(state, 'model.pt') I have added a single layer to the network while the rest of the model was kept the same. class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv11 = nn.Conv2d(1, 128, kernel_size=3, padding=1) self.conv12 = nn.Conv2d(128, 256, kernel_size=3, padding=1) self.conv13 = nn.Conv2d(256, 256, kernel_size=3, padding=1) # (new added) self.conv14 = nn.Conv2d(256, 2, kernel_size=3, padding=1) def forward(self, x): in_size = x.size(0) x = F.relu(self.conv11(x)) x = F.relu(self.conv12(x)) x = F.relu(self.conv13(x)) # (new added) x = F.relu(self.conv14(x)) x = F.softmax(x, 2) return x Since the other conv layers are kept the same, is there any way I can re-use the saved model to load the weights into conv11, conv12 and conv14? Instead of starting to train from the beginning?
Assume you trained the following model and now you make a minor modification to it (like adding a layer) and want to use your trained weights import torch import torch.nn as nn import torch.optim as optim class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv11 = nn.Conv2d(1, 128, kernel_size=3, padding=1) self.conv12 = nn.Conv2d(128, 256, kernel_size=3, padding=1) self.conv13 = nn.Conv2d(256, 2, kernel_size=3, padding=1) def forward(self, x): in_size = x.size(0) x = F.relu(self.conv11(x)) x = F.relu(self.conv12(x)) x = F.relu(self.conv13(x)) x = F.softmax(x, 2) return x net = Net() optimizer = optim.SGD(net.parameters(), lr=1e-3) you save the model (and the optimizer state) with: state = {'state_dict': net.state_dict(), 'opt': optimizer.state_dict() } torch.save(state, 'state.pt') Your new model is (note that corresponding layers keep the same name, so you don't make conv13 -> conv14): class NewNet(nn.Module): def __init__(self): super(NewNet, self).__init__() self.conv11 = nn.Conv2d(1, 128, kernel_size=3, padding=1) self.conv12 = nn.Conv2d(128, 256, kernel_size=3, padding=1) self.convnew = nn.Conv2d(256, 256, kernel_size=3, padding=1) # (new added) self.conv13 = nn.Conv2d(256, 2, kernel_size=3, padding=1) def forward(self, x): in_size = x.size(0) x = F.relu(self.conv11(x)) x = F.relu(self.conv12(x)) x = F.relu(self.convnew(x)) # (new added) x = F.relu(self.conv13(x)) x = F.softmax(x, 2) return x Now you can load your model.pt file: state = torch.load('state.pt') state is a dict, state['opt'] contains all the parameters that you had for your optimizer, for example state['opt']['param_groups'][0]['lr'] gives 0.001 Assuming corresponding layers kept the same name, you can recover your parameters and initialize the appropriate layers by: net = NewNet() for name, param in net.named_parameters(): if name in state['state_dict'].keys(): param = param.data param.copy_(state['state_dict'][name])
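Alternatively, assuming your PyTorch version supports the strict argument of load_state_dict (available since 0.4.0), matching layers can be loaded in one call while the new convnew layer is simply skipped:

net = NewNet()
state = torch.load('state.pt')

# strict=False ignores keys that exist in the model but not in the
# checkpoint (here: convnew.*), and vice versa.
net.load_state_dict(state['state_dict'], strict=False)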
https://stackoverflow.com/questions/51754789/
How do I turn a Pytorch Dataloader into a numpy array to display image data with matplotlib?
I am new to Pytorch. I have been trying to learn how to view my input images before I begin training on my CNN. I am having a very hard time changing the images into a form that can be used with matplotlib. So far I have tried this: from multiprocessing import freeze_support import torch from torch import nn import torchvision from torch.autograd import Variable from torch.utils.data import DataLoader, Sampler from torchvision import datasets from torchvision.transforms import transforms from torch.optim import Adam import matplotlib.pyplot as plt import numpy as np import PIL num_classes = 5 batch_size = 100 num_of_workers = 5 DATA_PATH_TRAIN = 'C:\\Users\Aeryes\PycharmProjects\simplecnn\images\\train' DATA_PATH_TEST = 'C:\\Users\Aeryes\PycharmProjects\simplecnn\images\\test' trans = transforms.Compose([ transforms.RandomHorizontalFlip(), transforms.Resize(32), transforms.CenterCrop(32), transforms.ToPImage(), transforms.Normalize((0.5, 0.5, 0.5),(0.5, 0.5, 0.5)) ]) train_dataset = datasets.ImageFolder(root=DATA_PATH_TRAIN, transform=trans) train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_of_workers) def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() print(npimg) plt.imshow(np.transpose(npimg, (1, 2, 0, 1))) def main(): # get some random training images dataiter = iter(train_loader) images, labels = dataiter.next() # show images imshow(images) # print labels print(' '.join('%5s' % classes[labels[j]] for j in range(4))) if __name__ == "__main__": main() However, this throws an error; the print(npimg) call first dumps the full pixel array (a very long array printout, omitted here), followed by: Traceback (most recent call last): File "image_loader.py", line 51, in <module> main() File "image_loader.py", line 46, in main imshow(images) File "image_loader.py", line 38, in imshow plt.imshow(np.transpose(npimg, (1, 2, 0, 1))) File "C:\Users\Aeryes\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\core\fromnumeric.py", line 598, in transpose return _wrapfunc(a, 'transpose', axes) File "C:\Users\Aeryes\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\core\fromnumeric.py", line 51, in _wrapfunc return getattr(obj, method)(*args, **kwds) ValueError: repeated axis in transpose I tried to print out the arrays to get the dimensions but I do not know what to make of this. It is very confusing.
Here is my direct question: How do I view the input images before training using the tensors in my DataLoader object?
First of all, the DataLoader outputs a 4-dimensional tensor - [batch, channel, height, width]. Matplotlib and other image processing libraries often require [height, width, channel]. You are right about using the transpose, just not in the right way. There will be a lot of images in images, so first you need to pick one (or write a for loop to save all of them). This will be simply images[i]; typically I use i=0. Then, your transpose should convert a now [channel, height, width] tensor to a [height, width, channel] one. To do this, use np.transpose(image.numpy(), (1, 2, 0)), very much like yours. Putting them together, you should have plt.imshow(np.transpose(images[0].numpy(), (1, 2, 0))) Sometimes you need to call .detach() (detach this part from the computational graph) and .cpu() (transfer data from GPU to CPU) depending on the use case; that will be plt.imshow(np.transpose(images[0].cpu().detach().numpy(), (1, 2, 0)))
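If you would rather view the whole batch at once, one option is torchvision's make_grid utility (a sketch, reusing the unnormalization from the question):

import numpy as np
import matplotlib.pyplot as plt
from torchvision.utils import make_grid

grid = make_grid(images / 2 + 0.5)  # unnormalize, then tile the batch into one image
plt.imshow(np.transpose(grid.numpy(), (1, 2, 0)))
plt.show()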
https://stackoverflow.com/questions/51756581/
In pytorch how do you use add_param_group() with an optimizer?
The documentation is pretty vague and there aren't example codes to show you how to use it. The documentation for it is Add a param group to the Optimizer's param_groups. This can be useful when fine tuning a pre-trained network as frozen layers can be made trainable and added to the Optimizer as training progresses. Parameters: param_group (dict) - Specifies what Tensors should be optimized along with group optimization options. (specific) - I am assuming I can get a param_group parameter by feeding the values I get from a model's state_dict()? E.g. all the actual weight values? I am asking this because I want to make a progressive network, which means I need to constantly feed Adam parameters from newly created convolutions and activations modules.
Per the docs, the add_param_group method accepts a param_group parameter that is a dict. Example of use: import torch import torch.optim as optim w1 = torch.randn(3, 3) w1.requires_grad = True w2 = torch.randn(3, 3) w2.requires_grad = True o = optim.Adam([w1]) print(o.param_groups) gives [{'amsgrad': False, 'betas': (0.9, 0.999), 'eps': 1e-08, 'lr': 0.001, 'params': [tensor([[ 2.9064, -0.2141, -0.4037], [-0.5718, 1.0375, -0.6862], [-0.8372, 0.4380, -0.1572]])], 'weight_decay': 0}] now o.add_param_group({'params': w2}) print(o.param_groups) gives: [{'amsgrad': False, 'betas': (0.9, 0.999), 'eps': 1e-08, 'lr': 0.001, 'params': [tensor([[ 2.9064, -0.2141, -0.4037], [-0.5718, 1.0375, -0.6862], [-0.8372, 0.4380, -0.1572]])], 'weight_decay': 0}, {'amsgrad': False, 'betas': (0.9, 0.999), 'eps': 1e-08, 'lr': 0.001, 'params': [tensor([[-0.0560, 0.4585, -0.7589], [-0.1994, 0.4557, 0.5648], [-0.1280, -0.0333, -1.1886]])], 'weight_decay': 0}]
https://stackoverflow.com/questions/51756913/
Why does pytorch F.mse_loss behave differently w.r.t. Tensor and Parameter?
Below is my code: import torch as pt from torch.nn import functional as F a = pt.Tensor([[0, 1], [2, 3]]) b = pt.Tensor([[1, 0], [5, 4]]) print(F.mse_loss(a, b), F.mse_loss(a, b, reduction='elementwise_mean')) a = pt.nn.Parameter(a) b = pt.nn.Parameter(b) print(F.mse_loss(a, b), F.mse_loss(a, b, reduction='elementwise_mean')) The output was: tensor(3.) tensor(3.) tensor(12., grad_fn=<SumBackward0>) tensor(12., grad_fn=<SumBackward0>) I wonder why they gave two different results? Environment setting: python 3.6 pytorch 0.4.1
It is a bug according to the PyTorch forum: in version 0.4.1, when the inputs require gradients (as nn.Parameters do), F.mse_loss returned the sum of the squared errors (1 + 1 + 9 + 1 = 12) instead of their mean (3).
https://stackoverflow.com/questions/51759566/
Is it possible to create a FIFO queue with pyTorch?
I need to create a fixed length Tensor in PyTorch that acts like a FIFO queue. I have this function to do it: def push_to_tensor(tensor, x): tensor[:-1] = tensor[1:] tensor[-1] = x return tensor For example, I have: tensor = Tensor([1,2,3,4]) >> tensor([ 1., 2., 3., 4.]) then using the function will give: push_to_tensor(tensor, 5) >> tensor([ 2., 3., 4., 5.]) However, I was wondering: Does PyTorch have a native method for doing this? If not, is there a more clever way of doing it?
I implemented another FIFO queue: def push_to_tensor_alternative(tensor, x): return torch.cat((tensor[1:], Tensor([x]))) The functionality is the same, but then I checked their performance in speed: # Small Tensor tensor = Tensor([1,2,3,4]) %timeit push_to_tensor(tensor, 5) >> 30.9 µs ± 1.26 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %timeit push_to_tensor_alternative(tensor, 5) >> 22.1 µs ± 2.25 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) # Larger Tensor tensor = torch.arange(10000) %timeit push_to_tensor(tensor, 5) >> 57.7 µs ± 4.88 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) %timeit push_to_tensor_alternative(tensor, 5) >> 28.9 µs ± 570 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) Seems like this push_to_tensor_alternative which uses torch.cat (instead of shifting all items to the left) is faster.
https://stackoverflow.com/questions/51761806/
pytorch skip connection in a sequential model
I am trying to wrap my head around skip connections in a sequential model. With the functional API I would be doing something as easy as (quick example, may not be 100% syntactically correct but should get the idea): x1 = self.conv1(inp) x = self.conv2(x1) x = self.conv3(x) x = self.conv4(x) x = self.deconv4(x) x = self.deconv3(x) x = self.deconv2(x) x = torch.cat((x, x1), 1) x = self.deconv1(x) I am now using a sequential model and trying to do something similar: create a skip connection that brings the activations of the first conv layer all the way to the last convTranspose. I have taken a look at the U-net architecture implemented here and it's a bit confusing; it does something like this: upconv = nn.ConvTranspose2d(inner_nc * 2, outer_nc, kernel_size=4, stride=2, padding=1, bias=use_bias) down = [downrelu, downconv, downnorm] up = [uprelu, upconv, upnorm] if use_dropout: model = down + [submodule] + up + [nn.Dropout(0.5)] else: model = down + [submodule] + up Isn't this just adding layers to the sequential model, well, sequentially? There is the down conv which is followed by submodule (which recursively adds inner layers) and then concatenated to up which is the upconv layer. I am probably missing something important about how the Sequential API works, but how does the code snippet from U-NET actually implement the skip?
Your observations are correct, but you may have missed the definition of UnetSkipConnectionBlock.forward() (UnetSkipConnectionBlock being the Module defining the U-Net block you shared), which may clarify this implementation: (from pytorch-CycleGAN-and-pix2pix/models/networks.py#L259) # Defines the submodule with skip connection. # X -------------------identity---------------------- X # |-- downsampling -- |submodule| -- upsampling --| class UnetSkipConnectionBlock(nn.Module): # ... def forward(self, x): if self.outermost: return self.model(x) else: return torch.cat([x, self.model(x)], 1) The last line is the key (applied for all inner blocks). The skip layer is simply done by concatenating the input x and the (recursive) block output self.model(x), with self.model the list of operations you mentioned -- so not so differently from the Functional code you wrote.
https://stackoverflow.com/questions/51773208/
Pytorch trying to make a NN received an invalid combination of arguments
I am trying to build my first NN with PyTorch and got an issue. TypeError: new() received an invalid combination of arguments - got (float, int, int, int), but expected one of: * (torch.device device) * (torch.Storage storage) * (Tensor other) * (tuple of ints size, torch.device device) * (object data, torch.device device) Now I know what this is saying, in that I am not passing the right type to the method or init. But I don't know what I should pass, as it looks right to me. def main(): #Get the time and data now = datetime.datetime.now() hourGlassToStack = 2 #Hourglasses to stack numModules= 2 #Residual Modules for each hourglass numFeats = 256 #Number of features in each hourglass numRegModules = 2 #Depth regression modules print("Creating Model") model = HourglassNet3D(hourGlassToStack, numModules, numFeats,numRegModules).cuda() print("Model Created") This is the main method that creates the model. It then calls this method. class HourglassNet3D(nn.Module): def __init__(self, nStack, nModules, nFeats, nRegModules): super(HourglassNet3D, self).__init__() self.nStack = nStack self.nModules = nModules self.nFeats = nFeats self.nRegModules = nRegModules self.conv1_ = nn.Conv2d(3, 64, bias = True, kernel_size = 7, stride = 2, padding = 3) self.bn1 = nn.BatchNorm2d(64) self.relu = nn.ReLU(inplace = True) self.r1 = Residual(64, 128) self.maxpool = nn.MaxPool2d(kernel_size = 2, stride = 2) self.r4 = Residual(128, 128) self.r5 = Residual(128, self.nFeats) _hourglass, _Residual, _lin_, _tmpOut, _ll_, _tmpOut_, _reg_ = [], [], [], [], [], [], [] for i in range(self.nStack): _hourglass.append(Hourglass(4, self.nModules, self.nFeats)) for j in range(self.nModules): _Residual.append(Residual(self.nFeats, self.nFeats)) lin = nn.Sequential(nn.Conv2d(self.nFeats, self.nFeats, bias = True, kernel_size = 1, stride = 1), nn.BatchNorm2d(self.nFeats), self.relu) _lin_.append(lin) _tmpOut.append(nn.Conv2d(self.nFeats, 16, bias = True, kernel_size = 1, stride = 1)) _ll_.append(nn.Conv2d(self.nFeats, self.nFeats, bias = True, kernel_size = 1, stride = 1)) _tmpOut_.append(nn.Conv2d(16, self.nFeats, bias = True, kernel_size = 1, stride = 1)) for i in range(4): for j in range(self.nRegModules): _reg_.append(Residual(self.nFeats, self.nFeats)) self.hourglass = nn.ModuleList(_hourglass) self.Residual = nn.ModuleList(_Residual) self.lin_ = nn.ModuleList(_lin_) self.tmpOut = nn.ModuleList(_tmpOut) self.ll_ = nn.ModuleList(_ll_) self.tmpOut_ = nn.ModuleList(_tmpOut_) self.reg_ = nn.ModuleList(_reg_) self.reg = nn.Linear(4 * 4 * self.nFeats,16 ) And this then calls this class Residual(nn.Module): #set the number ofinput and output for each layer def __init__(self, numIn, numOut): super(Residual, self).__init__() self.numIn = numIn self.numOut = numOut self.bn = nn.BatchNorm2d(self.numIn) self.relu = nn.ReLU(inplace = True) self.conv1 = nn.Conv2d(self.numIn, self.numOut / 2, bias = True, kernel_size = 1) self.bn1 = nn.BatchNorm2d(self.numOut / 2) self.conv2 = nn.Conv2d(self.numOut / 2, self.numOut / 2, bias = True, kernel_size = 3, stride = 1, padding = 1) self.bn2 = nn.BatchNorm2d(self.numOut / 2) self.conv3 = nn.Conv2d(self.numOut / 2, self.numOut, bias = True, kernel_size = 1) if self.numIn != self.numOut: self.conv4 = nn.Conv2d(self.numIn, self.numOut, bias = True, kernel_size = 1) All of this looks fine to me, but I don't know how I am supposed to pass this if I am doing it wrong. Thank you for any help
You might have to look out as to what you are passing to your convolutional layers in the Residual class. Per default, Python 3 will convert any division operation into a float variable. Try casting your variables back to an integer, and see if that helps. Fixed code for Residual: class Residual(nn.Module): #set the number ofinput and output for each layer def __init__(self, numIn, numOut): super(Residual, self).__init__() self.numIn = numIn self.numOut = numOut self.bn = nn.BatchNorm2d(self.numIn) self.relu = nn.ReLU(inplace = True) self.conv1 = nn.Conv2d(self.numIn, int(self.numOut / 2), bias = True, kernel_size = 1) self.bn1 = nn.BatchNorm2d(int(self.numOut / 2)) self.conv2 = nn.Conv2d(int(self.numOut / 2), int(self.numOut / 2), bias = True, kernel_size = 3, stride = 1, padding = 1) self.bn2 = nn.BatchNorm2d(int(self.numOut / 2)) self.conv3 = nn.Conv2d(int(self.numOut / 2), self.numOut, bias = True, kernel_size = 1) if self.numIn != self.numOut: self.conv4 = nn.Conv2d(self.numIn, self.numOut, bias = True, kernel_size = 1)
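A slightly tidier alternative to the explicit int(...) casts, assuming Python 3, is floor division, which keeps the channel counts as integers:

# Inside Residual.__init__:
half = self.numOut // 2  # floor division returns an int in Python 3
self.conv1 = nn.Conv2d(self.numIn, half, bias=True, kernel_size=1)
self.bn1 = nn.BatchNorm2d(half)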
https://stackoverflow.com/questions/51780251/
Understanding Bilinear Layers
When having a bilinear layer in PyTorch I can't wrap my head around how the calculation is done. Here is a small example where I tried to figure out how it works: In: import torch.nn as nn B = nn.Bilinear(2, 2, 1) print(B.weight) Out: Parameter containing: tensor([[[-0.4394, -0.4920], [ 0.6137, 0.4174]]], requires_grad=True) I am putting through a zero-vector and a one-vector. In: print(B(torch.ones(2), torch.zeros(2))) print(B(torch.zeros(2), torch.ones(2))) Out: tensor([0.2175], grad_fn=<ThAddBackward>) tensor([0.2175], grad_fn=<ThAddBackward>) I tried adding up the weights in various ways but I'm not getting the same result. Thanks in advance!
The operation done by nn.Bilinear is B(x1, x2) = x1*A*x2 + b (c.f. doc) with: A stored in nn.Bilinear.weight b stored in nn.Bilinear.bias If you take into account the (optional) bias, you should obtain the expected results. import torch import torch.nn as nn def manual_bilinear(x1, x2, A, b): return torch.mm(x1, torch.mm(A, x2)) + b x_ones = torch.ones(2) x_zeros = torch.zeros(2) # --------------------------- # With Bias: B = nn.Bilinear(2, 2, 1) A = B.weight print(B.bias) # > tensor([-0.6748], requires_grad=True) b = B.bias print(B(x_ones, x_zeros)) # > tensor([-0.6748], grad_fn=<ThAddBackward>) print(manual_bilinear(x_ones.view(1, 2), x_zeros.view(2, 1), A.squeeze(), b)) # > tensor([[-0.6748]], grad_fn=<ThAddBackward>) print(B(x_ones, x_ones)) # > tensor([-1.7684], grad_fn=<ThAddBackward>) print(manual_bilinear(x_ones.view(1, 2), x_ones.view(2, 1), A.squeeze(), b)) # > tensor([[-1.7684]], grad_fn=<ThAddBackward>) # --------------------------- # Without Bias: B = nn.Bilinear(2, 2, 1, bias=False) A = B.weight print(B.bias) # None b = torch.zeros(1) print(B(x_ones, x_zeros)) # > tensor([0.], grad_fn=<ThAddBackward>) print(manual_bilinear(x_ones.view(1, 2), x_zeros.view(2, 1), A.squeeze(), b)) # > tensor([0.], grad_fn=<ThAddBackward>) print(B(x_ones, x_ones)) # > tensor([-0.7897], grad_fn=<ThAddBackward>) print(manual_bilinear(x_ones.view(1, 2), x_ones.view(2, 1), A.squeeze(), b)) # > tensor([[-0.7897]], grad_fn=<ThAddBackward>)
https://stackoverflow.com/questions/51782321/
Torchtext TabularDataset: data.Field doesn't contain actual imported data?
I learned from the Torchtext documentation that the way to import csv files is through TabularDataset. I did it like this: train = data.TabularDataset(path='./data.csv', format='csv', fields=[("label",data.Field(use_vocab=True,include_lengths=False)), ("statement",data.Field(use_vocab=True,include_lengths=True))], skip_header=True) "label" and "statement" are the header names of the 2 columns in my csv file. I defined them as data.Field, but "label" and "statement" don't seem to actually contain the data from my csv file, despite being recognized as data field objects by the console with no problem. I found out this issue when I tried to build a vocab list with statement.build_vocab(train, max_size=25000). I printed len(statement.vocab), the return is "2", which obviously doesn't reflect the actual data in the csv file. Did I do something wrong when importing the csv data or is my vocab building done wrong? Is there a separate method to put the data in the field objects? Thanks!!
The fields must be defined separately like this TEXT = data.Field(sequential=True,tokenize=tokenize, lower=True, include_lengths=True) LABEL = data.Field(sequential=True,tokenize=tokenize, lower=True) train = data.TabularDataset(path='./data.csv', format='csv', fields=[("label",LABEL), ("statement",TEXT)], skip_header=True) test = data.TabularDataset(path='./test.csv', format='csv', fields=[("label",LABEL), ("statement",TEXT)], skip_header=True)
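With the fields defined as separate objects, building the vocabulary then works as expected (a sketch, assuming the train dataset from above):

TEXT.build_vocab(train, max_size=25000)
LABEL.build_vocab(train)

print(len(TEXT.vocab))  # now reflects the actual corpus vocabulary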
https://stackoverflow.com/questions/51790509/
How to apply layer-wise learning rate in Pytorch?
I know that it is possible to freeze single layers in a network for example to train only the last layers of a pre-trained model. What I’m looking for is a way to apply certain learning rates to different layers. So for example a very low learning rate of 0.000001 for the first layer and then increasing the learning rate gradually for each of the following layers. So that the last layer then ends up with a learning rate of 0.01 or so. Is this possible in pytorch? Any idea how I can archive this?
Here is the solution: from torch.optim import Adam model = Net() optim = Adam( [ {"params": model.fc.parameters(), "lr": 1e-3}, {"params": model.agroupoflayer.parameters()}, {"params": model.lastlayer.parameters(), "lr": 4e-2}, ], lr=5e-4, ) Parameters that are not specified in the optimizer will not be optimized, so you should state all layers or groups (or at least the layers you want to optimize). If you don't specify a learning rate for a group, it will take the global learning rate (5e-4). The trick is, when you create the model you should give names to the layers, or you can group them.
https://stackoverflow.com/questions/51801648/
What exactly is the definition of a 'Module' in PyTorch?
Please excuse the novice question, but is Module just the same as saying model? That's what it sounds like, when the documentation says: Whenever you want a model more complex than a simple sequence of existing Modules you will need to define your model (as a custom Module subclass). Or... when they mention Module, are they referring to something more formal and computer-sciency, like a protocol / interface type thing?
It's a simple container. From the docs of nn.Module Base class for all neural network modules. Your models should also subclass this class. Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes. Submodules assigned in this way will be registered, and will have their parameters converted too when you call .cuda(), etc. From the tutorial: All network components should inherit from nn.Module and override the forward() method. That is about it, as far as the boilerplate is concerned. Inheriting from nn.Module provides functionality to your component. For example, it makes it keep track of its trainable parameters, you can swap it between CPU and GPU with the .to(device) method, where device can be a CPU device torch.device("cpu") or CUDA device torch.device("cuda:0"). A module is a container from which layers, model subparts (e.g. BasicBlock in resnet in torchvision) and models should inherit. Why should they? Because the inheritance from nn.Module allows you to call methods like to("cuda:0"), .eval(), .parameters() or register hooks easily. why not just call the 'module' a model, and call the layers 'layers'? I suppose maybe it's just semantics and splitting hairs, but still... That's an API design choice and I find having only a Module class instead of two separate Model and Layers to be cleaner and to allow more freedom (it's easier to send just a part of the model to GPU, to get parameters only for some layers...).
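A minimal sketch of the boilerplate described above (the layer sizes are arbitrary):

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):            # inherit from nn.Module
    def __init__(self):
        super(TinyNet, self).__init__()
        self.fc1 = nn.Linear(4, 8)   # submodules are registered automatically
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):            # override forward()
        return self.fc2(F.relu(self.fc1(x)))

net = TinyNet()
print(sum(p.numel() for p in net.parameters()))  # parameters tracked by the base class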
https://stackoverflow.com/questions/51804692/
TypeError: tensor is not a torch image
While working through the AI course at Udacity I came across this error during the Transfer Learning section. Here is the code that seems to be causing the trouble: import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms, models data_dir = 'filename' # TODO: Define transforms for the training data and testing data train_transforms= transforms.Compose([transforms.Resize((224,224)), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), transforms.ToTensor()]) test_transforms= transforms.Compose([transforms.Resize((224,224)), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), transforms.ToTensor()]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=32)
The problem is with the order of the transforms. The ToTensor transform should come before the Normalize transform, since the latter expects a tensor, but the Resize transform returns an image. Correct code with the faulty lines changed: train_transforms = transforms.Compose([ transforms.Resize((224,224)), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) test_transforms = transforms.Compose([ transforms.Resize((224,224)), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
https://stackoverflow.com/questions/51807040/
Is GEMM or BLAS used in Tensorflow, Theano, Pytorch
I know that Caffe uses GEneral Matrix to Matrix Multiplication (GEMM), which is part of the Basic Linear Algebra Subprograms (BLAS) library, for performing convolution operations, where a convolution is converted to a matrix multiplication operation. I have referred to the article below. https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/ I want to understand how other deep learning frameworks like Theano, Tensorflow and Pytorch perform convolution operations. Do they use similar libraries in the backend? There might be some articles on this topic; if someone can point me to those or explain with an answer, that would help. PS: I posted the same question on datascience.stackexchange.com. As I didn't get a reply there, I am posting it here as well. If there is a better forum to post this question please let me know.
tensorflow has multiple alternatives for the operations. For GPU, CUDA support is used. Most of the operations are implemented with cuDNN, some use cuBLAS, and others use CUDA. You can also use OpenCL instead of CUDA, but you should compile TensorFlow yourself. For CPU, Intel MKL is used as the BLAS library. I'm not familiar with PyTorch and Theano, but some commonly used BLAS libraries are listed below: cuDNN, cuBLAS, and CUDA: NVIDIA GPU support, the most popular libraries; OpenCL: generic GPU support, I don't know much about it; MKL: a CPU BLAS library provided by Intel; OpenBLAS: a CPU library
https://stackoverflow.com/questions/51814148/
Pytorch RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
I am using pytorch for training models. But I got a runtime error when it was computing the cross-entropy loss.
Traceback (most recent call last):
  File "deparser.py", line 402, in <module>
    d.train()
  File "deparser.py", line 331, in train
    total, correct, avgloss = self.train_util()
  File "deparser.py", line 362, in train_util
    loss = self.step(X_train, Y_train, correct, total)
  File "deparser.py", line 214, in step
    loss = nn.CrossEntropyLoss()(out.long(), y)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 862, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 1550, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/home/summer2018/TF/lib/python3.5/site-packages/torch/nn/functional.py", line 975, in log_softmax
    return input.log_softmax(dim)
RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor'
I think this is caused by the .cuda() call or by the conversion between torch.Float and torch.Long. But I have tried many ways to change the variable with .cpu()/.cuda() and .long()/.float(), and it still doesn't work. This error message can't be found when searching for it on google. Can anyone help me? Thanks!!! This is the code causing the error:
def step(self, x, y, correct, total):
    self.optimizer.zero_grad()
    out = self.forward(*x)
    loss = nn.CrossEntropyLoss()(out.long(), y)
    loss.backward()
    self.optimizer.step()
    _, predicted = torch.max(out.data, 1)
    total += y.size(0)
    correct += int((predicted == y).sum().data)
    return loss.data
And this function step() is called by:
def train_util(self):
    total = 0
    correct = 0
    avgloss = 0
    for i in range(self.step_num_per_epoch):
        X_train, Y_train = self.trainloader()
        self.optimizer.zero_grad()
        if torch.cuda.is_available():
            self.cuda()
        for i in range(len(X_train)):
            X_train[i] = Variable(torch.from_numpy(X_train[i]))
            X_train[i].requires_grad = False
            X_train[i] = X_train[i].cuda()
        Y_train = torch.from_numpy(Y_train)
        Y_train.requires_grad = False
        Y_train = Y_train.cuda()
        loss = self.step(X_train, Y_train, correct, total)
        avgloss += float(loss)*Y_train.size(0)
        self.optimizer.step()
        if i%100 == 99:
            print('STEP %d, Loss: %.4f, Acc: %.4f'%(i+1,loss,correct/total))
    return total, correct, avgloss/self.data_len
The input data X_train, Y_train = self.trainloader() are numpy arrays at the beginning.
This is a data sample: >>> X_train, Y_train = d.trainloader() >>> X_train[0].dtype dtype('int64') >>> X_train[1].dtype dtype('int64') >>> X_train[2].dtype dtype('int64') >>> Y_train.dtype dtype('float32') >>> X_train[0] array([[ 0, 6, 0, ..., 0, 0, 0], [ 0, 1944, 8168, ..., 0, 0, 0], [ 0, 815, 317, ..., 0, 0, 0], ..., [ 0, 0, 0, ..., 0, 0, 0], [ 0, 23, 6, ..., 0, 0, 0], [ 0, 0, 297, ..., 0, 0, 0]]) >>> X_train[1] array([ 6, 7, 8, 21, 2, 34, 3, 4, 19, 14, 15, 2, 13, 3, 11, 22, 4, 13, 34, 10, 13, 3, 48, 18, 16, 19, 16, 17, 48, 3, 3, 13]) >>> X_train[2] array([ 4, 5, 8, 36, 2, 33, 5, 3, 17, 16, 11, 0, 9, 3, 10, 20, 1, 14, 33, 25, 19, 1, 46, 17, 14, 24, 15, 15, 51, 2, 1, 14]) >>> Y_train array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ..., [0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32) Try all possible combinations: case 1: loss = nn.CrossEntropyLoss()(out, y) I get: RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target' case 2: loss = nn.CrossEntropyLoss()(out.long(), y) as description above case 3: loss = nn.CrossEntropyLoss()(out.float(), y) I get: RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target' case 4: loss = nn.CrossEntropyLoss()(out, y.long()) I get: RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15 case 5: loss = nn.CrossEntropyLoss()(out.long(), y.long()) I get: RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor' case 6: loss = nn.CrossEntropyLoss()(out.float(), y.long()) I get: RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15 case 7: loss = nn.CrossEntropyLoss()(out, y.float()) I get: RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target' case 8: loss = nn.CrossEntropyLoss()(out.long(), y.float()) I get: RuntimeError: "host_softmax" not implemented for 'torch.cuda.LongTensor' case 9: loss = nn.CrossEntropyLoss()(out.float(), y.float()) I get: RuntimeError: Expected object of type torch.cuda.LongTensor but found type torch.cuda.FloatTensor for argument #2 'target'
I found where the problem is: y should have dtype torch.int64 and hold class indices, not one-hot vectors. CrossEntropyLoss() handles the encoding internally: it treats out as the raw (float) class scores and picks out the log-probability of the target index itself, so no one-hot targets are needed. With y converted to class indices, it can run now!
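A minimal sketch of the fix (shapes chosen to mirror the question's 30-dimensional one-hot labels): keep out as float logits and convert the one-hot y to class indices:
import torch
import torch.nn as nn

out = torch.randn(4, 30)                               # raw float logits: (batch, num_classes)
y_onehot = torch.eye(30)[torch.tensor([29, 3, 0, 6])]  # one-hot float targets, as in the question
y = y_onehot.argmax(dim=1)                             # class indices, dtype torch.int64
loss = nn.CrossEntropyLoss()(out, y)                   # no .long()/.float() casts on out needed
print(loss)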
https://stackoverflow.com/questions/51818225/
Why can the loss function be applied to tensors of different sizes
For example, I have a net that takes a tensor [N, 7] (N is the number of samples) as input and outputs a tensor [N, 4], where the "4" represents the probabilities of the different classes. The training data's labels are a tensor of shape [N] with values from 0 to 3 (representing the ground-truth class). Here's my question: I've seen some demos that directly apply the loss function to the output tensor and the label tensor. I wonder why this can work, since they have different sizes, and their sizes don't seem to fit the "broadcasting semantics". Here's the minimal demo.
import torch
import torch.nn as nn
import torch.optim as optim

if __name__ == '__main__':
    features = torch.randn(2, 7)
    gt = torch.tensor([1, 1])
    model = nn.Sequential(
        nn.Linear(7, 4),
        nn.ReLU(),
        nn.Linear(4, 4)
    )
    optimizer = optim.SGD(model.parameters(), lr=0.005)
    f = nn.CrossEntropyLoss()
    for epoch in range(1000):
        optimizer.zero_grad()
        output = model(features)
        loss = f(output, gt)
        loss.backward()
        optimizer.step()
In PyTorch, CrossEntropyLoss is implemented as loss(x, class) = -x[class] + log(sum_j exp(x[j])), i.e. a log-softmax followed by the negative log-likelihood of the target class. Link to the documentation: https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss Note that the target is just a class index of shape (N,), not a full distribution, which is why the two tensors are allowed to have different sizes. Implementing this formula in pytorch you get:
import torch
import torch.nn.functional as F

output = torch.tensor([ 0.1998, -0.2261, -0.0388,  0.1457])
target = torch.LongTensor([1])

# implementing the formula above
print('manual cross-entropy:', (-output[target] + torch.log(torch.sum(torch.exp(output))))[0])

# calling the built-in cross entropy function to check the result
print('pytorch cross-entropy:', F.cross_entropy(output.unsqueeze(0), target))
Output:
manual cross-entropy: tensor(1.6462)
pytorch cross-entropy: tensor(1.6462)
I hope this helps!
https://stackoverflow.com/questions/51822974/
How to print a tensor without showing gradients
If I do something like this: tmp = torch.ones(3, 2, 2, requires_grad=True) out = tmp ** 2 print("\n{}".format(out)) I get as an output: tensor([[[1., 1.], [1., 1.]], [[1., 1.], [1., 1.]], [[1., 1.], [1., 1.]]], grad_fn=<PowBackward0>) I would like to print out just the values, not the grad_fn part. However, doing print("\n{}".format(out[0])) results in: tensor([[1., 1.], [1., 1.]], grad_fn=<SelectBackward>) The only way I know is to out.detach() or there is another/better way? Just to clarify, I am happy the gradient is calculated. I just want to show the vector values without additional data.
Using the .data attribute does the job for you:
tmp = torch.ones(3, 2, 2, requires_grad=True)
out = tmp ** 2
print("\n{}".format(out.data))
Output:
tensor([[[1., 1.],
         [1., 1.]],

        [[1., 1.],
         [1., 1.]],

        [[1., 1.],
         [1., 1.]]])
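If you would rather avoid .data (it bypasses autograd's tracking), detaching gives the same clean printout:
print("\n{}".format(out.detach()))  # same values, no grad_fn in the output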
https://stackoverflow.com/questions/51828551/
PyTorch LSTM States
Consider the following code snipped: lstm = nn.LSTM(10, 5, batch_first=True) states = (torch.rand(1, 1, 5), torch.rand(1, 1, 5)) h, states = lstm(torch.rand(1, 1, 10), states) print('h:') print(h) print('states[0]:') print(states[0]) Output: h: tensor([[[0.2808, 0.3357, 0.1290, 0.1413, 0.2648]]], grad_fn=<TransposeBackward0>) states[0]: tensor([[[0.2808, 0.3357, 0.1290, 0.1413, 0.2648]]], grad_fn=<ViewBackward>) Because I have to hand over states as parameter for the forward() anyway I'd prefer using states[0] over h. I've just noticed that the grad_fn is different, therefore I'm wondering if it makes any difference for backpropagation if using h or states for further computation of the outputs. I can hardly imagine that there is a difference, so I'd probably just continue with states[0] but I also would like to understand why it is different. Thanks in advance!
It's best practice and more intuitive to use h (often called output), since states is meant to be passed back into the LSTM for internal use (think of TensorFlow's dynamic_rnn to see why this is the case). That said, you are correct that it actually doesn't make a difference. I'm not sure why the grad_fns are different, but empirically they behave the same:
import torch
from torch import nn

lstm = nn.LSTM(10, 5, batch_first=True)
state = (torch.rand(1, 1, 5), torch.rand(1, 1, 5))
inp = torch.rand(1, 1, 10)

h, states = lstm(inp, state)

param = next(lstm.parameters())

l1 = h.sum()
l1.backward(retain_graph=True)
g1 = param.grad.clone()

param.grad.zero_()

l2 = states[0].sum()
l2.backward(retain_graph=True)
g2 = param.grad.clone()

print((g1 == g2).all())  # 1
https://stackoverflow.com/questions/51845675/
pytorch .stack final shape after .squeeze
I had a pandas dataframe, 200 columns by 2500 rows, which I made into a tensor:
tensor = torch.tensor(df.values)
tensor.size() => ([2500, 200])
which I chunked and enumerated:
list = []
for i, chunk in enumerate(tensor.chunk(100, dim=0)):
    # chunk.size() => ([25, 200])
    output = hiddenlayer(chunk)
    # output.size() => ([25, 1])
    list += output
The chunks were fed through some layers and came out as 1-feature tensors. So now I have a list of 100 tensors, each with 25 blocks of 1 (100x25x1), so I stacked:
stacked = torch.stack(list, 1).squeeze(2)
stacked.size() => ([25, 100])
I've played around with the stacking and squeezing but I can't seem to get back to ([2500, 1]), which is what I want. Am I missing something? If you could quickly help me understand what stacking and squeezing are doing and why it's not working for me, I'd be forever in your debt! Thanks
Renaming list to tensor_list, since it's bad practice to shadow the built-in list with a variable name.
tensor_list = []
for i, chunk in enumerate(tensor.chunk(100, dim=0)):
    output = hiddenlayer(chunk).squeeze()
    tensor_list.append(output)
result = torch.reshape(torch.stack(tensor_list, 0), (-1, 1))
result.size() should now return torch.Size([2500, 1])
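As a side note, assuming hiddenlayer acts independently on each row (e.g. an ordinary nn.Linear stack; a guess, since its definition isn't shown), the chunking is unnecessary and the same result comes out in one call:
result = hiddenlayer(tensor)                         # (2500, 200) -> (2500, 1) directly
# or, keeping the loop, concatenate instead of stack + reshape:
result = torch.cat(tensor_list, dim=0).unsqueeze(1)  # (2500,) -> (2500, 1), preserving chunk order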
https://stackoverflow.com/questions/51851966/
Parsing CSV into Pytorch tensors
I have CSV files with all numeric values except the header row. When trying to build tensors, I get the following exception:
Traceback (most recent call last):
  File "pytorch.py", line 14, in <module>
    test_tensor = torch.tensor(test)
ValueError: could not determine the shape of object type 'DataFrame'
This is my code:
import torch
import dask.dataframe as dd

device = torch.device("cuda:0")
print("Loading CSV...")
test = dd.read_csv("test.csv", encoding = "UTF-8")
train = dd.read_csv("train.csv", encoding = "UTF-8")
print("Converting to Tensor...")
test_tensor = torch.tensor(test)
train_tensor = torch.tensor(train)
Using pandas instead of Dask for CSV parsing produced the same error. I also tried to specify dtype=torch.float64 inside the call to torch.tensor(data), but got the same error again.
Try converting it to an array first: test_tensor = torch.Tensor(test.values)
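One caveat with the question's setup: with a Dask dataframe, .values is (to the best of my knowledge) a lazy Dask array, so it needs to be materialized first, e.g. with test.values.compute(); or simply read the CSV with pandas, as in this sketch:
import pandas as pd
import torch

test = pd.read_csv("test.csv", encoding="UTF-8")
# the header row becomes column names, so .values holds only the numeric data
test_tensor = torch.tensor(test.values, dtype=torch.float64)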
https://stackoverflow.com/questions/51858067/
pip - Installing specific package version does not work
I was trying to install a library (allennlp) via pip3, but it complained about the PyTorch version: while allennlp requires torch==0.4.0, I have torch==0.4.1:
...
Collecting torch==0.4.0 (from allennlp)
  Could not find a version that satisfies the requirement torch==0.4.0 (from allennlp) (from versions: 0.1.2, 0.1.2.post1, 0.4.1)
No matching distribution found for torch==0.4.0 (from allennlp)
Installing it manually doesn't work either:
pip3 install torch==0.4.0
Could not find a version that satisfies the requirement torch==0.4.0 (from versions: 0.1.2, 0.1.2.post1, 0.4.1)
No matching distribution found for torch==0.4.0
Same for other versions. Python is version 3.7.0, installed via brew on macOS. I remember that some time ago I was able to switch between versions 0.4.0 and 0.3.1 by using pip3 install torch==0.X.X. How do I solve this?
allennlp master branch specifies torch>=0.4.0,<0.5.0. The latest release is v0.6.0 - released only about 3 hours ago - and also specifies this range: https://github.com/allenai/allennlp/blob/v0.6.0/setup.py#L104 It's possible you are using an older release (probably v0.5.1) which pinned torch==0.4.0: https://github.com/allenai/allennlp/blob/v0.5.1/setup.py#L104 torch has not yet released a v0.4.0 distribution supporting Python 3.7 to PyPI: there are wheels for CPython 2.7, 3.5, and 3.6, and no source distributions. allennlp==0.6.0 and torch==0.4.1.post2 should work on Python 3.7. I was able to resolve the (considerably large) dependency tree on a linux/Python 3.7.0 runtime using my project johnnydep:
$ johnnydep allennlp --fields name version_latest_in_spec
name  version_latest_in_spec
-------------------------------------------  ------------------------
allennlp  0.6.0
β”œβ”€β”€ awscli>=1.11.91  1.15.78
β”‚   β”œβ”€β”€ PyYAML<=3.13,>=3.10  3.13
β”‚   β”œβ”€β”€ botocore==1.10.77  1.10.77
β”‚   β”‚   β”œβ”€β”€ docutils>=0.10  0.14
β”‚   β”‚   β”œβ”€β”€ jmespath<1.0.0,>=0.7.1  0.9.3
β”‚   β”‚   └── python-dateutil<3.0.0,>=2.1  2.7.3
β”‚   β”‚       └── six>=1.5  1.11.0
β”‚   β”œβ”€β”€ colorama<=0.3.9,>=0.2.5  0.3.9
β”‚   β”œβ”€β”€ docutils>=0.10  0.14
β”‚   β”œβ”€β”€ rsa<=3.5.0,>=3.1.2  3.4.2
β”‚   β”‚   └── pyasn1>=0.1.3  0.4.4
β”‚   └── s3transfer<0.2.0,>=0.1.12  0.1.13
β”‚       └── botocore<2.0.0,>=1.3.0  1.10.77
β”‚           β”œβ”€β”€ docutils>=0.10  0.14
β”‚           β”œβ”€β”€ jmespath<1.0.0,>=0.7.1  0.9.3
β”‚           └── python-dateutil<3.0.0,>=2.1  2.7.3
β”‚               └── six>=1.5  1.11.0
β”œβ”€β”€ cffi==1.11.2  1.11.2
β”‚   └── pycparser  2.18
β”œβ”€β”€ conllu==0.11  0.11
β”œβ”€β”€ editdistance  0.4
β”œβ”€β”€ flaky  3.4.0
β”œβ”€β”€ flask-cors==3.0.3  3.0.3
β”‚   β”œβ”€β”€ Flask>=0.9  1.0.2
β”‚   β”‚   β”œβ”€β”€ Jinja2>=2.10  2.10
β”‚   β”‚   β”‚   └── MarkupSafe>=0.23  1.0
β”‚   β”‚   β”œβ”€β”€ Werkzeug>=0.14  0.14.1
β”‚   β”‚   β”œβ”€β”€ click>=5.1  6.7
β”‚   β”‚   └── itsdangerous>=0.24  0.24
β”‚   └── Six  1.11.0
β”œβ”€β”€ flask==0.12.1  0.12.1
β”‚   β”œβ”€β”€ Jinja2>=2.4  2.10
β”‚   β”‚   └── MarkupSafe>=0.23  1.0
β”‚   β”œβ”€β”€ Werkzeug>=0.7  0.14.1
β”‚   β”œβ”€β”€ click>=2.0  6.7
β”‚   └── itsdangerous>=0.21  0.24
β”œβ”€β”€ gevent==1.3.5  1.3.5
β”‚   └── greenlet>=0.4.13  0.4.14
β”œβ”€β”€ h5py  2.8.0
β”‚   β”œβ”€β”€ numpy>=1.7  1.15.0
β”‚   └── six  1.11.0
β”œβ”€β”€ jsonnet==0.10.0  0.10.0
β”œβ”€β”€ nltk  3.3.0
β”‚   └── six  1.11.0
β”œβ”€β”€ numpy  1.15.0
β”œβ”€β”€ numpydoc==0.8.0  0.8.0
β”‚   β”œβ”€β”€ Jinja2>=2.3  2.10
β”‚   β”‚   └── MarkupSafe>=0.23  1.0
β”‚   └── sphinx>=1.2.3  1.7.6
β”‚       β”œβ”€β”€ Jinja2>=2.3  2.10
β”‚       β”‚   └── MarkupSafe>=0.23  1.0
β”‚       β”œβ”€β”€ Pygments>=2.0  2.2.0
β”‚       β”œβ”€β”€ alabaster<0.8,>=0.7  0.7.11
β”‚       β”œβ”€β”€ babel!=2.0,>=1.3  2.6.0
β”‚       β”‚   └── pytz>=0a  2018.5
β”‚       β”œβ”€β”€ docutils>=0.11  0.14
β”‚       β”œβ”€β”€ imagesize  1.0.0
β”‚       β”œβ”€β”€ packaging  17.1
β”‚       β”‚   β”œβ”€β”€ pyparsing>=2.0.2  2.2.0
β”‚       β”‚   └── six  1.11.0
β”‚       β”œβ”€β”€ requests>=2.0.0  2.19.1
β”‚       β”‚   β”œβ”€β”€ certifi>=2017.4.17  2018.8.13
β”‚       β”‚   β”œβ”€β”€ chardet<3.1.0,>=3.0.2  3.0.4
β”‚       β”‚   β”œβ”€β”€ idna<2.8,>=2.5  2.7
β”‚       β”‚   └── urllib3<1.24,>=1.21.1  1.23
β”‚       β”œβ”€β”€ setuptools  40.0.0
β”‚       β”œβ”€β”€ six>=1.5  1.11.0
β”‚       β”œβ”€β”€ snowballstemmer>=1.1  1.2.1
β”‚       └── sphinxcontrib-websupport  1.1.0
β”œβ”€β”€ overrides  1.9
β”œβ”€β”€ parsimonious==0.8.0  0.8.0
β”‚   └── six>=1.9.0  1.11.0
β”œβ”€β”€ pytest  3.7.1
β”‚   β”œβ”€β”€ atomicwrites>=1.0  1.1.5
β”‚   β”œβ”€β”€ attrs>=17.4.0  18.1.0
β”‚   β”œβ”€β”€ more-itertools>=4.0.0  4.3.0
β”‚   β”‚   └── six<2.0.0,>=1.0.0  1.11.0
β”‚   β”œβ”€β”€ pluggy>=0.7  0.7.1
β”‚   β”œβ”€β”€ py>=1.5.0  1.5.4
β”‚   β”œβ”€β”€ setuptools  40.0.0
β”‚   └── six>=1.10.0  1.11.0
β”œβ”€β”€ pytz==2017.3  2017.3
β”œβ”€β”€ requests>=2.18  2.19.1
β”‚   β”œβ”€β”€ certifi>=2017.4.17  2018.8.13
β”‚   β”œβ”€β”€ chardet<3.1.0,>=3.0.2  3.0.4
β”‚   β”œβ”€β”€ idna<2.8,>=2.5  2.7
β”‚   └── urllib3<1.24,>=1.21.1  1.23
β”œβ”€β”€ responses>=0.7  0.9.0
β”‚   β”œβ”€β”€ cookies  2.2.1
β”‚   β”œβ”€β”€ requests>=2.0  2.19.1
β”‚   β”‚   β”œβ”€β”€ certifi>=2017.4.17  2018.8.13
β”‚   β”‚   β”œβ”€β”€ chardet<3.1.0,>=3.0.2  3.0.4
β”‚   β”‚   β”œβ”€β”€ idna<2.8,>=2.5  2.7
β”‚   β”‚   └── urllib3<1.24,>=1.21.1  1.23
β”‚   └── six  1.11.0
β”œβ”€β”€ scikit-learn  0.19.2
β”œβ”€β”€ scipy  1.1.0
β”‚   └── numpy>=1.8.2  1.15.0
β”œβ”€β”€ spacy<2.1,>=2.0  2.0.12
β”‚   β”œβ”€β”€ cymem<1.32,>=1.30  1.31.2
β”‚   β”œβ”€β”€ dill<0.3,>=0.2  0.2.8.2
β”‚   β”œβ”€β”€ murmurhash<0.29,>=0.28  0.28.0
β”‚   β”œβ”€β”€ numpy>=1.7  1.15.0
β”‚   β”œβ”€β”€ plac<1.0.0,>=0.9.6  0.9.6
β”‚   β”œβ”€β”€ preshed<2.0.0,>=1.0.0  1.0.1
β”‚   β”‚   └── cymem<1.32.0,>=1.30  1.31.2
β”‚   β”œβ”€β”€ regex==2017.4.5  2017.4.5
β”‚   β”œβ”€β”€ requests<3.0.0,>=2.13.0  2.19.1
β”‚   β”‚   β”œβ”€β”€ certifi>=2017.4.17  2018.8.13
β”‚   β”‚   β”œβ”€β”€ chardet<3.1.0,>=3.0.2  3.0.4
β”‚   β”‚   β”œβ”€β”€ idna<2.8,>=2.5  2.7
β”‚   β”‚   └── urllib3<1.24,>=1.21.1  1.23
β”‚   β”œβ”€β”€ thinc<6.11.0,>=6.10.3  6.10.3
β”‚   β”‚   β”œβ”€β”€ cymem<1.32.0,>=1.30.0  1.31.2
β”‚   β”‚   β”œβ”€β”€ cytoolz<0.10,>=0.9.0  0.9.0.1
β”‚   β”‚   β”‚   └── toolz>=0.8.0  0.9.0
β”‚   β”‚   β”œβ”€β”€ dill<0.3.0,>=0.2.7  0.2.8.2
β”‚   β”‚   β”œβ”€β”€ msgpack-numpy<1.0.0,>=0.4.1  0.4.3.1
β”‚   β”‚   β”‚   β”œβ”€β”€ msgpack>=0.3.0  0.5.6
β”‚   β”‚   β”‚   └── numpy>=1.9.0  1.15.0
β”‚   β”‚   β”œβ”€β”€ msgpack<1.0.0,>=0.5.6  0.5.6
β”‚   β”‚   β”œβ”€β”€ murmurhash<0.29.0,>=0.28.0  0.28.0
β”‚   β”‚   β”œβ”€β”€ numpy>=1.7.0  1.15.0
β”‚   β”‚   β”œβ”€β”€ plac<1.0.0,>=0.9.6  0.9.6
β”‚   β”‚   β”œβ”€β”€ preshed<2.0.0,>=1.0.0  1.0.1
β”‚   β”‚   β”‚   └── cymem<1.32.0,>=1.30  1.31.2
β”‚   β”‚   β”œβ”€β”€ six<2.0.0,>=1.10.0  1.11.0
β”‚   β”‚   β”œβ”€β”€ tqdm<5.0.0,>=4.10.0  4.24.0
β”‚   β”‚   └── wrapt<1.11.0,>=1.10.0  1.10.11
β”‚   └── ujson>=1.35  1.35
β”œβ”€β”€ tensorboardX==1.2  1.2
β”‚   β”œβ”€β”€ numpy  1.15.0
β”‚   β”œβ”€β”€ protobuf>=0.3.2  3.6.1
β”‚   β”‚   β”œβ”€β”€ setuptools  40.0.0
β”‚   β”‚   └── six>=1.9  1.11.0
β”‚   └── six  1.11.0
β”œβ”€β”€ torch<0.5.0,>=0.4.0  0.4.1.post2
β”œβ”€β”€ tqdm>=4.19  4.24.0
β”œβ”€β”€ typing  3.6.4
└── unidecode  1.0.22
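So the practical fix (assuming you were on the older allennlp release) is to install versions whose pins match what PyPI actually offers for Python 3.7:
pip3 install 'allennlp==0.6.0' 'torch==0.4.1'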
https://stackoverflow.com/questions/51860628/
pytorch (numpy) calculation about the closest pixels to points
I am trying to solve a complicated problem. For example, I have a batch of 2D predicted images (softmax output, values between 0 and 1) of size Batch x H x W, and a ground truth of size Batch x H x W. The light gray pixels are the background with value 0, and the dark gray pixels are the foreground with value 1. I try to compute the mass center coordinates using scipy.ndimage.center_of_mass on each ground truth image. Then I get the center location point C (red) for each ground truth. The C points set is Batch x 1. Now, for each pixel A (yellow) in the predicted images, I want to get the three pixels B1, B2, B3 (blue) which are closest to A on the line AC (here C is the corresponding mass-center location in the ground truth). I used the following code to get the three closest points B1, B2, B3.
def connect(ends, m=3):
    d0, d1 = np.abs(np.diff(ends, axis=0))[0]
    if d0 > d1:
        return np.c_[np.linspace(ends[0, 0], ends[1, 0], m + 1, dtype=np.int32),
                     np.round(np.linspace(ends[0, 1], ends[1, 1], m + 1)).astype(np.int32)]
    else:
        return np.c_[np.round(np.linspace(ends[0, 0], ends[1, 0], m + 1)).astype(np.int32),
                     np.linspace(ends[0, 1], ends[1, 1], m + 1, dtype=np.int32)]
So the B points set is Batch x 3 x H x W. Then, I want to compute |Value(A)-Value(B1)| + |Value(A)-Value(B2)| + |Value(A)-Value(B3)|. The size of the result should be Batch x H x W. Are there any numpy vectorization tricks that can be used to update the value of each pixel in the predicted images? Or can this be solved using pytorch functions? I need a method that updates the whole image at once: the predicted image is the softmax output, and I cannot use a for loop to compute each single value, since that would become non-differentiable. Thanks a lot.
As suggested by @Matin, you could consider Bresenham's algorithm to get your points on the AC line. A simplistic PyTorch implementation could be as follows (directly adapted from the pseudo-code here ; could be optimized): import torch def get_points_from_low(x0, y0, x1, y1, num_points=3): dx = x1 - x0 dy = y1 - y0 xi = torch.sign(dx) yi = torch.sign(dy) dy = dy * yi D = 2 * dy - dx y = y0 x = x0 points = [] for n in range(num_points): x = x + xi is_D_gt_0 = (D > 0).long() y = y + is_D_gt_0 * yi D = D + 2 * dy - is_D_gt_0 * 2 * dx points.append(torch.stack((x, y), dim=-1)) return torch.stack(points, dim=len(x0.shape)) def get_points_from_high(x0, y0, x1, y1, num_points=3): dx = x1 - x0 dy = y1 - y0 xi = torch.sign(dx) yi = torch.sign(dy) dx = dx * xi D = 2 * dx - dy y = y0 x = x0 points = [] for n in range(num_points): y = y + yi is_D_gt_0 = (D > 0).long() x = x + is_D_gt_0 * xi D = D + 2 * dx - is_D_gt_0 * 2 * dy points.append(torch.stack((x, y), dim=-1)) return torch.stack(points, dim=len(x0.shape)) def get_points_from(x0, y0, x1, y1, num_points=3): is_dy_lt_dx = (torch.abs(y1 - y0) < torch.abs(x1 - x0)).long() is_x0_gt_x1 = (x0 > x1).long() is_y0_gt_y1 = (y0 > y1).long() sign = 1 - 2 * is_x0_gt_x1 x0_comp, x1_comp, y0_comp, y1_comp = x0 * sign, x1 * sign, y0 * sign, y1 * sign points_low = get_points_from_low(x0_comp, y0_comp, x1_comp, y1_comp, num_points=num_points) points_low *= sign.view(-1, 1, 1).expand_as(points_low) sign = 1 - 2 * is_y0_gt_y1 x0_comp, x1_comp, y0_comp, y1_comp = x0 * sign, x1 * sign, y0 * sign, y1 * sign points_high = get_points_from_high(x0_comp, y0_comp, x1_comp, y1_comp, num_points=num_points) * sign points_high *= sign.view(-1, 1, 1).expand_as(points_high) is_dy_lt_dx = is_dy_lt_dx.view(-1, 1, 1).expand(-1, num_points, 2) points = points_low * is_dy_lt_dx + points_high * (1 - is_dy_lt_dx) return points # Inputs: # (@todo: extend A to cover all points in maps): A = torch.LongTensor([[0, 1], [8, 6]]) C = torch.LongTensor([[6, 4], [2, 3]]) num_points = 3 # Getting points between A and C: # (@todo: what if there's less than `num_points` between A-C?) Bs = get_points_from(A[:, 0], A[:, 1], C[:, 0], C[:, 1], num_points=num_points) print(Bs) # tensor([[[1, 1], # [2, 2], # [3, 2]], # [[7, 6], # [6, 5], # [5, 5]]]) Once you have your points, you could retrieve their "values" (Value(A), Value(B1), etc.) using torch.index_select() (note that as of now, this method only accept 1D indices, so you need to unravel your data). All things put together, this would look like something such as the following (extending A from shape (Batch, 2) to (Batch, H, W, 2) is left for exercise...) # Inputs: # (@todo: extend A to cover all points in maps): A = torch.LongTensor([[0, 1], [8, 6]]) C = torch.LongTensor([[6, 4], [2, 3]]) batch_size = A.shape[0] num_points = 3 map_size = (9, 9) map_num_elements = map_size[0] * map_size[1] map_values = torch.stack((torch.arange(0, map_num_elements).view(*map_size), torch.arange(0, -map_num_elements, -1).view(*map_size))) # Getting points between A and C: # (@todo: what if there's less than `num_points` between A-C?) 
Bs = get_points_from(A[:, 0], A[:, 1], C[:, 0], C[:, 1], num_points=num_points)

# Get map values in positions A:
A_unravel = torch.arange(0, batch_size) * map_num_elements
A_unravel = A_unravel + A[:, 0] * map_size[1] + A[:, 1]
values_A = torch.index_select(map_values.view(-1), dim=0, index=A_unravel)
print(values_A)
# tensor([  1, -78])

# Get map values in positions B:
Bs_flatten = Bs.view(-1, 2)
Bs_unravel = (torch.arange(0, batch_size)
              .unsqueeze(1)
              .repeat(1, num_points)
              .view(num_points * batch_size) * map_num_elements)
Bs_unravel = Bs_unravel + Bs_flatten[:, 0] * map_size[1] + Bs_flatten[:, 1]
values_B = torch.index_select(map_values.view(-1), dim=0, index=Bs_unravel)
values_B = values_B.view(batch_size, num_points)
print(values_B)
# tensor([[ 10,  20,  29],
#         [-69, -59, -50]])

# Compute result:
res = torch.abs(values_A.unsqueeze(-1).expand_as(values_B) - values_B)
print(res)
# tensor([[ 9, 19, 28],
#         [ 9, 19, 28]])

res = torch.sum(res, dim=1)
print(res)
# tensor([56, 56])
https://stackoverflow.com/questions/51873797/
Expected parameters of Conv2d
Below code : import torch import torch.nn as nn import torchvision import torchvision.transforms as transforms import torch.utils.data as data_utils import numpy as np train_dataset = [] mu, sigma = 0, 0.1 # mean and standard deviation num_instances = 20 batch_size_value = 10 for i in range(num_instances) : image = [] image_x = np.random.normal(mu, sigma, 1000).reshape((1 , 100, 10)) train_dataset.append(image_x) labels = [1 for i in range(num_instances)] x2 = torch.tensor(train_dataset).float() y2 = torch.tensor(labels).long() my_train2 = data_utils.TensorDataset(x2, y2) train_loader2 = data_utils.DataLoader(my_train2, batch_size=batch_size_value, shuffle=False) # Device configuration device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') # Hyper parameters num_epochs = 5 num_classes = 1 batch_size = 5 learning_rate = 0.001 # Convolutional neural network (two convolutional layers) class ConvNet(nn.Module): def __init__(self, num_classes=1): super(ConvNet, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.layer2 = nn.Sequential( nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.fc = nn.Linear(7*7*32, num_classes) def forward(self, x): out = self.layer1(x) out = self.layer2(out) out = out.reshape(out.size(0), -1) out = self.fc(out) return out model = ConvNet(num_classes).to(device) # Loss and optimizer criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # Train the model total_step = len(train_loader2) for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader2): images = images.to(device) labels = labels.to(device) # Forward pass outputs = model(images) loss = criterion(outputs, labels) # Backward and optimize optimizer.zero_grad() loss.backward() optimizer.step() if (i+1) % 100 == 0: print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' .format(epoch+1, num_epochs, i+1, total_step, loss.item())) returns error : RuntimeError: size mismatch, m1: [10 x 1600], m2: [1568 x 1] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:249 Reading the documentation for conv2d, I tried to change the first parameter to 10X100 to match input – input tensor of shape (minibatchΓ—in_channelsΓ—iHΓ—iW) from https://pytorch.org/docs/stable/nn.html#torch.nn.functional.conv2d but then received the error : RuntimeError: Given groups=1, weight[16, 1000, 5, 5], so expected input[10, 1, 100, 10] to have 1000 channels, but got 1 channels instead So I'm not sure if I've corrected the original error or just caused a new one? How should Conv2d be set in order to match image shape of (10,100) ?
The error comes from your final fully-connected layer self.fc = nn.Linear(7*7*32, num_classes), not your convolution ones. Given your input dimensions ((10, 100)), the shape of out = self.layer2(out) is (batch_size, 32, 25, 2), and thus the shape of out = out.reshape(out.size(0), -1) is (batch_size, 32*25*2) = (batch_size, 1600). On the other hand, your fully-connected layer is defined for inputs of shape (batch_size, 32*7*7) = (batch_size, 1568). This mismatch between the shape of your 2nd convolution output and the expected shape for your fully-connected layer is causing the error (notice how the shapes mentioned in the trace correspond to the aforementioned ones).
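A hedged sketch of the fix: either hard-code nn.Linear(32 * 25 * 2, num_classes) (= 1600 for the (1, 100, 10) inputs), or discover the flattened size with a dummy forward pass:
import torch
import torch.nn as nn

layer1 = nn.Sequential(nn.Conv2d(1, 16, 5, 1, 2), nn.BatchNorm2d(16),
                       nn.ReLU(), nn.MaxPool2d(2, 2))
layer2 = nn.Sequential(nn.Conv2d(16, 32, 5, 1, 2), nn.BatchNorm2d(32),
                       nn.ReLU(), nn.MaxPool2d(2, 2))
with torch.no_grad():
    # one zero sample with the question's input shape
    flat = layer2(layer1(torch.zeros(1, 1, 100, 10))).view(1, -1)
print(flat.shape[1])  # 1600 -> use nn.Linear(1600, num_classes)
Separately, note that num_classes=1 will then trip over nn.CrossEntropyLoss, which expects targets in [0, C-1]; with all labels equal to 1 you need at least 2 output classes.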
https://stackoverflow.com/questions/51885408/
Custom NN architecture using Pytorch
I am trying to make a custom CNN architecture using Pytorch. I want to have about the same control as what I would get if I make the architecture using numpy only. I am new to Pytorch and would like to see some code samples of CNNs implemented without the nn.module class, if possible.
If you want full control, you can implement the backward() computation yourself (via a custom autograd Function). However, from your question it is not clear whether you need just a new arrangement of CNN blocks, in which case you are better off using nn.Module and something like nn.Sequential(nn.Conv2d(...), ...), or whether you need numpy-level control over gradient descent, in which case see https://github.com/jcjohnson/pytorch-examples#pytorch-autograd : you write the forward pass with raw tensor operations and either let autograd compute the backward pass or implement the backward computation on your own.
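For the second option, here is a minimal sketch (all names and shapes are illustrative) of a one-layer CNN written without nn.Module, keeping numpy-level control while still letting autograd compute the gradients:
import torch
import torch.nn.functional as F

w = torch.randn(8, 1, 3, 3, requires_grad=True)   # conv weights, managed by hand
b = torch.zeros(8, requires_grad=True)            # conv bias
x = torch.randn(4, 1, 28, 28)                     # dummy input batch
y = torch.randint(0, 8, (4,))                     # dummy labels

out = F.relu(F.conv2d(x, w, b, padding=1))        # functional conv, no nn.Module
out = F.max_pool2d(out, 2)
logits = out.view(out.size(0), out.size(1), -1).mean(dim=2)  # crude global pooling
loss = F.cross_entropy(logits, y)
loss.backward()                                   # autograd fills w.grad and b.grad

with torch.no_grad():                             # manual SGD step
    w -= 0.01 * w.grad
    b -= 0.01 * b.grad
    w.grad.zero_(); b.grad.zero_()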
https://stackoverflow.com/questions/51893903/
Pytorch LSTM - Training for Q&A classification
I'm trying to train a model to classify whether an answer answers the given question, using this dataset. I'm training in batches of 1000 (except the last one) and using GloVe word embeddings. The method I'm trying is to first give the LSTM the first sentence (the question) and then the second sentence (the answer), and have it output a number between 0 and 1 via a sigmoid. The problem is that the loss always repeats itself after epoch 1; it never converges to the correct result, which is 1 if the answer belongs to the question and 0 otherwise. My code is below:
class QandA(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(QandA, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = 1
        self.bidirectional = True
        self.lstm = nn.LSTM(input_size, self.hidden_size, num_layers = self.num_layers, bidirectional = self.bidirectional)
        self.lstm.to(device)
        self.hidden2class = nn.Linear(self.hidden_size * 2, 1)
        self.hidden2class.to(device)

    def forward(self, glove_vec, glove_vec2):
        # glove_vec.shape = (sentence_len, batch_size, 300)
        output, hidden = self.lstm(glove_vec)
        output, _ = self.lstm(glove_vec2, hidden)
        # output.shape = (sentence_len, batch_size, hidden_size * 2)
        output = self.hidden2class(output[-1,:,:])
        # output.shape = (batch_size, 1)
        return F.sigmoid(output)

model = QandA(300, 60).to(device)
loss_function = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.1)
Is my approach so wrong that it can't work in practice? Or is there any other problem that I'm overlooking?
edit: Extra code regarding the training;
batch_size = 1000
# load_dataset loads the data from the file.
questions, answers, outputs = load_dataset()
N = len(outputs)
losses = []
for epoch in range(10):
    for batch in range(math.ceil(N / batch_size)):
        model.zero_grad()
        # get_data gets the data from the dataset (size batch_size, sequence batch)
        input1, input2, targets = get_data(batch, batch_size)
        class_pred = model(input1, input2)
        loss = loss_function(class_pred, targets)
        loss.backward()
        optimizer.step()
I would suggest encoding the question and answer independently and putting a classifier on top. For example, you can encode the question and the answer with separate biLSTMs, concatenate their representations and feed that to the classifier. The code could be something like this (not tested, but hope you got the idea):
class QandA(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(QandA, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = 1
        self.bidirectional = True
        self.lstm_question = nn.LSTM(input_size, self.hidden_size, num_layers = self.num_layers, bidirectional = self.bidirectional)
        self.lstm_question.to(device)
        self.lstm_answer = nn.LSTM(input_size, self.hidden_size, num_layers = self.num_layers, bidirectional = self.bidirectional)
        self.lstm_answer.to(device)
        self.fc = nn.Linear(self.hidden_size * 4, 1)
        self.fc.to(device)

    def forward(self, glove_question, glove_answer):
        # glove.shape = (sentence_len, batch_size, 300)
        question_last_hidden, _ = self.lstm_question(glove_question)
        # question_last_hidden.shape = (question_len, batch_size, hidden_size * 2)
        answer_last_hidden, _ = self.lstm_answer(glove_answer)
        # answer_last_hidden.shape = (answer_len, batch_size, hidden_size * 2)

        # take the last time step of each lstm output; with multiple lstm layers you would
        # instead take only the last layer's forward/backward hidden states
        question_last_hidden = question_last_hidden[-1,:,:]
        answer_last_hidden = answer_last_hidden[-1,:,:]
        representation = torch.cat([question_last_hidden, answer_last_hidden], -1)  # concatenate over the feature dimension
        # representation.shape = (batch_size, hidden_size * 4)
        output = self.fc(representation)
        # output.shape = (batch_size, 1)
        return F.sigmoid(output)
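A hypothetical usage sketch (shapes follow the question's batches of 1000 and 300-d GloVe vectors; device and optim come from the question's code). Independently of the architecture, also consider dropping the Adam learning rate from 0.1 to something like 1e-3, since 0.1 is high enough to make the loss oscillate instead of converge:
model = QandA(300, 60).to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
q = torch.randn(20, 1000, 300).to(device)   # (question_len, batch, embedding)
a = torch.randn(35, 1000, 300).to(device)   # (answer_len, batch, embedding)
pred = model(q, a)                          # (1000, 1) probabilities in (0, 1)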
https://stackoverflow.com/questions/51895142/
Get values of tensors in loss function
I would like to get the values of the y_pred and y_true tensors of this keras backend function. I need this to be able to perform some custom calculations and change the loss; these calculations are only possible with the real array values.
def mean_squared_error(y_true, y_pred):
    #some code here
    return K.mean(K.square(y_pred - y_true), axis=-1)
Is there a way to do this in keras? Or in any other ML framework (tf, pytorch, theano)?
No, in general you can't compute the loss that way, because Keras is based on frameworks that do automatic differentiation (like Theano or TensorFlow), and they need to know which operations you are performing in between in order to compute the gradients of the loss. You need to implement your loss computations using keras.backend functions; otherwise there is no way to compute gradients, and optimization won't be possible.
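As a minimal sketch of what is possible: custom calculations are fine as long as they stay symbolic, i.e. expressed in keras.backend ops (the masking rule below is an arbitrary illustration, not from the question, and it assumes an existing compiled-ready model):
import keras.backend as K

def custom_mse(y_true, y_pred):
    err = y_pred - y_true
    # example symbolic "custom calculation": only penalize where y_true > 0.5
    mask = K.cast(K.greater(y_true, 0.5), K.floatx())
    return K.mean(K.square(err * mask), axis=-1)

model.compile(optimizer="adam", loss=custom_mse)  # `model` assumed to exist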
https://stackoverflow.com/questions/51901952/