Full Page Text Recognition Dataset Creation
I have been reading OCR papers such as this one https://arxiv.org/pdf/1704.08628.pdf , and I am having trouble finding out how these datasets are actually generated.

In the linked paper, they use a regressor to predict the start location (a point) and height of a line of text. Then, based on that starting point and height, a second network performs OCR and end-of-line detection. I realize this is a very simplified explanation, but it follows that their dataset consists (at least in part) of full-page text images annotated with where each line begins, plus a transcription of the text on each line. Alternatively, they could have just used the lower-left point of bounding boxes as the start point and the height of the box as the line height (avoiding the need to re-annotate if the data was previously prepared using bounding boxes).

So how is a dataset like this actually created? Looking at other datasets, it seems like there is some software that can create XML files containing the ground truths relevant to each image; can someone point me in the right direction? I've been googling around and finding lots of tools for annotating text with sentiment etc. and other tools for annotating images for segmentation (for something like a YOLO network), but I'm coming up empty for the creation of something like the Maurdor dataset used in the linked paper. Thank you
So after submitting this, the related-threads window showed me many threads that my googling did not turn up. This software (http://www.prima.cse.salford.ac.uk/tools) seems to be what I was looking for, but I would still love to hear other ideas.
https://stackoverflow.com/questions/50456692/
PyTorch : How to properly create a list of nn.Linear()
I have created a class that has nn.Module as a subclass. In my class, I have to create N linear transformations, where N is given as a class parameter. I therefore proceed as follows:

    self.list_1 = []
    for i in range(N):
        self.list_1.append(nn.Linear(self.x, 1, bias=mlp_bias))

In the forward method, I call these matrices (with list_1[i]) and concatenate the results. Two things:

1) Even though I use model.cuda(), these linear transforms are run on the CPU and I get the following error:

    RuntimeError: Expected object of type Variable[torch.cuda.FloatTensor] but found type Variable[torch.FloatTensor] for argument #1 'mat2'

I have to do:

    self.list_1.append(nn.Linear(self.x, 1, bias=mlp_bias).cuda())

This is not required if, instead, I do:

    self.nn = nn.Linear(self.x, 1, bias=mlp_bias)

and then use self.nn directly.

2) For more obvious reasons, when I print(model) in my main, the Linear matrices in my list aren't printed. Is there any other way? Maybe using bmm? I find it less easy, and I actually want to have my N results separately. Thank you in advance, M
You can use nn.ModuleList to wrap your list of linear layers, as explained here:

    self.list_1 = nn.ModuleList(self.list_1)
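For illustration, a minimal sketch of the pattern applied to a class like the one in the question (the class name is made up; N, x and mlp_bias are reused from the question):

    import torch
    import torch.nn as nn

    class MyModule(nn.Module):  # hypothetical stand-in for the asker's module
        def __init__(self, N, x, mlp_bias=True):
            super(MyModule, self).__init__()
            # ModuleList registers the layers, so model.cuda() moves them
            # to the GPU and print(model) lists them
            self.list_1 = nn.ModuleList(
                [nn.Linear(x, 1, bias=mlp_bias) for _ in range(N)]
            )

        def forward(self, inp):
            # apply each linear layer and concatenate the N results
            return torch.cat([layer(inp) for layer in self.list_1], dim=1)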
https://stackoverflow.com/questions/50463975/
Input 3 and 1 channel input to the network in pytorch?
My dataset consists mostly of 3-channel images, but I also have a few 1-channel images. Is it possible to train a network that takes both 3-channel and 1-channel images as inputs? Any suggestions are welcome. Thanks in advance.
You can detect the grayscale images by checking their size and apply a transformation to get 3 channels. It seems to be better to convert images from grayscale to RGB with cv2.cvtColor(gray_img, cv2.COLOR_GRAY2RGB) (if you have opencv-python installed) than to simply copy the image three times along the channel axis. If you want a clean implementation, you can extend torchvision.transforms with a new Transform that does this job automatically.
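A possible sketch of such a transform, assuming the inputs arrive as numpy arrays (the class name is made up):

    import cv2
    import numpy as np

    class GrayToRGB(object):
        """Convert a single-channel image to 3 channels; leave RGB images untouched."""
        def __call__(self, img):
            img = np.asarray(img)
            if img.ndim == 2:
                return cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
            if img.ndim == 3 and img.shape[2] == 1:
                return cv2.cvtColor(img[:, :, 0], cv2.COLOR_GRAY2RGB)
            return img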
https://stackoverflow.com/questions/50471053/
PyTorch - Torchvision - BrokenPipeError: [Errno 32] Broken pipe
I'm trying to carry out the tutorial named "Training a classifier" with PyTorch. When trying to debug this part of the code:

    import matplotlib.pyplot as plt
    import numpy as np

    # functions to show an image
    def imshow(img):
        img = img / 2 + 0.5  # unnormalize
        npimg = img.numpy()
        plt.imshow(np.transpose(npimg, (1, 2, 0)))

    # get some random training images
    dataiter = iter(trainloader)
    images, labels = dataiter.next()

    # show images
    imshow(torchvision.utils.make_grid(images))
    # print labels
    print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

I get this error message:

    Files already downloaded and verified
    Files already downloaded and verified
    Files already downloaded and verified
    Files already downloaded and verified
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "D:\Anaconda\lib\multiprocessing\spawn.py", line 105, in spawn_main
        exitcode = _main(fd)
      File "D:\Anaconda\lib\multiprocessing\spawn.py", line 114, in _main
        prepare(preparation_data)
      File "D:\Anaconda\lib\multiprocessing\spawn.py", line 225, in prepare
        _fixup_main_from_path(data['init_main_from_path'])
      File "D:\Anaconda\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
        run_name="__mp_main__")
      File "D:\Anaconda\lib\runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)
      File "D:\Anaconda\lib\runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "D:\Anaconda\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
        dataiter = iter(trainloader)
      File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
        return _DataLoaderIter(self)
      File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
        w.start()
      File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
        self._popen = self._Popen(self)
      File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
        prep_data = spawn.get_preparation_data(process_obj._name)
      File "D:\Anaconda\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
        _check_not_importing_main()
      File "D:\Anaconda\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
        is not going to be frozen to produce an executable.)
    RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
    Traceback (most recent call last):
      File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
        dataiter = iter(trainloader)
      File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
        return _DataLoaderIter(self)
      File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
        w.start()
      File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
        self._popen = self._Popen(self)
      File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
        return Popen(process_obj)
      File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
        reduction.dump(process_obj, to_child)
      File "D:\Anaconda\lib\multiprocessing\reduction.py", line 60, in dump
        ForkingPickler(file, protocol).dump(obj)
    BrokenPipeError: [Errno 32] Broken pipe

All the previous lines in the tutorial are working perfectly. Does someone know how to solve this, please? Thanks a lot in advance
This doesn't look to be a PyTorch problem as such. As the first traceback explains, on Windows the child processes are started with spawn rather than fork, so your script gets re-imported by each DataLoader worker. Try executing the code in a Jupyter notebook (where this does not arise in the same way) and do other environment troubleshooting.
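The error message in the question already names the standard fix: guard the entry point of the script (alternatively, setting num_workers=0 on the DataLoader avoids worker processes entirely). A sketch applied to the tutorial code:

    # guard the entry point so spawned workers can import this file safely
    if __name__ == '__main__':
        dataiter = iter(trainloader)
        images, labels = dataiter.next()
        imshow(torchvision.utils.make_grid(images))
        print(' '.join('%5s' % classes[labels[j]] for j in range(4)))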
https://stackoverflow.com/questions/50480689/
Can't install torch on linux box using pip
As the title states, I am trying to install torch on Linux using pip. I run the command:

    pip install torch==0.3.1

and I get the following output:

    Collecting torch==0.3.1
      Could not find a version that satisfies the requirement torch==0.3.1 (from versions: 0.1.2, 0.1.2.post1)
    No matching distribution found for torch==0.3.1

Any ideas what the issue might be?
Try updating pip itself:

    pip install --upgrade pip

then:

    pip install torch==0.3.1
https://stackoverflow.com/questions/50488869/
If I'm not specifying to use CPU/GPU, which one is my script using?
In PyTorch, if I don't write anything about using CPU/GPU, and my machine supports CUDA (torch.cuda.is_available() == True):

- What is my script using, CPU or GPU?
- If CPU, what should I do to make it run on GPU? Do I need to rewrite everything?
- If GPU, will this script crash if torch.cuda.is_available() == False?
- Does this do anything about making the training faster?

I'm aware of "Porting PyTorch code from CPU to GPU", but this is old. Does this situation change in v0.4 or the upcoming v1.0?
My way is like this (below PyTorch 0.4):

    dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
    x = torch.zeros(2, 2).type(dtype)

UPDATE for PyTorch 0.4:

    device = torch.device("cuda" if use_cuda else "cpu")
    model = MyRNN().to(device)

from the PyTorch 0.4.0 Migration Guide.
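For completeness, a sketch of how the 0.4 device pattern extends to the data inside a training loop (the model and loader names are assumed, not from the migration guide):

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = MyRNN().to(device)  # moves all registered parameters

    for inputs, labels in data_loader:
        # tensors are not moved automatically; .to(device) is a no-op
        # if they are already on the right device
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)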
https://stackoverflow.com/questions/50495053/
Tips or patterns for reshaping 4D/5D arrays, (videos to frames)
I find it really hard to visualize reshaping 4D/5D arrays in numpy/pytorch. (I assume both reshape in a similar pattern; I am using pytorch currently!) Suppose I have videos with dimensions [N x C x D x H x W] (num videos x channels x frames x height x width). Suppose I want to reshape the videos into frames as [N*D x C x H x W]; how should I proceed? Simply applying

    x = x.reshape(N*D, C, H, W)

doesn't actually do it; it gives the wrong order of elements. Can you help me with how to do this, and give any slight intuition of the pattern you used? On a side note, if I have one video (suppose 1x3x100x256x256), I use the following approach:

    x = x.squeeze(0).T.reshape((100,3,256,256))[:,:,None,:,:]

and it works great. Couldn't figure it out for more than 1 video. Thanks! As per the request:

    input = np.random.randn(N,C,D,H,W)
    output = np.zeros((N*D,C,H,W))

As per the request, for-loop based code to show what I want:

    for h in range(N):
        for i in range(D):
            for j in range(C):
                for k in range(H):
                    for l in range(W):
                        output[h*D + i, j, k, l] = input[h, j, i, k, l]
Simply swap the second and third axes, and then merge the new second axis (the old third one) with the first one via reshaping:

    output = input_array.swapaxes(1, 2).reshape(N*D, C, H, W)

We can also use transpose:

    input_array.transpose(0, 2, 1, 3, 4)

to get the same axes-swapping effect. For a general intuitive method, please refer to "Intuition and idea behind reshaping 4D array to 2D array in NumPy".
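A quick, self-contained sanity check of this answer against the loop from the question, using random data:

    import numpy as np

    N, C, D, H, W = 2, 3, 4, 5, 6
    inp = np.random.randn(N, C, D, H, W)

    # vectorized version from the answer
    out = inp.swapaxes(1, 2).reshape(N * D, C, H, W)

    # reference loop from the question (vectorized over C, H, W)
    ref = np.zeros((N * D, C, H, W))
    for h in range(N):
        for i in range(D):
            ref[h * D + i] = inp[h, :, i]

    assert np.allclose(out, ref)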
https://stackoverflow.com/questions/50502700/
Pytorch simple text generator not working and loss keeps diverging
I am new to pytorch and deep learning in general, and am trying to build a simple text generator. For reasons I don't understand, the loss keeps diverging and the model doesn't learn. Here's the code:

    class RNN(nn.Module):
        def __init__(self, embed_size, hidden_size):
            super(RNN, self).__init__()
            self.embeds = nn.Embedding(num_chars, embed_size)
            self.l1 = nn.Linear(embed_size, hidden_size)
            self.l2 = nn.Linear(hidden_size, hidden_size)
            self.l3 = nn.Linear(hidden_size, num_chars)
            self.relu = nn.ReLU()
            self.softmax = nn.Softmax()

        def forward(self, inp):
            out = self.embeds(inp)
            out = self.l3(self.relu(self.l2(self.relu(self.l1(out)))))
            return self.softmax(out)

    rnn = RNN(10, 50)
    optimizer = torch.optim.Adam(rnn.parameters(), lr=0.002)
    criterion = nn.NLLLoss()

    def charTensor(x):
        out = torch.zeros(1, num_chars)
        out[0][all_chars.index(x)] = 1
        return out.long()

    for epoch in range(5):
        epoch_loss = 0
        rnn.zero_grad()
        for i in range(len(train_str[:400])-1):
            inp = charTensor(train_str[i])
            output = rnn(inp)
            loss = criterion(output, charTensor(train_str[i+1]))
            epoch_loss += loss
            loss.backward(retain_graph=True)
            optimizer.step()
        print("Epoch Loss:", epoch_loss)  # epoch loss is always tensor(-399)

    first_char = 'c'
    inp_t = charTensor(first_char)
    fin = first_char
    for i in range(10):
        next_t = rnn(inp_t)
        next_char = all_chars[torch.argmax(next_t).numpy()]  # always ends up as 0, which is the char for space
        fin += next_char
        inp_t = charTensor(next_char)
    print(fin)  # prints a new line
Are you trying to implement an RNN? I see you are naming your model RNN, but the implementation doesn't take signals from previous time steps. Also, you are not using batches and are training by inputting 1 character and then backpropagating on that, which is known to cause instability. You may want to iterate over a few characters, accumulate the loss, and average it before backpropagating.

To generate text, you train the model on a prepared sequence of data, for example "the fox jumped over the lazy dog". In word-level prediction, your input would be:

    ["the", "fox", "jumped", "over", "the", "lazy"]

and the target would be:

    ["fox", "jumped", "over", "the", "lazy", "dog"]

What the model does is try to predict the next word given the previous words. For character level, simply change the lists to the characters within the sentence. That way you will have a model that learns the probability distribution. For a PyTorch-specific implementation, check here: https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html

Also, you don't need retain_graph=True, as it will build up memory. Instead just write:

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
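To make the input/target idea concrete at character level, a small sketch (the corpus string and window length are made up for illustration):

    train_str = "the fox jumped over the lazy dog"  # toy corpus (assumption)
    seq_len = 8

    inputs, targets = [], []
    for i in range(len(train_str) - seq_len):
        inputs.append(train_str[i:i + seq_len])           # e.g. "the fox "
        targets.append(train_str[i + 1:i + seq_len + 1])  # same window shifted by one character

    print(inputs[0], "->", targets[0])  # the fox  -> he fox j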
https://stackoverflow.com/questions/50511212/
How to do softmax for pixelwise classification
My goal is to do grayscale image segmentation using pixelwise classification, so I have two labels, 0 and 1. I made a network in pytorch which looks like the following:

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.up = nn.Upsample(scale_factor=2, mode='nearest')
            self.conv11 = nn.Conv2d(1, 128, kernel_size=3, padding=1)
            self.conv12 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
            self.conv13 = nn.Conv2d(256, 2, kernel_size=3, padding=1)

        def forward(self, x):
            in_size = x.size(0)
            x = F.relu(self.conv11(x))
            x = F.relu(self.conv12(x))
            x = F.relu(self.conv13(x))
            x = F.softmax(x, 2)
            return x

In the last layer I designed conv13 such that it produces 2 channels, one for each class. Since I was using softmax, I was expecting that the values at the same index on the 2 separate channels would sum to 1. For example, assume the output image is (2 {channels}, 4, 4). I was expecting that:

    image[channel 1][0][0] + image[channel 2][0][0] = 1

But the output I get is 0.0015, which is not even close to 1. How can I use softmax to predict channelwise? To check this I used the following code:

    for batch, data in enumerate(trainloader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = rmse(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        predicted = outputs.data
        predicted = predicted.to('cpu')
        predicted_img = predicted.numpy()
        predicted_img = np.reshape(predicted_img, (2, 4, 4))
        print(predicted_img[0])
        print(predicted_img[1])

Those prints showed this:

    [[**0.2762002** 0.13305853 0.2510342  0.23114938]
     [0.26812425 0.28500515 0.05682982 0.15851443]
     [0.1640967  0.5409352  0.43547812 0.44782472]
     [0.29157883 0.0410011  0.2566578  0.16251141]]

    [[**0.23052207** 0.868455   0.43436486 0.0684725 ]
     [0.18001427 0.02341573 0.0727293  0.2525512 ]
     [0.06587404 0.04974682 0.3773188  0.6559266 ]
     [0.5235896  0.05838248 0.11558701 0.02304965]]

It is clear that the corresponding elements are not summing up to 1, e.g.

    0.2762002 (index 0, 0) + 0.23052207 (index 0, 0) != 1

How can I fix it?
Please check the last line of my code: basically, your dimension for the softmax was wrong.

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.up = nn.Upsample(scale_factor=2, mode='nearest')
            self.conv11 = nn.Conv2d(1, 128, kernel_size=3, padding=1)
            self.conv12 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
            self.conv13 = nn.Conv2d(256, 2, kernel_size=3, padding=1)

        def forward(self, x):
            in_size = x.size(0)
            x = F.relu(self.conv11(x))
            x = F.relu(self.conv12(x))
            x = F.relu(self.conv13(x))
            x = F.softmax(x, 1)  # this line is changed: softmax over the channel dimension
            return x

    net = Net()
    inputs = torch.rand(1, 1, 4, 4)
    out = net(Variable(inputs))
    print(out)
    out.sum(dim=1)

Hope that helps.
https://stackoverflow.com/questions/50534515/
How to do CIFAR-10 with PyTorch on CUDA?
I'm following the CIFAR-10 PyTorch tutorial at this pytorch page, and can't get PyTorch running on the GPU. The code is exactly as in the tutorial. The error I get is:

    Traceback (most recent call last):
      File "(file path)/CIFAR10_tutorial.py", line 116, in <module>
        outputs = net(images)
      File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "(file path)/CIFAR10_tutorial.py", line 65, in forward
        x = self.pool(F.relu(self.conv1(x)).cuda())
      File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
        result = self.forward(*input, **kwargs)
      File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/conv.py", line 301, in forward
        self.padding, self.dilation, self.groups)

My CUDA version is 9.0, PyTorch 0.4.0. I have used tensorflow-gpu on the machine, so I know CUDA is set up correctly. Where exactly must I use .cuda() and .to(device) as suggested in the tutorial?
I'm leaving an answer in case anyone else is stuck on the same thing.

First, configure PyTorch to use the GPU if available:

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print(device)

Then, in the __init__ function, cast to GPU by calling .cuda() on every element of the NN, e.g.

    self.conv1 = nn.Conv2d(3, 24, 5).cuda()
    self.pool = nn.MaxPool2d(2, 2).cuda()

If you're not sure about the GPU, call .to(device) on every element instead. In the forward(self, x) function, before the steps, I did:

    x = x.to(device)

Right after the net object is created, cast it to the device:

    net.to(device)

All inputs and labels should be cast to the device before any operation is performed on them:

    inputs, labels = inputs.to(device), labels.to(device)

I am skipping writing the entire code, as the link has already been mentioned in the question. If there seem to be a few redundant casts to the GPU, they're not breaking anything. I might also put together an ipynb with the changes.
https://stackoverflow.com/questions/50539641/
Getting multiprocessing lock error when running vizdoom and pytorch program on Windows Subsystem for Linux
Whenever I try to run my program on WSL, I get the following error. I'm pretty new to pytorch and vizdoom, so I don't know how to solve this problem.

Setup:
- Windows 10 x64
- Ubuntu 14 (on WSL)
- Python 2.7.14 (Anaconda 2)
- OpenAI Gym 0.9.5
- Vizdoom 1.1.4
- doom-py 0.0.14
- ppaquette/gym-doom
- pytorch 0.0.12

    (doomenv) hybridsyntax@Blacklynx:/mnt/f/_TUTORIALS/ai/doom/code$ python ai.py > wsl.log
    [2018-05-25 18:21:44,354] Making new env: ppaquette/DoomCorridor-v0
    [2018-05-25 18:21:44,365] Clearing 2 monitor files from previous run (because force=True was provided)
    Assertion 'pthread_mutex_unlock(&m->mutex) == 0' failed at pulsecore/mutex-posix.c:108, function pa_mutex_unlock(). Aborting.
    Traceback (most recent call last):
      File "ai.py", line 155, in <module>
        memory.run_steps(200)
      File "/mnt/f/_TUTORIALS/ai/doom/code/experience_replay.py", line 70, in run_steps
        entry = next(self.n_steps_iter)  # 10 consecutive steps
      File "/mnt/f/_TUTORIALS/ai/doom/code/experience_replay.py", line 21, in __iter__
        state = self.env.reset()
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/core.py", line 104, in reset
        return self._reset()
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/wrappers/monitoring.py", line 39, in _reset
        observation = self.env.reset(**kwargs)
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/core.py", line 104, in reset
        return self._reset()
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/core.py", line 311, in _reset
        observation = self.env.reset(**kwargs)
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/core.py", line 104, in reset
        return self._reset()
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/wrappers/frame_skipping.py", line 33, in _reset
        return self.env.reset()
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/core.py", line 104, in reset
        return self._reset()
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/core.py", line 283, in _reset
        return self.env.reset(**kwargs)
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/core.py", line 104, in reset
        return self._reset()
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/wrappers/time_limit.py", line 49, in _reset
        return self.env.reset()
      File "/mnt/f/_TUTORIALS/ai/doom/gym/gym/core.py", line 104, in reset
        return self._reset()
      File "/mnt/f/_TUTORIALS/ai/doom/gym-doom/ppaquette_gym_doom/doom_env.py", line 244, in _reset
        return self._load_level()
      File "/mnt/f/_TUTORIALS/ai/doom/gym-doom/ppaquette_gym_doom/doom_env.py", line 157, in _load_level
        'singleton lock in memory.')
    gym.error.Error: [ViZDoomUnexpectedExitException, ViZDoomErrorException] VizDoom exited unexpectedly. This is likely caused by a missing multiprocessing lock. To run VizDoom across multiple processes, you need to pass a lock when you configure the env [e.g. env.configure(lock=my_multiprocessing_lock)], or create and close an env before starting your processes [e.g. env = gym.make("DoomBasic-v0"); env.close()] to cache a singleton lock in memory.
    [2018-05-25 18:21:44,696] Finished writing results. You can upload them to the scoreboard via gym.upload('/mnt/f/_TUTORIALS/ai/doom/code/videos')
    (doomenv) hybridsyntax@Blacklynx:/mnt/f/_TUTORIALS/ai/doom/code$

Thanks in advance
Upgrading Ubuntu on WSL to the latest version (18.04) solved the problem for me. This meant running the following commands on WSL:

    sudo -S env RELEASE_UPGRADER_NO_SCREEN=1 do-release-upgrade
    sudo apt-get update
    sudo apt-get upgrade -y
https://stackoverflow.com/questions/50541672/
Is there a function in google.colab module to close the runtime
Sometimes when I start a run in google.colab, I can't stay in front of the computer to manually disconnect from the server when the run completes, and the connection stays open even after my run finishes, occupying the node for no reason. Is there a function in google.colab that I can insert to close the connection after some number of epochs? I am looking for something like colab.disconnect(), etc...
    import sys
    sys.exit()

This will end the runtime, freeing up the GPU.

EDIT: Apparently my last answer doesn't work any more. The thing to do now is:

    !kill -9 -1
https://stackoverflow.com/questions/50541851/
How do I split a custom dataset into training and test datasets?
    import pandas as pd
    import numpy as np
    import cv2
    from torch.utils.data.dataset import Dataset

    class CustomDatasetFromCSV(Dataset):
        def __init__(self, csv_path, transform=None):
            self.data = pd.read_csv(csv_path)
            self.labels = pd.get_dummies(self.data['emotion']).as_matrix()
            self.height = 48
            self.width = 48
            self.transform = transform

        def __getitem__(self, index):
            pixels = self.data['pixels'].tolist()
            faces = []
            for pixel_sequence in pixels:
                face = [int(pixel) for pixel in pixel_sequence.split(' ')]
                # print(np.asarray(face).shape)
                face = np.asarray(face).reshape(self.width, self.height)
                face = cv2.resize(face.astype('uint8'), (self.width, self.height))
                faces.append(face.astype('float32'))
            faces = np.asarray(faces)
            faces = np.expand_dims(faces, -1)
            return faces, self.labels

        def __len__(self):
            return len(self.data)

This is what I could manage to do by using references from other repositories. However, I want to split this dataset into train and test. How can I do that inside this class? Or do I need to make a separate class to do that?
Using PyTorch's SubsetRandomSampler:

    import torch
    import numpy as np
    from torchvision import datasets
    from torchvision import transforms
    from torch.utils.data.sampler import SubsetRandomSampler

    class CustomDatasetFromCSV(Dataset):
        def __init__(self, csv_path, transform=None):
            self.data = pd.read_csv(csv_path)
            self.labels = pd.get_dummies(self.data['emotion']).as_matrix()
            self.height = 48
            self.width = 48
            self.transform = transform

        def __getitem__(self, index):
            # This method should return only 1 sample and label
            # (according to "index"), not the whole dataset
            # So probably something like this for you:
            pixel_sequence = self.data['pixels'][index]
            face = [int(pixel) for pixel in pixel_sequence.split(' ')]
            face = np.asarray(face).reshape(self.width, self.height)
            face = cv2.resize(face.astype('uint8'), (self.width, self.height))
            label = self.labels[index]
            return face, label

        def __len__(self):
            return len(self.labels)

    dataset = CustomDatasetFromCSV(my_path)
    batch_size = 16
    validation_split = .2
    shuffle_dataset = True
    random_seed = 42

    # Creating data indices for training and validation splits:
    dataset_size = len(dataset)
    indices = list(range(dataset_size))
    split = int(np.floor(validation_split * dataset_size))
    if shuffle_dataset:
        np.random.seed(random_seed)
        np.random.shuffle(indices)
    train_indices, val_indices = indices[split:], indices[:split]

    # Creating PT data samplers and loaders:
    train_sampler = SubsetRandomSampler(train_indices)
    valid_sampler = SubsetRandomSampler(val_indices)

    train_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=train_sampler)
    validation_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=valid_sampler)

    # Usage example:
    num_epochs = 10
    for epoch in range(num_epochs):
        # Train:
        for batch_index, (faces, labels) in enumerate(train_loader):
            # ...
https://stackoverflow.com/questions/50544730/
Do I need to make multiple instances of a neural network in PyTorch to test multiple loss functions?
I have written out a neural network in PyTorch and I would like to compare the results of two different loss functions on this one network. Should I make two different instances of the network and test one loss function per network, like this:

    network_w_loss_1 = ANN().cuda()
    network_w_loss_2 = ANN().cuda()

    crit_loss_1 = loss_1()
    crit_loss_2 = loss_2()

    opt_loss_1 = optim.SGD('params')
    opt_loss_2 = optim.SGD('params')

    for epoch in range(num_epochs):
        for i, dat in enumerate(data_loader):
            # unpack data
            opt_loss_1.zero_grad()
            opt_loss_2.zero_grad()
            output1 = network_w_loss_1('params')
            output2 = network_w_loss_2('params')
            los_1 = crit_loss_1(output1)
            los_2 = crit_loss_2(output2)
            los_1.backward()
            los_2.backward()
            opt_loss_1.step()
            opt_loss_2.step()

or can I get away with doing this:

    network = ANN().cuda()

    crit_loss_1 = loss_1()
    crit_loss_2 = loss_2()

    opt = optim.SGD('params')

    for epoch in range(num_epochs):
        for i, dat in enumerate(data_loader):
            # unpack data
            opt.zero_grad()
            output1 = network('params')
            output2 = network('params')
            los_1 = crit_loss_1(output1)
            los_2 = crit_loss_2(output2)
            los_1.backward()
            los_2.backward()
            opt.step()

I am using Python 3.6.5 and PyTorch 0.4.0.
You have to make 2 different instances. Otherwise you are just training one network alternating between 2 losses (both losses would update its parameters).
https://stackoverflow.com/questions/50546862/
Issue with torch.cuda() function
I'm new to PyTorch. GPU execution works when I run my programs with TensorFlow, but it is a problem with PyTorch. I have searched a lot but cannot find any useful answer. Can anyone help me?

Error: [screenshot attached to the original post]

My environment:
- Windows 10 64-bit
- Python 3.6
- CUDA 9.0
- cuDNN 64
- GPU: GTX 965M
As you can see for yourself in the release notes here, the PyTorch developers decided to deprecate compute capability 3.0 and 5.0 devices from their builds starting with PyTorch 0.3.1. Your device is a compute capability 5.0 device and is therefore not supported in the most recent builds of PyTorch. You can read about your alternatives here (basically: use an older version of PyTorch, or build your own from source with support for your GPU).
https://stackoverflow.com/questions/50562552/
Fast way to multiply 3D tensors of shape (1, 1, 256) and (10, 1, 256) in PyTorch and Numpy
I am trying to adapt the seq2seq model for my own task: https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb

I have two tensors at the decoder stage:

    rnn_output: (1, 1, 256)       # time_step x batch_size x hidden_dimension
    encoder_inputs: (10, 1, 256)  # seq_len x batch_size x hidden_dimension

They should be multiplied to get the attention scores (before softmax), of shape:

    attn_score: (10, 1, 1)

What's the best way to do so? The notebook seems to use a for loop; is there a better way, via a matrix-multiplication kind of operation?
Example using torch.bmm():

    import torch
    from torch.autograd import Variable
    import numpy as np

    seq_len = 10
    rnn_output = torch.rand((1, 1, 256))
    encoder_outputs = torch.rand((seq_len, 1, 256))

    # As computed in the tutorial:
    attn_score = Variable(torch.zeros(seq_len))
    for i in range(seq_len):
        attn_score[i] = rnn_output.squeeze().dot(encoder_outputs[i].squeeze())
    # note: the code would fail without the "squeeze()". I would assume the tensors in
    # the tutorial are actually (,256) and (10, 256)

    # Alternative using batched matrix multiplication (bmm) with some data reformatting first:
    attn_score_v2 = torch.bmm(rnn_output.expand(seq_len, 1, 256),
                              encoder_outputs.view(seq_len, 256, 1)).squeeze()

    # ... Interestingly though, there are some numerical discrepancies between the 2 methods:
    np.testing.assert_array_almost_equal(attn_score.data.numpy(),
                                         attn_score_v2.data.numpy(), decimal=5)
    # AssertionError:
    # Arrays are not almost equal to 5 decimals
    #
    # (mismatch 30.0%)
    #  x: array([60.32436, 69.04288, 72.04784, 70.19503, 71.75543, 67.45459,
    #        63.01708, 71.70189, 63.07552, 67.48799], dtype=float32)
    #  y: array([60.32434, 69.04287, 72.0478 , 70.19504, 71.7554 , 67.4546 ,
    #        63.01709, 71.7019 , 63.07553, 67.488  ], dtype=float32)
https://stackoverflow.com/questions/50570697/
Implementing Luong Attention in PyTorch
I am trying to implement the attention described in Luong et al. 2015 in PyTorch myself, but I couldn't get it to work. Below is my code; I am only interested in the "general" attention case for now. I wonder if I am missing any obvious error. It runs, but doesn't seem to learn.

    class AttnDecoderRNN(nn.Module):
        def __init__(self, hidden_size, output_size, dropout_p=0.1):
            super(AttnDecoderRNN, self).__init__()
            self.hidden_size = hidden_size
            self.output_size = output_size
            self.dropout_p = dropout_p

            self.embedding = nn.Embedding(
                num_embeddings=self.output_size,
                embedding_dim=self.hidden_size
            )
            self.dropout = nn.Dropout(self.dropout_p)
            self.gru = nn.GRU(self.hidden_size, self.hidden_size)
            self.attn = nn.Linear(self.hidden_size, self.hidden_size)
            # hc: [hidden, context]
            self.Whc = nn.Linear(self.hidden_size * 2, self.hidden_size)
            # s: softmax
            self.Ws = nn.Linear(self.hidden_size, self.output_size)

        def forward(self, input, hidden, encoder_outputs):
            embedded = self.embedding(input).view(1, 1, -1)
            embedded = self.dropout(embedded)

            gru_out, hidden = self.gru(embedded, hidden)

            # [0] removes the dimension of directions x layers for now
            attn_prod = torch.mm(self.attn(hidden)[0], encoder_outputs.t())
            attn_weights = F.softmax(attn_prod, dim=1)          # eq. 7/8
            context = torch.mm(attn_weights, encoder_outputs)

            # hc: [hidden: context]
            out_hc = F.tanh(self.Whc(torch.cat([hidden[0], context], dim=1)))  # eq. 5
            output = F.log_softmax(self.Ws(out_hc), dim=1)                     # eq. 6

            return output, hidden, attn_weights

I have studied the attention implemented in https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html and https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb

The first one isn't the exact attention mechanism I am looking for. A major disadvantage is that its attention depends on the sequence length (self.attn = nn.Linear(self.hidden_size * 2, self.max_length)), which could be expensive for long sequences. The second one is more similar to what's described in the paper, but still not the same, as there is no tanh. Besides, it is really slow after updating it to the latest version of pytorch (ref). Also I don't know why it takes the last context (ref).
This version works, and it follows the definition of Luong attention (general) closely. The main difference from the code in the question is the separation of embedding_size and hidden_size, which appears to be important for training after experimentation. Previously, I made both of them the same size (256), which created trouble for learning; it seemed the network could only learn half the sequence.

    class EncoderRNN(nn.Module):
        def __init__(self, input_size, embedding_size, hidden_size,
                     num_layers=1, bidirectional=False, batch_size=1):
            super(EncoderRNN, self).__init__()
            self.hidden_size = hidden_size
            self.num_layers = num_layers
            self.bidirectional = bidirectional
            self.batch_size = batch_size

            self.embedding = nn.Embedding(input_size, embedding_size)
            self.gru = nn.GRU(embedding_size, hidden_size, num_layers,
                              bidirectional=bidirectional)

        def forward(self, input, hidden):
            embedded = self.embedding(input).view(1, 1, -1)
            output, hidden = self.gru(embedded, hidden)
            return output, hidden

        def initHidden(self):
            directions = 2 if self.bidirectional else 1
            return torch.zeros(
                self.num_layers * directions,
                self.batch_size,
                self.hidden_size,
                device=DEVICE
            )

    class AttnDecoderRNN(nn.Module):
        def __init__(self, embedding_size, hidden_size, output_size, dropout_p=0):
            super(AttnDecoderRNN, self).__init__()
            self.embedding_size = embedding_size
            self.hidden_size = hidden_size
            self.output_size = output_size
            self.dropout_p = dropout_p

            self.embedding = nn.Embedding(
                num_embeddings=output_size,
                embedding_dim=embedding_size
            )
            self.dropout = nn.Dropout(self.dropout_p)
            self.gru = nn.GRU(embedding_size, hidden_size)
            self.attn = nn.Linear(hidden_size, hidden_size)
            # hc: [hidden, context]
            self.Whc = nn.Linear(hidden_size * 2, hidden_size)
            # s: softmax
            self.Ws = nn.Linear(hidden_size, output_size)

        def forward(self, input, hidden, encoder_outputs):
            embedded = self.embedding(input).view(1, 1, -1)
            embedded = self.dropout(embedded)

            gru_out, hidden = self.gru(embedded, hidden)

            attn_prod = torch.mm(self.attn(hidden)[0], encoder_outputs.t())
            attn_weights = F.softmax(attn_prod, dim=1)
            context = torch.mm(attn_weights, encoder_outputs)

            # hc: [hidden: context]
            hc = torch.cat([hidden[0], context], dim=1)
            out_hc = F.tanh(self.Whc(hc))
            output = F.log_softmax(self.Ws(out_hc), dim=1)

            return output, hidden, attn_weights
https://stackoverflow.com/questions/50571991/
Why isn't pytorch minimizing x*x for me?
I expect x to converge to 0, which is the minimum of x*x, but this doesn't happen. What am I doing wrong in this small sample code?

    import torch
    from torch.autograd import Variable

    tns = torch.FloatTensor([3])
    x = Variable(tns, requires_grad=True)
    z = x*x
    opt = torch.optim.Adam([x], lr=.01, betas=(0.5, 0.999))
    for i in range(3000):
        z.backward(retain_graph=True)  # Calculate gradients
        opt.step()
    print(x)
The problem you have is that you don't zero the gradients on each loop iteration. Instead, by setting retain_graph=True and not calling opt.zero_grad() at each step of the loop, you are actually adding the newly calculated gradients to ALL previously calculated gradients. So instead of taking a step in gradient descent, you are taking a step with respect to all accumulated gradients, which is certainly NOT what you want.

You should instead make sure to call opt.zero_grad() at the beginning of your loop, and move z = x*x inside your loop so that you don't have to retain_graph. I made these slight modifications:

    import torch
    from torch.autograd import Variable

    tns = torch.FloatTensor([3])
    x = Variable(tns, requires_grad=True)
    opt = torch.optim.Adam([x], lr=.01, betas=(0.5, 0.999))
    for i in range(3000):
        opt.zero_grad()
        z = x*x
        z.backward()  # Calculate gradients
        opt.step()
    print(x)

And my final x is 1e-25.
https://stackoverflow.com/questions/50588958/
PyTorch equivalent of index_add_ that takes the maximum instead
In PyTorch, the index_add_ method of a Tensor does a summation using a provided index tensor:

    idx = torch.LongTensor([0,0,0,0,1,1])
    child = torch.FloatTensor([1, 3, 5, 10, 8, 1])
    parent = torch.FloatTensor([0, 0])
    parent.index_add_(0, idx, child)

The first four child values sum into parent[0] and the next two go into parent[1], so the result is tensor([19., 9.]).

However, I need to do index_max_ instead, which doesn't exist in the API. Is there a way to do it efficiently (without having to loop or allocate more memory)? One (bad) loop solution would be:

    for i in range(max(idx)+1):
        parent[i] = torch.max(child[idx == i])

This produces the desired result of tensor([10., 8.]), but very slowly.
A solution playing with the indices:

    def index_max(child, idx, num_partitions):
        # Building a num_partitions x num_samples matrix `idx_tiled`:
        partition_idx = torch.range(0, num_partitions - 1, dtype=torch.long)
        partition_idx = partition_idx.view(-1, 1).expand(num_partitions, idx.shape[0])
        idx_tiled = idx.view(1, -1).repeat(num_partitions, 1)
        idx_tiled = (idx_tiled == partition_idx).float()
        # i.e. idx_tiled[i,j] == 1 if idx[j] == i, else 0
        parent = idx_tiled * child
        parent, _ = torch.max(parent, dim=1)
        return parent

Benchmarking:

    import timeit

    setup = '''
    import torch

    def index_max_v0(child, idx, num_partitions):
        parent = torch.zeros(num_partitions)
        for i in range(max(idx) + 1):
            parent[i] = torch.max(child[idx == i])
        return parent

    def index_max(child, idx, num_partitions):
        # Building a num_partitions x num_samples matrix `idx_tiled`,
        # i.e. idx_tiled[i,j] == 1 if idx[j] == i, else 0:
        partition_idx = torch.range(0, num_partitions - 1, dtype=torch.long)
        partition_idx = partition_idx.view(-1, 1).expand(num_partitions, idx.shape[0])
        idx_tiled = idx.view(1, -1).repeat(num_partitions, 1)
        idx_tiled = (idx_tiled == partition_idx).float()
        parent = idx_tiled * child
        parent, _ = torch.max(parent, dim=1)
        return parent

    idx = torch.LongTensor([0,0,0,0,1,1])
    child = torch.FloatTensor([1, 3, 5, 10, 8, 1])
    num_partitions = torch.unique(idx).shape[0]
    '''

    print(min(timeit.Timer('index_max_v0(child, idx, num_partitions)', setup=setup).repeat(5, 1000)))
    # > 0.05308796599274501

    print(min(timeit.Timer('index_max(child, idx, num_partitions)', setup=setup).repeat(5, 1000)))
    # > 0.024736385996220633
https://stackoverflow.com/questions/50605205/
Variational Autoencoder gives same output image for every input mnist image when using KL divergence
When not using the KL divergence term, the VAE reconstructs mnist images almost perfectly but fails to generate new ones properly when provided with random noise. When using the KL divergence term, the VAE gives the same weird output both when reconstructing and generating images. Here's the pytorch code for the loss function:

    def loss_function(recon_x, x, mu, logvar):
        BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), size_average=True)
        KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (BCE + KLD)

recon_x is the reconstructed image, x is the original image, mu is the mean vector, and logvar is the vector containing the log of the variance. What is going wrong here? Thanks in advance :)
A possible reason is the numerical imbalance between the two losses: your BCE loss is computed as an average over the batch (c.f. size_average=True), while the KLD is summed.
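One hedged way to act on this observation is to put both terms on the same scale, e.g. summing the BCE as well (PyTorch 0.4-era API, matching the code in the question):

    def loss_function(recon_x, x, mu, logvar):
        # sum over all elements so BCE is on the same scale as the summed KLD
        BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), size_average=False)
        KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return BCE + KLD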
https://stackoverflow.com/questions/50607516/
How can I assign pytorch tensor a matrix from numpy?
Create a WxW tensor:

    x = Variable(torch.FloatTensor(W, W).zero_(), requires_grad=True)

Do some calculations:

    x_copy = x0 = np.copy(x.data.numpy())
    x_upd = handleArray(x_copy)

How can I assign x data from x_upd?
Ok, the solution was to assign the tensor's data from the array:

    x.data = torch.from_numpy(x_upd)

(torch.from_numpy() is a module-level function, not a tensor method; to copy in place instead, use x.data.copy_(torch.from_numpy(x_upd)).)
https://stackoverflow.com/questions/50611420/
Overflow when unpacking long - Pytorch
I am running the following code:

    from __future__ import print_function
    import torch

    x = torch.empty(5, 3)
    print(x)

on an Ubuntu machine in CPU mode, and it gives me the following error. What would be the reason, and how do I fix it?

          x = torch.empty(5, 3)
    ----> print(x)

    /usr/local/lib/python3.6/dist-packages/torch/tensor.py in __repr__(self)
         55         # characters to replace unicode characters with.
         56         if sys.version_info > (3,):
    ---> 57             return torch._tensor_str._str(self)
         58         else:
         59             if hasattr(sys.stdout, 'encoding'):

    /usr/local/lib/python3.6/dist-packages/torch/_tensor_str.py in _str(self)
        216         suffix = ', dtype=' + str(self.dtype) + suffix
        217
    --> 218     fmt, scale, sz = _number_format(self)
        219     if scale != 1:
        220         prefix = prefix + SCALE_FORMAT.format(scale) + ' ' * indent

    /usr/local/lib/python3.6/dist-packages/torch/_tensor_str.py in _number_format(tensor, min_sz)
         94     # TODO: use fmod?
         95     for value in tensor:
    ---> 96         if value != math.ceil(value.item()):
         97             int_mode = False
         98             break

    RuntimeError: Overflow when unpacking long
Since torch.empty() gives uninitialized memory, you may or may not get a large (garbage) value from it, and printing such a value triggers the overflow. Try:

    x = torch.rand(5, 3)
    print(x)

This will print properly initialized values.
https://stackoverflow.com/questions/50617917/
Custom convolution kernel and toroidal convolution in PyTorch
I want to do two things with a PyTorch convolution which aren't mentioned in the documentation or code:

1. I want to create a convolution with a fixed kernel like this:

    000010000
    000010000
    100010001
    000010000
    000010000

The horizontal aspect is like dilation, I guess, but the vertical part is different. I see that dilation is available as a parameter in the code, but it has to be a scalar or single-element tuple (not one element per dimension), so I don't think it can do what I want here.

2. I would like my convolutions to "wrap around" like a toroid, rather than use padding.

EDIT TO ADD: I see that there is an open issue for this, which also provides a suboptimal workaround. So, I guess that there's no "right" way to do it, yet.
Unlike torch.nn.Conv2d() (which instantiates its own trainable kernel), torch.nn.functional.conv2d() takes as parameters both your matrix and kernel, so you can pass it whatever custom kernel you want.

As suggested by @zou3519 in a Github issue (linked to the issue you mentioned yourself), you could implement a 2D circular padding yourself by "repeating the tensor in a nxn grid, then cropping out the part you need":

    def circular_pad_2d(x, pad=(1, 1)):
        # Snippet by @zou3519 (https://github.com/zou3519)
        return x.repeat(*x.shape[:2])[
            (x.shape[0]-pad[0]):(2*x.shape[0]+pad[0]),
            (x.shape[1]-pad[1]):(2*x.shape[1]+pad[1])
        ]

    # Example:
    x = torch.tensor([[1,2,3],[4,5,6]])
    y = circular_pad_2d(x, pad=(2, 3))
    print(y)
    # 1 2 3 1 2 3 1 2 3
    # 4 5 6 4 5 6 4 5 6
    # 1 2 3 1 2 3 1 2 3
    # 4 5 6 4 5 6 4 5 6

(previous answer) In the torch.nn.functional module too, torch.nn.functional.pad() can take as parameter mode='reflect', which is what you want I believe (?). You could use this method to manually pad your input matrix before performing the convolution. (Note: you also have the torch.nn.ReflectionPad2d layer, specifically tailored for fixed 2D padding by reflection.)
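For the reflection route mentioned above, a small usage example; note that reflection padding mirrors values at the border, which is not the same as a true toroidal wrap:

    import torch
    import torch.nn.functional as F

    x = torch.rand(1, 1, 4, 4)  # (batch, channels, H, W)
    # pad (left, right, top, bottom) by reflection instead of zeros
    y = F.pad(x, (1, 1, 1, 1), mode='reflect')
    print(y.shape)  # torch.Size([1, 1, 6, 6])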
https://stackoverflow.com/questions/50635736/
Convert image to proper dimension PyTorch
I have an input image as a numpy array of shape [H, W, C], where H is height, W is width and C is the number of channels. I want to convert it into [B, C, H, W], where B is the batch size (which should be equal to 1 every time), with the channel axis moved into place:

    _image = np.array(_image)
    h, w, c = _image.shape
    image = torch.from_numpy(_image).unsqueeze_(0).view(1, c, h, w)

So, will this preserve the image properly, i.e. without displacing the original image pixel values?
I'd prefer the following, which leaves the original image unmodified and simply adds a new axis as desired:

    _image = np.array(_image)
    image = torch.from_numpy(_image)
    image = image[np.newaxis, :]  # unsqueeze_(0) works fine here too

Then to swap the axes as desired:

    image = image.permute(0, 3, 1, 2)
    # permutation applies the following mapping
    # axis0 -> axis0
    # axis1 -> axis3
    # axis2 -> axis1
    # axis3 -> axis2
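A quick check (with random data) that the pixel values end up where expected after the permutation:

    import numpy as np
    import torch

    h, w, c = 4, 5, 3
    _image = np.random.rand(h, w, c).astype(np.float32)

    image = torch.from_numpy(_image)[np.newaxis, :]  # add batch axis -> [1, H, W, C]
    image = image.permute(0, 3, 1, 2)                # -> [1, C, H, W]

    # the channel vector of pixel (row=1, col=2) must be unchanged
    assert np.allclose(image[0, :, 1, 2].numpy(), _image[1, 2, :])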
https://stackoverflow.com/questions/50657449/
Finding euclidean distance given an index array and a pytorch tensor
I have a pytorch tensor:

    Z = np.random.rand(100, 2)
    tZ = autograd.Variable(torch.cuda.FloatTensor(Z), requires_grad=True)

and an index array:

    idx = (np.array([0, 0, 0, 4, 3, 8], dtype="int64"),
           np.array([0, 1, 2, 3, 7, 4], dtype="int64"))

I need to find the distances of all the pairs of points in my tZ tensor, using the idx array as indexes. Right now I am doing it using numpy, but it would be nice if it could all be done using torch:

    dist = np.linalg.norm(tZ.cpu().data.numpy()[idx[0]] - tZ.cpu().data.numpy()[idx[1]], axis=1)

If anyone knows a way of doing it utilizing pytorch, to speed it up, it would be a great help!
Using torch.index_select():

    Z = np.random.rand(100, 2)
    tZ = autograd.Variable(torch.cuda.FloatTensor(Z), requires_grad=True)
    idx = (np.array([0, 0, 0, 4, 3, 8], dtype="int64"),
           np.array([0, 1, 2, 3, 7, 4], dtype="int64"))

    tZ_gathered = [torch.index_select(tZ, dim=0, index=torch.cuda.LongTensor(idx[i]))
                   # note: you may have to wrap it in a Variable too
                   # (thanks @rvd for the comment):
                   # index = autograd.Variable(torch.cuda.LongTensor(idx[i]))
                   for i in range(len(idx))]

    print(tZ_gathered[0].shape)
    # > torch.Size([6, 2])

    # per-pair distances, matching axis=1 in the numpy version:
    dist = torch.norm(tZ_gathered[0] - tZ_gathered[1], p=2, dim=1)
https://stackoverflow.com/questions/50658111/
Training on minibatches of varying size
I'm trying to train a deep learning model in PyTorch on images that have been bucketed to particular dimensions. I'd like to train my model using mini-batches, but the mini-batch size does not neatly divide the number of examples in each bucket.

One solution I saw in a previous post was to pad the images with additional whitespace (either on the fly or all at once at the beginning of training), but I do not want to do this. Instead, I would like to allow the batch size to be flexible during training. Specifically, if N is the number of images in a bucket and B is the batch size, then for that bucket I would like to get N // B batches if B divides N, and N // B + 1 batches otherwise. The last batch can have fewer than B examples.

As an example, suppose I have indexes [0, 1, ..., 19], inclusive, and I'd like to use a batch size of 3. The indexes [0, 9] correspond to images in bucket 0 (shape (C, W1, H1)); the indexes [10, 19] correspond to images in bucket 1 (shape (C, W2, H2)). (The channel depth is the same for all images.) Then an acceptable partitioning of the indexes would be:

    batches = [
        [0, 1, 2],
        [3, 4, 5],
        [6, 7, 8],
        [9],
        [10, 11, 12],
        [13, 14, 15],
        [16, 17, 18],
        [19]
    ]

I would prefer to process the images indexed at 9 and 19 separately because they have different dimensions.

Looking through PyTorch's documentation, I found the BatchSampler class that generates lists of mini-batch indexes. I made a custom Sampler class that emulates the partitioning of indexes described above. If it helps, here's my implementation:

    class CustomSampler(Sampler):
        def __init__(self, dataset, batch_size):
            self.batch_size = batch_size
            self.buckets = self._get_buckets(dataset)
            self.num_examples = len(dataset)

        def __iter__(self):
            batch = []
            # Process buckets in random order
            dims = random.sample(list(self.buckets), len(self.buckets))
            for dim in dims:
                # Process images in buckets in random order
                bucket = self.buckets[dim]
                bucket = random.sample(bucket, len(bucket))
                for idx in bucket:
                    batch.append(idx)
                    if len(batch) == self.batch_size:
                        yield batch
                        batch = []
                # Yield half-full batch before moving to next bucket
                if len(batch) > 0:
                    yield batch
                    batch = []

        def __len__(self):
            return self.num_examples

        def _get_buckets(self, dataset):
            buckets = defaultdict(list)
            for i in range(len(dataset)):
                img, _ = dataset[i]
                dims = img.shape
                buckets[dims].append(i)
            return buckets

However, when I use my custom Sampler class, I get the following error:

    Traceback (most recent call last):
      File "sampler.py", line 143, in <module>
        for i, batch in enumerate(dataloader):
      File "/home/roflcakzorz/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 263, in __next__
        indices = next(self.sample_iter)  # may raise StopIteration
      File "/home/roflcakzorz/anaconda3/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 139, in __iter__
        batch.append(int(idx))
    TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'

The DataLoader class seems to expect to be passed indexes, not lists of indexes. Should I not be using a custom Sampler class for this task? I also considered making a custom collate_fn to pass to the DataLoader, but with that approach I don't believe I can control which indexes are allowed to be in the same mini-batch. Any guidance would be greatly appreciated.
Do you have 2 networks, one for each of the bucket shapes (a CNN kernel size has to be fixed)? If yes, just pass your custom sampler above to the batch_sampler argument of the DataLoader class (instead of sampler); that will fix the issue.
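A minimal sketch of that wiring, reusing the CustomSampler from the question (the dataset construction is assumed):

    dataset = ...  # the bucketed dataset from the question
    sampler = CustomSampler(dataset, batch_size=3)
    # batch_sampler consumes whole lists of indices per batch,
    # so variable-sized batches are allowed
    dataloader = torch.utils.data.DataLoader(dataset, batch_sampler=sampler)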
https://stackoverflow.com/questions/50663803/
Converting a scipy coo_matrix to pytorch sparse tensor
I have a coo_matrix:

    from scipy.sparse import coo_matrix
    coo = coo_matrix((3, 4), dtype="int8")

that I want converted to a pytorch sparse tensor. According to the documentation (https://pytorch.org/docs/master/sparse.html) it should follow the coo format, but I cannot find a simple way to do the conversion. Any help would be greatly appreciated!
Using the data as in the PyTorch docs, it can be done simply using the attributes of the scipy coo_matrix:

    import torch
    import numpy as np
    from scipy.sparse import coo_matrix

    coo = coo_matrix(([3, 4, 5], ([0, 1, 1], [2, 0, 2])), shape=(2, 3))

    values = coo.data
    indices = np.vstack((coo.row, coo.col))

    i = torch.LongTensor(indices)
    v = torch.FloatTensor(values)
    shape = coo.shape

    torch.sparse.FloatTensor(i, v, torch.Size(shape)).to_dense()

Output:

    0 0 3
    4 0 5
    [torch.FloatTensor of size 2x3]
https://stackoverflow.com/questions/50665141/
Google Colab: "Unable to connect to the runtime" after uploading Pytorch model from local
I am using a simple (not necessarily efficient) method for Pytorch model saving:

    import torch
    from google.colab import files

    torch.save(model, filename)  # save a trained model on the VM
    files.download(filename)     # download the model to local
    best_model = files.upload()  # select the model just downloaded
    best_model[filename]         # access the model

Colab disconnects during execution of the last line, and hitting RECONNECT always shows ALLOCATING -> CONNECTING (which fails, with an "unable to connect to the runtime" message in the bottom-left corner) -> RECONNECT. At the same time, executing any one of the cells gives the error message "Failed to execute cell, Could not send execute message to runtime: [object CloseEvent]".

I know it is related to the last line, because I can successfully connect with my other google accounts, which don't execute it. Why does it happen? It seems the google accounts which have executed the last line can no longer connect to the runtime.

Edit: One night later, I can reconnect with the google account after session expiration. I just attempted the approach in the comment, and found that just calling files.upload() on the Pytorch model would lead to the problem. Once the upload completes, Colab disconnects.
(I wrote this answer before reading your update; I think it may still help.)

files.upload() is just for uploading files. We have no reason to expect it to return some pytorch type/model. When you call a = files.upload(), a is a dictionary of filename -> a big bytes array:

    {'my_image.png': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR....'}

    type(a['my_image.png'])

just like when you do open('my_image.png', 'rb').read().

So I think the next line, best_model[filename], tries to print the whole huge bytes array, which bugs the colab.
https://stackoverflow.com/questions/50675219/
Beta Distribution in PyTorch for a, b>1?
PyTorch supports Beta distributions; however, when alpha or beta is greater than 1, it doesn't work:

    m = Beta(torch.tensor([2]), torch.tensor([2]))
    m.sample()
It works as expected using FloatTensor with torch==0.4.0 (torch.tensor([2]) infers an integer dtype, while the Beta parameters need to be floating point):

    import torch
    from torch.distributions import Beta

    m = Beta(torch.FloatTensor([2]), torch.FloatTensor([2]))
    m.sample()
https://stackoverflow.com/questions/50686080/
Missing Weight Vectors when converting from PyTorch to CoreML via ONNX
I am trying to convert a PyTorch model to CoreML via ONNX, but the ONNX-->CoreML conversion is missing weight vectors?

I am following the tutorial here, which makes this statement:

    Step 3: Converting the model to CoreML
    It's as easy as running the convert function. The resulting object is a coremltools
    MLModel object that you can save to a file and import in XCode later.
    cml = onnx_coreml.convert(model)

Unfortunately, when I try to do this it fails horribly. Here's my code:

    # convert.py
    import torch
    import torch.onnx
    from torch.autograd import Variable
    import onnx
    from onnx_coreml import convert
    from hourglass_model import Hourglass

    model_no = 1
    torch_model = Hourglass(joint_count=14, size=256)
    state_dict = torch.load("hourglass_model_{}.model".format(model_no))
    torch_model.load_state_dict(state_dict)
    torch_model.train(False)
    torch_model.eval()

    # Dummy input to the model
    x = Variable(torch.randn(1, 3, 256, 256, dtype=torch.float32))

    # Export the model
    onnx_filename = "test_hourglass.onnx"
    torch_out = torch.onnx.export(torch_model, x, onnx_filename, export_params=False)

    # Load back in ONNX model
    onnx_model = onnx.load(onnx_filename)

    # Check that the IR is well formed
    onnx.checker.check_model(onnx_model)

    # Print a human readable representation of the graph
    graph = onnx.helper.printable_graph(onnx_model.graph)
    print(graph)

    coreml_model = convert(onnx_model,
                           add_custom_layers=True,
                           image_input_names=["input"],
                           image_output_names=["output"])
    coreml_model.save('test_hourglass.mlmodel')

Here's what the print(graph) line gives:

    graph torch-jit-export (
      %0[FLOAT, 1x3x256x256]
      %1[FLOAT, 64x3x5x5]
      %2[FLOAT, 64]
      %3[FLOAT, 64x64x5x5]
      %4[FLOAT, 64]
      %5[FLOAT, 64x64x5x5]
      %6[FLOAT, 64]
      %7[FLOAT, 64x64x5x5]
      %8[FLOAT, 64]
      %9[FLOAT, 64x64x5x5]
      %10[FLOAT, 64]
      %11[FLOAT, 64x64x5x5]
      %12[FLOAT, 64]
      %13[FLOAT, 64x64x5x5]
      %14[FLOAT, 64]
      %15[FLOAT, 64x64x1x1]
      %16[FLOAT, 64]
      %17[FLOAT, 14x64x1x1]
      %18[FLOAT, 14]
    ) {
      %19 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%0, %1, %2)
      %20 = Relu(%19)
      %21 = MaxPool[kernel_shape = [4, 4], pads = [0, 0, 0, 0], strides = [4, 4]](%20)
      %22 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%21, %3, %4)
      %23 = Relu(%22)
      %24 = MaxPool[kernel_shape = [4, 4], pads = [0, 0, 0, 0], strides = [4, 4]](%23)
      %25 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%24, %5, %6)
      %26 = Relu(%25)
      %27 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%26, %7, %8)
      %28 = Relu(%27)
      %29 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%28, %9, %10)
      %30 = Relu(%29)
      %31 = Upsample[height_scale = 4, mode = 'nearest', width_scale = 4](%30)
      %32 = Add(%31, %23)
      %33 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%32, %11, %12)
      %34 = Relu(%33)
      %35 = Upsample[height_scale = 4, mode = 'nearest', width_scale = 4](%34)
      %36 = Add(%35, %20)
      %37 = Conv[dilations = [1, 1], group = 1, kernel_shape = [5, 5], pads = [2, 2, 2, 2], strides = [1, 1]](%36, %13, %14)
      %38 = Relu(%37)
      %39 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%38, %15, %16)
      %40 = Relu(%39)
      %41 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [1, 1]](%40, %17, %18)
      %42 = Relu(%41)
      return %42
    }

And this is the error message:

    1/24: Converting Node Type Conv
    Traceback (most recent call last):
      File "convert.py", line 38, in <module>
        image_output_names=["output"])
      File "/Users/stephenf/Developer/miniconda3/envs/pytorch/lib/python3.6/site-packages/onnx_coreml/converter.py", line 396, in convert
        _convert_node(builder, node, graph, err)
      File "/Users/stephenf/Developer/miniconda3/envs/pytorch/lib/python3.6/site-packages/onnx_coreml/_operators.py", line 994, in _convert_node
        return converter_fn(builder, node, graph, err)
      File "/Users/stephenf/Developer/miniconda3/envs/pytorch/lib/python3.6/site-packages/onnx_coreml/_operators.py", line 31, in _convert_conv
        "Weight tensor: {} not found in the graph initializer".format(weight_name,))
      File "/Users/stephenf/Developer/miniconda3/envs/pytorch/lib/python3.6/site-packages/onnx_coreml/_error_utils.py", line 71, in missing_initializer
        format(node.op_type, node.inputs[0], node.outputs[0], err_message)
    ValueError: Missing initializer error in op of type Conv, with input name = 0, output name = 19. Error message: Weight tensor: 1 not found in the graph initializer

From what I can gather, it says the weight tensor %1[FLOAT, 64x3x5x5] is missing. This is how I'm saving the model:

    torch.save(model.state_dict(), "hourglass_model_{}.model".format(epoch))

ONNX loads it fine - it's just the step where I'm converting from ONNX to CoreML. Any help in figuring this out would be greatly appreciated. I'm sure I've done a bunch of other things wrong, but I just need this thing to export for now. Thanks,
You are calling torch.onnx.export with export_params=False, which, as the 0.3.1 docs note, saves the model architecture without the actual parameter tensors. The more recent documentation doesn't spell this out, but we can infer it from the Weight tensor not found error that you are getting. Try it with export_params=True; you should see how the saved model's size increases notably. Glad it helped! Andres
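A minimal sketch of the corrected export call, reusing the torch_model, x, and ONNX filename from the question:
import torch

torch_out = torch.onnx.export(torch_model, x, "test_hourglass.onnx",
                              export_params=True)  # embed the trained weights as graph initializers
# The exported graph now carries the Conv weight tensors, so onnx_coreml.convert() can find them.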
https://stackoverflow.com/questions/50689203/
PyTorch Tutorial Error Training a Classifier
I just started the PyTorch tutorial Deep Learning with PyTorch: A 60 Minute Blitz, and I should add that I haven't programmed in Python before (only other languages like Java). Right now, my code looks like import torch import torchvision import torchvision.transforms as transforms import matplotlib.pyplot as plt import numpy as np print("\n-------------------Backpropagation-------------------\n") transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True,download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') dataiter = iter(trainloader) images, labels = dataiter.next() def imshow(img): img = img / 2 + 0.5 npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) imshow(torchvision.utils.make_grid(images)) print(' '.join('%5s' % classes[labels[j]] for j in range(4))) which should be consistent with the tutorial. If I execute this, I'll get the following error: "C:\Program Files\Anaconda3\python.exe" C:/MA/pytorch/deepLearningWithPytorchTutorial/trainingClassifier.py -------------------Backpropagation------------------- Files already downloaded and verified Files already downloaded and verified -------------------Backpropagation------------------- Files already downloaded and verified Files already downloaded and verified Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main exitcode = _main(fd) File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main prepare(preparation_data) File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare _fixup_main_from_path(data['init_main_from_path']) File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path run_name="__mp_main__") File "C:\Program Files\Anaconda3\lib\runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "C:\Program Files\Anaconda3\lib\runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "C:\Program Files\Anaconda3\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\MA\pytorch\deepLearningWithPytorchTutorial\trainingClassifier.py", line 23, in <module> dataiter = iter(trainloader) File "C:\Program Files\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__ return _DataLoaderIter(self) File "C:\Program Files\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__ w.start() File "C:\Program Files\Anaconda3\lib\multiprocessing\process.py", line 105, in start self._popen = self._Popen(self) File "C:\Program Files\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Program Files\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__ prep_data = spawn.get_preparation_data(process_obj._name) File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data _check_not_importing_main() File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main is not going to be frozen to produce an executable.''') RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. Traceback (most recent call last): File "C:/MA/pytorch/deepLearningWithPytorchTutorial/trainingClassifier.py", line 23, in <module> dataiter = iter(trainloader) File "C:\Program Files\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__ return _DataLoaderIter(self) File "C:\Program Files\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__ w.start() File "C:\Program Files\Anaconda3\lib\multiprocessing\process.py", line 105, in start self._popen = self._Popen(self) File "C:\Program Files\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Program Files\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "C:\Program Files\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__ reduction.dump(process_obj, to_child) File "C:\Program Files\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) BrokenPipeError: [Errno 32] Broken pipe Process finished with exit code 1 I already downloaded the *.py and *.ipynb. Running the *.ipynb with Jupyter works fine (but I don't want to program in the Jupyter web interface, I prefer PyCharm), while the *.py in the console (Anaconda prompt and cmd) fails with the same error. Does anyone know how to fix this? (I'm using Python 3.6.5 (from Anaconda) and PyCharm, OS: Win10 64-bit) Thanks! Bene Update: If it is relevant: I just changed num_workers=2 to num_workers=0 (both), and then it works.
Check out the documentation for multiprocessing: programming guidelines for Windows. You should wrap all operations in functions and then call them inside an if __name__ == '__main__' clause: # required imports def load_datasets(...): # Code to load the datasets with multiple workers def train(...): # Code to train the model if __name__ == '__main__': load_datasets() train() In short, the idea here is to wrap the example code inside an if __name__ == '__main__' statement; a concrete sketch follows below.
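As a minimal sketch, applying this guard to the question's script would look roughly like:
import torch
import torchvision
import torchvision.transforms as transforms

def main():
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)
    # worker processes are only spawned once execution is inside the guard
    images, labels = next(iter(trainloader))
    print(images.shape)

if __name__ == '__main__':
    main()
On Windows, child processes re-import the main module, so any module-level code (including the DataLoader iteration) would run again in every worker unless it sits behind the guard.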
https://stackoverflow.com/questions/50701690/
How to reshape a 3D tensor of shape (3, 1, 2) to (1, 2, 3)
I tried (Pdb) aa = torch.tensor([[[1,2]], [[3,4]], [[5,6]]]) (Pdb) aa.shape torch.Size([3, 1, 2]) (Pdb) aa tensor([[[ 1, 2]], [[ 3, 4]], [[ 5, 6]]]) (Pdb) aa.view(1, 2, 3) tensor([[[ 1, 2, 3], [ 4, 5, 6]]]) But what I really want is tensor([[[ 1, 3, 5], [ 2, 4, 6]]]) How? In my application, I am trying to transform my input data of shape (L, N, C_in) to (N, C_in, L) in order to use Conv1d, where L: sequence length N: batch size C_in: number of channels in the input (I also understand it as the dimensionality of the input at each position of a sequence). I am also wondering why the input of Conv1d doesn't have the same shape as the input of GRU.
You can permute the axes to the desired shape. (This is similar to the numpy.moveaxis() operation.) In [90]: aa Out[90]: tensor([[[ 1, 2]], [[ 3, 4]], [[ 5, 6]]]) In [91]: aa.shape Out[91]: torch.Size([3, 1, 2]) # pass the desired ordering of the axes as argument # assign the result back to some tensor since permute returns a "view" In [97]: permuted = aa.permute(1, 2, 0) In [98]: permuted.shape Out[98]: torch.Size([1, 2, 3]) In [99]: permuted Out[99]: tensor([[[ 1, 3, 5], [ 2, 4, 6]]])
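Applied to the Conv1d use case from the question, a small sketch (sizes are illustrative):
import torch
import torch.nn as nn

L, N, C_in = 5, 4, 3                  # sequence length, batch size, channels
x = torch.randn(L, N, C_in)           # the (L, N, C_in) layout from the question
x_conv = x.permute(1, 2, 0)           # -> (N, C_in, L), the layout Conv1d expects
conv = nn.Conv1d(C_in, 8, kernel_size=3, padding=1)
y = conv(x_conv)                      # -> (N, 8, L)
print(y.shape)                        # torch.Size([4, 8, 5])
So yes, Conv1d and GRU simply expect different axis orders, and permute() (not view()) is the right tool here, because it reorders axes instead of reinterpreting the underlying memory.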
https://stackoverflow.com/questions/50710037/
CIFAR-10 Meaningless Normalization Values
I tried to build a neural network for the CIFAR-10 database. I used the PyTorch framework. I have a question about the data loading step. transform_train = T.Compose([ T.RandomCrop(32, padding=4), T.RandomHorizontalFlip(), T.ToTensor(), T.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ]) trainset = tv.datasets.CIFAR10(root=root, train=True, download=True, transform=transform_train) This is the usual data loading step. While loading the data, I am normalizing the values. At the beginning of my project I found the row below. T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) After I searched for better transform values, I found these values. T.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)) I did not find an explanation of why these values are used. Do you have an explanation for these values?
I think you can have a look here: the first three values are the per-channel means computed over the CIFAR-10 training set, while the second triple are the per-channel standard deviations.
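For reference, the statistics can be recomputed from the training set itself. A sketch (it loads all 50k images into memory, roughly 600 MB):
import torch
import torchvision
import torchvision.transforms as T

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=T.ToTensor())
data = torch.stack([img for img, _ in trainset])          # (50000, 3, 32, 32), values in [0, 1]
per_channel = data.transpose(0, 1).contiguous().view(3, -1)
print(per_channel.mean(dim=1))   # ~ (0.4914, 0.4822, 0.4465)
print(per_channel.std(dim=1))    # ~ (0.247, 0.243, 0.261); the (0.2023, 0.1994, 0.2010)
                                 # triple also circulates and comes from a slightly
                                 # different aggregation of the same data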
https://stackoverflow.com/questions/50710493/
Resize PyTorch Tensor
I am currently using the tensor.resize() function to resize a tensor to a new shape t = t.resize(1, 2, 3). This gives me a deprecation warning: non-inplace resize is deprecated Hence, I wanted to switch over to the tensor.resize_() function, which seems to be the appropriate in-place replacement. However, this leaves me with a cannot resize variables that require grad error. I can fall back to from torch.autograd._functions import Resize Resize.apply(t, (1, 2, 3)) which is what tensor.resize() does in order to avoid the deprecation warning. This doesn't seem like an appropriate solution but rather a hack to me. How do I correctly make use of tensor.resize_() in this case?
You can instead choose to go with tensor.reshape(new_shape) or torch.reshape(tensor, new_shape) as in: # a `Variable` tensor In [15]: ten = torch.randn(6, requires_grad=True) # this would throw RuntimeError error In [16]: ten.resize_(2, 3) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-16-094491c46baa> in <module>() ----> 1 ten.resize_(2, 3) RuntimeError: cannot resize variables that require grad The above RuntimeError can be resolved or avoided by using tensor.reshape(new_shape) In [17]: ten.reshape(2, 3) Out[17]: tensor([[-0.2185, -0.6335, -0.0041], [-1.0147, -1.6359, 0.6965]]) # yet another way of changing tensor shape In [18]: torch.reshape(ten, (2, 3)) Out[18]: tensor([[-0.2185, -0.6335, -0.0041], [-1.0147, -1.6359, 0.6965]])
https://stackoverflow.com/questions/50718045/
How to share a list of tensors in PyTorch multiprocessing?
I am programming with PyTorch multiprocessing. I want all the subprocesses to be able to read/write the same list of tensors (no resizing). For example the variable can be m = [torch.randn(3), torch.randn(5)] Because the tensors have different sizes, I cannot organize them into a single tensor. A Python list has no share_memory_() function, and multiprocessing.Manager cannot handle a list of tensors. How can I share the variable m among multiple subprocesses?
I found the solution myself. It is pretty straightforward. Just call share_memory_() for each list element. The list itself is not in shared memory, but the list elements are. Demo code import torch.multiprocessing as mp import torch def foo(worker,tl): tl[worker] += (worker+1) * 1000 if __name__ == '__main__': tl = [torch.randn(2), torch.randn(3)] for t in tl: t.share_memory_() print("before mp: tl=") print(tl) p0 = mp.Process(target=foo, args=(0, tl)) p1 = mp.Process(target=foo, args=(1, tl)) p0.start() p1.start() p0.join() p1.join() print("after mp: tl=") print(tl) Output before mp: tl= [ 1.5999 2.2733 [torch.FloatTensor of size 2] , 0.0586 0.6377 -0.9631 [torch.FloatTensor of size 3] ] after mp: tl= [ 1001.5999 1002.2733 [torch.FloatTensor of size 2] , 2000.0586 2000.6377 1999.0370 [torch.FloatTensor of size 3] ]
https://stackoverflow.com/questions/50735493/
PyTorch: how to implement disconnection (connections and corresponding gradients are masked)?
I am trying to implement the following graph. As you can see, the neurons are not fully connected, i.e., the weights are masked and so are their corresponding gradients. import torch import numpy as np x = torch.rand((3, 1)) # tensor([[ 0.8525], # [ 0.1509], # [ 0.9724]]) weights = torch.rand((2, 3), requires_grad=True) # tensor([[ 0.3240, 0.0792, 0.6858], # [ 0.5248, 0.4565, 0.3625]]) mask = torch.Tensor([[0,1,0],[1,0,1]]) # tensor([[ 0., 1., 0.], # [ 1., 0., 1.]]) mask_weights = weights * mask # tensor([[ 0.0000, 0.0792, 0.0000], # [ 0.5248, 0.0000, 0.3625]]) y = torch.mm(mask_weights, x) # tensor([[ 0.0120], # [ 0.7999]]) This question was originally posted on the PyTorch forum. Note that the above way, mask_weights = weights * mask, is NOT suitable, since the corresponding gradients are not 0. Is there an elegant way to do that, please? Thank you in advance.
Actually, the above method is correct. The disconnections essentially block feed-forward and back-propagation on the corresponding connections. In other words, weights and gradients are masked. The code in the question demonstrates the former, while this answer demonstrates the latter. mask_weights.register_hook(print) z = torch.Tensor([[1], [1]]) # tensor([[ 1.], # [ 1.]]) out = (y-z).mean() # tensor(-0.6595) out.backward() # tensor([[ 0.1920, 0.1757, 0.0046], # [ 0.1920, 0.1757, 0.0046]]) weights.grad # tensor([[ 0.0000, 0.1757, 0.0000], # [ 0.1920, 0.0000, 0.0046]]) As you can see, the gradients of weights are masked automatically.
https://stackoverflow.com/questions/50740557/
PyTorch: When using backward(), how can I retain only part of the graph?
I have a PyTorch computational graph, which consists of a sub-graph performing some calculation, and the result of this calculation (let's call it x) is then branched into two other sub-graphs. Each of these two sub-graphs yields some scalar results (lets call them y1 and y2). I want to do a backward pass for each of these two results (that is, I want to accumulate the gradients of the two sub-graphs. I do not want to perform the actual optimization step). Now, since memory is an issue here, I would like to perform the operations in the following order: First, calculate x. Then, calculate y1, and perform y1.backward() while (and this is the key point) retaining the graph that leads to x, but freeing the graph from x to y1. Then, calculate y2, and perform y2.backward(). In other words, to save memory without sacrificing too much speed, I want to keep x without needing to recalculate it, but I want to drop all calculations leading from x to y1 after I don't need them anymore. The problem is that the argument retain_graph of the function backward() will retain the entire graph leading to y1, whereas I need to retain only the part of the graph leading to x. Here is an example of what I would ideally want: import torch w = torch.tensor(1.0) w.requires_grad_(True) # sub-graph for calculating `x` x = w+10 # sub-graph for calculating `y1` x1 = x*x y1 = x1*x1 y1.backward(retain_graph=x) # this would not work, since retain_graph is a boolean and can either retain the entire graph or free it. # sub-graph for calculating `y2` x2 = torch.sqrt(x) y2 = x2/2 y2.backward() How can this be done?
The argument retain_graph will retain the entire graph, not just a sub-graph. However, we can use garbage collection to free unneeded parts of the graph. By removing all references to the sub-graph from x to y1, this sub-graph will be freed: import torch w = torch.tensor(1.0) w.requires_grad_(True) # sub-graph for calculating `x` x = w+10 # sub-graph for calculating `y1` x1 = x*x y1 = x1*x1 y1.backward(retain_graph=True) # the whole graph is retained # remove unneeded parts of the graph. Note that these parts will be freed from memory (even if they were on GPU), due to Python's garbage collection y1 = None x1 = None # sub-graph for calculating `y2` x2 = torch.sqrt(x) y2 = x2/2 y2.backward()
https://stackoverflow.com/questions/50741344/
Gradient error when calculating - pytorch
I am learning to use PyTorch (0.4.0) to automate the gradient calculation; however, I did not quite understand how to use backward() and grad. In an exercise I need to calculate df/dw with PyTorch and also derive it analytically, returning auto_grad and user_grad respectively, but I did not quite understand the use of automatic differentiation. In the code I call f.backward() and read w.grad to find df/dw, yet the two calculations do not match (unless I got the derivative itself wrong). Below are the graph I am using and the code I am trying to write: import numpy as np import torch import torch.nn.functional as F def graph2(W_np, x_np, b_np): W = torch.Tensor(W_np) W.requires_grad = True x = torch.Tensor(x_np) b = torch.Tensor(b_np) u = torch.matmul(W, x) + b g = F.sigmoid(u) f = torch.sum(g) user_grad = (sigmoid(W_np*x_np + b_np)*(1 - sigmoid(W_np*x_np + b_np))).T*x_np f.backward(retain_graph=True) auto_grad = W.grad print(auto_grad) print(user_grad) # raise NotImplementedError("need to complete the graph2 function") # END YOUR CODE return f, auto_grad, user_grad test: iterations = 1000 sizes = np.random.randint(2,10, size=(iterations)) for i in range(iterations): size = sizes[i] W_np = np.random.rand(size, size) x_np = np.random.rand(size, 1) b_np = np.random.rand(size, 1) f, auto_grad, user_grad = graph2(W_np, x_np, b_np) manual_f = np.sum(sigmoid(np.matmul(W_np, x_np) + b_np)) assert np.isclose(f.data.numpy(), manual_f, atol=1e-4), "f not correct" assert np.allclose(auto_grad.numpy(), user_grad), "Gradient not correct"
I think you computed the gradients in the wrong way. Try this. import numpy as np import torch from torch.autograd import Variable import torch.nn.functional as F def sigmoid(x): return 1.0 / (1.0 + np.exp(-x)) def graph2(W_np, x_np, b_np): W = Variable(torch.Tensor(W_np), requires_grad=True) x = torch.tensor(x_np, requires_grad=True).type(torch.FloatTensor) b = torch.tensor(b_np, requires_grad=True).type(torch.FloatTensor) u = torch.matmul(W, x) + b g = F.sigmoid(u) f = torch.sum(g) user_grad = (sigmoid(np.matmul(W_np, x_np) + b_np)*(1 - sigmoid(np.matmul(W_np, x_np) + b_np)))*x_np.T f.backward(retain_graph=True) auto_grad = W.grad print("auto_grad", auto_grad) print("user_grad", user_grad) # END YOUR CODE return f, auto_grad, user_grad iterations = 1000 sizes = np.random.randint(2,10, size=(iterations)) for i in range(iterations): size = sizes[i] print("i, size", i, size) W_np = np.random.rand(size, size) x_np = np.random.rand(size, 1) b_np = np.random.rand(size, 1) f, auto_grad, user_grad = graph2(W_np, x_np, b_np) manual_f = np.sum(sigmoid(np.matmul(W_np, x_np) + b_np)) assert np.isclose(f.data.numpy(), manual_f, atol=1e-4), "f not correct" assert np.allclose(auto_grad.numpy(), user_grad), "Gradient not correct"
https://stackoverflow.com/questions/50750463/
Pytorch: How is the .grad() function returning this result?
I am trying to understand the grad function in PyTorch. I know about backpropagation, but I have some doubts about the result in .grad. So if I have a very simple network, say with one single input and one single weight: import torch from torch.autograd import Variable from torch import FloatTensor a_tensor=Variable(FloatTensor([1])) weight=Variable(FloatTensor([1]),requires_grad=True) Now I am running this in an IPython cell: net_out=a_tensor*weight loss=5-net_out loss.backward() print("atensor",a_tensor) print('weight',weight) print('net_out',net_out) print('loss',loss) print(weight.grad) During the first run it returns: atensor tensor([ 1.]) weight tensor([ 1.]) net_out tensor([ 1.]) loss tensor([ 4.]) tensor([-1.]) Which is correct, because if I am right, the gradient computation here would be: d(net_out)/dw of (w*a) w.r.t. w ==> 1*a, and d(loss)/d(net_out) of (5-net_out) w.r.t. net_out ==> (0-1), which would be 1*a*(-1) ==> -1. But the problem is that if I run the same cell again without modifying anything, then I get grad -2, -3, -4, etc. atensor tensor([ 1.]) weight tensor([ 1.]) net_out tensor([ 1.]) loss tensor([ 4.]) tensor([-2.]) next run: atensor tensor([ 1.]) weight tensor([ 1.]) net_out tensor([ 1.]) loss tensor([ 4.]) tensor([-3.]) and so on. I don't understand what's happening there: why and how is the value of grad increasing?
This is because you are not zeroing the gradients. What loss.backward() does is accumulate gradients - it adds gradients to existing ones. If you don't zero the gradient, then running loss.backward() over and over just keeps adding the gradients to each other. What you want to do is zero the gradients after each step, and you will see that the gradients are calculated correctly. If you have built a network net (which should be an nn.Module object), you can zero the gradients simply by calling net.zero_grad(). If you haven't built a net (or a torch.optim object) you will have to zero the gradients yourself manually. Use the weight.grad.data.zero_() method there, as sketched below.
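A minimal sketch continuing the question's snippet:
net_out = a_tensor * weight
loss = 5 - net_out
loss.backward()
print(weight.grad)         # tensor([-1.])

weight.grad.data.zero_()   # reset the accumulated gradient before the next backward
net_out = a_tensor * weight
loss = 5 - net_out
loss.backward()
print(weight.grad)         # tensor([-1.]) again, instead of -2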
https://stackoverflow.com/questions/50751689/
Adding modules in Pytorch Custom Module
Is it considered bad practice to add modules to a custom pytorch nn.Module using self.add_module()? All of the documentation seems to assign the layers to properties, then access them in the forward() method. For example: class ConvLayer(nn.Module): def __init__(self): super().__init__() self.add_module('conv',nn.Conv2d(...)) self.add_module('bn',nn.BatchNorm2d(...))
Calling add_module will add an entry to the _modules dict. The Module class also overwrites __getattr__ so that when you try to access a layer, it will look inside the _modules dict, despite the fact that the layer is not actually an attribute of the object. But from the user's perspective, it doesn't make a difference whether module.layer returns an actual property or simply an element from some private dict.
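A small sketch illustrating the equivalence:
import torch.nn as nn

class ConvLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3)            # registered via __setattr__
        self.add_module('bn', nn.BatchNorm2d(16))  # registered explicitly

layer = ConvLayer()
print(layer._modules.keys())  # odict_keys(['conv', 'bn']) - both land in the same dict
print(layer.bn)               # __getattr__ fetches it from _modules
So neither style is bad practice; attribute assignment is simply the more common idiom, while add_module() is useful when layer names are generated programmatically.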
https://stackoverflow.com/questions/50753038/
What does data.norm() < 1000 do in PyTorch?
I am following the PyTorch tutorial here. It says that x = torch.randn(3, requires_grad=True) y = x * 2 while y.data.norm() < 1000: y = y * 2 print(y) Out: tensor([-590.4467, 97.6760, 921.0221]) Could someone explain what data.norm() does here? When I change .randn to .ones its output is tensor([ 1024., 1024., 1024.]).
It's simply the L2 norm (a.k.a Euclidean norm) of the tensor. Below is a reproducible illustration: In [15]: x = torch.randn(3, requires_grad=True) In [16]: y = x * 2 In [17]: y.data Out[17]: tensor([-1.2510, -0.6302, 1.2898]) In [18]: y.data.norm() Out[18]: tensor(1.9041) # computing the norm using elementary operations In [19]: torch.sqrt(torch.sum(torch.pow(y, 2))) Out[19]: tensor(1.9041) Explanation: First, it takes a square of every element in the input tensor x, then it sums them together, and finally it takes a square root of the resulting sum. All in all, these operations compute the so-called L2 or Euclidean norm.
https://stackoverflow.com/questions/50753477/
Debugging GAN convergence error
Building a GAN to generate images. The images have 3 color channels, 96 x 96. The images that are generated by the generator at the beginning are all black, which is an issue given that is statistically highly unlikely. Also, the loss for both networks is not improving. I have posted the entire code below, and commented to allow it to be easily read. This is my first time building a GAN and I am new to Pytorch so any help is very appreciated! Thanks. import torch from torch.optim import Adam from torch.utils.data import DataLoader from torch.autograd import Variable import numpy as np import os import cv2 from collections import deque # training params batch_size = 100 epochs = 1000 # loss function loss_fx = torch.nn.BCELoss() # processing images X = deque() for img in os.listdir('pokemon_images'): if img.endswith('.png'): pokemon_image = cv2.imread(r'./pokemon_images/{}'.format(img)) if pokemon_image.shape != (96, 96, 3): pass else: X.append(pokemon_image) # data loader for processing in batches data_loader = DataLoader(X, batch_size=batch_size) # covert output vectors to images if flag is true, else input images to vectors def images_to_vectors(data, reverse=False): if reverse: return data.view(data.size(0), 3, 96, 96) else: return data.view(data.size(0), 27648) # Generator model class Generator(torch.nn.Module): def __init__(self): super(Generator, self).__init__() n_features = 1000 n_out = 27648 self.model = torch.nn.Sequential( torch.nn.Linear(n_features, 128), torch.nn.ReLU(), torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, n_out), torch.nn.Tanh() ) def forward(self, x): img = self.model(x) return img def noise(self, s): x = Variable(torch.randn(s, 1000)) return x # Discriminator model class Discriminator(torch.nn.Module): def __init__(self): super(Discriminator, self).__init__() n_features = 27648 n_out = 1 self.model = torch.nn.Sequential( torch.nn.Linear(n_features, 512), torch.nn.ReLU(), torch.nn.Linear(512, 256), torch.nn.ReLU(), torch.nn.Linear(256, n_out), torch.nn.Sigmoid() ) def forward(self, img): output = self.model(img) return output # discriminator training def train_discriminator(discriminator, optimizer, real_data, fake_data): N = real_data.size(0) optimizer.zero_grad() # train on real # get prediction pred_real = discriminator(real_data) # calculate loss error_real = loss_fx(pred_real, Variable(torch.ones(N, 1))) # calculate gradients error_real.backward() # train on fake # get prediction pred_fake = discriminator(fake_data) # calculate loss error_fake = loss_fx(pred_fake, Variable(torch.ones(N, 0))) # calculate gradients error_fake.backward() # update weights optimizer.step() return error_real + error_fake, pred_real, pred_fake # generator training def train_generator(generator, optimizer, fake_data): N = fake_data.size(0) # zero gradients optimizer.zero_grad() # get prediction pred = discriminator(generator(fake_data)) # get loss error = loss_fx(pred, Variable(torch.ones(N, 0))) # compute gradients error.backward() # update weights optimizer.step() return error # Instance of generator and discriminator generator = Generator() discriminator = Discriminator() # optimizers g_optimizer = torch.optim.Adam(generator.parameters(), lr=0.001) d_optimizer = torch.optim.Adam(discriminator.parameters(), lr=0.001) # training loop for epoch in range(epochs): for n_batch, batch in enumerate(data_loader, 0): N = batch.size(0) # Train Discriminator # REAL real_images = 
Variable(images_to_vectors(batch)).float() # FAKE fake_images = generator(generator.noise(N)).detach() # TRAIN d_error, d_pred_real, d_pred_fake = train_discriminator( discriminator, d_optimizer, real_images, fake_images ) # Train Generator # generate noise fake_data = generator.noise(N) # get error based on discriminator g_error = train_generator(generator, g_optimizer, fake_data) # convert generator output to image and preprocess to show test_img = np.array(images_to_vectors(generator(fake_data), reverse=True).detach()) test_img = test_img[0, :, :, :] test_img = test_img[..., ::-1] # show example of generated image cv2.imshow('GENERATED', test_img[0]) if cv2.waitKey(1) & 0xFF == ord('q'): break print('EPOCH: {0}, D error: {1}, G error: {2}'.format(epoch, d_error, g_error)) cv2.destroyAllWindows() # save weights # torch.save('weights.pth')
One can't really easily debug your training without the data and so on, but a possible problem is that your generator's last layer is a Tanh(), which means output values between -1 and 1. You probably want: To have your real images normalized to the same range, e.g. in train_discriminator(): # train on real pred_real = discriminator(real_data * 2. - 1.) # supposing real_data in [0, 1] To re-normalize your generated data to [0, 1] before visualization/use. # convert generator output to image and preprocess to show test_img = np.array( images_to_vectors(generator(fake_data), reverse=True).detach()) test_img = test_img[0, :, :, :] test_img = test_img[..., ::-1] test_img = (test_img + 1.) / 2.
https://stackoverflow.com/questions/50762466/
Pytorch not using cuda device
I have the following code: from __future__ import print_function import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import numpy as np import scipy.io folder = 'small/' mat = scipy.io.loadmat(folder+'INISTATE.mat'); ini_state = np.float32(mat['ini_state']); ini_state = torch.from_numpy(ini_state); ini_state = ini_state.cuda(); mat = scipy.io.loadmat(folder+'TARGET.mat'); target = np.float32(mat['target']); target = torch.from_numpy(target); target = target.cuda(); class MLPNet(nn.Module): def __init__(self): super(MLPNet, self).__init__() self.fc1 = nn.Linear(3, 64) self.fc2 = nn.Linear(64, 128) self.fc3 = nn.Linear(128, 128) self.fc4 = nn.Linear(128, 41) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = self.fc4(x) return x def name(self): return "MLP" model = MLPNet(); model = model.cuda(); criterion = nn.MSELoss(); criterion = criterion.cuda(); learning_rate = 0.001; optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) batch_size = 20 iter_size = int(target.size(0)/batch_size) print(iter_size) for epoch in range(50): for i in range(iter_size): start = i*batch_size; end = (i+1)*batch_size-1; samples = ini_state[start:end,:]; labels = target[start:end,:]; optimizer.zero_grad() # zero the gradient buffer outputs = model(samples) loss = criterion(outputs, labels) loss.backward() optimizer.step() if (i+1) % 500 == 0: print("Epoch %s, batch %s, loss %s" % (epoch, i, loss)) if (epoch+1) % 7 == 0: for g in optimizer.param_groups: g['lr'] = g['lr']*0.1; But when I train the simple MLP, the CPU usage is around 100% while the gpu is only around 10%. What is the problem that prevents using the GPU?
Actually your model indeed runs on GPU instead of CPU. The reason for the low GPU usage is that both your model and batch size are small, which demands low computational cost. You may try increasing the batch size to around 1000, and the GPU usage should be higher. In fact PyTorch prevents operations that mix CPU and GPU data, e.g., you can't multiply a GPU tensor and a CPU tensor. So usually it is unlikely that part of your network runs on CPU and the other part runs on GPU, unless you deliberately design it. By the way, data shuffling is necessary for neural networks. As you are using mini-batch training, in each iteration you are hoping that the mini batch approximates the whole dataset. Without data shuffling, it is likely that samples in a mini batch are highly correlated, which leads to a biased estimation of the parameter update. The data loader provided by PyTorch can help you do the data shuffling; a sketch follows below.
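A sketch of the suggested change, reusing the ini_state, target, model, criterion and optimizer names from the question:
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(ini_state, target)
loader = DataLoader(dataset, batch_size=1000, shuffle=True)  # bigger batches, shuffled

for epoch in range(50):
    for samples, labels in loader:
        optimizer.zero_grad()
        outputs = model(samples)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()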
https://stackoverflow.com/questions/50771001/
Pytorch - handling pictures and .jpeg files (beginner's questions)
I am new to Pytorch, and have a couple of questions regarding the way pictures are being handled: 1) In the "training a classifier" tutorial, the pictures are PIL files, and are being handled via the following commands (where "transform" also turns the PIL format into a tensor format): trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) It seems like trainset[1] (and likewise the other indices) consists of a tensor and a number. I want to define a new variable "image" that will consist of the tensor part of trainset[1] and then print it - how can I do it? 2) Assume that I have a different dataset that I want to classify. It consists of .jpeg images that are located in the folder "C:/temp/dataset". How can I define the variable "trainset" to consist of these images? Thanks a lot in advance!
For your first question: image = trainset[1][0] print(image) For your second question: from PIL import Image import numpy as np import os def load_image(infilename): """This function loads an image into memory when you give it the path of the image """ img = Image.open(infilename) img.load() data = np.asarray(img, dtype="float32") return data def create_npy_from_image(images_folder, output_name, num_images, image_dim): """Loops through the images in a folder and saves all of them as a numpy array in output_name """ image_matrix = np.empty((num_images, image_dim, image_dim, 3), dtype=np.float32) for i, filename in enumerate(os.listdir(images_folder)): if filename.endswith(".jpg"): data = load_image(images_folder + filename) image_matrix[i] = data else: continue np.save(output_name, image_matrix) So I would write something like this: create_npy_from_image(path_to_images_folder, "trainset.npy", numer_of_images_in_your_folder, DIM) DIM is 64 for example if your images are 64x64x3 You can then load the saved array with np.load and then convert it to a pytorch tensor using from_numpy function. Let me know if this works. Good luck!
https://stackoverflow.com/questions/50772128/
Understanding the Input Parameters in RNN
I'm having a hard time understanding the different "jargons" used for RNNs. They are the following: batch_size, time_steps, inputs and instances. Let me go through my understanding of each input parameter & please correct me where I'm wrong. Suppose I've got a sequence of numbers and I want to predict the next number. The numbers are the following: [1,2,3,4,5,....,100] time_steps: This parameter means how far the RNN will look into the past before it predicts the future. For simplicity, I want to predict 1 number ahead, and I want to do so after seeing 10 numbers from the past. So, in this case, time_steps will be 10. inputs: These are the values at each time_step. In the first time_steps the inputs are t0: [1] t1: [2] . . . t10: [10] batch_size: This helps in efficient computation of the RNN model. Suppose my batch_size is 2. In that case, at time_step 2, the RNN input will be t0: [1] t0: [11] Then what's the usage of instances? E.g. in this post, instances have been used. And there are multiple cases where instances are used. Does it mean each loop over a batch? E.g. there are 5 batches, each of size 2. Then there will be 5 instances. Please help me correct my understanding. Thanks!
Alright pal, you did good learning those concepts. I had a hard time learning those correctly. Everything you know seems to be in order, and as for "instances": they're basically a set of data. There's no fixed usage of the term "instances" in the deep learning community. Some people use it to refer to a different set of data or to batches of data. I rarely hear it in papers.
https://stackoverflow.com/questions/50773509/
How to calculate pixel-wise accuracy in pytorch?
My code looks like the following, and I get accuracy values from 0 to 9000, which means it's clearly not working. optimizer.zero_grad() outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.item() predicted = outputs.data predicted = predicted.to('cpu') predicted_img = predicted.numpy() labels_data = labels.data labels_data = labels_data.to('cpu') labels_data = labels_data.numpy() labels = labels.to(device) _, predicted = torch.max(outputs.data, 1) total = labels.size(0) * labels.size(1) * labels.size(2) correct = (predicted_img == labels_data).sum().item() accuracy += ( correct / total) avg_accuracy = accuracy/(batch) What am I doing wrong?
I am assuming the following line accumulates accuracy over mini-batches. accuracy += (correct/total) And avg_accuracy = accuracy/batch gives the average accuracy over the entire dataset, where batch represents the total number of mini-batches representing the whole dataset. If you are getting accuracy greater than 100, then you should check whether in any mini-batch you get correct > total. Also check whether total = labels_data.size gives you the same value as the following line. total = labels.size(0) * labels.size(1) * labels.size(2) A minimal sketch of the per-batch computation follows below.
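A minimal sketch of a pixel-wise accuracy computation for one mini-batch, assuming labels holds (N, H, W) class indices on the same device as outputs:
_, predicted = torch.max(outputs.data, 1)       # (N, H, W) predicted class per pixel
correct = (predicted == labels).sum().item()
total = labels.numel()                          # N * H * W in one call
batch_accuracy = correct / total                # always in [0, 1]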
https://stackoverflow.com/questions/50773842/
Pytorch - Pick best probability after softmax layer
I have a logistic regression model using Pytorch 0.4.0, where my input is high-dimensional and my output must be a scalar - 0, 1 or 2. I'm using a linear layer combined with a softmax layer to return an n x 3 tensor, where each column represents the probability of the input falling in one of the three classes (0, 1 or 2). However, I must return an n x 1 tensor, so I need to somehow pick the highest probability for each input and create a tensor indicating which class had the highest probability. How can I achieve this using Pytorch? To illustrate, my Softmax outputs this: [[0.2, 0.1, 0.7], [0.6, 0.2, 0.2], [0.1, 0.8, 0.1]] And I must return this: [[2], [0], [1]]
torch.argmax() is probably what you want: import torch x = torch.FloatTensor([[0.2, 0.1, 0.7], [0.6, 0.2, 0.2], [0.1, 0.8, 0.1]]) y = torch.argmax(x, dim=1) print(y.detach()) # tensor([ 2, 0, 1]) # If you want the n x 1 shape: y = y.view(-1, 1) print(y.detach()) # tensor([[ 2], [ 0], [ 1]])
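If you want the n x 1 shape in a single call, argmax also accepts keepdim (a small sketch):
y = torch.argmax(x, dim=1, keepdim=True)
print(y.detach())  # tensor([[ 2], [ 0], [ 1]])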
https://stackoverflow.com/questions/50776548/
Pytorch - meaning of a command in a basic "forward" pass
I am new to Pytorch, and will be glad if someone is able to help me understand the following (and correct me if I am wrong), regarding the meaning of the command x.view in the first Pytorch tutorial, and in general about the input of convolutional layers and the input of fully-connected layers: def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x As far as I understand, an input 256X256 image to a convolutional layer is inserted in its 2D form (i.e. a 256X256 matrix, or 256X256X3 in the case of a color image). Nevertheless, when we insert an image into a fully-connected linear layer, we need to first reshape the 2D image into a 1D vector (am I right? Is this true also in general, or only in Pytorch?). Is this why we use the command “x = x.view(-1, 16 * 5 * 5)” before inserting x into the fully-connected layers? If the input image x were 3D (e.g. 256X256X256), would the syntax of the “forward” function given above remain the same? Thanks a lot in advance
It's from Petteri Nevavuori's lecture notes and shows how a feature map is produced from an image I with a kernel K. With each application of the kernel a dot product is calculated, which effectively is the sum of element-wise multiplications between I and K in a K-sized area within I. You could say that the kernel looks for diagonal features. It then searches the image and finds a perfectly matching feature in the lower left corner. Otherwise the kernel is able to identify only parts of the feature it's looking for. This is why the product is called a feature map, as it tells how well a kernel was able to identify a feature in any location of the image it was applied to. Answer adapted from: https://discuss.pytorch.org/t/convolution-input-and-output-channels/10205/3 Let's say we consider an input image of shape (W x H x 3) where the input volume has 3 channels (RGB image). Now we would like to create a ConvLayer for this image. Each kernel in the ConvLayer will use all input channels of the input volume. Let’s assume we would like to use a 3 by 3 kernel. This kernel will have 27 weights and 1 bias parameter, since (kernel_W * kernel_H * input_channels = 3 * 3 * 3 = 27 weights). The number of output channels is the number of different kernels used in the ConvLayer. If we would like to output 64 channels, we need to define the ConvLayer such that it uses 64 different 3x3 kernels. If you check out the documentation of Conv2d, we can define a ConvLayer mimicking the above scenario as follows. nn.Conv2d(3, 64, 3, stride=1) Where in_channels = 3, out_channels = 64, kernel_size = 3x3. Check out what stride is in the documentation. If you check out the implementation of the Linear layer, you would see the underlying mathematical equation that a linear operation mimics is: y = Ax + b. According to the pytorch documentation of the linear layer, we can see it expects an input of shape (N,∗,in_features) and the output is of shape (N,∗,out_features). So, in your case, if the input image x is of shape 256 x 256 x 256, and you want to transform all the (256*256*256) features to a specific number of features, you can define a linear layer as: llayer = nn.Linear(256*256*256, num_features)
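A small sketch of the flattening step itself, with the shapes from the tutorial:
import torch
import torch.nn as nn

x = torch.randn(4, 16, 5, 5)     # (batch, channels, H, W) coming out of conv2/pool
x = x.view(-1, 16 * 5 * 5)       # -> (4, 400); -1 lets PyTorch infer the batch size
fc1 = nn.Linear(16 * 5 * 5, 120)
out = fc1(x)                     # (4, 120)
So yes: convolutional layers consume (N, C, H, W) tensors, fully-connected layers consume (N, features) tensors, and view() is the bridge between the two. This holds in general, not only in PyTorch. For a 3D volume you would use nn.Conv3d and flatten the resulting (C, D, H, W) block the same way before the first linear layer.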
https://stackoverflow.com/questions/50777675/
AttributeError: module 'torch' has no attribute "device"
---> 13 device = torch.device({"cuda"} if torch.cuda.is_available() else {"cpu"}) 14 15 AttributeError: module 'torch' has no attribute 'device' I'm 99% sure this is because I didn't upgrade PyTorch from 0.3.1 to 0.4; however, I can't upgrade PyTorch for now. I need to translate .device (0.4) to something that works in 0.3.1. I checked the migration document; however, it doesn't explain how to convert torch.device retroactively. Please help!
torch.cuda.device() is a context manager. torch.cuda.set_device(0) # On device 0 with torch.cuda.device(1): print("Inside device is 1") # On device 1 print("Outside is still 0") # On device 0 And the above works from version 0.2.
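For the question's specific pattern, a 0.3-compatible sketch avoids torch.device entirely and branches on availability (model and x stand in for whatever module and tensor you are moving):
use_cuda = torch.cuda.is_available()
if use_cuda:
    model = model.cuda()   # optionally model.cuda(device_id)
    x = x.cuda()
# .cuda()/.cpu() already exist in 0.3.1, so no torch.device object is needed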
https://stackoverflow.com/questions/50781020/
Invalid device ordinal, CUDA / TORCH
I am getting this error on running the script in Ubuntu 16.04. Please bear with me, I am new to Python. I have checked the already available options on the internet but I couldn't fix it. RuntimeError: cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:32 I am currently running this file. from __future__ import print_function from models import LipRead import torch import toml from training import Trainer from validation import Validator print("Loading options...") with open('options.toml', 'r') as optionsFile: options = toml.loads(optionsFile.read()) if(options["general"]["usecudnnbenchmark"] and options["general"] ["usecudnn"]): print("Running cudnn benchmark...") torch.backends.cudnn.benchmark = True #Create the model. model = LipRead(options) if(options["general"]["loadpretrainedmodel"]): model.load_state_dict(torch.load(options["general"] ["pretrainedmodelpath"])) #Move the model to the GPU. if(options["general"]["usecudnn"]): model = model.cuda(options["general"]["gpuid"]) trainer = Trainer(options) validator = Validator(options) for epoch in range(options["training"]["startepoch"], options["training"]["epochs"]): if(options["training"]["train"]): trainer.epoch(model, epoch) if(options["validation"]["validate"]): validator.epoch(model) And I suspect this file has something to do with the error that popped up: Title = "TOML Example" [general] usecudnn = true usecudnnbenchmark = true gpuid = 0 loadpretrainedmodel = true pretrainedmodelpath = "trainedmodel.pt" savemodel = true modelsavepath = "savedmodel.pt" [input] batchsize = 18 numworkers = 18 shuffle = true [model] type = "LSTM" inputdim = 256 hiddendim = 256 numclasses = 500 numlstms = 2 [training] train = true epochs = 15 startepoch = 10 statsfrequency = 1000 dataset = "/udisk/pszts-ssd/AV-ASR-data/BBC_Oxford/lipread_mp4" learningrate = 0.003 momentum = 0.9 weightdecay = 0.0001 [validation] validate = true dataset = "/udisk/pszts-ssd/AV-ASR-data/BBC_Oxford/lipread_mp4" saveaccuracy = true accuracyfilelocation = "accuracy.txt" The error seems to come from the gpuid line, as far as I have been able to trace it.
The pre-trained weights might be mapped to a different gpuid. If a model pre-trained on multiple CUDA devices is small enough, it might be possible to run it on a single GPU. This is assuming that at least a batch of size 1 fits in the available GPU and RAM. #WAS model.load_state_dict(torch.load(final_model_file, map_location={'cuda:0':'cuda:1'})) #IS model.load_state_dict(torch.load(final_model_file, map_location={'cuda:0':'cuda:0'}))
https://stackoverflow.com/questions/50783853/
What does -1 mean in pytorch view?
As the question says, what does -1 do in pytorch view? >>> a = torch.arange(1, 17) >>> a tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16.]) >>> a.view(1,-1) tensor([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16.]]) >>> a.view(-1,1) tensor([[ 1.], [ 2.], [ 3.], [ 4.], [ 5.], [ 6.], [ 7.], [ 8.], [ 9.], [ 10.], [ 11.], [ 12.], [ 13.], [ 14.], [ 15.], [ 16.]]) Does it (-1) generate an additional dimension? Does it behave the same as -1 in numpy reshape?
Yes, it does behave like -1 in numpy.reshape(), i.e. the actual value for this dimension will be inferred so that the number of elements in the view matches the original number of elements. For instance: import torch x = torch.arange(6) print(x.view(3, -1)) # inferred size will be 2 as 6 / 3 = 2 # tensor([[ 0., 1.], # [ 2., 3.], # [ 4., 5.]]) print(x.view(-1, 6)) # inferred size will be 1 as 6 / 6 = 1 # tensor([[ 0., 1., 2., 3., 4., 5.]]) print(x.view(1, -1, 2)) # inferred size will be 3 as 6 / (1 * 2) = 3 # tensor([[[ 0., 1.], # [ 2., 3.], # [ 4., 5.]]]) # print(x.view(-1, 5)) # throw error as there's no int N so that 5 * N = 6 # RuntimeError: invalid argument 2: size '[-1 x 5]' is invalid for input with 6 elements print(x.view(-1, -1, 3)) # throw error as only one dimension can be inferred # RuntimeError: invalid argument 1: only one dimension can be inferred
https://stackoverflow.com/questions/50792316/
Pytorch: how to make the trainloader use a specific amount of images?
Assume I am using the following calls: trainset = torchvision.datasets.ImageFolder(root="imgs/", transform=transform) trainloader = torch.utils.data.DataLoader(trainset,batch_size=4,shuffle=True,num_workers=1) As far as I can tell, this defines the trainset as consisting of all the images in the folder "imgs/", with labels as defined by the specific folder location. My question is - Is there any direct/easy way to define the trainset to be a sub-sample of the images in this folder? For example, define trainset to be a random sample of 10 images from every sub-folder? Thanks in advance
You can wrap the class DatasetFolder (or ImageFolder) in another class to limit the dataset: import torch.utils.data as data class LimitDataset(data.Dataset): def __init__(self, dataset, n): self.dataset = dataset self.n = n def __len__(self): return self.n def __getitem__(self, i): return self.dataset[i] You can also define some mapping between the index in LimitDataset and the index in the original dataset to define more complex behavior (such as random subsets); a Subset-based sketch follows below. If you want to limit the batches per epoch instead of the dataset size: from itertools import islice for data in islice(dataloader, 0, batches_per_epoch): ... Note that if you use this with shuffling, the dataset size will be the same, but the data each epoch sees will be limited. If you don't shuffle the dataset, this will also effectively limit the dataset size.
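If your PyTorch version is recent enough, torch.utils.data.Subset gives the random-subset behavior directly. A sketch (the index selection is up to you; for 10 images per sub-folder you would first group indices by class label, which ImageFolder exposes via its imgs attribute):
import random
from torch.utils.data import DataLoader, Subset

indices = random.sample(range(len(trainset)), 100)  # e.g. 100 random images overall
small_trainset = Subset(trainset, indices)
trainloader = DataLoader(small_trainset, batch_size=4, shuffle=True)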
https://stackoverflow.com/questions/50798172/
Error in pip install torchvision on Windows 10
on pytorch, installing on Windows 10, conda and Cuda 9.0. cmd did not complain when I ran conda install pytorch cuda90 -c pytorch, then when I ran pip3 install torchvision I get this error message. Requirement already satisfied: torchvision in PATHTOFILE\python35\lib\site-packages (0.2.1) Requirement already satisfied: numpy in PATHTOFILE\python35\lib\site-packages (from torchvision) (1.12.0+mkl) Requirement already satisfied: six in PATHTOFILE\python35\lib\site-packages (from torchvision) (1.10.0) Collecting pillow>=4.1.1 (from torchvision) Using cached https://files.pythonhosted.org/packages/ab/d2/d27a21bd3e64db1ca1dc7dc16026a16d77f5c3ffca9ec619eddeea7c47ce/Pillow-5.1.0-cp35-cp35m-win_amd64.whl Collecting torch (from torchvision) Using cached https://files.pythonhosted.org/packages/5f/e9/bac4204fe9cb1a002ec6140b47f51affda1655379fe302a1caef421f9846/torch-0.1.2.post1.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\USERNAME~1\AppData\Local\Temp\pip-install-a70g611u\torch\setup.py", line 11, in <module> raise RuntimeError(README) RuntimeError: PyTorch does not currently provide packages for PyPI (see status at https://github.com/pytorch/pytorch/issues/566). Please follow the instructions at http://pytorch.org/ to install with miniconda instead. ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in C:\Users\USERNAME~1\AppData\Local\Temp\pip-install-a70g611u\torch\ Anyone got this error?
Fixed it by running the following pip3 install http://download.pytorch.org/whl/cu90/torch-0.4.0-cp35-cp35m-win_amd64.whl pip3 install torchvision This weirdly fixes the problem. No idea why. Next time just try to run everything on pip
https://stackoverflow.com/questions/50812838/
Pytorch - Subclasses of torchvision.dataset.ImageFolder - Import Error
Following my last post, I am now trying to implement a subclass of the torchvision.datasets.ImageFolder class. The following code returns an error ("name 'default_loader' is not defined"), and I can't figure out why. Will you please help me? class ExtendingImageFolder(torchvision.datasets.ImageFolder): def __init__(self,root,transform=None, target_transform=None,loader=default_loader): super().__init__(root,transform,target_transform,loader) When I delete the "None" and "default_loader", and write it like this: class ExtendingImageFolder(torchvision.datasets.ImageFolder): def __init__(self,root,transform, target_transform,loader): super().__init__(root,transform,target_transform,loader) I get an error of missing input arguments when trying to create an instance of this class, like: JJ=ExtendingImageFolder(root='C:/',transform=transform) What am I doing wrong here? Thanks in advance!
default_loader() is a function defined in torchvision/datasets/folder.py, alongside ImageFolder and other folder-based dataset helpers. It is however not exported in torchvision/datasets/__init__.py (unlike ImageFolder). You can still import it directly with "from torchvision.datasets.folder import default_loader" - which should solve your import error. A corrected sketch of the subclass follows below.
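Putting it together (transform is assumed to be defined as in the question):
import torchvision
from torchvision.datasets.folder import default_loader

class ExtendingImageFolder(torchvision.datasets.ImageFolder):
    def __init__(self, root, transform=None, target_transform=None,
                 loader=default_loader):
        super().__init__(root, transform, target_transform, loader)

JJ = ExtendingImageFolder(root='C:/', transform=transform)
The second variant failed simply because dropping the defaults made transform, target_transform and loader required positional arguments.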
https://stackoverflow.com/questions/50817964/
pytorch 0.4.0 broadcasting doesn't work in optimizer
I can't seem to get broadcasting to work with autograd in pytorch 0.4.0! Any help appreciated. Below is a minimal code example that reproduces my problem. I would like to find a single value "bias", which minimizes the loss over the dataset. The understand the error message as it wants to backpropagate a vector with 5 entries into a scalar, which it cannot figure out. However, this is the whole idea of broadcasting. The behavior I expected was that it would propagate the mean of the error back to the broadcasted scalar value (here bias). Please advice. import numpy as np import torch from torch import nn import torch.nn.functional as F from torch.utils.data import Dataset print(torch.__version__) class AddBias(torch.autograd.Function): @staticmethod def forward(ctx, input, bias): ctx.save_for_backward(input, bias) return input - bias @staticmethod def backward(ctx, grad_out): input, bias = ctx.saved_tensors grad_in = grad_bias = None len_grad = len(ctx.needs_input_grad) assert len_grad in {0, 1, 2} if ctx.needs_input_grad[0]: grad_in = grad_out if len_grad == 2: grad_bias = -1 * grad_out return grad_in, grad_bias class BiasModel(nn.Module): def __init__(self, size): super(BiasModel, self).__init__() self.bias_model = AddBias.apply self.bias = nn.Parameter(torch.tensor(0.5, dtype=torch.float, requires_grad=True)) def forward(self, arr): return self.bias_model(arr[:], self.bias).unsqueeze(-1) class MyData(Dataset): def __init__(self, data): self.data = data def __len__(self): return len(self.data) def __getitem__(self, i): arr = torch.tensor(data[i], dtype=torch.float) target = torch.tensor(arr > 0.5, dtype=torch.float).unsqueeze(-1) return arr, target m = 5 data = np.random.random((100, m)) model = BiasModel(m) my_data = MyData(data) loss_func = F.binary_cross_entropy_with_logits with torch.no_grad(): loss = 0. for arr, target in my_data: loss += loss_func(model(arr), target) print('loss before', loss / len(my_data)) optimizer = torch.optim.SGD(model.parameters(), lr=0.1) loss_tot = 0. for arr, target in my_data: model.zero_grad() loss = loss_func(model(arr), target) loss_tot += loss loss.backward() optimizer.step() Output: 0.4.0 loss before tensor(0.5735) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-4-27bce65b553b> in <module>() 56 loss_tot += loss 57 loss.backward() ---> 58 optimizer.step() ~/miniconda3/envs/myproject/lib/python3.6/site-packages/torch/optim/sgd.py in step(self, closure) 105 d_p = buf 106 --> 107 p.data.add_(-group['lr'], d_p) 108 109 return loss RuntimeError: expand(torch.FloatTensor{[5]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1)
I forgot to do a reverse broadcasting in the backward pass! Specifically, I had to change if len_grad == 2: grad_bias = -1 * grad_out to if len_grad == 2: grad_bias = -1 * torch.mean(grad_out)
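One note for anyone adapting this: strictly speaking, the gradient of a value broadcast across m positions is the sum of the incoming gradients over those positions; taking the mean rescales it by 1/m, which often still trains, just with an implicitly smaller effective learning rate. A sketch of the exact reduction:
if len_grad == 2:
    grad_bias = -1 * grad_out.sum()   # sum over the broadcast dimension, not mean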
https://stackoverflow.com/questions/50826045/
Why do we do batch matrix-matrix product?
I'm following the PyTorch seq2seq tutorial, and the torch.bmm method is used like below: attn_applied = torch.bmm(attn_weights.unsqueeze(0), encoder_outputs.unsqueeze(0)) I understand why we need to multiply the attention weights and encoder outputs. What I don't quite understand is the reason why we need the bmm method here. The torch.bmm documentation says: Performs a batch matrix-matrix product of matrices stored in batch1 and batch2. batch1 and batch2 must be 3-D tensors each containing the same number of matrices. If batch1 is a (b×n×m) tensor, batch2 is a (b×m×p) tensor, out will be a (b×n×p) tensor.
In the seq2seq model, the encoder encodes the input sequences given in as mini-batches. Say for example, the input is B x S x d where B is the batch size, S is the maximum sequence length and d is the word embedding dimension. Then the encoder's output is B x S x h where h is the hidden state size of the encoder (which is an RNN). Now while decoding (during training) the input sequences are given one at a time, so the input is B x 1 x d and the decoder produces a tensor of shape B x 1 x h. Now to compute the context vector, we need to compare this decoder hidden state with the encoder's encoded states. So, consider you have two tensors of shape T1 = B x S x h and T2 = B x 1 x h. So if you can do batch matrix multiplication as follows. out = torch.bmm(T1, T2.transpose(1, 2)) Essentially you are multiplying a tensor of shape B x S x h with a tensor of shape B x h x 1 and it will result in B x S x 1 which is the attention weight for each batch. Here, the attention weights B x S x 1 represent a similarity score between the decoder's current hidden state and encoder's all the hidden states. Now you can take the attention weights to multiply with the encoder's hidden state B x S x h by transposing first and it will result in a tensor of shape B x h x 1. And if you perform squeeze at dim=2, you will get a tensor of shape B x h which is your context vector. This context vector (B x h) is usually concatenated to decoder's hidden state (B x 1 x h, squeeze dim=1) to predict the next token.
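A small sketch of the shapes involved (names and sizes are illustrative):
import torch
import torch.nn.functional as F

B, S, h = 2, 4, 3
encoder_states = torch.randn(B, S, h)               # all encoder hidden states
decoder_state = torch.randn(B, 1, h)                # current decoder hidden state

scores = torch.bmm(encoder_states, decoder_state.transpose(1, 2))  # (B, S, 1)
attn_weights = F.softmax(scores, dim=1)                            # normalize over S
context = torch.bmm(encoder_states.transpose(1, 2), attn_weights)  # (B, h, 1)
context = context.squeeze(2)                                       # (B, h)
print(context.shape)
The point of bmm is exactly that all of this happens per batch element in one call, without a Python loop over the batch.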
https://stackoverflow.com/questions/50826644/
How to perform sum pooling in PyTorch
How to perform sum pooling in PyTorch. Specifically, if we have input (N, C, W_in, H_in) and want output (N, C, W_out, H_out) using a particular kernel_size and stride just like nn.Maxpool2d ?
You could use torch.nn.AvgPool1d (or torch.nn.AvgPool2d, torch.nn.AvgPool3d), which perform mean pooling, and mean pooling is proportional to sum pooling. If you really want the summed values, you can multiply the averaged output by the pooling surface, as sketched below.
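A minimal sketch for the 2D case:
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)                  # (N, C, H_in, W_in)
k = 2
avg = nn.AvgPool2d(kernel_size=k, stride=k)
sum_pooled = avg(x) * (k * k)                # multiply by the pooling surface
# sanity check against an explicit block sum
assert torch.allclose(sum_pooled[0, 0, 0, 0], x[0, 0, :k, :k].sum())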
https://stackoverflow.com/questions/50838876/
pytorch seq2seq encoder forward method
I'm following the PyTorch seq2seq tutorial, and below is how they define the encoder function. class EncoderRNN(nn.Module): def __init__(self, input_size, hidden_size): super(EncoderRNN, self).__init__() self.hidden_size = hidden_size self.embedding = nn.Embedding(input_size, hidden_size) self.gru = nn.GRU(hidden_size, hidden_size) def forward(self, input, hidden): embedded = self.embedding(input).view(1, 1, -1) output = embedded output, hidden = self.gru(output, hidden) return output, hidden def initHidden(self): return torch.zeros(1, 1, self.hidden_size, device=device) However, it seems like the forward method is never really called during training. Here is how the encoder forward method is being used in the tutorial: for ei in range(input_length): encoder_output, encoder_hidden = encoder(input_tensor[ei], encoder_hidden) encoder_outputs[ei] = encoder_output[0, 0] Isn't it supposed to be encoder.forward instead of just encoder? Is there some automatic 'forward' mechanism in PyTorch that I am not aware of?
In PyTorch, you write your own class by extending torch.nn.Module and define the forward method to express your desired computational steps. Other steps that serve as the "paperwork" (e.g. calling hooks) are handled in the model.__call__(...) method, which is what model(x) invokes, per Python's special-method conventions. If you are curious, you can look at what model(x) does behind the scenes beyond calling model.forward(x) here: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L462 Also, you can see the difference between explicitly calling the .forward(x) method and simply using model(x) here: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L72
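As a minimal illustration (a toy module, not from the tutorial):

import torch
import torch.nn as nn

class Doubler(nn.Module):
    def forward(self, x):
        return 2 * x

m = Doubler()
x = torch.ones(3)
print(m(x))          # goes through nn.Module.__call__, which runs hooks, then forward
print(m.forward(x))  # same numbers, but bypasses the __call__ "paperwork"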
https://stackoverflow.com/questions/50847438/
Is it possible to use a machine learning library with streaming inputs and outputs?
I want to incorporate machine learning into a project I've been working on, but I haven't seen anything about my intended use case. It seems like the old pandoras box project did something like this but with textual input and output. I want to train a model in real time as well as use it (and then switch it from testing to live API endpoints when it works well enough). But every library I've found works like "feed in a data blob, get an answer". I want to be able to stream data into it: instead of giving it "5,4,3,4,3,2,3,4,5" and it says "1" or "-1" or "0", I want to give it "5" then "4" then "3" then "4" etc. and have it respond each time. I'm not even sure if "streaming" is the right word for this. Please help!
It sounds like a use case for recurrent neural networks, which translate sequences (your stream) into single outputs or other sequences. This is a well-explored approach, e.g., in natural language processing. TensorFlow has support for different flavors of such nets.
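In PyTorch, for example, a recurrent cell can consume one element at a time while carrying its hidden state between calls. A rough, untrained sketch (all sizes and names are illustrative):

import torch
import torch.nn as nn

cell = nn.GRUCell(input_size=1, hidden_size=8)
readout = nn.Linear(8, 1)

h = torch.zeros(1, 8)  # initial hidden state for a batch of one stream
for value in [5.0, 4.0, 3.0, 4.0]:  # the incoming stream, one element at a time
    x = torch.tensor([[value]])
    h = cell(x, h)   # update the state with the new element
    y = readout(h)   # emit a response after every element
    print(y.item())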
https://stackoverflow.com/questions/50850497/
Taking the last state from BiLSTM (BiGRU) in PyTorch
After reading several articles, I am still quite confused about the correctness of my implementation of getting last hidden states from BiLSTM. Understanding Bidirectional RNN in PyTorch (TowardsDataScience) PackedSequence for seq2seq model (PyTorch forums) What's the difference between “hidden” and “output” in PyTorch LSTM? (StackOverflow) Select tensor in a batch of sequences (PyTorch forums) The approach from the last source (4) seems to be the cleanest for me, but I am still uncertain if I understood the thread correctly. Am I using the right final hidden states from the LSTM and the reversed LSTM? This is my implementation # pos contains indices of words in embedding matrix # seqlengths contains info about sequence lengths # so for instance, if batch_size is 2 and pos=[4,6,9,3,1] and # seqlengths contains [3,2], we have batch with samples # of variable length [4,6,9] and [3,1] all_in_embs = self.in_embeddings(pos) in_emb_seqs = pack_sequence(torch.split(all_in_embs, seqlengths, dim=0)) output,lasthidden = self.rnn(in_emb_seqs) if not self.data_processor.use_gru: lasthidden = lasthidden[0] # u_emb_batch has shape batch_size x embedding_dimension # sum last state from forward and backward direction u_emb_batch = lasthidden[-1,:,:] + lasthidden[-2,:,:] Is it correct?
In the general case, if you want to create your own BiLSTM network, you need to create two regular LSTMs, and feed one with the regular input sequence and the other with the inverted input sequence. After you finish feeding both sequences, you just take the last states from both nets and somehow tie them together (sum or concatenate). As I understand, you are using the built-in BiLSTM as in this example (setting bidirectional=True in the nn.LSTM constructor). Then you get the concatenated output after feeding the batch, as PyTorch handles all the hassle for you. If that is the case, and you want to sum the hidden states, then you have to u_emb_batch = (lasthidden[0, :, :] + lasthidden[1, :, :]) assuming you have only one layer. If you have more layers, your variant seems better. This is because the result is structured (see documentation): h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len By the way, u_emb_batch_2 = output[-1, :, :HIDDEN_DIM] + output[0, :, HIDDEN_DIM:] should provide the same result: the forward direction's final state sits at the last time step of output, while the backward direction's final state sits at the first time step (the backward pass finishes at t = 0).
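A small self-check of that equivalence (one layer, no packing, equal-length sequences; sizes are illustrative):

import torch
import torch.nn as nn

HIDDEN_DIM = 5
rnn = nn.LSTM(input_size=3, hidden_size=HIDDEN_DIM, bidirectional=True)
x = torch.randn(7, 2, 3)  # seq_len x batch x features
output, (h_n, c_n) = rnn(x)

# forward final state == forward half of the LAST time step of output
assert torch.allclose(h_n[0], output[-1, :, :HIDDEN_DIM])
# backward final state == backward half of the FIRST time step of output
assert torch.allclose(h_n[1], output[0, :, HIDDEN_DIM:])

u_emb_batch = h_n[0] + h_n[1]  # batch_size x hidden_size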
https://stackoverflow.com/questions/50856936/
PyTorch custom dataset dataloader returns strings (of keys) not tensors
I am trying to load my own dataset and I use a custom Dataloader that reads in images and labels and converts them to PyTorch Tensors. However when the Dataloader is instantiated it returns strings x "image" and y "labels" but not the real values or tensors when read (iter) print(self.train_loader) # shows a Tensor object tic = time.time() with tqdm(total=self.num_train) as pbar: for i, (x, y) in enumerate(self.train_loader): # x and y are returned as string (where it fails) if self.use_gpu: x, y = x.cuda(), y.cuda() x, y = Variable(x), Variable(y) This is how dataloader.py looks like: from __future__ import print_function, division #ds import numpy as np from utils import plot_images import os #ds import pandas as pd #ds from skimage import io, transform #ds import torch from torchvision import datasets from torch.utils.data import Dataset, DataLoader #ds from torchvision import transforms from torchvision import utils #ds from torch.utils.data.sampler import SubsetRandomSampler class CDataset(Dataset): def __init__(self, csv_file, root_dir, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.frame = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.frame) def __getitem__(self, idx): img_name = os.path.join(self.root_dir, self.frame.iloc[idx, 0]+'.jpg') image = io.imread(img_name) # image = image.transpose((2, 0, 1)) labels = np.array(self.frame.iloc[idx, 1])#.as_matrix() #ds #landmarks = landmarks.astype('float').reshape(-1, 2) #print(image.shape) #print(img_name,labels) sample = {'image': image, 'labels': labels} if self.transform: sample = self.transform(sample) return sample class ToTensor(object): """Convert ndarrays in sample to Tensors.""" def __call__(self, sample): image, labels = sample['image'], sample['labels'] #print(image) #print(labels) # swap color axis because # numpy image: H x W x C # torch image: C X H X W image = image.transpose((2, 0, 1)) #print(image.shape) #print((torch.from_numpy(image))) #print((torch.from_numpy(labels))) return {'image': torch.from_numpy(image), 'labels': torch.from_numpy(labels)} def get_train_valid_loader(data_dir, batch_size, random_seed, #valid_size=0.1, #ds #shuffle=True, show_sample=False, num_workers=4, pin_memory=False): """ Utility function for loading and returning train and valid multi-process iterators over the MNIST dataset. A sample 9x9 grid of the images can be optionally displayed. If using CUDA, num_workers should be set to 1 and pin_memory to True. Args ---- - data_dir: path directory to the dataset. - batch_size: how many samples per batch to load. - random_seed: fix seed for reproducibility. - #ds valid_size: percentage split of the training set used for the validation set. Should be a float in the range [0, 1]. In the paper, this number is set to 0.1. - shuffle: whether to shuffle the train/validation indices. - show_sample: plot 9x9 sample grid of the dataset. - num_workers: number of subprocesses to use when loading the dataset. - pin_memory: whether to copy tensors into CUDA pinned memory. Set it to True if using GPU. Returns ------- - train_loader: training set iterator. - valid_loader: validation set iterator. """ #ds #error_msg = "[!] valid_size should be in the range [0, 1]." 
#assert ((valid_size >= 0) and (valid_size <= 1)), error_msg #ds # define transforms #normalize = transforms.Normalize((0.1307,), (0.3081,)) trans = transforms.Compose([ ToTensor(), #normalize, ]) # load train dataset #train_dataset = datasets.MNIST( # data_dir, train=True, download=True, transform=trans #) train_dataset = CDataset(csv_file='/home/Desktop/6June17/util/train.csv', root_dir='/home/caffe/data/images/',transform=trans) # load validation dataset #valid_dataset = datasets.MNIST( #ds # data_dir, train=True, download=True, transform=trans #ds #) valid_dataset = CDataset(csv_file='/home/Desktop/6June17/util/eval.csv', root_dir='/home/caffe/data/images/',transform=trans) num_train = len(train_dataset) train_indices = list(range(num_train)) #ds split = int(np.floor(valid_size * num_train)) num_valid = len(valid_dataset) #ds valid_indices = list(range(num_valid)) #ds #if shuffle: # np.random.seed(random_seed) # np.random.shuffle(indices) #ds train_idx, valid_idx = indices[split:], indices[:split] train_idx = train_indices #ds valid_idx = valid_indices #ds train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers, pin_memory=pin_memory, ) print(train_loader) valid_loader = torch.utils.data.DataLoader( valid_dataset, batch_size=batch_size, sampler=valid_sampler, num_workers=num_workers, pin_memory=pin_memory, ) # visualize some images if show_sample: sample_loader = torch.utils.data.DataLoader( dataset, batch_size=9, #shuffle=shuffle, num_workers=num_workers, pin_memory=pin_memory ) data_iter = iter(sample_loader) images, labels = data_iter.next() X = images.numpy() X = np.transpose(X, [0, 2, 3, 1]) plot_images(X, labels) return (train_loader, valid_loader) def get_test_loader(data_dir, batch_size, num_workers=4, pin_memory=False): """ Utility function for loading and returning a multi-process test iterator over the MNIST dataset. If using CUDA, num_workers should be set to 1 and pin_memory to True. Args ---- - data_dir: path directory to the dataset. - batch_size: how many samples per batch to load. - num_workers: number of subprocesses to use when loading the dataset. - pin_memory: whether to copy tensors into CUDA pinned memory. Set it to True if using GPU. Returns ------- - data_loader: test set iterator. """ # define transforms #normalize = transforms.Normalize((0.1307,), (0.3081,)) trans = transforms.Compose([ ToTensor(), #normalize, ]) # load dataset #dataset = datasets.MNIST( # data_dir, train=False, download=True, transform=trans #) test_dataset = CDataset(csv_file='/home/Desktop/6June17/util/test.csv', root_dir='/home/caffe/data/images/',transform=trans) test_loader = torch.utils.data.DataLoader( test_dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers, pin_memory=pin_memory, ) return test_loader #for i_batch, sample_batched in enumerate(dataloader): # print(i_batch, sample_batched['image'].size(), # sample_batched['landmarks'].size()) # # observe 4th batch and stop. # if i_batch == 3: # plt.figure() # show_landmarks_batch(sample_batched) # plt.axis('off') # plt.ioff() # plt.show() # break A minimal working sample will be difficult to post here but basically I am trying to modify this project http://torch.ch/blog/2015/09/21/rmva.html which works smoothly with MNIST. I am just trying to run it with my own dataset with the custom dataloader.py I use above.
It instantiates a Dataloader like this: in trainer.py: if config.is_train: self.train_loader = data_loader[0] self.valid_loader = data_loader[1] self.num_train = len(self.train_loader.sampler.indices) self.num_valid = len(self.valid_loader.sampler.indices) -> run from main.py: if config.is_train: data_loader = get_train_valid_loader( config.data_dir, config.batch_size, config.random_seed, #config.valid_size, #config.shuffle, config.show_sample, **kwargs )
You are not properly using python's enumerate(). (x, y) are currently assigned the 2 keys of your batch dictionary i.e. the strings "image" and "labels". This should solve your problem: for i, batch in enumerate(self.train_loader): x, y = batch["image"], batch["labels"] # ...
https://stackoverflow.com/questions/50878650/
F.conv2d stuck on my CentOS
My PyTorch code runs well on Mac and even on Windows, but the same code seems stuck on CentOS 6.3. I debugged with ipdb and found the code was stuck at the F.conv2d function: > /home/work/anaconda2/envs/PyTorch/lib/python2.7/site-packages/torch/nn/modules/conv.py(301)forward() 300 return F.conv2d(input, self.weight, self.bias, self.stride, --> 301 self.padding, self.dilation, self.groups) 302 ipdb> s The running env was created with Anaconda (Python 2.7/3.6); the PyTorch version is 0.4.0. I have tried for a long time to resolve this problem. Do you have a suggestion? Thank you so much!
I reinstalled CentOS 6.3, and then upgraded glibc to 2.14 and then 2.17 following the PyTorch 0.4.0 runtime error info. Now everything is OK. By the way, PyTorch 0.3.1 performed well before I upgraded glibc (which was at 2.12). So I think the latest PyTorch 0.4.0 may not deal very well with older glibc versions, leaving the run deadlocked without any error or warning info, just stuck at F.conv2d in torch/nn/modules/conv.py(301). See also: https://discuss.pytorch.org/t/f-conv2d-stuck-on-my-centos/19794/3
https://stackoverflow.com/questions/50888863/
Channel wise CrossEntropyLoss for image segmentation in pytorch
I am doing an image segmentation task. There are 7 classes in total, so the final output is a tensor like [batch, 7, height, width], which is a softmax output. Now intuitively I wanted to use CrossEntropy loss, but the pytorch implementation doesn't work on channel-wise one-hot encoded vectors. So I was planning to make a function on my own. With help from some Stack Overflow answers, my code so far looks like this from torch.autograd import Variable import torch import torch.nn.functional as F def cross_entropy2d(input, target, weight=None, size_average=True): # input: (n, c, w, z), target: (n, w, z) n, c, w, z = input.size() # log_p: (n, c, w, z) log_p = F.log_softmax(input, dim=1) # log_p: (n*w*z, c) log_p = log_p.permute(0, 3, 2, 1).contiguous().view(-1, c) # make class dimension last dimension log_p = log_p[ target.view(n, w, z, 1).repeat(0, 0, 0, c) >= 0] # this looks wrong -> Should rather be a one-hot vector log_p = log_p.view(-1, c) # target: (n*w*z,) mask = target >= 0 target = target[mask] loss = F.nll_loss(log_p, target.view(-1), weight=weight, size_average=False) if size_average: loss /= mask.data.sum() return loss images = Variable(torch.randn(5, 3, 4, 4)) labels = Variable(torch.LongTensor(5, 3, 4, 4).random_(3)) cross_entropy2d(images, labels) I get two errors. One is mentioned in the code itself, where it expects a one-hot vector. The 2nd one says the following RuntimeError: invalid argument 2: size '[5 x 4 x 4 x 1]' is invalid for input with 3840 elements at ..\src\TH\THStorage.c:41 For example purposes I was trying to make it work on a 3 class problem. So the targets and labels are (excluding the batch parameter for simplification!) Target: Channel 1 Channel 2 Channel 3 [[0 1 1 0 ] [0 0 0 1 ] [1 0 0 0 ] [0 0 1 1 ] [0 0 0 0 ] [1 1 0 0 ] [0 0 0 1 ] [0 0 0 0 ] [1 1 1 0 ] [0 0 0 0 ] [0 0 0 1 ] [1 1 1 0 ] Labels: Channel 1 Channel 2 Channel 3 [[0 1 1 0 ] [0 0 0 1 ] [1 0 0 0 ] [0 0 1 1 ] [.2 0 0 0] [.8 1 0 0 ] [0 0 0 1 ] [0 0 0 0 ] [1 1 1 0 ] [0 0 0 0 ] [0 0 0 1 ] [1 1 1 0 ] So how can I fix my code to calculate channel-wise CrossEntropy loss?
As Shai's answer already states, the documentation on the torch.nn.CrossEntropyLoss() function can be found here and the code can be found here. The built-in functions do indeed already support KD cross-entropy loss. In the 3D case, the torch.nn.CrossEntropyLoss() function expects two arguments: a 4D input matrix and a 3D target matrix. The input matrix is in the shape (Minibatch, Classes, H, W). The target matrix is in the shape (Minibatch, H, W) with numbers ranging from 0 to (Classes-1). If you start with a one-hot encoded matrix, you will have to convert it with np.argmax(). Example with three classes and minibatch size of 1: import torch import numpy as np input_torch = torch.randn(1, 3, 2, 5, requires_grad=True) one_hot = np.array([[[1, 1, 1, 0, 0], [0, 0, 0, 0, 0]], [[0, 0, 0, 0, 0], [1, 1, 1, 0, 0]], [[0, 0, 0, 1, 1], [0, 0, 0, 1, 1]]]) target = np.argmax(one_hot, axis=0) # (2, 5) map of class indices target_torch = torch.from_numpy(target).unsqueeze(0) # add the minibatch dimension loss = torch.nn.CrossEntropyLoss() output = loss(input_torch, target_torch) output.backward()
https://stackoverflow.com/questions/50896412/
How to import the tensorflow lite interpreter in Python?
I'm developing a Tensorflow embedded application using TF lite on the Raspberry Pi 3b, running Raspbian Stretch. I've converted the graph to a flatbuffer (lite) format and have built the TFLite static library natively on the Pi. So far so good. But the application is Python and there seems to be no Python binding available. The Tensorflow Lite development guide (https://www.tensorflow.org/mobile/tflite/devguide) states "There are plans for Python bindings and a demo app." Yet there is wrapper code in /tensorflow/contrib/lite/python/interpreter_wrapper that has all the needed interpreter methods. Yet calling this from Python eludes me. I have generated a SWIG wrapper but the build step fails with many errors. There is no readme.md describing the state of the interpreter_wrapper. So, I wonder if the wrapper has worked for others and I should persist or is it fundamentally broken and I should look elsewhere (PyTorch)? Has anyone found a path to the TFLite Python bindings for the Pi3?
I was able to write Python scripts to do classification 1, object detection (tested with SSD MobileNet V1/V2) 2, and image semantic segmentation 3 on an x86 running Ubuntu and an ARM64 board running Debian. How to build the Python binding for TF Lite code: build the pip package from a recent TensorFlow master branch and install it (yes, those bindings were in TF 1.8; however, I don't know why they are not installed). See 4 for how to build and install the TensorFlow pip package.
https://stackoverflow.com/questions/50902067/
Can't open jupyter notebook in docker
I am trying to open the jupyter notebook in a container, but I just came across this situation: [I 10:01:25.051 NotebookApp] The Jupyter Notebook is running at: [I 10:01:25.051 NotebookApp] http://8c1eb91f0492:8888/?token=7671a7abe557349c8d8ad1cbf207702451925efd2c27c84e [I 10:01:25.051 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). [C 10:01:25.051 NotebookApp] Copy/paste this URL into your browser when you connect for the first time, to login with a token: http://8c1eb91f0492:8888/?token=7671a7abe557349c8d8ad1cbf207702451925efd2c27c84e&token=7671a7abe557349c8d8ad1cbf207702451925efd2c27c84e As you can see the url is the container ID, I tried many solutions, like the ip setting. All don't help at all. Could someone give the hints? Thanks in advance!
Try using localhost, step by step: 1 - Launch the following command: docker run -p 8888:8888 jupyter/scipy-notebook 2 - Copy/paste the URL into your browser: http://e6ef92c5e5d6:8888/?token=... 3 - Replace the hostname with localhost: http://localhost:8888/?token=... It worked for me: [I 03:22:51.414 NotebookApp] 302 GET /?token=... (172.17.0.1) 0.97ms
https://stackoverflow.com/questions/50919752/
Training Error in PyTorch - RuntimeError: Expected object of type FloatTensor vs ByteTensor
A minimal working sample will be difficult to post here but basically I am trying to modify this project http://torch.ch/blog/2015/09/21/rmva.html which works smoothly with MNIST. I am trying to run it with my own dataset with a custom dataloader.py as below: from __future__ import print_function, division #ds import numpy as np from utils import plot_images import os #ds import pandas as pd #ds from skimage import io, transform #ds import torch from torchvision import datasets from torch.utils.data import Dataset, DataLoader #ds from torchvision import transforms from torchvision import utils #ds from torch.utils.data.sampler import SubsetRandomSampler class CDataset(Dataset): def __init__(self, csv_file, root_dir, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.frame = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.frame) def __getitem__(self, idx): img_name = os.path.join(self.root_dir, self.frame.iloc[idx, 0]+'.jpg') image = io.imread(img_name) # image = image.transpose((2, 0, 1)) labels = np.array(self.frame.iloc[idx, 1])#.as_matrix() #ds #landmarks = landmarks.astype('float').reshape(-1, 2) #print(image.shape) #print(img_name,labels) sample = {'image': image, 'labels': labels} if self.transform: sample = self.transform(sample) return sample class ToTensor(object): """Convert ndarrays in sample to Tensors.""" def __call__(self, sample): image, labels = sample['image'], sample['labels'] #print(image) #print(labels) # swap color axis because # numpy image: H x W x C # torch image: C X H X W image = image.transpose((2, 0, 1)) #print(image.shape) #print((torch.from_numpy(image))) #print((torch.from_numpy(labels))) return {'image': torch.from_numpy(image), 'labels': torch.from_numpy(labels)} def get_train_valid_loader(data_dir, batch_size, random_seed, #valid_size=0.1, #ds #shuffle=True, show_sample=False, num_workers=4, pin_memory=False): """ Utility function for loading and returning train and valid multi-process iterators over the MNIST dataset. A sample 9x9 grid of the images can be optionally displayed. If using CUDA, num_workers should be set to 1 and pin_memory to True. Args ---- - data_dir: path directory to the dataset. - batch_size: how many samples per batch to load. - random_seed: fix seed for reproducibility. - #ds valid_size: percentage split of the training set used for the validation set. Should be a float in the range [0, 1]. In the paper, this number is set to 0.1. - shuffle: whether to shuffle the train/validation indices. - show_sample: plot 9x9 sample grid of the dataset. - num_workers: number of subprocesses to use when loading the dataset. - pin_memory: whether to copy tensors into CUDA pinned memory. Set it to True if using GPU. Returns ------- - train_loader: training set iterator. - valid_loader: validation set iterator. """ #ds #error_msg = "[!] valid_size should be in the range [0, 1]." 
#assert ((valid_size >= 0) and (valid_size <= 1)), error_msg #ds # define transforms #normalize = transforms.Normalize((0.1307,), (0.3081,)) trans = transforms.Compose([ ToTensor(), #normalize, ]) # load train dataset #train_dataset = datasets.MNIST( # data_dir, train=True, download=True, transform=trans #) train_dataset = CDataset(csv_file='/home/Desktop/6June17/util/train.csv', root_dir='/home/caffe/data/images/',transform=trans) # load validation dataset #valid_dataset = datasets.MNIST( #ds # data_dir, train=True, download=True, transform=trans #ds #) valid_dataset = CDataset(csv_file='/home/Desktop/6June17/util/eval.csv', root_dir='/home/caffe/data/images/',transform=trans) num_train = len(train_dataset) train_indices = list(range(num_train)) #ds split = int(np.floor(valid_size * num_train)) num_valid = len(valid_dataset) #ds valid_indices = list(range(num_valid)) #ds #if shuffle: # np.random.seed(random_seed) # np.random.shuffle(indices) #ds train_idx, valid_idx = indices[split:], indices[:split] train_idx = train_indices #ds valid_idx = valid_indices #ds train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) train_loader = torch.utils.data.DataLoader( train_dataset, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers, pin_memory=pin_memory, ) print(train_loader) valid_loader = torch.utils.data.DataLoader( valid_dataset, batch_size=batch_size, sampler=valid_sampler, num_workers=num_workers, pin_memory=pin_memory, ) # visualize some images if show_sample: sample_loader = torch.utils.data.DataLoader( dataset, batch_size=9, #shuffle=shuffle, num_workers=num_workers, pin_memory=pin_memory ) data_iter = iter(sample_loader) images, labels = data_iter.next() X = images.numpy() X = np.transpose(X, [0, 2, 3, 1]) plot_images(X, labels) return (train_loader, valid_loader) def get_test_loader(data_dir, batch_size, num_workers=4, pin_memory=False): """ Utility function for loading and returning a multi-process test iterator over the MNIST dataset. If using CUDA, num_workers should be set to 1 and pin_memory to True. Args ---- - data_dir: path directory to the dataset. - batch_size: how many samples per batch to load. - num_workers: number of subprocesses to use when loading the dataset. - pin_memory: whether to copy tensors into CUDA pinned memory. Set it to True if using GPU. Returns ------- - data_loader: test set iterator. """ # define transforms #normalize = transforms.Normalize((0.1307,), (0.3081,)) trans = transforms.Compose([ ToTensor(), #normalize, ]) # load dataset #dataset = datasets.MNIST( # data_dir, train=False, download=True, transform=trans #) test_dataset = CDataset(csv_file='/home/Desktop/6June17/util/test.csv', root_dir='/home/caffe/data/images/',transform=trans) test_loader = torch.utils.data.DataLoader( test_dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers, pin_memory=pin_memory, ) return test_loader #for i_batch, sample_batched in enumerate(dataloader): # print(i_batch, sample_batched['image'].size(), # sample_batched['landmarks'].size()) # # observe 4th batch and stop. # if i_batch == 3: # plt.figure() # show_landmarks_batch(sample_batched) # plt.axis('off') # plt.ioff() # plt.show() # break The other main change I have made is closing off the parameter intake for validation size and shuffling (as I am using a pre-existing train, validation and test split, and I have already shuffled these splits). My last change is in the train_one_epoch(self, epoch) function, while iterating in trainer.py.
I have changed this part because formerly x, y were being returned as the strings "image" and "labels" - the keys of the Python dictionary rather than the values in batches. for i, batch in enumerate(self.train_loader): x, y = batch["image"], batch["labels"] But now I get errors with the network training that I cannot figure out, as I am new to PyTorch: [*] Train on 64034 samples, validate on 18951 samples Epoch: 1/200 - LR: 0.000300 <torch.utils.data.dataloader.DataLoader object at 0x7fe065fd4f60> 0%| | 0/64034 [00:00<?, ?it/s]/home/duygu/recurrent-visual-attention-master/modules.py:106: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number from_x, to_x = from_x.data[0], to_x.data[0] /home/duygu/recurrent-visual-attention-master/modules.py:107: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number from_y, to_y = from_y.data[0], to_y.data[0] Traceback (most recent call last): File "main.py", line 49, in <module> main(config) File "main.py", line 40, in main trainer.train() File "/home/duygu/recurrent-visual-attention-master/trainer.py", line 168, in train train_loss, train_acc = self.train_one_epoch(epoch) File "/home/duygu/recurrent-visual-attention-master/trainer.py", line 252, in train_one_epoch h_t, l_t, b_t, p = self.model(x, l_t, h_t) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/home/duygu/recurrent-visual-attention-master/model.py", line 101, in forward g_t = self.sensor(x, l_t_prev) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/home/duygu/recurrent-visual-attention-master/modules.py", line 214, in forward phi_out = F.relu(self.fc1(phi)) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/linear.py", line 55, in forward return F.linear(input, self.weight, self.bias) File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 992, in linear return torch.addmm(bias, input, weight.t()) RuntimeError: Expected object of type torch.FloatTensor but found type torch.ByteTensor for argument #4 'mat1' I am seeking recommendations on how to fix this error and to understand what is causing it. I get this error even when I run it without GPU support on. I wonder if somehow my parameters are passed empty, judging by the initial warning.
As far as I can tell, it seems that since you commented out the normalize / transforms.Normalize operations applied to your dataset, your images don't have their values normalized to floats in [0, 1]; they instead keep their byte values in [0, 255]. Try applying data normalization or at least converting your images to float (32-bit, not 64) values (e.g. in ToTensor, add image = image.float(), or while it is still a numpy array use data.astype(numpy.float32)) before feeding them to your network.
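A sketch of the suggested fix inside the question's ToTensor transform (names follow the question's code; the exact scaling is an assumption):

import numpy as np
import torch

class ToTensor(object):
    """Convert ndarrays in sample to float Tensors."""
    def __call__(self, sample):
        image, labels = sample['image'], sample['labels']
        image = image.transpose((2, 0, 1))        # H x W x C -> C x H x W
        image = image.astype(np.float32) / 255.0  # bytes [0, 255] -> floats [0, 1]
        return {'image': torch.from_numpy(image),
                'labels': torch.from_numpy(labels)}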
https://stackoverflow.com/questions/50930300/
Understanding torch.nn.Parameter
How does torch.nn.Parameter() work?
I will break it down for you. Tensors, as you might know, are multi dimensional matrices. Parameter, in its raw form, is a tensor i.e. a multi dimensional matrix. It sub-classes the Variable class. The difference between a Variable and a Parameter comes in when associated with a module. When a Parameter is associated with a module as a model attribute, it gets added to the parameter list automatically and can be accessed using the 'parameters' iterator. Initially in Torch, a Variable (which could for example be an intermediate state) would also get added as a parameter of the model upon assignment. Later on, use cases were identified where the variables needed to be cached instead of being added to the parameter list. One such case, as mentioned in the documentation, is that of RNN, wherein you need to save the last hidden state so you don't have to pass it again and again. The need to cache a Variable instead of having it automatically register as a parameter to the model is why we have an explicit way of registering parameters to our model, i.e. the nn.Parameter class. For instance, run the following code - import torch import torch.nn as nn from torch.optim import Adam class NN_Network(nn.Module): def __init__(self, in_dim, hid, out_dim): super(NN_Network, self).__init__() self.linear1 = nn.Linear(in_dim, hid) self.linear2 = nn.Linear(hid, out_dim) # nn.Linear stores its weight as (out_features, in_features) self.linear1.weight = torch.nn.Parameter(torch.zeros(hid, in_dim)) self.linear1.bias = torch.nn.Parameter(torch.ones(hid)) self.linear2.weight = torch.nn.Parameter(torch.zeros(out_dim, hid)) self.linear2.bias = torch.nn.Parameter(torch.ones(out_dim)) def forward(self, input_array): h = self.linear1(input_array) y_pred = self.linear2(h) return y_pred in_d = 5 hidn = 2 out_d = 3 net = NN_Network(in_d, hidn, out_d) Now, check the parameter list associated with this model - for param in net.parameters(): print(type(param.data), param.size()) """ Output <class 'torch.FloatTensor'> torch.Size([2, 5]) <class 'torch.FloatTensor'> torch.Size([2]) <class 'torch.FloatTensor'> torch.Size([3, 2]) <class 'torch.FloatTensor'> torch.Size([3]) """ Or try list(net.parameters()). This can easily be fed to your optimizer - opt = Adam(net.parameters(), lr=0.001) Also, note that Parameters have requires_grad set to True by default.
https://stackoverflow.com/questions/50935345/
AttributeError: 'builtin_function_or_method' object has no attribute 'requires_grad'
I'm getting this error when training the MNIST data, the csvfiles is from Kaggle. Can someone show me where I went wrong? Here is my code. The version of PyTorch is 0.4.0. import numpy as np import pandas as pd import torch import torch.nn as nn from torch.autograd import Variable import torch.utils.data as data import torchvision import matplotlib.pyplot as plt torch.manual_seed(1) # Training Parameters EPOCH = 20 BATCH_size = 15 LR = 0.001 img_row, img_col = 28, 28 # Networks structure class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() self.conv1 = nn.Sequential( nn.Conv2d( in_channels=1, out_channels=32, kernel_size=5, stride=1, padding=2 ), nn.ReLU(), nn.Conv2d(32, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(kernel_size=2), nn.Dropout(0.25) ) self.conv2 = nn.Sequential( nn.Conv2d(32, 64, 3, 1, 1), nn.ReLU(), nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(0.25) ) self.out = nn.Sequential( nn.Linear(64*7*7, 512), nn.ReLU(), nn.Dropout(0.5), nn.Linear(512, 10) ) def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = x.view(x.size(0), -1) output = self.out(x) return output # Torch Dataset class Torch_Dataset(data.Dataset): def __init__(self, root_dir, csvfile, img_rows, img_cols, train=True, transform=None): self.root_dir = root_dir self.transform = transform self.train = train if self.train: y_data0 = pd.read_csv(csvfile, header=0, usecols=['label']) y_data1 = np.array(y_data0) self.y_data = torch.from_numpy(y_data1) x_data0 = pd.read_csv(csvfile, header=0, usecols=[i for i in range(1, 785)]) x_data1 = np.array(x_data0) x_data1 = x_data1.reshape(x_data1.shape[0], 1, img_rows, img_cols) x_data1 = x_data1.astype('float32') x_data1 /= 255 self.x_data = torch.from_numpy(x_data1) else: x_data0 = pd.read_csv(csvfile, header=0) x_data1 = np.array(x_data0) x_data1 = x_data1.reshape(x_data1.shape[0], 1, img_rows, img_cols) x_data1 = x_data1.astype('float32') x_data1 /= 255 self.x_data = torch.from_numpy(x_data1) def __len__(self): return len(self.x_data) def __getitem__(self, idx): if self.train: img, target = self.x_data[idx], self.y_data[idx] else: img = self.x_data[idx] target = None # sample = {'img': img, 'target': target} return img, target train = Torch_Dataset( root_dir='./', # root csvfile='train.csv', # filename img_rows=img_row, # image rows img_cols=img_col, # image cols train=True # train or test ) # DataLoader loader = data.DataLoader( dataset=train, # torch dataset format batch_size=BATCH_size, # mini batch size shuffle=True, # shuffle the data ) # train the data cnn = CNN() optimizer = torch.optim.Adam(cnn.parameters(), lr=LR) loss_f = nn.CrossEntropyLoss() for epoch in range(EPOCH): for step, (x, y) in enumerate(loader): b_x = Variable(x) b_y = Variable(y) b_y = b_y.squeeze output = cnn(b_x) loss = loss_f(output, b_y) optimizer.zero_grad() loss.backward() optimizer.step() Traceback (most recent call last): File "C:/Users/Bryan Zoe/PycharmProjects/MNIST_TEST/PyTorch/test1.py", line 118, in loss = loss_f(output, b_y) File "C:\Users\Bryan Zoe\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in __ call __ result = self.forward(*input, **kwargs) File "C:\Users\Bryan Zoe\Anaconda3\lib\site-packages\torch\nn\modules\loss.py", line 757, in forward _assert_no_grad(target) File "C:\Users\Bryan Zoe\Anaconda3\lib\site-packages\torch\nn\modules\loss.py", line 11, in _assert_no_grad assert not tensor.requires_grad, \ AttributeError: 'builtin_function_or_method' object has no attribute 'requires_grad'
You are not calling the squeeze method, you are only referencing it (note the missing parentheses). This should work: b_y = b_y.squeeze()
https://stackoverflow.com/questions/50939730/
Using CUDA with pytorch?
Is there a way to reliably enable CUDA on the whole model? I want to run the training on my GPU. I found on some forums that I need to apply .cuda() on anything I want to use CUDA with (I've applied it to everything I could without making the program crash). Surprisingly, this makes the training even slower. Then, I found that you could use this torch.set_default_tensor_type('torch.cuda.FloatTensor') to use CUDA. With both enabled, nothing changes. What is happening?
You can use the tensor.to(device) command to move a tensor to a device. The .to() command is also used to move a whole model to a device, like in the post you linked to. Another possibility is to set the device of a tensor during creation using the device= keyword argument, like in t = torch.tensor(some_list, device=device) To set the device dynamically in your code, you can use device = torch.device("cuda" if torch.cuda.is_available() else "cpu") to set cuda as your device if possible. There are various code examples on PyTorch Tutorials and in the documentation linked above that could help you.
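Putting those pieces together, a typical (illustrative) pattern is:

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)    # move the whole model once
x = torch.randn(4, 10, device=device)  # create inputs directly on the device
y = model(x)
print(y.device)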
https://stackoverflow.com/questions/50954479/
Pytorch - Purpose of images preprocessing in the transfer learning tutorial
In the Pytorch transfer learning tutorial, the images in both the training and the test sets are being pre-processed using the following code: data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } My question is - what is the intuition behind this choice of transforms? In particular, what is the intuition behind choosing RandomResizedCrop(224) and RandomHorizontalFlip()? Wouldn't it be better to just let the neural network train on the entire image? (or at least, augment the dataset using these transformation)? I understand why it is reasonable to insert only the portion of the image that contains the ant/bees to the neural network but can't understand why it is reasonable to insert a random crop... Hope I managed to make all my questions clear Thanks!
Regarding RandomResizedCrop Why ...ResizedCrop? - This answer is straightforward. Resizing crops to the same dimensions allows you to batch your input data. Since the training images in your toy dataset have different dimensions, this is the best way to make your training more efficient. Why Random...? - Generating different random crops per image every iteration (i.e. random center and random cropping dimensions/ratio before resizing) is a nice way to artificially augment your dataset, i.e. feeding your network different-looking inputs (extracted from the same original images) every iteration. This helps to partially avoid over-fitting for small datasets, and makes your network overall more robust. You are however right that, since some of your training images are up to 500px wide and the semantic targets (ant/bee) sometimes cover only a small portion of the images, there is a chance that some of these random crops won't contain an insect... But as long as the chances this happens stay relatively low, it won't really impact your training. The advantage of feeding different training crops every iteration (instead of always the same non-augmented images) vastly counterbalances the side-effect of sometimes giving "empty" crops. You could verify this assertion by replacing RandomResizedCrop(224) with Resize(224) (fixed resizing) in your code and comparing the final accuracies on the test set. Furthermore, I would add that neural networks are smart cookies, and sometimes learn to recognize images through features you wouldn't expect (i.e. they tend to learn recognition shortcuts if your dataset or losses are biased, cf. over-fitting). I wouldn't be surprised if this toy network is performing so well despite sometimes being trained on "empty" crops just because it learns e.g. to distinguish between usual "ant backgrounds" (ground floor, leaves, etc.) and "bee backgrounds" (flowers). Regarding RandomHorizontalFlip Its purpose is also to artificially augment your dataset. For the network, an image and its flipped version are two different inputs, so you are basically artificially doubling the size of your training dataset for "free". There are plenty more operations one can use to augment training datasets (e.g. RandomAffine, ColorJitter, etc.). One however has to be careful to choose transformations which are meaningful for the target use-case / which do not impact the target semantic information (e.g. for ant/bee classification, RandomHorizontalFlip is fine, as you will probably get as many images of insects facing right as facing left; however RandomVerticalFlip doesn't make much sense, as you almost certainly won't get pictures of insects upside-down).
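For reference, the suggested fixed-resize baseline could be set up like this (a sketch; normalization is omitted, and Resize((224, 224)) is used so all images get identical dimensions for batching):

from torchvision import transforms

fixed = transforms.Compose([
    transforms.Resize((224, 224)),  # deterministic: the same input every epoch
    transforms.ToTensor(),
])
augmented = transforms.Compose([
    transforms.RandomResizedCrop(224),  # a different crop every iteration
    transforms.RandomHorizontalFlip(),  # "free" doubling of the dataset
    transforms.ToTensor(),
])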
https://stackoverflow.com/questions/50963295/
how is the batch size determined?
I'm looking at this pytorch starter tutorial: https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py the zero_grad() function is being used to zero the gradients which means that it's running with mini-batches, is this a correct assumption? If so, where is the batch size defined?? I found the following for nn.conv2d: For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width. in that case nSamples is the batch size? but how do you specify the batch size for a nn.Linear layer? do you decide what your mini-batches are when you load the data or what? I am making a few assumptions here that may be totally incorrect, pls correct me if i'm wrong. thank you!
You predefine the batch_size in the DataLoader. For a linear layer you do not specify a batch size, but rather the number of features of the previous layer and the number of features you wish to get after the linear operation. This is a code sample from the PyTorch docs: m = nn.Linear(20, 30) input = Variable(torch.randn(128, 20)) output = m(input) print(output.size())
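A minimal sketch of where the batch size actually lives (the dataset here is synthetic):

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 20), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=16)  # batch size is set on the DataLoader

x, y = next(iter(loader))
print(x.shape)  # torch.Size([16, 20]) -- nSamples x nFeatures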
https://stackoverflow.com/questions/50978781/
Multi-label, multi-class image classifier (ConvNet) with PyTorch
I am trying to implement an image classifier (CNN/ConvNet) with PyTorch where I want to read my labels from a csv-file. I have 4 different classes and an image may belong to more than one class. I have read through the PyTorch Tutorial and this Stanford tutorial and this one, but none of them cover my specific case. I have managed to build a custom function of the torch.utils.data.Dataset class which works fine for reading the labels from a csv-file for a binary classifier only though. This is the code for the torch.utils.data.Dataset class I have so far (slightly modified from the third tutorial linked above): import torch import torchvision.transforms as transforms import torch.utils.data as data from PIL import Image import numpy as np import pandas as pd class MyCustomDataset(data.Dataset): # __init__ function is where the initial logic happens like reading a csv, # assigning transforms etc. def __init__(self, csv_path): # Transforms self.random_crop = transforms.RandomCrop(800) self.to_tensor = transforms.ToTensor() # Read the csv file self.data_info = pd.read_csv(csv_path, header=None) # First column contains the image paths self.image_arr = np.asarray(self.data_info.iloc[:, 0]) # Second column is the labels self.label_arr = np.asarray(self.data_info.iloc[:, 1]) # Calculate len self.data_len = len(self.data_info.index) # __getitem__ function returns the data and labels. This function is # called from dataloader like this def __getitem__(self, index): # Get image name from the pandas df single_image_name = self.image_arr[index] # Open image img_as_img = Image.open(single_image_name) img_cropped = self.random_crop(img_as_img) img_as_tensor = self.to_tensor(img_cropped) # Get label(class) of the image based on the cropped pandas column single_image_label = self.label_arr[index] return (img_as_tensor, single_image_label) def __len__(self): return self.data_len Specifically, I am trying to read my labels from a file with the following structure: And my specific problem is, that I can't figure out how to implement this into my Dataset class. I think I am missing the link between the (manual) assignment of the labels in the csv and how they are read by PyTorch, as I am rather new to the framework. I'd appreciate any help on how to get this to work, or if there are actually examples covering this, a link would be highly appreciated as well!
Maybe I am missing something, but if you want to convert your columns 1..N (N = 4 here) into a label vector or shape (N,) (e.g. given your example data, label(img1) = [0, 0, 0, 1], label(img3) = [1, 0, 1, 0], ...), why not: Read all the label columns into self.label_arr: self.label_arr = np.asarray(self.data_info.iloc[:, 1:]) # columns 1 to N Return accordingly the labels in __getitem__() (no change here): single_image_label = self.label_arr[index] To train your classifier, you could then compute e.g. the cross-entropy between your (N,) predictions and the target labels.
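A hedged sketch of the loss side for such (N,) multi-hot targets (names and values are illustrative; per-label binary cross-entropy is one common choice for multi-label targets):

import torch
import torch.nn.functional as F

logits = torch.randn(4, requires_grad=True)  # hypothetical network output, N = 4
target = torch.tensor([0., 0., 0., 1.])      # multi-hot label vector for one image
loss = F.binary_cross_entropy_with_logits(logits, target)
loss.backward()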
https://stackoverflow.com/questions/50981714/
Pytorch broadcasting product of two tensors
I want to multiply two tensors, here is what I have got: A tensor of shape (20, 96, 110) B tensor of shape (20, 16, 110) The first index is for batch size. What I want to do is essentially take each tensor from B - (20, 1, 110), for example, and with that, I want to multiply each A tensor (20, n, 110). So the product will be at the end: tensor AB which shape is (20, 96 * 16, 110). So I want to multiply each tensor from A by broadcasting with B. Is there a method in PyTorch that does it?
Using torch.einsum followed by torch.reshape: AB = torch.einsum("ijk,ilk->ijlk", (A, B)).reshape(A.shape[0], -1, A.shape[2]) Example: import numpy as np import torch # A of shape (2, 3, 2): A = torch.from_numpy(np.array([[[1, 1], [2, 2], [3, 3]], [[4, 4], [5, 5], [6, 6]]])) # B of shape (2, 2, 2): B = torch.from_numpy(np.array([[[1, 1], [10, 10]], [[2, 2], [20, 20]]])) # AB of shape (2, 3*2, 2): AB = torch.einsum("ijk,ilk->ijlk", (A, B)).reshape(A.shape[0], -1, A.shape[2]) # tensor([[[ 1, 1], [ 10, 10], [ 2, 2], [ 20, 20], [ 3, 3], [ 30, 30]], # [[ 8, 8], [ 80, 80], [ 10, 10], [ 100, 100], [ 12, 12], [ 120, 120]]])
https://stackoverflow.com/questions/50982503/
How to sample through a small dataset for more iterations than data size?
I have one small and one large dataset and they signify two separate classes. The network I am training is style transfer, so I need one image of each class in order to keep training. The training stops though, as soon as the smaller dataset runs out. How can I keep sampling from the small dataset randomly beyond its size? I tried RandomSampler() but that didn't work. Here's my code for the small dataset: sampler = RandomSampler(self) dataloader = DataLoader(self, batch_size=26, shuffle=False, sampler=sampler) while True: for data in dataloader: yield data I also tried itertools.cycle but that didn't help either. loader = iter(cycle(self.dataset.gen(attribute_id, True))) A, y_A = next(loader) B, y_B = next(self.dataset.gen(attribute_id, False))
Your idea with the RandomSampler was not far off. There is a sampler called SubsetRandomSampler. While a subset typically is smaller than the whole set, it does not have to be. Let's say your smaller dataset has A entries and your second dataset has B. You could define your indices: indices = np.random.randint(0, A, B) sampler = torch.utils.data.sampler.SubsetRandomSampler(indices) This generates B indices in a range valid for the smaller dataset. Test: loader = torch.utils.data.DataLoader(set_A, batch_size=1, sampler=sampler) print(len(loader)) # B
https://stackoverflow.com/questions/50982781/
How to optimize lower Cholesky Parameter in Pytorch?
Is there any way to create a parameter which is lower triangular with positive diagonal and enforce this constraint during optimization in Pytorch?
Check out torch.potrf. A simple example: a = torch.randn(3, 3, requires_grad=True) opt = torch.optim.Adam([a], lr=0.01) m = torch.mm(a, a.t()) + 1e-3 * torch.eye(3) # symmetric positive definite l = torch.potrf(m, upper=False) # lower-triangular Cholesky factor tri_loss = l.sum() opt.zero_grad() tri_loss.backward() opt.step()
https://stackoverflow.com/questions/50988668/
Is there any pytorch function can combine the specific continuous dimensions of tensor into one?
Let's call the function I'm looking for "magic_combine", which can combine the continuous dimensions of tensor I give to it. For more specific, I want it to do the following thing: a = torch.zeros(1, 2, 3, 4, 5, 6) b = a.magic_combine(2, 5) # combine dimension 2, 3, 4 print(b.size()) # should be (1, 2, 60, 6) I know that torch.view() can do the similar thing. But I'm just wondering if there is any more elegant way to achieve the goal?
I am not sure what you have in mind with "a more elegant way", but Tensor.view() has the advantage not to re-allocate data for the view (original tensor and view share the same data), making this operation quite light-weight. As mentioned by @UmangGupta, it is however rather straight-forward to wrap this function to achieve what you want, e.g.: import torch def magic_combine(x, dim_begin, dim_end): combined_shape = list(x.shape[:dim_begin]) + [-1] + list(x.shape[dim_end:]) return x.view(combined_shape) a = torch.zeros(1, 2, 3, 4, 5, 6) b = magic_combine(a, 2, 5) # combine dimension 2, 3, 4 print(b.size()) # torch.Size([1, 2, 60, 6])
https://stackoverflow.com/questions/50991189/
loading librispeech in pytorch for ASR
I have recently started working on training an automatic speech recognition system using a neural network and CTC loss. But the first thing I'm supposed to do is prepare the data for training the model. Since LibriSpeech contains huge amounts of data, initially I am going to use a subset of it called "Mini LibriSpeech ASR corpus" (http://www.openslr.org/31/). Also, I am using SeanNaren's PyTorch bindings for warp-ctc (https://github.com/SeanNaren/warp-ctc). After reading the audio files and their corresponding transcripts, I'm using the SciPy package to calculate the spectrogram of each audio file. The problem arises when I'm going to feed the spectrograms to a convolutional layer for feature extraction. The length of each spectrogram differs from the other ones. After searching more on the issue, I figured out I should probably pass a specific number of frames to the network, but in order to reach this I need to tag each frame of the sound file with the corresponding character (also containing the blank symbol). Is there a way to do that in Python?
Your question is quite broad: are you looking for the transcripts of the audio files? If so, they are in a text file in each directory, each line starting with the filename (without the extension). You can look here: https://github.com/inikdom/rnn-speech/blob/master/util/dataprocessor.py Especially this method, which gives a list of audio files with their transcription for the LibriSpeech corpus: def get_data_librispeech(self, raw_data_path): text_files = self.find_files(raw_data_path, ".txt") result = [] for text_file in text_files: directory = os.path.dirname(text_file) with open(text_file, "r") as f: lines = f.read().split("\n") for line in lines: head = line.split(' ')[0] if len(head) < 5: # Not a line with a file desc break audio_file = directory + "/" + head + ".flac" if os.path.exists(audio_file): result.append([audio_file, self.clean_label(line.replace(head, "")), None]) return result Note: the third value for each item is always None because it's supposed to be replaced by the audio length in another method. You do not tag each frame of the audio with the corresponding character; CTC will take care of it by working on the full-length audio and the corresponding transcript.
https://stackoverflow.com/questions/50993861/
What does the gather function do in pytorch in layman terms?
What does torch.gather do? This answer is hard to understand.
The torch.gather function (or torch.Tensor.gather) is a multi-index selection method. Look at the following example from the official docs: t = torch.tensor([[1,2],[3,4]]) r = torch.gather(t, 1, torch.tensor([[0,0],[1,0]])) # r now holds: # tensor([[ 1, 1], # [ 4, 3]]) Let's start with going through the semantics of the different arguments: The first argument, input, is the source tensor that we want to select elements from. The second, dim, is the dimension (or axis in tensorflow/numpy) that we want to collect along. And finally, index are the indices to index input. As for the semantics of the operation, this is how the official docs explain it: out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0 out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1 out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2 So let's go through the example. the input tensor is [[1, 2], [3, 4]], and the dim argument is 1, i.e. we want to collect from the second dimension. The indices for the second dimension are given as [0, 0] and [1, 0]. As we "skip" the first dimension (the dimension we want to collect along is 1), the first dimension of the result is implicitly given as the first dimension of the index. That means that the indices hold the second dimension, or the column indices, but not the row indices. Those are given by the indices of the index tensor itself. For the example, this means that the output will have in its first row a selection of the elements of the input tensor's first row as well, as given by the first row of the index tensor's first row. As the column-indices are given by [0, 0], we therefore select the first element of the first row of the input twice, resulting in [1, 1]. Similarly, the elements of the second row of the result are a result of indexing the second row of the input tensor by the elements of the second row of the index tensor, resulting in [4, 3]. To illustrate this even further, let's swap the dimension in the example: t = torch.tensor([[1,2],[3,4]]) r = torch.gather(t, 0, torch.tensor([[0,0],[1,0]])) # r now holds: # tensor([[ 1, 2], # [ 3, 2]]) As you can see, the indices are now collected along the first dimension. For the example you referred, current_Q_values = Q(obs_batch).gather(1, act_batch.unsqueeze(1)) gather will index the rows of the q-values (i.e. the per-sample q-values in a batch of q-values) by the batch-list of actions. The result will be the same as if you had done the following (though it will be much faster than a loop): q_vals = [] for qv, ac in zip(Q(obs_batch), act_batch): q_vals.append(qv[ac]) q_vals = torch.cat(q_vals, dim=0)
https://stackoverflow.com/questions/50999977/
How to vectorise a list of matrix vector multiplications using pytorch/numpy
For example, I have a list of N B x H tensors (i.e. an N x B x H tensor) and a list of N vectors (i.e. an N x B tensor). I want to multiply each B x H tensor in the list with the corresponding B-dimensional tensor, resulting in an N x H tensor. I know how to use a single for-loop with PyTorch to implement the computation, but is there any vectorised implementation? (i.e. no for-loop, just using PyTorch/numpy operations)
You could achieve this with torch.bmm() and some torch.squeeze()/torch.unsqueeze(). I am personally rather fond of the more generic torch.einsum() (which I find more readable): import torch import numpy as np A = torch.from_numpy(np.array([[[1, 10, 100], [2, 20, 200], [3, 30, 300]], [[4, 40, 400], [5, 50, 500], [6, 60, 600]]])) B = torch.from_numpy(np.array([[ 1, 2, 3], [-1, -2, -3]])) AB = torch.einsum("nbh,nb->nh", (A, B)) print(AB) # tensor([[ 14, 140, 1400], # [ -32, -320, -3200]])
https://stackoverflow.com/questions/51001968/
Why do we "pack" the sequences in PyTorch?
I was trying to replicate How to use packing for variable-length sequence inputs for rnn but I guess I first need to understand why we need to "pack" the sequence. I understand why we "pad" them but why is "packing" (via pack_padded_sequence) necessary?
I have stumbled upon this problem too and below is what I figured out. When training an RNN (LSTM or GRU or vanilla-RNN), it is difficult to batch variable length sequences. For example: if the lengths of sequences in a size 8 batch are [4,6,8,5,4,3,7,8], you will pad all the sequences and that will result in 8 sequences of length 8. You would end up doing 64 computations (8x8), but you needed to do only 45 computations. Moreover, if you wanted to do something fancy like using a bidirectional-RNN, it would be harder to do batch computations just by padding and you might end up doing more computations than required. Instead, PyTorch allows us to pack the sequence; internally, the packed sequence is a tuple of two lists. One contains the elements of the sequences, interleaved by time steps (see example below), and the other contains the batch size at each time step. This is helpful in recovering the actual sequences as well as telling the RNN what the batch size is at each time step. This has been pointed out by @Aerin. This can be passed to the RNN and it will internally optimize the computations. I might have been unclear at some points, so let me know and I can add more explanations. Here's a code example: a = [torch.tensor([1,2,3]), torch.tensor([3,4])] b = torch.nn.utils.rnn.pad_sequence(a, batch_first=True) >>>> tensor([[ 1, 2, 3], [ 3, 4, 0]]) torch.nn.utils.rnn.pack_padded_sequence(b, batch_first=True, lengths=[3,2]) >>>> PackedSequence(data=tensor([ 1, 3, 2, 4, 3]), batch_sizes=tensor([ 2, 2, 1]))
https://stackoverflow.com/questions/51030782/
Getting different output in the Pytorch NLP example Part-of-Speech Tagging
I am following the NLP tutorials on Pytorch’s tutorials website. I am getting different output than what it should show, so I just copy pasted the whole code as it is and still the output is different. My code is shared in this gist: Example: An LSTM for Part-of-Speech Tagging For the 1st sentence [‘The’, ‘dog’, ‘ate’, ‘the’, ‘apple’] [‘DET’, ‘NN’, ‘V’, ‘DET’, ‘NN’] the output is coming as below: tensor([[-0.7662, -0.6405, -4.8002], [-2.7163, -0.0698, -6.6515], [-3.1324, -5.7668, -0.0479], [-0.0528, -3.3832, -4.0481], [-2.4527, -0.0931, -5.8702]]) I am getting the sequence: 1 1 2 0 1 rather than 0 1 2 0 1 Can anyone please check this and point out why I am getting different output?
I updated epochs to 500, i.e., ran the training loop 500 times, and it now outputs the correct sequence.
https://stackoverflow.com/questions/51032181/
PyTorch : predict single example
Following the example from: https://github.com/jcjohnson/pytorch-examples This code trains successfully: # Code in file tensor/two_layer_net_tensor.py import torch device = torch.device('cpu') # device = torch.device('cuda') # Uncomment this to run on GPU # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. N, D_in, H, D_out = 64, 1000, 100, 10 # Create random input and output data x = torch.randn(N, D_in, device=device) y = torch.randn(N, D_out, device=device) # Randomly initialize weights w1 = torch.randn(D_in, H, device=device) w2 = torch.randn(H, D_out, device=device) learning_rate = 1e-6 for t in range(500): # Forward pass: compute predicted y h = x.mm(w1) h_relu = h.clamp(min=0) y_pred = h_relu.mm(w2) # Compute and print loss; loss is a scalar, and is stored in a PyTorch Tensor # of shape (); we can get its value as a Python number with loss.item(). loss = (y_pred - y).pow(2).sum() print(t, loss.item()) # Backprop to compute gradients of w1 and w2 with respect to loss grad_y_pred = 2.0 * (y_pred - y) grad_w2 = h_relu.t().mm(grad_y_pred) grad_h_relu = grad_y_pred.mm(w2.t()) grad_h = grad_h_relu.clone() grad_h[h < 0] = 0 grad_w1 = x.t().mm(grad_h) # Update weights using gradient descent w1 -= learning_rate * grad_w1 w2 -= learning_rate * grad_w2 How can I predict a single example? My experience thus far is utilising feedforward networks using just numpy. After training a model I utilise forward propagation but for a single example: numpy code snippet where new is the output value I'm attempting to predict: new = np.asarray(toclassify) Z1 = np.dot(weight_layer_1, new.T) + bias_1 sigmoid_activation_1 = sigmoid(Z1) Z2 = np.dot(weight_layer_2, sigmoid_activation_1) + bias_2 sigmoid_activation_2 = sigmoid(Z2) sigmoid_activation_2 contains the predicted vector attributes Is the idiomatic PyTorch way the same? Use forward propagation in order to make a single prediction?
The code you posted is a simple demo trying to reveal the inner mechanism of such deep learning frameworks. These frameworks, including PyTorch, Keras, Tensorflow and many more, automatically handle the forward calculation and the tracking and applying of gradients for you as long as you have defined the network structure. However, the code you showed still tries to do this manually. That's why predicting one example feels cumbersome: you are still doing it from scratch. In practice, we will define a model class inherited from torch.nn.Module, initialize all the network components (like a linear layer, GRU, LSTM layer etc.) in the __init__ function, and define how these components interact with the network input in the forward function. Taking the example from the page you've provided: # Code in file nn/two_layer_net_module.py import torch class TwoLayerNet(torch.nn.Module): def __init__(self, D_in, H, D_out): """ In the constructor we instantiate two nn.Linear modules and assign them as member variables. """ super(TwoLayerNet, self).__init__() self.linear1 = torch.nn.Linear(D_in, H) self.linear2 = torch.nn.Linear(H, D_out) def forward(self, x): """ In the forward function we accept a Tensor of input data and we must return a Tensor of output data. We can use Modules defined in the constructor as well as arbitrary (differentiable) operations on Tensors. """ h_relu = self.linear1(x).clamp(min=0) y_pred = self.linear2(h_relu) return y_pred # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. N, D_in, H, D_out = 64, 1000, 100, 10 # Create random Tensors to hold inputs and outputs x = torch.randn(N, D_in) y = torch.randn(N, D_out) # Construct our model by instantiating the class defined above. model = TwoLayerNet(D_in, H, D_out) # Construct our loss function and an Optimizer. The call to model.parameters() # in the SGD constructor will contain the learnable parameters of the two # nn.Linear modules which are members of the model. loss_fn = torch.nn.MSELoss(size_average=False) optimizer = torch.optim.SGD(model.parameters(), lr=1e-4) for t in range(500): # Forward pass: Compute predicted y by passing x to the model y_pred = model(x) # Compute and print loss loss = loss_fn(y_pred, y) print(t, loss.item()) # Zero gradients, perform a backward pass, and update the weights. optimizer.zero_grad() loss.backward() optimizer.step() The code defines a model named TwoLayerNet, initializes two linear layers in the __init__ function and further defines how these two layers interact with the input x in the forward function. Having the model defined, we can perform a single feed-forward operation as follows. Say xu contains a single unseen example: xu = torch.randn(D_in) Then this performs the prediction: y_pred = model(xu.unsqueeze(0)) # unsqueeze adds the batch dimension
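As a side note, a minimal inference sketch (assuming model is the trained TwoLayerNet from above): eval() is a no-op for this particular model but matters once dropout or batchnorm layers are added, and no_grad() skips building the autograd graph:

model.eval()                         # switch layers like dropout to inference mode
with torch.no_grad():                # no gradient tracking needed for prediction
    y_pred = model(xu.unsqueeze(0))  # xu as above; in: (1, D_in), out: (1, D_out)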
https://stackoverflow.com/questions/51041128/
PyTorch Autograd automatic differentiation feature
I am just curious to know, how does PyTorch track operations on tensors (after .requires_grad is set to True) and how does it later calculate the gradients automatically? Please help me understand the idea behind autograd. Thanks.
That's a great question! Generally, the idea of automatic differentiation (AutoDiff) is based on the multivariable chain rule, i.e. dx/dz = (dx/dy) * (dy/dz). What this means is that you can express the derivative of x with respect to z via a "proxy" variable y; in fact, that allows you to break up almost any operation into a bunch of simpler (or atomic) operations that can then be "chained" together. Now, what AutoDiff packages like Autograd do is simply to store the derivative of such an atomic operation block, e.g., a division, multiplication, etc. Then, at runtime, your provided forward pass formula (consisting of multiple of these blocks) can be easily turned into an exact derivative. Likewise, you can also provide derivatives for your own operations, should you think AutoDiff does not exactly do what you want it to. The advantage of AutoDiff over derivative approximations like finite differences is simply that this is an exact solution. If you are further interested in how it works internally, I highly recommend the AutoDidact project, which aims to simplify the internals of an automatic differentiator, since there is usually also a lot of code optimization involved. Also, this set of slides from a lecture I took was really helpful in understanding.
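Here is a tiny illustrative sketch of the chaining in action (just an example of the user-facing behavior, not of autograd's internals):

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x * x       # atomic op; its local derivative dy/dx = 2x is recorded
z = 3 * y + 1   # more atomic ops; dz/dy = 3
z.backward()    # chain the stored derivatives: dz/dx = dz/dy * dy/dx = 3 * 2x
print(x.grad)   # tensor(12.)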
https://stackoverflow.com/questions/51054627/
Applying convolution operation to image - PyTorch
To render an image of shape 27x35 I use: random_image = [] for x in range(1 , 946): random_image.append(random.randint(0 , 255)) random_image_arr = np.array(random_image) matplotlib.pyplot.imshow(random_image_arr.reshape(27 , 35)) This generates a random noise image. I then try to apply a convolution to the image using torch.nn.Conv2d: conv2 = torch.nn.Conv2d(3, 18, kernel_size=3, stride=1, padding=1) image_d = np.asarray(random_image_arr.reshape(27 , 35)) conv2(torch.from_numpy(image_d)) But this displays the error: ~/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input) 299 def forward(self, input): 300 return F.conv2d(input, self.weight, self.bias, self.stride, --> 301 self.padding, self.dilation, self.groups) 302 303 RuntimeError: input has less dimensions than expected The shape of the input image_d is (27, 35) Should I change the parameters of Conv2d in order to apply the convolution to the image? Update. From @McLawrence's answer I have: random_image = [] for x in range(1 , 946): random_image.append(random.randint(0 , 255)) random_image_arr = np.array(random_image) matplotlib.pyplot.imshow(random_image_arr.reshape(27 , 35)) This renders the image. Applying the convolution operation: conv2 = torch.nn.Conv2d(1, 18, kernel_size=3, stride=1, padding=1) image_d = torch.FloatTensor(np.asarray(random_image_arr.reshape(1, 1, 27 , 35))).numpy() fc = conv2(torch.from_numpy(image_d)) matplotlib.pyplot.imshow(fc[0][0].data.numpy()) renders the filtered image.
There are two problems with your code: First, 2d convolutions in pytorch are defined only for 4d tensors. This is convenient for use in neural networks. The first dimension is the batch size while the second dimension is the channels (an RGB image for example has three channels). So you have to reshape your tensor like image_d = torch.FloatTensor(np.asarray(random_image_arr.reshape(1, 1, 27 , 35))) The FloatTensor is important here, since convolutions are not defined on the LongTensor, which will be created automatically if your numpy array only includes ints. Secondly, you have created a convolution with three input channels, while your image has just one channel (it is greyscale). So you have to adjust the convolution to: conv2 = torch.nn.Conv2d(1, 18, kernel_size=3, stride=1, padding=1)
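As a sketch, the corrected pipeline can also be written with unsqueeze() instead of the numpy reshape (random_image_arr is the array from the question):

img = torch.from_numpy(random_image_arr.reshape(27, 35)).float()
img = img.unsqueeze(0).unsqueeze(0)  # add batch and channel dims: (1, 1, 27, 35)
conv2 = torch.nn.Conv2d(1, 18, kernel_size=3, stride=1, padding=1)
out = conv2(img)
print(out.shape)  # torch.Size([1, 18, 27, 35])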
https://stackoverflow.com/questions/51115476/
How to find IoU from segmentation masks?
I am doing an image segmentation task and I am using a dataset that only has ground truths but no bounding boxes or polygons. I have 2 classes (ignoring 0 for background) and the outputs and ground truth labels are in an array like Predicted--/---Labels 0|0|0|1|2 0|0|0|1|2 0|2|1|0|0 0|2|1|0|0 0|0|1|1|1 0|0|1|1|1 0|0|0|0|1 0|0|0|0|1 How do I calculate IoU from these? PS: I am using python3 with the pytorch api
So I just found out that jaccard_similarity_score is regarded as IoU. So the solution is very simple: from sklearn.metrics import jaccard_similarity_score jac = jaccard_similarity_score(predictions, label, normalize=True) Source link: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.jaccard_score.html#sklearn.metrics.jaccard_score
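If you want class-wise IoU rather than a single pixel-wise score (and note that jaccard_similarity_score has since been replaced by jaccard_score in newer scikit-learn versions), a hand-rolled sketch could look like this:

import numpy as np

def iou_per_class(pred, label, num_classes=3):
    # IoU = |intersection| / |union|, computed separately for each class
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        ious.append(intersection / union if union > 0 else float('nan'))
    return ious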
https://stackoverflow.com/questions/51115630/
Jupyterhub with custom singleuser image raises 'no such file or directory' for '/home/jovyan/work'
I tried to set up JupyterHub to serve a Jupyter pytorch notebook using this repo as the image https://github.com/stepankuzmin/pytorch-notebook, which is also available as a Docker image on dockerhub as https://hub.docker.com/r/stepankuzmin/pytorch-notebook/. This involves modifying the config.yml file to point to the image, e.g.: singleuser: image: name: stepankuzmin/pytorch-notebook tag: latest I am getting: oci runtime error: container_linux.go:247: starting container process caused \"chdir to cwd (\\\"/home/jovyan/work\\\") set in config.json failed: no such file or directory
Resolved. After much googling I stumbled on this: https://github.com/jupyterhub/jupyterhub/issues/1425. The summary from this is thread is the error can occur due to: * The singleuser jupyterhub image you are building does not conform to the requirements * You are building a singleuser jupyterhub image based on an old base image of singleuser jupyter The only way for me to proceed was to build my own singleuser jupyterhub pytorch jupyter notebook image, which is now available here: https://hub.docker.com/r/nethsix/pytorch-notebook/ The repo is: https://github.com/nethsix/pytorch-notebook I built the image using repo2docker as documented here: http://zero-to-jupyterhub.readthedocs.io/en/latest/user-environment.html#build-a-custom-docker-image-with-repo2docker
https://stackoverflow.com/questions/51120540/
Why does a larger neural network back-propagate faster than a smaller one
I wrote the following two NNs in pytorch for image segmentation: The smaller one: class ConvNetV0(nn.Module): def __init__(self): super(ConvNetV0, self).__init__() self.conv1 = nn.Conv2d(3, 30, 4, padding=2) self.conv2 = nn.Conv2d(30, 50, 16, padding=7, bias=True) self.conv3 = nn.Conv2d(50, 20, 2, stride=2) self.conv4 = nn.Conv2d(20, 2, 2, stride=2) def forward(self, x): x = self.conv1(x) x = F.relu(x) x = self.conv2(x) x = F.relu(x) x = self.conv3(x) x = F.relu(x) y = self.conv4(x) return y The larger one: class ConvNetV1(nn.Module): def __init__(self): super(ConvNetV1, self).__init__() self.conv0 = nn.Conv2d(3, 50, 4, padding=1, stride=2) self.conv_r1 = nn.Conv2d(50, 40, 15, padding=7, bias=True) self.conv_r2 = nn.Conv2d(40, 25, 3, padding=1) self.conv_r3 = nn.Conv2d(25, 25, 2, stride=2) # self.conv_r3 = nn.MaxPool2d(2, stride=2) self.conv_b1 = nn.Conv2d(50, 15, 4, padding=1, stride=2) self.conv1 = nn.Conv2d(40, 2, 1) def forward(self, x): x = self.conv0(x) x = F.relu(x) x1 = self.conv_r1(x) x1 = F.relu(x1) x1 = self.conv_r2(x1) x1 = F.relu(x1) x1 = self.conv_r3(x1) x2 = self.conv_b1(x) y = torch.cat([x1, x2], dim=1) y = self.conv1(y) return y However, during training at mini-batch size = 8, the smaller net takes 2s to complete one iteration while the larger one takes only 0.3s. I also observe that the ratio of the number of parameters between the two nets is around 5:6. However, during training the smaller net only takes 1GB VRAM while the larger takes 3GB. Since my 1050 Ti has 4GB VRAM, I would like to trade off memory for speed. Any idea how I can do so?
I ran a quick benchmark of your models on synthetic data of the size you indicated. At least on my system, the difference isn't actually given by the model forward or backward, but by the computation of the loss. This is likely to be due to the fact that the first model uses the GPU more, therefore the queuing of the operations is slightly longer. In fact your first model performs ~630 million operations, while the second one is around ~270 million. Notice that in the second model you immediately reduce the size of the feature maps from 256x256 to 128x128, while in the first model you reduce the size only in the last two convolutions. This has a big effect on the number of operations performed. So if you wish to use a V0-like model and make it faster, you should try to decrease the size of the feature maps right away. With this smaller model (in terms of memory), you'll also be able to increase the batch size. If you wish to use the V1 instead, there isn't much you can do. You could try to use checkpoints introduced in Pytorch 0.4 to trade compute for memory to the point where you can increase the batch to 16. It may run a bit faster, or not, depending on how much compute you need to trade off. Another simple thing you can do to make it run faster, if your input size doesn't change, is set torch.backends.cudnn.benchmark = True. This will look for the fastest set of algorithms for a particular configuration.
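To illustrate the checkpointing idea, here is a hypothetical sketch of what V1's forward could look like with the residual branch checkpointed; the split into a branch function is my own choice, not something from your code:

from torch.utils.checkpoint import checkpoint

def forward(self, x):
    x = F.relu(self.conv0(x))
    def branch(t):
        # activations in here are recomputed on backward instead of being stored
        t = F.relu(self.conv_r1(t))
        t = F.relu(self.conv_r2(t))
        return self.conv_r3(t)
    x1 = checkpoint(branch, x)  # trades compute for memory
    x2 = self.conv_b1(x)
    y = torch.cat([x1, x2], dim=1)
    return self.conv1(y)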
https://stackoverflow.com/questions/51124101/
LSTM layer returns nan when fed by its own output in PyTorch
I’m trying to generate time-series data with an LSTM and a Mixture Density Network as described in https://arxiv.org/pdf/1308.0850.pdf Here is a link to my implementation: https://github.com/NeoVand/MDNLSTM The repository contains a toy dataset to train the network. On training, the LSTM layer returns nan for its hidden state after one iteration. A similar issue is reported here. For your convenience, here is the code: import torch import torch.nn as nn import torch.optim as optim from torch.autograd import Variable import torch.nn.functional as F import matplotlib.pyplot as plt import numpy as np import numpy.random as npr device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") ts = torch.load('LDS_Toy_Data.pt') def detach(states): return [state.detach() for state in states] class MDNLSTM(nn.Module): def __init__(self, d_obs, d_lat=2, n_gaussians=2, n_layers=1): super(MDNLSTM, self).__init__() self.d_obs = d_obs self.d_lat = d_lat self.n_gaussians = n_gaussians self.n_layers = n_layers self.lstm = nn.LSTM(d_obs, d_lat, n_layers, batch_first=True) self.fcPi = nn.Linear(d_lat, n_gaussians*d_obs) self.fcMu = nn.Linear(d_lat, n_gaussians*d_obs) self.fcSigma = nn.Linear(d_lat, n_gaussians*d_obs) def get_mixture_coef(self, y): time_steps = y.size(1) pi, mu, sigma = self.fcPi(y), self.fcMu(y), self.fcSigma(y) pi = pi.view(-1, time_steps, self.n_gaussians, self.d_obs) mu = mu.view(-1, time_steps, self.n_gaussians, self.d_obs) sigma = sigma.view(-1, time_steps, self.n_gaussians, self.d_obs) pi = F.softmax(pi, 2) sigma = torch.exp(sigma) return pi, mu, sigma def forward(self, x, h): y, (h, c) = self.lstm(x, h) #print(h) pi, mu, sigma = self.get_mixture_coef(y) return (pi, mu, sigma), (h, c) def init_hidden(self, bsz): return (torch.zeros(self.n_layers, bsz, self.d_lat).to(device), torch.zeros(self.n_layers, bsz, self.d_lat).to(device)) def mdn_loss_fn(y, pi, mu, sigma): m = torch.distributions.Normal(loc=mu, scale=sigma) loss = torch.exp(m.log_prob(y)) loss = torch.sum(loss * pi, dim=2) loss = -torch.log(loss) return loss.mean() def criterion(y, pi, mu, sigma): y = y.unsqueeze(2) return mdn_loss_fn(y, pi, mu, sigma) DOBS = 10 DLAT = 2 INSTS = 100 seqlen = 30 epochs = 200 mdnlstm = MDNLSTM(DOBS, DLAT).to(device) optimizer = torch.optim.Adam(mdnlstm.parameters()) z = torch.from_numpy(ts[:INSTS,:,:]).float().to(device) # hiddens=[] # Train the model for epoch in range(epochs): # Set initial hidden and cell states hidden = mdnlstm.init_hidden(INSTS) for i in range(0, z.size(1) - seqlen, seqlen): # Get mini-batch inputs and targets inputs = z[:, i:i+seqlen, :] targets = z[:, (i+1):(i+1)+seqlen, :] hidden = detach(hidden) # hiddens.append(hidden) (pi, mu, sigma), hidden = mdnlstm(inputs, hidden) loss = criterion(targets, pi, mu, sigma) mdnlstm.zero_grad() loss.backward() optimizer.step() if epoch % 100 == 0: print ('Epoch [{}/{}], Loss: {:.4f}' .format(epoch, epochs, loss.item())) I would appreciate any help on this.
The issue was caused by the log-sum-exp operation not being done in a stable way. Here is an implementation of a weighted log-sum-exp trick that I used and could fix the problem: def weighted_logsumexp(x,w, dim=None, keepdim=False): if dim is None: x, dim = x.view(-1), 0 xm, _ = torch.max(x, dim, keepdim=True) x = torch.where( # to prevent nasty nan's (xm == float('inf')) | (xm == float('-inf')), xm, xm + torch.log(torch.sum(torch.exp(x - xm)*w, dim, keepdim=True))) return x if keepdim else x.squeeze(dim) and using that implemented the stable loss function: def mdn_loss_stable(y,pi,mu,sigma): m = torch.distributions.Normal(loc=mu, scale=sigma) m_lp_y = m.log_prob(y) loss = -weighted_logsumexp(m_lp_y,pi,dim=2) return loss.mean() This worked like a charm. In general, the problem is that torch won't report under-flows.
https://stackoverflow.com/questions/51125933/
How to format TSV files to use with torchtext?
The way I'm formatting the data is like: Jersei N atinge V média N . PU Programe V ... The first string in each line is the lexical item, the other is a POS tag. But the empty line (that I'm using to indicate the end of a sentence) gives me the error AttributeError: 'Example' object has no attribute 'text' when running the given code: src = data.Field() trg = data.Field(sequential=False) mt_train = datasets.TabularDataset( path='/path/to/file.tsv', fields=(src, trg)) src.build_vocab(train) What is the proper way to indicate EOS to torchtext?
The following code reads the TSV the way I formatted it: mt_train = datasets.SequenceTaggingDataset(path='/path/to/file.tsv', fields=(('text', text), ('labels', labels))) It happens that SequenceTaggingDataset properly identifies an empty line as the sentence separator.
https://stackoverflow.com/questions/51127880/
what's the difference between torch.Tensor() vs torch.empty() in pytorch?
I have tried it out as below. It seems to me they're the same. What's the difference between torch.Tensor() vs torch.empty() in pytorch?
torch.Tensor() is just an alias to torch.FloatTensor(), which is the default type of tensor when no dtype is specified during tensor construction. From the torch for numpy users notes, it seems that torch.Tensor() is a drop-in replacement of numpy.empty(). So, in essence, torch.FloatTensor() and torch.empty() do the same job of returning a tensor filled with garbage values of dtype torch.float32. Below is a small run: In [87]: torch.FloatTensor(2, 3) Out[87]: tensor([[-1.0049e+08, 4.5688e-41, -8.9389e-38], [ 3.0638e-41, 4.4842e-44, 0.0000e+00]]) In [88]: torch.FloatTensor(2, 3) Out[88]: tensor([[-1.0049e+08, 4.5688e-41, -1.6512e-38], [ 3.0638e-41, 4.4842e-44, 0.0000e+00]]) In [89]: torch.empty(2, 3) Out[89]: tensor([[-1.0049e+08, 4.5688e-41, -9.0400e-38], [ 3.0638e-41, 4.4842e-44, 0.0000e+00]]) In [90]: torch.empty(2, 3) Out[90]: tensor([[-1.0049e+08, 4.5688e-41, -9.2852e-38], [ 3.0638e-41, 4.4842e-44, 0.0000e+00]])
https://stackoverflow.com/questions/51129043/
Why pytorch has two kinds of Non-linear activations?
Why does pytorch have two kinds of non-linear activations? Non-linear activations (weighted sum, nonlinearity): https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity Non-linear activations (other): https://pytorch.org/docs/stable/nn.html#non-linear-activations-other
The primary difference is that the functions listed under Non-linear activations (weighted sum, nonlinearity) perform only thresholding and do not normalize the output (i.e. the resultant tensor need not necessarily sum up to 1, either on the whole or along some specified axes/dimensions). Example non-linearities: nn.ReLU nn.Sigmoid nn.SELU nn.Tanh Whereas the non-linearities listed under Non-linear activations (other) perform thresholding and normalization (i.e. the resultant tensor sums up to 1, either for the whole tensor if no axis/dimension is specified, or along the specified axes/dimensions). Example non-linearities: nn.Softmax nn.LogSoftmax (note the normalization term in the denominator of the softmax formula) The exception is nn.LogSoftmax(), for which the resultant tensor doesn't sum up to 1 since we apply log over the softmax output.
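A quick sketch showing the difference (illustrative only):

import torch
import torch.nn as nn

x = torch.randn(2, 3)
relu_out = nn.ReLU()(x)             # thresholding only; rows need not sum to 1
softmax_out = nn.Softmax(dim=1)(x)  # thresholding plus normalization
print(softmax_out.sum(dim=1))       # tensor([1., 1.]) -- normalized along dim 1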
https://stackoverflow.com/questions/51129751/
what is uninitialized data in the torch.empty function
I was going through the pytorch tutorial and came across the torch.empty function. It was mentioned that empty can be used for uninitialized data. But, when I printed it, I got a value. What is the difference between this and torch.rand, which also generates data (I know that rand generates values between 0 and 1)? Below is the code I tried a = torch.empty(3,4) print(a) Output: tensor([[ 8.4135e-38, 0.0000e+00, 6.2579e-41, 5.4592e-39], [-5.6345e-08, 2.5353e+30, 5.0447e-44, 1.7020e-41], [ 1.4000e-38, 5.7697e-05, 2.5353e+30, 2.1580e-43]]) b = torch.rand(3,4) print(b) Output: tensor([[ 0.1514, 0.8406, 0.2708, 0.3422], [ 0.7196, 0.6120, 0.4476, 0.6705], [ 0.6989, 0.2086, 0.5100, 0.8285]]) Here is the link to the official documentation
Once you call torch.empty(), a block of memory is allocated according to the size (shape) of the tensor. By uninitialized data, it's meant that torch.empty() would simply return the values in the memory block as is. These values could be default values or it could be the values stored in those memory blocks as a result of some other operations, which used that part of the memory block before. Here's a simple illustration: # a block of memory with the values in it In [74]: torch.empty(2, 3) Out[74]: tensor([[-1.0049e+08, 4.5688e-41, -9.1450e-38], [ 3.0638e-41, 4.4842e-44, 0.0000e+00]]) # same run; but note the change in values. # i.e. different memory addresses than on the previous run were used. In [75]: torch.empty(2, 3) Out[75]: tensor([[-1.0049e+08, 4.5688e-41, -7.9421e-38], [ 3.0638e-41, 4.4842e-44, 0.0000e+00]])
https://stackoverflow.com/questions/51140927/
OpenNMT issue with PyTorch: .copy_ function not clear behavior
I'm working with the PyTorch version of OpenNMT and I'm trying to modify the Beam Search algorithm. I'm currently stuck in the beam_update function (in the OpenNMT-py/onmt/decoders/decoder.py file). When it is called: sent_states.data.copy_( sent_states.data.index_select(1, positions)) according to the pytorch documentation of the .copy_ function it will Copies the elements from src into self tensor and returns self. But what is "self tensor" referring to? Can someone explain to me what this function does or point me to the source code, since I cannot find it...
The self tensor is the tensor you call copy_ on. In your example it is sent_states.data. To answer the question raised in the comments: Why does copy not behave like assigning with =? .copy() creates a real copy to a new memory location, while assigning with = only stores a reference to the memory location. The code below shows the difference in execution: import torch torch.manual_seed(3515) tensor1 = torch.rand(2, 3) tensor2 = torch.rand(2, 2) tensor3 = torch.rand(2, 3) positions = torch.tensor([2, 0]) tensor2.data.copy_(tensor1.data.index_select(1, positions)) tensor3.data = tensor1.data.index_select(1, positions) print(tensor2) print(tensor3) print(id(tensor1.data[0])) print(id(tensor2.data[0])) print(id(tensor3.data[0])) Output: tensor([[ 0.5939, 0.8861], [ 0.7525, 0.1537]]) tensor([[ 0.5939, 0.8861], [ 0.7525, 0.1537]]) 4583187080 4583187008 4583187080 The memory locations of tensor1 and tensor3 are the same, while tensor2 has been copied to a new location
https://stackoverflow.com/questions/51152088/
Building recurrent neural network with feed forward network in pytorch
I was going through this tutorial. I have a question about the following class code: class RNN(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(RNN, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.output_size = output_size self.i2h = nn.Linear(input_size + hidden_size, hidden_size) self.i2o = nn.Linear(input_size + hidden_size, output_size) self.softmax = nn.LogSoftmax() def forward(self, input, hidden): combined = torch.cat((input, hidden), 1) hidden = self.i2h(combined) output = self.i2o(combined) output = self.softmax(output) return output, hidden def init_hidden(self): return Variable(torch.zeros(1, self.hidden_size)) This code was taken from Here. There it was mentioned that Since the state of the network is held in the graph and not in the layers, you can simply create an nn.Linear and reuse it over and over again for the recurrence. What I don't understand is, how can one just increase the input feature size in nn.Linear and say it is an RNN? What am I missing here?
The network is recurrent, because you evaluate multiple timesteps in the example. The following code is also taken from the pytorch tutorial you linked to. loss_fn = nn.MSELoss() batch_size = 10 TIMESTEPS = 5 # Create some fake data batch = torch.randn(batch_size, 50) hidden = torch.zeros(batch_size, 20) target = torch.zeros(batch_size, 10) loss = 0 for t in range(TIMESTEPS): # yes! you can reuse the same network several times, # sum up the losses, and call backward! hidden, output = rnn(batch, hidden) loss += loss_fn(output, target) loss.backward() So the network itself is not recurrent, but in this loop you use it as a recurrent network by feeding the hidden state of the previous forward step together with your batch-input multiple times. You could also use it non-recurrently by just backpropagating the loss in every step and ignoring the hidden state. Since the state of the network is held in the graph and not in the layers, you can simply create an nn.Linear and reuse it over and over again for the recurrence. This means that the information to compute the gradient is not held in the model itself, so you can append multiple evaluations of the module to the graph and then backpropagate through the full graph. This is described in the previous paragraphs of the tutorial.
https://stackoverflow.com/questions/51152658/
How do I install and run pytorch in MSVS2017 (to avoid "module not found" error on "import torch" statement)?
I'm trying to use pytorch in MSVS2017. I started a pytorch project, have anaconda environment set using python3.6, but when I run the debugger, I get a "module not found" error on the first import statement "import torch". I've tried various methods for installing pytorch in a way that allows MSVS2017 to use it, including command line and Anaconda command line installations (using tips from other closely related StackOverflow questions), but I cannot clear the error. This is a native MSVS2017 project type that came with their AI Tools module. What am I doing wrong?
Probably, at the date of your MSVS2017 installation (esp. if prior to April 2018), there were no official .whl files for Windows pytorch (this has since changed). Also, given the default installation pathway, permissions on Windows (or file lock access) may be a problem (for example, when attempting to install to the "c:\ProgramData" folder). The solution is to 1) ensure all pytorch prerequisites are installed first (for example, if during your failed pytorch installation you get a "_____ requires _____, which is not installed" error, e.g. for cython, then install cython), 2) avoid permission errors by using the --user switch, and 3) install directly from the online repository. So, at the environment command line (top right corner in the "Python Environments" tool) provide --user http://download.pytorch.org/whl/cpu/torch-0.4.0-cp36-cp36m-win_amd64.whl. This operation will create and execute the command: pip install --user http://download.pytorch.org/whl/cpu/torch-0.4.0-cp36-cp36m-win_amd64.whl. Incidentally, you can install all packages at this environment command line simply by typing the package name (e.g., cython, torchvision, scipy, etc...).
https://stackoverflow.com/questions/51173695/
GPU performing slower than CPU for Pytorch on Google Colaboratory
The GPU trains this network in about 16 seconds. The CPU in about 13 seconds. (I am uncommenting/commenting appropriate lines to do the test.) Can anyone see what's wrong with my code or pytorch installation? (I have already checked that the GPU is available, and that there is sufficient memory available on the GPU.) from os import path from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag()) accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu' print(accelerator) !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.0-{platform}-linux_x86_64.whl torchvision print("done") ######################### import torch from datetime import datetime startTime = datetime.now() dtype = torch.float device = torch.device("cpu") # Comment this to run on GPU # device = torch.device("cuda:0") # Uncomment this to run on GPU # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. N, D_in, H, D_out = 64, 1024, 128, 8 # Create random Tensors to hold input and outputs. x = torch.randn(N, D_in, device=device, dtype=dtype) t = torch.randn(N, D_out, device=device, dtype=dtype) # Create random Tensors for weights. w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True) w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True) w3 = torch.randn(D_out, D_out, device=device, dtype=dtype, requires_grad=True) learning_rate = 1e-9 for i in range(10000): y_pred = x.mm(w1).clamp(min=0).mm(w2).clamp(min=0).mm(w3) loss = (y_pred - t).pow(2).sum() if i % 1000 == 0: print(i, loss.item()) loss.backward() # Manually update weights using gradient descent with torch.no_grad(): w1 -= learning_rate * w1.grad w2 -= learning_rate * w2.grad # Manually zero the gradients after updating weights w1.grad.zero_() w2.grad.zero_() print(datetime.now() - startTime)
I see you're timing things you shouldn't be timing (definition of dtype, device, ...). What's interesting to time here is the creation of the input, output and weight tensors. startTime = datetime.now() # Create random Tensors to hold input and outputs. x = torch.randn(N, D_in, device=device, dtype=dtype) t = torch.randn(N, D_out, device=device, dtype=dtype) torch.cuda.synchronize() print(datetime.now()-startTime) # Create random Tensors for weights. startTime = datetime.now() w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True) w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True) w3 = torch.randn(D_out, D_out, device=device, dtype=dtype, requires_grad=True) torch.cuda.synchronize() print(datetime.now()-startTime) and the training loop startTime = datetime.now() for i in range(10000): y_pred = x.mm(w1).clamp(min=0).mm(w2).clamp(min=0).mm(w3) loss = (y_pred - t).pow(2).sum() if i % 1000 == 0: print(i, loss.item()) loss.backward() # Manually update weights using gradient descent with torch.no_grad(): w1 -= learning_rate * w1.grad w2 -= learning_rate * w2.grad # Manually zero the gradients after updating weights w1.grad.zero_() w2.grad.zero_() torch.cuda.synchronize() print(datetime.now() - startTime) Why the GPU is slower I run it on my machine with a GTX1080 and a very good CPU, so the absolute timing is lower, but the explanation should still be valid. If you open a Jupyter notebook and run it on the CPU: 0:00:00.001786 time to create input/output tensors 0:00:00.003359 time to create weight tensors 0:00:04.030797 time to run training loop Now you set device to cuda and we call this "cold start" (nothing has been previously run on the GPU in this notebook) 0:00:03.180510 time to create input/output tensors 0:00:00.000642 time to create weight tensors 0:00:03.534751 time to run training loop You see that the time to run the training loop is reduced by a small amount, but there is an overhead of 3 seconds because you need to move the tensors from CPU to GPU RAM. If you run it again without closing the Jupyter notebook: 0:00:00.000421 time to create input/output tensors 0:00:00.000733 time to create weight tensors 0:00:03.501581 time to run training loop The overhead disappears, because Pytorch uses a caching memory allocator to speed things up. You can notice that the speedup you get on the training loop is very small, this is because the operations you're doing are on tensors of pretty small size. When dealing with small architectures and data I always run a quick test to see if I actually gain anything by running it on GPU. For example if I set N, D_in, H, D_out = 64, 5000, 5000, 8, the training loop runs in 3.5 seconds on the GTX1080 and in 85 seconds on the CPU.
https://stackoverflow.com/questions/51179133/
How to get the input and output channels in a CNN?
I am specifically looking at the AlexNet architecture found here: https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py I am confused as to how they are getting the input and output channels. Based on my reading of AlexNet, I can't figure out where they are getting output channels = 64 from (as the second argument to the Conv2d function). Even if the 256 is split across 2 GPUs, that should give 128 rather than 64. The input channel of 3 initially represents the color channels as per my assumption. However, the other input and output channels don't make sense to me either. Could anyone clarify what the input and output channels are? class AlexNet(nn.Module): def __init__(self, num_classes=1000): super(AlexNet, self).__init__() self.features = nn.Sequential( nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), #why 64? nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=3, stride=2), nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=3, stride=2), nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True), nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True), nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=3, stride=2), )
The 3 is the number of input channels (R, G, B). That 64 is the number of channels (i.e. feature maps) in the output of the first convolution operation. So, the first conv layer takes a color (RGB) image as input, applies 11x11 kernel with a stride 4, and outputs 64 feature maps. I agree that this is different from the number of channels (96, 48 in each GPU) in the architecture diagram (of original AlexNet implementation). However, PyTorch does not implement the original Alexnet architecture. Rather it implements a variant of the AlexNet implementation described in the paper: One weird trick for parallelizing convolutional neural networks. Also, see cs231n - convolutional networks for more details about how input, filters, stride, and padding equates to output after the conv operation. P.S: See pytorch/vision/issues/185
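As a quick sanity check of the shapes (assuming the standard 224x224 RGB input AlexNet is usually fed):

import torch

conv1 = torch.nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2)
x = torch.randn(1, 3, 224, 224)  # (batch, channels, height, width)
# output spatial size: floor((224 + 2*2 - 11) / 4) + 1 = 55
print(conv1(x).shape)            # torch.Size([1, 64, 55, 55])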
https://stackoverflow.com/questions/51180135/
Is it possible to see the read data of a pytorchtext.data.Tabulardataset?
train, test = data.TabularDataset.splits(path="./data/", train="train.csv",test="test.csv",format="csv",fields=[("Tweet",TEXT), ("Affect Dimension",LABEL)]) I have this code and want to evaluate if the loaded data is correct, or if it's using the wrong columns for the actual text fields etc. If my file has the column "Tweet" for the texts and "Affect Dimension" for the class name, is it correct to put them like this in the fields section? Edit: TabularDataset includes an Example object, in which the data can be read. When reading csv files, only a "," is accepted as a delimiter. Everything else will result in corrupted data.
You can put any field name irrespective of what your file has. Also, I recommend NOT using whitespace in the field names. So, rename Affect Dimension to Affect_Dimension or anything convenient for you. Then you can iterate over the different fields like below to check the read data. for i in train.Tweet: print(i) for i in train.Affect_Dimension: print(i) for i in test.Tweet: print(i) for i in test.Affect_Dimension: print(i)
https://stackoverflow.com/questions/51183040/
(Single Layer) Perceptron in PyTorch, bad convergence
I'm trying to develop a simple single layer perceptron with PyTorch (v0.4.0) to classify the AND boolean operation. I want to develop it by using autograd to calculate the gradients of the weights and bias and then update them in an SGD manner. The code is very simple and is the following: # AND points and labels data = torch.tensor([ [0, 0], [0, 1], [1, 0], [1, 1] ], dtype=torch.float32) labels = torch.tensor([0,0,0,1], dtype=torch.float32) weights = torch.zeros(2, dtype=torch.float32, requires_grad=True) bias = torch.zeros(1, requires_grad=True) losses = [] epochs = 100 eta = 0.01 for epoch in range(epochs): total_loss = 0 for idx in range(4): # take current input X = data[idx,:] y = labels[idx] # compute output and loss out = torch.add(torch.dot(weights, X), bias) loss = (out-y).pow(2) total_loss += loss.item() # backpropagation loss.backward() # compute accuracy and update parameters with torch.no_grad(): weights -= eta * weights.grad bias -= eta * bias.grad # reset gradient to zero weights.grad.zero_() bias.grad.zero_() losses.append(total_loss) The model converges, as you can see from the learning curve, but the resulting separating plane gives only 50% accuracy. I tried with different initial parameters and also by using the SGD optimizer from PyTorch but nothing changed. I know that MSE is a regression loss but I don't think the problem is there. Any ideas? Update The plane is computed with these 2 lines of code xr = np.linspace(0, 1, 10) yr = (-1 / weights[1].item()) * (weights[0].item() * xr + bias.item()) plt.plot(xr,yr,'-')
The equation you use to compute the plane yr = (-1 / weights[1].item()) * (weights[0].item() * xr + bias.item()) is derived in the case where y_i = [+1, -1] and there is a sign function: it's computed by looking for the plane that separates positive and negative examples. This assumption is not valid anymore if you change targets. If you draw this: x1 = np.linspace(0, 1, 10) x2 = np.linspace(0, 1, 10) X, Y = np.meshgrid(x1, x2) w1, w2 = weights.detach().numpy()[0, 0], weights.detach().numpy()[1, 0] b = bias.detach().numpy()[0] Z = w1*X + w2*Y + b which is the correct plane in 3D, you get the correct separation. You can get a correct separation with your formula if you offset by a factor that depends on the average of the labels, like: yr = (-1 / weights[1].item()) * (weights[0].item() * xr + bias.item() - 0.5) but I can't quite justify it formally.
https://stackoverflow.com/questions/51198135/
PyTorch Linear Regression Issue
I am trying to implement a simple linear model in PyTorch that can be given x data and y data, and then trained to recognize the equation y = mx + b. However, whenever I try to test my model after training, it thinks that the equation is y = mx + 2b. I'll show my code, and hopefully someone will be able to spot an issue. Thank you in advance for any help. import torch D_in = 500 D_out = 500 batch=200 model=torch.nn.Sequential( torch.nn.Linear(D_in,D_out), ) Next I create some data and set a rule. Let's do 3x+4. x_data=torch.rand(batch,D_in) y_data=torch.randn(batch,D_out) for i in range(batch): for j in range(D_in): y_data[i][j]=3*x_data[i][j]+4 # model thinks y=mx+c -> y=mx+2c? loss_fn=torch.nn.MSELoss(size_average=False) optimizer=torch.optim.Adam(model.parameters(),lr=0.001) Now to training... for epoch in range(500): y_pred=model(x_data) loss=loss_fn(y_pred,y_data) optimizer.zero_grad() loss.backward() optimizer.step() Then I test my model with a Tensor/matrix of just 1's. test_data=torch.ones(batch,D_in) y_pred=model(test_data) Now, I'd expect to get 3*1 + 4 = 7, but instead, my model thinks it is 11. [[ 10.7286, 11.0499, 10.9448, ..., 11.0812, 10.9387, 10.7516], [ 10.7286, 11.0499, 10.9448, ..., 11.0812, 10.9387, 10.7516], [ 10.7286, 11.0499, 10.9448, ..., 11.0812, 10.9387, 10.7516], ..., [ 10.7286, 11.0499, 10.9448, ..., 11.0812, 10.9387, 10.7516], [ 10.7286, 11.0499, 10.9448, ..., 11.0812, 10.9387, 10.7516], [ 10.7286, 11.0499, 10.9448, ..., 11.0812, 10.9387, 10.7516]]) Similarly, if I change the rule to y=3x+8, my model guesses 19. So, I am not sure what is going on. Why is the constant being added twice? By the way, if I just set the rule to y=3x, my model correctly infers 3, and for y=mx in general my model correctly infers m. For some reason, the constant term is throwing it off. Any help to solve this problem is much appreciated. Thanks!
Your network does not learn long enough. It gets a vector with 500 features to describe a single datum. Your network has to map the big input of 500 features to an output of 500 values. Your training data is randomly created, not like your simple example, so I think you just have to train longer to fit your weights to approximate this function from R^500 to R^500. If I reduce the input and output dimensionality and increase the batch size, learning rate and training steps I get the expected result: import torch D_in = 100 D_out = 100 batch = 512 model=torch.nn.Sequential( torch.nn.Linear(D_in,D_out), ) x_data=torch.rand(batch,D_in) y_data=torch.randn(batch,D_out) for i in range(batch): for j in range(D_in): y_data[i][j]=3*x_data[i][j]+4 # model thinks y=mx+c -> y=mx+2c? loss_fn=torch.nn.MSELoss(size_average=False) optimizer=torch.optim.Adam(model.parameters(),lr=0.01) for epoch in range(10000): y_pred=model(x_data) loss=loss_fn(y_pred,y_data) optimizer.zero_grad() loss.backward() optimizer.step() test_data=torch.ones(batch,D_in) y_pred=model(test_data) print(y_pred) If you just want to approximate f(x) = 3x + 4 with only one input you could also set D_in and D_out to 1.
https://stackoverflow.com/questions/51198474/