st100400
Thank you for the response. Do you know of any method to get a profile for each layer, not just the memory usage but also execution time, etc.?
st100401
I don't know of anything specific to PyTorch, but I have used the tools suggested here: https://stackoverflow.com/questions/582336/how-can-you-profile-a-script
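For reference, one of the generic options from that Stack Overflow thread is Python's built-in cProfile; a toy sketch (note it measures Python-level time only, so asynchronous CUDA kernel time will not be attributed correctly):

```python
import cProfile
import torch

def toy_workload():
    x = torch.randn(64, 1000)
    w = torch.randn(1000, 1000)
    for _ in range(10):
        x = torch.matmul(x, w)
    return x

cProfile.run('toy_workload()', sort='cumtime')
```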
st100402
Assuming we are in the unfortunate case of having a nan valued Variable. If it is passed through a ReLU activation the output is a zero. Is that the desired behaviour? (Other activation functions return nan instead as I would have expected)

```python
import torch
from torch.autograd import Variable
import torch.nn.functional as F

A = Variable(torch.zeros(1)) / 0  # nan
print(F.relu(A))        # 0
print(F.elu(A))         # nan
print(F.leaky_relu(A))  # nan
print(F.sigmoid(A))     # nan
```
st100403
Yes, this is desired/expected. Doing max(x, nan) will ignore the nan and pass through x.
st100404
As of PyTorch 0.4.1 this is not the case anymore: relu(NaN) == NaN.

```python
In [1]: import torch
In [2]: x = torch.ones(1).float() + float('NaN')
In [3]: x
Out[3]: tensor([ nan])
In [4]: x.relu()
Out[4]: tensor([ nan])
```

I've previously happily (ab)used the previous behaviour. Is there a suggested new method for setting NaN entries in a tensor to 0? I assume the suggested method my_tensor[torch.isnan(my_tensor)] = 0. will cause problems on the GPU and have a high memory cost. Is there some other method?
st100405
Edit: I'll continue the discussion in the https://discuss.pytorch.org/t/how-to-set-nan-in-tensor-to-0 thread.
st100406
Hi everyone, I don't have an Nvidia graphics card, but I wonder whether it's possible to train my network with GeForce Now (an online cloud gaming service), since CUDA is also powered by Nvidia. Thanks
st100407
I've skimmed through the quick start guide and it looks like you would have to use some kind of app to play your games. I doubt you'll have an SSH login to install libraries etc. That being said, I've never heard of this service before, so feel free to correct me if I'm wrong.
st100408
It doesn't seem to be suitable for machine learning, but I found an alternative that is well suited for machine learning, and PyTorch is especially mentioned on their website. It's called Paperspace, and I think it's good for everyone who has to do a lot of training and has no Nvidia graphics card. I think I will try it and will write a report right here, so you know whether it is worth the money.
st100409
I am using Paperspace, it is working well for me. Using their Linux Machine Learning computer.
st100410
Hi, I am trying to do a train-test split for my dataset. I find that there is a built-in SubsetRandomSampler which does the job. Does anyone know the difference between it and the train-test split function sklearn.model_selection.train_test_split provided in sklearn? Thanks!
st100411
Solved by ptrblck in post #2.
st100412
The SubsetRandomSampler samples randomly from a list of indices. As it won’t create these indices internally, you might want to use something like train_test_split to create the training, eval and test indices and then pass them to the SubsetRandomSampler.
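A minimal sketch of this combination; the toy dataset and split sizes are placeholders:

```python
import torch
from sklearn.model_selection import train_test_split
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.sampler import SubsetRandomSampler

# toy dataset of 100 samples with 10 features each
dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))

# split the *indices*, not the data, then hand each split to a sampler
indices = list(range(len(dataset)))
train_idx, valid_idx = train_test_split(indices, test_size=0.2, random_state=42)

train_loader = DataLoader(dataset, batch_size=25,
                          sampler=SubsetRandomSampler(train_idx))
valid_loader = DataLoader(dataset, batch_size=25,
                          sampler=SubsetRandomSampler(valid_idx))
```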
st100413
Great, thank you! I have two other questions: (1) If I create the train/validation sets by creating two DataLoader objects outside the training loop, does that mean the train/validation sets are fixed during training? Do I need to put them inside the training loop if I want different train/validation sets for different epochs? (2) Is the training set automatically shuffled when getting mini-batches in each epoch?
st100414
The DataLoaders will use the Dataset you are passing. If you created these Datasets before your training loop, they won't change. I'm not sure it's a good idea to reshuffle the training and validation data inside the training loop, as this will yield a data leak. If you really want to do that, you could create a new sampler or dataset after an epoch, wrap it in a DataLoader, and start the next epoch. The DataLoader will shuffle the data if you pass shuffle=True as an argument and don't use your own Sampler. If you do pass a Sampler, the Sampler determines whether the data is shuffled.
st100415
Thanks! In the documentation (https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) it says: sampler (Sampler, optional) – defines the strategy to draw samples from the dataset. If specified, shuffle must be False. Does that mean there would be problems if I define my training/validation sets this way?

```python
train_loader = DataLoader(dataset, batch_size=50, sampler=SubsetRandomSampler(train_idx), shuffle=True)
valid_loader = DataLoader(dataset, batch_size=50, sampler=SubsetRandomSampler(valid_idx), shuffle=True)
```

Why would it be an issue to shuffle the data when generating batches if SubsetRandomSampler is applied?
st100416
Since the sampler defines the sampling strategy, the shuffle argument would make the sampler meaningless. Think of a sampler you've created to draw a sequence of images in a particular order: if the DataLoader could shuffle, the sequence might be broken. That is why the sampler alone defines how to draw samples and whether to shuffle. In your example you don't need to specify shuffle=True, as SubsetRandomSampler already shuffles the data using the subset indices.
st100417
Hi! I've installed PyTorch from source (yesterday's master) and the Reinforcement Learning (DQN) tutorial needs a couple of tweaks to run. Can I update it with a PR, or will you do this? The problems are as follows (I refer to the downloaded Python source):

Line 449: _, reward, done, _ = env.step(action.cpu().numpy()[0, 0]) (the .cpu().numpy() part is the fix): accessing a torch.Tensor element no longer returns a plain value, so it needs to be converted to numpy explicitly. Note that I don't use CUDA; converting to cpu doesn't hurt in my case but is needed for the CUDA case.

Line 414: expected_state_action_values = Variable(expected_state_action_values.data.view(-1, 1)) (the .view(-1, 1) part is the fix): without adding this dummy dim, there are problems further down in the code (line 417) with F.smooth_l1_loss(...).

And of course torch.no_grad() should be used, but it's not in the official release for now, so I omit it.
st100418
We haven't updated the tutorials yet because we haven't officially released PyTorch 0.4 yet. They'll be updated when that happens.
st100419
What will happen if all the sampled states have their next state as None?

```python
if len(memory) < BATCH_SIZE:
    return
transitions = memory.sample(BATCH_SIZE)
batch = Transition(*zip(*transitions))
# Compute a mask of non-final states and concatenate the batch elements
non_final_mask = torch.tensor(tuple(map(lambda s: s is not None, batch.next_state)),
                              device=device, dtype=torch.uint8)
non_final_next_states = torch.cat([s for s in batch.next_state
                                   if s is not None])
state_batch = torch.cat(batch.state)
action_batch = torch.cat(batch.action)
reward_batch = torch.cat(batch.reward)
```

Is this a bug?
st100420
GPU memory is enough for the model in train mode, but not enough in eval mode. Why?
st100421
Usually it would be the other way around. You may need to check the batch size for test mode (whether it is higher than in training) as well as whether you are testing with torch.no_grad(). If the reason is not one of the above, you may need to post a short snippet of code with the dataloader/test method to get a valid answer.
st100422
The DataLoader is defined as follows:

```python
import json
import h5py
import os
from PIL import Image
from PIL.ImageOps import expand
import numpy as np
import torch.utils.data as data
import multiprocessing
import random
import torchvision
from random import choice
import torch

train_augmentation = torchvision.transforms.Compose([
    torchvision.transforms.Resize(260),
    torchvision.transforms.RandomResizedCrop(224),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ColorJitter(random.randint(0, 1)),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

test_augmentation = torchvision.transforms.Compose([
    torchvision.transforms.Resize(224),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])


class DataLoader(data.Dataset):

    def reset_iterator(self, split):
        del self._prefetch_process[split]
        self._prefetch_process[split] = BlobFetcher(split, self, split == 'train')
        self.iterators[split] = 0

    def get_vocab_size(self):
        return self.vocab_size

    def get_vocab(self):
        return self.ix_to_word

    def word_to_ix(self):
        w2ix = {}
        for k, v in self.ix_to_word.items():
            w2ix[v] = k
        return w2ix

    def get_vb_vocab(self):
        vocab = {}
        for k, v in self.vb_ix_to_word.items():
            vocab[v] = k
        return vocab

    def get_unk_ix(self):
        vb_vocab = self.get_vb_vocab()
        return int(vb_vocab['unk'])

    def get_vb_weights(self):
        vb_weights = []
        total_vb = sum(self.vb_counts.values())
        vb_vocab = self.get_vb_vocab()
        vb_vocab = sorted(vb_vocab.items(), key=lambda item: int(item[1]))
        for each in vb_vocab:
            weight = total_vb / (int(self.vb_counts[each[0]]) * 10000)
            vb_weights.append(weight)
        return torch.from_numpy(np.asarray(vb_weights)).float()

    def get_seq_length(self):
        return self.seq_length

    def read_files(self):
        self.feats_fc = h5py.File(os.path.join(
            self.opt.input_fc_dir, 'feats_fc.h5'), 'r')
        self.feats_att = h5py.File(os.path.join(
            self.opt.input_att_dir, 'feats_att.h5'), 'r')

    def get_data(self, ix, split):
        img_path = os.path.join(self.imgpth, self.info['images'][ix]['file_path'])
        image = Image.open(img_path)
        h, w = image.size
        g = max(h, w)
        delta_w = g - h
        delta_h = g - w
        padding = (delta_w // 2, delta_h // 2,
                   delta_w - (delta_w // 2), delta_h - (delta_h // 2))
        image = expand(image, padding)
        if split == 'train':
            image = train_augmentation(image)
        else:
            image = test_augmentation(image)
        return image, ix

    def __init__(self, opt):
        self.opt = opt
        self.batch_size = self.opt.batch_size
        self.imgpth = opt.input_img_path
        # load json file which contains additional information about dataset
        print('DataLoader loading json file: ', opt.input_json)
        self.info = json.load(open(self.opt.input_json))
        self.ix_to_word = self.info['ix_to_word']
        self.w2ix = self.word_to_ix()
        self.vocab_unk_ix = int(self.w2ix['UNK'])
        self.vocab_size = len(self.ix_to_word)
        print('vocab size is ', self.vocab_size)
        self.vb_ix_to_word = self.info['vb_ix_to_word']
        self.vb_counts = self.info['vb_counts']
        self.vb_vocab_size = len(self.vb_ix_to_word)
        self.noun_labels = self.info['noun_labels']
        print('vb vocab size is ', self.vb_vocab_size)
        # open the hdf5 file
        print('DataLoader loading h5 file: ', opt.input_label_h5)
        self.h5_label_file = h5py.File(self.opt.input_label_h5, 'r', driver='core')
        # load in the sequence data
        seq_size = self.h5_label_file['labels'].shape
        self.seq_length = seq_size[1]
        print('max sequence length in data is', self.seq_length)
        # load the pointers in full to RAM (should be small enough)
        self.label_start_ix = self.h5_label_file['label_start_ix'][:]
        self.label_end_ix = self.h5_label_file['label_end_ix'][:]
        self.noun_mask = self.h5_label_file['noun_mask'][:]
        self.num_images = self.label_start_ix.shape[0]
        print('read %d image features' % (self.num_images))
        # separate out indexes for each of the provided splits
        self.split_ix = {'train': [], 'test': []}
        for ix in range(len(self.info['images'])):
            img = self.info['images'][ix]
            if img['split'] == 'train':
                self.split_ix['train'].append(ix)
            # elif img['split'] == 'val':
            #     self.split_ix['val'].append(ix)
            elif img['split'] == 'test':
                self.split_ix['test'].append(ix)
            elif opt.train_only == 0:  # restval
                self.split_ix['train'].append(ix)
        print('assigned %d images to split train' % len(self.split_ix['train']))
        # print('assigned %d images to split val' % len(self.split_ix['val']))
        print('assigned %d images to split test' % len(self.split_ix['test']))
        self.iterators = {'train': 0, 'test': 0}
        self._prefetch_process = {}  # The three prefetch process
        for split in self.iterators.keys():
            self._prefetch_process[split] = BlobFetcher(split, self, split == 'train')

        # Terminate the child process when the parent exists
        def cleanup():
            print('Terminating BlobFetcher')
            for split in self.iterators.keys():
                del self._prefetch_process[split]
        import atexit
        atexit.register(cleanup)

    def get_batch(self, split, batch_size=None):
        batch_size = batch_size or self.batch_size
        img_batch = np.zeros([batch_size, 3, 224, 224], dtype='float32')
        label_batch = np.zeros([batch_size, self.seq_length + 2], dtype='int')
        vb_label_batch = np.zeros([batch_size, self.vb_vocab_size], dtype='int')
        mask_batch = np.zeros([batch_size, self.seq_length + 2], dtype='float32')
        noun_batch = np.zeros([batch_size, self.vocab_size + 1], dtype='float32')
        noun_mask_batch = np.zeros([batch_size, 49], dtype='float32')
        masks = []
        wrapped = False
        infos = []
        gts = []
        for i in range(batch_size):
            img, ix, tmp_wrapped = self._prefetch_process[split].get()
            img_batch[i] = img
            ix1 = self.label_start_ix[ix]
            ix2 = self.label_end_ix[ix] - 1
            ncap = ix2 - ix1 + 1
            for x in range(ix1, ix2):
                if len(self.h5_label_file['vblabels'][x]) > 0:
                    ind = self.h5_label_file['vblabels'][x].nonzero()
                    for each in ind:
                        vb_label_batch[i][self.h5_label_file['vblabels'][x][each]] = 1
            assert ncap > 0, 'an image does not have any label'
            ixl = random.randint(ix1, ix2)
            label_batch[i, 1: self.seq_length + 1] = self.h5_label_file['labels'][ixl]
            nouns = self.noun_labels[ixl]['subject']
            noun_mask_batch[i] = self.noun_mask[ixl]
            l_nouns = len(nouns)
            if l_nouns > 0:
                seed = random.randint(0, l_nouns - 1)
                masks.append(nouns[seed])
                for each in nouns[seed]:
                    noun_batch[i][each - 1] = 1
            else:
                noun_batch[i][self.vocab_unk_ix - 1] = 1
            # vb_labels = ((self.h5_label_file['vblabels'][ixl] >= 1) * self.h5_label_file['vblabels'][ixl]).nonzero()[0]
            # if len(self.h5_label_file['vblabels'][ixl]) >= 1:
            #     for each in self.h5_label_file['vblabels'][ixl]:
            #         vb_label_batch[i][each - 1] = 1
            # if len(vb_labels) == 0:
            #     vb_label_batch[i] = self.get_unk_ix() - 1
            # else:
            #     vb_label_batch[i] = self.h5_label_file['vblabels'][ixl][choice(vb_labels)] - 1
            if tmp_wrapped:
                wrapped = True
            gts.append(self.h5_label_file['labels'][self.label_start_ix[ix]:self.label_end_ix[ix]])
            info_dict = {}
            info_dict['ix'] = ix
            info_dict['id'] = int(self.info['images'][ix]['id'])
            info_dict['file_path'] = self.info['images'][ix]['file_path']
            infos.append(info_dict)
        nonzeros = np.array(list(map(lambda x: (x != 0).sum() + 2, label_batch)))
        for ix, row in enumerate(mask_batch):
            row[:nonzeros[ix]] = 1
        data = {}
        # print('img_batch size: ', img_batch.size(0), img_batch.size(1))
        data['imgs'] = img_batch
        data['labels'] = label_batch
        data['vb_labels'] = vb_label_batch
        data['gts'] = gts
        data['masks'] = mask_batch
        data['noun_batch'] = noun_batch
        data['noun_mask_batch'] = noun_mask_batch
        data['bounds'] = {'it_pos_now': self.iterators[split],
                          'it_max': len(self.split_ix[split]),
                          'wrapped': wrapped}
        data['infos'] = infos
        data['masks_'] = masks
        return data

    # It's not coherent to make DataLoader a subclass of Dataset,
    # but essentially we only need to implement the following two functions,
    # so that the torch.utils.data.DataLoader can load the data according to
    # the index. However, it's the minimum change to switch to pytorch data loading.
    def __getitem__(self, index):
        ix = index
        return self.get_data(ix, split='train')

    def __len__(self):
        return len(self.info['images'])


class BlobFetcher():
    def __init__(self, split, dataloader, is_shuffle=False):
        self.split = split
        self.dataloader = dataloader
        self.is_shuffle = is_shuffle

    def reset(self):
        sampler = self.dataloader.split_ix[self.split][self.dataloader.iterators[self.split]:]
        self.split_loader = iter(
            data.DataLoader(dataset=self.dataloader,
                            batch_size=1,
                            sampler=sampler,
                            shuffle=False,
                            pin_memory=True,
                            num_workers=multiprocessing.cpu_count(),
                            collate_fn=lambda x: x[0]))

    def _get_next_minibatch_inds(self):
        max_index = len(self.dataloader.split_ix[self.split])
        wrapped = False
        ri = self.dataloader.iterators[self.split]
        ix = self.dataloader.split_ix[self.split][ri]
        ri_next = ri + 1
        if ri_next >= max_index:
            ri_next = 0
            if self.is_shuffle:
                random.shuffle(self.dataloader.split_ix[self.split])
            wrapped = True
        self.dataloader.iterators[self.split] = ri_next
        return ix, wrapped

    def get(self):
        if not hasattr(self, 'split_loader'):
            self.reset()
        ix, wrapped = self._get_next_minibatch_inds()
        tmp = self.split_loader.next()
        if wrapped:
            self.reset()
        assert tmp[1] == ix, 'ix not equal'
        return tmp + [wrapped]
```
st100423
Are the batch sizes of train and test mode the same? Are you using with torch.no_grad() while testing?
st100424
Yes, the batch sizes of train and test mode are the same. I do not use torch.no_grad(), because my torch version is 0.3.0.
st100425
You could set volatile=True for data and target if the PyTorch version is < 0.4:

```python
data = Variable(data, volatile=True)
target = Variable(target, volatile=True)
```
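For PyTorch 0.4 and later, the volatile flag is gone and the equivalent is the torch.no_grad() context manager; a small sketch with a stand-in model:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)   # stand-in for the actual network
data = torch.randn(4, 10)
with torch.no_grad():      # no autograd graph is built, saving memory at eval time
    output = model(data)
```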
st100426
Hi, I need to use synchronized BatchNorm, so I downloaded the project from https://github.com/zhanghang1989/PyTorch-Encoding and ran its train.py to test whether everything works fine. The error "all CUDA-capable devices are busy or unavailable" appears as soon as the training procedure starts, and I'm not sure what the error depends on. I'm using a remote server and cannot reset the GPUs myself for the moment; they are pretty empty. Here is some information about my environment: Ubuntu 16.04, CUDA 8.0, PyTorch 0.4.1, gcc 5.5, GPU compute mode: default.
st100427
How can I load the rotated MNIST dataset (https://github.com/ChaitanyaBaweja/RotNIST/tree/master/data) using PyTorch?
st100428
Solved by ptrblck in post #2.
st100429
The main.py in that repo seems to show how to load the data. You could probably adapt the code to load the data first and then wrap it in a Dataset. The Data Loading tutorial might help you implement your own Dataset. Note that in a vanilla classification use case PyTorch expects a target filled with class indices, while this code seems to load the target in a one-hot encoded format. To get the indices you can just call torch.argmax(targets, dim=1).
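A minimal sketch of that idea; the array names and shapes here are placeholders, not part of the linked repo:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class RotatedMNISTDataset(Dataset):
    """Wraps pre-loaded arrays: images of shape (N, 1, 28, 28) and
    one-hot targets of shape (N, 10) (both shapes are assumptions)."""
    def __init__(self, images, one_hot_targets):
        self.images = torch.from_numpy(images).float()
        # convert the one-hot targets to the class indices PyTorch expects
        self.targets = torch.argmax(torch.from_numpy(one_hot_targets), dim=1)

    def __getitem__(self, index):
        return self.images[index], self.targets[index]

    def __len__(self):
        return len(self.images)

# toy usage with random stand-in data
ds = RotatedMNISTDataset(np.random.rand(16, 1, 28, 28),
                         np.eye(10)[np.random.randint(0, 10, 16)])
print(ds[0][0].shape, ds[0][1])
```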
st100430
When I specify that my code should run on a specific GPU (GPU 2), it uses that GPU but also creates a subprocess of around 500 MB on GPU 0. I assigned the GPU using network.cuda(2) and image.cuda(2), and moved data to the GPU using mydata.to(2). PyTorch version: 0.4. Why does this subprocess (PID 33106 on GPU 0) get created?
st100431
Multi-GPU always allocates on cuda:0. This issue is fixed in version 0.5.0.
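A common workaround (my suggestion, not stated in this thread) is to hide the other GPUs from the process before CUDA is initialized, so that cuda:0 maps to the physical GPU you want:

```python
import os
# must be set before torch initializes CUDA (safest: before importing torch)
os.environ['CUDA_VISIBLE_DEVICES'] = '2'

import torch
device = torch.device('cuda:0')  # now refers to physical GPU 2
```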
st100432
When I use multiple GPUs to train my model (model = torch.nn.DataParallel(model); model = model.to(device)), where I have to create some GPU tensors in the loss function, I find that the new GPU tensors have to be on the same GPU as the model's, otherwise a 'data on different gpus' error appears. However, if I put all the additional tensors on the same GPU, a weird memory assignment occurs: GPU 0 uses 10000 MB while the other GPUs use 3000 MB. I have to shrink my batch size to enable training. Is there any solution?
st100433
See the title. I want to build from source and get a whl, as I don't want to install CUDA, MKL, etc. as conda packages, and I want PyTorch to be self-contained. Thanks. Currently I can only install PyTorch as a pip wheel using precompiled binaries from the official website, which leaves me stuck with bugs that have already been fixed on the master branch.
st100434
Hi, you can follow the usual instructions for building from source and call setup.py bdist_wheel instead of setup.py install. This will put the whl in the dist directory. Best regards, Thomas
st100435
So that would give me a wheel similar to those on the official website, with everything self-contained, right?
st100436
Here is the script to build the wheel: https://github.com/pytorch/builder/tree/master/wheel
st100437
Thanks. So what are the dependencies? Just an environment with conda installed (plus a new enough gcc, say 4.8)?
st100438
Hi Soumith, I checked some parts of the builder script, namely https://github.com/pytorch/builder/blob/master/conda/switch_cuda_version.sh. Although I haven't tested it, it seems that CUDA and cuDNN are not handled by conda; instead they are installed in system-level directories. It seems that I should set up an environment like the one in https://github.com/pytorch/builder/tree/master/manywheel, and then run the scripts under wheel?
st100439
Yes, you are correct. You will need this environment: https://github.com/pytorch/builder/blob/master/manywheel/conda_build.sh
st100440
@smth @zym1010 Excuse me, I am facing the same problem these days, so I hope you can help me. Since I am working on an offline machine with Ubuntu 16.04, I can't directly use the builder script in the repository smth offered. Instead, I wrote a new one following https://github.com/pytorch/builder/blob/master/manywheel/build.sh. In this script, the environment variables and the deps list remain the same. The problem is that with this script I can obtain a .whl file and install it on another machine (with CUDA installed), but I find that it takes a long time (about 3 minutes) in the procedure lambda t: t.to_cuda() when module.apply() is called. I guess the reason may be that the CUDA dependencies have not been fully integrated into the whl file. Could you please tell me the reason, or share the correct process to build a "manywheel" file?
st100441
@YuxiaoXu if you kept these particular env variables intact (https://github.com/pytorch/builder/blob/master/manywheel/build.sh#L27-L39), then there's no reason it should take 3 minutes of startup time on the 2nd machine. The only situation I can think of where that will happen is this: you build the wheel with CUDA 8 and the 2nd machine has a Volta GPU. In this case, it will runtime-compile some CUDA kernels for Volta (because Volta needs at least CUDA 9 for direct support).
st100442
I have two Tensor objects, t1 of size (D, m, n) and t2 of size (D, n, n), and I want to perform something like NumPy's tensordot(t1, t2, axes=([0, 2], [0, 2])), that is, perform 2D matrix multiplications over axes 0 and 2 of the 3D tensors. Is it possible to do this in PyTorch?
st100443
The simplest way I see is to use view to merge the common dimensions into one single common dimension and then use a classical 2D mm. This is an example with D=7 and n=3:

```python
t1 = torch.rand(7, 5, 3)
t2 = torch.rand(7, 2, 3)
# put t1 and t2 into compatible shapes:
t1 = t1.transpose(1, 2).contiguous().view(7 * 3, -1).transpose(0, 1)
t2 = t2.transpose(1, 2).contiguous().view(7 * 3, -1)
result = torch.mm(t1, t2)
```
st100444
The problem here is that t2 is of size (D, n, n), so in your example it should be of size (7, 3, 3). When I view it, how can I be sure to "flatten" the first and third dimensions, and not the first and second ones?
st100445
By default, view will "merge" the first set of dimension components that divides the target dimension size. If you have a (D, n, n) tensor and use .view(D*n, n), the first two dimensions will be merged.
st100446
The answer above matches the behavior of np.tensordot. For those looking to do slice multiplication over 3D tensors, torch.matmul might be better.

```python
A = torch.tensor([[[i for i in range(1, 3)]] * 5] * 2)
>>> tensor([[[1, 2], [1, 2], [1, 2], [1, 2], [1, 2]],
            [[1, 2], [1, 2], [1, 2], [1, 2], [1, 2]]])
A.size()
>>> torch.Size([2, 5, 2])
B = torch.tensor([[[i for i in range(2, 4)]] * 3] * 2)
>>> tensor([[[2, 3], [2, 3], [2, 3]],
            [[2, 3], [2, 3], [2, 3]]])
B.size()
>>> torch.Size([2, 3, 2])

# np.tensordot result:
D = np.tensordot(A.numpy(), B.numpy(), axes=([0, 2], [0, 2]))
>>> array([[16, 16, 16],
           [16, 16, 16],
           [16, 16, 16],
           [16, 16, 16],
           [16, 16, 16]])

# The proposed solution above:
A.transpose(1, 2).contiguous().view(2 * 2, -1).transpose(0, 1)
>>> tensor([[1, 2, 1, 2],
            [1, 2, 1, 2],
            [1, 2, 1, 2],
            [1, 2, 1, 2],
            [1, 2, 1, 2]])
B.transpose(1, 2).contiguous().view(2 * 2, -1)
>>> tensor([[2, 2, 2],
            [3, 3, 3],
            [2, 2, 2],
            [3, 3, 3]])
torch.mm(A.transpose(1, 2).contiguous().view(2 * 2, -1).transpose(0, 1),
         B.transpose(1, 2).contiguous().view(2 * 2, -1))
>>> tensor([[16, 16, 16],
            [16, 16, 16],
            [16, 16, 16],
            [16, 16, 16],
            [16, 16, 16]])
# --> Equivalent to the tensordot answer

# With matmul, we can get slice-by-slice products:
torch.matmul(A, B.transpose(1, 2))
>>> tensor([[[8, 8, 8], [8, 8, 8], [8, 8, 8], [8, 8, 8], [8, 8, 8]],
            [[8, 8, 8], [8, 8, 8], [8, 8, 8], [8, 8, 8], [8, 8, 8]]])
```
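For reference, later PyTorch versions (1.0 and up) ship torch.tensordot directly, so the same contraction can be written without manual reshaping:

```python
import torch

t1 = torch.rand(7, 5, 3)  # (D, m, n)
t2 = torch.rand(7, 4, 3)  # (D, p, n)
out = torch.tensordot(t1, t2, dims=([0, 2], [0, 2]))
print(out.shape)          # torch.Size([5, 4])
```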
st100447
Hello, is there any tutorial that explains how to fine-tune a pre-trained model with a new dataset?
st100448
I know; I was facing similar problems too. So after I was done, I wrote this tutorial on fine-tuning a pretrained model: https://github.com/Spandan-Madan/Pytorch_fine_tuning_Tutorial Hope this helps!
st100449
I've updated the tutorial to work with PyTorch 0.4 now! Here is the link again: https://github.com/Spandan-Madan/Pytorch_fine_tuning_Tutorial
st100450
First I tried training a CNN on the function f(x) = x. This worked fine. Then my new function was f(x) = x for x < 0.5 and f(x) = 0 for x > 0.5. This did not work; the network could not figure it out. Is it a weakness of neural networks that they cannot model non-differentiable functions, or something else? Thanks, Matt
st100451
I think I have it working now. It was probably something I did wrong, although I can't figure out what.
st100452
I have a k x 2 tensor named points and another k x 1 tensor named mask. mask contains a 1 or 0 for each index. I want to filter points, removing the entire row if mask does not contain a 1 for that specific k. How can I do this?
st100453
Solved by ptrblck in post #2.
st100454
You could use torch.masked_select:

```python
k = 10
x = torch.randn(k, 2)
mask = torch.empty(k, 1, dtype=torch.uint8).random_(2)
x.masked_select(mask).view(-1, 2)
```
st100455
Can you elaborate on what this is, with a link to an implementation elsewhere? AFAIK this is not in the PyTorch core.
st100456
Thanks for your reply. TensorFlow has this: https://www.tensorflow.org/api_docs/python/nn/candidate_sampling
st100457
We don’t have any such thing in the core. We’ll need to add them. Thanks for the pointer!
st100458
There's a Lua Torch implementation of the Noise Contrastive Estimation criterion: https://github.com/Element-Research/dpnn/blob/master/NCECriterion.lua (refs: http://mi.eng.cam.ac.uk/~xc257/papers/ICASSP2015-rnnlm-nce.pdf and https://www.cs.toronto.edu/~amnih/papers/ncelm.pdf).
st100459
@ngimel: Thanks for the link. I don't know Lua, but I will have a look at the code.
st100460
As far as I know, NCE (Noise Contrastive Estimation) is different from TensorFlow's sampled softmax; see Jozefowicz et al. (2016) or here for a comparison. EDIT: sorry, I see that the original link is to a page with a number of different softmax approximations, and NCE is one of them. I personally would be more interested in sampled softmax, as it tends to work better for me. EDIT 2: here is a TF implementation of sampled softmax and NCE; hopefully they can be implemented using existing PyTorch functions.
st100461
You may also be interested in this implementation: https://github.com/facebookresearch/adaptive-softmax. It implements an efficient softmax approximation as described in the paper "Efficient softmax approximation for GPUs" (http://arxiv.org/abs/1609.04309) and gives very good results for the LM task.
st100462
Any updates on this? It probably isn't a priority… but I secretly wish PyTorch had a big development team like TensorFlow's and could add these functionalities easily!
st100463
@windweller Adaptive softmax has been part of PyTorch since 0.4.1, see https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveLogSoftmaxWithLoss. Sampled softmax is implemented in this repo: https://github.com/rdspring1/PyTorch_GBW_LM
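A minimal sketch of the adaptive softmax module mentioned above; the vocabulary size, cutoffs, and batch size here are arbitrary illustration values:

```python
import torch
import torch.nn as nn

vocab_size, hidden = 10000, 128
adaptive_softmax = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden,
    n_classes=vocab_size,
    cutoffs=[100, 1000, 5000])  # boundaries between frequency bands

hidden_states = torch.randn(32, hidden)        # e.g. RNN outputs
targets = torch.randint(0, vocab_size, (32,))
output, loss = adaptive_softmax(hidden_states, targets)
loss.backward()
```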
st100464
I've already categorised some ways to do least squares in PyTorch in this gist. Which of these is preferred, or is there some other way that is better? Not necessarily in terms of speed; i.e., is any way preferred for numerical stability?
st100465
I get:

```
RuntimeError: smooth_l1_loss_forward is not implemented for type torch.cuda.IntTensor
-> 1687 return torch._C._nn.smooth_l1_loss(input, target, reduction)
```

Code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np


class AGLoss(nn.Module):
    def __init__(self):
        super(AGLoss, self).__init__()

    # def forward(self, gender_preds, gender_targets):
    def forward(self, age_preds, age_targets, gender_preds, gender_targets):
        """Compute loss between (age_preds, age_targets) and (gender_preds, gender_targets)."""
        age_prob = F.softmax(age_preds, dim=1).cuda()
        age_expect = torch.from_numpy(np.array([torch.argmax(age_prob[i]) + 1
                                                for i in range(0, 128)])).int().cuda()
        print(age_prob.shape)
        print(age_expect.shape)
        print(gender_preds.shape)
        print(gender_preds.shape)
        age_loss = F.smooth_l1_loss(age_expect, age_targets)
        gender_loss = F.binary_cross_entropy_with_logits(gender_preds.float().cuda(),
                                                         gender_targets.float().cuda())
        print("age_loss: %.3f | gender_loss: %.3f" % (age_loss.data[0], gender_loss.data[0]), end='|')
        # print("gender_loss: {}".format(gender_loss.data[0]))
        return age_loss + gender_loss
        # return gender_loss
```
st100466
That is because you need FloatTensors to pass into smooth L1 loss. L1 loss is for regression, and it can only be computed on continuous tensors such as FloatTensors; IntTensors or LongTensors are discrete, not continuous. So your code should be:

```python
age_loss = F.smooth_l1_loss(age_expect.float(), age_targets.float())
```
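A quick self-contained repro of the dtype requirement described above:

```python
import torch
import torch.nn.functional as F

pred = torch.randint(0, 10, (4,))
target = torch.randint(0, 10, (4,))
# F.smooth_l1_loss(pred, target)  # raises a RuntimeError on integer tensors
loss = F.smooth_l1_loss(pred.float(), target.float())  # works
print(loss)
```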
st100467
I've reimplemented a Caffe network in PyTorch. I am training with identical data splits, augmentation, loss weights, and learning parameters. Yet while I can get decent results, my network is still not nearly as good as the original Caffe model. The only way I can get somewhat close is by using learning-rate decay, which they don't use in the original paper; instead they only use Adam with a weight decay. When I do the same, my results are mediocre. What could I be missing here?
st100468
I'm not sure what the state of ATen for regular use is, but the README has some basic examples. I couldn't find any snippet showing how to do autograd with ATen. Is autograd implemented only in Python? If I were considering writing an application in C++ that required tensor operations and autograd, would it be feasible with ATen in its current state, or in the future?
st100469
Hi, no, ATen is purely a tensor library and has nothing to do with autograd. The C++ autograd is located in the libtorch C++ library. That being said, I am not sure C++-only autograd is completely supported at the moment. It will be in the near future, but I am not sure about the current state. @smth should be able to tell you.
st100470
We'll have more examples, documentation, and guidance on the usage of this API in about 2 to 3 weeks.
st100471
I want to use PCA (sklearn.decomposition) for feature dimensionality reduction. My input X is [200, 4096] and I want to reduce it to 400 dimensions:

```python
pca = PCA(n_components=400)
pca.fit(X)
newX = pca.transform(X)
```

but newX is [200, 200]. How can I solve this problem?
st100472
Hi everyone, I have been witnessing some unstable training of a large network on a P100. I managed to train the network nicely a couple of months ago but, after an update of the NVIDIA drivers to v387.26, I have been seeing unstable training in PyTorch, with losses going down much more slowly than before and validation losses showing large oscillations. Around that time I also updated PyTorch to 0.3.0, so at the beginning I thought it was related to the PyTorch version and some subtle API change. However, after many tests, I think I have isolated the problem to the change in the NVIDIA driver version, because training (with PyTorch 0.3 and the very same code) is smooth when using a Titan X with NVIDIA drivers 367.48. Here are the details of the system; the results are independent of the CUDA version (it also happens with CUDA 9.0):

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 387.26                 Driver Version: 387.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 00000000:02:00.0 Off |                  Off |
| N/A   30C    P0    24W / 250W |     29MiB / 12193MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
```

Has anyone witnessed this kind of instability with this specific driver version and PyTorch? Thanks!
st100473
I've had similar problems: no trouble training for a few months, then I updated to 0.3.0 and started training some new models which became wildly and unexpectedly unstable; the slight changes to my code made it hard to pin it on anything in particular. It disappeared for a while as I was training smaller models, but I've been running a ResNeXt-50 on ImageNet and it has had crazy, unexpected ups and downs (like randomly jumping from 20% top-1 error to 60% top-1 val error, then going back down slowly, then randomly jumping up to 99% val error). I'm downgrading to 0.2.0 and running it again; I will report back in a few days on whether it goes unstable or not.
st100474
Yeah, I just finished a run after switching back to 0.2.0, and it was completely stable, whereas the run on 0.3.0 with all identical code was wildly unstable. Not sure what to make of it; I should probably try running master at some point.
st100475
After some days, I managed to find the time to run the very same code on v0.2.0 and I don't see these instabilities. This happens not only on the P100 but also on a Titan X, where training the same network in v0.3.0 and v0.2.0 gives completely different losses and performance (with 0.3.0 being much worse). I will try to run some tests to see if I can isolate the problem. A priori I don't know what could be happening, because my networks are pretty simple ones using Conv2d, batch norms, and ReLUs.
st100476
Hey, did you find the root cause of this issue? I have a similar problem after updating PyTorch from 0.3.0 to 0.4.1: training becomes unstable on the P100. Do you recall the reason for this? Thanks
st100477
I never found the cause of this issue, but I found that increasing the value of eps in the BatchNorm2d layers made the convergence smooth again. My first guess was that it must be related to some precision problem when computing the mean and variance of the data, which produces a somewhat erratic behavior of the BatchNorm2d layer.
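For concreteness, a one-line sketch of this workaround; the layer size and the exact eps value are illustrative, not from the thread:

```python
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=64, eps=1e-3)  # default eps is 1e-5
```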
st100478
@Andres_Asensio_Ramos (quoting the eps workaround above) OK, thanks, I'll give this a try.
st100479
Yes, I did. Maybe the non-deterministic calculations are the problem but I don’t know why this only happened on the P100.
st100480
I was asking because you mentioned that changing the BN eps parameter helped. PyTorch binaries of different versions ship with different cudnn versions (later PyTorch versions ship with newer cudnn, of course), so it might be cudnn-related as well. If memory is not too big an issue, maybe you want to run with cudnn disabled and see if that helps.
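The cudnn toggle mentioned above is a one-liner:

```python
import torch
torch.backends.cudnn.enabled = False  # fall back to native kernels to rule out cudnn
```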
st100481
Hi, I am trying to predict embedded musical sequences, using an [8x100] input vector for an LSTM network. Is it possible to predict the next step in the input sequence, so that the target at time step t is the input at time step t+1? For small training samples the LSTM converges, but with a larger training set it does not. Any ideas? Thanks in advance
st100482
Hi, I am trying to initialize the weights of a conv net (built with nn.Sequential) using a custom method. With this initialization my network achieves an accuracy of ~10% (for CIFAR-10 this is equivalent to a random response, i.e. the network doesn't learn anything). Without it I get ~58% accuracy (the conv net can learn without this init). I am sure I am doing something wrong, but I don't know where the problem is. I would like to initialize the weights using the weights_init, random_weight, and zero_weight methods. Any help/advice is welcome. Thanks. The code (some of it is from Stanford's cs231n course):

```python
def random_weight(shape):
    """ Kaiming normalization: sqrt(2 / fan_in) """
    if len(shape) == 2:  # FC weight
        fan_in = shape[0]
    else:
        fan_in = np.prod(shape[1:])  # conv weight [out_channel, in_channel, kH, kW]
    w = torch.randn(shape, device=device, dtype=dtype) * np.sqrt(2. / fan_in)
    w.requires_grad = True
    return w


def zero_weight(shape):
    return torch.zeros(shape, device=device, dtype=dtype, requires_grad=True)


def weights_init(m):
    if type(m) in [nn.Conv2d, nn.Linear]:
        m.weight.data = random_weight(m.weight.data.size())
        m.bias.data = zero_weight(m.bias.data.size())


class Flatten(nn.Module):
    def forward(self, x):
        return flatten(x)


model = nn.Sequential(
    nn.Conv2d(in_channel, channel_1, (5, 5), padding=2),
    nn.ReLU(),
    nn.Conv2d(channel_1, channel_2, (3, 3), padding=1),
    nn.ReLU(),
    Flatten(),
    nn.Linear(channel_2 * 32 * 32, num_classes)
)

model.apply(weights_init)

optimizer = optim.SGD(model.parameters(), lr=learning_rate,
                      momentum=0.9, nesterov=True)
```
st100483
Solved by ptrblck in post #2.
st100484
I've tested your code on some random input and the model could fit it. Using CIFAR-10, I just set the channels to 6 and 12 in the conv layers. The model indeed struggles to learn, and the default random init seems to be better. Changing fan_in = shape[1] for linear layers makes it a bit better. I don't think you have a code bug; probably your current custom initialization just does not provide the benefit you expect.
st100485
I tried to use backward() to compute the gradient w.r.t. the parameters of a linear network. I found that if the weight matrix is sparse, say one row is all 0, the gradient obtained by backpropagation can be wrong: some gradients that shouldn't be zero come back as zero. Can anyone help me verify this? Thanks
st100486
Hi all, I am writing a CUDA kernel for my project and have been following the PyTorch CUDA extension tutorials. However, as I understand it, that approach only supports operations on contiguous tensors. How can I improve my extension to support non-contiguous tensors? Pointers to the code behind PyTorch's own support for non-contiguous tensors would also be very helpful. Thank you in advance!
st100487
Solved by albanD in post #2.
st100488
Hi, first of all, calling .contiguous() on the input will make sure you have a contiguous tensor, and it won't be noticeable for most workloads. I would recommend this solution, as it is much simpler and may actually be faster than the non-contiguous counterpart. To support non-contiguous tensors, you would need to access each element by properly taking into account the stride of each dimension, so the address of val[ind0, ind1] is data_ptr + storage_offset + ind0*stride0 + ind1*stride1. The catch is that this can make reads that were contiguous in CUDA non-contiguous and destroy your kernel's performance.
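A Python-level illustration of that address formula (the CUDA kernel would do the same arithmetic on the raw pointer):

```python
import torch

x = torch.arange(12).reshape(3, 4)
t = x.t()                  # non-contiguous view: strides are swapped
ind0, ind1 = 2, 1
flat = t.storage_offset() + ind0 * t.stride(0) + ind1 * t.stride(1)
assert t[ind0, ind1].item() == t.storage()[flat]
print(t.stride(), flat)    # (1, 4) 6
```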
st100489
I am following the example from https://github.com/hugo1840/Pytorch_tutorial/blob/master/classification_cifar10.py (training a classifier on the CIFAR-10 dataset, which has the classes 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', and 'truck', with 3-channel 32x32 colour images). This code trains successfully; I ran it and saved the model. Now I want to run a prediction on my own image, so I used the following code:

```python
import torch
from mod1 import Net
from PIL import Image
import numpy

image = Image.open("plane.png")
pix = numpy.array(image)  # convert image to numpy array
image.show()
net = Net()
net.eval()
img = torch.Tensor(pix)  # convert numpy array to tensor
net = torch.load('pytorch_Network.h5')
print(net(img))
```

But I got this error:

```
Traceback (most recent call last):
  File "/home/ihor/Tasks/try1/pytorch_load_model.py", line 14, in <module>
    print(net(img))
  File "/home/ihor/anaconda3/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ihor/Tasks/try1/pytorch.py", line 51, in forward
    x = self.pool(F.relu(self.conv1(x)))
  File "/home/ihor/anaconda3/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ihor/anaconda3/envs/tensorflow/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [6, 3, 5, 5], but got input of size [368, 860] instead
```

Sorry if this is a bit basic of a question, but for some reason I could not find much online to guide me on this. I have googled a lot and read different articles, but nothing has helped. Thanks
st100490
Your model expects a 4 dimensional input, i.e. [batch_size, channels, height, width]. Since you are loading a single image, the batch dimension is missing. You can add it with img = img.unsqueeze(0). Also, I would recommend to transform your numpy array to a tensor using torch.from_numpy. Note that you are not normalizing the image either, which will yield bad results. In the code you’re using, the data is transformed using: [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) I would suggest to do the same in your test phase.
st100491
How can I get the predicted labels for my own unlabelled test images, saved in a folder, from a trained neural network in PyTorch? The ImageFolder function only works with images categorised according to their label. So my question is: what is the way in PyTorch to predict the labels of images whose category we don't know? That is what neural networks are actually for, i.e. prediction. Thanks in advance
st100492
It may help to use the code from the Transfer Learning tutorial. This tutorial shows you how to construct an image-recognition neural network; for testing, you can use it to run inference on your unlabeled image data.
st100493
Hello, I am writing a custom module and need to define the forward and backward functions. Do I need to divide grad_weight by the minibatch size when I implement the custom backward function?
st100494
Hi, I am currently trying to implement a variant of an autoencoder as follows:

```python
class AE_net(nn.Module):
    def __init__(self, node_num_1, node_num_2, activations, hierarchical=1, hi_variant=2):
        super(AE_net, self).__init__()
        self._activations = activations
        self._hierarchical = hierarchical
        self._hi_variant = hi_variant
        encoder_list = [nn.Sequential(*[nn.Linear(node_num_1[item], node_num_1[item + 1]), nn.Tanh()])
                        for item in range(len(node_num_1) - 2)]
        self._encoder_1 = nn.Sequential(*encoder_list)
        if not hierarchical:
            self._encoder_2 = [nn.Sequential(nn.Linear(node_num_1[-2], node_num_1[-1]), nn.Tanh())]
            decoder_list = [[nn.Linear(node_num_2[item], node_num_2[item + 1]), nn.Tanh()]
                            for item in range(len(node_num_2) - 1)]
            self._decoder = nn.Sequential(*[nn.Sequential(*item) for item in decoder_list])
        else:
            self._encoder_2 = [nn.Sequential(nn.Linear(node_num_1[-2], 1), nn.Tanh())
                               for _ in range(node_num_1[-1])]
            if hi_variant == 2:
                temp_node_num_2 = node_num_2[:]
                temp_node_num_2[0] = 1
                self._decoder = [nn.Sequential(*[nn.Sequential(
                    nn.Linear(temp_node_num_2[item], temp_node_num_2[item + 1]), nn.Tanh())
                    for item in range(len(temp_node_num_2) - 1)])
                    for _ in range(node_num_2[0])]
        return

    def forward(self, x):
        temp = self._encoder_1(x)
        latent_z_split = [item_l(temp) for item_l in self._encoder_2]
        latent_z = torch.cat(latent_z_split, dim=-1)
        if not self._hierarchical:
            rec_x = self._decoder(latent_z)
        elif self._hi_variant == 2:
            temp_decoded = [self._decoder[item](latent_z_split[item])
                            for item in range(len(self._decoder))]
            decoded_list = [temp_decoded[0]]
            for item in temp_decoded[1:]:
                decoded_list.append(torch.add(decoded_list[-1], item))
            rec_x = torch.cat(decoded_list, dim=-1)
        return rec_x, latent_z
```

But when I save the model using torch.save(the_model.state_dict(), PATH), only the parameters of _encoder_1 are saved. All parameters in the other parts (lists of nn.Module objects) are not saved. Does anyone know what the reason might be, and how I can save all these parameters? Thanks!
st100495
Solved by SimonW in post #2.
st100496
Use ModuleList instead of a Python list: https://pytorch.org/docs/master/nn.html#modulelist. PyTorch can't know about things you hide in a plain list.
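A minimal sketch of the fix; the layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super(Toy, self).__init__()
        # nn.ModuleList registers each submodule; a plain [] would hide them
        self.heads = nn.ModuleList([nn.Linear(8, 1) for _ in range(4)])

    def forward(self, x):
        return torch.cat([h(x) for h in self.heads], dim=-1)

print(list(Toy().state_dict().keys()))
# ['heads.0.weight', 'heads.0.bias', 'heads.1.weight', ...]
```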
st100497
I am not sure if I understand this correctly, but when I read the code here, https://github.com/jcjohnson/pytorch-examples/blob/master/nn/two_layer_net_module.py, the training part is:

```python
for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

It seems to me that in each epoch the computation graph is reconstructed by:

```python
y_pred = model(x)
# Compute and print loss
loss = loss_fn(y_pred, y)
```

Is it doing repeated construction here, or is it actually doing something similar to Keras's "compile and run" operations, constructing the computation graph once and then only doing computation afterwards? Thank you!
st100498
Solved by ptrblck in post #2.
st100499
The computation graph is constructed in each forward pass, which allows you to create your model dynamically and use plain Python control-flow constructs like for loops.
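A small illustration of that dynamic behavior: plain Python control flow decides the graph shape on every forward pass (the random layer count is just for demonstration):

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicNet(nn.Module):
    def __init__(self):
        super(DynamicNet, self).__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        # the number of layer applications is chosen at run time,
        # so each forward pass may build a different graph
        for _ in range(random.randint(1, 3)):
            x = F.relu(self.linear(x))
        return x

net = DynamicNet()
out = net(torch.randn(2, 4))
out.sum().backward()  # autograd follows whatever graph was just built
```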