st100000
I have an integer to be returned, along with some other stuff. Should I just return the integer, or return something like torch.LongTensor(num)?
st100001
If you are returning something that carries grad, you shouldn’t convert it, as it will be used for backprop. But if that value is just for visualization, you could convert it after the backprop step.
st100002
This is data, so it is neither a gradient nor for visualization. The data will be used in the forward step of the model.
st100003
Also, if it is data for the forward step, you should keep it as a Tensor, since backprop needs grad and PyTorch’s backprop operations are defined only for Tensors.
st100004
Have you ever tried to return an integer itself? That will also be converted to a tensor… The question then is whether we want to explicitly convert that.
st100005
“I have an integer to be returned, along with some other stuff.”
Why do you want to return this integer? What does this integer contain? Is it the model output? Is the model input of type Tensor?
“Have you ever tried to return an integer itself?”
Yes. If you try to backprop with this returned int it will throw an error, as it doesn’t have grad.
“That will also be converted to a tensor”
Why are you converting back and forth between int and Tensor? Please give more details so that we can help you.
st100006
Let’s say you want to build an RNN model: you will need a padded sequence and its original length (that is where the integer comes in). I get no issue when just returning an integer, so I am not sure why you are getting the error.
st100007
“I got no issue when just returning an integer so I am not sure why you are getting the error.”
Maybe this post will make things clearer about the backprop problem I mentioned.
st100008
I did not get this issue. I think we were probably making the problem too complex, so I’ll try to rephrase it. The fundamental question here is: when making a customized dataset, what is the best way to return a sequence (with variable length) and its length? For example, it would be something like
input: ["some string", "some other string", ..., "final string"], with a max length max_len
embedding mapping: {"some": 0, "string": 1, ...}
task: return embedded sequences, with their original lengths if doing padding.
Hopefully this is clearer.
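For what it’s worth, here is a minimal sketch of one way to do this (not from the original thread; the vocabulary, field names, and batch size are made up for illustration). The dataset returns each encoded sequence together with its length, and a custom collate_fn pads each batch with torch.nn.utils.rnn.pad_sequence:

import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import Dataset, DataLoader

class SeqDataset(Dataset):
    def __init__(self, sentences, vocab):
        # map each sentence to a LongTensor of token ids
        self.seqs = [torch.tensor([vocab[w] for w in s.split()], dtype=torch.long)
                     for s in sentences]

    def __len__(self):
        return len(self.seqs)

    def __getitem__(self, idx):
        seq = self.seqs[idx]
        return seq, len(seq)   # the plain int length is collated into a tensor

def collate(batch):
    seqs, lengths = zip(*batch)
    padded = pad_sequence(seqs, batch_first=True)   # pad to the longest sequence in the batch
    return padded, torch.tensor(lengths)

vocab = {"some": 0, "string": 1, "other": 2, "final": 3}
data = ["some string", "some other string", "final string"]
loader = DataLoader(SeqDataset(data, vocab), batch_size=2, collate_fn=collate)
for padded, lengths in loader:
    print(padded.shape, lengths)

The lengths can then be passed to pack_padded_sequence before the RNN.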
st100009
According to this comment, ScatterAssign only works on CPU, but I see that it also exists in the corresponding .cu file here. Is the first comment accurate?
st100010
Hi, I have a question regarding using register_forward_hook for recurrent networks. Suppose I have a module that contains a LSTMCell as a submodule which runs for N time steps in one forward() call. I want to examine the hidden states of this LSTMCell at every time step. But, if I do register_forward_hook for that LSTMCell, I think I will only get the hidden states (h_t, c_t) at the very last time step. Am I right? If so, what is a good way to store the hidden states of the LSTMCell at every time step of a single forward pass? Should I just explicitly define variables to store them in the module definition?
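One option, sketched below and not from the original post: a forward hook fires on every call of the hooked module’s forward, so if the LSTMCell is invoked N times inside one forward() of the parent module, a hook that appends to a list should collect all N states. The attribute name lstm_cell is an assumption.

hidden_states = []

def save_states(module, inputs, output):
    # LSTMCell returns (h_t, c_t); detach so the stored copies don't keep the graph alive
    h_t, c_t = output
    hidden_states.append((h_t.detach(), c_t.detach()))

handle = model.lstm_cell.register_forward_hook(save_states)
out = model(x)     # x is whatever input the module expects; hidden_states now holds one (h_t, c_t) per step
handle.remove()

Explicitly collecting the states inside the module’s forward() works just as well and is arguably clearer.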
st100011
After I saw the method, I still don’t know how to extract features of an image from a pretrained model. Can you give a complete example? I need help.
st100012
How to extract features of an image from a trained model
To complement @apaszke reply, once you have a trained model, if you want to extract the result of an intermediate layer (say fc7 after the relu), you have a couple of possibilities. You can either reconstruct the classifier once the model was instantiated, as in the following example:
import torch
import torch.nn as nn
from torchvision import models
model = models.alexnet(pretrained=True)
# remove last fully-connected layer
new_classifier = nn.Sequential(*list(model.classifier.children())[:-…
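Since a complete example was asked for above, here is a hedged sketch that fills out both possibilities; the layer indices refer to torchvision’s AlexNet and the hook name is made up:

import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(pretrained=True)
model.eval()

# option 1: rebuild the classifier without its last Linear layer, so the
# forward pass stops at the fc7 activations
model.classifier = nn.Sequential(*list(model.classifier.children())[:-1])

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    fc7_features = model(x)          # shape (1, 4096)

# option 2: keep the model intact and grab an intermediate result with a hook
full_model = models.alexnet(pretrained=True).eval()
features = {}

def save_output(module, inputs, output):
    features['fc7_relu'] = output.detach()

full_model.classifier[5].register_forward_hook(save_output)  # index 5 is the ReLU after fc7
with torch.no_grad():
    _ = full_model(x)                # fills features['fc7_relu']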
st100013
[screenshot: error message raised during training] I’m getting this error when trying to train my network. I’ve followed this tutorial https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel to use a custom dataloader, and whenever I reference the dataloader object I get this same error. Any advice would be incredibly appreciated! Thanks
st100014
I am trying to convert torchvision vgg layers into a block of layers which can be seen in the following code. I am trying to put them in a defaultdict and use it as a block. When I try to print the model the defaultdict doesnot appear. How do I make defaultdict appear in my model ? # here feature head is vgg model taken from torchvision.models class FCN8(_FCN_BASE): def __init__(self, cfgs, feature_head, num_classes=2): super(FCN8, self).__init__() self.cfgs = cfgs self.feature_head = self.convert_features_to_blocks(feature_head) self.fc6 = nn.Conv2d(512, 4096, kernel_size=7) self.relu6 = nn.ReLU(inplace=True) self.drop6 = nn.Dropout2d() # fc7 self.fc7 = nn.Conv2d(4096, 4096, 1) self.relu7 = nn.ReLU(inplace=True) self.drop7 = nn.Dropout2d() self.score_fr = nn.Conv2d(4096, num_classes, 1) self.score_pool3 = nn.Conv2d(256, num_classes, 1) self.score_pool4 = nn.Conv2d(512, num_classes, 1) self.upscore2 = nn.ConvTranspose2d( num_classes, num_classes, 4, stride=2, bias=False) self.upscore8 = nn.ConvTranspose2d( num_classes, num_classes, 16, stride=8, bias=False) self.upscore_pool4 = nn.ConvTranspose2d( num_classes, num_classes, 4, stride=2, bias=False) self._initialize_weights() def convert_features_to_blocks(self, feature_head): blocks = defaultdict() if feature_head.__class__.__name__ in ['VGG']: features = feature_head.features start = 0 layer = 1 for i, feature in enumerate(features): if feature.__class__.__name__ == 'MaxPool2d': block = nn.Sequential(features[start: i + 1]) blocks['l_{}'.format(layer)] = block start = i + 1 layer += 1 return blocks def forward(self, x): x = self.feature_head['l_1'](x) x = self.feature_head['l_2'](x) When I print the model. It does not show the defaultdict FCN8( (fc6): Conv2d(512, 4096, kernel_size=(7, 7), stride=(1, 1)) (relu6): ReLU(inplace) (drop6): Dropout2d(p=0.5) (fc7): Conv2d(4096, 4096, kernel_size=(1, 1), stride=(1, 1)) (relu7): ReLU(inplace) (drop7): Dropout2d(p=0.5) (score_fr): Conv2d(4096, 1, kernel_size=(1, 1), stride=(1, 1)) (score_pool3): Conv2d(256, 1, kernel_size=(1, 1), stride=(1, 1)) (score_pool4): Conv2d(512, 1, kernel_size=(1, 1), stride=(1, 1)) (upscore2): ConvTranspose2d(1, 1, kernel_size=(4, 4), stride=(2, 2), bias=False) (upscore8): ConvTranspose2d(1, 1, kernel_size=(16, 16), stride=(8, 8), bias=False) (upscore_pool4): ConvTranspose2d(1, 1, kernel_size=(4, 4), stride=(2, 2), bias=False) )
st100015
I finally solved it by using ModuleDict instead of defaultdict. Here is the implementation of ModuleDict https://pytorch.org/docs/stable/_modules/torch/nn/modules/container.html#ModuleDict
st100016
I have 1 encoder and 2 decoders. I want to give a weighted loss to the encoder, meaning the decoders will be trained from their respective losses, but while flowing back into the encoder, there will be some weighting of the two decoder losses. How can I implement the weighted backpropagation for this setup?
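A minimal sketch of one way to do this (not from the thread; the module, criterion, weight, and optimizer names are all made up): scale the two decoder losses before summing, call backward on the sum, and let autograd route the weighted gradients back into the shared encoder.

z = encoder(x)
out1, out2 = decoder1(z), decoder2(z)
loss1 = criterion1(out1, target1)
loss2 = criterion2(out2, target2)

# w1 and w2 control how strongly each decoder's loss influences the shared encoder
total_loss = w1 * loss1 + w2 * loss2

optimizer.zero_grad()     # one optimizer covering encoder + both decoders
total_loss.backward()
optimizer.step()

If the decoders themselves should still be trained with their unweighted losses, a common variant is to use separate optimizers and detach z where needed, but the weighted sum above is the simplest starting point.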
st100017
Hi, I’m trying to train a network using cudnn, but at every execution I’m getting different results. I have no idea why, as I’m trying to ensure determinism in every way I know. Here is what I’m currently using to do so:
torch.manual_seed(0)
numpy.random.seed(0)
random.seed(0)
torch.backends.cudnn.deterministic = True
I also use num_workers=0 in the dataloader, and I have manually checked that the input data to the network is always the same in every execution. The parameters of the network are also initialized in the same way, but as soon as the second/third batch comes in, some parameters and outputs of the network start to change slightly, leading to different training results. Am I missing something? Thanks.
st100018
I’m also struggling with reproducibility, and I’m interested to see what the solution(s) discovered by this thread are. By the way, did you try checking with cpu, and seeing if the cpu version is more reproducible?
st100019
If you are sampling random numbers on the GPU, you might have to set the torch.cuda.manual_seed. Have a look at this example code.
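For reference, a typical full seeding block (a sketch of common practice, not taken from the linked example) looks like this:

import random
import numpy as np
import torch

seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)            # seeds every visible GPU
torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False      # disable kernel auto-tuning

Even with all of this, a few CUDA operations (e.g. some atomics-based scatter/reduction kernels) remain non-deterministic.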
st100020
I found that manual_seed sets both for me (at least on 0.4.0). https://pytorch.org/docs/stable/_modules/torch/random.html#manual_seed
st100021
hughperkins: “I found that manual seed set both for me. (At least, on 0.4.0).”
Yes, using the CPU I always get the same results. So it must be something to do with cudnn.
st100022
Idea: provide a short piece of code that is sufficient to reproduce the issue reliably.
st100023
It’s a very large network, so it is going to be very difficult for me to reproduce the issue with a short piece of code. But I’m getting closer to the issue: changing the tensor type to double instead of float using torch.set_default_tensor_type('torch.DoubleTensor') solves the issue and allows me to get deterministic results. I’m still searching for why that is happening.
st100024
I know this convo is a little old, but I’m under the impression there’s some non-determinism in a few cuDNN operations, like atomic adds on floating points? Might be the issue here: https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#reproducibility
st100025
Does no_grad still allow the batch normalization statistics to be updated? I have the following code:
def train_add_stat(model, trn_loader, optimizer, ib_max):
    model.train()
    with torch.no_grad():
        for _, (inputs, targets) in enumerate(trn_loader):
            inputs = Variable(inputs.cuda(), requires_grad=False)
            output = model(inputs)
            del inputs, targets, output
            torch.cuda.empty_cache()
    return
to update the batch norm statistics, and I am wondering whether it is correct, or whether no_grad() prevents the statistics from being updated, like it does for gradients. Thanks.
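A quick empirical check (my own addition, not from the original post): in train() mode the running statistics are updated during the forward pass regardless of the grad mode, so no_grad() does not stop them.

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
bn.train()
before = bn.running_mean.clone()
with torch.no_grad():
    bn(torch.randn(8, 3, 16, 16))
print(torch.equal(before, bn.running_mean))   # False: the stats changed under no_grad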
st100026
After upgrading from 0.4 to 0.4.1, I found that a C++ API I used to create variables is deprecated. For example, in 0.4: auto max_val = at::zeros(torch::CUDA(at::kFloat), {batch, channel, height}); How can I achieve the same thing in 0.4.1 with TensorOptions?
st100027
The equality 1/4 sum(z_i), z_i = 3(x_i + 2)^2 is incorrect; it should be 1/4 sum(z_i), z_i = 1/4 * 3(x_i + 2)^2
st100028
It’s this one. I think @chrobles misunderstood. There is no equality here; i.e., these are two assignment statements separated by a comma:
o = 1/4 sum(z_i), z_i = 3(x_i + 2)^2
st100029
I am trying to deep copy the LSTM model in my code, but it raised this error: “Only Variables created explicitly by the user (graph leaves) support the deepcopy protocol at the moment”. How can I solve this? Thanks!
st100030
If I convert a Variable to numpy first, then transform it with numpy functions, and finally convert it back to a PyTorch Variable to use as the input of the neural network, then autograd cannot propagate gradients back to the original Variable, right? If I want to take a derivative with respect to my original Variable, what should I do?
st100031
Hi, The autograd engine only supports PyTorch’s operations. You cannot use numpy operations if you want gradients to be backpropagated.
st100032
import pyforms
from pyforms import basewidget
from pyforms.controls import ControlButton
from pyforms.controls import ControlText

class MainPage(basewidget):
    …
When I run the program it raises a TypeError:
class Main_Page(basewidget):
TypeError: module() takes at most 2 arguments (3 given)
I don’t know how to fix this one. The tutorial that I follow is on this page: https://pyforms.readthedocs.io/en/latest/getting-started/the-basic/
st100033
I’m afraid you might ask in the wrong place. This is the PyTorch Forum, not the PyForms Forum. Best of luck resolving this issue
st100034
I have a dataset whose labels are from 0 to 39, and I wrap it using torch.utils.data.DataLoader. If I set num_workers to 0, everything works fine. However, if it is set to 2, then the labels batch (a 1-D byte tensor) it loads at some epoch always seems to contain values bigger than 39, namely 255. What causes this problem? Any help? (P.S. my dataset is an .h5 file.)
st100035
Hi, what do you mean by one worker? I used to run on the same machine using 2 workers with another project, and it was fine. By the way, the code works fine using 2 workers until some random epoch of training, when it outputs labels with value 255 and stops my training. I guess the code here may cause the problem. Any idea?
st100036
below is my code. from __future__ import print_function import torch.utils.data as data import os import os.path import errno import torch import json import h5py from IPython.core.debugger import Tracer debug_here = Tracer() import numpy as np import sys import json class Modelnet40_V12_Dataset(data.Dataset): def __init__(self, data_dir, image_size = 224, train=True): self.image_size = image_size self.data_dir = data_dir self.train = train file_path = os.path.join(self.data_dir, 'modelnet40.h5') self.modelnet40_data = h5py.File(file_path) if self.train: self.train_data = self.modelnet40_data['train']['data'] self.train_labels = self.modelnet40_data['train']['label'] else: self.test_data = self.modelnet40_data['test']['data'] self.test_labels = self.modelnet40_data['test']['label'] def __getitem__(self, index): if self.train: shape_12v, label = self.train_data[index], self.train_labels[index] else: shape_12v, label = self.test_data[index], self.test_labels[index] return shape_12v, label def __len__(self): if self.train: return self.train_data.shape[0] else: return self.test_data.shape[0] if __name__ == '__main__': print('test') train_dataset = Modelnet40_V12_Dataset(data_dir='path/data', train=True) print(len(train_dataset)) test_dataset = Modelnet40_V12_Dataset(data_dir='path/data', train=False) print(len(test_dataset)) train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=8, shuffle=True, num_workers=2) total = 0 # debug_here() # check when to cause labels error for epoch in range(200): print('epoch', epoch) for i, (input_v, labels) in enumerate(train_loader): total = total + labels.size(0) # labels can be 255, what is the problem?? if labels.max() > 40: debug_here() print('error') if labels.min() < 1: debug_here() print('error') labels.sub_(1) # minus 1 in place if labels.max() >= 40: debug_here() print('error') if labels.min() < 0: debug_here() print('error') print(total) can someone give some help ??
st100037
If your data is a numpy array, you can try something like this:
self.train_data = torch.from_numpy(self.modelnet40_data['train']['data'].value)
st100038
Hi, this could only partly solve my problem, because this method loads all the data into memory. However, when the dataset is big (an .h5 file), it is impractical to load all the data into memory, isn’t it? And the problem still exists.
st100039
Yes, this method loads all the data into memory. If the data is large, I guess you can do it this way (I haven’t tried this):
def __getitem__(self, index):
    if self.train:
        shape_12v, label = self.modelnet40_data['train']['data'][index], self.modelnet40_data['train']['label'][index]
I don’t know if it works; you can tell me if you try.
st100040
Hi, I tried this one, but it still does not work. I suspect this issue is related to multi-thread synchronization issues in the DataLoader class.
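For what it’s worth, a frequently reported cause of corrupted batches with HDF5 datasets is sharing an h5py.File handle that was opened in __init__ across DataLoader worker processes. A hedged sketch of the usual workaround is to open the file lazily inside __getitem__, once per worker (names are illustrative):

import h5py
import torch.utils.data as data

class Modelnet40LazyH5(data.Dataset):
    def __init__(self, h5_path, train=True):
        self.h5_path = h5_path
        self.split = 'train' if train else 'test'
        self.db = None                       # opened lazily inside each worker process
        with h5py.File(h5_path, 'r') as db:  # only read the length up front
            self.length = db[self.split]['label'].shape[0]

    def __getitem__(self, index):
        if self.db is None:
            self.db = h5py.File(self.h5_path, 'r')
        split = self.db[self.split]
        return split['data'][index], split['label'][index]

    def __len__(self):
        return self.length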
st100041
I have been seeing similar problems with DataLoader when num_workers is greater than 1. My per-sample label is a [1, 0, …, 0] array. When loading a batch of samples, most of the labels are OK, but I could get something like [70, 250, …, 90] in one row. This problem does not exist when num_workers=1. Any solution or suggestions?
st100042
I have also met similar problems. Can anyone figure out how to solve it? Thanks a lot!
st100043
This is always the case if you are using Windows (on my computer). Try it from the command line, not from Jupyter.
st100044
Thanks. But I use PyTorch on Linux (Arch Linux), and the version of PyTorch is 0.2-post2.
st100045
Can you share your full source code so that I can try it and see that it works on my system?
st100046
I have the same problem! How did you solve it? Besides, there seem to be few answers about this.
st100047
This might be related to those two issues. What version of PyTorch are you using? Perhaps updating to 0.4.1 might help.
github.com/pytorch/pytorch Issue: Always get error "ConnectionResetError: [Errno 104] Connection reset by peer"
github.com/pytorch/pytorch Issue: Multithreaded DataLoader sometimes hits "Connection reset by peer"
st100048
Could PyTorch print out a list of the parameters in a computational graph if the parameters are not in a module? For example, print the list of parameters leading to d in the following computational graph:
import torch
from torch.autograd import Variable
a = Variable(torch.rand(1, 4), requires_grad=True)
b = a**2
c = b*2
d = c.mean()
d.backward()
st100049
Hi, No such function exists at the moment. I guess you could traverse the graph using d.grad_fn and .next_functions, finding all the AccumulateGrad functions and getting their .variable attribute. This would give you all the tensors in which gradients will be accumulated (possibly 0-valued) if you call backward on d. Why do you need such a function? Why don’t you already know which tensors are used in your computations?
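A rough sketch of that traversal (my own illustration, not from the reply above):

def graph_leaves(output):
    # walk backwards from output.grad_fn and collect the tensors held
    # by AccumulateGrad nodes, i.e. the leaves that will receive gradients
    leaves, seen, stack = [], set(), [output.grad_fn]
    while stack:
        fn = stack.pop()
        if fn is None or fn in seen:
            continue
        seen.add(fn)
        if hasattr(fn, 'variable'):            # true for AccumulateGrad nodes
            leaves.append(fn.variable)
        stack.extend(next_fn for next_fn, _ in fn.next_functions)
    return leaves

print(graph_leaves(d))   # for the example above this returns [a]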
st100050
class DeephomographyDataset(Dataset): ''' DeepHomography Dataset ''' def __init__(self,hdf5file,imgs_key='images',labels_key='labels', transform=None): ''' :argument :param hdf5file: the hdf5 file including the images and the label. :param transform (callable, optional): Optional transform to be applied on a sample ''' self.db=h5py.File(hdf5file,'r') # store the images and the labels keys=list(self.db.keys()) if imgs_key not in keys: raise(' the ims_key should not be {}, should be one of {}' .format(imgs_key,keys)) if labels_key not in keys: raise(' the labels_key should not be {}, should be one of {}' .format(labels_key,keys)) self.imgs_key=imgs_key self.labels_key=labels_key self.transform=transform def __len__(self): return len(self.db[self.labels_key]) def __getitem__(self, idx): image=self.db[self.imgs_key][idx] label=self.db[self.labels_key][idx] sample={'images':image,'labels':label} if self.transform: sample=self.transform(sample) return sample batchSize=30 trainDataset=DeephomographyDataset(pth_config.TRAIN_H5_FILE, transform=transforms.Lambda( lambda x: toTensor(x))) traindataloader=DataLoader(trainDataset,batch_size=batchSize,shuffle=True, num_workers=20) # samplesss=trainDataset[0] for i, sample in enumerate(trainDataset): .... some errors are repoted as follows: Traceback (most recent call last): File "/home/dler/pytorch_codes_from_pc4/DeepHomography/my_Model/training_deephomography.py", line 138, in <module> epoch_num) File "/home/dler/pytorch_codes_from_pc4/DeepHomography/my_Model/training_deephomography.py", line 49, in train_model for i, sample in enumerate(dataloaders[phase]): File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 322, in __next__ return self._process_next_batch(batch) File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 357, in _process_next_batch raise batch.exc_type(batch.exc_msg) KeyError: 'Traceback (most recent call last):\n File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 106, in _worker_loop\n samples = collate_fn([dataset[i] for i in batch_indices])\n File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 106, in <listcomp>\n samples = collate_fn([dataset[i] for i in batch_indices])\n File "/home/dler/pytorch_codes_from_pc4/DeepHomography/preprocessing/generate_dataset.py", line 34, in __getitem__\n label=self.db[self.labels_key][idx]\n File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper\n File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper\n File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/site-packages/h5py/_hl/group.py", line 167, in __getitem__\n oid = h5o.open(self.id, self._e(name), lapl=self._lapl)\n File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper\n File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper\n File "h5py/h5o.pyx", line 190, in h5py.h5o.open\nKeyError: \'Unable to open object (bad object header version number)\'\n' Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f09bc58dac8>> Traceback (most recent call last): File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 399, in __del__ self._shutdown_workers() File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 378, in 
_shutdown_workers self.worker_result_queue.get() File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/multiprocessing/queues.py", line 337, in get return _ForkingPickler.loads(res) File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 151, in rebuild_storage_fd fd = df.detach() File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach with _resource_sharer.get_connection(self._id) as conn: File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection c = Client(address, authkey=process.current_process().authkey) File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/multiprocessing/connection.py", line 487, in Client c = SocketClient(address) File "/home/dler/anaconda3/envs/pytorch4/lib/python3.6/multiprocessing/connection.py", line 614, in SocketClient s.connect(address) ConnectionRefusedError: [Errno 111] Connection refused Process finished with exit code 1 Interestingly, when the code samplesss=trainDataset[0]; ` uncomment, there is no error reported! But, the dataset traindataloader returned by the DataLoader is wrong, namely some data is not the raw data. So, I know how the num_wokers in DataLoader affect the code? in addition, my computer have 32 cpu cores. I set num_workers to 20. My OS is unbuntu 16.0, the version of pytorch is 0.4! python is 3.6
st100051
Take a look at this example.
Model state saving code (github.com pytorch/examples/blob/master/imagenet/main.py#L165-L171):
save_checkpoint({
    'epoch': epoch + 1,
    'arch': args.arch,
    'state_dict': model.state_dict(),
    'best_prec1': best_prec1,
    'optimizer' : optimizer.state_dict(),
}, is_best)
Model resume code (github.com pytorch/examples/blob/master/imagenet/main.py#L98-L107):
if args.resume:
    if os.path.isfile(args.resume):
        print("=> loading checkpoint '{}'".format(args.resume))
        checkpoint = torch.load(args.resume)
        args.start_epoch = checkpoint['epoch']
        best_prec1 = checkpoint['best_prec1']
        model.load_state_dict(checkpoint['state_dict'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        print("=> loaded checkpoint '{}' (epoch {})"
              .format(args.resume, checkpoint['epoch']))
st100052
If you run your experiments inside a Docker container you may find this link interesting: github.com docker/cli/blob/master/experimental/checkpoint-restore.md
# Docker Checkpoint & Restore
Checkpoint & Restore is a new feature that allows you to freeze a running container by checkpointing it, which turns its state into a collection of files on disk. Later, the container can be restored from the point it was frozen. This is accomplished using a tool called [CRIU](http://criu.org), which is an external dependency of this feature. A good overview of the history of checkpoint and restore in Docker is available in this [Kubernetes blog post](http://blog.kubernetes.io/2015/07/how-did-quake-demo-from-dockercon-work.html).
## Installing CRIU
If you use a Debian system, you can add the CRIU PPA and install with apt-get [from the criu launchpad](https://launchpad.net/~criu/+archive/ubuntu/ppa). Alternatively, you can [build CRIU from source](http://criu.org/Installation). You need at least version 2.0 of CRIU to run checkpoint/restore in Docker.
st100053
I am trying to search for “what has to be done to add an Operation…”. Where is this file? https://github.com/pytorch/pytorch/blob/v0.4.1/aten/src/ATen/function_wrapper.py
st100054
It might be an easy question, but I am not familiar with the MaxPool layer. When I use an Embedding layer it increases the dimension of the tensor:
embedding = nn.Embedding(10, 5)
input = torch.LongTensor([[[1,2,4,5],[4,3,2,9]],[[1,2,4,5],[4,3,2,9]]])
output = embedding(input)
input.size()
torch.Size([2, 2, 4])
output.size()
torch.Size([2, 2, 4, 5])
I want to add a MaxPool2d (or any other) layer to convert my output to torch.Size([2, 2, 1, 5]). Let’s say my output tensor is:
tensor([[[[7, 0, 0, 3, 6], [6, 7, 5, 2, 0], [2, 1, 9, 1, 9], [1, 5, 8, 6, 1]], [[4, 7, 2, 4, 5], [4, 4, 2, 6, 2], [9, 1, 0, 3, 5], [5, 7, 6, 5, 8]]], [[[9, 6, 0, 6, 0], [8, 9, 7, 0, 2], [4, 7, 7, 4, 5], [7, 9, 1, 0, 8]], [[6, 4, 5, 7, 6], [2, 2, 4, 9, 4], [7, 7, 9, 0, 0], [6, 8, 8, 4, 1]]]])
I want to convert it to torch.Size([2, 2, 1, 5]):
tensor([[[[7, 7, 9, 6, 9]], [[9, 7, 6, 6, 8]]], [[[9, 9, 7, 6, 8]], [[7, 8, 9, 9, 6]]]])
So I can then convert it to torch.Size([2, 2, 5]).
st100056
I have flagged the topic to remove, but I found the answer:
m = nn.MaxPool2d((4, 1))
output = m(input)
st100057
It seems your GPU supports compute capability 3.0 based on this source, which isn’t shipped in the pre-built binaries anymore. You could compile from source using these instructions to use your GPU.
st100058
Hello, PyTorch doesn’t accept string classes, so I converted my labels to int and feed my network as follows. However, I need to keep track of my real labels (string names): at the end of training I need to get the performance on each example and map the int label back to the string label. How can I do that?
with open('/home/train.sav', 'rb') as handle:
    train_data = pickle.load(handle)
train_loader = torch.utils.data.DataLoader(
    train_data, batch_size=args.batch_size, shuffle=(train_sampler is None),
    num_workers=args.workers, pin_memory=True, sampler=train_sampler)
for i, (input, target) in enumerate(train_loader):
    target = target.cuda(async=True)
    input_var = torch.autograd.Variable(input)
    target_var = torch.autograd.Variable(target)
Thank you
st100059
Would a simple dict work?
idx_to_label = {
    0: 'class0',
    1: 'class1',
    2: 'class2'
}
preds = torch.argmax(torch.randn(10, 2), 1)
for pred in preds:
    print(idx_to_label[pred.item()])
Or what do you want to do with these labels?
st100060
Hey! I was trying to get a Variational Autoencoder to work recently, but to no avail. The classic pattern was that the loss would quickly decrease to a small value at the beginning and just stay there. Far from optimal, the network would not generate anything useful, only grey images with a slightly stronger intensity in the center. I could not spot my error, until I finally noticed that in the example size_average was set to False in the loss function for the reconstruction term:
def loss_function(recon_x, x, mu, logvar):
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), size_average=False)
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return BCE + KLD
I tried it out, and magically, results improved. I do not understand why that is, especially since this part of the loss deals with the reconstruction, and a simple autoencoder without this flag works just fine. Could someone please explain this? Thanks a lot!
st100061
I think the reason is the nature of the task (mind you, this is going to be non-math on my part). Basically, in a VAE you are asking “how closely did each reconstruction match?”, not “on average, how well did all my reconstructions match?” So you want the amount of error from a batch, not the average error from a batch.
st100062
Well, I’m not completely sure of that because a simple auto-encoder works just fine without this flag. However, when you take into account the variational inference, it just doesn’t work anymore without this specific instruction. That’s what puzzled me.
st100063
I’ve bumped into the same thing. It really bothered me. I should be able to counteract this mean over the batch using a larger learning rate, but it didn’t seem to work. Setting too high an LR for Adam just blows up the training. Setting it to roughly the highest value before the training blows up gives just a blob of white pixels in the center. BUT then I found this (https://github.com/pytorch/examples/issues/234): you can read there that binary cross entropy takes the mean over all dimensions (batch and spatial), so the reconstruction loss gets divided by a much larger value than the KL loss, which is divided only by the batch size. This seems to be the problem, because the KL loss averaged just over the batch is much larger than the reconstruction loss averaged over the batch AND spatial dimensions. That’s why you just get a blob: the latent distribution overfits to the standard Gaussian and doesn’t carry any information to the decoder. If you disable averaging and then divide by the batch size, everything works well with a higher LR, as expected. Hope I helped!
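To make that concrete, a sketch of the rescaled loss (based on the fix described above; the 784 and the variable names follow the earlier snippet):

def loss_function(recon_x, x, mu, logvar):
    # sum over batch and pixels instead of averaging, then divide by the
    # batch size so the reconstruction and KL terms are on the same scale
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 784), size_average=False)
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (BCE + KLD) / x.size(0)

In newer PyTorch versions, size_average=False is spelled reduction='sum'.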
st100064
Consider the following extension of torch.autograd.Function:
class MorphingLayer(Function):
    @staticmethod
    def forward(ctx, input, idx):
        ctx.save_for_backward(input, idx)
        # implementation of forward pass
        return output1, output2
Assume that the gradient w.r.t. input can be obtained using autograd, but the gradient w.r.t. idx must be implemented manually. How would I achieve that in the implementation of the backward pass, i.e., how can I access input.grad during the backward pass?
    @staticmethod
    def backward(ctx, gradIdx1, gradIdx2):
        input, idx = ctx.saved_tensors
        # compute gradIdx manually
        gradIdx = (manually computed gradient)
        # obtain gradInput from autograd
        gradInput = (obtain input.grad)
        return gradInput, gradIdx
st100065
This should give you a start for the automatically calculated gradient bit:
class MyFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, a_):
        with torch.enable_grad():
            a = a_.detach().requires_grad_()
            res = a**2
        ctx.save_for_backward(a, res)
        return res.detach()

    @staticmethod
    def backward(ctx, grad_out):
        a, res = ctx.saved_tensors
        gr, = torch.autograd.grad(res, a, grad_out, retain_graph=True)
        return gr

x = torch.randn(2, 2, requires_grad=True, dtype=torch.double)
print(torch.autograd.gradcheck(MyFn.apply, (x,)))
I must admit I have no idea why you’d need to specify “retain_graph”; it’s just the empirical observation that without it, it doesn’t run.
Best regards
Thomas
st100066
That’s extremely helpful, thanks! Can someone explain what’s going on under the hood (why detach? why retain_graph?) and if it’s safe to combine autograd and a custom backward pass in this way. Does it break other functionality?
st100067
@tom, I think that is really a nice approach. I’d like to explain it a bit: In PyTorch autograd usually automatically computes the gradients of all operations, as long as requires_grad is set to True. If you however need operations that are not natively supported by PyTorch’s autograd, you can manually define the function and how to compute its gradients. Therefore autograd is by default turned off in forward and backward of subclasses of torch.autograd.Function. To turn it back on manually, tom used with torch.enable_grad(): Within this block gradients are 1) automatically calculated and 2) passed backwards through the graph. You want 1), but I think 2) is not a good idea within forward, because you are expected to explicitly pass the gradient backwards in backward. To prevent the gradient from automatically flowing backward, you need to detach the input from the graph (a_.detach()). I guess from then on you could let the gradient be calculated implicitly by returning res (instead of res.detach()) and get the gradient by gr = a.grad (perhaps you would have to set retain_graph=True before calculating res), but the more explicit way is to also detach the result and calculate the gradient explicitly with gr, = torch.autograd.grad(res, a, grad_out, retain_graph=True) Concerning retain_graph I am a bit puzzled, too. Apart from that, I think this is a good approach. I cannot think of big impacts on other aspects, so this should be safe. The only thing I can think of is, that it forces the calculation of gradients (within this function) even when running the defined function in a with torch.no_grad(): block. If the computation of the gradients of your function are complex or require to store lots of intermediate gradients, that might cause GPU memory or runtime issues. One thing you could do, is to check if the input (a_) requires gradients and use torch.no_grad() instead of torch.enable_grad() or skip the requires_grad_() part to prevent the calculation of some unnecessary gradients.
st100068
Nice explanation! @Florian_1990
Florian_1990: “One thing you could do, is to check if the input (a_) requires gradients and use torch.no_grad() instead of torch.enable_grad() or skip the requires_grad_() part to prevent the calculation of some unnecessary gradients.”
It might be easiest to have a wrapper (you wouldn’t use MyFn.apply as a name, anyway) that checks whether torch.is_grad_enabled() and whether the inputs need the gradient, and pass that as a boolean argument to the function. At least that’s what I usually do for extensions.
Best regards
Thomas
st100069
Hi, I really look forward to 1.0 with the tracing and JIT compiling capabilities. To check it out I am using pytorch-nightly from anaconda right now, and in Python torch.jit.trace works, and I can save and load the saved Script/Traced Modules. In the docs (https://pytorch.org/docs/master/jit.html#torch.jit.ScriptModule.save) it says that I could load it into a standalone C++ API with torch::jit::load(filename), but I don’t find any examples of how to use the C++ API, i.e. how to include it in a project, etc. I assume it will be https://github.com/pytorch/pytorch/tree/master/torch/csrc/api that I have to use, but I don’t see any build instructions there. Thanks for any help
st100070
Hi Johannes, I am waiting for the same API and documentation to become available. I found a recent tutorial: https://pytorch.org/tutorials/advanced/cpp_export.html, it might help until 1.0 is released.
st100071
Hi, I am a newbie in PyTorch. I am trying to implement a multi-label classifier using MultiLabelMarginLoss() as the loss function. INPUT_DIM = 74255 NUM_OF_CATEGORIES = 20 NUM_OF_HIDDEN_NODE = 64 class HDNet(nn.Module): def _init_(self): super(HDNet, self)._init_() self.hidden_1 = nn.Linear(INPUT_DIM, NUM_OF_HIDDEN_NODE) self.out_1 = nn.Linear(NUM_OF_HIDDEN_NODE, NUM_OF_CATEGORIES) def forward(self, x): x = F.relu(self.hidden_1(x)) y_out = F.softmax(self.out_1(x)) return y_out model = HDNet() loss_function = nn.MultiLabelMarginLoss() optimizer = optim.SGD(model.parameters(), lr=0.01) for input_vec, y_true in training_data: model.zero_grad() input_vec = torch.from_numpy(input_vec).float() input_vec = autograd.Variable(input_vec.view(1,-1)) y_true = autograd.Variable(torch.LongTensor([y_true])) y_pred = model(input_vec) loss = loss_function(y_pred, y_true) loss.backward() optimizer.step() y_true is a 1X2 variable Variable containing: 4 8 [torch.LongTensor of size 1x2] y_pred is a 1 X 20 variable Variable containing: Columns 0 to 9 1.00000e-02 * 4.9148 4.8022 4.5980 4.6124 5.1000 5.4152 4.8912 5.4974 4.4172 5.2499 Columns 10 to 19 1.00000e-02 * 5.0587 4.4383 4.9150 5.5286 5.0351 5.4916 5.1233 4.4147 4.8744 5.6221 [torch.FloatTensor of size 1x20] I’m getting the following error: Traceback (most recent call last): File “training_hierarchy_v3.py”, line 139, in loss = loss_function(y_pred, y_true) File “/Library/Python/2.7/site-packages/torch/nn/modules/module.py”, line 224, in call result = self.forward(*input, **kwargs) File “/Library/Python/2.7/site-packages/torch/nn/modules/loss.py”, line 384, in forward return F.multilabel_margin_loss(input, target, size_average=self.size_average) File “/Library/Python/2.7/site-packages/torch/nn/functional.py”, line 831, in multilabel_margin_loss return _functions.thnn.MultiLabelMarginLoss.apply(input, target, size_average) File “/Library/Python/2.7/site-packages/torch/nn/_functions/thnn/auto.py”, line 47, in forward output, *ctx.additional_args) RuntimeError: invalid argument 3: inconsistent target size at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/THNN/generic/MultiLabelMarginCriterion.c:35
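For reference (my own note, not part of the original question): MultiLabelMarginLoss expects the target to have the same shape as the input, i.e. (N, C), with each row listing the positive class indices and padded with -1 after the last valid index, which is why the 1x2 target above raises the "inconsistent target size" error. A hedged sketch of building such a target for the example above:

# one sample with positive classes 4 and 8, padded with -1 up to NUM_OF_CATEGORIES entries
y_true_padded = torch.LongTensor([[4, 8] + [-1] * (NUM_OF_CATEGORIES - 2)])   # shape (1, 20)

loss = loss_function(y_pred, autograd.Variable(y_true_padded))   # shapes now match: (1, 20) and (1, 20)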
st100072
Hi, I’m trying to define the Dataset class for our EHR data to be able to use the DataLoader, but the data comes in the format of a list of lists of lists for a single subject, see below. Basically the entire thing is the medical history of a single patient. Each second-level list of lists, e.g. [[0], [7, 364, 8, 30, …, 11, 596]], represents a single record in the patient’s history, where [0] is a visiting-time indicator and [7, 364, 8, 30, …, 11, 596] corresponds to the medical codes for that visit. So there are inconsistent dimensions with regard to the length of the visit codes; for this patient, the visit codes have lengths 16, 16, 18 for the 1st, 2nd and 3rd visits. Each patient might also have a varying number of historical records: this person has 3 records, but another might have 10, 20 or 34. I’m just at a loss about how to process and prepare this data in a format that the DataLoader can handle for models later. Any hints or suggestions would be appreciated!
st100073
How would you like to get or process the data further? Using your own collate_fn for your DataLoader you can just return the medical history as you’ve saved it:
# Create data
data = [[[random.randint(0, 100)], torch.randint(0, 500, (random.randint(3, 10),)).tolist()]
        for _ in range(random.randint(20, 30))]

class MyDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __getitem__(self, index):
        data = self.data[index]
        return data

    def __len__(self):
        return len(self.data)

def my_collate(batch):
    return list(batch)

dataset = MyDataset(data)
loader = DataLoader(
    dataset,
    batch_size=10,
    shuffle=False,
    collate_fn=my_collate
)

x = next(iter(loader))
print(x)
I’m not sure the code is that useful for you, as now you are basically getting the data as a nested list. Do you want to create one tensor for each patient and feed it to the model? Or would an approach from NLP be more appropriate, where we should use padding for the shorter medical recordings?
st100074
Thanks so much! It helps a lot, as currently our model just takes the nested list and go from there. But just curious if I want to take a look at the NLP approach and do the padding (2D) for both code length of a single visit and number of visits for each patient, would you mind pointing me to relevant materials (links) ? Thank you <3
st100075
I think pad_sequence could be a good starter, but I’m not that familiar with NLP.
st100076
I have a data set of ~1k images, each about 60MB on disk. I want to train a UNet-like model with patches of the images, but I am unsure about the best way to construct a training Dataset to feed the model. The images are too large to fit all of them in RAM at the same time, and are too slow to load to have each training sample come from a newly-loaded image. I am hesitant to have an entire batch come from the same image, since I’m worried that the similarities between samples of the same image will have a negative impact on training. What’s the best way to design a Dataset in this situation? I’m currently thinking that each Dataset will have a short list of images stored in RAM, will pull a random patch from the Nth one, with a low percent chance to replace the image on access. Amortized out, that should be quick, but seems over-complicated, hence asking here.
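That pooling idea can work; here is a rough sketch of what such a dataset might look like (entirely illustrative, not from the thread: the loader function, pool size, patch size, swap probability, and epoch length are assumptions, and with num_workers > 0 each worker keeps its own pool):

import random
from torch.utils.data import Dataset

class RandomPatchDataset(Dataset):
    """Keep a small pool of decoded images in RAM, sample random patches
    from them, and occasionally swap a pooled image for a fresh one."""
    def __init__(self, paths, load_fn, pool_size=8, patch=256, swap_prob=0.05):
        self.paths = paths
        self.load_fn = load_fn                  # path -> CxHxW tensor
        self.patch = patch
        self.swap_prob = swap_prob
        self.pool = [load_fn(p) for p in random.sample(paths, pool_size)]

    def __len__(self):
        return 10000                            # nominal number of patches per epoch

    def __getitem__(self, idx):
        slot = idx % len(self.pool)
        if random.random() < self.swap_prob:    # amortized, occasional reload
            self.pool[slot] = self.load_fn(random.choice(self.paths))
        img = self.pool[slot]
        h, w = img.shape[-2], img.shape[-1]
        top = random.randint(0, h - self.patch)
        left = random.randint(0, w - self.patch)
        return img[..., top:top + self.patch, left:left + self.patch]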
st100077
I have spent the last 2 days trying to get a piece of code running and am at my wits’ end. Someone please help me. I am using an nvidia GTX 780M, nvcc -V 9.0 and nvidia-smi driver version 396.44. Running my code in an environment with PyTorch installed via the command conda install pytorch torchvision -c pytorch returns the message:
UserWarning: Found GPU0 GeForce GTX 780M which is of cuda capability 3.0. PyTorch no longer supports this GPU because it is too old.
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1535490206202/work/aten/src/THC/THCTensorCopy.cu line=206 error=48 : no kernel image is available for execution on the device
I have tried installing from source but get the same problem. Why does it still give me the same problem if I install from source?
st100078
I tried running this snippet import torch print(torch.rand(3,3).cuda()) which gave me this error --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-23-78a66e8a8408> in <module>() 1 import torch ----> 2 print(torch.rand(3,3).cuda()) ~/anaconda3/envs/social/lib/python3.5/site-packages/torch/_utils.py in _cuda(self, device, async) 63 else: 64 new_type = getattr(torch.cuda, self.__class__.__name__) ---> 65 return new_type(self.size()).copy_(self, async) 66 67 ~/anaconda3/envs/social/lib/python3.5/site-packages/torch/cuda/__init__.py in __new__(cls, *args, **kwargs) 270 271 def __new__(cls, *args, **kwargs): --> 272 _lazy_init() 273 # We need this method only for lazy init, so we can remove it 274 del _CudaBase.__new__ ~/anaconda3/envs/social/lib/python3.5/site-packages/torch/cuda/__init__.py in _lazy_init() 82 raise RuntimeError( 83 "Cannot re-initialize CUDA in forked subprocess. " + msg) ---> 84 _check_driver() 85 torch._C._cuda_init() 86 torch._C._cuda_sparse_init() ~/anaconda3/envs/social/lib/python3.5/site-packages/torch/cuda/__init__.py in _check_driver() 49 def _check_driver(): 50 if not hasattr(torch._C, '_cuda_isDriverSufficient'): ---> 51 raise AssertionError("Torch not compiled with CUDA enabled") 52 if not torch._C._cuda_isDriverSufficient(): 53 if torch._C._cuda_getDriverVersion() == 0: AssertionError: Torch not compiled with CUDA enabled I installed PyTorch using this command conda install pytorch=0.1.12 cuda75 -c pytorch This is the output of nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2015 NVIDIA Corporation Built on Tue_Aug_11_14:27:32_CDT_2015 Cuda compilation tools, release 7.5, V7.5.17 This is the output of nvidia-smi Tue Sep 18 13:19:04 2018 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 384.130 Driver Version: 384.130 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 780M Off | 00000000:01:00.0 N/A | N/A | | N/A 49C P8 N/A / N/A | 294MiB / 4036MiB | N/A Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 Not Supported | +-----------------------------------------------------------------------------+
st100079
Hi, Is there a particular reason why you want to use such an old version? I am not sure the default package was compiled with CUDA support at that time.
st100080
Hello, thanks for the help. My GPU only supports CUDA 7.5. I am okay with using the latest version, but I am not sure what it is. I went to https://pytorch.org/previous-versions/ and thought that 0.1.12 was the latest one. What is the latest version that can be used with CUDA 7.5?
st100081
I am currently looking for the optimal learning rate when training a GAN. Therefore I generated a generator and a discriminator model and copied both models tree times to evaluate four different learning rates. For copying I tried both copy.deepcopy(module) and module_copy = copy.deepcopy(module) module_copy.load_state_dict(module.state_dict()) However both approaches yield strange results when training: The results highly indicate that training the second GAN does not start from scratch but continues where the training of the first model ended. The third GAN continues where the second ended etc. I checked that the models do not share parameters. After training, different model’s parameters have different values. I do not have a clue what the problem is. These are the modules of the generator, the discriminator is very similar: 0: MFCCGenerator( (model_before): Sequential( (0): CombinedLinear( (layers): Sequential( (0): Linear(in_features=174, out_features=50, bias=True) (1): LayerNorm(torch.Size([50]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) (1): AlwaysDropout(p=0.5) (2): CombinedLinear( (layers): Sequential( (0): Linear(in_features=50, out_features=15, bias=True) (1): LayerNorm(torch.Size([15]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) ) (model_after): Sequential( (0): CombinedLinear( (layers): Sequential( (0): Linear(in_features=23, out_features=20, bias=True) (1): LayerNorm(torch.Size([20]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) (1): AlwaysDropout(p=0.5) (2): CombinedLinear( (layers): Sequential( (0): Linear(in_features=20, out_features=16, bias=True) (1): LayerNorm(torch.Size([16]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) (3): Linear(in_features=16, out_features=13, bias=True) ) ) 1: Sequential( (0): CombinedLinear( (layers): Sequential( (0): Linear(in_features=174, out_features=50, bias=True) (1): LayerNorm(torch.Size([50]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) (1): AlwaysDropout(p=0.5) (2): CombinedLinear( (layers): Sequential( (0): Linear(in_features=50, out_features=15, bias=True) (1): LayerNorm(torch.Size([15]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) ) 2: CombinedLinear( (layers): Sequential( (0): Linear(in_features=174, out_features=50, bias=True) (1): LayerNorm(torch.Size([50]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) 3: Sequential( (0): Linear(in_features=174, out_features=50, bias=True) (1): LayerNorm(torch.Size([50]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) 4: Linear(in_features=174, out_features=50, bias=True) 5: LayerNorm(torch.Size([50]), eps=1e-05, elementwise_affine=True) 6: LeakyReLU(negative_slope=0.01) 7: AlwaysDropout(p=0.5) 8: CombinedLinear( (layers): Sequential( (0): Linear(in_features=50, out_features=15, bias=True) (1): LayerNorm(torch.Size([15]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) 9: Sequential( (0): Linear(in_features=50, out_features=15, bias=True) (1): LayerNorm(torch.Size([15]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) 10: Linear(in_features=50, out_features=15, bias=True) 11: LayerNorm(torch.Size([15]), eps=1e-05, elementwise_affine=True) 12: LeakyReLU(negative_slope=0.01) 13: Sequential( (0): CombinedLinear( (layers): Sequential( (0): Linear(in_features=23, out_features=20, bias=True) (1): 
LayerNorm(torch.Size([20]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) (1): AlwaysDropout(p=0.5) (2): CombinedLinear( (layers): Sequential( (0): Linear(in_features=20, out_features=16, bias=True) (1): LayerNorm(torch.Size([16]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) (3): Linear(in_features=16, out_features=13, bias=True) ) 14: CombinedLinear( (layers): Sequential( (0): Linear(in_features=23, out_features=20, bias=True) (1): LayerNorm(torch.Size([20]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) 15: Sequential( (0): Linear(in_features=23, out_features=20, bias=True) (1): LayerNorm(torch.Size([20]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) 16: Linear(in_features=23, out_features=20, bias=True) 17: LayerNorm(torch.Size([20]), eps=1e-05, elementwise_affine=True) 18: LeakyReLU(negative_slope=0.01) 19: AlwaysDropout(p=0.5) 20: CombinedLinear( (layers): Sequential( (0): Linear(in_features=20, out_features=16, bias=True) (1): LayerNorm(torch.Size([16]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) 21: Sequential( (0): Linear(in_features=20, out_features=16, bias=True) (1): LayerNorm(torch.Size([16]), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) 22: Linear(in_features=20, out_features=16, bias=True) 23: LayerNorm(torch.Size([16]), eps=1e-05, elementwise_affine=True) 24: LeakyReLU(negative_slope=0.01) 25: Linear(in_features=16, out_features=13, bias=True) Thanks for every idea you might have!
st100082
How are the results indicating that the training is being continued? If you checked, that the models have different parameters, could you just pass a tensor with all ones through the trained model and the randomly initialized model and compare the outputs? I assume both the generator and discriminator are copied? Do you also create new optimizers for each GAN?
st100083
ptrblck: How are the results indicating that the training is being continued? I am looking at estimates of Wasserstein distances between different classes of training data and generated data. For copies of the same generator, those estimates should be almost identical before training. However, the distances at the beginning of the second training of a generator look nearly identical as the distance estimates after training the first generator with 5000 batches of data. ptrblck: If you checked, that the models have different parameters, could you just pass a tensor with all ones through the trained model and the randomly initialized model and compare the outputs? At the moment I do not have an initialized model in memory. I could run this experiment later. What I can do is to compare the results of the four trained models still in memory. Those are (averaged over 1000 outputs due to dropout in both evaluation and training mode): tensor([ 1.4081, 0.6046, -0.5007, 0.1663, -0.5247, -0.4271, -0.0108, -0.2019, 0.0682, -0.9449, 0.0687, -0.0462, -0.0376], device='cuda:0') tensor([ 1.2530, 0.3588, -0.2415, 0.0276, 0.1169, -0.5837, -0.2653, -0.7568, 0.1672, -0.4251, -0.1818, -0.0518, -0.5182], device='cuda:0') tensor([ 1.3222, -0.5203, -0.0297, 0.7092, 0.2500, -0.6458, -0.5713, 0.1600, -0.4600, -0.9698, -0.4975, -0.4153, -0.2564], device='cuda:0') tensor([ 1.5271, 0.0421, -0.4180, 0.3413, -0.4186, -0.0325, -0.3692, 0.4058, 0.1961, -0.9888, -0.1224, 0.5514, 0.0256], device='cuda:0') ptrblck: I assume both the generator and discriminator are copied? Yes. ptrblck: Do you also create new optimizers for each GAN? Yes. I also make sure that the gradients towards the inputs are not saved over training batches.
st100084
ptrblck: If you checked, that the models have different parameters, could you just pass a tensor with all ones through the trained model and the randomly initialized model and compare the outputs? I ran the experiment again and checked the outputs of each generator before, between and after training. These are the results: Before training: tensor([-0.0532, 0.4151, 0.3573, -0.2433, 0.1307, -0.0836, 0.2847, -0.1178, 0.1347, 0.1278, -0.1246, -0.0588, 0.3419], device='cuda:0') tensor([-0.0565, 0.4017, 0.3659, -0.2477, 0.1339, -0.0836, 0.2784, -0.1184, 0.1386, 0.1262, -0.1121, -0.0488, 0.3315], device='cuda:0') tensor([-0.0594, 0.3994, 0.3677, -0.2517, 0.1434, -0.0990, 0.2837, -0.1150, 0.1401, 0.1330, -0.1092, -0.0620, 0.3439], device='cuda:0') tensor([-0.0522, 0.4125, 0.3546, -0.2467, 0.1256, -0.0851, 0.2884, -0.1105, 0.1383, 0.1332, -0.1343, -0.0499, 0.3382], device='cuda:0') After training the first generator: tensor([ 1.2326, 0.1330, -0.2058, -0.2644, -0.1080, 0.1575, 0.2618, 0.0960, 0.3211, 0.4803, -0.0785, 0.2011, -0.3083], device='cuda:0') tensor([-0.0563, 0.4148, 0.3497, -0.2479, 0.1295, -0.0838, 0.2905, -0.0990, 0.1363, 0.1258, -0.1307, -0.0527, 0.3313], device='cuda:0') tensor([-0.0562, 0.3983, 0.3653, -0.2451, 0.1365, -0.0950, 0.2755, -0.1169, 0.1376, 0.1209, -0.1104, -0.0573, 0.3361], device='cuda:0') tensor([-0.0639, 0.3964, 0.3724, -0.2654, 0.1279, -0.1069, 0.2877, -0.1119, 0.1249, 0.1338, -0.1262, -0.0422, 0.3374], device='cuda:0') After training the second generator: tensor([ 1.2509, 0.1938, -0.1883, -0.2593, -0.0736, 0.0829, 0.1862, 0.0384, 0.3301, 0.5389, -0.0262, 0.2488, -0.3501], device='cuda:0') tensor([ 1.4068, -0.4611, -0.4623, 0.2457, 0.0790, -0.7879, -0.1040, -0.0594, 0.0446, 0.0345, -0.2796, -0.0319, -0.4630], device='cuda:0') tensor([-0.0517, 0.4005, 0.3747, -0.2566, 0.1396, -0.0952, 0.2775, -0.1243, 0.1364, 0.1313, -0.0991, -0.0516, 0.3421], device='cuda:0') tensor([-0.0709, 0.3985, 0.3652, -0.2412, 0.1333, -0.0953, 0.2782, -0.1052, 0.1356, 0.1420, -0.1292, -0.0532, 0.3499], device='cuda:0') After training the third generator: tensor([ 1.2610, 0.2675, -0.2377, -0.3138, -0.1379, 0.1104, 0.2437, 0.0204, 0.3022, 0.5029, 0.0027, 0.2320, -0.3861], device='cuda:0') tensor([ 1.4057, -0.4849, -0.5250, 0.3343, 0.1568, -0.8417, -0.0839, -0.0798, 0.0577, 0.0663, -0.3039, -0.0474, -0.4607], device='cuda:0') tensor([ 1.4200, 0.1538, 0.4493, 1.2378, 0.8067, -1.0643, -0.2824, 0.6857, -0.3737, -0.0873, -0.2991, 0.3936, -0.4183], device='cuda:0') tensor([-0.0647, 0.4080, 0.3546, -0.2470, 0.1262, -0.0913, 0.2868, -0.0972, 0.1294, 0.1270, -0.1409, -0.0572, 0.3390], device='cuda:0') After training all four generators: tensor([ 1.2381, 0.1970, -0.2328, -0.3427, -0.0975, 0.1496, 0.2497, 0.0587, 0.2780, 0.5047, 0.0048, 0.2039, -0.3336], device='cuda:0') tensor([ 1.3951, -0.4659, -0.5315, 0.2548, 0.0990, -0.7395, -0.0883, -0.0192, 0.1091, 0.0959, -0.2537, 0.0104, -0.4816], device='cuda:0') tensor([ 1.4325, 0.1591, 0.4397, 1.3557, 0.8864, -1.1142, -0.3061, 0.6200, -0.3217, -0.0575, -0.3192, 0.4867, -0.3381], device='cuda:0') tensor([ 1.0634, -0.4836, 0.4042, -0.5154, 0.0880, -0.2594, 0.0965, 0.2983, 0.6161, -0.2963, 0.7156, 0.2674, -0.5459], device='cuda:0')
st100085
Hello, PyTorch users! I am implementing multi-agent reinforcement learning and have finished testing it, and I am trying to convert CPU tensors to CUDA tensors by myself.
github.com/verystrongjoe/multiagent_rl (commit “make tensor run in gpu”)
But it became slower than before, and I don’t know what the problem is. It ran without error, but very slowly. I think it is working on the GPU as I want it to, but I suspect that copying data between CPU tensors and CUDA tensors takes longer than the time saved by using the GPU. Is there any way to get more performance?
st100086
Hello, I have a tensor of size BxCxHxW. I want to obtain a new tensor of size Bx3xCxHxW; how can I do it? I have used the unsqueeze(1) function but it only gives Bx1xCxHxW. Thanks
Edit: I also think another solution may work:
B = torch.cat([torch.zeros(A.size()).unsqueeze(1), torch.zeros(A.size()).unsqueeze(1), torch.zeros(A.size()).unsqueeze(1)], 1)
st100088
This should do it then to create a tensor of same type and on the same device as A. You can change the dtype and device arguments if you need a different type and/or device.
size = list(A.size())
size.insert(1, 3)
B = torch.zeros(size, dtype=A.dtype, device=A.device)
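As a side note (my addition, not from the reply above): if the intent is to replicate A along the new dimension rather than allocate zeros, expand gives a view without copying memory:

B = A.unsqueeze(1).expand(-1, 3, -1, -1, -1)   # BxCxHxW -> Bx3xCxHxW, shares A's storage
# follow with .contiguous() or .clone() if B will be modified in place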
st100089
In TensorFlow we can specify a GPU memory fraction (as given below); how can we do the same in PyTorch?
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
st100090
Hi, There is no such option in pytorch. It will allocate the memory as it needs it.
st100091
Is there a way to efficiently calculate top-k accuracy in PyTorch when using batches for CNNs? Currently I use a scikit-learn method called accuracy_score, which takes an array of argmax values (so each value in the array is the argmax of one image prediction by the CNN) and compares it to an array of target values (where each element is an image target).
Getting the prediction and target:
prediction_by_CNN = model(batch)
pred_numpy = prediction_by_CNN.detach().cpu().numpy()
target_numpy = target.detach().cpu().numpy()
prediction = np.argmax(pred_numpy, axis=1)
prediction = np.round(prediction).astype(np.uint8).reshape(-1)
target = target_numpy.astype(np.uint8).reshape(-1)
Calculating the accuracy:
accuracies.append(accuracy_score(target, prediction)*100)
mean_batch_accuracy = sum(accuracies)/len(accuracies)
epoch_accuracy.append(mean_batch_accuracy)
The percentage of correct predictions is then calculated, but this is equivalent to a top-1 accuracy only. Is there a way to alter the scikit-learn method, or maybe make a function that will efficiently calculate the top-3 accuracy?
My possible solution:
Catch the prediction arrays before performing argmax()
Use something like: prediction_numpy_top_k = np.argpartition(prediction_numpy, -3, axis=1)[:, -3:]
Perform: if target_numpy in prediction_numpy_top_k: accuracies.append(1) else: accuracies.append(0)
Or maybe this could be performed before even casting to numpy format?
Many thanks in advance!
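One way to do this directly on tensors, before any numpy conversion (a sketch of my own, assuming output has shape (batch, num_classes) and target holds class indices):

def topk_accuracy(output, target, k=3):
    with torch.no_grad():
        _, topk_idx = output.topk(k, dim=1)                 # (batch, k) predicted classes
        correct = topk_idx.eq(target.view(-1, 1))           # compare against the true class
        return correct.float().sum(dim=1).mean().item()     # fraction of samples with a top-k hit

acc3 = topk_accuracy(prediction_by_CNN, target, k=3) * 100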
st100092
I’m a PyTorch beginner and I’m trying to implement a recommender system based on the paper “Scalable Recommender Systems through Recursive Evidence Chains” (https://arxiv.org/abs/1807.02150v1). In this paper the latent factors Ui for users are defined in (3.1) and the latent factors Vj for items are defined in (3.2).
[figure: equations (3.1) and (3.2) from the paper]
You can see that the definition of Ui depends on Vj, and vice-versa! Of course, there are cases where this recursion is infinite, and so the authors define a Max Depth constant after which it is forced to stop. Here fφ and fψ are both 3-layer feed-forward neural networks with 200 neurons per hidden layer. My question is: with this kind of recursive definition, I have not been able to figure out how to code the training phase, in which the goal is to minimize the loss function defined in (3.3). The authors state: “While the definition of our latent feature vectors (U,V) rely on piece-wise functions, they are sub-differentiable and thus easy to optimize in a framework which supports automatic differentiation such as PyTorch or Tensorflow.” So it is possible!
st100093
I am facing a memory issue. I was trying to use 500,000 images to train my model, but it cannot load the images before training the model. At the beginning, I used ImageFolder to load the dataset. I searched the forums, and someone said I should use my own dataset class.
image_datasets = datasets.ImageFolder(dataset_dir, data_transforms['train'])
train_id, val_id = train_test_split(image_datasets, test_size=0.01)
train_dataloaders = torch.utils.data.DataLoader(train_id, batch_size=mc.batch_size, shuffle=True, num_workers=4)
val_dataloaders = torch.utils.data.DataLoader(val_id, batch_size=mc.batch_size, shuffle=True, num_workers=4)
Here is my own dataset code:
class MyDataset(Dataset):
    def __init__(self, root_dir, allfile, train=True, transform=None):
        self.root_dir = root_dir
        self.train = train
        self.allfile = allfile
        self.transform = transform
        self.mc = Myconfig()
        # self.length = allfile.shape

    def __len__(self):
        return self.allfile.shape[0]

    def __getitem__(self, idx):
        if self.train:
            # address = os.path.join(self.root_dir, self.allfile[idx, 0])
            img_dir = os.path.join(self.root_dir, self.allfile[idx, 0])
            label = self.allfile[idx, 1]
            label = label.astype(int)
            label = np.array(label)
            img = io.imread(img_dir).astype(np.uint8)
            sample = {'image': img, 'label': label, 'address': img_dir}
        else:
            address = os.path.join(self.root_dir, self.allfile[idx, 0])
            img_dir = address
            img = io.imread(img_dir).astype(np.uint8)
            sample = {'image': img, 'address': address}
        if self.transform:
            sample = self.transform(sample)
        return sample
When I use my dataset:
image_datasets = MyDataset(mc.dataset_dir, allfiles, transform=transforms.Compose([
    Rescale((28, 28)),
    ToTensor()
]))
I still face this problem. Could someone give me any help?
st100094
batch_size is 4. I think it’s not a batch_size problem; I cannot finish the dataset loading.
st100095
It may be a simple question. I downloaded the tarball from the GitHub releases page, however the tarball doesn’t work. How do I make it work?
(eee) [ pytorch-0.4.1]$ python setup.py install
fatal: not a git repository (or any of the parent directories): .git
running install
running build_deps
Could not find /home/liangstein/test_tarballs/4.1/pytorch-0.4.1/third_party/gloo/CMakeLists.txt
Did you run 'git submodule update --init'?
(eee) [ pytorch-0.4.1]$ git submodule update --init
fatal: not a git repository (or any of the parent directories): .git
(eee) [ pytorch-0.4.1]$
st100096
I don’t think it would work. It is probably best to check out the git branch v0.4.1 and use that. If you clone and then do git checkout -b v0.4.1 origin/v0.4.1, that should do the trick. Then you can run git submodule update --init as you suggested.
Best regards
Thomas
st100097
I did git checkout -b v0.4.1 origin/v0.4.1 and ran python setup.py install. However, “pip list” shows the wrong torch version: torch 0.5.0a0+a24163a. How does this happen?
st100098
That just means it’s legit: github.com/pytorch/pytorch Issue: Wrong version number for the 0.4.1 release (“The __version__ provided by Pytorch is wrong”).
Best regards
Thomas
st100099
Environment: Windows 10 + VS 2015 + CUDA 9.2 + Python 2.7.14 + CMake 3.12. How can I install Caffe2 successfully? When running “H:\pytorch\scripts>build_windows.bat”, the result has several errors. I only downloaded cmake and do not use it. The following is the output:
H:\pytorch\scripts>build_windows.bat
Requirement already satisfied: pyyaml in c:\programdata\anaconda2\lib\site-packages (3.12)
distributed 1.21.8 requires msgpack, which is not installed.
grin 1.2.1 requires argparse>=1.1, which is not installed.
You are using pip version 10.0.1, however version 18.0 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
CAFFE2_ROOT=H:\pytorch\scripts…
CMAKE_GENERATOR="Visual Studio 14 2015 Win64"
CMAKE_BUILD_TYPE=Release
-- Selecting Windows SDK version to target Windows 10.0.16299.
-- The CXX compiler identification is unknown
-- The C compiler identification is unknown
CMake Error at CMakeLists.txt:6 (project):
  No CMAKE_CXX_COMPILER could be found.
CMake Error at CMakeLists.txt:6 (project):
  No CMAKE_C_COMPILER could be found.
-- Configuring incomplete, errors occurred!
See also "H:/pytorch/build/CMakeFiles/CMakeOutput.log".
See also "H:/pytorch/build/CMakeFiles/CMakeError.log".
"Caffe2 building failed"