Linear layers on top of LSTM
null
[ { "contents": "<SCODE>local dec = nn.Sequential()\ndec:add(nn.LookupTable(opt.vocabSize, opt.hiddenSize))\ndec:add(nn.Sequencer(nn.LSTM(opt.hiddenSize, opt.hiddenSize))\ndec:add(nn.Sequencer(nn.Linear(opt.hiddenSize, opt.vocabSize)))\ndec:add(nn.Sequencer(nn.LogSoftMax()))\n<ECODE> <SCODE>class Net(nn.Module):\n def __init__(self, vocabSz, hiddenSz):\n super(Net, self).__init__()\n self.emb = nn.Embedding(vocabSz, hiddenSz)\n self.dec = nn.LSTM(hiddenSz, hiddenSz)\n self.lin = nn.Linear(hiddenSz, vocabSz)\n def forward(self, input, hidden):\n embeddings = self.emb(input)\n out, hidden = self.dec(embeddings, hidden)\n # do the linear here for each lstm output.. what's the best way?\n return F.softmax(out), hidden<ECODE>", "isAccepted": false, "likes": 2, "poster": "douwe" }, { "contents": "I’d do this: <SCODE>out, hidden = self.dec(embeddings, hidden)\nout = self.lin(out.view(-1, out.size(2))\nreturn F.softmax(out)\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "I think you need <SCODE>return F.softmax(out), hidden\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "edited example code – typo when transcribing, thanks.", "isAccepted": false, "likes": null, "poster": "douwe" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "csarofeen" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mfa" }, { "contents": "Yes, it just basically folds together the first two dimensions, the third will stay unaffected. Example output from my code: <SCODE>out\nVariable containing:\n( 0 ,.,.) = \n 0.0426 -0.2047 -0.0125 ... -0.0409 0.1487 0.0376\n -0.0744 -0.0927 0.0185 ... -0.1620 0.1348 -0.2179\n 0.0111 -0.1565 -0.0192 ... -0.1247 0.0352 -0.0625\n\n( 1 ,.,.) = \n 0.0726 -0.1911 0.3049 ... -0.1031 0.1991 0.0659\n -0.2233 0.1622 0.0794 ... -0.1720 0.1020 -0.0430\n 0.0600 -0.3536 0.0201 ... -0.0990 -0.1203 -0.1207\n\n( 2 ,.,.) = \n 0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000\n -0.1225 0.1387 0.1205 ... -0.1052 -0.1638 -0.2314\n 0.0336 -0.3169 0.0090 ... -0.0343 0.0635 -0.0280\n\n( 3 ,.,.) = \n 0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000\n 0.0001 -0.1604 0.1111 ... -0.0778 -0.1291 -0.0852\n 0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000\n[torch.FloatTensor of size 4x3x128]\n\nout.view(-1, out.size(2))\nVariable containing:\n 0.0426 -0.2047 -0.0125 ... -0.0409 0.1487 0.0376\n-0.0744 -0.0927 0.0185 ... -0.1620 0.1348 -0.2179\n 0.0111 -0.1565 -0.0192 ... -0.1247 0.0352 -0.0625\n ... ⋱ ... \n 0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000\n 0.0001 -0.1604 0.1111 ... -0.0778 -0.1291 -0.0852\n 0.0000 0.0000 0.0000 ... 0.0000 0.0000 0.0000\n[torch.FloatTensor of size 12x128]\n\n<ECODE>", "isAccepted": false, "likes": null, "poster": "simono" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "live-wire" }, { "contents": "If your inputs are independent(like for generating baby names where each name is a separate thing) then yes, there is no need to do this, however, when the the inputs are related, you’ll need to forward the states as well. 
for example you want to produce Shakespearean text, or something to to that extend, you’ll benefit from doing so.", "isAccepted": false, "likes": null, "poster": "Shisho_Sama" }, { "contents": "<SCODE>voc_size = 100\nn_labels = 3\nemb_dim = 16\nrnn_size = 32\nembedding = nn.Embedding(voc_size, emb_dim)\nrnn = nn.LSTM(input_size=emb_dim, hidden_size=rnn_size, bidirectional=True, num_layers=1)\ntop_layer = nn.Linear(2 * rnn_size, n_labels)\n\nsentences = torch.randint(high=voc_size, size=(10, 4))\nprint(sentences.shape)\n\nembedded = embedding(sentences)\nprint(embedded.shape)\n\nrnn_out, _ = rnn(embedded)\nprint(rnn_out.shape)\n\nout = top_layer(rnn_out)\nprint(out.shape)\n<ECODE> The output is as follows: <SCODE>torch.Size([10, 4])\ntorch.Size([10, 4, 16])\ntorch.Size([10, 4, 64])\ntorch.Size([10, 4, 3])\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "shahensha" } ]
false
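As a consolidated reference for this thread, here is a minimal runnable sketch of applying a Linear layer to every LSTM time step — sizes are illustrative, and the final assertion relies on recent PyTorch where nn.Linear broadcasts over leading dimensions:
<SCODE>
import torch
import torch.nn as nn

vocab_size, hidden_size, seq_len, batch = 100, 32, 7, 4

emb = nn.Embedding(vocab_size, hidden_size)
lstm = nn.LSTM(hidden_size, hidden_size)      # input: (seq_len, batch, hidden)
lin = nn.Linear(hidden_size, vocab_size)

tokens = torch.randint(vocab_size, (seq_len, batch))
out, hidden = lstm(emb(tokens))               # out: (seq_len, batch, hidden)

# The view trick from the thread: fold time and batch, apply Linear, unfold.
logits = lin(out.view(-1, out.size(2))).view(seq_len, batch, vocab_size)

# Recent nn.Linear applies to the last dimension directly, so this is equivalent.
assert torch.allclose(logits, lin(out))
<ECODE>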
Masked Summation of Tensor
null
[ { "contents": "<SCODE> 99 99\n 489 1171\n 281 1317\n ⋮ \n 0 435\n 0 2741\n 0 517\n[torch.LongTensor of size 2406x2]```\n\nPOST EMBEDDING:\n```context_output.size() -- (2406L, 2L, 1L)``` -- (word_sequence, batch_size, attention_score)\n\n```Variable containing:\n( 0 ,.,.) = \n1.00000e-02 *\n 2.0804\n 1.6674\n 2.9782\n ⋮ \n 4.8565\n 4.8565\n 4.8565\n...\n( 1 ,.,.) = \n1.00000e-02 *\n 0.2246\n 1.4224\n 4.1816\n ⋮ \n 4.4363\n 3.0162\n 3.3986\n[torch.FloatTensor of size 2x2406x1]```\n\n\nI need to sum the values in the POST EMBEDDING tensor if they have the same ```word_id``` from the PRE EMBEDDING tensor.\n\nExample:\n\nPRE EMBEDDING\n```Variable containing:\n 4 1\n 4 2\n 2 2\n[torch.LongTensor of size 3x2]```\n\nPOST EMBEDDING:\n\n```Variable containing:\n( 0 ,.,.) =\n 0.35\n 0.35\n 0.65\n...\n( 1 ,.,.) = \n 0.25\n 0.65\n 0.65\n[torch.FloatTensor of size 2x3x1]```\n\nDESIRED RESULT\n\n```Variable containing:\n( 0 ,.,.) =\n 4.0 0.7\n 2.0 0.65\n 1.0 0.0\n...\n( 1 ,.,.) =\n 4.0 0.0\n 2.0 1.3\n 1.0 0.25\n[torch.LongTensor of size 2x3x2]```<ECODE>", "isAccepted": false, "likes": null, "poster": "mattyd2" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
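apaszke's answer is empty in this dump. One way to get the desired result is scatter_add_, which accumulates the attention scores into per-word bins; this is an assumed approach, shown on the small example from the post:
<SCODE>
import torch

word_ids = torch.tensor([[4, 1], [4, 2], [2, 2]])    # PRE EMBEDDING: (seq, batch)
scores = torch.tensor([[[0.35], [0.35], [0.65]],
                       [[0.25], [0.65], [0.65]]])    # POST EMBEDDING: (batch, seq, 1)

vocab_size = 5
summed = torch.zeros(word_ids.size(1), vocab_size)
# For each batch b and step t: summed[b, word_ids[t, b]] += scores[b, t]
summed.scatter_add_(1, word_ids.t(), scores.squeeze(-1))
print(summed)  # row 0: 0.70 at id 4, 0.65 at id 2; row 1: 0.25 at id 1, 1.30 at id 2
<ECODE>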
Update PyTorch version on LINUX w/ CUDA
null
[ { "contents": "I want to make sure that I do not break my installation on my LINUX machine with GPU (cuda 8.0) when attempting to update pytorch, so I am asking here to be sure. I would like to now upgrade my PyTorch version to whatever is on master, and the website has this to say about LINUX with CUDA8: conda install pytorch torchvision cuda80 -c soumith However, I do not have conda, and anyway I installed via a wheel with pip. Is there anything I should do (or not do) in particular here to make sure I do not break my system? I want to basically just update my pytorch on my LINUX machine. Thanks!", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "Veril" }, { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "Hope there is something like “luarocks install nn” in Torch7 so that we can update blood-edge versions quicky…", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "All remaining question are about managing venvs, and there are probably lots of articles, that will explain it better, that I will in this short response.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "How do I do that? I looked at sys.modules but… it’s just a huge amount of information dumped into the screen. I currently do not know how to control what version is being loaded… :-/ Help!", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
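Since the original poster installed from a wheel with pip, the pip-side equivalent of the conda line is below — a sketch only: the wheel URL is a placeholder, and the exact one for your Python/CUDA combination comes from pytorch.org:
<SCODE>
# old-style wheel upgrade; replace the URL with the one pytorch.org lists for you
pip install --upgrade <wheel-url-from-pytorch.org>

# on recent releases the packages are on PyPI directly
pip install --upgrade torch torchvision
<ECODE>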
Usage of nn.ModuleList
null
[ { "contents": "<SCODE> module = [nn.Conv2d(1,64, 3, stride = (1,1), padding = (1,1)), nn.BatchNorm2d(\n self.module_list = nn.ModuleList(module)\n self.module_list += module\n<ECODE> Each “module”, is composed of a Conv2d, and a BatchNorm. What I would like self.module_list to be, is: Conv2d -> BatchNorm -> Conv2d -> BatchNorm Is this the right way to go about it? Thanks!!", "isAccepted": false, "likes": 1, "poster": "Kalamaya" }, { "contents": "No, you’re adding the same modules, with the same weights twice. You need to create a new list, and append that after construction. <SCODE>def make_sequence():\n return [nn.Conv2d(1,64, 3, stride = (1,1), padding = (1,1)), ...]\n\nself.module_list = nn.ModuleList()\nfor i in range(5):\n self.module_list += make_sequence()\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "apaszke" } ]
false
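Putting apaszke's fix together into a runnable sketch (channel sizes are illustrative — the thread's Conv2d(1, 64, ...) layers would not chain, so 64 → 64 is used here):
<SCODE>
import torch
import torch.nn as nn

class Stack(nn.Module):
    def __init__(self, n_blocks=2):
        super(Stack, self).__init__()
        self.module_list = nn.ModuleList()
        for _ in range(n_blocks):
            # build fresh modules on every iteration so weights are not shared
            self.module_list += [nn.Conv2d(64, 64, 3, stride=(1,1), padding=(1,1)),
                                 nn.BatchNorm2d(64)]

    def forward(self, x):
        for m in self.module_list:   # Conv2d -> BatchNorm -> Conv2d -> BatchNorm
            x = m(x)
        return x

print(Stack()(torch.randn(1, 64, 8, 8)).shape)  # torch.Size([1, 64, 8, 8])
<ECODE>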
Stop gradients (for ST gumbel softmax)
null
[ { "contents": "Hi, Given the soft implementation of Gumbel Softmax: <SCODE>from torch.autograd import Variable\nimport torch.nn.functional as F\n\ndef sample_gumbel(input):\n noise = torch.rand(input.size())\n eps = 1e-20\n noise.add_(eps).log_().neg_()\n noise.add_(eps).log_().neg_()\n return Variable(noise)\n\ndef gumbel_softmax_sample(input):\n temperature = 1\n noise = sample_gumbel(input)\n x = (input + noise) / temperature\n x = F.log_softmax(x)\n return x.view_as(input)\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "eliabruni" }, { "contents": "I did stop_gradient(x) in PyTorch as Variable(x.data). However, I’m looking for a better alternative.", "isAccepted": false, "likes": null, "poster": "Ilya_Kostrikov" }, { "contents": "", "isAccepted": false, "likes": 5, "poster": "apaszke" }, { "contents": "Thank you both, I’ll try it out!", "isAccepted": false, "likes": null, "poster": "eliabruni" }, { "contents": "<SCODE>def gumbel_softmax_sample(self, input):\n temperature = 1\n noise = self.sample_gumbel(input)\n x = (input + noise) / temperature\n x = F.softmax(x)\n\n if hard == True:\n max_val, _ = torch.max(x, x.dim()-1)\n x_hard = x == max_val.expand_as(x)\n tmp = (x_hard.float() - x)\n tmp2 = tmp.clone()\n tmp2.detach_()\n x = tmp2 + x\n\n return x.view_as(input)\n<ECODE> But, I tried to modify it such that only one ‘1’ exists at each row in x_hard as below. <SCODE>def gumbel_softmax_sample(self, input):\n temperature = 1\n noise = self.sample_gumbel(input)\n x = (input + noise) / temperature\n x = F.softmax(x)\n\n if hard == True:\n _, max_inx = torch.max(x, x.dim()-1)\n x_hard = torch.cuda.FloatTensor(x.size()).zero_().scatter_(x.dim()-1, max_inx.data, 1.0)\n x2 = x.clone()\n tmp = Variable(x_hard-x2.data)\n tmp.detach_()\n\n x = tmp + x\n \n return x.view_as(input)\n<ECODE> These two scripts work without error, but I am not sure they are equivalent to the original ST gumbel softmax. I would greatly appreciate if someone reviews them.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "Is there anyone helps me to confirm that these two scripts are equivalent to the original tensorflow implementation ?", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ethancaballero" }, { "contents": "Because I do not know how to make the parameters same as those of the original, I have not yet compared them.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Qi_Zhang" }, { "contents": "<SCODE>y = tf.stop_gradient(y_hard - y) + y\n<ECODE> y is pure one-hot, in terms of value (since we add the soft y, and then subtract it again the gradients are those of soft y (since all the other terms in this expression have their gradient stripped)", "isAccepted": false, "likes": 7, "poster": "hughperkins" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "hughperkins" } ]
false
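For reference, the straight-through trick hughperkins describes, written with detach() in current PyTorch style — a sketch (the function name and defaults are mine, not the thread's):
<SCODE>
import torch
import torch.nn.functional as F

def gumbel_softmax_st(logits, temperature=1.0):
    eps = 1e-20
    u = torch.rand_like(logits)
    g = -torch.log(-torch.log(u + eps) + eps)        # Gumbel(0, 1) noise
    y_soft = F.softmax((logits + g) / temperature, dim=-1)
    # hard one-hot with exactly one 1 per row, built from the argmax
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # forward pass sees y_hard; gradients flow through y_soft only
    return (y_hard - y_soft).detach() + y_soft

print(gumbel_softmax_st(torch.randn(4, 6)).sum(dim=-1))  # each row sums to 1
<ECODE>
Recent PyTorch also ships this behaviour as torch.nn.functional.gumbel_softmax(logits, tau=1, hard=True).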
Implementing low rank approximation
null
[ { "contents": "below is simple code to test the performance part 1 <SCODE>import torch\nimport time\nimport numpy as np\n\nimg = torch.autograd.Variable(torch.rand(1, 3, 416, 416))\nfilters = torch.autograd.Variable(torch.rand(64, 3, 5, 5))\n\navg = 0\nfor x in xrange(10):\n\tnum_ops = 0\n\tst = time.clock()\n\tconv = torch.nn.functional.conv2d(img, filters, padding=1)\n\tnum_ops = np.prod(conv.size())*np.prod(filters.size())\n\t# print result.size()\n\tavg += time.clock() - st\nprint avg/10.0, 'Average conv operation time'\nprint 'Number of operation', num_ops\n<ECODE> Part 2 <SCODE>print \"=================================================================\"\nfilters = torch.autograd.Variable(torch.rand(8, 3, 5, 5))\nA = torch.autograd.Variable(torch.rand(64, 8))\navg = 0\ncore_avg = 0\nfor x in xrange(10):\n\tnum_ops = 0\n\tst = time.clock()\n\n\tcore_st = time.clock()\n\tconv = torch.nn.functional.conv2d(img, filters, padding=1)\n\tnum_ops = np.prod(conv.size())*np.prod(filters.size())\n\tcore_avg += time.clock() - core_st\n\n\tconv = conv.view(8, -1)\n\tcore_st = time.clock()\n\tnum_ops += np.prod(conv.size()) * A.size()[0]\n\tconv = torch.mm(A, conv)\n\tcore_avg += time.clock() - core_st\n\n\t# print conv.view(result.size()).size()\n\tavg += time.clock() - st\nprint 'Number of reduced operation', num_ops\nprint avg/10.0, 'Average reduced operation time'\nprint core_avg/10.0, 'Average reduced core operation time'\n<ECODE> part 1 does the simple convolution and part 2 does low-rank approximation.", "isAccepted": false, "likes": null, "poster": "madan-ram" }, { "contents": "The number of operations you computed is for the naive convolution algorithm, but there’s been a lot of research in this area, and modern algorithms perform much less operations. Additionally, the actual speed depends not only on floating point operations, but also on memory bandwidth. Doing conv + mm requires reloading some intermediate values multiple times, and this takes time. A single conv kernel can reuse values already loaded int registers.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Using your own dataset on PyTorch
null
[ { "contents": "I have a dataset in numpy format (actually, I just modified the CIFAR-10/MNIST datasets given on PyTorch). The dimensions of it consist with the dimensions a normal CNN expects. For example, if I write: print(our_dataset.shape, our_labels.shape) I get: (10000, 3, 32, 32) (10000,) which is fine. Now I cast the data info torch format using: <SCODE>train_data = torch.from_numpy(our_dataset)\nour_labels = torch.from_numpy(our_labels)\n<ECODE> encapsulate it into a TensorDataset: train = torch.utils.data.TensorDataset(train_data, our_labels) and finally into a DataLoader: trainloader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True) All fine here. Now I build the neural network, and then when I do the training, I get the error: TypeError: DoubleSpatialConvolutionMM_updateOutput received an invalid combination of arguments - got (int, torch.DoubleTensor, torch.DoubleTensor, torch.FloatTensor, torch.FloatTensor, torch.DoubleTensor, torch.DoubleTensor, long, long, int, int, int, int), but expected (int state, torch.DoubleTensor input, torch.DoubleTensor output, torch.DoubleTensor weight, [torch.DoubleTensor bias or None], torch.DoubleTensor finput, torch.DoubleTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH) If I am working on cuda, then the error changes to: _cudnn_convolution_full_forward received an invalid combination of arguments - got (torch.cuda.DoubleTensor, torch.cuda.FloatTensor, torch.cuda.FloatTensor, torch.cuda.DoubleTensor, tuple, tuple, int, bool), but expected (torch.cuda.RealTensor input, torch.cuda.RealTensor weight, torch.cuda.RealTensor bias, torch.cuda.RealTensor output, std::vector<int> pad, std::vector<int> stride, int groups, bool benchmark) Anyone has seen these errors before? In addition, is the right way of using a new dataset by first casting it to torch format, then building a TensorDataset and finally a DataLoader?", "isAccepted": false, "likes": 3, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "you need to cast your data to double in that case", "isAccepted": false, "likes": null, "poster": "edgarriba" }, { "contents": "If I type: print(our_dataset.dtype, our_labels.dtype) I get: float32 int64 I sent the data to cuda (and gave the Error in that case). However, if I don’t send the net (and data) to cuda, I get the other error that I posted in my original post. @edgarriba Well, the data is already in float.", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "right, but the error says that expect a DoubleTensor", "isAccepted": false, "likes": null, "poster": "edgarriba" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "If I case the our_dataset to float32 and labels to int64, then the error changes to: multi-target not supported at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/THNN/generic/ClassNLLCriterion.c:20 but the shape of the labels is (10000,). Hmm, this is becoming weird.", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "sarthak1996" }, { "contents": "It seems that this works.", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "the version of 0.3.0 can well solve the problem of the problem of ‘received an invalid combination of arguments’", "isAccepted": false, "likes": null, "poster": "Shine_George" } ]
false
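The casts the thread converges on, as a self-contained sketch with synthetic data standing in for the poster's arrays — float32 images, int64 labels of shape (N,):
<SCODE>
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

our_dataset = np.random.rand(1000, 3, 32, 32).astype(np.float32)      # images
our_labels = np.random.randint(0, 10, size=(1000,)).astype(np.int64)  # class ids, 1-D

train = TensorDataset(torch.from_numpy(our_dataset), torch.from_numpy(our_labels))
trainloader = DataLoader(train, batch_size=128, shuffle=True)

images, labels = next(iter(trainloader))
print(images.dtype, labels.dtype, labels.shape)  # torch.float32 torch.int64 torch.Size([128])
<ECODE>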
Error while saving my network
null
[ { "contents": "I have made a CNN: <SCODE>class convNet(nn.Module):\n #constructor\n def __init__(self):\n super(convNet, self).__init__()\n \n self.conv1 = nn.Conv2d(3, 96, kernel_size=7,stride=1,padding=3)\n self.conv2 = nn.Conv2d(96, 256, kernel_size=5,stride=1,padding=2)\n self.conv3 = nn.Conv2d(256,384,kernel_size=3,stride=1,padding=1)\n self.conv4 = nn.Conv2d(384,512,kernel_size=3,stride=1,padding=1)\n self.conv5 = nn.Conv2d(512,1024,kernel_size=3,stride=1,padding=1)\n self.fc1 = nn.Linear(4*4*1024, 300)\n self.fc2 = nn.Linear(300, 100)\n self.fc3 = nn.Linear(100, 10)\n \n\n def forward(self, x):\n conv1_relu = nnFunctions.relu(self.conv1(x))\n conv2_relu = nnFunctions.relu(self.conv2(conv1_relu))\n conv3_relu =nnFunctions.max_pool2d(nnFunctions.relu(self.conv3(conv2_relu)),2)\n conv4_relu =nnFunctions.max_pool2d(nnFunctions.relu(self.conv4(conv3_relu)),2)\n conv5_relu =nnFunctions.max_pool2d(nnFunctions.relu(self.conv5(conv4_relu)),2)\n \n x = conv5_relu.view(-1, 4*4*1024)\n x = nnFunctions.relu(self.fc1(x))\n x = nnFunctions.relu(self.fc2(x))\n x = self.fc3(x)\n return x\n<ECODE> <SCODE>net=convNet()\nnet.cuda()\n<ECODE> It gives me the following errror: <SCODE>\nAttributeError Traceback (most recent call last)\n<ipython-input-13-a8f33f24049b> in <module>()\n----> 1 torch.save('net.t7',net)\n\n/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in save(obj, f, pickle_module, pickle_protocol)\n 121 f = open(f, \"wb\")\n 122 try:\n--> 123 return _save(obj, f, pickle_module, pickle_protocol)\n 124 finally:\n 125 if new_fd:\n\n/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in _save(obj, f, pickle_module, pickle_protocol)\n 212 pickle_module.dump(sys_info, f, protocol=pickle_protocol)\n 213 \n--> 214 with closing(tarfile.open(fileobj=f, mode='w:', format=tarfile.PAX_FORMAT)) as tar:\n 215 _add_to_tar(save_sys_info, tar, 'sys_info')\n 216 _add_to_tar(pickle_objects, tar, 'pickle')\n\n/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in open(cls, name, mode, fileobj, bufsize, **kwargs)\n 1691 else:\n 1692 raise CompressionError(\"unknown compression type %r\" % comptype)\n-> 1693 return func(name, filemode, fileobj, **kwargs)\n 1694 \n 1695 elif \"|\" in mode:\n\n/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in taropen(cls, name, mode, fileobj, **kwargs)\n 1721 if mode not in (\"r\", \"a\", \"w\"):\n 1722 raise ValueError(\"mode must be 'r', 'a' or 'w'\")\n-> 1723 return cls(name, mode, fileobj, **kwargs)\n 1724 \n 1725 @classmethod\n\n/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel)\n 1577 self.members = [] # list of members as TarInfo objects\n 1578 self._loaded = False # flag if all members have been read\n-> 1579 self.offset = self.fileobj.tell()\n 1580 # current position in the archive file\n 1581 self.inodes = {} # dictionary caching the inodes of\n\n/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __getattr__(self, name)\n 241 if name in modules:\n 242 return modules[name]\n--> 243 return object.__getattribute__(self, name)\n 244 \n 245 def __setattr__(self, name, value):\n\nAttributeError: 'convNet' object has no attribute 'tell'\n<ECODE>", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "<SCODE>torch.save('net.txt',net)\n<ECODE> It is: <SCODE>torch.save(net, 'net.txt')\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "smth" 
}, { "contents": "Thanks for your help", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "<SCODE>/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in load(f, map_location, pickle_module)\n 246 f = open(f, 'rb')\n 247 try:\n--> 248 return _load(f, map_location, pickle_module)\n 249 finally:\n 250 if new_fd:\n\n/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in _load(f, map_location, pickle_module)\n 344 unpickler = pickle_module.Unpickler(pickle_file)\n 345 unpickler.persistent_load = persistent_load\n--> 346 result = unpickler.load()\n 347 return result\n\nAttributeError: 'module' object has no attribute 'convNet'<ECODE>", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "You need to copy the code of your network to that other notebook.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Or better put it in a file in the same directory as your notebooks and import it from both.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
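Beyond the fixed argument order, the later unpickling error ('module' object has no attribute 'convNet') is avoided by saving only the state_dict — a sketch, assuming the thread's net and convNet are in scope:
<SCODE>
import torch

torch.save(net.state_dict(), 'net.pth')   # parameters only, no pickled class

net2 = convNet()                          # the class must still be defined/imported
net2.load_state_dict(torch.load('net.pth'))
<ECODE>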
TypeError: addmm_, after transfering the net to GPU (works fine on CPU)
null
[ { "contents": "Hi all, to Thanks a lot for your help.", "isAccepted": false, "likes": null, "poster": "nelsonleung" }, { "contents": "the input to your network is on the CPU, but your network is transferred to the GPU (or vice versa). Here’s the critical part of the error that helped me figure this out: <SCODE> got (int, int, torch.FloatTensor, torch.cuda.FloatTensor),\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I see. Got it to work. Thanks a lot for your prompt reply!", "isAccepted": false, "likes": null, "poster": "nelsonleung" }, { "contents": "Thanks in advance.", "isAccepted": false, "likes": null, "poster": "Ke_Bai" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jupiter" }, { "contents": "<SCODE>def __len__(self):\n return self.npy_data.shape[0]\n\ndef __getitem__(self, index):\n fm_data = np.array(self.npy_data[:, :-1], dtype = np.float)\n label_data = np.array(self.npy_data[:, -1], dtype = np.float)\n\n return fm_data, label_data\n<ECODE> <SCODE>def forward(self, x):\n out = self.fc1(x)\n out = self.relu(out)\n out = self.fc2(out)\n return out\n<ECODE> <SCODE> label_data = torch.from_numpy(label_data).float().cuda()\n label_data = Variable(label_data)\n\n optimizer.zero_grad() # zero the gradient buffer\n outputs = net(fm_data)\n loss = criterion(outputs, label_data)\n loss.backward()\n optimizer.step()\n\n if (i + 1) % 100 == 0:\n print ('Epoch [%d/%d], Step [%d/%d], Loss: %.4f'\n %(epoch+1, num_epochs, i+1, len(train_data)//batch_size, loss.data[0]))\n<ECODE> could you give me an advice to solve this problem? Thank you very much!", "isAccepted": false, "likes": null, "poster": "Dami_Mi" }, { "contents": "By the way, your question has nothing to do with the original post. So you should post a new thread instead replying in here. Further more, the error message is quite clear in your case, you might be able to solve such issues faster if you think about the message and consult the doc in future…", "isAccepted": false, "likes": null, "poster": "SimonW" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Dami_Mi" }, { "contents": "Hi Nelson", "isAccepted": false, "likes": null, "poster": "Borna_Ghotbi" }, { "contents": "Hey. Have you got it worked?", "isAccepted": false, "likes": null, "poster": "xhran2010" } ]
false
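The fix smth describes, as a minimal sketch — the model and every input tensor must live on the same device:
<SCODE>
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 2).to(device)     # move parameters once
inputs = torch.randn(4, 10).to(device)  # move every batch the same way
outputs = model(inputs)                 # no FloatTensor / cuda.FloatTensor mix
print(outputs.device)
<ECODE>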
Clarification on simple network using pytorch
null
[ { "contents": "Hi everyone, thanks for the great community. I was wondering whether could someone provide some insights into how to construct a NN such as those in the examples on the github page but on its simplest form. Let me explain with an example. For instance in the github page of examples one can find the following. <SCODE>import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 6, 5) # 1 input image channel, 6 output channels, 5x5 square convolution kernel\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16*5*5, 120) # an affine operation: y = Wx + b\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def forward(self, x):\n x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # Max pooling over a (2, 2) window\n x = F.max_pool2d(F.relu(self.conv2(x)), 2) # If the size is a square you can only specify a single number\n x = x.view(-1, self.num_flat_features(x))\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return x\n\nnet = Net()\n<ECODE> There are a couple of things here that are being obfuscated and might create confusion for newcomers. Thanks.", "isAccepted": false, "likes": null, "poster": "kirk86" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "From your post, I see that you want to avoid classes and inheritance. You could look at Sergey Zagoruyko’s purely functional implementation of Wide ResNets:", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "kirk86" }, { "contents": "I’m confused regarding the size of the max pooling window. Thanks", "isAccepted": false, "likes": null, "poster": "isalirezag" }, { "contents": "Exactly as you said it. The example is just to show both ways are valid. If you have pooling size of 2x2 you can simply use one number (2) instead of (2, 2). In other cases you might wanna have pooling of over different window/kernel sizes e.g. (2, 3).", "isAccepted": false, "likes": null, "poster": "kirk86" } ]
false
In what situation the network computation would be slow?
null
[ { "contents": "I know when you run the first batch, it’s possible to be very slow. However, I met the following situation: When I spend a long time reading data, the time of network is also higher. The time is counted after synchronized. My dataloader includes multi-threaded loading. Anyone has any idea?", "isAccepted": false, "likes": null, "poster": "ruotianluo" }, { "contents": "That’s a tough one. There are many reasons why this can happen. You can have a background process that’s sleeping most of the time, but wakes up and consumes a lot of resources for some time. It might be Python’s garbage collection. It can be some kind of cleanup in the OS. It can be because of the hardware. Does this happen to you very often?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "It may be solved if I change my data loading strategy. (The readdata time is occasionally long because I’m periodically dumping tasks into the pool). But every time readdata is long, the network also becomes slow. So I’m curious what will cause the network to become slow. Could high IO slow down the network computation? I thought the network computation time is mostly determined by GPU?", "isAccepted": false, "likes": null, "poster": "ruotianluo" }, { "contents": "It is, but you still need to have your CPU share, so it can queue the kernels for the GPU. If the Python process is suspended, then it can’t give the GPU any work to do.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Allocate cuda tensor in subprocess
null
[ { "contents": "<SCODE>import torch\nimport torch.multiprocessing as mp\n\ndef put_in_q():\n x = torch.IntTensor(2, 2).fill_(22)\n x = x.cuda()\n print(x)\n\np = mp.Process(target=put_in_q, args=())\np.start()\np.join()\n<ECODE> But not this one: <SCODE>import torch\nimport multiprocessing as mp\n\nclass CudaProcess(mp.Process):\n def __init__(self):\n mp.Process.__init__(self)\n\n def run(self):\n x = torch.IntTensor(2, 2).fill_(22)\n x = x.cuda()\n print(x)\n\np = CudaProcess()\np.start()\np.join()\n<ECODE> The error I’m getting is: <SCODE>terminate called after throwing an instance of 'THException'\n what(): cuda runtime error (3) : initialization error at .../pytorch/torch/lib/THC/THCGeneral.c:70\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "florin" }, { "contents": "CUDA multiprocessing is quite complicated, and I wouldn’t recommend it, unless it would give you huge performance benefits. The two most important things to remember are:", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "I’m not familiar with all the architecture in pytorch, but I would like to share some thoughts if I may. Wouldnt multi-threading mult-gpu work if: I guess moving all c++ part to cffi would solve 1) and would make it even possible to use pypy (lua torch used JIT but in python its suddenly a bad idea?)", "isAccepted": false, "likes": 1, "poster": "kmichaelkills" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "florin" }, { "contents": "edit:", "isAccepted": false, "likes": null, "poster": "florin" }, { "contents": "UB=undefined behaviour I guess", "isAccepted": false, "likes": 1, "poster": "kmichaelkills" }, { "contents": "Yeah, UB is undefined behaviour = anything could happen now. Regarding cffi, we didn’t want to add more dependencies, and having more fine grained control over GIL and some other aspects makes it easier for us to develop the C code. I’d love to support PyPy, but there are very few people using it, so it’s not really a top priority for us.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
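A detail that later PyTorch versions made explicit: CUDA cannot be (re)initialized in a forked subprocess, so the usual fix is the spawn start method — a sketch based on the first snippet above:
<SCODE>
import torch
import torch.multiprocessing as mp

def put_in_q():
    x = torch.IntTensor(2, 2).fill_(22).cuda()
    print(x)

if __name__ == '__main__':
    mp.set_start_method('spawn')  # fork would leave CUDA in a broken state
    p = mp.Process(target=put_in_q)
    p.start()
    p.join()
<ECODE>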
How to use mark_non_differentiable
null
[ { "contents": "Hi All,", "isAccepted": false, "likes": 1, "poster": "mattyd2" }, { "contents": "For example, comparison is non-differentiable, so you could implement a Function for it like that: <SCODE>class Gt(Function):\n\n def __init__(self, scalar=None):\n super(Gt, self).__init__()\n self.scalar = scalar\n\n def forward(self, tensor1, tensor2=None):\n other = tensor2 if tensor2 is not None else self.scalar\n mask = tensor1.gt(other)\n self.mark_non_differentiable(mask)\n return mask\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
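The same Function in current static-method style (ctx replaces self), as a sketch:
<SCODE>
import torch
from torch.autograd import Function

class Gt(Function):
    @staticmethod
    def forward(ctx, tensor1, tensor2):
        mask = tensor1.gt(tensor2)
        ctx.mark_non_differentiable(mask)  # no gradient will flow through mask
        return mask

    @staticmethod
    def backward(ctx, grad_mask):
        return None, None

print(Gt.apply(torch.randn(3), torch.randn(3)))
<ECODE>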
Did torch.gather change in new pytorch version?
null
[ { "contents": "Before I upgraded to the latest pytorch, this command worked for me: ss = pp.gather(1, labels) <SCODE>Variable containing:\n 0.0651 0.9349\n 0.6208 0.3792\n 0.3024 0.6976\n 0.2226 0.7774\n 0.0394 0.9606\n 0.4197 0.5803\n 0.1205 0.8795\n 0.3774 0.6226\n 0.1682 0.8318\n 0.2281 0.7719\n 0.3845 0.6155\n 0.4658 0.5342\n 0.4982 0.5018\n 0.2653 0.7347\n 0.2694 0.7306\n 0.6550 0.3450\n[torch.cuda.FloatTensor of size 16x2 (GPU 0)]\n<ECODE> <SCODE>Variable containing:\n 1\n 1\n 1\n 1\n 1\n 1\n 1\n 1\n 0\n 0\n 0\n 0\n 0\n 0\n 0\n 0\n[torch.cuda.LongTensor of size 16 (GPU 0)]\n<ECODE> However after I upgraded to the latest version off of master, this same line gives me the following error: *** RuntimeError: Input tensor must have same dimensions as output tensor at /data/pytorch/torch/lib/THC/generic/THCTensorScatterGather.cu:16", "isAccepted": false, "likes": 2, "poster": "Kalamaya" }, { "contents": "The answer, is that labels needs to be reshaped, into an explicit 16x1 vector, from the 16 it is at. Therefore, doing labels.view(-1,1) will make it work.", "isAccepted": false, "likes": 2, "poster": "Kalamaya" }, { "contents": "RuntimeError: Index tensor must have same dimensions as input tensor at /home/jdily/Desktop/project/lib/pytorch/torch/lib/THC/generic/THCTensorScatterGather.cu:111 And there is no error when running under cpu mode. thx!", "isAccepted": false, "likes": null, "poster": "jdily" }, { "contents": "Maybe you are affected by this.", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
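A compact sketch of the fix — gather's index must have the same number of dimensions as its input:
<SCODE>
import torch

pp = torch.rand(16, 2)                 # per-class probabilities
labels = torch.randint(0, 2, (16,))    # shape (16,): one class id per row

ss = pp.gather(1, labels.view(-1, 1))  # reshape the index to (16, 1) first
print(ss.shape)                        # torch.Size([16, 1])
<ECODE>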
How to prefetch data when processing with GPU?
null
[ { "contents": "Hi everyone, I’m new to Pytorch/torch. Is there any deme codes for prefetching data with another process during GPU doing computation? Thank you very much.", "isAccepted": false, "likes": 4, "poster": "yikang-li" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ruotianluo" }, { "contents": "Got it! thank you very much!", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "zym1010" }, { "contents": "there isn’t a prefetch option, but you can write a custom Dataset that just loads the entire data on GPU and returns samples from in-memory. In that case you can just use 0 workers in your DataLoader", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Thanks. I want a prefetching option when dealing with dataset like ImageNet, and it should work with multi GPU. The approach you proposed would not work in that case.", "isAccepted": false, "likes": 1, "poster": "zym1010" }, { "contents": "if there is no prefetch, the GPUs will not be fully occupied due to waiting for data loading and preprocessing. is this understanding correct?", "isAccepted": false, "likes": 1, "poster": "zli" }, { "contents": "we already have prefetch (see the imagenet or dcgan examples), but we dont prefetch directly onto the GPU. We prefetch onto CPU, do data augmentation and then we put the mini-batch in CUDA pinned memory (on CPU) so that GPU transfer is very fast. Then we give data to network to transfer to GPU and train.", "isAccepted": false, "likes": 20, "poster": "smth" }, { "contents": "Using prefetch seems to decrease speed in my case. I can run ~100 examples/second using num_workers = 0. But only run ~50 examples/second using num_workers = 1. The more workers, the slower the speed. Is there any reasons behind this?", "isAccepted": false, "likes": 4, "poster": "bily" }, { "contents": "Same situation in my code. The dataloader can’t make full use of the cpus.", "isAccepted": false, "likes": 2, "poster": "Chong_Lv" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Raghu_Rajpal" }, { "contents": "Is data augmentation done in a single thread even when data loader is multi-threaded with num_workers > 2?", "isAccepted": false, "likes": 1, "poster": "FuriouslyCurious" }, { "contents": "Same situation. I have 12 cores, I find that the dataloader can not make full use of them. It can only make about 60% ability of them. I set the num_worker as 12 and my IO bound is enough. 
For training network like resnet, the data preprocess time cost more than 70% time in the whole process.", "isAccepted": false, "likes": null, "poster": "Hou_Qiqi" }, { "contents": "<SCODE>Epoch: [0][0/1124] Time 100.303 (100.303) Data 95.814 (95.814) Loss 2.1197 (2.1197) Acc@1 1.562 (1.562) Acc@2 51.562 (51.562)\nEpoch: [0][10/1124] Time 0.422 (9.584) Data 0.011 (8.753) Loss 2.2578 (2.1322) Acc@1 4.297 (4.865) Acc@2 6.250 (20.312)\nEpoch: [0][20/1124] Time 0.288 (5.191) Data 0.078 (4.606) Loss 2.3653 (2.1892) Acc@1 16.406 (7.310) Acc@2 63.281 (21.001)\nEpoch: [0][30/1124] Time 0.288 (3.604) Data 0.071 (3.134) Loss 2.2124 (2.1810) Acc@1 23.047 (10.144) Acc@2 46.094 (28.679)\nEpoch: [0][40/1124] Time 0.295 (2.795) Data 0.094 (2.382) Loss 2.2084 (2.1984) Acc@1 2.344 (11.719) Acc@2 3.906 (30.164)\nEpoch: [0][50/1124] Time 0.434 (2.312) Data 0.098 (1.927) Loss 2.0883 (2.1817) Acc@1 27.344 (11.060) Acc@2 50.391 (28.592)\nEpoch: [0][60/1124] Time 30.907 (2.497) Data 30.285 (2.119) Loss 2.0824 (2.1731) Acc@1 19.531 (12.897) Acc@2 41.406 (31.276)\nEpoch: [0][70/1124] Time 0.816 (2.326) Data 0.102 (1.933) Loss 2.0904 (2.1663) Acc@1 22.656 (14.277) Acc@2 43.359 (33.291)\nEpoch: [0][80/1124] Time 0.192 (2.094) Data 0.065 (1.707) Loss 2.1167 (2.1607) Acc@1 21.094 (15.355) Acc@2 42.188 (34.891)\nEpoch: [0][90/1124] Time 0.788 (1.904) Data 0.168 (1.528) Loss 2.2529 (2.1558) Acc@1 25.781 (16.303) Acc@2 46.484 (36.178)\nEpoch: [0][100/1124] Time 0.324 (1.749) Data 0.212 (1.385) Loss 2.1106 (2.1537) Acc@1 21.484 (16.901) Acc@2 48.438 (37.233)\nEpoch: [0][110/1124] Time 0.414 (1.633) Data 0.002 (1.267) Loss 2.0465 (2.1491) Acc@1 23.438 (17.325) Acc@2 48.438 (37.968)\nEpoch: [0][120/1124] Time 45.406 (1.906) Data 44.589 (1.537) Loss 2.2800 (2.1496) Acc@1 20.703 (17.859) Acc@2 42.578 (38.598)\nEpoch: [0][130/1124] Time 0.591 (1.824) Data 0.007 (1.454) Loss 2.0338 (2.1466) Acc@1 19.141 (18.079) Acc@2 45.703 (39.054)\nEpoch: [0][140/1124] Time 0.510 (1.765) Data 0.184 (1.397) Loss 2.1249 (2.1457) Acc@1 21.875 (18.426) Acc@2 49.609 (39.583)\nEpoch: [0][150/1124] Time 0.203 (1.670) Data 0.004 (1.308) Loss 2.1863 (2.1450) Acc@1 20.703 (18.755) Acc@2 42.188 (39.929)\nEpoch: [0][160/1124] Time 0.269 (1.589) Data 0.084 (1.231) Loss 2.2051 (2.1434) Acc@1 23.828 (19.031) Acc@2 48.047 (40.302)\nEpoch: [0][170/1124] Time 0.281 (1.520) Data 0.077 (1.163) Loss 2.1192 (2.1403) Acc@1 21.875 (19.301) Acc@2 44.531 (40.634)\nEpoch: [0][180/1124] Time 40.498 (1.675) Data 39.783 (1.321) Loss 2.0573 (2.1410) Acc@1 24.609 (19.572) Acc@2 47.656 (40.981)\n<ECODE>", "isAccepted": false, "likes": 7, "poster": "Weifeng" }, { "contents": "I am experiencing the same issue. No matter how many data loading workers I select, there’s always a delay after their batches have been processed.", "isAccepted": false, "likes": null, "poster": "RicCu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Joey_Nr" }, { "contents": "But dataloader loads batch and waits for model optimization (but it shouldn’t wait!)", "isAccepted": false, "likes": 4, "poster": "Red-Eyed" }, { "contents": "I’ve spent some time working on this problem over a variety of projects. I’ve cut and pasted some past thoughts, bullets, etc from previous discussions. My background involves architecting systems that move large volumes of data from network cards to storage and then back again on request, with processing between the steps. A very similar set of concerns. 
The two main constraints that usually dominate your PyTorch training performance and ability to saturate the shiny GPUs are your total CPU IPS (instructions per second) and your storage IOPS (I/O per second). You want the CPUs to be performing preprocessing, decompression, and copying – to get the data to the GPU. You don’t want them to be idling or busy-waiting for thread/process synchronization primitives, IO, etc. The easiest way to improve CPU utilization with the PyTorch is to use the worker process support built into Dataloader. The preprocessing that you do in using those workers should use as much native code and as little Python as possible. Use Numpy, PyTorch, OpenCV and other libraries with efficient vectorized routines that are written in C/C++. Looping through your data byte by byte in Python will kill your performance, massacre the memory allocator, etc. With most common use cases, the preprocessing is done well enough to not be an issue. Things tend to fall apart dramatically hitting the IOPS limit of your storage. Most simple PyTorch datasets tend to use media stored in individual files. Modern filesystems are good, but when you have thousands of small files and you’re trying to move GB/s of data, reading each file individually can saturate your IOPS long before you can ever maximize GPU or CPU utilization. Just opening a file by name with PIL can be an alarming number of disk seeks (profile it) Quick fix: buy an NVME SSD drive, or two. SATA SSD is not necessarily going to cut it, you can saturate them with small to medium image files + default loader/dataset setups feeding multiple GPUs. Magnetic drives are going to fall on their face If you are stuck with certain drives or max out the best, the next move requires more advanced caching, prefetching, on-disk format – move to an index/manifest + record based bulk data (like tfrecord or RecordIO) or an efficient memory-mapped/paged in-process DB I’ve leveraged LMDB successfully with PyTorch and a custom simplification of the Python LMDB module. My branch here (https://github.com/rwightman/py-lmdb/tree/rw 274). I didn’t document or explain what I did there or why, ask if curious. Beyond an optimal number (experiment!), throwing more worker processes at the IOPS barrier WILL NOT HELP, it’ll make it worse. You’ll have more processes trying to read files at the same time, and you’ll be increasing the shared memory consumption by significant amounts for additional queuing, thus increasing the paging load on the system and possibly taking you into thrashing territory that the system may never recover from Once you have saturated the IOPS of your storage or taxed the memory subsystem and entered into a thrashing situation, it won’t look like you’re doing a whole lot. There will be a lot of threads/processes (including kernel ones) basically not doing much besides waiting for IO, page faults, etc. Behaviour will usually be sporadic and bursty once you cross the line of what can be sustained by your system, much like network link utilization without flow control (queuing theory). Other pointers for a fast training setup with minimal work over the defaults: Employ some of the optimizations in NVIDIA’s examples (https://github.com/NVIDIA/apex/tree/master/examples/imagenet 760). NVIDIA’s fast_collate and prefetch loader w/ GPU normalization step do help a bit. I’ve seen big gains over torch.DataParallel using apex.DistributedDataParallel. 
Moving from ‘one main process + worker process + multiple-GPU with DataParallel’ to 'one process-per GPU with apex (and presumably torch) DistributedDataParallel has always improved performance for me. Remember to (down)scale your worker processes per training process accordingly. Higher GPU utilization and less waiting for synchronization usually results, the variance in batch times will reduce with the average time moving closer to the peak. Use SIMD fork of Pillow with default PyTorch transforms, or write your own OpenCV image processing and loading routines Don’t leave the dataloader pin_memory=‘True’ on by default in your code. There was a reason why PyTorch authors left it as False. I’ve run into many situations where True definitely does cause extremely negative paging/memory subsystem impact . Try both. An observation on the tfrecord/recordio chunking. For IO, even flash based, randomness is bad, sequential chunks are good. Hugely so when you have to move physical disk heads. The random/shuffled nature of training is thus worst case. When you see gains using record/chunked data, it’s largely due to the fact that you read data in sequential chunks. This comes with a penalty. You’ve constrained how random your training samples can be from epoch to epoch. With tfrecord, you usually shuffle once when you build the the tfrecord chunks. When training, you can only shuffle again within some number of queued record files (constrained by memory), not across the whole dataset. In many situations, this isn’t likely to cause problems, but you do have to be aware for each use case, tune your record size to balance performance with how many samples you’d like to shuffle at a time.", "isAccepted": false, "likes": 138, "poster": "rwightman" }, { "contents": "", "isAccepted": false, "likes": 9, "poster": "ptrblck" } ]
false
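A minimal DataLoader configuration touching the knobs discussed above — the right values are workload-dependent, so treat these as starting points to profile, not recommendations:
<SCODE>
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,)))
loader = DataLoader(dataset, batch_size=128, shuffle=True,
                    num_workers=4,    # worker processes prefetch/preprocess batches
                    pin_memory=True)  # page-locked host memory; benchmark False too

for images, labels in loader:
    if torch.cuda.is_available():
        images = images.cuda(non_blocking=True)  # overlaps the copy with compute
    break
<ECODE>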
Concatenate tensors as they are compued
null
[ { "contents": "I have a loop that has 4 iterations. In each iteration, I will compute a tensor of size [8, x, 128, 128], where x is a variable size. EDIT: Never mind I figured it out!", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "Hi, may I ask how did you figure that out? thanks!", "isAccepted": false, "likes": null, "poster": "Yilin_Liu" } ]
false
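The poster never shared the solution; a common pattern for this situation (an assumption, not necessarily theirs) is to collect the per-iteration tensors in a list and concatenate once along the variable dimension:
<SCODE>
import torch

chunks = []
for x in (2, 5, 1, 3):  # the variable sizes from the four iterations
    chunks.append(torch.randn(8, x, 128, 128))

result = torch.cat(chunks, dim=1)  # concatenate along the variable dimension
print(result.shape)                # torch.Size([8, 11, 128, 128])
<ECODE>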
Why RNN needs two biases?
null
[ { "contents": "", "isAccepted": false, "likes": 2, "poster": "Sunnydreamrain" }, { "contents": "You mean in general? For the same reason that it needs two sets of weights, one for the input and one from the previous state.", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Sunnydreamrain" }, { "contents": "As you pointed out it doesn’t really change the definition of the model, but this is what cuDNN does, so we’ve made our RNNs consistent with this behaviour.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "OKay. Good to know. Thanks.", "isAccepted": false, "likes": null, "poster": "Sunnydreamrain" } ]
false
How to split the Variables along Batch Dimension to several Variables?
null
[ { "contents": "Hi all, I have got a batch of features (N features). Now I want to pass message among different features. I found that if I manipulate the features along batch dim, then errors will be raised when backward(). The codes are like below: <SCODE>import torch\nfrom torch.autograd import Variable\nimport numpy as np\nimport torch.nn as nn\n\na = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)\nb = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)\n\nfor i in range(b.size()[0]):\n for j in range(a.size()[0]):\n b[i] = b[i] + a[j]\n \nb.backward(torch.Tensor(5, 3).normal_())\n<ECODE> When running, I got “inconsistent tensor size at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/TH/generic/THTensorCopy.c:40”. Is it possible to divid the original Variable into several parts and then process them?", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "<SCODE>import torch\nfrom torch.autograd import Variable\nimport numpy as np\nimport torch.nn as nn\n\na = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)\nb = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)\n\na_summed = a.sum(0) # sum along first dimension\nresult = b + a_summed.expand_as(b)\nresult.backward(torch.randn(5, 3))\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Yes, that’s the same. However, the ‘backward()’ will lead to an error? Could you tell me I to do that?", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Thank you for your timely reply. I apologize for misleading you with the example. So what should I do if I want to manipulate the feature along batch dim. In addition, what’s the meaning of leaf Variables?", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "I’m sorry but I don’t know what passing a message along a dimension is. I’d recommend you to look for out-of-place functions that can do this, but I can’t recommend any, as I don’t know what are you trying to achieve. Leaf Variables are Variables that are not results of any computation, so: <SCODE>x = Variable(torch.randn(5, 5)) # this is a leaf\ny = x + 2 # y is not a leaf\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I have got that! The problem has fixed. Just add an operation will be enough. <SCODE>import torch\nfrom torch.autograd import Variable\nimport numpy as np\nimport torch.nn as nn\n\n\nx = Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)\ny= Variable(torch.Tensor(3, 5).normal_(), requires_grad=True)\n\na = x + 0\nb = y + 0\n\nfor i in range(b.size()[0]):\n for j in range(a.size()[0]):\n b[i] = b[i] + a[j]\n \nb.backward(torch.Tensor(3, 5).normal_())\n<ECODE> Thank you very much for your help and patience.", "isAccepted": false, "likes": null, "poster": "yikang-li" }, { "contents": "In most cases (like in your example), you can avoid these assignments, and I’d recommend doing that. It will likely result in better performance. Nevertheless, we do support them, so if there’s no other way, you can still do that.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Got it. Thank you very much. I really appreciate your works for PyTorch and the CV community!", "isAccepted": false, "likes": 1, "poster": "yikang-li" }, { "contents": "I have been using PyTorch for a couple of weeks now, working mostly with complex RNNs models. 
In such situations, the natural way of accumulating state for me till now has been to create a Tensor and keep filling it with new state at each time step. But I haven’t been able to write code like that in PyTorch since in-place ops on leaf nodes are not supported. I have had to use lists with manual concatenation, and the indexing is messy. I can use the ugly hack shown above, wherein a dummy operation is performed to make a node non-leaf. But it would be great to have some implicit adjustment that allows for in-place ops on leaves. If you think this would be a useful feature to have (or conversely if it’s not something you are intentionally against), and if you can give some pointers as to how to make this happen, I can work on it and submit a PR. I hope I am not missing something trivial here.", "isAccepted": false, "likes": null, "poster": "pranav" }, { "contents": "Thanks a lot for the feedback.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "You can fill in a leaf Variable. Just don’t declare it as requiring grad - you don’t need grad w.r.t. the original content that will get overwritten.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
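apaszke's out-of-place version in current broadcasting style — expand_as is no longer needed, and the gradient passed to backward must match the result's shape:
<SCODE>
import torch

a = torch.randn(3, 5, requires_grad=True)
b = torch.randn(3, 5, requires_grad=True)

result = b + a.sum(0)               # (5,) broadcasts against every row of b
result.backward(torch.randn(3, 5))  # gradient has the result's shape
print(a.grad.shape, b.grad.shape)   # torch.Size([3, 5]) twice
<ECODE>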
How to do clips? Just clip the parameter, not clip the norm
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Sunnydreamrain" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
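The replies are empty in this dump; one common reading of the question — clipping parameter values in place, WGAN-style — looks like this (an assumption about what was suggested, not the thread's answer):
<SCODE>
import torch.nn as nn

model = nn.Linear(10, 10)
clip_value = 0.01
for p in model.parameters():
    p.data.clamp_(-clip_value, clip_value)  # clip the parameters, not the grad norm
<ECODE>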
Source compile error related to half precision?
null
[ { "contents": "<SCODE>gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/homes/3/kimjook/pytorch -I/homes/3/kimjook/pytorch/torch/csrc -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/TH -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THNN -I/u/drspeech/opt/anaconda2/lib/python2.7/site-packages/numpy/core/include -I/usr/local/cuda/include -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THCUNN -I/u/drspeech/opt/anaconda2/include/python2.7 -c torch/csrc/cuda/Storage.cpp -o build/temp.linux-x86_64-2.7/torch/csrc/cuda/Storage.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda/lib64\ncc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default]\ntorch/csrc/nn/THNN_generic.cpp: In function ‘void torch::nn::Abs_updateOutput(thpp::Tensor*, thpp::Tensor*)’:\ntorch/csrc/nn/THNN_generic.cpp:45:14: error: ‘THCudaHalfTensor’ was not declared in this scope\n (THCudaHalfTensor*)input->cdata(),\n ^\n\n\nIn file included from /homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP/tensors/../storages/THCStorage.hpp:11:0,\n from /homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP/tensors/THCTensor.hpp:22,\n from torch/csrc/DynamicTypes.cpp:11:\n/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP/tensors/../storages/../TraitsCuda.hpp:9:20: error: ‘half’ was not declared in this scope\n struct type_traits<half> {\n ^\n<ECODE> It seems that the compiler cannot correctly recognize “half” precision related data structures or classes. How would I reslove this problem? (I don’t need half precision operations so if possible it’s fine for me to disable it.) Thanks!", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The full log is as follows. Thanks.", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "are you using CUDA 8.0.27 by any chance, instead of the latest 8.0.44?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I am using CUDA 8.0.61. Thanks.", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "I have no idea what’s wrong. You have a fresh version of CUDA, so half should be automatically enabled everywhere.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Tried it, but no differences.", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Oh. That’s it. My system’s /usr/local/cuda is linked to cuda-6.5 since I am sharing the system with others. Would there be any other way I can explictly specify the directory path? Thanks!", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Type Variable doesn’t implement stateless method cumprod
null
[ { "contents": "<SCODE>import torch\nfrom torch.autograd import Variable\ninputs = Variable(torch.randn(4,5), requires_grad = True)\n#inputs = torch.randn(4,5)\ncum_out = torch.cumprod(inputs, dim =0)\n\nprint(cum_out)\n<ECODE> <SCODE>RuntimeError Traceback (most recent call last)\n<ipython-input-6-427be1ef1fd7> in <module>()\n 4 #inputs = torch.randn(4,5)\n 5 \n----> 6 cum_out = torch.cumprod(inputs, dim =0)\n 7 \n 8 print(cum_out)\n\nRuntimeError: Type Variable doesn't implement stateless method cumprod\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Ja-Keoung_Koo" } ]
false
Nan in backward() if prod() is used
null
[ { "contents": "In torch.autograd._functions.reduce When input entry is zero, this method returns ‘nan’ gradient. By replacing backward() function of Prod() class with <SCODE> if self.dim is None:\n input, = self.saved_tensors\n zero_loc = (input==0).nonzero()\n if zero_loc.dim() == 0:\n grad_input = grad_output.new(self.input_size).fill_(self.result)\n return grad_input.div(input)\n elif zero_loc.size()[0] > 1:\n return grad_output.new(self.input_size).fill_(0)\n else:\n grad_input = grad_output.new(self.input_size).fill_(0)\n indexing_tuple = tuple(zero_loc[0].numpy())\n input_copy = grad_output.new(self.input_size).fill_(0)\n input_copy.copy_(input)\n input_copy[indexing_tuple] = 1.0\n grad_input[indexing_tuple] = input_copy.prod()\n return grad_input\n else:\n input, output = self.saved_tensors\n input_copy = grad_output.new(self.input_size).fill_(0)\n input_copy.copy_(input)\n input_copy[input == 0] = 1.0\n\n repeats = [1 for _ in self.input_size]\n repeats[self.dim] = self.input_size[self.dim]\n output_zero_cnt = (input == 0).sum(self.dim)\n output_one_zero_ind = (output_zero_cnt == 1).nonzero()\n grad_input = output.mul(grad_output)\n grad_input[output_zero_cnt > 0] = 0.0\n grad_input = grad_input.repeat(*repeats).div_(input_copy)\n if output_one_zero_ind.dim() == 0:\n return grad_input\n\n for i in range(output_one_zero_ind.size()[0]):\n if output_one_zero_ind.is_cuda:\n output_one_zero_vec_ind = tuple(output_one_zero_ind[i].cpu().numpy())\n else:\n output_one_zero_vec_ind = tuple(output_one_zero_ind[i].numpy())\n output_one_zero_vec_indexing = output_one_zero_vec_ind[:self.dim] + (slice(0, None),) + output_one_zero_vec_ind[self.dim+1:]\n output_one_zero_vec = input.new(self.input_size[self.dim]).fill_(0)\n output_one_zero_vec.copy_(input[output_one_zero_vec_indexing])\n output_one_zero_vec[(output_one_zero_vec==0).nonzero()[0, 0]] = 1.0\n grad_input[output_one_zero_vec_ind] = output_one_zero_vec.prod() if output_one_zero_vec.numel()>1 else 1.0\n return grad_input\n<ECODE> It would be better this modification is reflected in the next release.", "isAccepted": false, "likes": null, "poster": "OCY" }, { "contents": "Thanks, we’ll improve the formula.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Previous code only worked for 2 dim tensor. I modified it to make it work for n dim tensor.", "isAccepted": false, "likes": null, "poster": "OCY" }, { "contents": "This should be solved.", "isAccepted": false, "likes": 1, "poster": "OCY" } ]
false
Graph not resetting between backward passes?
null
[ { "contents": "I guess I’m missing something obvious here, but why running the model again doesn’t refill the buffers? I was going to implement training in a similar loop. Code to reproduce: <SCODE>import torch as th\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\n# borrowed from `word_language_model`\nclass RNNModel(nn.Module):\n \"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n\n def __init__(self, rnn_type, ntoken, ninp, nhid, nlayers):\n super(RNNModel, self).__init__()\n self.encoder = nn.Embedding(ntoken, ninp)\n self.rnn = getattr(nn, rnn_type)(ninp, nhid, nlayers, bias=False)\n self.decoder = nn.Linear(nhid, ntoken)\n\n self.init_weights()\n\n self.rnn_type = rnn_type\n self.nhid = nhid\n self.nlayers = nlayers\n\n def init_weights(self):\n initrange = 0.1\n self.encoder.weight.data.uniform_(-initrange, initrange)\n self.decoder.bias.data.fill_(0)\n self.decoder.weight.data.uniform_(-initrange, initrange)\n\n def forward(self, input, hidden):\n emb = self.encoder(input)\n output, hidden = self.rnn(emb, hidden)\n decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))\n return decoded.view(output.size(0), output.size(1), decoded.size(1)), hidden\n\n def init_hidden(self, bsz):\n weight = next(self.parameters()).data\n if self.rnn_type == 'LSTM':\n return (Variable(weight.new(self.nlayers, bsz, self.nhid).zero_()),\n Variable(weight.new(self.nlayers, bsz, self.nhid).zero_()))\n else:\n return Variable(weight.new(self.nlayers, bsz, self.nhid).zero_())\n\nvocab_size = 256\nbatch_size = 64\n\nmodel = RNNModel('GRU', vocab_size, 100, 100, 3)\nmodel.train()\n\noptimizer = th.optim.Adam(model.parameters(), lr=1e-2)\ncriterion = nn.CrossEntropyLoss()\n\nx = Variable(th.LongTensor(50, 64))\nx[:] = 1\n\nstate = model.init_hidden(batch_size)\n\nprint('first pass')\noptimizer.zero_grad()\nlogits, state = model(x, state)\nloss = criterion(logits.view(-1, vocab_size), x.view(-1))\nloss.backward()\noptimizer.step()\n\nprint('second pass')\noptimizer.zero_grad()\nlogits, state = model(x, state)\nloss = criterion(logits.view(-1, vocab_size), x.view(-1))\nloss.backward() # <-- error occurs here\noptimizer.step()<ECODE>", "isAccepted": false, "likes": null, "poster": "ematvey" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ematvey" } ]
false
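The usual fix for this pattern, following the word_language_model example, is to detach the hidden state between batches so each backward() only sees the current graph. A minimal sketch on the current API (on the old Variable API the detach was spelled Variable(h.data)):
<SCODE>def repackage_hidden(h):
    # Cut the hidden state loose from the previous batch's graph so the
    # next backward() does not reach into already-freed buffers.
    if isinstance(h, torch.Tensor):
        return h.detach()
    return tuple(repackage_hidden(v) for v in h)

state = repackage_hidden(state)  # call this between the two passes
<ECODE>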
How to transfer an existing tensor to another device based on other tensor
null
[ { "contents": "For example <SCODE>tensor_a is currently in cpu\nI wanna transfer tensor_a to cpu or gpu based on \ntensor_b's residential state.\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "<SCODE>tensor_a = tensor_a.cuda(b.get_device()) if b.is_cuda else tensor_a\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "good to know that, thanks!", "isAccepted": false, "likes": null, "poster": "ypxie" } ]
false
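On later releases the same one-liner can be written device-agnostically with Tensor.to; a minimal sketch, assuming a CUDA device may or may not be present:
<SCODE>import torch

b = torch.randn(4, device='cuda:0' if torch.cuda.is_available() else 'cpu')
tensor_a = torch.randn(4)          # starts on the CPU
tensor_a = tensor_a.to(b.device)   # now lives wherever b lives
<ECODE>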
Loading videos from folders as a dataset object
vision
[ { "contents": "Now I have that <SCODE>next(DataLoaderIter)\n<ECODE> will call <SCODE>DataLoaderIter._next_indeces()\n<ECODE> <SCODE>list(range(t * batch_size, (t + 1) * batch_size))\n<ECODE> <SCODE>dataset[i] for i in indeces\n<ECODE> Can anyone provide feedback on this strategy? Does it sound reasonable, or am I missing something? <SCODE> 0 5 10 15 20 25\n0 >xxxx|xxxxx|xxxxx|xxxxx|xxxxx|xxx\n1 xxxxx|xxxxx|xxx>x|xxxxx|xxxxx|xxx\n2 xxxxx|xx>xx|xxxxx|xxxxx|xxxxx|xxx\n3 xxxxx|xxxxx|xxxxx|xxxxx|xxxx>|xxx\n4 xxxxx|xxxxx|xxxxx|x>xxx|xxxx\n<ECODE> <SCODE> 0 5 10 15 20 25\n0 0oooo|ooooo|ooooo|ooooo|ooooo|ooo\n1 ooooo|ooooo|ooo>x|xxxxx|xxxxx|xxx\n2 xxxxx|xx>xx|xxxxx|xxxxx|xxxxx|xxx\n3 xxxxx|xxxxx|xxxxx|xxxxx|xxxx>|xxx\n4 xxxxx|xxxxx|xxxxx|x>xxx|xxxx0|ooo\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "raivokoot" } ]
false
How can I use legacy torch module?
null
[ { "contents": "Because I need to employ nn.Replicate from torch, I tried the below <SCODE>import torch\nimport torch.nn.functional as F\nfrom torch.legacy import nn as nn\nfrom torch.autograd import Variable\n\ninput = Variable(torch.rand(32,128,1,1))\nreplicate = nn.Replicate(nf = 32, dim = 3)\noutput = replicate(input)\n<ECODE> But, I got the error message TypeError: ‘Replicate’ object is not callable Can you tell me how I can use nn.Replicate ?", "isAccepted": false, "likes": 1, "poster": "Seungyoung_Park" }, { "contents": "torch.expand can do the same thing, however the arguments are a little different from nn.Replicate.", "isAccepted": false, "likes": null, "poster": "ruotianluo" }, { "contents": "Thanks for your reply.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
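Following the suggestion above, a minimal sketch of emulating the replication with expand, which broadcasts a singleton dimension as a view without copying; the exact correspondence to nn.Replicate's nf/dim arguments depends on where the new axis should go:
<SCODE>import torch

x = torch.rand(32, 128, 1, 1)
y = x.expand(32, 128, 1, 32)                  # broadcast dim 3 from 1 to 32
z = x.unsqueeze(2).expand(32, 128, 32, 1, 1)  # or insert a fresh axis first
<ECODE>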
Improve error message: .numpy() not supported on cuda
null
[ { "contents": "I was struggling to feed my predictions into an sklearn function <SCODE>M_ = M.data.numpy()\n<ECODE> I got error <SCODE>RuntimeError: numpy conversion for LongTensor is not supported\n<ECODE> <SCODE>M_ = M.data.cpu().numpy()\n<ECODE> I suggest to improve this error message. Maybe something like <SCODE>RuntimeError: numpy conversion for LongTensor is not supported on CUDA\n<ECODE> That way, novices like me will know what to do", "isAccepted": false, "likes": 1, "poster": "robromijnders" }, { "contents": "thanks, it’s a good suggestion. i’ll work on this.", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
Repeat method for Variable
null
[ { "contents": "<SCODE>def repeat(self, repeats):\n return Repeat(repeats)(self)<ECODE>", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "My mistake, I was not looking at the last version…", "isAccepted": false, "likes": null, "poster": "alexis-jacq" } ]
false
Error when using CUDNN
null
[ { "contents": "<SCODE> export LD_LIBRARY_PATH=`pwd`:$LD_LIBRARY_PATH\n<ECODE> How can I find and solve the problem. <SCODE>Traceback (most recent call last):\n File \"main.py\", line 157, in <module>\n train()\n File \"main.py\", line 131, in train\n output, hidden = model(data, hidden)\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\n File \"/data/disk1/workbench/learn/pytorch/examples/word_language_model/model.py\", line 28, in forward\n output, hidden = self.rnn(emb, hidden)\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/rnn.py\", line 81, in forward\n return func(input, self.all_weights, hx)\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py\", line 235, in forward\n return func(input, *fargs, **fkwargs)\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/function.py\", line 201, in _do_forward\n flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/function.py\", line 223, in forward\n result = self.forward_extended(*nested_tensors)\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py\", line 180, in forward_extended\n cudnn.rnn.forward(self, input, hx, weight, output, hy)\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/rnn.py\", line 184, in forward\n handle = cudnn.get_handle()\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/__init__.py\", line 337, in get_handle\n handle = CuDNNHandle()\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/__init__.py\", line 128, in __init__\n check_error(lib.cudnnCreate(ctypes.byref(ptr)))\n File \"/data/disk1/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/__init__.py\", line 324, in check_error\n raise CuDNNError(status)\ntorch.backends.cudnn.CuDNNError: 6: CUDNN_STATUS_ARCH_MISMATCH\nException ctypes.ArgumentError: \"argument 1: <type 'exceptions.TypeError'>: Don't know how to convert parameter 1\" in <bound method CuDNNHandle.__del__ of <torch.backends.cudnn.CuDNNHandle instance at 0x7fa7707dd5f0>> ignored\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "maplewizard" }, { "contents": "are you sure you have the correct cudnn version? 
it needs to be R5 or R6", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "<SCODE>nvcc --version\nnvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2015 NVIDIA Corporation\nBuilt on Tue_Aug_11_14:27:32_CDT_2015\nCuda compilation tools, release 7.5, V7.5.17\n<ECODE> How can I find more details about the cudnn?", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>torch.backends.cudnn.version()\n---------------------------------------------------------------------------\nRuntimeError Traceback (most recent call last)\n<ipython-input-3-de1bb2d5285f> in <module>()\n----> 1 torch.backends.cudnn.version()\n\n/global-hadoop/home/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/backends/cudnn/__init__.pyc in version()\n 73 def version():\n 74 if not lib:\n---> 75 raise RuntimeError(\"cuDNN not initialized\")\n 76 if len(__cudnn_version) == 0:\n 77 __cudnn_version.append(lib.cudnnGetVersion())\n\nRuntimeError: cuDNN not initialized\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "maplewizard" }, { "contents": "Right, sorry. Do this please: <SCODE>print(torch.backends.cudnn.is_acceptable(torch.cuda.FloatTensor(1)))\nprint(torch.backends.cudnn.version())\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "<SCODE>In [1]: import torch\n\nIn [2]: print(torch.backends.cudnn.is_acceptable(torch.cuda.FloatTensor(1)))\n ...: print(torch.backends.cudnn.version())\n ...:\nTrue\n5005\n<ECODE> Now the error code change to: <SCODE>torch.backends.cudnn.CuDNNError: 6: CUDNN_STATUS_ARCH_MISMATCH\nException ctypes.ArgumentError: \"argument 1: <type 'exceptions.TypeError'>: Don't know how to convert parameter 1\" in <bound method CuDNNHandle.__del__ of <torch.backends.cudnn.CuDNNHandle instance at 0x7f4b9099a320>> ignored\n<ECODE> I just found that my gpu is Tesla M2075, I searched a similar issue in caffe, saying that cudnn require higher version than pure cuda. Is it not supported in Tesla? Can I run the sample code with only cuda instead of cudnn?", "isAccepted": false, "likes": 1, "poster": "maplewizard" }, { "contents": "M2075 is Fermi architecture card, cudnn is not supported on it. You can disable cudnn by setting torch.backend.cudnn.enabled=False. 
But you can expect only very modest speed-ups with such an old card.", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "<SCODE>THCudaCheck FAIL file=/data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487343590888/work/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu line=246 error=8 : invalid device function\nTraceback (most recent call last):\n File \"main.py\", line 157, in <module>\n train()\n File \"main.py\", line 131, in train\n output, hidden = model(data, hidden)\n File \"/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\n File \"/data/disk1/ckyn/workbench/learn/pytorch/examples/word_language_model/model.py\", line 28, in forward\n output, hidden = self.rnn(emb, hidden)\n File \"/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n result = self.forward(*input, **kwargs)\nons/rnn.py\", line 138, in forward\n nexth, output = func(input, hidden, weight)\n File \"/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py\", line 67, in forward\n hy, output = inner(input, hidden[l], weight[l])\n File \"/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py\", line 96, in forward\n hidden = inner(input[i], hidden, *weight)\n File \"/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/rnn.py\", line 22, in LSTMCell\n gates = F.linear(input, w_ih, b_ih) + F.linear(hx, w_hh, b_hh)\n File \"/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py\", line 752, in __add__\n return self.add(other)\n File \"/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py\", line 292, in add\n return self._add(other, False)\n File \"/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/variable.py\", line 286, in _add\n return Add(inplace)(self, other)\n File \"/data/disk1/ckyn/ProgramFiles/anaconda2/lib/python2.7/site-packages/torch/autograd/_functions/basic_ops.py\", line 13, in forward\n return a.add(b)\nRuntimeError: cuda runtime error (8) : invalid device function at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487343590888/work/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:246\n<ECODE> Is there any idea about this?", "isAccepted": false, "likes": null, "poster": "maplewizard" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Chun_Li" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The problem is not ctypes (it looks in ld cache) and not ld cache per se. The problem is that ld cache typically contains libname.so.MAJOR (verify this with ldconfig -p), and for cudnn pytorch tries to load libcudnn.so.MAJOR.MINOR.PATCH. Try adding libcudnn.so.MAJOR.MINOR.PATCH to your ld cache (ldconfig -l may be?)", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Chun_Li" }, { "contents": "<SCODE>UserWarning: PyTorch was compiled without cuDNN support. To use cuDNN, rebuild PyTorch making sure the library is visible to the build system.\n \"PyTorch was compiled without cuDNN support. 
To use cuDNN, rebuild \"\n<ECODE> <SCODE>export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64/cuDNNv5:$LD_LIBRARY_PATH\n<ECODE> I build the pytorch from source code. <SCODE>cd pytorch-root/ & python setup.py install\n<ECODE>", "isAccepted": false, "likes": null, "poster": "EthanZhangYi" }, { "contents": "Hello , I meet the same problem , can you tell me how to handle this?", "isAccepted": false, "likes": null, "poster": "RayAnteku" } ]
false
Can a thread be assigned to each GPU?
null
[ { "contents": "In Torch, we can assign a thread to each GPU as like below. <SCODE>net = nn.DataParallelTable(1, true, true):add(net, {1, 2}):threads(function() require 'cudnn' cudnn.benchmark = true cudnn.fastest = true end)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "unlike LuaTorch, we dont dispatch DataParallel via python threads in PyTorch. No, we plan to (and do) dispatch multi-GPU differently at high performance, without the user needing to do anything. This is irrelevant to the user, we are working on improving the internals and multi-gpu perf. Depending on how many parameters you have in your model, this is possible.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thanks for your reply.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" } ]
false
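For reference, a minimal sketch of the multi-GPU dispatch mentioned above, assuming two visible GPUs; nn.DataParallel handles the scatter/gather internally, so no user-managed threads are needed:
<SCODE>import torch.nn as nn

net = nn.DataParallel(net, device_ids=[0, 1]).cuda()
output = net(input)  # input is split along dim 0 across the GPUs,
                     # per-GPU outputs are gathered back on device 0
<ECODE>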
Autogradable image resize
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "meijieru" }, { "contents": "you can look at these Upsampling* modules to resize up: To resize down, you can use AvgPool2d", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "meijieru" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "thnkim" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Has this been implemented yet for float scale factors? Greatly appreciated!", "isAccepted": false, "likes": null, "poster": "jwin" }, { "contents": "Jwin, I had the same problem and I solved using a sampling grid: <SCODE>def downsampling(x, size=None, scale_factor=None, mode='nearest'):\n\t# define size if user has specified scale_factor\n\tif size is None: size = (int(scale_factor*x.size(2)), int(scale_factor*x.size(3)))\n\t# create coordinates\n\th = torch.arange(0,size[0]) / (size[0]-1) * 2 - 1\n\tw = torch.arange(0,size[1]) / (size[1]-1) * 2 - 1\n\t# create grid\n\tgrid = torch.zeros(size[0],size[1],2)\n\tgrid[:,:,0] = w.unsqueeze(0).repeat(size[0],1)\n\tgrid[:,:,1] = h.unsqueeze(0).repeat(size[1],1).transpose(0,1)\n\t# expand to match batch size\n\tgrid = grid.unsqueeze(0).repeat(x.size(0),1,1,1)\n\tif x.is_cuda: grid = grid.cuda()\n\t# do sampling\n\treturn F.grid_sample(x, grid, mode=mode)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "flavio" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ndronen" }, { "contents": "Does anyone have a solution for autogradable general nearest neighbours upsampling?", "isAccepted": false, "likes": 1, "poster": "mariosfourn" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ndronen" }, { "contents": "<SCODE>x = torch.randn(1, 3, 24, 24)\nup = nn.Upsample(size=47, mode='bilinear')\ny = up(x)\nprint(y.shape)\n> torch.Size([1, 3, 47, 47])\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "Is there any plan to support ‘nearest’ ? As in many dense estimation task, a edge preserving property is important, using bilinear downsample will hurt that property…", "isAccepted": false, "likes": 1, "poster": "chenchr" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "solsol" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "Got it. Thanks a lot", "isAccepted": false, "likes": null, "poster": "solsol" } ]
false
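On later releases the Upsampling modules were unified under torch.nn.functional.interpolate, which is autogradable, accepts float scale factors, and supports 'nearest' for the edge-preserving case raised above; a minimal sketch:
<SCODE>import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 24, 24, requires_grad=True)
up = F.interpolate(x, scale_factor=2.5, mode='bilinear', align_corners=False)
down = F.interpolate(x, scale_factor=0.5, mode='nearest')  # keeps hard edges
print(up.shape, down.shape)  # (1, 3, 60, 60) and (1, 3, 12, 12)
<ECODE>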
Adaptive threshold possible?
null
[ { "contents": "I am trying to threshold Variables using the adaptive value as below. <SCODE>import torch\nfrom torch.autograd import Variable\nimport torch.nn as nn\n\nx = Variable(torch.rand(10))\nmaxx = torch.max(x)\nm = nn.Threshold(0.6*maxx,0)\ny = m(x)\n<ECODE> Thanks in advance for your help.", "isAccepted": false, "likes": 1, "poster": "Seungyoung_Park" }, { "contents": "Hi, I think you want to use directly the nn functional version of threshold: <SCODE>import torch\nfrom torch.autograd import Variable\nimport torch.nn as nn\n\nx = Variable(torch.rand(10))\nmaxx = torch.max(x)\ny = nn.functional.threshold(x, 0.6*maxx.data[0], 0)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "Thanks for your reply. It works as I want.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "When I use this snippet in my code, I got an error as <SCODE>RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation\n<ECODE> My test code is as follows. <SCODE>import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.autograd import Variable\n\nclass AdaReLU(nn.Module):\ndef __init__(self):\n super(AdaReLU, self).__init__()\n self.relu = nn.ReLU()\n \ndef forward(self, x):\n y = x\n for i in range(x.size(0)):\n for ii in range(x.size(1)):\n maxx = torch.max(x[i,ii,0,:])\n y[i,ii,0,:] = F.threshold(x[i,ii,0,:], config.max_th*maxx.data[0], 0)\n\n x = self.relu(y)\n return x\n\nclass TestNet(nn.Module):\ndef __init__(self):\n super(TestNet, self).__init__()\n self.conv_1 = nn.Conv2d(len(config.alphabet), config.basic_frames, kernel_size = (1,3), stride = (1,2), padding = (0,0), bias = False)\n self.batchnorm_1 = nn.BatchNorm2d(config.basic_frames)\n self.conv_2 = nn.ConvTranspose2d(config.basic_frames, len(config.alphabet), kernel_size = (1,3), stride = (1,2), padding = (0,0), bias = False)\n self.batchnorm_2 = nn.BatchNorm2d(len(config.alphabet))\n self.adarelu = AdaReLU()\n\ndef forward(self, x):\n x = self.adarelu(self.batchnorm_1(self.conv_1(x)))\n x = F.sigmoid(self.batchnorm_2(self.conv_2(x)))\n return x\n\nx = Variable(torch.rand(32,68,1,2283))\nmodel = TestNet()\noptimizer = optim.SGD(model.parameters(), lr =0.01, momentum=0.9)\noptimizer.zero_grad()\noutput = model(data)\nloss = F.binary_cross_entropy(output, data)\nloss.backward()\noptimizer.step()<ECODE>", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "Hi, I think you can replace the loop by: <SCODE>y = x * (1 - x.lt(maxx.expand_as(x))).float()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "The whole function can be simplified to this: <SCODE>def AdaReLU(x): \n max_val, _ = torch.max(x, 3) \n mask = x < max_val.expand_as(x)\n mask.detach_() \n x = x.clone() \n x[mask] = 0 \n return F.relu(x) \n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" } ]
false
How to free the memory used by processed layers in the testing/eval phase?
null
[ { "contents": "Hi. In the testing/eval phase, after one layer is processed is there anyway to free its memory? Or can the output tensors of all layers share the same memory, since the outputs are not needed for calculating the gradients. Thanks.", "isAccepted": false, "likes": null, "poster": "dhm" }, { "contents": "Training is already quite memory efficient. If it doesn’t need to hold onto some input/output because it can compute the gradient in another way, it won’t do it.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks. It is so convenient to use!", "isAccepted": false, "likes": 1, "poster": "dhm" } ]
false
Interop between pycuda and pytorch
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "dbWkDGPH" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Multi-dimensional Labels in pytorch
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I have some co-ordinates for bounding boxes in the train dataset and I want the net to train on those.", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "What kind of loss function do you want to use then?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "Sooo you’re predicting a 0 or 1 for each pixel? Why does your output have 3 channels then?", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "L1 and L2 losses should be easy to write yourself using only autograd operations. It’s just a subtraction + abs or pow.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
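A minimal sketch of the suggestion above, writing the L1 and L2 losses directly from autograd ops; the shapes here are illustrative:
<SCODE>import torch

pred = torch.randn(8, 3, 64, 64, requires_grad=True)
target = torch.rand(8, 3, 64, 64)

l2 = (pred - target).pow(2).mean()  # mean squared error
l1 = (pred - target).abs().mean()   # mean absolute error
l1.backward()
<ECODE>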
Cuda runtime error (2) : out of memory
null
[ { "contents": "I would like to know how can I solve my problem. What is the source of problem? My Code:", "isAccepted": false, "likes": null, "poster": "mderakhshani" }, { "contents": "The problem is exactly what the error says, you ran out of memory on the GPU. You can try reducing the batch size or image sizes.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "To complement apaszke’s answer, in case you are using Jupyter and you ‘run’ multiple times the cell which creates the network, put always a delete net on that cell, otherwise you might be creating multiple neural networks which will cause similar problems. If not, just try to find the maximum batch size which works for your hardware.", "isAccepted": false, "likes": 1, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "afrozalm" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "afrozalm" }, { "contents": "I face the same problem with you and I find the problem results from the upsampling layer. When I use nvidia-smi to monitor my GPU memory usage, I find that during some time, the memory requirement for transpose2d for upsampling will be doubled and it tells me out of memory… Maybe consider to reduce the batch size.", "isAccepted": false, "likes": 1, "poster": "PeterXiaoGuo" } ]
false
Tip: using keras compatible tensor dot product and broadcasting ops
null
[ { "contents": "I find theano’s dot() and broadcasting of basic operators very convenient (ditto for keras, which is designed to be fully compatible with the theano API for these functions). It saves a lot of unsqueeze()ing and expand_as()ing and makes life a lot easier IMO. It also makes it easier to port code from theano and keras to pytorch. In summary, dot() handles pretty much any sized tensor arguments and does a dot product of the last axis of the first argument with the 2nd last axis of the 2nd argument. And for +*-/ it broadcasts any empty leading or unit axes as needed to make the arguments compatible. In case anyone is interested, here is a pytorch version of theano’s dot() and broadcasted operators - hope some folks find it useful! <SCODE>def align(x, y, start_dim=2):\n xd, yd = x.dim(), y.dim()\n if xd > yd:\n for i in range(xd - yd): y = y.unsqueeze(0)\n elif yd > xd:\n for i in range(yd - xd): x = x.unsqueeze(0)\n xs = list(x.size())\n ys = list(y.size())\n nd = len(ys)\n for i in range(start_dim, nd):\n td = nd-i-1\n if ys[td]==1: ys[td] = xs[td]\n elif xs[td]==1: xs[td] = ys[td]\n return x.expand(*xs), y.expand(*ys)\n\ndef dot(x, y):\n x, y = align(x, y)\n assert(1<y.dim()<5)\n if y.dim() == 2:\n return x.mm(y)\n elif y.dim() == 3: \n return x.bmm(y)\n else:\n xs,ys = x.size(), y.size()\n res = torch.zeros(*(xs[:-1] + (ys[-1],)))\n for i in range(xs[0]): res[i] = x[i].bmm(y[i])\n return res\n\ndef aligned_op(x,y,f):\n x, y = align(x,y,0)\n return f(x, y)\n\ndef add(x, y): return aligned_op(x, y, operator.add)\ndef sub(x, y): return aligned_op(x, y, operator.sub)\ndef mul(x, y): return aligned_op(x, y, operator.mul)\ndef div(x, y): return aligned_op(x, y, operator.truediv)\n<ECODE> And here are some tests / examples: <SCODE>def Arr(*sz): return torch.randn(sz)\n\nm = Arr(3, 2)\nv = Arr(2)\nb = Arr(4,3,2)\nt = Arr(5,4,3,2)\n\nmt = m.transpose(0,1)\nbt = b.transpose(1,2)\ntt = t.transpose(2,3)\n\ndef check_eq(x,y): assert(torch.equal(x,y))\n\ncheck_eq(dot(m,mt),m.mm(mt))\ncheck_eq(dot(v,mt), v.unsqueeze(0).mm(mt))\ncheck_eq(dot(b,bt),b.bmm(bt))\ncheck_eq(dot(b,mt),b.bmm(mt.unsqueeze(0).expand_as(bt)))\n\nexp = t.view(-1,3,2).bmm(tt.contiguous().view(-1,2,3)).view(5,4,3,3)\ncheck_eq(dot(t,tt),exp)\n\ncheck_eq(add(m,v),m+v.unsqueeze(0).expand_as(m))\ncheck_eq(add(v,m),m+v.unsqueeze(0).expand_as(m))\ncheck_eq(add(m,t),t+m.unsqueeze(0).unsqueeze(0).expand_as(t))\ncheck_eq(sub(m,v),m-v.unsqueeze(0).expand_as(m))\ncheck_eq(mul(m,v),m*v.unsqueeze(0).expand_as(m))\ncheck_eq(div(m,v),m/v.unsqueeze(0).expand_as(m))\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "jphoward" }, { "contents": "I’ve made some minor changes to this code to make it a bit faster - here’s the updated version (tests from above will still work fine): <SCODE>def unit_prefix(x, n=1):\n for i in range(n): x = x.unsqueeze(0)\n return x\n\ndef align(x, y, start_dim=2):\n xd, yd = x.dim(), y.dim()\n if xd > yd: y = unit_prefix(y, xd - yd)\n elif yd > xd: x = unit_prefix(x, yd - xd)\n\n xs, ys = list(x.size()), list(y.size())\n nd = len(ys)\n for i in range(start_dim, nd):\n td = nd-i-1\n if ys[td]==1: ys[td] = xs[td]\n elif xs[td]==1: xs[td] = ys[td]\n return x.expand(*xs), y.expand(*ys)\n\ndef dot(x, y):\n assert(1<y.dim()<5)\n x, y = align(x, y)\n \n if y.dim() == 2: return x.mm(y)\n elif y.dim() == 3: return x.bmm(y)\n else:\n xs,ys = x.size(), y.size()\n res = torch.zeros(*(xs[:-1] + (ys[-1],)))\n for i in range(xs[0]): res[i].baddbmm_(x[i], (y[i]))\n return res\n<ECODE>", "isAccepted": false, "likes": 9, 
"poster": "jphoward" }, { "contents": "Why not merge it in pytorch?", "isAccepted": false, "likes": null, "poster": "heihei" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jphoward" } ]
false
Error while running CNN for grayscale image
null
[ { "contents": "My net: <SCODE>class convNet(nn.Module):\n #constructor\n def __init__(self):\n super(convNet, self).__init__()\n #defining layers in convnet\n #input size=32x32x3\n self.conv1 = nn.Conv2d(1, 96, kernel_size=3,stride=1,padding=1)\n #conv1: 28x28x1 -> 28x28x96\n self.conv2 = nn.Conv2d(96, 256, kernel_size=3,stride=1,padding=1)\n #conv2: 28x28x96 -> 28x28x256\n self.conv3 = nn.Conv2d(256,384,kernel_size=3,stride=1,padding=1)\n #conv3: 28x28x256 -> 28x28x384\n #pooling: 28x28x384 -> 14x14x384\n self.conv4 = nn.Conv2d(384,512,kernel_size=3,stride=1,padding=1)\n #conv4: 14x14x384 -> 14x14x512\n #pooling: 14x14x512 -> 7x7x512 \n self.fc1 = nn.Linear(7*7*512, 300)\n self.fc2 = nn.Linear(300, 100)\n self.fc3 = nn.Linear(100, 10)\n \ndef forward(self, x):\n conv1_relu = nnFunctions.relu(self.conv1(x))\n conv2_relu = nnFunctions.relu(self.conv2(conv1_relu))\n conv3_relu =nnFunctions.max_pool2d(nnFunctions.relu(self.conv3(conv2_relu)),2)\n conv4_relu =nnFunctions.max_pool2d(nnFunctions.relu(self.conv4(conv3_relu)),2)\n\n x = conv4_relu.view(-1, 7*7*512)\n x = nnFunctions.relu(self.fc1(x))\n x = nnFunctions.relu(self.fc2(x))\n x = self.fc3(x)\n return x\n<ECODE> But it gives me the following error: <SCODE>RuntimeError: CHECK_ARG((long)pad.size() == input->nDimension - 2) failed at torch/csrc/cudnn/Conv.cpp:258<ECODE>", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "Are you sure you are passing the image as (1, 28, 28). It accepts input as (N, C, H, W). It runs on my computer perfectly. <SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport numpy as np\n\nclass channel1(nn.Module):\n\tdef __init__(self):\n\t\tsuper(channel1, self).__init__()\n\t\tself.conv1 = nn.Conv2d(1, 96, kernel_size=3, stride=1, padding=1)\n\t\tself.relu1 = nn.ReLU(inplace=True)\n\t\tself.conv2 = nn.Conv2d(96, 10, kernel_size=3, stride=1, padding=1)\n\n\tdef forward(self, x):\n\t\tx = self.conv1(x)\n\t\tx = self.relu1(x)\n\t\tx = self.conv2(x)\n\t\treturn x\n\nif __name__ == '__main__':\n\tsomeNet = channel1()\n\tx = Variable(torch.randn(1, 1, 28, 28))\n\tprint someNet(x)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "your code runs too on my system <SCODE>if __name__ == '__main__':\n net = convNet()\n x = Variable(torch.randn(10, 1, 28, 28))\n print(net(x))\n<ECODE>", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "This is my complete code: <SCODE>np_data.shape ->(42000, 784)\nnp_data=np_data.astype(np.float32).reshape(42000,28,28,1)\nnp_labels.shape ->(42000, 1)\nnp_data.shape ->(42000, 28, 28, 1)\nnp_data=np.transpose(np_data,[0,3,1,2])\nnp_data.shape ->(42000, 1, 28, 28)\n\nbatch_size=50\nfeatures=torch.from_numpy(np_data)\nfeatures=features.contiguous().view(42000,-1)\ntargets=torch.from_numpy(np_labels)\n\nimport torch.utils.data as data_utils\n\ntrain = data_utils.TensorDataset(features, targets)\ntrain_loader = data_utils.DataLoader(train, batch_size=batch_size, shuffle=True)\n\ncriterion = nn.CrossEntropyLoss() # use a Classification Cross-Entropy loss\nimport torch.optim as optim\n\ndef trainConvNet(train_loader,net,criterion,epochs,total_samples,learning_rate):\n prev_loss=0\n \n optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)\n for epoch in range(int(epochs)): # loop over the dataset multiple times\n\n running_loss = 0.0\n for i,data in enumerate(train_loader):\n\n 
inputs,labels=data\n # wrap them in Variable\n inputs, labels = Variable(inputs).cuda(), Variable(labels).cuda()\n \n # zero the parameter gradients\n optimizer.zero_grad()\n\n # forward + backward + optimize\n outputs = net(inputs)\n loss = criterion(outputs, labels[:,0])\n loss.backward() \n optimizer.step()\n # print statistics\n running_loss += loss.data[0]\n print('Finished Training')\n return net\n\n\nclass convNet(nn.Module):\n #constructor\n def __init__(self):\n super(convNet, self).__init__()\n #defining layers in convnet\n #input size=32x32x3\n self.conv1 = nn.Conv2d(1, 96, kernel_size=3,stride=1,padding=1)\n #conv1: 28x28x1 -> 28x28x96\n self.conv2 = nn.Conv2d(96, 256, kernel_size=3,stride=1,padding=1)\n #conv2: 28x28x96 -> 28x28x256\n self.conv3 = nn.Conv2d(256,384,kernel_size=3,stride=1,padding=1)\n #conv3: 28x28x256 -> 28x28x384\n #pooling: 28x28x384 -> 14x14x384\n self.conv4 = nn.Conv2d(384,512,kernel_size=3,stride=1,padding=1)\n #conv4: 14x14x384 -> 14x14x512\n #pooling: 14x14x512 -> 7x7x512 \n self.fc1 = nn.Linear(7*7*512, 300)\n self.fc2 = nn.Linear(300, 100)\n self.fc3 = nn.Linear(100, 10)\n \n def forward(self, x):\n x = nnFunctions.relu(self.conv1(x))\n x = nnFunctions.relu(self.conv2(x))\n x = nnFunctions.max_pool2d(nnFunctions.relu(self.conv3(x)),2)\n x = nnFunctions.max_pool2d(nnFunctions.relu(self.conv4(x)),2)\n \n x = x.view(-1, 7*7*512)\n x = nnFunctions.relu(self.fc1(x))\n x = nnFunctions.relu(self.fc2(x))\n x = self.fc3(x)\n return x\nnet=convNet()\nnet.cuda()\nnet=trainConvNet(train_loader,net,criterion,1,42000,0.01)\n<ECODE> I dont understand why am I getting the above error.", "isAccepted": false, "likes": null, "poster": "sarthak1996" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "sarthak1996" } ]
false
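The failing call above comes from feeding flattened (N, 784) tensors into a conv stack that expects (N, C, H, W); a minimal sketch of the fix is to keep, or restore, the 4D layout before the first convolution:
<SCODE># keep the (N, 1, 28, 28) layout instead of flattening it away:
features = torch.from_numpy(np_data)  # np_data is already (42000, 1, 28, 28)

# or, if the batch arrives flattened, restore the layout in the loop:
inputs = inputs.view(-1, 1, 28, 28)
outputs = net(inputs)
<ECODE>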
Help with svd approximation
null
[ { "contents": "Hi I am trying to implement “Exploiting linear structure for efficient evaluation” paper and it requires me to perform SVD on the weight tensor and find its rank 1 approximation and now these rank 1 filters perform the convolution operation. I wrote a simple test suite to verify this and I am getting good speed ups. Here is the code which does that: <SCODE># doing passes\nres1 = np.zeros((oH, pw))\nout = np.zeros((oH, oW))\nt = time.time()\nfor i in xrange(oH):\n hst, hend = i*stride, i*stride+HH\n patch = padded_x[hst:hend, :]\n res1[i] = np.sum(patch*u, axis=0)\nfor i in xrange(oW):\n wst, wend = i*stride, i*stride+WW\n patch = res1[:, wst:wend]\n out[:, i] = np.sum(patch*v, axis=1)\nelapsed = time.time() - t\nprint \"time taken to do this awesome operation is {}\".format(elapsed)\nprint out\n<ECODE> I wanted to know if there is a way I can do this using pytorch and get the speed improvements?", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "If you would guide me in the right direction, I would be grateful. <SCODE>import numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport torch.nn.functional as nnf\nimport time\n\nimage = torch.randn(512, 512)\nimage = Variable(image.view(1,1,512, 512))\n\nu = torch.Tensor([1,2,1])\nv = torch.Tensor([-1, 0, 1])\nw = Variable(torch.ger(u, v))\nw = w.view(1,1,3,3)\n\nu = Variable(u.view(1,1,3,1))\nv = Variable(v.view(1,1,1,3))\n\nt = time.time()\nres1 = nnf.conv2d(image, u, stride=1, padding=1)\nout = nnf.conv2d(res1, v, stride=1)\nprint \"time taken is {}\".format(time.time()-t)\nprint out\n\nt = time.time()\nout1 = nnf.conv2d(image, w, stride=1, padding=1)\nprint \"time taken by normal is {}\".format(time.time()-t)\nprint out1\n<ECODE>", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "I don’t think it’s that surprising that a single convolution is faster. For each of them, you have to preprocess the image in some way, most libraries are optimized for 3x3 filters, since they’re one of the most commonly used sizes, and you benefit from data locality if you only do a single conv (no need to transfer the image back and forth).", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "So i would not get any improvements if I implement the forward and backward pass for this as per your previous suggestion of extending “autograd” Thanks", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "Correct. I just wanted to tell you how could you do that.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
Why a full-1 tensor on GPU returns a value smaller than 1 when we take product of it?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Donglai_Xiang" }, { "contents": "Wow, that’s weird. It’s a bug in the C backend, the same thing happens in Lua Torch. We’ll fix that right away.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
Does torch.utils.data.DataLoader process disk I/O?
null
[ { "contents": "<SCODE>transform=transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ])\ntrainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=4, \n shuffle=True, num_workers=2)\n\ntestset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=4, \n shuffle=False, num_workers=2)\n\ndataiter = iter(trainloader)\nimages, labels = dataiter.next()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "isalirezag" } ]
false
Train 3D Networks with Pytorch
null
[ { "contents": "Hi all, Thank you in advance.", "isAccepted": false, "likes": null, "poster": "ltnghia" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks. Do you support Deconv3D?", "isAccepted": false, "likes": 1, "poster": "ltnghia" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "The return 4D elements is (c,d,h,w) ?", "isAccepted": false, "likes": null, "poster": "YunnFeng" }, { "contents": "The return is a tuple of 5 elements (N, c, d, h, w).", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "You need to return 4D elements from the dataset, so they will be concatenated into 5D batches. Then, we only support batch mode in nn, so you’ll be using 5D tensors throught the network.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "YunnFeng" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
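A minimal sketch of a 3D conv/deconv pair on the 5D (N, C, D, H, W) batches described above:
<SCODE>import torch
import torch.nn as nn

conv = nn.Conv3d(3, 8, kernel_size=3, padding=1)
deconv = nn.ConvTranspose3d(8, 3, kernel_size=2, stride=2)

x = torch.randn(2, 3, 16, 32, 32)  # (N, C, D, H, W)
h = conv(x)                        # (2, 8, 16, 32, 32)
y = deconv(h)                      # (2, 3, 32, 64, 64)
<ECODE>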
Support for slicing with a step != 1?
null
[ { "contents": "Currently when attempting to slice a tensor or Variable object with a step different from 1 I get a Runtime Error: Is support for this opperation coming anytime soon and what is the best way to reverse a tensor along an axis currently? Best I can come up with is doing it via index_select which is clunky and copies the tensor… Doing it via numpy is also not possible as from_numpy() doesn’t seem to work with negative strides.", "isAccepted": false, "likes": null, "poster": "Ajoo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "So how do I flip the image?", "isAccepted": false, "likes": 2, "poster": "acgtyrant" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "acgtyrant" }, { "contents": "Strange it sometimes work, and sometimes doesn’t: <SCODE>tensor([[-0.8948],\n [-0.3556],\n [ 1.2324],\n [ 0.1382],\n [-1.6822]])\n<ECODE> <SCODE>_ ValueError Traceback (most recent call last)_\n_ <ipython-input-42-67cae86c60db> in <module>_\n_torch.mm(features, torch.reshape(weights, weights[::-1]))_\n_ ValueError: negative step not yet supported_\n<ECODE> <SCODE>tensor([[-0.8948],\n [-0.3556],\n [ 1.2324],\n [ 0.1382],\n [-1.6822]])\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "bobiblazeski" }, { "contents": "Hi guys, I am also facing the same issue. Could anyone please confirm if the issue is resolved?", "isAccepted": false, "likes": null, "poster": "Shriya_Kak" }, { "contents": "Which issue? negative strides? I’m afraid we do not support it.", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "<SCODE>detected_faces = face_detector.detect_from_image(data[..., ::-1].copy())\n<ECODE> ValueError: negative step not yet supported This is what I am getting.", "isAccepted": false, "likes": null, "poster": "Shriya_Kak" }, { "contents": "Hi,", "isAccepted": false, "likes": 1, "poster": "albanD" } ]
false
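On later releases the usual answer to the flipping question is torch.flip, which returns a copy rather than a negatively-strided view; a minimal sketch:
<SCODE>import torch

x = torch.arange(6).view(2, 3)
print(torch.flip(x, dims=[1]))  # reverse along the last axis
# tensor([[2, 1, 0],
#         [5, 4, 3]])
<ECODE>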
There are some mistakes in PyTorch documentations
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "Yes, this example is incorrect. Can you please send a PR?", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Loading cudnn modules
null
[ { "contents": "Hi, How exactly can I do this?", "isAccepted": false, "likes": null, "poster": "Anmol6" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
Saving custom models
null
[ { "contents": "I have defined a custom model as follows: <SCODE>class model(torch.nn.Module):\n\t\"\"\"Defines custom model\n\t\"\"\"\n\tdef __init__(self, dim_input, dim_output):\n\t\t\n\t\tsuper(model, self).__init__()\n\t\tself._dim_input = dim_input\n\t\tself._dim_output = dim_output\n\t\t\n\t\t'''Initialize nnet layers'''\n\t\tself._l1 = torch.nn.Linear(self._dim_input, SIZE_H1)\n\t\tself._l2 = torch.nn.Linear(SIZE_H2, SIZE_H1)\n\tself._l3 = torch.nn.Linear(SIZE_H3, SIZE_H2)\n\n\n\tdef forward(self,x):\n\t\tself._l1_out = nn.ReLU(self._l1(x))\n\t\tself._l2_out = nn.ReLU(self._l2(self._l1_out))\n\t\tself._l3_out = nn.BatchNorm1d(self._l3(self._l2_out))\n\t\tself._out = nn.Sigmoid(self._l3_out)\n\n\t\treturn self._out\n<ECODE> How do I save this as a .t7 file? Thanks", "isAccepted": false, "likes": null, "poster": "Anmol6" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "Kinda embarrassed, but I had the order of the filename and the module reversed in torch.save()… Thanks anyway!", "isAccepted": false, "likes": null, "poster": "Anmol6" }, { "contents": "Spoke too soon - I tried saving it as a .t7 file but I can’t load it with load_lua", "isAccepted": false, "likes": null, "poster": "Anmol6" }, { "contents": "I want to load this like I would load a Sequential model. Can I not do that?", "isAccepted": false, "likes": null, "poster": "Anmol6" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Oh, do you have any suggestions then on how I can best load/save this custom model in python? Should I just do my best to just write my model as a nn module (eg. using Sequential) and load/save with torch.save() and load_lua respectively?", "isAccepted": false, "likes": null, "poster": "Anmol6" }, { "contents": "nvm, got it - thanks!", "isAccepted": false, "likes": null, "poster": "Anmol6" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Russel_Russel" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks", "isAccepted": false, "likes": null, "poster": "devansh20la" }, { "contents": "Hey , I am beginner and was trying to save parameters of a pretrained network in hdf5 file and wanted to load it in torch , but unsuccessfull . Could you please let me know how to save parameters of a pretrained network (which is in pytorch ) in hdf5 file . Thanks!", "isAccepted": false, "likes": null, "poster": "Rao_Shivansh" }, { "contents": "I can tell you how I saved the model and it worked import pickle This saved the model, then when I started training the model instead of starting from zero I loaded the saved model: and loaded the state of the saved model to a already initialized model: shared_model.load_state_dict(existing_model.state_dict()) The training continued from where I stopped the training.", "isAccepted": false, "likes": null, "poster": "Erik_Janezic" }, { "contents": "does it save the model in pytorch and load it in lua ?", "isAccepted": false, "likes": null, "poster": "Rao_Shivansh" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Erik_Janezic" }, { "contents": "Thanks !", "isAccepted": false, "likes": null, "poster": "Rao_Shivansh" }, { "contents": "Thanks a lot !", "isAccepted": false, "likes": null, "poster": "Rao_Shivansh" }, { "contents": "Hey, Rao, I know this post is a few months old, but were you ever able to package the model to HDF5? 
Thanks.", "isAccepted": false, "likes": null, "poster": "Rose" } ]
false
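For reference, the state_dict round trip the answers above converge on; a minimal sketch using the model class from the first post:
<SCODE>import torch

torch.save(net.state_dict(), 'net.pth')     # save only the parameters

net = model(dim_input, dim_output)          # rebuild the architecture first
net.load_state_dict(torch.load('net.pth'))  # then restore the weights
<ECODE>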
Different weight decay for different layers
null
[ { "contents": "Hi, is it possible to set different values for different layers or disable the weight decay for some specified layers, thanks", "isAccepted": false, "likes": 3, "poster": "twtygqyy" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "Thanks for the reply, it works.", "isAccepted": false, "likes": null, "poster": "twtygqyy" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "matthias.l" } ]
false
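Concretely, the per-layer settings are passed as optimizer parameter groups; a minimal sketch assuming a model with hypothetical features and classifier submodules:
<SCODE>import torch.optim as optim

optimizer = optim.SGD([
    {'params': net.features.parameters(), 'weight_decay': 1e-4},
    {'params': net.classifier.parameters(), 'weight_decay': 0.0},  # disabled
], lr=0.01, momentum=0.9)
<ECODE>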
RunTime Error, Cuda
null
[ { "contents": "Hi, I am training a pretrained model resnet-18 imported from torchvision.models with dataset containing 1050 images of size 3x240x320. After training, I am testing with 399 samples, but I am getting RunTime CUDA Error : Out of Memory. Also, I have ported the test dataset to CUDA and volatile attribute is set to True. The model is in GPU and after training nvidia-smi output is 3243MiB/4038MiB Is there any way available to free the GPU memory so that I can do the testing?", "isAccepted": false, "likes": null, "poster": "srv902" }, { "contents": "it’s possible that moving both the training and test dataset over to CUDA doesn’t give enough space for the network to forward-prop through. Loading your dataset into GPU memory itself takes 1.3 GB of memory. Are you giving a large batch size as input to your network? Can you reduce the batch size at inference time?", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Yes I was giving a large batch size, so I reduced the batch size at inference time. It is working fine now. Thank you very much.", "isAccepted": false, "likes": null, "poster": "srv902" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" } ]
false
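A minimal sketch of the memory-saving inference pattern (torch.no_grad on recent releases; volatile=True played this role at the time), with a hypothetical test_loader:
<SCODE>model.eval()
with torch.no_grad():  # no graph is recorded, so activations are freed early
    for inputs, labels in test_loader:
        outputs = model(inputs.cuda())
<ECODE>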
Dropout for RNNs
null
[ { "contents": "I wonder if there would be an elegant way to use the same dropout mask on sequences for RNNs, or it would be better to implement a module. (Dropout option in the current RNN module just regard the entire sequence output as a single output. Right?) Thanks!", "isAccepted": false, "likes": 2, "poster": "supakjk" }, { "contents": "You could write your own module, where you process the whole sequence, sample the mask once at the beginning and just do an element-wise multiplication after each step. Here’s a gist: <SCODE>def forward(self, x):\n mask = Variable(torch.Tensor(x.size(0), self.hidden_size).fill_(self.dropout).bernoulli_())\n hidden = None\n outputs = []\n for t in x.size(1):\n output, hidden = self.rnn(x[:, t], hidden)\n outputs.append(output * mask)\n return outputs, hidden\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "Regarding the current RNN modules, they use a different mask for each time step.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Performance-wise, running rnn on the whole input sequence, expanding mask and applying it to the whole rnn output will probably be better than having a loop over time.", "isAccepted": false, "likes": 4, "poster": "ngimel" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "Yes, if a separate mask for each timestep is ok, then you can just use the built in modules. They will be the fastest.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>def _make_noise:\n\treturn input.new().resize_(1, input.size(1), input.size(2))'<ECODE>", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "What I did to use the same dropout mask for different time steps was inheriting classes as follows: <SCODE>class SeqConstDropoutFunc(torch.nn._functions.dropout.Dropout):\n def __init__(self, p=0.5, train=False, inplace=False):\n super(SeqConstDropoutFunc, self).__init__(p, train, inplace)\n\n def _make_noise(self, input): # for timesteps X batches X dims inputs, let each time step has the same dropout mask\n return input.new().resize_(1, input.size(1), input.size(2))\n\nclass SeqConstDropout(nn.Dropout):\n def __init__(self, p=0.5, inplace=False):\n super(SeqConstDropout, self).__init__(p, inplace)\n\n def forward(self, input):\n return SeqConstDropoutFunc(self.p, self.training, self.inplace)(input)\n<ECODE> Thanks.", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "Yes I think it’s best to reimplement it yourself. It’s a very simple function.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I think you should sample the bernoulli using: <SCODE>Variable(torch.bernoulli(torch.Tensor(x.size(0), self.hidden_size).fill_(self.dropout)))\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "seba-1511" }, { "contents": "Is there an elegant implementation of it already? 
I guess the idea of using the same dropout mask is from “A Theoretically Grounded Application of Dropout in Recurrent Neural Networks”.", "isAccepted": false, "likes": null, "poster": "cosmozhang1988" }, { "contents": "Did you get it implemented in pytorch already?", "isAccepted": false, "likes": null, "poster": "cosmozhang1988" }, { "contents": "Variable(torch.Tensor(x.size(0), self.hidden_size).fill_(self.dropout).bernoulli_()) is not correct. The correct form is Variable(torch.Tensor(x.size(0), self.hidden_size).fill_(self.dropout).bernoulli())", "isAccepted": false, "likes": 1, "poster": "026f9ee49b7010c38742" } ]
false
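A minimal sketch of the pattern discussed above, often called locked dropout, written against the current tensor API and assuming (seq_len, batch, hidden) inputs:
<SCODE>import torch
import torch.nn as nn

class LockedDropout(nn.Module):
    # One mask is sampled per sequence and reused at every timestep.
    def __init__(self, p=0.5):
        super(LockedDropout, self).__init__()
        self.p = p

    def forward(self, x):  # x: (seq_len, batch, hidden)
        if not self.training or self.p == 0:
            return x
        mask = x.new_empty(1, x.size(1), x.size(2)).bernoulli_(1 - self.p)
        return x * mask / (1 - self.p)  # rescale to keep expectations equal
<ECODE>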
Data augmentation for limited data
null
[ { "contents": "Thanks.", "isAccepted": false, "likes": null, "poster": "YunnFeng" }, { "contents": "you can look at the imagenet example to get yourself started. Here are the relevant lines:", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "Will the composed transformer just give one output or every transform give a output image?Just the last one can be the data augmentation method.", "isAccepted": false, "likes": 1, "poster": "ycszen" } ]
false
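For reference, a minimal sketch of an augmentation pipeline with current torchvision names (older releases spelled some of these differently); each call applies the chain in order to one image and returns a single augmented output:
<SCODE>import torchvision.transforms as transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
<ECODE>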
Accessing intermediate data in nn.Sequential
null
[ { "contents": "I’d like to access the output of a module inside an nn.Sequential. It could be that I have a big CNN wrapped in nn.Sequential, and I’d like to concat the output of several layers. Or it could be that I’m just checking the intermediate layer outputs to debug stuff. Doing this used to be quite easy in lua torch because modules cache their output in module.output. But pytorch apparently doesn’t do that anymore. Is there a way I can access intermediate layers’ output elegantly? Currently I just inherit from nn.Sequential and in forward() I cache the outputs from specific modules I want. I’m not sure if this screws up memory management since it looks like the intermediate values need to be re-allocated at every network pass?", "isAccepted": false, "likes": 1, "poster": "hyqneuron" }, { "contents": "Subclassing sequential and overriding forward (just like you are doing now) is the right approach. We wanted to get rid of Sequential itself, but we kept it as a convenience container. There is no screwing up of memory management because of subclassing sequential (or caching outputs), it works out fine.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Would this increase memory usage by prolonging the lifetime of output? I’m very curious about how intermediate values are allocated and deallocated.", "isAccepted": false, "likes": null, "poster": "hyqneuron" }, { "contents": "yes if you do that, all output variables will be cached and you will run out of memory (because in pytorch output buffers are not reused but recreated)", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "You won’t necessarily run out of memory, because if you overwrite them at every forward, you’ll be keeping at most one additional copy of the graph, that’s likely to have its buffers freed if only you called backward on it.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
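Besides subclassing Sequential, forward hooks are a light way to capture intermediate outputs for debugging; a minimal sketch, assuming model is the nn.Sequential in question:
<SCODE>activations = {}

def save_activation(name):
    def hook(module, input, output):
        activations[name] = output
    return hook

handle = model[3].register_forward_hook(save_activation('layer3'))
out = model(x)   # activations['layer3'] now holds that layer's output
handle.remove()  # detach the hook when done, so outputs are not cached forever
<ECODE>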
Dependent Batches in DataSet
vision
[ { "contents": "What do you think about this strategy?", "isAccepted": false, "likes": null, "poster": "melgor" }, { "contents": "Hey, sorry for a late reply. I didn’t really understand your solution, sampler is not a function, but an iterable (you can think of it as a stream of indices). I think that one way to impement what you want would be to:", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for the reply. Here is my code: <SCODE>from PIL import Image\n\nimport sys\nimport collections\n\ndef collate_cat(batch):\n '''concatenate sub-batches along first dimension'''\n if torch.is_tensor(batch[0]):\n return torch.cat(batch,0)\n elif isinstance(batch[0], collections.Iterable):\n # if each batch element is not a tensor, then it should be a tuple\n # of tensors; in that case we collate each element in the tuple\n transposed = zip(*batch)\n return [collate_cat(samples) for samples in transposed]\n\n raise TypeError((\"batch must contain tensors, numbers, or lists; found {}\"\n .format(type(batch[0]))))\n\nclass RandomSampler(object):\n \"\"\"Samples num_iters times the idxes from class\n Arguments:\n data_source (Dataset): dataset to sample from\n \"\"\"\n\n def __init__(self, data_source, num_iters):\n self.num_samples = len(data_source)\n self.num_iters = num_iters\n\n def __iter__(self):\n return iter(torch.cat([torch.randperm(self.num_samples) for i in range(self.num_iters)], 0).long())\n\n def __len__(self):\n return self.num_iters\n \n \nclass TripletDataset(datasets.ImageFolder):\n def __init__(self, root, transform=None, target_transform=None, num_iters = 1000, \\\n class_per_batch = 10, example_per_class = 6):\n super(self.__class__, self).__init__(root, transform, target_transform)\n self.class_per_batch = class_per_batch\n self.example_per_class = example_per_class\n \n # Create a dictionary which will contain idxes of each class separately\\\n self.class_dict = defaultdict(list)\n for idx, (path, target) in enumerate(self.imgs):\n self.class_dict[target].append(idx)\n \n \n def __getitem__(self, index):\n class_data = self.class_dict[index]\n num_example = min(len(class_data), self.example_per_class)\n shuffle = torch.randperm(len(class_data))\n idxes = shuffle[:num_example]\n list_imgs = list()\n list_targets = list()\n for idx in idxes:\n path, target = self.imgs[class_data[idx]]\n img = self.loader(os.path.join(self.root, path))\n if self.transform is not None:\n img = self.transform(img)\n if self.target_transform is not None:\n target = self.target_transform(target)\n list_imgs.append(img)\n list_targets.append(target)\n \n return torch.stack(list_imgs, 0), torch.LongTensor(list_targets) #img, target\n\n def __len__(self):\n return len(self.class_dict.keys())\n\n\n\ndata = TripletDataset(\"MNIST/train/\",transforms.Compose([transforms.ToTensor()]))\ntrain_loader = torch.utils.data.DataLoader(\n data,\n batch_size=3, shuffle=True,\n sampler = RandomSampler(data, 100 * 3),\n num_workers=0, pin_memory=True,\n collate_fn = collate_cat)\n\nfor img, target in train_loader:\n print (img.size(), target.size())\n<ECODE>", "isAccepted": false, "likes": null, "poster": "melgor" }, { "contents": "About the iterator, it would be slightly more efficient to recreate the permutations only once they’re needed, not when the iterator is instantiated, but as long as your dataset isn’t huge it probably doesn’t matter too much.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
How can I compute 3D tensor * 2D tensor multiplication?
null
[ { "contents": "In PyTorch, i can do this as below. <SCODE>result = torch.mm(X.view(-1, C), Y)\nresult = result.view(-1, B, D)\n<ECODE> Is there simpler way to handle this?", "isAccepted": false, "likes": 4, "poster": "yunjey" }, { "contents": "<SCODE>Y.unsqueeze(0).expand_as(X) * X<ECODE>", "isAccepted": false, "likes": 2, "poster": "chenyuntc" }, { "contents": "Y.unsqueeze(0) has a shape (1, C, D) so that it can not be expanded as X which has a shape (A, B, C). What i want to do is to broadcast 2D matrix multiplication as below. <SCODE>def matmul(X, Y):\n results = [] \n for i in range(X.size(0)):\n result = torch.mm(X[i], Y)\n results.append(result)\n return torch.cat(results)<ECODE>", "isAccepted": false, "likes": 1, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "Ok, I got it. Thanks.", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "result = torch.mm(X.resize_(A*B,C), Y).resize_(A,B,D)", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "My mistake, I was careless", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jphoward" }, { "contents": "Also, you be able to do that with batched matrix multiply: <SCODE>result = torch.bmm(X, Y.unsqueeze(0).expand(X.size(0), *Y.size()))\n<ECODE> We’ll be adding broadcasting very soon.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "", "isAccepted": false, "likes": 9, "poster": "o2h4n" } ]
false
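On later releases torch.matmul implements exactly this broadcasting batched product; a minimal sketch:
<SCODE>import torch

X = torch.randn(10, 5, 4)  # (A, B, C)
Y = torch.randn(4, 7)      # (C, D)
out = torch.matmul(X, Y)   # Y is broadcast against the batch dims: (10, 5, 7)
<ECODE>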
Pruning the Convnets
null
[ { "contents": "Hello, I am trying to implement weight pruning using forward hook, but somehow it is giving me invalid syntax, below is the code <SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport numpy as np\n\n# this code implements pruning using register buffer feature to save input mask\n\ndef compute_mask(weights):\n thresh = weights.std()\n m1 = weights > thresh\n m2 = weights < (-thresh)\n mask = torch.ones(weights.size())\n mask = mask-m1.float()\n mask = mask-m2.float()\n return mask\n\nclass PrunedSqueezenet(nn.Module):\n def __init__(self, to_prune, pretrained_weight):\n \"\"\"\n takes a list of layers to prune, model, weights\n to_prune: a list of all the layers on which pruning should be applied\n model: architecture of the model\n weights: pretrained weights to use for the model\n \"\"\"\n super(self, PrunedSqueezenet).__init__()\n self.to_prune = to_prune\n # get the model ready\n self.base_model = model.SqueezeNet()\n pretrained_weights = torch.load(pretrained_weight)\n base_model.load_state_dict(pretrained_weights)\n\n self.layers = self.base_model._modules.keys()\n # compute the mask for the weights\n for l in to_prune:\n if \"fire\" in l:\n curr_layer = self.base_model._modules.get(l)._modules.get('conv3')\n weights = curr_layer.weight.data\n # save the mask\n curr_layer.register_buffer('mask', compute_mask(weights))\n # change the computed output of conv3 layer in the fire\n curr_layer.register_forward_hook(\n lambda m, i, o: \\\n print(\"Hello this is ok\")\n )\n\n elif \"conv\" in l:\n curr_layer = self.base_model._modules.get(l)\n weights = curr_layer.weight.data\n # save the mask\n curr_layer.register_buffer('mask', compute_mask(weights))\n # change the computed output of conv3 layer in the fire\n curr_layer.register_forward_hook(\n lambda m, i, o: \\\n print(\"Hello this is ok\"))\n )\n \n else:\n print(\"I dont understand what you are talking about\")\n\n def forward(self, x):\n return self.base_model(x)\n\nif __name__ == '__main__':\n net = PrunedSqueezenet(to_prune=['fire9'], pretrained_weight='pretrained_models/squeezedet_compatible.pth')\n x = Variable(torch.randn(1, 3, 32, 32))\n print(net(x))\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "gsp-27" }, { "contents": "what’s the exact error and stack trace? invalid syntax probably means invalid python syntax.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I solved it, it was some python syntax error. Sorry for the trouble. But I do have a logical question in “pruning”. So my line of thinking is I will compute mask for the specified layers and store it as the layer buffer and then implement a hook which will multiply the weight matrix with the mask thus essentially making the weights zero at that position. And hence they will not contribute to the output… The question is whether this hook is applied after the forward of the layer or before, because in my before will make sense. 
If this thinking is not the right direction, could you please suggest a way to achieve the desired behaviour?", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "The hook is applied after the forward of the layer.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I think it would be simpler to set the pruned weights to 0 and mask gradients after backward(), so that they’re never modified.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "I’m in a similar situation where I need to mask weights during forward(). I cannot set them to zero initially because the weight_mask gets updated every iteration based on the values of the weights, and the gradients during backprop update all the weights (not just the unpruned ones). Any suggestions on how I could do that short of creating new layers?", "isAccepted": false, "likes": null, "poster": "angup143" }, { "contents": "The library itself is an ongoing project of mine about pruning and calculating saliencies.", "isAccepted": false, "likes": 1, "poster": "evcu" }, { "contents": "Hello, I am interested in your code. I have a question: how can I retrain a neural network without changing some fixed weights? Hoping for your reply, thanks!", "isAccepted": false, "likes": null, "poster": "wxwx" }, { "contents": "One tricky thing is to reinitialize your optimizer, or the gradient history it keeps, after each update of your mask, to prevent momentum from coming into play.", "isAccepted": false, "likes": 1, "poster": "evcu" }, { "contents": "Wow, many thanks for the reply! I will try it, thanks.", "isAccepted": false, "likes": null, "poster": "wxwx" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wxwx" }, { "contents": "<SCODE>loss.backward()\nfor layer, mask in zip(X, Y):\n layer.weight.grad.data[mask] = 0\noptimizer.step()\n<ECODE> You just need to keep track of which mask belongs to which weight. That’s what the MaskedModule wrapper is doing in the code I shared above.", "isAccepted": false, "likes": 2, "poster": "evcu" }, { "contents": "Many thanks for the reply, I will try again! Thanks.", "isAccepted": false, "likes": null, "poster": "wxwx" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wxwx" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wxwx" }, { "contents": "Hi, did you find a solution for this?", "isAccepted": false, "likes": null, "poster": "Daniel_Ortega" } ]
false
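A minimal runnable sketch of the grad-masking approach suggested in the thread above, on a recent PyTorch: the mask is computed once from the weight's standard deviation (the threshold rule and the single Linear layer here are just assumptions for illustration), the pruned weights are zeroed, and their gradients are zeroed after every backward() so the optimizer never moves them.
<SCODE>
import torch
import torch.nn as nn

layer = nn.Linear(10, 10)

# prune weights whose magnitude falls below one standard deviation
with torch.no_grad():
    thresh = layer.weight.std()
    prune_mask = layer.weight.abs() < thresh  # True = pruned
    layer.weight[prune_mask] = 0.0

optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)
x, target = torch.randn(4, 10), torch.randn(4, 10)

loss = nn.functional.mse_loss(layer(x), target)
optimizer.zero_grad()
loss.backward()
layer.weight.grad[prune_mask] = 0.0  # pruned weights receive no update
optimizer.step()
<ECODE>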
How can I modify some parts of the saved model?
null
[ { "contents": "My original model as below: <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.encoder = Encoder()\n self.decoder = Decoder()\n\n def forward(self, x):\n x = self.encoder(x)\n x = F.dropout(x, p = 0.2)\n x = self.decoder(x)\n return x\n<ECODE> <SCODE>net = torch.load('net.pth')\nclass classifier(nn.Module):\n super(classifier, self),__init__()\n self.encoder = nn.Sequential(*list(net.encoder.children()))\n self.fc = nn.Linear(enc_dim, num_class)\n\n def forward(self, x):\n x = self.encoder(x)\n x = F.dropout(x,p = 0.5)\n x = self.fc(x)\n x = F.log_softmax(x)\n return x\n<ECODE> Can you please tell me how I can solve this error ? Also, when I use DataParallel, I think <SCODE>nn.Sequential(*list(ae_net.encoder.children())) \n<ECODE> needs to be modified as like <SCODE>nn.Sequential(*list(ae_net.module.encoder.children()))\n<ECODE> Is it right ?", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "invalid device ordinal points to you trying to use a CUDA device id that doesn’t exist. For example, trying to use id=2 on a device with only 2 GPUs (0, 1)", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I do not understand your explanation. Where can i add gpu id ? I have 4gpus, but at this time i just use only one by using CUDA_VISIBLE_DEVICES.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "CUDA_VISIBLE_DEVICES is 0-indexed, so if you have 4 GPUs, then valid values are 0, 1, 2 or 3. pytorch (actually CUDA) is complaining that you gave an invalid device id to use", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "No, I tried it using CUDA_DEVICES_VISIBLE=1. But, may be i used different gpu when the model was loaded.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "As you suggested, it works! Thanks.", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" } ]
false
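Two details from the thread above, sketched on a recent PyTorch; the file names are placeholders. map_location avoids the invalid device ordinal error when a checkpoint was saved on a GPU that does not exist on the current machine, and saving the state_dict instead of the whole module is the more robust pattern:
<SCODE>
import torch

# remap storages from whatever GPU the checkpoint was saved on
net = torch.load('net.pth', map_location='cpu')

# sturdier alternative: persist only the parameters
torch.save(net.state_dict(), 'net_state.pth')
state = torch.load('net_state.pth', map_location='cpu')
# net = Net(); net.load_state_dict(state)  # rebuild the module, then restore
<ECODE>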
How to use condition flow?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "t3476" }, { "contents": "You probably get this error when you try to use the ByteTensor as a boolean: RuntimeError: bool value of non-empty torch.ByteTensor objects is ambiguous", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "Does that numpy bool create proper computing graph? If so, why that’s the case? I’m not sure why pytorch can create computing graph with mixing numpy arrays and pytorch variables.", "isAccepted": false, "likes": 1, "poster": "t3476" }, { "contents": "Can you elaborate a bit more on what exactly you are trying to do?", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "<SCODE>def forward(self, x):\n x = self.module1(x)\n if (x.data > 0).all():\n return self.module2(x)\n else:\n return self.module3(x)\n<ECODE>", "isAccepted": false, "likes": 9, "poster": "apaszke" }, { "contents": "Problem sovled. Thanks for help.", "isAccepted": false, "likes": null, "poster": "t3476" }, { "contents": "Hi, does the condition work on a sample by sample basis? or does the condition apply across all the samples in batch? I am confused about this. Can you please explain it?", "isAccepted": false, "likes": 1, "poster": "rahul" }, { "contents": "I don’t think it works on a per sample basis. Can someone please provide a proper equivalent to tf.cond().", "isAccepted": false, "likes": null, "poster": "madmaxiwnl" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "Well, it’s something like this: <SCODE>out_tensor1, out_tensor2, out_tensor3 = tf.cond(some_condition,\n lambda: some_tensor1, some_tensor2, some_tensor3,\n lambda: some_tensor4, some_tensor5, some_tensor6)\n<ECODE> Currently, I am using a workaround I found on another thread: <SCODE>def where(cond, f1, f2):\n return c * f1() + (1-c) * f2()\n<ECODE> and finding all 3 out_tensors seperately. Though having an inbuilt operation in pytorch would be useful.", "isAccepted": false, "likes": null, "poster": "madmaxiwnl" }, { "contents": "I’m not sure, how the TF code works exactly, but if it’s just an assignment, wouldn’t this work: <SCODE>if some_condition:\n out_tensor1, out_tensor2, out_tensor3 = some_tensor1, some_tensor2, some_tensor3\nelse:\n out_tensor1, out_tensor2, out_tensor3 = some_tensor4, some_tensor5, some_tensor6\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ptrblck" }, { "contents": "<SCODE>some_condition = some_condition.type(torch.LongTensor)\nout_tensor1 = some_condition * some_tensor1 + (1-some_condition) * some_tensor2\n<ECODE> and, similarly for out_tensor 2 and 3. I couldn’t find a function which would perform the operation for all 3 out_tensors at once. Anyways it’s working now, thanks.", "isAccepted": false, "likes": null, "poster": "madmaxiwnl" }, { "contents": "Use", "isAccepted": false, "likes": null, "poster": "abediee" }, { "contents": "Could the tensor.unbind() function be used on .data, to work per sample basis?", "isAccepted": false, "likes": null, "poster": "wiiiktor" } ]
false
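On recent versions, the per-sample case asked about above is covered by torch.where, which selects elementwise between two tensors; a minimal sketch:
<SCODE>
import torch

x = torch.randn(5)
# per-element (and hence per-sample) branching: no Python `if` needed,
# and gradients flow through whichever branch was selected per element
y = torch.where(x > 0, x * 2, x * -1)

# module-level branching on a whole batch still uses a plain `if`
# over a Python bool reduced from the tensor:
if (x > 0).all():
    print('every element is positive')
<ECODE>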
Is there a way to mix many different activation functions efficiently?
null
[ { "contents": "Suppose I want a layer to consist of 10 different activation functions. Right now the only way to do this seems to basically involve the creation of multiple layers and concatenating them. This obviously introduces some computational inefficiency. Is there some clever hack to apply point-wise nonlinearities based on masks or something? The way I envision this possibly happening is by doing the linear tranformation as usual but then stacking multiple nonlinearities on top such that each nonlinearity layer ignores all but its own portion of the layer. Not sure how to approach that.", "isAccepted": false, "likes": 1, "poster": "Veril" }, { "contents": "Activation functions or general pointwise functions? Everything applied at the same time on the same tensor or a string of pointwise functions on subtensors?", "isAccepted": false, "likes": null, "poster": "csarofeen" }, { "contents": "Not sure about what distinction you are making between activation functions and “general pointwise functions”. The goal is basically have different neurons have different non-linearities. Not apply different functions to the same neurons.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "So why not have a series of tensors with value 0 or 1 to pre-multiply the input before each activation and accumulate in an output?", "isAccepted": false, "likes": null, "poster": "csarofeen" }, { "contents": "Would that be an efficient solution? If I understood you correctly, you would have a lot of function operations with input as zero, or would that be optimized away?", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "That won’t be optimized away. If you really want to optimize it this should work: <SCODE>nonlinearities = [F.relu, F.tanh, F.sigmoid]\nmasks = generate_masks(input) # a list of masks\npreactivation = module(input) \n# preallocate output\n# IMPORTANT: it shouldn't require grad!\n# you don't care about the grad w.r.t the original content!\noutput = Variable(preactivation.new(preactivation.size()))\nfor nonlinearity, mask in zip(nonlinearities, masks):\n output[mask] = nonlinearity(preactivation[mask])\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "hello, i have the same question, Have you solved this problem?", "isAccepted": false, "likes": 1, "poster": "wxwx" }, { "contents": "It sounds like I’ll need to implement a new mixed layer to replace an existing ReLU layer. In my new mixed layer I’ll need to generate a set of masks for each activation function I intend to use. Then I need to pass each masked input to the corresponding activation function, and assign the output of these to the corresponding masked output. Does that all sound correct/possible?", "isAccepted": false, "likes": null, "poster": "awalkingstick" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "awalkingstick" } ]
false
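A runnable sketch of the masked multi-activation idea from the thread above, splitting one pre-activation tensor among several nonlinearities by unit masks; the even/odd split is an arbitrary choice for illustration:
<SCODE>
import torch
import torch.nn.functional as F

preact = torch.randn(8, 10, requires_grad=True)

# assign every even unit to relu and every odd unit to tanh
idx = torch.arange(preact.size(1))
masks = [idx % 2 == 0, idx % 2 == 1]
nonlinearities = [F.relu, torch.tanh]

out = torch.zeros_like(preact)
for fn, m in zip(nonlinearities, masks):
    out[:, m] = fn(preact[:, m])  # each nonlinearity sees only its slice

out.sum().backward()  # gradients flow back through each slice separately
<ECODE>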
Train simultaneously on two datasets
null
[ { "contents": "Hello, I should train using samples from two different datasets, so I initialize two DataLoaders: <SCODE>train_loader_A = torch.utils.data.DataLoader(\n datasets.ImageFolder(traindir_A),\n batch_size=args.batch_size, shuffle=True,\n num_workers=args.workers, pin_memory=True)\n\ntrain_loader_B = torch.utils.data.DataLoader(\n datasets.ImageFolder(traindir_B),\n batch_size=args.batch_size, shuffle=True,\n num_workers=args.workers, pin_memory=True)\n<ECODE> What is the best way to extract samples from both iterators? In order to use something like this: Thanks.", "isAccepted": true, "likes": 8, "poster": "lcelona" }, { "contents": "I’d recommend creating a new dataset and concatenating the images there, so the copy will be done inside the worker processes: <SCODE>class ConcatDataset(torch.utils.data.Dataset):\n def __init__(self, *datasets):\n self.datasets = datasets\n\n def __getitem__(self, i):\n return tuple(d[i] for d in self.datasets)\n\n def __len__(self):\n return min(len(d) for d in self.datasets)\n\ntrain_loader = torch.utils.data.DataLoader(\n ConcatDataset(\n datasets.ImageFolder(traindir_A),\n datasets.ImageFolder(traindir_B)\n ),\n batch_size=args.batch_size, shuffle=True,\n num_workers=args.workers, pin_memory=True)\n\nfor i, (input, target) in enumerate(train_loader):\n ... \n<ECODE>", "isAccepted": true, "likes": 27, "poster": "apaszke" }, { "contents": "Perfect! Thank you for your help.", "isAccepted": true, "likes": null, "poster": "lcelona" }, { "contents": "Yes, that’s a typo. Sorry. I’ve changed the code.", "isAccepted": true, "likes": null, "poster": "apaszke" }, { "contents": "Any idea on how to know, on the merged data whether the sample is coming from datasetA or datasetB in this case? I am trying to train a GAN-like network which needs to sample from two data sources and therefore needs the label in this case (A or B). Sorry to wake-up an old thread", "isAccepted": true, "likes": 3, "poster": "Ahmed_Abbas" }, { "contents": "For example, dataset A contains 100 data, and dataset B contains 10000 data. How can we get 1:1 data from both A and B in one mini-batch? Does anybody have any idea? Thank you.", "isAccepted": true, "likes": 18, "poster": "platero" }, { "contents": "", "isAccepted": true, "likes": 2, "poster": "cjb60" }, { "contents": "To change this behavior, I think one could use WeightedRandomSampler, setting the appropriate weights.", "isAccepted": true, "likes": 1, "poster": "mlopezantequera" }, { "contents": "<SCODE>\ntrain_dl1 = torch.utils.data.DataLoader(train_ds1, batch_size=16, \n shuffle=True, num_workers=8)\ntrain_dl2 = torch.utils.data.DataLoader(train_ds2, batch_size=16, \n shuffle=True, num_workers=8)\ninputs1, targets1 = next(iter(train_dl1))\ninputs2, targets2 = next(iter(train_dl2))\n\ntargets1\ntensor([ 1, 1, 0, 1, 0, 0, 1, 1])\n\ntargets2\ntensor([ 0, 0, 0, 0, 0, 0, 0, 1])\n<ECODE>", "isAccepted": true, "likes": 5, "poster": "mostafaaminnaji" }, { "contents": "hey! this really helps but what if I have different samplers for both the data loaders? How do I train them simultaneously then?", "isAccepted": true, "likes": null, "poster": "Arshiya_Aggarwal" }, { "contents": "The same with you, do you have any ideas? 
thank you", "isAccepted": true, "likes": null, "poster": "kli-nlpr" }, { "contents": "class ConcatDataset(Dataset): <SCODE>def __init__(self,*datasets):\n self.datasets = datasets\n self.data_files =os.listdir('../data/cat1/cat05')\n\ndef __getitem__(self, i):\n\n return tuple(d[i] for d in self.datasets)\n\ndef __len__(self):\n return min(len(d) for d in self.datasets)\n<ECODE> <SCODE> print(\"Batch all \",i,lables)\nfor i, (inputs, lables) in enumerate(dataloadercat1):\n print(\"Batch cat \", i)\nprint(ConcatDataset(dataset1,dataset2).datasets)\n<ECODE>", "isAccepted": true, "likes": null, "poster": "PRAVEEN_KUMAR" }, { "contents": "After implementation ConcatDataset with over data set than only smaller size subfolder is consider in batch formation. this reason some image of other category is missed out. you have any idea how to solved out this problem.?", "isAccepted": true, "likes": null, "poster": "PRAVEEN_KUMAR" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "PRAVEEN_KUMAR" }, { "contents": "Hi there, I have managed to use two datasets by creating a custom dataset that takes in two root directories: <SCODE>class dataset_maker(Dataset):\n def __init__(self, root_dir1, root_dir2, transform= None):\n self.root_dir1=root_dir1\n self.root_dir2=root_dir2\n self.filelist1 = glob.glob(root_dir1+'*.png')\n self.filelist2 = glob.glob(root_dir2+'*.png')\n self.transform=transform\n \n def __len__(self):\n return min(len(self.filelist1),len(self.filelist2))\n \n def __getitem__(self, idx):\n sample1 = io.imread(self.filelist1[idx])/65535*255\n sample2 = io.imread(self.filelist2[idx])/65535*255\n sample1=np.uint8(sample1)\n sample2=np.uint8(sample2)\n sample1=PIL.Image.fromarray(sample1)\n sample2=PIL.Image.fromarray(sample2)\n if self.transform:\n sample1 = self.transform(sample1)\n sample2 = self.transform(sample2)\n return sample1,sample2\n<ECODE> then, make a dataloader using the two datasets: <SCODE>dataloader = DataLoader(combined_dataset, batch_size=3, shuffle=True, num_workers=4)\n<ECODE> Finally, I get the data in the training loops by doing this call in the for loop: <SCODE>for epoch in range(10):\n running_loss=0.0\n \n #get the data\n for batch_num, (hq_batch,Lq_batch) in enumerate(dataloader):\n print(batch_num, hq_batch.shape, Lq_batch.shape)\n<ECODE> The output is stated below: <SCODE>0 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n1 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n2 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n3 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n4 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n5 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n6 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n7 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n8 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n9 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n10 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n11 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n12 torch.Size([3, 3, 256, 256]) torch.Size([3, 3, 256, 256])\n<ECODE> Hope this solves the problem!", "isAccepted": true, "likes": 3, "poster": "Haris_Cheong" }, { "contents": "how this combined_dataset is to be designed? Is that to be instance of Concat. 
I got this error while tried to form the combined dataset using dset.ImageFolder, :“TypeError: expected str, bytes or os.PathLike object, not ConcatDataset”", "isAccepted": true, "likes": null, "poster": "Sumesh_Uploader" }, { "contents": "", "isAccepted": true, "likes": 1, "poster": "miladiouss" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Vij" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Vij" }, { "contents": "Hi There, Thank you.", "isAccepted": true, "likes": null, "poster": "vishalthengane" } ]
true
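To also keep track of which source a sample came from, as asked mid-thread above, the dataset can return a source tag alongside each sample; a minimal sketch assuming two map-style datasets (the class name and toy data are illustrative). For the 1:1 sampling question, a WeightedRandomSampler with weights inversely proportional to each source's size is one option.
<SCODE>
import torch
from torch.utils.data import Dataset, DataLoader, TensorDataset

class TaggedConcatDataset(Dataset):
    """Concatenates datasets end to end and tags every sample
    with the index of the dataset it came from."""
    def __init__(self, *datasets):
        self.datasets = datasets

    def __len__(self):
        return sum(len(d) for d in self.datasets)

    def __getitem__(self, i):
        for tag, d in enumerate(self.datasets):
            if i < len(d):
                return d[i], tag  # tag: 0 for A, 1 for B, ...
            i -= len(d)
        raise IndexError(i)

ds_a = TensorDataset(torch.randn(100, 3), torch.zeros(100))
ds_b = TensorDataset(torch.randn(10, 3), torch.ones(10))
loader = DataLoader(TaggedConcatDataset(ds_a, ds_b), batch_size=4, shuffle=True)
(x, y), tags = next(iter(loader))  # tags says which dataset each row is from
<ECODE>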
How can i load coco caption data set using prebuilt tools?
null
[ { "contents": "<SCODE>import torchvision.datasets as dset\nimport torchvision.transforms as transforms\n\ncap = dset.CocoCaptions(root = '../data/train2014',\n annFile = '../data/annotations/captions_train2014.json',\n transform=transforms.ToTensor())\n\nprint('Number of samples: ', len(cap))\nimg, target = cap[3] # load 4th sample\n\nprint(\"Image Size: \", img.size())\nprint(target)\n<ECODE> <SCODE>/Users/yunjey/JupyterProjects/kwtl-pytorch/coco/PythonAPI/pycocotools/coco.py in <module>()\n 53 import copy\n 54 import itertools\n---> 55 from . import mask as maskUtils\n 56 import os\n 57 from collections import defaultdict\n\n/Users/yunjey/JupyterProjects/kwtl-pytorch/coco/PythonAPI/pycocotools/mask.py in <module>()\n 1 __author__ = 'tsungyi'\n 2 \n----> 3 import pycocotools._mask as _mask\n 4 \n 5 # Interface for manipulating masks stored in RLE format.\n\nImportError: No module named _mask\n<ECODE> Is there a simple way to install the coco api to solve this problem? Or, which directory should i clone the coco api repo?", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "I have solved this problem by running make inside the coco/PythonAPI. Still remaining problem is that i should run the code inside coco/PythonAPI, this is inconvenient. (from pycocotools.coco import COCO)", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "Thanks for your reply. I am using Python 2.7.", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "\nThe first problem you came across is the same with https://github.com/pdollar/coco/issues/14 180, which is caused by forgetting make.(The cython was not compiled.)\n \nThe second problem is that this package is not in the path your python script would search in.\n\nYou may solve this running the setup.py in PythonAPI folder to copy this package to your pip folder.\nOr you can add such code in your script```\nimport sys\nsys.path.append(“The path of PythonAPI folder”)\n\n You may solve this running the setup.py in PythonAPI folder to copy this package to your pip folder. Or you can add such code in your script```\nimport sys\nsys.path.append(“The path of PythonAPI folder”) <SCODE><ECODE>", "isAccepted": false, "likes": 1, "poster": "Response777" }, { "contents": "Thank you very much! Your reply helped a lot.", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "It still fails for me. these are the following steps I took", "isAccepted": false, "likes": null, "poster": "Vyvian_Nellira" } ]
false
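A small sketch of the path fix discussed above, so pycocotools can be imported from anywhere after running make; the PythonAPI path is a placeholder for wherever the repo was cloned:
<SCODE>
import sys
sys.path.append('/path/to/coco/PythonAPI')  # placeholder: the folder where `make` was run

from pycocotools.coco import COCO  # should now import outside that directory
<ECODE>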
How to index a tensor with a tensor?
null
[ { "contents": "I want to get something like x[y] = [[1,2],[3,4]] But pytorch tensor do not accept different index matrix shape.", "isAccepted": false, "likes": 1, "poster": "waitwaitforget" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "waitwaitforget" }, { "contents": "We’ll have to add support for this kind of indexing too. For now you have to use gather.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
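A concrete sketch of the gather workaround suggested above, plus the numpy-style advanced indexing that later versions support directly:
<SCODE>
import torch

x = torch.tensor([[1, 2], [3, 4], [5, 6]])

# gather: the index tensor must have the same number of dims as x
idx = torch.tensor([[0, 0], [1, 1]])
print(torch.gather(x, 0, idx))  # tensor([[1, 2], [3, 4]])

# on recent versions, advanced indexing with a differently shaped
# index tensor also works out of the box
y = torch.tensor([0, 1])
print(x[y])  # tensor([[1, 2], [3, 4]])
<ECODE>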
Nan loss in RNN model?
null
[ { "contents": "However, the loss becomes nan after several iterations. <SCODE>import torch.nn as nn\nfrom torch.autograd import Variable\n\nclass RNN(nn.Module):\n def __init__(self, input_size, hidden_size, output_size):\n super(RNN, self).__init__()\n \n self.hidden_size = hidden_size\n \n self.i2h = nn.Linear(input_size + hidden_size, hidden_size)\n self.i2o = nn.Linear(input_size + hidden_size, output_size)\n self.softmax = nn.LogSoftmax()\n \n def forward(self, input, hidden):\n combined = torch.cat((input, hidden), 1)\n hidden = self.i2h(combined)\n output = self.i2o(combined)\n output = self.softmax(output)\n return output, hidden\n\n def initHidden(self):\n return Variable(torch.zeros(1, self.hidden_size))\n<ECODE> <SCODE>import torch.nn as nn\nfrom torch.autograd import Variable\n\nclass RNN(nn.Module):\n def __init__(self, input_size, hidden_size, output_size):\n super(RNN, self).__init__()\n \n self.hidden_size = hidden_size\n \n self.i2h = nn.Linear(input_size + hidden_size, hidden_size)\n self.i2o = nn.Linear(input_size + hidden_size, output_size)\n \n self.softmax = nn.Softmax()\n \n def forward(self, input, hidden):\n combined = torch.cat((input, hidden), 1)\n hidden = self.i2h(combined)\n output = self.i2o(combined)\n \n output = self.softmax(output)\n output = output.add(1e-8)\n output = output.log()\n \n return output, hidden\n\n def initHidden(self):\n return Variable(torch.zeros(1, self.hidden_size).type(dtypeFloat))\n<ECODE> The result is slightly better, but still end up nan", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "Besides, another problem is how to get nan in PyTorch. But the NaN problem above is remained unsolved…", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "The NaNs appear, because softmax + log separately can be a numerically unstable operation.", "isAccepted": false, "likes": 8, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "<SCODE>def cross_entropy(input, target, weight=None, size_average=True):\n r\"\"\"This criterion combines `log_softmax` and `nll_loss` in one single class.\n See :class:`torch.nn.CrossEntropyLoss` for details.\n Args:\n input: Variable :math:`(N, C)` where `C = number of classes`\n target: Variable :math:`(N)` where each value is `0 <= targets[i] <= C-1`\n weight (Variable, optional): a manual rescaling weight given to each\n class. If given, has to be a Variable of size \"nclasses\"\n size_average (bool, optional): By default, the losses are averaged\n over observations for each minibatch. However, if the field\n sizeAverage is set to False, the losses are instead summed\n for each minibatch.\n \"\"\"\n return nll_loss(log_softmax(input), target, weight, size_average)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "Response777" }, { "contents": "I have the same problem with you, and replace cross_entropy with log_softmax + nll_loss doesn’t work. It seems to me that nan is more likely to happen when the network is big; when I try the same architecture on a much smaller scale the nan disappear", "isAccepted": false, "likes": null, "poster": "splinter" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "I further checked my codes and find out that my problem is due to the gradient explosion of RNN. 
This code might help you if the cause is the same as mine, pay attention to the function ‘clip_gradient’", "isAccepted": false, "likes": 9, "poster": "splinter" }, { "contents": "Thanks, that is exactly what I’ve missed!", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "Hi all, I would agree that it is very likely due to the gradient explosion.", "isAccepted": false, "likes": 2, "poster": "YisongMiao" }, { "contents": "<SCODE>loss.backward()\ntorch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)\n\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Iridium_Blue" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "MrAdamJensen" }, { "contents": "I notice that the clip_grad_norm trick appears frequently in Cutting edge solutions like fastai; aside from preventing NaNs it helps to correct a fundamental weakness of any RNN, that of vanishing and exploding gradients. This can greatly improve the models performance.", "isAccepted": false, "likes": null, "poster": "Iridium_Blue" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "AbdulsalamBande" } ]
false
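Putting the fix from the thread above in one place: gradients are clipped between backward() and step(). A minimal sketch of one training step; the LSTM shape and the stand-in loss are just for illustration:
<SCODE>
import torch
import torch.nn as nn

model = nn.LSTM(8, 16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(5, 2, 8)        # (seq_len, batch, features)
out, _ = model(x)
loss = out.pow(2).mean()        # stand-in loss

optimizer.zero_grad()
loss.backward()
# rescale gradients so their global norm is at most 0.5,
# preventing the exploding-gradient NaNs discussed above
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
<ECODE>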
torch.utils.DataLoader causes a RuntimeError
null
[ { "contents": "<SCODE>import torch\nimport torchvision.datasets as dset\nimport torchvision.transforms as transforms\n\ncap = dset.CocoCaptions(root = './data/train2014resized',\n annFile = './data/annotations/captions_train2014.json',\n transform=transforms.ToTensor())\n\nprint('Number of samples: ', len(cap))\nimg, target = cap[3] # this works well\n\ntrain_loader = torch.utils.data.DataLoader(\n cap, batch_size=1, shuffle=False, num_workers=1)\ndata_iter = iter(train_loader)\n\nprint (data_iter.next()) # this returns an error.\n<ECODE> When I ran the code above, i got a huge RuntimeError message. The below is the bottom of the error message. <SCODE>File \"/Users/yunjey/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py\", line 75, in default_collate\n return [default_collate(samples) for samples in transposed]\n File \"/Users/yunjey/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py\", line 71, in default_collate\n elif isinstance(batch[0], collections.Iterable):\n File \"/Users/yunjey/anaconda2/lib/python2.7/abc.py\", line 132, in __instancecheck__\n if subclass is not None and subclass in cls._abc_cache:\n File \"/Users/yunjey/anaconda2/lib/python2.7/_weakrefset.py\", line 75, in __contains__\n return wr in self.data\nRuntimeError: maximum recursion depth exceeded in cmp\n<ECODE> What can i do for solving this problem?", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "Do you have unicode objects in your batches?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I solved this problem by converting return type unicode to string. Could i pull request to solve this problem?", "isAccepted": false, "likes": 1, "poster": "yunjey" }, { "contents": "If you could send a pull request for (1), that would be great. Thank you.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yunjey" } ]
false
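A sketch of the kind of conversion that resolved the thread above: coerce captions to plain str before the default collation sees them, here via a custom collate_fn. The wrapper below is hypothetical, not the fix that was merged:
<SCODE>
import torch
from torch.utils.data import DataLoader

def collate_str(batch):
    # batch is a list of (image_tensor, caption) pairs;
    # force captions to str so collation never recurses into unicode objects
    images, captions = zip(*batch)
    return torch.stack(images), [str(c) for c in captions]

# loader = DataLoader(cap, batch_size=4, collate_fn=collate_str)
<ECODE>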
Where should i implement the preprocessing code for text? (e.g seq2seq learning)
null
[ { "contents": "I want to preprocess the text data, for example, converting each word to index and adding some pads (for seq2seq learning). Is it good way to handle this as below? <SCODE> class MyDataset(torch.data.utils.Dataset):\n def __init__(self):\n self.data_files = os.listdir('data_dir')\n sort(self.data_files)\n\n def __getitem__(self, idx):\n data = load_file(self.data_files[idx])\n data = preprocess_data(data) # preprocess\n return data \n\n def __len__(self):\n return len(self.data_files)\n\n\ndset = MyDataset()\nloader = torch.data.utils.DataLoader(dset, num_workers=8)<ECODE>", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "yes, this is a good way to handle text loading.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "The repo looks really good. Is there a potential timeline on when it might be released on pip?", "isAccepted": false, "likes": null, "poster": "Spider101" } ]
false
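A minimal sketch of the preprocessing pattern above for seq2seq: map words to indices in __getitem__ and pad in a collate_fn. The tiny vocabulary and pad id are illustrative only:
<SCODE>
import torch
from torch.utils.data import Dataset, DataLoader

PAD = 0
vocab = {'<pad>': PAD, 'hello': 1, 'world': 2}

class TextDataset(Dataset):
    def __init__(self, sentences):
        self.sentences = sentences

    def __len__(self):
        return len(self.sentences)

    def __getitem__(self, idx):
        # convert each word to its index here, in the worker process
        return [vocab[w] for w in self.sentences[idx].split()]

def pad_collate(batch):
    # pad every sequence in the batch to the longest one
    length = max(len(s) for s in batch)
    return torch.tensor([s + [PAD] * (length - len(s)) for s in batch])

dset = TextDataset(['hello world', 'hello'])
loader = DataLoader(dset, batch_size=2, collate_fn=pad_collate)
print(next(iter(loader)))  # tensor([[1, 2], [1, 0]])
<ECODE>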
For seq2seq, got any examples like CTC or attention?
vision
[ { "contents": "get layer like CTC or attention?", "isAccepted": false, "likes": null, "poster": "brbchen" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "brbchen" } ]
false
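For readers landing here later: a CTC loss was eventually added to core PyTorch as nn.CTCLoss. A minimal sketch of its expected shapes; the sizes are arbitrary:
<SCODE>
import torch
import torch.nn as nn

T, N, C = 12, 2, 5  # time steps, batch, classes (class 0 is the blank)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)
targets = torch.randint(1, C, (N, 6), dtype=torch.long)   # labels, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 6, dtype=torch.long)

loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()
<ECODE>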
Extension of convolution layer using CFFI
null
[ { "contents": "It seems that I can start from some sources from pytorch/torch/lib/… Could I just know what sources I have to use or a big picture of getting started?", "isAccepted": false, "likes": null, "poster": "Ja-Keoung_Koo" }, { "contents": "What additional information would get you started?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for your information. In /THNN/docs I found a description of module = nn.SpatialConvolution(nInputPlane, nOutputPlane, kW, kH, [dW], [dH], [padW], [padH]) But it seems that there is no SpatialConvolution in THNN/generic. I am just curious:", "isAccepted": false, "likes": null, "poster": "Ja-Keoung_Koo" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "<SCODE>THNN_(col2im)(\n THTensor_(data)(gradColumns),\n nInputPlane, inputHeight, inputWidth, outputHeight, outputWidth,\n kH, kW, padH, padW, dH, dW,\n dilationH, dilationW,\n THTensor_(data)(gradInput_n)\n);\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Zijun_Wei" }, { "contents": "Hi, Best regards Thomas", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Zijun_Wei" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "THNN_Floatim2col(THFloatTensor_data(input_h_w), 1, inputHeight, inputWidth, kH, kW, padH, padW, dH, dW, dilationH, dilationW, THFloatTensor_data(columns)); But when I compile the file, the warning comes out: <SCODE>warning: implicit declaration of function 'THNN_Float_im2col' is invalid in C99 [-Wimplicit-function-declaration]\n THNN_Float_im2col(THFloatTensor_data(input_h_w), 1, inputHeight, inputWidth, kH, kW, padH, padW, dH, dW, dilationH, dilationW, THFloatTensor_data(columns))\n<ECODE> I wonder this might because I need to declare it first. So the question is, what is the full declaration for THNN_Float_im2col? I’m new to c, so if I understand this wrong or there is an other way to solve this problem, please let me know. Many thanks.", "isAccepted": false, "likes": null, "poster": "Zijun_Wei" }, { "contents": "I think the declaration is more <SCODE>static void THNN_Floatim2col(const float* data_im, const int channels,\n const int height, const int width, const int kernel_h, const int kernel_w,\n const int pad_h, const int pad_w,\n const int stride_h, const int stride_w,\n const int dilation_h, const int dilation_w,\n float* data_col);\n<ECODE> or so (by literally copying the declaration in the file and substituting). From the message, you seem to have an extra underscore somewhere. Best regards Thomas", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "Where did you copy this?", "isAccepted": false, "likes": null, "poster": "Zijun_Wei" }, { "contents": "Hi! Best regards Thomas", "isAccepted": false, "likes": 1, "poster": "tom" }, { "contents": "THNN_Floatim2col' has internal linkage but is not defined If I removed the declaration I got: implicit declaration of function 'THNN_Floatim2col' is invalid in C99 I will continue to check if it is possible to get the header files that defines THNN_Floatim2col.", "isAccepted": false, "likes": null, "poster": "Zijun_Wei" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Zijun_Wei" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "111128" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mhubii" } ]
false
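Where a full C extension is overkill, the same extension point exists in Python via torch.autograd.Function; a toy sketch of the mechanism (not the im2col-based convolution discussed above):
<SCODE>
import torch

class ClampedSquare(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)          # stash inputs needed for backward
        return (x * x).clamp(max=1.0)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        grad = 2 * x * grad_output        # d(x^2)/dx = 2x
        grad[(x * x) > 1.0] = 0.0         # zero grad where the clamp was active
        return grad

x = torch.randn(4, requires_grad=True)
ClampedSquare.apply(x).sum().backward()   # populates x.grad
<ECODE>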
Pre-trained VGG16
null
[ { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "srv902" }, { "contents": "We will be generating versioned binaries v0.1.8 soon", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "Will the BN version of VGG available?", "isAccepted": false, "likes": 1, "poster": "panovr" }, { "contents": "no, we dont have plans to include BN version of VGG. IF you plan to add it, we accept pull requests. Thank you.", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
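For reference, torchvision did later ship both variants, including the batch-norm VGG asked about above; a minimal usage sketch (the pretrained flag triggers a weight download):
<SCODE>
import torchvision.models as models

vgg16 = models.vgg16(pretrained=True)
vgg16_bn = models.vgg16_bn(pretrained=True)  # batch-norm variant, added later

# typical fine-tuning setup: freeze the convolutional features
for p in vgg16.features.parameters():
    p.requires_grad = False
<ECODE>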
High GPU Memory Demand for pytorch?
vision
[ { "contents": "Hi, I have a variant of inception v3 network, which is pre-trained and fixed. Now I want to do the inference for that network. I tried both tensorflow and pytorch. I don’t know why this happens. My guess is that tensorflow may not cache the intermediate feature maps in the graphdef mode, but pytorch may do. I also had other problems related to the memory usage. My recent experience suggests that pytorch often uses higher gpu memory than tensorflow. In many cases, I have to reduce batch size, which may or may not solve my problem. Are there any suggestions on how to use pytorch more memory efficiently?", "isAccepted": false, "likes": 7, "poster": "gaoking132" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Thanks for the pointer. I think that my problem might be a little different. I have two networks A and B, the output of A will be fed to network B, and the loss is defined on the top of network B. The network B is fixed and network A is to be optimized. So I need network B to back propagate the gradients from loss to A, but these gradients do not need to be saved since network B doesn’t need to be updated. I am wondering that in this case, can I set all variables in B to be volatile=True? Is there some way for me to know whether the gradients of parameters in B are buffered or not?", "isAccepted": false, "likes": 1, "poster": "gaoking132" }, { "contents": "<SCODE># let's assume input is a Tensor, A and B are networks\n\noptimizerA = optim.SGD(A.parameters(), ...)\n\n# freeze B's params\nfor p in B.parameters():\n p.requires_grad = False\n\niv = Variable(input)\noptimizerA.zero_grad()\nout1 = A(iv)\nout2 = B(out1)\nout2.backward()\noptimizer.step()\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Thanks. This is what I did. One quick question is that if I set requires_grad = False, does it save extra gpu memory for me?", "isAccepted": false, "likes": null, "poster": "gaoking132" }, { "contents": "Yes it saves memory at certain places", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "OK, I felt a little frustrated. I did everything I could to reduce GPU memory. But the usage of my pytorch code is still more than twice memory-consuming than my tensorflow implementation. Because the code is confidential, I cannot release it in public. My network B is an inception v3 network. I found that after I passed the output of A to B, the GPU memory increases 4 GB. The batch size is only about 20.", "isAccepted": false, "likes": null, "poster": "gaoking132" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "if you can give a small script that showcases your code (without giving out your actual code), happy to take a look.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jhjungCode" }, { "contents": "Thank you for all the helpful replies. The code for both pytorch and tensorflow implementation is a little complex, which certainly cannot be used as a fair comparison. I don’t have too much time to extract a small part of the code for debugging purpose. 
Sorry for not providing helpful feedbacks on this thread.", "isAccepted": false, "likes": null, "poster": "gaoking132" }, { "contents": "This is my test codes for comparing pytorch and tensorflow <SCODE>import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torchvision\nfrom torchvision import datasets, transforms\nfrom torch.autograd import Variable\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nis_cuda = torch.cuda.is_available() # if cuda is avaible, True\ntraindir = './flower_photos'\n\nnormalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n\nbatch_size = 16 \ntrain_loader = torch.utils.data.DataLoader(\n datasets.ImageFolder(traindir,\n transforms.Compose([\n transforms.RandomSizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n normalize,])),\n batch_size=batch_size,\n shuffle=True,\n num_workers=4)\n\ncls_num = len(datasets.folder.find_classes(traindir)[0])\n\ntest_loader = torch.utils.data.DataLoader(\n datasets.ImageFolder(traindir,\n transforms.Compose([\n transforms.RandomSizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n normalize,])),\n batch_size=batch_size,\n shuffle=True,\n num_workers=1)\n\nmodel = torchvision.models.resnet152(pretrained = True)\n\n### don't update model parameters\nfor param in model.parameters() :\n param.requires_grad = False\n#modify last fully connected layter\nmodel.fc = nn.Linear(model.fc.in_features, cls_num)\n\nfc_parameters = [\n {'params': model.fc.parameters()},\n]\noptimizer = torch.optim.Adam(fc_parameters, lr=1e-4, weight_decay=1e-4)\nloss_fn = nn.CrossEntropyLoss() \n\nif is_cuda : model.cuda(), loss_fn.cuda()\n# trainning\n\nmodel.train()\ntrain_loss = []\ntrain_accu = []\ni = 0\nfor epoch in range(1):\n for image, target in train_loader:\n image, target = Variable(image.float()), Variable(target) \n if is_cuda : image, target = image.cuda(), target.cuda() \n output = model(image) \n loss = loss_fn(output, target) \n optimizer.zero_grad() \n loss.backward() \n optimizer.step() \n \n pred = output.data.max(1)[1]\n accuracy = pred.eq(target.data).sum()/batch_size\n \n train_loss.append(loss.data[0])\n train_accu.append(accuracy)\n\n if i % 300 == 0:\n print(i, loss.data[0])\n i += 1\n<ECODE> <SCODE>images, _, labels = load_batch(dataset, batch_size=256, height=image_size, width=image_size)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jhjungCode" }, { "contents": "Thanks! We’ll look into that!", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for posting the repro!", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "jhjungCode" }, { "contents": "Thanks, it also solves my problem.", "isAccepted": false, "likes": 1, "poster": "gaoking132" }, { "contents": "Hello! It is supposed to work as -> torch.no_grad() ? Because with torch.no_grad(): my model inferences consumes the same memory as in train… It is normal?", "isAccepted": false, "likes": null, "poster": "Mario_Parreno" } ]
false
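Answering the last question above on a modern PyTorch: torch.no_grad() does stop the autograd buffering that drives inference memory up, but only if the forward pass actually runs inside the context; a minimal sketch:
<SCODE>
import torch
import torchvision.models as models

model = models.resnet18().eval()
x = torch.randn(16, 3, 224, 224)

with torch.no_grad():        # no intermediate buffers are kept for backward
    out = model(x)

print(out.requires_grad)     # False -> nothing was recorded by autograd
<ECODE>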
A Workaround for HalfTensor lacking Stateless Methods
null
[ { "contents": "Hey guys, EDIT: SEE BELOW I’m working on using FP16 in DenseNets where concatenation is a key part of the architecture, but since HalfTensors don’t support stateless methods I can’t call cat(). I’ve thrown a workaround in by instantiating a temporary variable to store the concatenated results and then just dumping to it by indexing, but then I get an error calling backwards() that looks like it comes from the final layer: Is there an easy way to work around this, or is training in half precision not supported yet? I’m on CUDA8.0, cuDNN5105, Ubuntu14.04, pytorch1.9 (the latest version that you get through conda install) and a Maxwell titan X. I’m in a situation where memory costs are more important than speed so I can eat the slowdown of FP16 if it reduces memory useage. Aside, I get an error when calling torch.backends.cudnn.version() if I don’t call torch.backends.cudnn._loadlib() first (the “cuDNN not initialized” error). cuDNN still works, but when trying to check if a source build has succeeded this can be confusing. EDIT: Nevermind, it looks like nothing else other than F.linear has this issue, so switching over to the state’d version (three edits) fixed this and is now running smoothly and quickly in FP16. Dongers up for pytorch, woo! Probably would still make sense to add in a stateless cat() method, though, I suspect the way I’m doing it is inefficient. Thanks, Andy", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "if you could give a repro for the F.linear issue, we’re happy to fix this asap. Open an issue on pytorch/pytorch with the repro.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Any ideas on if there’s a better workaround for the cat() issue would also be appreciated (I’m guessing the reason tensors don’t have a tensor.cat() function attached to them since it would change their shape/memory size?)", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "It’s hard to say what would be a good choice of semantics for tensor.cat() (should it modify the tensor? should it be included in the list?), so we’d rather stick to the stateless one. If the method is implemented in THC, then exposing it is a matter of changing one line in our C wrappers. Otherwise, it will require some changes in the backends.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
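For readers of the thread above on a recent PyTorch: the stateless ops, torch.cat included, accept HalfTensors now, so the indexing workaround is no longer needed; a minimal sketch assuming a CUDA device:
<SCODE>
import torch

a = torch.randn(2, 8, device='cuda').half()
b = torch.randn(2, 8, device='cuda').half()

# DenseNet-style concatenation works directly in fp16
out = torch.cat([a, b], dim=1)
print(out.dtype, out.shape)  # torch.float16 torch.Size([2, 16])
<ECODE>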
Documentation for reinforce()
reinforcement-learning
[ { "contents": "Hi all, I’m having trouble understanding how and when to use a.reinforce(), despite the examples. In particular, why do I need it if I want to implement REINFORCE ? Thanks for your help", "isAccepted": false, "likes": 1, "poster": "cartage" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "You could implement REINFORCE manually, it’s just a convenient way of doing that.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks, that’s just what I needed !", "isAccepted": false, "likes": null, "poster": "cartage" } ]
false
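The .reinforce() method was later removed in favor of torch.distributions, which makes the manual implementation mentioned above explicit; a minimal sketch of the modern equivalent, with the reward standing in for an environment signal:
<SCODE>
import torch
from torch.distributions import Categorical

logits = torch.randn(1, 4, requires_grad=True)  # e.g. a policy net's output
dist = Categorical(logits=logits)
action = dist.sample()

reward = torch.tensor(1.0)                      # placeholder reward
loss = -dist.log_prob(action) * reward          # the REINFORCE objective
loss.sum().backward()                           # populates logits.grad
<ECODE>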
L-bfgs-b and line search methods for l-bfgs
null
[ { "contents": "The current version of lbfgs does not support line search, so simple box constrained is not available. <SCODE>import torch\nfrom functools import reduce\nfrom .optimizer import Optimizer\nfrom math import isinf\n\nclass LBFGSB(Optimizer):\n \"\"\"Implements L-BFGS algorithm.\n\n.. warning::\n This optimizer doesn't support per-parameter options and parameter\n groups (there can be only one).\n\n.. warning::\n Right now all parameters have to be on a single device. This will be\n improved in the future.\n\n.. note::\n This is a very memory intensive optimizer (it requires additional\n ``param_bytes * (history_size + 1)`` bytes). If it doesn't fit in memory\n try reducing the history size, or use a different algorithm.\n\nArguments:\n lr (float): learning rate (default: 1)\n max_iter (int): maximal number of iterations per optimization step\n (default: 20)\n max_eval (int): maximal number of function evaluations per optimization\n step (default: max_iter * 1.25).\n tolerance_grad (float): termination tolerance on first order optimality\n (default: 1e-5).\n tolerance_change (float): termination tolerance on function value/parameter\n changes (default: 1e-9).\n line_search_fn (str): line search methods, currently available\n ['backtracking', 'goldstein', 'weak_wolfe']\n bounds (list of tuples of tensor): bounds[i][0], bounds[i][1] are elementwise\n lowerbound and upperbound of param[i], respectively\n history_size (int): update history size (default: 100).\n\"\"\"\n\ndef __init__(self, params, lr=1, max_iter=20, max_eval=None,\n tolerance_grad=1e-5, tolerance_change=1e-9, history_size=100,\n line_search_fn=None, bounds=None):\n if max_eval is None:\n max_eval = max_iter * 5 // 4\n defaults = dict(lr=lr, max_iter=max_iter, max_eval=max_eval,\n tolerance_grad=tolerance_grad, tolerance_change=tolerance_change,\n history_size=history_size, line_search_fn=line_search_fn, bounds=bounds)\n super(LBFGS, self).__init__(params, defaults)\n\n if len(self.param_groups) != 1:\n raise ValueError(\"LBFGS doesn't support per-parameter options \"\n \"(parameter groups)\")\n\n self._params = self.param_groups[0]['params']\n self._bounds = [(None, None)] * len(self._params) if bounds is None else bounds\n self._numel_cache = None\n\ndef _numel(self):\n if self._numel_cache is None:\n self._numel_cache = reduce(lambda total, p: total + p.numel(), self._params, 0)\n return self._numel_cache\n\ndef _gather_flat_grad(self):\n return torch.cat(\n tuple(param.grad.data.view(-1) for param in self._params), 0)\n\ndef _add_grad(self, step_size, update):\n offset = 0\n for p in self._params:\n numel = p.numel()\n p.data.add_(step_size, update[offset:offset + numel].resize_(p.size()))\n offset += numel\n assert offset == self._numel()\n\ndef step(self, closure):\n \"\"\"Performs a single optimization step.\n\n Arguments:\n closure (callable): A closure that reevaluates the model\n and returns the loss.\n \"\"\"\n assert len(self.param_groups) == 1\n\n group = self.param_groups[0]\n lr = group['lr']\n max_iter = group['max_iter']\n max_eval = group['max_eval']\n tolerance_grad = group['tolerance_grad']\n tolerance_change = group['tolerance_change']\n line_search_fn = group['line_search_fn']\n history_size = group['history_size']\n\n state = self.state['global_state']\n state.setdefault('func_evals', 0)\n state.setdefault('n_iter', 0)\n\n # evaluate initial f(x) and df/dx\n orig_loss = closure()\n loss = orig_loss.data[0]\n current_evals = 1\n state['func_evals'] += 1\n\n flat_grad = 
self._gather_flat_grad()\n abs_grad_sum = flat_grad.abs().sum()\n\n if abs_grad_sum <= tolerance_grad:\n return loss\n\n # variables cached in state (for tracing)\n d = state.get('d')\n t = state.get('t')\n old_dirs = state.get('old_dirs')\n old_stps = state.get('old_stps')\n H_diag = state.get('H_diag')\n prev_flat_grad = state.get('prev_flat_grad')\n prev_loss = state.get('prev_loss')\n\n n_iter = 0\n # optimize for a max of max_iter iterations\n while n_iter < max_iter:\n # keep track of nb of iterations\n n_iter += 1\n state['n_iter'] += 1\n\n ############################################################\n # compute gradient descent direction\n ############################################################\n if state['n_iter'] == 1:\n d = flat_grad.neg()\n old_dirs = []\n old_stps = []\n H_diag = 1\n else:\n # do lbfgs update (update memory)\n y = flat_grad.sub(prev_flat_grad)\n s = d.mul(t)\n ys = y.dot(s) # y*s\n if ys > 1e-10:\n # updating memory\n if len(old_dirs) == history_size:\n # shift history by one (limited-memory)\n old_dirs.pop(0)\n old_stps.pop(0)\n\n # store new direction/step\n old_dirs.append(s)\n old_stps.append(y)\n\n # update scale of initial Hessian approximation\n H_diag = ys / y.dot(y) # (y*y)\n\n # compute the approximate (L-BFGS) inverse Hessian\n # multiplied by the gradient\n num_old = len(old_dirs)\n\n if 'ro' not in state:\n state['ro'] = [None] * history_size\n state['al'] = [None] * history_size\n ro = state['ro']\n al = state['al']\n\n for i in range(num_old):\n ro[i] = 1. / old_stps[i].dot(old_dirs[i])\n\n # iteration in L-BFGS loop collapsed to use just one buffer\n q = flat_grad.neg()\n for i in range(num_old - 1, -1, -1):\n al[i] = old_dirs[i].dot(q) * ro[i]\n q.add_(-al[i], old_stps[i])\n\n # multiply by initial Hessian\n # r/d is the final direction\n d = r = torch.mul(q, H_diag)\n for i in range(num_old):\n be_i = old_stps[i].dot(r) * ro[i]\n r.add_(al[i] - be_i, old_dirs[i])\n\n if prev_flat_grad is None:\n prev_flat_grad = flat_grad.clone()\n else:\n prev_flat_grad.copy_(flat_grad)\n prev_loss = loss\n\n ############################################################\n # compute step length\n ############################################################\n # directional derivative\n gtd = flat_grad.dot(d) # g * d\n\n # check that progress can be made along that direction\n if gtd > -tolerance_change:\n break\n\n # reset initial guess for step size\n if state['n_iter'] == 1:\n t = min(1., 1. 
/ abs_grad_sum) * lr\n else:\n t = lr\n\n # optional line search: user function\n ls_func_evals = 0\n if line_search_fn is not None:\n # perform line search, using user function\n # raise RuntimeError(\"line search function is not supported yet\")\n if line_search_fn == 'weak_wolfe':\n t = self._weak_wolfe(closure, d)\n elif line_search_fn == 'goldstein':\n t = self._goldstein(closure, d)\n elif line_search_fn == 'backtracking':\n t = self._backtracking(closure, d)\n self._add_grad(t, d)\n else:\n # no line search, simply move with fixed-step\n self._add_grad(t, d)\n if n_iter != max_iter:\n # re-evaluate function only if not in last iteration\n # the reason we do this: in a stochastic setting,\n # no use to re-evaluate that function here\n loss = closure().data[0]\n flat_grad = self._gather_flat_grad()\n abs_grad_sum = flat_grad.abs().sum()\n ls_func_evals = 1\n\n # update func eval\n current_evals += ls_func_evals\n state['func_evals'] += ls_func_evals\n\n ############################################################\n # check conditions\n ############################################################\n if n_iter == max_iter:\n break\n\n if current_evals >= max_eval:\n break\n\n if abs_grad_sum <= tolerance_grad:\n break\n\n if d.mul(t).abs_().sum() <= tolerance_change:\n break\n\n if abs(loss - prev_loss) < tolerance_change:\n break\n\n state['d'] = d\n state['t'] = t\n state['old_dirs'] = old_dirs\n state['old_stps'] = old_stps\n state['H_diag'] = H_diag\n state['prev_flat_grad'] = prev_flat_grad\n state['prev_loss'] = prev_loss\n\n return orig_loss\n\ndef _copy_param(self):\n original_param_data_list = []\n for p in self._params:\n param_data = p.data.new(p.size())\n param_data.copy_(p.data)\n original_param_data_list.append(param_data)\n return original_param_data_list\n\ndef _set_param(self, param_data_list):\n for i in range(len(param_data_list)):\n self._params[i].data.copy_(param_data_list[i])\n\ndef _set_param_incremental(self, alpha, d):\n offset = 0\n for p in self._params:\n numel = p.numel()\n p.data.copy_(p.data + alpha*d[offset:offset + numel].resize_(p.size()))\n offset += numel\n assert offset == self._numel()\n\ndef _directional_derivative(self, d):\n deriv = 0.0\n offset = 0\n for p in self._params:\n numel = p.numel()\n deriv += torch.sum(p.grad.data * d[offset:offset + numel].resize_(p.size()))\n offset += numel\n assert offset == self._numel()\n return deriv\n\ndef _max_alpha(self, d):\n offset = 0\n max_alpha = float('inf')\n for p, bnd in zip(self._params, self._bounds):\n numel = p.numel()\n l_bnd, u_bnd = bnd\n p_grad = d[offset:offset + numel].resize_(p.size())\n if l_bnd is not None:\n from_l_bnd = ((l_bnd-p.data)/p_grad)[p_grad<0]\n min_l_bnd = torch.min(from_l_bnd) if from_l_bnd.numel() > 0 else max_alpha\n if u_bnd is not None:\n from_u_bnd = ((u_bnd-p.data)/p_grad)[p_grad>0]\n min_u_bnd = torch.min(from_u_bnd) if from_u_bnd.numel() > 0 else max_alpha\n max_alpha = min(max_alpha, min_l_bnd, min_u_bnd)\n return max_alpha\n\n\ndef _backtracking(self, closure, d):\n # 0 < rho < 0.5 and 0 < w < 1\n rho = 1e-4\n w = 0.5\n\n original_param_data_list = self._copy_param()\n phi_0 = closure().data[0]\n phi_0_prime = self._directional_derivative(d)\n alpha_k = 1.0\n while True:\n self._set_param_incremental(alpha_k, d)\n phi_k = closure().data[0]\n self._set_param(original_param_data_list)\n if phi_k <= phi_0 + rho * alpha_k * phi_0_prime:\n break\n else:\n alpha_k *= w\n return alpha_k\n\n\ndef _goldstein(self, closure, d):\n # 0 < rho < 0.5 and t > 1\n rho = 1e-4\n t 
= 2.0\n\n original_param_data_list = self._copy_param()\n phi_0 = closure().data[0]\n phi_0_prime = self._directional_derivative(d)\n a_k = 0.0\n b_k = self._max_alpha(d)\n alpha_k = min(1e4, (a_k + b_k) / 2.0)\n while True:\n self._set_param_incremental(alpha_k, d)\n phi_k = closure().data[0]\n self._set_param(original_param_data_list)\n if phi_k <= phi_0 + rho*alpha_k*phi_0_prime:\n if phi_k >= phi_0 + (1-rho)*alpha_k*phi_0_prime:\n break\n else:\n a_k = alpha_k\n alpha_k = t*alpha_k if isinf(b_k) else (a_k + b_k) / 2.0\n else:\n b_k = alpha_k\n alpha_k = (a_k + b_k)/2.0\n if torch.sum(torch.abs(alpha_k * d)) < self.param_groups[0]['tolerance_grad']:\n break\n if abs(b_k-a_k) < 1e-6:\n break\n return alpha_k\n\n\ndef _weak_wolfe(self, closure, d):\n # 0 < rho < 0.5 and rho < sigma < 1\n rho = 1e-4\n sigma = 0.9\n\n original_param_data_list = self._copy_param()\n phi_0 = closure().data[0]\n phi_0_prime = self._directional_derivative(d)\n a_k = 0.0\n b_k = self._max_alpha(d)\n alpha_k = min(1e4, (a_k + b_k) / 2.0)\n while True:\n self._set_param_incremental(alpha_k, d)\n phi_k = closure().data[0]\n phi_k_prime = self._directional_derivative(d)\n self._set_param(original_param_data_list)\n if phi_k <= phi_0 + rho*alpha_k*phi_0_prime:\n if phi_k_prime >= sigma*phi_0_prime:\n break\n else:\n alpha_hat = alpha_k + (alpha_k - a_k) * phi_k_prime / (phi_0_prime - phi_k_prime)\n a_k = alpha_k\n phi_0 = phi_k\n phi_0_prime = phi_k_prime\n alpha_k = alpha_hat\n else:\n alpha_hat = a_k + 0.5*(alpha_k-a_k)/(1+(phi_0-phi_k)/((alpha_k-a_k)*phi_0_prime))\n b_k = alpha_k\n alpha_k = alpha_hat\n if torch.sum(torch.abs(alpha_k * d)) < self.param_groups[0]['tolerance_grad']:\n break\n if abs(b_k-a_k) < 1e-6:\n break\n return alpha_k<ECODE>", "isAccepted": false, "likes": 3, "poster": "OCY" }, { "contents": "Could you please submit a PR with the changes?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I made pull request. It is showing some error message Travis-CI though.", "isAccepted": false, "likes": null, "poster": "OCY" }, { "contents": "I’ve been trying to use this code. The _backtracking linesearch method doesn’t take bounds. Both _weak_wolfe and _goldstein take bounds, but don’t obey them. Further, the _max_alpha method breaks unless you are working with both upper and lower bounds. I’m not convinced that it calculates the max_alpha parameter correctly either. I’m trying to work on a fix on my own, but I’m hoping that the author has already fixed these issues and could offer a new version to play around with.", "isAccepted": false, "likes": null, "poster": "rapidsnow" }, { "contents": "Any news related to this topic?", "isAccepted": false, "likes": null, "poster": "magnus_w" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "palimboa" } ]
false
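Independent of the patch proposed above, stock torch.optim.LBFGS eventually gained one built-in line search ('strong_wolfe'); usage always goes through a closure, sketched here on a toy least-squares problem:
<SCODE>
import torch

w = torch.zeros(3, requires_grad=True)
X, y = torch.randn(20, 3), torch.randn(20)

optimizer = torch.optim.LBFGS([w], line_search_fn='strong_wolfe')

def closure():
    # LBFGS may re-evaluate the objective several times per step
    optimizer.zero_grad()
    loss = ((X @ w - y) ** 2).mean()
    loss.backward()
    return loss

for _ in range(5):
    optimizer.step(closure)
<ECODE>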
How can I masked (or index) evaluate a tensor?
null
[ { "contents": "but Thank you!", "isAccepted": false, "likes": null, "poster": "lili.ece.gwu" }, { "contents": "Can anyone help? Thank you!", "isAccepted": false, "likes": null, "poster": "lili.ece.gwu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "lili.ece.gwu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you for your answer, I will have another question and need your help soon!", "isAccepted": false, "likes": null, "poster": "lili.ece.gwu" } ]
false
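A short sketch of masked evaluation on a recent PyTorch, since the thread above never shows code: boolean masks both select and assign, and masked_select returns the flattened selection:
<SCODE>
import torch

x = torch.tensor([[1., -2.], [-3., 4.]])
mask = x > 0

print(x[mask])                        # tensor([1., 4.])
print(torch.masked_select(x, mask))   # same values, always 1-D

x[~mask] = 0.0                        # masked assignment in place
<ECODE>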
Anyone tried the test code?
null
[ { "contents": "Hi there, I was trying to test if my pytorch is installed correctly on a CentOS-based server. I found there are a set of programs under test dir could do the job. But when I try to run run_test.sh, it always gives me the error message like: Anyone know what is going on here? I also tried on my mac, and everything works fine. Thanks!", "isAccepted": false, "likes": null, "poster": "cosmmb" }, { "contents": "what is the version of python on the machine, and how did you install pytorch (wheel or conda)?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I use python 2.7 for both mac and centos. They are all installed with conda.", "isAccepted": false, "likes": null, "poster": "cosmmb" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "thanks!", "isAccepted": false, "likes": null, "poster": "cosmmb" }, { "contents": "ohhh, i think i know what’s happening. <SCODE>git clone https://github.com/pytorch/pytorch\ncd pytorch\ngit checkout v0.1.9\ncd test\n./run_test.sh\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thanks for your help! Now the error is gone.", "isAccepted": false, "likes": null, "poster": "cosmmb" }, { "contents": "<SCODE>ERROR: test_serialization_map_location (__main__.TestTorch)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"test_torch.py\", line 2713, in test_serialization_map_location\n tensor = torch.load(test_file_path, map_location=map_location)\n File \"/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/serialization.py\", line 222, in load\n return _load(f, map_location, pickle_module)\n File \"/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/serialization.py\", line 370, in _load\n result = unpickler.load()\nAttributeError: Can't get attribute '_rebuild_tensor' on <module 'torch._utils' from '/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/site-packages/torch/_utils.py'>\n\n<ECODE> <SCODE>pytorch 0.1.10 py35_1 soumith\n<ECODE> How should I solve this error? Thanks!", "isAccepted": false, "likes": null, "poster": "dl4daniel" }, { "contents": "you are trying to install pytorch via binaries, but run tests for master. See my comment above.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "The error is gone with the following method: I don’t why but there is no error this time. Thanks", "isAccepted": false, "likes": null, "poster": "dl4daniel" }, { "contents": "What if I installed PyTorch using Anaconda (Anaconda Navigator), how can I test the installation?", "isAccepted": false, "likes": null, "poster": "Royi" } ]
false
Implementation of arbitrary differentiable functions
null
[ { "contents": "Is it possible to just use arbitrary differentiable/supported functions to create other functions without having to implement their backward as described in examples? One thing I really liked about TF is how you can just create an arbitrary compute graph of differentiable pieces. It’s not obvious how to do that here unless I’m missing something?", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "I see, so what then is the reason behind “def backward(self, grad_output):” when implementing your own modules? Why isn’t that redundant?", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "It seems that you can’t use numpy constants in such a construct, it leads it being stuck on the CPU : <SCODE>def gelu(x):\n\treturn 0.5 * x * (1 + F.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x*x*x)))\n<ECODE> However if you replace the square root with 0.79788456080 everything works fine. Intentional?", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "What’s the error you’re getting on the GPU? You’re only doing scalar ops from numpy.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Not getting any error. It just seems to never get around to training, GPU isn’t busy and a single CPU core is at 100%.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "And what’s the stack trace once you interrupt the script?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>^CProcess Process-1:\nTraceback (most recent call last):\n File \"/home/rrr/anaconda/lib/python2.7/multiprocessing/process.py\", line 258, in _bootstrap\nTraceback (most recent call last):\n File \"main.py\", line 121, in <module>\n self.run()\n File \"/home/rrr/anaconda/lib/python2.7/multiprocessing/process.py\", line 114, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/rrr/anaconda/lib/python2.7/site-packages/torch/utils/data/dataloader.py\", line 26, in _worker_loop\n output = net(input)\n File \"/home/rrr/anaconda/lib/python2.7/site-packages/torch/nn/modules/module.py\", line 202, in __call__\n r = index_queue.get()\n File \"/home/rrr/anaconda/lib/python2.7/multiprocessing/queues.py\", line 378, in get\n result = self.forward(*input, **kwargs)\n File \"main.py\", line 79, in forward\n x = gelu(self.fc1(x))\n File \"main.py\", line 61, in gelu\n return 0.5 * x * (1 + F.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x*x*x)))\n File \"/home/rrr/anaconda/lib/python2.7/site-packages/torch/autograd/variable.py\", line 818, in __iter__\n return recv()\n File \"/home/rrr/anaconda/lib/python2.7/site-packages/torch/multiprocessing/queue.py\", line 21, in recv\n buf = self.recv_bytes()\nKeyboardInterrupt\n return iter(map(lambda i: self[i], range(self.size(0))))\n File \"/home/rrr/anaconda/lib/python2.7/site-packages/torch/autograd/variable.py\", line 818, in <lambda>\n return iter(map(lambda i: self[i], range(self.size(0))))\n File \"/home/rrr/anaconda/lib/python2.7/site-packages/torch/autograd/variable.py\", line 68, in __getitem__\n return Index(key)(self)\n File \"/home/rrr/anaconda/lib/python2.7/site-packages/torch/autograd/_functions/tensor.py\", line 16, in forward\n result = i.index(self.index)\nKeyboardInterrupt<ECODE>", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": 
null, "poster": "apaszke" }, { "contents": "Well I’m not sure what the problem is but changing np.sqrt(2 / np.pi) to its value immediately fixes the problem. So I can’t say it’s unrelated.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "I think the only fix we can do is to add scalar types to torch. We’ve been talking about that for some time now, and it’s probably going to happen, but rather in some farther future.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
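A simple workaround for the numpy-constant problem discussed above is to precompute the scalar as a plain Python float (e.g. via math.sqrt) so no numpy type ever enters the graph; a sketch:
<SCODE>
import math
import torch

SQRT_2_OVER_PI = math.sqrt(2.0 / math.pi)  # plain Python float, not a numpy scalar

def gelu(x):
    # tanh approximation of GELU, constants as in the snippet above
    return 0.5 * x * (1 + torch.tanh(SQRT_2_OVER_PI * (x + 0.044715 * x * x * x)))

y = gelu(torch.randn(4, 8))
<ECODE>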
How can I get the loss of individual data
null
[ { "contents": "As long as I understand, the loss function can provide the mean loss of batch data or the loss sum of batch data. Currently, to get the individual loss at the evaluation mode, I use the batch size of one. Is there any way to get the loss of each data from the batch while the batch size of larger than one is employed ? Also, why is the loss value of the same input data different at each trial ? At the evaluation mode, are not the parameters fixed ?", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "You can write your own loss function instead of using the built-in one, then you can do every thing you want inside such function.", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Rinku_Jadhav2014" } ]
false
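On current PyTorch the built-in criteria answer the first question directly: reduction='none' returns one loss per sample, so a batch size of one is unnecessary. As for the second question, stochastic layers such as dropout make repeated evaluations differ unless the model is switched to eval() mode. A sketch:
<SCODE>
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss(reduction="none")  # no mean/sum over the batch

logits = torch.randn(5, 10)
targets = torch.randint(0, 10, (5,))
per_sample = criterion(logits, targets)            # shape (5,): one loss per example
print(per_sample)
<ECODE>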
What’s the difference between `torch.nn.functional` and `torch.nn`?
null
[ { "contents": "It seems that there are quite a few similar function in these two modules. What I want to know is if there were any other further difference, say, the efficiency?", "isAccepted": false, "likes": 27, "poster": "Response777" }, { "contents": "", "isAccepted": false, "likes": 20, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": 11, "poster": "Cysu" }, { "contents": "Correct me if I’m wrong. Seems like when using nn.functional net.parameters() won’t find the parameters/weight. you need to specify them explicity", "isAccepted": false, "likes": 2, "poster": "Xixi" }, { "contents": "I think this is correct, using nn.functional is lower-level. 1: you can’t benefit from the nn.sequential, so we have to manually define the parameters. However, you could still use the torch.optim to update the parameters in training.", "isAccepted": false, "likes": null, "poster": "klory" }, { "contents": "I beleive both input and kernel are trainable. Can U explain what is extra in torch.nn or more inherited?? thanks", "isAccepted": false, "likes": null, "poster": "govind_narasimman" } ]
false
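To make the parameter-registration point concrete: functional ops carry no state, so any weights they need are held as nn.Parameter attributes on the module, which keeps them visible to .parameters(); a sketch:
<SCODE>
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super(TinyNet, self).__init__()
        self.fc = nn.Linear(20, 10)                # module layer owns its weights
        self.w = nn.Parameter(torch.randn(5, 10))  # explicit weight for functional use

    def forward(self, x):
        x = F.relu(self.fc(x))       # stateless functional op
        return F.linear(x, self.w)   # functional op fed the registered parameter

net = TinyNet()
print(len(list(net.parameters())))   # 3: fc.weight, fc.bias, w
<ECODE>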
How to preprocess input for pretrained networks?
vision
[ { "contents": "Thank you very much", "isAccepted": true, "likes": 9, "poster": "tsterin" }, { "contents": "", "isAccepted": true, "likes": 18, "poster": "smth" }, { "contents": "Thank you very much!", "isAccepted": true, "likes": null, "poster": "tsterin" }, { "contents": "Hi, it looks like the pixel intensities have been rescaled to [0 1] before normalization. It that right?", "isAccepted": true, "likes": null, "poster": "qianguih" }, { "contents": "", "isAccepted": true, "likes": 4, "poster": "smth" }, { "contents": "I see. Thank you very much!", "isAccepted": true, "likes": null, "poster": "qianguih" }, { "contents": "This is important information, I wonder it’s not put in the doc but in the example code?", "isAccepted": true, "likes": null, "poster": "ecolss" }, { "contents": "Agreed. If it wasn’t for this thread, I would have missed this important Normalization step for sure. It would be nice if it could be added to the documentation.", "isAccepted": true, "likes": 1, "poster": "mehdi-shiba" }, { "contents": "This is pretty key information. Without doing this, and only doing mean centering and stddev normalization of the original Hunsfield units, I need to keep batch normalization enabled during test to see reasonable results from my volumetric segmentation network. This should really be in bold somewhere.", "isAccepted": true, "likes": 3, "poster": "mattmacy" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "youkaichao" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Sphinxs" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Deeply" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Sphinxs" }, { "contents": "Or, it is OK to use list sequences as shown in your case?", "isAccepted": true, "likes": null, "poster": "Deeply" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Sphinxs" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Deeply" }, { "contents": "If I use the pretrained model on ImageNet and fine-tune it on my own dataset, should I re-calculate the mean and std with my own dataset?", "isAccepted": true, "likes": null, "poster": "e3312f50f3ba76f35a60" }, { "contents": "Using the mean and std on ImageNet is pretty standard practice. Since the mean and std are calculated using a million of images, the statistics is pretty stable. Also the pretrained model is trained using the mean and std in ImageNet. I do not recommend changing the mean and std to that on your small dataset.", "isAccepted": true, "likes": 1, "poster": "jdhao" }, { "contents": "Thank you for your comment. I’ve been having that doubt a while ago. So basically what I infer from the comments as a summary is that the best practice is to leverage: in our specific task and dataset to check what’s the best option to normalise.", "isAccepted": true, "likes": null, "poster": "r0mer0m" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Nikronic" } ]
true
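For reference, the preprocessing this thread converges on, written out with torchvision.transforms (the mean/std are the standard ImageNet statistics, applied after ToTensor() scales pixels to [0, 1]):
<SCODE>
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),   # HWC uint8 in [0, 255] -> CHW float in [0, 1]
    normalize,
])
<ECODE>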
Variable(xxx).cuda() and Variable(xxx.cuda()), memory leak?
null
[ { "contents": "Hi, I have been hunting a memory leak on GPU for a few days, and there seems to be a pytorch issue. In the code bellow, I define a dummy Function that does nothing, and just forward through it a large tensor. Depending on .cuda() being inside or outside Variable(), there may or may not be a leak. Did I miss something? <SCODE>import torch\n\nfrom torch import Tensor\nfrom torch.autograd import Variable\nfrom torch.autograd import Function\n\n######################################################################\n# That's not pretty\nimport os\nimport re\ndef cuda_memory():\n f = os.popen('nvidia-smi -q')\n fb_total, fb_used = -1, -1\n for line in f:\n if re.match('^ *FB Memory Usage', line):\n fb_total = int(re.search(': ([0-9]*) MiB', f.readline()).group(1))\n fb_used = int(re.search(': ([0-9]*) MiB', f.readline()).group(1))\n return fb_total, fb_used\n######################################################################\nclass Blah(Function):\n def forward(self, input):\n return input\n######################################################################\n\nblah = Blah()\n\nfor k in range(0, 10):\n x = Variable(Tensor(10000, 200).normal_()).cuda()\n y = blah(x)\n fb_total, fb_used = cuda_memory()\n print(k, fb_used, '/', fb_total)\n\nfor k in range(0, 10):\n x = Variable(Tensor(10000, 200).cuda().normal_())\n y = blah(x)\n fb_total, fb_used = cuda_memory()\n print(k, fb_used, '/', fb_total)\n<ECODE> prints: <SCODE>0 257 / 8113\n1 265 / 8113\n2 265 / 8113\n3 265 / 8113\n4 265 / 8113\n5 265 / 8113\n6 265 / 8113\n7 265 / 8113\n8 265 / 8113\n9 265 / 8113\n0 267 / 8113\n1 267 / 8113\n2 275 / 8113\n3 283 / 8113\n4 291 / 8113\n5 299 / 8113\n6 307 / 8113\n7 315 / 8113\n8 323 / 8113\n9 331 / 8113<ECODE>", "isAccepted": false, "likes": null, "poster": "FrancoisFleuret" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "It is fixed in the git version.", "isAccepted": false, "likes": null, "poster": "FrancoisFleuret" }, { "contents": "Thanks for reminding.", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "I’ll look into that. Thanks for the report.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "This should be now fixed in master. It was a reference cycle, so it wouldn’t get freed until the Python’s GC kicked in.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
How could I build a network structure equivalent to the old nn.Concat or nn.Parallel?
null
[ { "contents": "", "isAccepted": false, "likes": 2, "poster": "Varg_Nord" }, { "contents": "<SCODE>class InceptionBlock(nn.Module):\n def __init__(self, num_in, num_out):\n super(InceptionBlock, self).__init__()\n self.branches = [\n nn.Sequential(\n nn.Conv2d(num_in, num_out, kernel_size=1),\n nn.ReLU()),\n nn.Sequential(\n nn.Conv2d(num_in, num_out, kernel_size=1),\n nn.ReLU(),\n nn.Conv2d(num_out, num_out, kernel_size=3, padding=1),\n nn.ReLU()),\n ...\n ]\n # **EDIT**: need to call add_module\n for i, branch in enumerate(self.branches):\n self.add_module(str(i), branch)\n\n def forward(self, x):\n # Concatenate branch results along channels\n return torch.cat([b(x) for b in self.branches], 1)\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "Cysu" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Cysu" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Oh. That’s perfect! Didn’t aware of this before. Thanks very much.", "isAccepted": false, "likes": null, "poster": "Cysu" } ]
false
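Following up on the add_module workaround in the snippet above: nn.ModuleList registers the branches automatically, which is the tidier route on later versions; a sketch:
<SCODE>
import torch
import torch.nn as nn

class ParallelBranches(nn.Module):
    def __init__(self, num_in, num_out):
        super(ParallelBranches, self).__init__()
        self.branches = nn.ModuleList([  # submodules are registered automatically
            nn.Sequential(nn.Conv2d(num_in, num_out, kernel_size=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(num_in, num_out, kernel_size=3, padding=1), nn.ReLU()),
        ])

    def forward(self, x):
        # concatenate branch results along channels, like the old nn.Concat
        return torch.cat([branch(x) for branch in self.branches], 1)
<ECODE>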
What would be an easy way of retrieving the LSTM output for all layers
null
[ { "contents": "What would be an easy way of retrieving the LSTM output for all layers and for all steps, that is, not only the last layer, as it is done by default?", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "DiffEverything" } ]
false
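One workable pattern for the question above is to stack single-layer nn.LSTM modules by hand and keep each layer's full output sequence; a sketch with made-up sizes:
<SCODE>
import torch
import torch.nn as nn

num_layers, input_size, hidden_size = 3, 16, 32
layers = nn.ModuleList([
    nn.LSTM(input_size if i == 0 else hidden_size, hidden_size)
    for i in range(num_layers)
])

x = torch.randn(10, 4, input_size)   # (seq_len, batch, features)
all_outputs = []
for rnn in layers:
    x, _ = rnn(x)                    # (seq_len, batch, hidden_size)
    all_outputs.append(x)            # every layer's output at every step
<ECODE>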
Directly getting gradients
null
[ { "contents": "Is there a way for me to directly compute the gradient of a variable w.r.t. another variable, like tf.gradients()?", "isAccepted": false, "likes": 6, "poster": "yoonholee" }, { "contents": "<SCODE>import torch\nfrom torch.autograd import Variable\n\nx = Variable(torch.ones(10), requires_grad=True)\ny = x * Variable(torch.linspace(1, 10, 10), requires_grad=False)\ny.backward(torch.ones(10))\nprint(x.grad)\n<ECODE> produces <SCODE>Variable containing:\n 1\n 2\n 3\n 4\n 5\n 6\n 7\n 8\n 9\n 10\n[torch.FloatTensor of size 10]<ECODE>", "isAccepted": false, "likes": 9, "poster": "Tudor_Berariu" }, { "contents": "As far as I can tell, there’s no way of accomplishing this using y.backward(). That’s why I was wondering if there was a standalone function that differentiated.", "isAccepted": false, "likes": 1, "poster": "yoonholee" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "alexis-jacq" }, { "contents": "we are working on enabling this. This will be in the next major pytorch release 2 months away. It wont be a full 2nd derivative but it will be the Hessian-vector product (whether it’s tf.gradients or in pytorch). We are tracking this feature as double-backprop / double backwards.", "isAccepted": false, "likes": 8, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "vsahil" }, { "contents": "Dear PyTorch dev,", "isAccepted": false, "likes": 1, "poster": "BruceShakeham" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yf225" }, { "contents": "Yes it will. Hopefully this will be done in the next weeks !", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "Is this issue has be solved?", "isAccepted": false, "likes": null, "poster": "amitoz" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "Is it working? I tried that with v1.6. Same message with older versions.", "isAccepted": false, "likes": null, "poster": "rodsveiga" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "albanD" } ]
false
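For readers landing here later: the standalone call asked about in this thread has since shipped as torch.autograd.grad, which returns gradients without touching .grad; a sketch:
<SCODE>
import torch

x = torch.ones(10, requires_grad=True)
y = x * torch.linspace(1, 10, 10)

g, = torch.autograd.grad(y, x, grad_outputs=torch.ones(10))
print(g)   # tensor([ 1., 2., ..., 10.]); x.grad is left untouched
<ECODE>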
VGG: Exploding RAM with simple code, bug?
vision
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "tsterin" }, { "contents": "Hi, <SCODE>l = []\nfor i in range(10):\n out = vgg19(Variable(torch.randn([1,3,224,224])))\n l.append(out.data)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tsterin" }, { "contents": "<SCODE>Variable(torch.randn(1,3,22,224), volatile=True)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Thank you very much for your explanation!", "isAccepted": false, "likes": null, "poster": "tsterin" } ]
false
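On current PyTorch, volatile=True has been replaced by the torch.no_grad() context, which likewise stops the graph (the source of the growing memory above) from being recorded; a sketch:
<SCODE>
import torch
from torchvision import models

vgg19 = models.vgg19()
outs = []
with torch.no_grad():   # no history is kept, so memory stays flat
    for _ in range(10):
        outs.append(vgg19(torch.randn(1, 3, 224, 224)))
<ECODE>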
Serializing HalfTensor Nets?
null
[ { "contents": "Digging through the issues and I can’t find anything on this particular instance: I’m trying to serialize a network that I’ve moved to the GPU and cast to HalfTensor. When I call torch.save(net,filename) I get the following error: <SCODE> File \"<stdin>\", line 1, in <module>\n File \"torch/serialization.py\", line 123, in save\n return _save(obj, f, pickle_module, pickle_protocol)\n File \"torch/serialization.py\", line 218, in _save\n _add_to_tar(save_storages, tar, 'storages')\n File \"torch/serialization.py\", line 31, in _add_to_tar\n fn(tmp_file)\n File \"torch/serialization.py\", line 189, in save_storages\n storage_type = normalize_storage_type(type(storage))\n File \"torch/serialization.py\", line 99, in normalize_storage_type\n return getattr(torch, storage_type.__name__)\nAttributeError: 'module' object has no attribute 'HalfStorage'\n<ECODE> I’ve tried casting to float, bringing the network back onto the cpu, and casting into a few other datatypes but it looks like a call to float() doesn’t change the underlying storage type in a way that would make this possible. If I don’t half() the tensor I can still save it just fine, but once I’ve called half() nothing I do apparently changes the underlying storage type back. Any tips (or if this has been fixed in a recent PR that I’m not seeing) would be appreciated. Thanks again for all the help, this has been an extremely pleasant experience thus far. Best, Andy", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "It is, I’m currently just .float().cpu().numpy() 'ing my weights and dumping them to .npz’s like I used to. Thanks for the response!", "isAccepted": false, "likes": null, "poster": "ajbrock" } ]
false
Gradients in variable obtained through narrowing
null
[ { "contents": "<SCODE>import torch\nfrom torch.autograd import Variable\n\nx = Variable(torch.linspace(1, 12, 12).view(3, 4), requires_grad=True)\nx1 = x[:,:2] # x1 is 3 x 2\nx2 = x[:,1:] # x2 is 3 x 3\n\ny1 = 2 * x1\ny2 = 3 * x2\ny1.backward(torch.ones(3, 2))\ny2.backward(torch.ones(3, 3))\n\nprint(x.grad) # This is correct\n\n# Variable containing:\n# 2 5 3 3\n# 2 5 3 3\n# 2 5 3 3\n# [torch.FloatTensor of size 3x4]\n\nprint(x1.grad) # This is zero\n\n# Variable containing:\n# 0 0\n# 0 0\n# 0 0\n# [torch.FloatTensor of size 3x2]<ECODE>", "isAccepted": false, "likes": null, "poster": "Tudor_Berariu" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" } ]
false
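For completeness: x1 and x2 above are non-leaf variables, so their .grad is not populated by design; on current PyTorch, calling retain_grad() on an intermediate asks autograd to keep it; a sketch:
<SCODE>
import torch

x = torch.linspace(1, 12, 12).view(3, 4).requires_grad_()
x1 = x[:, :2]      # non-leaf view of x
x1.retain_grad()   # keep the intermediate gradient around

(2 * x1).sum().backward()
print(x1.grad)     # now filled with 2s instead of staying empty
<ECODE>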
Print Autograd Graph
null
[ { "contents": "Is there a way to visualize the graph of a model similar to what Tensorflow offers?", "isAccepted": false, "likes": 17, "poster": "mattyd2" }, { "contents": "", "isAccepted": false, "likes": 22, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "wangg12" }, { "contents": "I tried your code snippet. However, it doesn’t seem to visualize ResNet correctly… I also tried AlexNet, VGG-19, and same story…", "isAccepted": false, "likes": null, "poster": "zym1010" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wangg12" }, { "contents": "code: <SCODE>%matplotlib inline\nfrom graphviz import Digraph\nimport re\nimport torch\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch.autograd import Variable\nimport torchvision.models as models\n\n\ndef make_dot(var):\n node_attr = dict(style='filled',\n shape='box',\n align='left',\n fontsize='12',\n ranksep='0.1',\n height='0.2')\n dot = Digraph(node_attr=node_attr, graph_attr=dict(size=\"12,12\"))\n seen = set()\n\n def add_nodes(var):\n if var not in seen:\n if isinstance(var, Variable):\n value = '('+(', ').join(['%d'% v for v in var.size()])+')'\n dot.node(str(id(var)), str(value), fillcolor='lightblue')\n else:\n dot.node(str(id(var)), str(type(var).__name__))\n seen.add(var)\n if hasattr(var, 'previous_functions'):\n for u in var.previous_functions:\n dot.edge(str(id(u[0])), str(id(var)))\n add_nodes(u[0])\n add_nodes(var.creator)\n return dot\n\n\ninputs = torch.randn(1,3,224,224)\nresnet18 = models.resnet18()\ny = resnet18(Variable(inputs))\nprint(y)\n\ng = make_dot(y)\ng\n<ECODE> definitely, the result doesn’t seem like a ResNet… similar things happen for AlexNet, VGG, etc.", "isAccepted": false, "likes": 3, "poster": "zym1010" }, { "contents": "Yes, the visualization code is currently broken for convnets because certain layers have C++ implementations that don’t expose the graph pointers to Python. It’ll be fixed after the autograd refactor.", "isAccepted": false, "likes": 1, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "wangg12" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "zym1010" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "varun-suresh" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "Same problem here. 
Any solutions?", "isAccepted": false, "likes": 1, "poster": "platero" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "varun-suresh" }, { "contents": "What’s the key that fails?", "isAccepted": false, "likes": null, "poster": "lantiga" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "<SCODE>from graphviz import Digraph\nimport torch\nfrom torch.autograd import Variable\n\n\ndef make_dot(var, params):\n \"\"\" Produces Graphviz representation of PyTorch autograd graph\n \n Blue nodes are the Variables that require grad, orange are Tensors\n saved for backward in torch.autograd.Function\n \n Args:\n var: output Variable\n params: dict of (name, Variable) to add names to node that\n require grad (TODO: make optional)\n \"\"\"\n param_map = {id(v): k for k, v in params.items()}\n print(param_map)\n \n node_attr = dict(style='filled',\n shape='box',\n align='left',\n fontsize='12',\n ranksep='0.1',\n height='0.2')\n dot = Digraph(node_attr=node_attr, graph_attr=dict(size=\"12,12\"))\n seen = set()\n \n def size_to_str(size):\n return '('+(', ').join(['%d'% v for v in size])+')'\n\n def add_nodes(var):\n if var not in seen:\n if torch.is_tensor(var):\n dot.node(str(id(var)), size_to_str(var.size()), fillcolor='orange')\n elif hasattr(var, 'variable'):\n u = var.variable\n node_name = '%s\\n %s' % (param_map.get(id(u)), size_to_str(u.size()))\n dot.node(str(id(var)), node_name, fillcolor='lightblue')\n else:\n dot.node(str(id(var)), str(type(var).__name__))\n seen.add(var)\n if hasattr(var, 'next_functions'):\n for u in var.next_functions:\n if u[0] is not None:\n dot.edge(str(id(u[0])), str(id(var)))\n add_nodes(u[0])\n if hasattr(var, 'saved_tensors'):\n for t in var.saved_tensors:\n dot.edge(str(id(t)), str(id(var)))\n add_nodes(t)\n add_nodes(var.grad_fn)\n return dot\n\nfrom torchvision import models\ninputs = torch.randn(1,3,224,224)\nresnet18 = models.resnet18()\ny = resnet18(Variable(inputs))\n# print(y)\n\ng = make_dot(y, resnet18.state_dict())\ng.view()\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "hyqneuron" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "hyqneuron" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "elbamos" }, { "contents": "<SCODE>model = crnn.CRNN(32, 1, 37,256, 1).cuda() # it's my model \nprint(model) \n\nCRNN (\n (cnn): Sequential (\n (conv0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (relu0): ReLU (inplace)\n (pooling0): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (relu1): ReLU (inplace)\n (pooling1): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (conv2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (batchnorm2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)\n (relu2): ReLU (inplace)\n (conv3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (relu3): ReLU (inplace)\n (pooling2): MaxPool2d (size=(2, 2), stride=(2, 1), dilation=(1, 1))\n (conv4): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (batchnorm4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n (relu4): ReLU (inplace)\n (conv5): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (relu5): ReLU (inplace)\n (pooling3): MaxPool2d (size=(2, 2), stride=(2, 1), dilation=(1, 1))\n (conv6): 
Conv2d(512, 512, kernel_size=(2, 2), stride=(1, 1))\n (batchnorm6): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n (relu6): ReLU (inplace)\n )\n (rnn): Sequential (\n (0): BidirectionalLSTM (\n (rnn): LSTM(512, 256, bidirectional=True)\n (embedding): Linear (512 -> 256)\n )\n (1): BidirectionalLSTM (\n (rnn): LSTM(256, 256, bidirectional=True)\n (embedding): Linear (512 -> 37)\n )\n )\n)\n<ECODE> <SCODE>inputs = torch.randn(1,3,224,224)\nmodel = crnn.CRNN(32, 1, 37,256, 1).cuda()\n**y = model(Variable(inputs)) # l got an error at this line** \nprint(y)\ng = make_dot(y)\ng \n<ECODE> <SCODE>======= Backtrace: =========\n/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7ff2608057e5]\n/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7ff26080e37a]\n/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7ff26081253c]\n/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/../../libstdc++.so.6(_ZNSt15basic_stringbufIcSt11char_traitsIcESaIcEE8overflowEi+0x13b)[0x7ff24918604b]\n/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/../../libstdc++.so.6(_ZNSt15basic_streambufIcSt11char_traitsIcEE6xsputnEPKcl+0x36)[0x7ff24918a1b6]\n/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/torch/lib/libshm.so(_ZSt16__ostream_insertIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_PKS3_l+0x1c5)[0x7ff23b391235]\n/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/torch/_C.so(+0x5d2842)[0x7ff23bc12842]\n/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/torch/_C.so(+0x5d34ae)[0x7ff23bc134ae]\n/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/torch/_C.so(_ZN5torch2nn33SpatialConvolutionMM_updateOutputEPN4thpp6TensorES3_S3_S3_S3_S3_iiiiii+0xb3)[0x7ff23bc271a3]\n/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/torch/_C.so(+0x5caf27)[0x7ff23bc0af27]\n/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/torch/_C.so(_ZN5torch8autograd11ConvForward5applyERKSt6vectorISt10shared_ptrINS0_8VariableEESaIS5_EE+0x17bf)[0x7ff23bc0f65f]\n/home/ahmed/anaconda3/envs/cv/lib/python2.7/site-packages/torch/_C.so(+0x5c191b)[0x7ff23bc0191b]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7ff2614cee93]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x715d)[0x7ff26158180d]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7ff261583c3e]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8b47)[0x7ff2615831f7]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7ff261583c3e]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(+0x79b68)[0x7ff2614feb68]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7ff2614cee93]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x61d6)[0x7ff261580886]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7ff261583c3e]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(+0x79a61)[0x7ff2614fea61]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7ff2614cee93]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(+0x5c64f)[0x7ff2614e164f]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7ff2614cee93]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(+0xba2ac)[0x7ff26153f2ac]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7ff2614cee93]\n/home/ahmed/anacond
a3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x715d)[0x7ff26158180d]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7ff261583c3e]\n/home/ahmed/anaconda3/envs/cv/bin/../lib/libpython2.7.so.1.0(+0x79b68)[0x7ff2614feb68]\n\n7ff261a9e000-7ff261a9f000 rw-s 133756000 00:06 541 /dev/nvidiactl\n7ff261a9f000-7ff261aa0000 rw-s 9fee8000 00:06 542 /dev/nvidia0\n7ff261aa0000-7ff261aa1000 rw-s 13372f000 00:06 541 /dev/nvidiactl\n7ff261aa1000-7ff261aa2000 rw-s 9fee8000 00:06 542 /dev/nvidia0\n7ff261aa2000-7ff261aa3000 rw-s 133790000 00:06 541 /dev/nvidiactl\n7ff261aa3000-7ff261aa4000 rwxp 00000000 00:00 0 \n7ff261aa4000-7ff261aa6000 rw-p 00000000 00:00 0 \n7ff261aa6000-7ff261aa7000 r--p 00025000 08:01 2101713 /lib/x86_64-linux-gnu/ld-2.23.so\n7ff261aa7000-7ff261aa8000 rw-p 00026000 08:01 2101713 /lib/x86_64-linux-gnu/ld-2.23.so\n7ff261aa8000-7ff261aa9000 rw-p 00000000 00:00 0 \n7fffff3a5000-7fffff3c8000 rwxp 00000000 00:00 0 [stack]\n7fffff3c8000-7fffff3ca000 rw-p 00000000 00:00 0 \n7fffff3ed000-7fffff3ef000 r--p 00000000 00:00 0 [vvar]\n7fffff3ef000-7fffff3f1000 r-xp 00000000 00:00 0 [vdso]\nffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]\nProcess finished with exit code 134 (interrupted by signal 6: SIGABRT)\n<ECODE> Thank you for your help", "isAccepted": false, "likes": null, "poster": "ahmedmazari" } ]
false
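The snippets in this thread predate the autograd refactor; their maintained descendant lives in the third-party torchviz package, which is usually the easier route today; a sketch:
<SCODE>
import torch
from torchviz import make_dot   # pip install torchviz (needs graphviz installed)
from torchvision import models

model = models.resnet18()
y = model(torch.randn(1, 3, 224, 224))

dot = make_dot(y, params=dict(model.named_parameters()))
dot.render("resnet18", format="pdf")   # writes resnet18.pdf
<ECODE>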
Newbie question: What are the prerequisites for running PyTorch with GPU?
null
[ { "contents": "Hi everyone, I’m new to deep learning libraries, so apologies in advance if this is something I’m already supposed to know. I’ve used Theano before but guides for setting up the GPU there were very straightforward, also I was doing this using a WinPy instance on Windows. I come from a MATLAB background where I’m used to being able to play around with the variables and initialize things quickly, so naturally I felt like PyTorch may have more to offer. But there isn’t that much documentation on how to set things up from scratch with PyTorch. I have Anaconda installed on Ubuntu and have tried the basic functionality of PyTorch that way, but I’m not sure how to get it working with CUDA. Any advice would be appreciated!", "isAccepted": false, "likes": 1, "poster": "yenson-lau" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "moskomule" }, { "contents": "if you want to use pytorch with an NVIDIA GPU, all you need to do is install pytorch binaries and start using it. We ship with everything in-built (pytorch binaries include CUDA, CuDNN, NCCL, MKL, etc.). So it’s a one-touch install: Install pytorch from binaries ( http://pytorch.org 1.2k has instructions) Try a simple program with CUDA: <SCODE>import torch\na = torch.randn(10).cuda()\nprint(a)\nprint(a + 2)\n<ECODE>", "isAccepted": false, "likes": 12, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "moskomule" }, { "contents": "<SCODE>a = torch.randn(10).cuda()\n<ECODE> Just to make sure, I updated Pytorch with my Anaconda installation via the following <SCODE>conda install pytorch torchvision cuda80 -c soumith\n<ECODE> Thanks again for the support!", "isAccepted": false, "likes": null, "poster": "yenson-lau" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "yenson-lau" }, { "contents": "Yup. You have to first do the CUDA binaries installation. if the above lines return True then you have Cuda working else not.", "isAccepted": false, "likes": null, "poster": "Sriharsha_Sammeta" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Dhruvrnaik" } ]
false
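The check referred to a few posts up can be reproduced as follows; if the first line prints True, the installed build can see a CUDA GPU:
<SCODE>
import torch

print(torch.cuda.is_available())        # True -> CUDA build + driver found a GPU
if torch.cuda.is_available():
    print(torch.randn(3).cuda() + 2)    # quick end-to-end smoke test on the GPU
<ECODE>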
Numerically unstable problem in matrix multiplication
null
[ { "contents": "<SCODE>X = Variable(torch.randn(2, 30, 400))\nY = Variable(torch.randn(400, 400))\n\n# 1st method\nouts = []\nfor i in range(X.size(0)):\n out = torch.mm(X[i], Y)\n outs.append(out)\nresult1 = torch.stack(outs) # shape of (2, 3, 4)\n\n# 2nd method\nresult2 = X.resize(2*30, 400).mm(Y)\nresult2 = result2.resize(2, 30, 400)\n\n# 3rd method\nresult3 = torch.bmm(X, Y.unsqueeze(0).expand(X.size(0), *Y.size()))\n\nassert np.allclose(result1.data.numpy(), result2.data.numpy()) # this causes an error\nassert np.allclose(result1.data.numpy(), result3.data.numpy())\nassert np.allclose(result2.data.numpy(), result3.data.numpy()) # this causes an error\nassert np.allclose(result2.data.numpy(), result3.data.numpy(), 1e-2) # this doesn't cause an error<ECODE>", "isAccepted": false, "likes": null, "poster": "yunjey" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "fmassa" }, { "contents": "Hi all, I am facing a similar issue: I have to implement a layer from a Theano implementation and reproducing it with the PyTorch framework leads to small numerical instability. Here is the code: <SCODE>import torch\nimport numpy as np\nimport theano\nimport theano.tensor as T\n\ntorch.manual_seed(7)\n\ndef func_pytorch(a, b):\n\n a = a.permute(0,2,1).contiguous().view(-1,400)\n b = b.view(400,-1)\n\n return torch.matmul(a, b).view(100,5,400,10).permute(0,2,3,1)\n\ndef func_theano(a, b):\n return T.tensordot(a, b, [[2], [2]])\n\ndef func_numpy(a, b):\n return np.tensordot(a, b, [[2], [2]])\n\na = torch.randn(100,400,5)\nb = torch.randn(400,400,10)\n\na_p = a.permute(0,2,1)\nb_p = b.permute(2,1,0)\n\nout_pytorch = func_pytorch(a, b)\nout_numpy = np.transpose(func_numpy(a_p,b_p), (0,3,2,1))\n\nout_true = np.transpose(func_theano(a_p, b_p).eval(), (0,3,2,1))\n\nnp.testing.assert_allclose(actual=out_numpy, desired=out_true, rtol=1e-7) # OK\nnp.testing.assert_allclose(actual=out_pytorch, desired=out_true, rtol=1e-7) # 76% mismatch\n<ECODE> Thanks in advance!", "isAccepted": false, "likes": null, "poster": "joelmfonseca" } ]
false
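A note on the tolerances above: the three methods schedule their float32 reductions differently, so bitwise agreement should not be expected; comparing at a realistic tolerance (or computing in float64) makes the checks pass. A sketch using torch.allclose:
<SCODE>
import torch

X = torch.randn(2, 30, 400)
Y = torch.randn(400, 400)

r1 = torch.stack([X[i] @ Y for i in range(X.size(0))])   # per-slice mm
r2 = (X.reshape(-1, 400) @ Y).reshape(2, 30, 400)        # flattened mm

print(torch.allclose(r1, r2, rtol=1e-4, atol=1e-6))      # passes at a float32-realistic tolerance

r1d = torch.stack([X[i].double() @ Y.double() for i in range(X.size(0))])
r2d = (X.double().reshape(-1, 400) @ Y.double()).reshape(2, 30, 400)
print(torch.allclose(r1d, r2d))                          # far tighter agreement in float64
<ECODE>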
Error: ‘DataLoaderIter’ object has no attribute ‘shutdown’
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "vishal1796" }, { "contents": "Without any context, we cannot reply with an answer. What are you looking for, is there a script to reproduce this error?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I am trying to implemen fast neural style. The training function is here <SCODE>for epoch in range(args.epochs):\n for iteration, batch in enumerate(data_loader):\n x = Variable(batch[0])\n x = batch_rgb_to_bgr(x)\n if args.cuda:\n x = x.cuda()\n y_hat = model(x)\n xc = Variable(x.clone())\n optimizer.zero_grad()\n loss = loss_function(args.content_weight, args.style_weight, xc, xs, y_hat)\n loss.backward()\n optimizer.step()\n print(\"===> Epoch[{}]({}/{}): Loss: {:.4f}\".format(epoch, iteration, len(data_loader), loss.data[0]))\n torch.save(model.state_dict(), 'model_{}.pth'.format(epoch))\ntorch.save(model.state_dict(), 'model.pth')\n<ECODE> and the dataloader code <SCODE>train_set = datasets.ImageFolder(args.dataset_path, transform)\ndata_loader = DataLoader(dataset=train_set, num_workers=args.threads, batch_size=args.batchSize, shuffle=True)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "vishal1796" }, { "contents": "You can check this by doing: print(len(train_set)) <SCODE>a.png\ncat/b.png\ndog/c.png\n<ECODE> Hope that helps figure out your issue.", "isAccepted": false, "likes": 3, "poster": "smth" } ]
false
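To make the layout requirement in the last post explicit: ImageFolder wants one subdirectory per class under the root, and a stray image at the top level breaks that assumption. A sketch (the dataset_root path is made up):
<SCODE>
from torchvision import datasets, transforms

# Expected layout:
#   dataset_root/cat/b.png
#   dataset_root/dog/c.png
# A file like dataset_root/a.png sitting at the top level does not fit.
train_set = datasets.ImageFolder("dataset_root", transforms.ToTensor())
print(len(train_set), train_set.classes)   # sanity-check before building the DataLoader
<ECODE>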
Momentum update and in-place operators in optimizers
null
[ { "contents": "In SGD optimizer code, <SCODE> if momentum != 0:\n param_state = self.state[p]\n if 'momentum_buffer' not in param_state:\n param_state['momentum_buffer'] = d_p.clone()\n else:\n buf = param_state['momentum_buffer']\n d_p = buf.mul_(momentum).add_(1 - dampening, d_p)\n\n p.data.add_(-group['lr'], d_p)\n<ECODE> It seems that the update in pytorch is something like this: <SCODE>v = momentum * v + (1-damping) * dp\np = p - lr * v\n<ECODE> I felt just weird that learning rate is multiplied by both terms. Instead, I expect that <SCODE>v = momentum * v + (1-damping) * lr * dp\np = p - v\n<ECODE> I just want to know it is a bug or intended.", "isAccepted": false, "likes": null, "poster": "Ja-Keoung_Koo" }, { "contents": "It’s intended. I think some frameworks use one definition, while others use the second one.", "isAccepted": false, "likes": 3, "poster": "apaszke" } ]
false
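A short note on why both conventions coexist: with a constant learning rate they trace the same parameter trajectory, because the buffers differ only by a factor of lr (they do diverge once the learning rate is scheduled to change). A numerical sketch, with dampening left out:
<SCODE>
lr, mu, g = 0.1, 0.9, 1.0        # toy scalar gradient, held fixed
p1 = p2 = 0.0
v1 = v2 = 0.0
for _ in range(5):
    v1 = mu * v1 + g             # PyTorch style: lr applied at the step
    p1 -= lr * v1
    v2 = mu * v2 + lr * g        # lr folded into the buffer
    p2 -= v2
print(p1, p2)                    # identical trajectories; v2 == lr * v1 throughout
<ECODE>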
What’s the difference between a[0][1] and a[0, 1]?
null
[ { "contents": "<SCODE>\t\trepresentation = Variable(th.zeros(batch_size , max_length , self.HIDDEN_SIZE * 2))\n\n\tfor i in xrange(batch_size):\n\t\tfor j in xrange(length[i]):\n\t\t\trepresentation[i][j] = th.cat((hidden_forward[max_length - length[i] + j][i]\\\n\t\t\t\t, hidden_backward[max_length - 1 - j][i]) , 0)\n\n\treturn representation\n<ECODE> this code yields an error ’ RuntimeError: in-place operations can be only used on variables that don’t share storage with any other variables, but detected that there are 2 objects sharing it’ however, if I replace representation[i][j] with representation[i , j], the code runs just well. I’m wondering what’s the difference between these two ways to mention a particular part of a high-dimension tensor?", "isAccepted": false, "likes": null, "poster": "splinter" }, { "contents": "", "isAccepted": false, "likes": 6, "poster": "smth" } ]
false
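For readers puzzling over the error: a[i][j] performs two indexing operations, so the write targets the temporary produced by a[i], while a[i, j] indexes a once; that intermediate is what the old in-place check objected to. A sketch of the two forms:
<SCODE>
import torch

a = torch.zeros(3, 4)
a[0, 1] = 5      # one indexing op: writes straight into a

tmp = a[0]       # a[0][1] = ... desugars to this two-step form
tmp[1] = 7       # tmp is a view, so this also writes through to a
print(a[0])      # tensor([0., 7., 0., 0.])
<ECODE>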
Detach the loaded model
null
[ { "contents": "Is there a correct way to detach a loaded model, when we don’t need to compute its gradients ? For the moment, I’m doing that: <SCODE>model = torch.load('mymodel.pth')\nfor variable in model.parameters():\n variable.detach_()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "the correct way is to make the model’s parameters not require gradients: <SCODE>model = torch.load('mymodel.pth')\nfor p in model.parameters()\n p.requires_grad = False\n<ECODE> <SCODE>myinput = torch.randn(10, 20) # example input\nmodel = torch.load('mymodel.pth')\ninput = Variable(myinput, volatile=True)\noutput = model(input)\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "Ah! good point for the input! Ok, thanks a lot!", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "NVM - sorry I figured it out my mistake.", "isAccepted": false, "likes": null, "poster": "FuriouslyCurious" } ]
false
Can I use a dictionary in an extension of torch.autograd.Function when using a GPU?
null
[ { "contents": "I have the following code in an extension of torch.autograd.Function: where y has type LongTensor, self.N and self.C have type long. it works fine if I use CPU, but got the following error if GPU is used: is this error because of I am using dictionary and GPU simultaneously or I did something else wrong? Thank you very much!", "isAccepted": false, "likes": 1, "poster": "lili.ece.gwu" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I really appreciate your answer, but the next problem is how I can know the function is run in GPU or CPU, is there any flag indicates it? I tried to pass a bool argument from outside of the function but another error raises. Is it because of the torch.autograd.Function accepts Variable only? if so, how can I determine the function is run in GPU or not? Thank you!", "isAccepted": false, "likes": null, "poster": "lili.ece.gwu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Many thanks!! it works now!!", "isAccepted": false, "likes": null, "poster": "lili.ece.gwu" } ]
false
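To make the device check implied above concrete: tensors carry their placement, so a Function can branch on it without any extra flag being passed in; a sketch:
<SCODE>
import torch

def describe(t):
    # the same test works on the inputs inside a Function's forward
    if t.is_cuda:
        print("on GPU", t.get_device())
    else:
        print("on CPU")

describe(torch.randn(2))              # on CPU
if torch.cuda.is_available():
    describe(torch.randn(2).cuda())   # on GPU 0
<ECODE>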
Pytorch Imagenet Github Example - Training input variable not cuda?
null
[ { "contents": "Why is the input variable not moved to cuda? Thank you!", "isAccepted": false, "likes": null, "poster": "nirvan" }, { "contents": "If your input has to use 2 GPUs, it’s more efficient to send the first half of the input to GPU1 and second half to GPU2, rather than sending the entire input to GPU1, and then sending half of it from GPU1 to GPU2. That’s the reason that the input is not transferred to the GPU at the code location that you pointed out.", "isAccepted": false, "likes": 5, "poster": "smth" } ]
false
GPU failure if training on the primary GPU?
null
[ { "contents": "Hi, We have a machine with 4 Titan X GPUs - but we can only train Pytorch models on the 3 GPUs other than the primary one. As long as we utilize the primary GPU, pytorch would hangs after a few iterations and nvidia-smi would report GPU loss afterwards - only rebooting can recover the machine. We have tried to uninstall X-org from Ubuntu 16 desktop, or re-install Ubuntu 16 server without X. Disconnect display etc. It always causes GPU loss if the training utilizes the primary GPU. We also tried to use MXNET train on 4 GPUs, it goes well without seeing this problem. Any idea why Pytorch cannot train on the primary GPU? Any workaround we could try?", "isAccepted": false, "likes": null, "poster": "ming" }, { "contents": "Hi Ming, This could be because when you are using MXNet, the mxnet install did not end up using cudnn or nccl (for example) and runs fine.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thank you Soumith for the prompt reply! We are using Nvidia 375.26, CUDA 8.0 and CUDNN 5.1 which are the latest. I doubt it’s overheating issue because we can train the model using GPUs 1, 2, 3, but cannot use GPUs 0, 1, 2, where GPU 0 is the primary one. We also see cases that on machine with single GTX 1080 GPU, training pytorch model that maximize usage of GPU memory could cause GPU failure too. In any case, I will try installing pytorch from source and report back result.", "isAccepted": false, "likes": null, "poster": "ming" }, { "contents": "Hi Ming, Were you able to solve the problem? I am also stuck at the same problem…", "isAccepted": false, "likes": null, "poster": "phenom" } ]
false
Convolution operator with groups for taking each convolved result
null
[ { "contents": "In my case, I want to take each convolution output. In traditional convolution networks, if we assume that input channel (R,G,B) is 3 and output channel (J1, J2) is 2, <SCODE>J1 = F1 * R + F2 * G + F3 * B\nJ1 = F4 * R + F5 * G + F6 * B\n<ECODE> where F1,F2,… denotes filters (say 3x3 filters) and * denotes convolution operator. In my case, I want to manipulate the each convolved result. So I want to take the followings, not output images: <SCODE>F1 * R, F2 * G, F3 * B,\nF4 * R, F5 * G, F6 * B\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "Ja-Keoung_Koo" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "Thanks you for kind reply! <SCODE> self.weight = Parameter(torch.Tensor(\n out_channels, in_channels // groups, *kernel_size))<ECODE>", "isAccepted": false, "likes": null, "poster": "Ja-Keoung_Koo" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Thanks to this Q&A, it helps me to understand groups operation.", "isAccepted": false, "likes": 3, "poster": "siki" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "juliohm" }, { "contents": "In a backprop context, you’d use groupwise convolution to either make the model splittable into more than one GPU (since the groups won’t have to “cross” data), or to force the training to split channels into different groupings of features (you can google on this, there are some nice explanations available). In a more general context, you could find any number of applications, lets say for simplicity that you just want to blur each R, G and B channel separately with a blur kernel and want RGB output (and you definitely don’t want to blur R with G etc.).", "isAccepted": false, "likes": null, "poster": "hinken" }, { "contents": "Quick follow-up: What is the order of the channels in the output image? Assuming input is RGB image and output is 6 channels (as in OP) is the order:", "isAccepted": false, "likes": 1, "poster": "Yohan_Sumathipala" }, { "contents": "Let’s create a small example and have a look at the order: <SCODE># Set R=100, G=200, B=300\nx = torch.FloatTensor([100, 200, 300]).view(1, -1, 1, 1)\nx = Variable(x)\n\nconv = nn.Conv2d(in_channels=3,\n out_channels=6,\n kernel_size=1,\n stride=1,\n padding=0,\n groups=3,\n bias=False)\n\n# Set Conv weight to [0, 1, 2, 3, 4 ,5]\nconv.weight.data = torch.arange(6).view(-1, 1, 1, 1)\noutput = conv(x)\nprint(output)\n> Variable containing:\n(0 ,0 ,.,.) = \n 0\n\n(0 ,1 ,.,.) = \n 100\n\n(0 ,2 ,.,.) = \n 400\n\n(0 ,3 ,.,.) = \n 600\n\n(0 ,4 ,.,.) = \n 1200\n\n(0 ,5 ,.,.) = \n 1500\n[torch.FloatTensor of size (1,6,1,1)]\n<ECODE> The only way to get this result is the second guess.", "isAccepted": false, "likes": 12, "poster": "ptrblck" }, { "contents": "Thanks so much, this is super helpful!", "isAccepted": false, "likes": null, "poster": "Yohan_Sumathipala" }, { "contents": "I try the following codes: <SCODE>import torch\nfrom torch import nn\n\n# Set (RGB)*2 = (0, 100, 200), (300, 400, 500)\nx = (torch.arange(6, dtype=torch.double)*100).view(1, -1, 1, 1)\n\nconv = nn.Conv2d(in_channels=6,\n out_channels=2,\n kernel_size=1,\n stride=1,\n padding=0,\n groups=2,\n bias=False)\n\n# Set Conv weight to 1\nconv.weight.data = torch.ones(6, dtype=torch.double).view(2, 3, 1, 1)\n\noutput = conv(x)\n\nprint(output)\n<ECODE> output = (500, 5000). The result seems to be your Guess 2, RR GG BB.", "isAccepted": false, "likes": null, "poster": "ixez" } ]
false
Accessing objective function’s value from optimizer
null
[ { "contents": "\np.data is x\n \np.grad.data is the gradient of f at x\n where <SCODE>for group in self.param_groups:\n for p in group['params']:\n pass\n<ECODE> Is there any good way to get the value f(x) from optimizer? Thank you.", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "moskomule" } ]
false
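For the original question: with closure-based optimizers the objective value never needs to be dug out of param_groups, because step(closure) hands it back; a sketch with a toy model:
<SCODE>
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(4, 1)
opt = torch.optim.LBFGS(model.parameters())
x, y = torch.randn(8, 4), torch.randn(8, 1)

def closure():
    opt.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()
    return loss

fx = opt.step(closure)   # step returns the closure's value, i.e. f(x)
print(fx.item())
<ECODE>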
DataLoader gives double instead of float?
null
[ { "contents": "<SCODE>X_train = torch.from_numpy(X_train)\nY_train = torch.from_numpy(Y_train)\nX_test = torch.from_numpy(X_test)\nY_test = torch.from_numpy(Y_test)\n\n[all dtypes torch.FloatTensor confirmed]\n\ntrain = torch.utils.data.TensorDataset(X_train, Y_train)\ntrainloader = torch.utils.data.DataLoader(train, batch_size=BATCH_SIZE, shuffle=True)\n\n\ntrain_iter = iter(trainloader)\ndata = train_iter.next()\nx, y = data\t\t\nprint y\n<ECODE> And I get torch.DoubleTensor x is torch.FloatTensor", "isAccepted": false, "likes": 6, "poster": "Veril" }, { "contents": "I can’t reproduce your issue. To resolve a problem we need a self-contained snippet that we can run.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The following reproduces this. <SCODE>import numpy as np\n\nimport torch\nimport torch.utils.data\n\n\nX_train = np.random.uniform(-1, 1, (1000,11)).astype(np.float32)\nY_train = np.hstack((np.zeros(500), np.ones(500))).astype(np.float32)\n\nX_train = torch.from_numpy(X_train)\nY_train = torch.from_numpy(Y_train)\n\n\nprint X_train\nprint Y_train\n\n\ntrain = torch.utils.data.TensorDataset(X_train, Y_train)\ntrainloader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)\n\n\ntrain_iter = iter(trainloader)\ndata = train_iter.next()\nx, y = data\t\t\nprint x\nprint y\n<ECODE> Maybe this is intentional but I can’t figure out why make a float into a double in that context, and why only the second one.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "<SCODE>Y_train = torch.from_numpy(Y_train).view(-1, 1)\n<ECODE> <SCODE> return self.data_tensor.narrow(0, index, 1), self.target_tensor.narrow(0, index, 1)\n<ECODE> should solve the issue by always returning Tensor and not numbers. Would this break something I’m not aware of? (Let me know if you want me to send a PR for that)", "isAccepted": false, "likes": 2, "poster": "albanD" }, { "contents": "copy_ doesn’t care if it gets data in shape (batch) or (batch,) ? A quick test shows no changes in loss behavior. <SCODE>input = Variable(torch.FloatTensor(BATCH_SIZE, dims).cuda())\nlabel = Variable(torch.FloatTensor(BATCH_SIZE).cuda())\n\nx, y = train_iter.next()\ninput.data.resize_(x.size()).copy_(x)\nlabel.data.resize_(x.size(0)).copy_(y)<ECODE>", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" } ]
false
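As the replies suggest, the mismatch came from the 1-D target: on the version discussed here, indexing it produced Python numbers that were re-collated as doubles. Keeping the target 2-D sidesteps it; a self-contained sketch:
<SCODE>
import numpy as np
import torch
import torch.utils.data

X = torch.randn(1000, 11)
Y = torch.from_numpy(np.hstack((np.zeros(500), np.ones(500))).astype(np.float32))

ds = torch.utils.data.TensorDataset(X, Y.view(-1, 1))   # 2-D target stays a tensor
loader = torch.utils.data.DataLoader(ds, batch_size=128, shuffle=True)

x, y = next(iter(loader))
print(x.type(), y.type())   # both torch.FloatTensor
<ECODE>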
How to save/load Torch models?
null
[ { "contents": "", "isAccepted": false, "likes": 3, "poster": "Russel_Russel" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "kindlychung" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "Thanks Yun", "isAccepted": false, "likes": 1, "poster": "YunTang" }, { "contents": "state_dict is easy to manipulate and load into other models. it’s also a simpler structure. Your model class definition might also not be picklable (possible in some complicated models) Other than this, it’s a matter of taste.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Is there any work to export Pytorch models to new Apple format (CoreML), or convert to Keras (so it can be used by Apple devices)? Nick", "isAccepted": false, "likes": null, "poster": "Nick_Brandaleone" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "allenye0119" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "rajarsheem" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Please tell me. Does Pytorch save model in h5df format file or another format? What I must do to save dict model in h5df format?", "isAccepted": false, "likes": null, "poster": "kirill-pinigin" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "singleroc" }, { "contents": "When you save the pytorch model in .pth format. The entire model structure will get saved in it right. Again why do we have to keep the .py script along with .pth file? Thank you,", "isAccepted": false, "likes": 1, "poster": "dhanyashri_b_v" } ]
false
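Spelling out the state_dict pattern from this thread end to end (TheModelClass here is a stand-in for whatever class defined the trained model, which is also why its .py must be kept around):
<SCODE>
import torch
import torch.nn as nn

class TheModelClass(nn.Module):        # stand-in for the training-time definition
    def __init__(self):
        super(TheModelClass, self).__init__()
        self.fc = nn.Linear(4, 2)
    def forward(self, x):
        return self.fc(x)

model = TheModelClass()
torch.save(model.state_dict(), "model.pth")   # weights only; the class is not pickled

restored = TheModelClass()                    # the defining code must still exist
restored.load_state_dict(torch.load("model.pth"))
restored.eval()                               # fix dropout/batchnorm for inference
<ECODE>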
Easy way to compute outer products?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "yoonholee" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Cysu" }, { "contents": "Perfect, just what i needed. Thanks!", "isAccepted": false, "likes": null, "poster": "yoonholee" }, { "contents": "Is there a way to do this in batch mode? So if I have two batches of vectors, is there an easy way to compute the (batch) outer products?", "isAccepted": false, "likes": 1, "poster": "vitchyr" }, { "contents": "<SCODE>b = u.size(0)\nm = u.size(1)\nn = v.size(1)\nu.unsqueeze(2).expand(b,m,n)*v.unsqueeze(1).expand(b,m,n)\n<ECODE> Best regards Thomas", "isAccepted": false, "likes": 5, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Avinab_Saha" }, { "contents": "Best regards Thomas", "isAccepted": false, "likes": 5, "poster": "tom" }, { "contents": "Hope this helps!", "isAccepted": false, "likes": 2, "poster": "Rahul_Soni" } ]
false
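One more option for the batched case that avoids the unsqueeze/expand bookkeeping: torch.einsum spells the batch outer product directly; a sketch:
<SCODE>
import torch

u = torch.randn(8, 3)
v = torch.randn(8, 5)

outer = torch.einsum("bi,bj->bij", u, v)   # (8, 3, 5): one outer product per row
# equivalent to u.unsqueeze(2) * v.unsqueeze(1)
# and to torch.bmm(u.unsqueeze(2), v.unsqueeze(1))
<ECODE>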
Higher order gradients?
null
[ { "contents": "Is it possible to compute higher order gradients in pytorch? If so, are there any example projects that show how this should be done?", "isAccepted": false, "likes": null, "poster": "xq11" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" } ]
false
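Since this thread predates double backward landing: it has long since shipped, and second derivatives follow from torch.autograd.grad with create_graph=True; a sketch:
<SCODE>
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3

g, = torch.autograd.grad(y, x, create_graph=True)   # dy/dx = 3x^2, still differentiable
h, = torch.autograd.grad(g, x)                      # d2y/dx2 = 6x
print(g.item(), h.item())                           # 12.0 and 12.0
<ECODE>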
How to use batch normalization when testing a model
null
[ { "contents": "Do I get the right understand?", "isAccepted": false, "likes": 1, "poster": "yichuan9527" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "OK, I understand. Thank you very much!", "isAccepted": false, "likes": null, "poster": "yichuan9527" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "No. It’s never recommended to reuse the same Variables between iterations. This will likely lead to graphs growing indefinitely and increasing memory usage. Just recreate them every time, it’s extremely cheap to do.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "What I did in the WassersteinGAN code is not an optimal approach, i have to fix that code (will do now).", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "<SCODE>YoloV2 (\n (path1): ModuleList (\n (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)\n (2): LeakyReLU (0.1)\n (3): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (5): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)\n (6): LeakyReLU (0.1)\n (7): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (8): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (9): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)\n (10): LeakyReLU (0.1)\n (11): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (12): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)\n (13): LeakyReLU (0.1)\n (14): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (15): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)\n (16): LeakyReLU (0.1)\n (17): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (18): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (19): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)\n (20): LeakyReLU (0.1)\n (21): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (22): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)\n (23): LeakyReLU (0.1)\n (24): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (25): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)\n (26): LeakyReLU (0.1)\n (27): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (28): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (29): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n (30): LeakyReLU (0.1)\n (31): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (32): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)\n (33): LeakyReLU (0.1)\n (34): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (35): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n (36): LeakyReLU (0.1)\n (37): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (38): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)\n (39): LeakyReLU (0.1)\n (40): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), 
bias=False)\n (41): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n (42): LeakyReLU (0.1)\n )\n (parallel1): ModuleList (\n (0): MaxPool2d (size=(2, 2), stride=(2, 2), dilation=(1, 1))\n (1): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)\n (3): LeakyReLU (0.1)\n (4): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (5): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n (6): LeakyReLU (0.1)\n (7): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (8): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)\n (9): LeakyReLU (0.1)\n (10): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (11): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)\n (12): LeakyReLU (0.1)\n (13): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (14): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)\n (15): LeakyReLU (0.1)\n (16): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (17): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)\n (18): LeakyReLU (0.1)\n (19): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (20): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)\n (21): LeakyReLU (0.1)\n )\n (parallel2): ModuleList (\n (0): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)\n (2): LeakyReLU (0.1)\n (3): space_to_depth (\n )\n )\n (path2): ModuleList (\n (0): Conv2d(1280, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True)\n (2): LeakyReLU (0.1)\n (3): Conv2d(1024, 425, kernel_size=(1, 1), stride=(1, 1), bias=False)\n )\n)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "mderakhshani" } ]
false
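Summing the thread up in code: the whole switch is one call per phase, and eval() is what makes batch norm use its stored running statistics at test time; a sketch:
<SCODE>
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())

model.train()   # BN normalizes with batch stats and updates the running estimates
out = model(torch.randn(4, 3, 32, 32))

model.eval()    # BN normalizes with the stored running mean/var
with torch.no_grad():
    out = model(torch.randn(1, 3, 32, 32))
<ECODE>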