Columns: title (string, 15-126 chars) | category (string, 3 classes) | posts (list) | answered (bool, 2 classes)
CUDA alignment error when using DataParallel
null
[ { "contents": "self.conv1 = nn.Conv2d(N.inputChannels, N.outputChannels, N.kernelSquareSize, stride = (1,1), padding = (1,1)); and now, as per the DataParallel documentation, I have: self.conv1 = nn.Conv2d(N.inputChannels, N.outputChannels, N.kernelSquareSize, stride = (1,1), padding = (1,1)); self.conv1 = torch.nn.DataParallel(self.conv1, device_ids = [1, 2]) With this new code however, I am unable to get it to run, and I get the following error: And it is occurring during the (attempted) forward prop in my code… thanks…", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "Can you please give us the parameters of the conv so we can try to reproduce the issue?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Sure, here is my complete snippet: <SCODE> class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n \n self.bn1 = nn.BatchNorm2d(8)\n self.bn2 = nn.BatchNorm2d(16)\n\n self.conv1 = nn.Conv2d(1, 8, 3 ,stride = (1,1), padding = (1,1))\n self.conv1 = torch.nn.DataParallel(self.conv1, device_ids = [1, 2]) \n self.conv2 = nn.Conv2d(8, 16 ,3, stride = (1,1), padding = (1,1))\n //etc ...\n\n def forward_prop(self, x):\n \n x = F.max_pool2d(F.relu(self.bn1(self.conv1(x))), (2,2))\n x = F.max_pool2d(F.relu(self.bn2(self.conv2(x))), (2,2)) \n<ECODE> This is my only change. I will also mention that if I simply remove the torch.nn.DataParallel line in the above, my code runs and trains fine. Thanks,", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "For this question, I am not sure what the relevance is as far as the name goes?.. Perhaps I am missing something - but to be honest I had defined it before in my class and it worked - perhaps I am missing something deeper here? It’s just a name of the forward propagation function that I give… This one is a 100x100 image, single channel, minibatch size of 16.", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Ah right. The problem is that we’re numbering GPUs starting from 0. So all the modules and data is on GPU0, but you’re telling the DataParallel to run on GPU1 and GPU2 (i.e. 2nd and 3rd GPU). Can you change that and see if it helps? 
If that’s it, then we need to improve the error message.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>class Net(nn.Module):\n \n def __init__(self):\n super(Net, self).__init__()\n\n # Define the network\n self.bn1 = nn.BatchNorm2d(8)\n self.bn2 = nn.BatchNorm2d(16) \n self.conv1 = nn.Conv2d(1, 8, 3, stride = (1,1), padding = (1,1))\n self.conv2 = nn.Conv2d(8, 16, 3, stride = (1,1), padding = (1,1)) \n self.conv3 = nn.Conv2d(16, 32, 2, stride = (1,1), padding = (0,0))\n self.conv4 = nn.Conv2d(32, 64, 3, stride = (1,1), padding = (1,1))\n self.conv5 = nn.Conv2d(64, 64, 3, stride = (3,3), padding = (0,0))\n self.fc1 = nn.Linear(256, 32) \n self.fc2 = nn.Linear(32, 16)\n self.fc3 = nn.Linear(16, 2)\n\n def forward_prop(self, x):\n \n # Conv1 with batch norm\n x = F.max_pool2d(F.relu(self.bn1(self.conv1(x))), (2,2))\n\n # Conv2 with batch norm.\n x = F.max_pool2d(F.relu(self.bn2(self.conv2(x))), (2,2)) \n\n # Conv3\n x = F.max_pool2d(F.relu(self.conv3(x)), (2,2)) \n\n # Conv4\n x = F.max_pool2d(F.relu(self.conv4(x)), (2,2))\n\n # Conv5\n x = F.relu(self.conv5(x))\n \n # Flatten the feature map thus far for use in the fully connected layer.\n x = x.view(-1, self.num_flat_features(x))\n \n # Fully connected 1 \n x = F.relu(self.fc1(x))\n\n # Fully connected 2 \n x = F.relu(self.fc2(x))\n\n # Final layer\n x = self.fc3(x)\n\n return x\n\n def num_flat_features(self, x): \n # all dimensions except the batch dimension\n size = x.size()[1:] \n num_features = 1\n for s in size:\n num_features *= s\n return num_features\n<ECODE> So real quick, in the rest of my file, I basically have: <SCODE>net = Net().cuda()\nnet.forward_prop(trainingBatch)\n<ECODE> So instead I wrote a simple net (as above), and would like to use the DataParallel capability here. What would I do differently on this setup as above? thanks again!", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "So the problem was 1 vs 0-based GPU indexing right? We need to fix that, it should never give you an invalid memory access.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>myNet = Net()\nmyNet = torch.nn.DataParallel(myNet, device_ids=[0,1])\nmyNet.cuda()\noutput = myNet(input)\n<ECODE> Is this correct? However there is a subtlety that still confuses me: In the DCGAN example, we have: ", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
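The fix the thread converges on is zero-based GPU indexing plus wrapping the whole model rather than a single conv layer. A minimal sketch of that pattern, assuming a machine with two visible GPUs and a cut-down stand-in for the poster's Net (note that DataParallel calls forward(), so the thread's forward_prop would need that name):
<SCODE>
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 8, 3, stride=1, padding=1)

    def forward(self, x):                      # DataParallel calls forward(), not forward_prop()
        return F.max_pool2d(F.relu(self.conv1(x)), 2)

model = nn.DataParallel(Net(), device_ids=[0, 1])  # GPU ids are zero-based
model.cuda()                                       # parameters live on cuda:0; replicas are made per forward pass

x = torch.randn(16, 1, 100, 100).cuda()            # minibatch of 16 single-channel 100x100 images, as in the thread
y = model(x)                                       # the batch is split across GPUs 0 and 1
<ECODE>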
Build model from repeated template
null
[ { "contents": "<SCODE>class Template(nn.Module):\n def __init__(self):\n super().__init__()\n def forward(self, *input):\n pass\n<ECODE> <SCODE>class BuildModel(nn.Module):\n def __init__(self, number_of_layers):\n super().__init__()\n # self.layer_n = Template() for n in range(0, number_of_layers)\n def forward(self, *input):\n pass\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "trypag" }, { "contents": "<SCODE>b - a - A - B\n<ECODE> <SCODE>c - b - a - A - B - C\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "Just about the nn.Sequential point, I implemented a network with 3 branches in parallel, I am not using nn.Sequential, I just end up summing the variables of my branches, this is pure autograd. Hope it can help", "isAccepted": false, "likes": 1, "poster": "trypag" }, { "contents": "what about <SCODE>def template(num_layers):\n if num_layers == 1:\n return nn.Sequential([a,A])\n else:\n return nn.Sequential([f(num_layers), template(num_layers-1), F(num_layers)])\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "ClementPinard" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Also, we’ll be merging an official solution into the core today or tomorrow.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Oh, that would be great. I would have been mucking around with string based attribute settings, otherwise … which should be just fine, no?", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" } ]
false
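The "official solution" promised at the end of the thread is what shipped as nn.ModuleList (and nn.Sequential over an ordered dict). A minimal sketch of building a model from a repeated template that way; the Template block here is a placeholder, not the poster's real layers:
<SCODE>
import torch
import torch.nn as nn

class Template(nn.Module):
    """Stand-in for the repeated block; replace with the real layers."""
    def __init__(self, width=16):
        super().__init__()
        self.fc = nn.Linear(width, width)

    def forward(self, x):
        return torch.relu(self.fc(x))

class BuildModel(nn.Module):
    def __init__(self, number_of_layers, width=16):
        super().__init__()
        # ModuleList registers every repeated block, so its parameters reach the optimizer
        self.layers = nn.ModuleList([Template(width) for _ in range(number_of_layers)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = BuildModel(number_of_layers=3)
out = model(torch.randn(4, 16))
<ECODE>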
Discussion about datasets and dataloaders
vision
[ { "contents": "However, there has been some issues that I had to solve in order to match my workflow. So I created this topic to either discuss about possible ameliorations in the dataset interface or ameliorations in my own workflow, which i like but may be far from perfect. So to my mind, dataset would be the class where you decide where to take samples, and data loader is the one deciding if we apply specific data augmentation routines or not. <SCODE>dataset = datasets.foo(data, split)\ntrain_loader = torch.utils.data.DataLoader(\n dataset,batch_size,workers,\n transforms=[input_transform,target_transform,co_transform]\n )\ntest_loader = torch.utils.data.DataLoader(\n dataset,batch_size,workers,\n transforms=[input_transform_test,target_transform_test,co_transform_test]\n )\ndataset.train()\nenumerate(train_loader) #*batches from train set with data augmentation\ndataset.eval()\nenumerate(test_loader) #batches from test set without data augmentation\ndataset.train()\nenumerate(test_loader) #batches from train set without data augmentation\n<ECODE> Last problem will be that graphic functions will be involved in a module that is not from vision (because data loaders are from pytorch/utils) but i think vision was separated from the rest because it involved PIL operations which was not necessary for some other problems such as text embedding. But I think if we work with tensors, these transform don’t have to be graphics and could be anything as long as tensors are given in output. sum up of ideas co_transforms splitted dataset dynamic samplers random sampling without replacement for epoch size < dataset size attach transformations to data loaders add tensor related image loading and transformations to avoid numpy HxWxC to CxHxW conversion", "isAccepted": false, "likes": 12, "poster": "ClementPinard" }, { "contents": "Do you have an update on this especially the data split with dataloader for train, validation, and test?", "isAccepted": false, "likes": null, "poster": "shaun" }, { "contents": "same question, Do you have an update on this especially the data split with dataloader for train, validation, and test? Im about to do the same thing and dont want to write a new piece of code for this.", "isAccepted": false, "likes": null, "poster": "deepcode" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "chsasank" }, { "contents": "The main logic of the code is as follows: Generate a train dataset and test dataset using the argument train in torchvision.datasets.XYZ where XYZ is your desired data (i.e. CIFAR10 or ImageNet). Figure out the length of your validation set num_valid. If say you want 10% of your training data to be used for validation, then you would multiply the total length of the training set by 0.1. Create a list of indices of size num_train, shuffle it, and then slice num_valid indices from it and call it valid_idx and store the rest in train_idx. Feed these indices to separate instances of SubsetRandomSampler. Finally feed these 2 samplers to torch.utils.data.DataLoader using the sampler argument. And voila!", "isAccepted": false, "likes": 3, "poster": "kevinzakka" } ]
false
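A compact version of the split procedure described in the last post, assuming CIFAR10 and a 10% validation fraction (on recent releases SubsetRandomSampler is importable from torch.utils.data):
<SCODE>
import numpy as np
from torch.utils.data import DataLoader, SubsetRandomSampler
from torchvision import datasets, transforms

train_set = datasets.CIFAR10('data', train=True, download=True,
                             transform=transforms.ToTensor())

num_train = len(train_set)
indices = np.arange(num_train)
np.random.shuffle(indices)

num_valid = int(0.1 * num_train)                    # hold out 10% for validation
valid_idx, train_idx = indices[:num_valid], indices[num_valid:]

train_loader = DataLoader(train_set, batch_size=64,
                          sampler=SubsetRandomSampler(train_idx))
valid_loader = DataLoader(train_set, batch_size=64,
                          sampler=SubsetRandomSampler(valid_idx))
<ECODE>
If train and validation need different transforms, the usual workaround is two dataset instances over the same files, each fed to its own loader and sampler.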
Output of RNN is not contiguous
null
[ { "contents": "I would expect the output of RNN to be contiguous in memory. This doesn’t seem to be the case. For instance, the final output in this snippet has output.is_contiguous() == False. <SCODE>train = True\nnum_layers = 1\nbidirectional = True\nbi = 2 if bidirectional else 1\n\nx = Variable(torch.from_numpy(_x), volatile=not train)\nbatch_size, seq_length, input_dim = x.size()\n\nrnn = nn.LSTM(input_dim, model_dim / bi, num_layers,\n batch_first=True,\n bidirectional=bidirectional,\n )\n\nh0 = Variable(torch.zeros(num_layers * bi, batch_size, model_dim / bi), volatile=not train)\nc0 = Variable(torch.zeros(num_layers * bi, batch_size, model_dim / bi), volatile=not train)\n\nprint(x.is_contiguous())\n# True\n\n# Expects (input, h_0):\n# input => batch_size x seq_length x model_dim\n# h_0 => (num_layers x bi[1,2]) x batch_size x model_dim\n# c_0 => (num_layers x bi[1,2]) x batch_size x model_dim\noutput, (hn, cn) = self.encode(x, (h0, c0))\n\nprint(output.is_contiguous())\n# False<ECODE>", "isAccepted": false, "likes": null, "poster": "mrdrozdov" }, { "contents": "Yeah I think that’s expected. Depending on the chosen backend, a contig or non-contig result may be returned. Why is that a problem?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Okay. Noticed the same behavior for on cpu/gpu. I don’t have a specific problem, but assumed that if input is contiguous then output should/would be as well. Thanks for the response!", "isAccepted": false, "likes": null, "poster": "mrdrozdov" }, { "contents": "No, I don’t think we’ve every guaranteed that. I’ll take a look at RNNs anyway, thanks for the notice!", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
RNN for generating time series
null
[ { "contents": "I’m trying to modify the world_language_model example to generate a time series. My naive approach was to replace the softmax output with a single linear output layer, and change the loss function to MSELoss. Unfortunately, my network seems to learn to output the current input, instead of predicting the next sample. So when I try to generate a new time series, the network is soon stuck at a fixed point. Any suggestions on how to improve my model? Here’s my code: <SCODE>#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n'''\nTrain a LSTM network to generate a time series.\n'''\n\nimport argparse\nimport collections\nimport csv\nimport math\nimport pickle\nimport time\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\n\ndef read_arguments():\n parser = argparse.ArgumentParser(description='Train a recurrent network to generate a time series.')\n parser.add_argument('--data', type=str, default='data.txt',\n help='data file to read (CSV)')\n parser.add_argument('--model', type=str, default='LSTM',\n help='type of recurrent net (RNN_TANH, RNN_RELU, LSTM, GRU)')\n parser.add_argument('--nhid', type=int, default=100,\n help='humber of hidden units per layer')\n parser.add_argument('--nlayers', type=int, default=2,\n help='number of layers')\n parser.add_argument('--lr', type=float, default=.05,\n help='initial learning rate')\n parser.add_argument('--clip', type=float, default=5,\n help='gradient clipping')\n parser.add_argument('--epochs', type=int, default=10,\n help='upper epoch limit')\n parser.add_argument('--batch-size', type=int, default=10, metavar='N',\n help='batch size')\n parser.add_argument('--bptt', type=int, default=375,\n help='sequence length')\n parser.add_argument('--checkpoint-interval', type=int, default=10, metavar='N',\n help='interval to save intermediate models')\n parser.add_argument('--save', type=str, default='model',\n help='path to save the final model')\n args = parser.parse_args()\n return args\n\n\nclass RNNModel(nn.Module):\n \"\"\"Container module with an encoder, a recurrent module, and a decoder.\"\"\"\n\n def __init__(self, rnn_type, nhid, nlayers):\n super(RNNModel, self).__init__()\n self.rnn = getattr(nn, rnn_type)(1, nhid, nlayers)\n self.output = nn.Linear(nhid, 1)\n\n self.init_weights()\n\n self.rnn_type = rnn_type\n self.nhid = nhid\n self.nlayers = nlayers\n\n def init_weights(self):\n initrange = 0.1\n self.output.bias.data.fill_(1.0)\n self.output.weight.data.uniform_(-initrange, initrange)\n\n def forward(self, input, hidden):\n output_lstm, hidden = self.rnn(input, hidden)\n output = self.output(output_lstm.view(output_lstm.size(0)*output_lstm.size(1), output_lstm.size(2)))\n return output.view(output_lstm.size(0), output_lstm.size(1), output.size(1)), hidden\n\n def init_hidden(self, bsz):\n weight = next(self.parameters()).data\n if self.rnn_type == 'LSTM':\n return (Variable(weight.new(self.nlayers, bsz, self.nhid).zero_()),\n Variable(weight.new(self.nlayers, bsz, self.nhid).zero_()))\n else:\n return Variable(weight.new(self.nlayers, bsz, self.nhid).zero_())\n\n\ndef flatten(l):\n for el in l:\n if isinstance(el, collections.Iterable) and not isinstance(el, str):\n for sub in flatten(el):\n yield sub\n else:\n yield el\n \n \ndef batchify(data, bsz):\n nbatch = data.size(0) // bsz\n data = data.narrow(0, 0, nbatch * bsz)\n data = data.view(bsz, -1).t().contiguous()\n if torch.cuda.is_available():\n data = data.cuda()\n return data \n\n\ndef load_data(filename, batch_size):\n '''\n Load a training data 
sequence from a CSV file\n '''\n with open(filename) as csvfile:\n csvreader = csv.reader(csvfile)\n data = list(csvreader)\n data = torch.Tensor([float(x) for x in flatten(data)])\n\n train_length = math.ceil(len(data) * .7)\n val_length = math.ceil(len(data) * .2)\n train_data = data[:train_length]\n val_data = data[train_length:train_length+val_length]\n test_data = data[train_length+val_length:]\n return batchify(train_data, batch_size), batchify(val_data, batch_size), batchify(test_data, batch_size)\n \n###############################################################################\n# Training code\n###############################################################################\n\ndef clip_gradient(model, clip):\n \"\"\"Computes a gradient clipping coefficient based on gradient norm.\"\"\"\n totalnorm = 0\n for p in model.parameters():\n modulenorm = p.grad.data.norm()\n totalnorm += modulenorm ** 2\n totalnorm = math.sqrt(totalnorm)\n return min(1, clip / (totalnorm + 1e-6))\n\n\ndef repackage_hidden(h):\n \"\"\"Wraps hidden states in new Variables, to detach them from their history.\"\"\"\n if type(h) == Variable:\n return Variable(h.data)\n else:\n return tuple(repackage_hidden(v) for v in h)\n\n\ndef get_batch(source, i, seq_length, evaluation=False):\n seq_len = min(seq_length, len(source) - 1 - i)\n data = Variable(source[i:i+seq_len].view(seq_len, -1, 1), volatile=evaluation)\n target = Variable(source[i+1:i+1+seq_len].view(-1))\n return data, target\n\n\ndef evaluate(data_source, model, criterion, batch_size, seq_length):\n total_loss = 0\n hidden = model.init_hidden(batch_size)\n for i in range(0, data_source.size(0) - 1, seq_length):\n data, targets = get_batch(data_source, i, seq_length, evaluation=True)\n output, hidden = model(data, hidden)\n total_loss += len(data) * criterion(output, targets).data\n hidden = repackage_hidden(hidden)\n return total_loss[0] / len(data_source)\n\n\ndef train(train_data, model, criterion, lr, batch_size, seq_length, grad_clip):\n total_loss = 0\n hidden = model.init_hidden(batch_size)\n for batch, i in enumerate(range(0, train_data.size(0) - 1, seq_length)):\n data, targets = get_batch(train_data, i, seq_length)\n hidden = repackage_hidden(hidden)\n model.zero_grad()\n output, hidden = model(data, hidden)\n loss = criterion(output, targets)\n loss.backward()\n\n clipped_lr = lr * clip_gradient(model, grad_clip)\n for p in model.parameters():\n p.data.add_(-clipped_lr, p.grad.data)\n\n print('.', end='', flush=True)\n total_loss += loss.data\n return total_loss[0] / batch\n\ndef save_model(model, name, checkpoint = ''):\n filename = name + str(checkpoint) + '.pt'\n print('Saving model to', filename)\n with open(filename, 'wb') as f:\n torch.save(model, f)\n \n\ndef main():\n args = read_arguments()\n print(\"Loading data...\")\n ecg_data, val_data, test_data = load_data(args.data, args.batch_size)\n print(\"Building network ...\")\n ###############################################################################\n # Build the model\n ###############################################################################\n model = RNNModel(args.model, args.nhid, args.nlayers)\n if torch.cuda.is_available():\n print('Using CUDA')\n model.cuda()\n\n criterion = nn.MSELoss()\n\n print(\"Training network ...\")\n try:\n lr = args.lr\n ci = args.checkpoint_interval\n filename = args.save\n print('Learning rate {:.5f}'.format(lr))\n prev_loss = None\n for epoch in range(1, args.epochs+1):\n epoch_start_time = time.time()\n train_loss = train(ecg_data, 
model, criterion, lr, args.batch_size, args.bptt, args.clip)\n val_loss = evaluate(val_data, model, criterion, args.batch_size, args.bptt)\n print()\n print('-' * 89)\n print('| end of epoch {:3d} | time: {:5.2f}s | '\n 'train loss {:5.2f} | val loss{:5.2f}'.format(epoch, (time.time() - epoch_start_time),\n train_loss, val_loss))\n print('-' * 89)\n if not (epoch % ci):\n save_model(model, filename, epoch)\n if prev_loss and val_loss > prev_loss:\n lr /= 4\n print('New learning rate {:.5f}'.format(lr))\n prev_loss = val_loss\n except KeyboardInterrupt:\n pass\n finally:\n print()\n test_loss = evaluate(test_data, model, criterion, args.batch_size, args.bptt)\n print('=' * 89)\n print('| End of training | test loss {:5.2f} |'.format(\n test_loss))\n print('=' * 89)\n save_model(model, filename)\n\nif __name__ == '__main__':\n main()<ECODE>", "isAccepted": false, "likes": null, "poster": "AndreaCogliati" }, { "contents": "It’s hard to tell why it doesn’t learn anything. There are a lot of factors that can cause it, and they depend on the data, preprocessing, model, etc. Maybe it’s one of the things I mentioned below, maybe something else. If the expected output given the input is equal to the input, and the network can’t find any patterns in your data, it’s quite logical that the only thing it can do to optimize the loss is to simply return what it got. Another thing is that if the outputs barely differ from the inputs, the loss value will be very small even if the network returns the input.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "drscotthawley" }, { "contents": "<SCODE>#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n'''\nSample from an LSTM network to generate a time series.\n'''\n\nimport argparse\nimport time\nimport math\n\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\nfrom train import RNNModel\n\ndef read_arguments():\n parser = argparse.ArgumentParser(description='Sample from a recurrent network to generate a signal.')\n parser.add_argument('--checkpoint', type=str, default='./model.pt',\n help='model checkpoint to use')\n parser.add_argument('--outf', type=str, default='generated.txt',\n help='output file for generated signal') \n parser.add_argument('--length', type=int, default='12500',\n help='length of the output signal')\n args = parser.parse_args()\n return args\n\n\ndef main():\n args = read_arguments()\n print(\"Loading model...\")\n with open(args.checkpoint, 'rb') as f:\n model = torch.load(f)\n if torch.cuda.is_available():\n model.cuda()\n else:\n model.cpu()\n\n print('Sampling...')\n try:\n hidden = model.init_hidden(1)\n output = Variable(torch.zeros(1,1,1), volatile=True)\n # output.data[0] = 0.0\n if torch.cuda.is_available():\n output.data = output.data.cuda()\n with open(args.outf, 'w') as outf:\n for _ in range(args.length):\n output, hidden = model(output, hidden)\n outf.write('{:.5f}\\n'.format(output.squeeze().data.cpu()[0]))\n except KeyboardInterrupt:\n pass\n\nif __name__ == '__main__':\n main()<ECODE>", "isAccepted": false, "likes": null, "poster": "AndreaCogliati" }, { "contents": "I took the code that you posted and I found that, for my data, the predicted solution moves rapidly to zero. 
I trained with a simple sine wave (black) and got the red line when I ran your prediction code… (this is zoomed in towards the beginning… the training data goes on for 50,000 timesteps)", "isAccepted": false, "likes": null, "poster": "drscotthawley" }, { "contents": "Perhaps that would be helpful for those more in-the-know, as far as making the PyTorch implementation match the Torch implementation. I’m not at the stage where I can do that yet. I don’t see a Sequencer() layer defined in PyTorch though; perhaps it’s not necessary.", "isAccepted": false, "likes": null, "poster": "drscotthawley" }, { "contents": "Thanks for your interest. Yes, that’s the very same behavior I’m observing, in general. With longer training, and longer backpropagation through time I was able to generate simple sinusoidal signals and even a square wave, but with more complex signals the network always ends up to a fixed point (not necessarily zero). And thanks for listing the post by Element Research. It looks like that my naive model is not completely off-track.", "isAccepted": false, "likes": null, "poster": "AndreaCogliati" }, { "contents": "Can you please explain what is the purpose of your read_argumnets function? What does the argparser do here (I always see it in all pytorch examples? Also, what is an application of generating time series data (I mean how would that be useful)? Please excuse my limited knowledge, as I am still new to deep_learning. Thanks for the help.", "isAccepted": false, "likes": null, "poster": "Russel_Russel" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "AndreaCogliati" }, { "contents": "What kind of “more complex signals” are you having trouble with?", "isAccepted": false, "likes": null, "poster": "csarofeen" }, { "contents": "For instance a combination of 3 sinusoids.", "isAccepted": false, "likes": null, "poster": "AndreaCogliati" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "osm3000" }, { "contents": "No, I didn’t, sorry. However, I haven’t spent much on it lately, since I have been busy on other projects.", "isAccepted": false, "likes": null, "poster": "AndreaCogliati" }, { "contents": "Can someone comment on this? It’s odd when even the example code doesn’t work.", "isAccepted": false, "likes": null, "poster": "drscotthawley" }, { "contents": "I get not perfect but reasonable predictions (the following image is after 25 steps) once the MSE is around 1e-5 to 3e-5. Sometimes the amplitudes vary more wildly, but that might be expected. What is the MSE you get? Best regards Thomas", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "So, what is it that you’re doing that’s different from the rest of us? One thing seems to be that you’re running for more steps (25): the example stops at 15. But I don’t think that would make a difference, because the loss seems to “flatten out” fairly early, e.g. by step 3. Output is… …and all the predict*.pdf files from predict2.pdf onward look like the graph I posted.", "isAccepted": false, "likes": null, "poster": "drscotthawley" }, { "contents": "Hello Scott, Best regards Thomas", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "drscotthawley" }, { "contents": "Best regards Thomas", "isAccepted": false, "likes": 1, "poster": "tom" }, { "contents": "Thanks Thomas! That’s it. I confirm that your code yields the pictured result. So glad to be able to move on. 
I’m also able to run the example code now that your pull request has been accepted. I’ll consider my part of this thread as “closed”.", "isAccepted": false, "likes": null, "poster": "drscotthawley" } ]
false
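The hand-rolled clip_gradient() and manual parameter update in the training script above predate the built-in utilities; on current releases the same step is usually written with an optimizer plus torch.nn.utils.clip_grad_norm_. A self-contained sketch with a stand-in model, not a rewrite of the whole script:
<SCODE>
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=1, hidden_size=100, num_layers=2)   # same shapes as the script's defaults
head = nn.Linear(100, 1)
params = list(rnn.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.05)
criterion = nn.MSELoss()

x = torch.randn(375, 10, 1)        # (seq_len, batch, features), matching --bptt and --batch-size above
target = torch.randn(375, 10, 1)

optimizer.zero_grad()
out, _ = rnn(x)
loss = criterion(head(out), target)
loss.backward()
torch.nn.utils.clip_grad_norm_(params, max_norm=5.0)   # replaces the hand-rolled clip_gradient()
optimizer.step()
<ECODE>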
Compiling an Extension with CUDA files
null
[ { "contents": "<SCODE>#include <THC/THC.h>\n\nextern THCState *state;\n\nint my_lib_add_forward_cuda(THCudaTensor *input, THCudaTensor *output)\n{\n float * pinput = THCudaTensor_data(state, input);\n float * poutput = THCudaTensor_data(state, output);\n for(...)\n {\n poutput[i] = do_something(pinput[i]);\n }\n return 1;\n}\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "longcw" }, { "contents": "No, you cannot access CUDA memory like this.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thank you for your reply.", "isAccepted": false, "likes": null, "poster": "longcw" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "longcw" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you very much. It is helpful.", "isAccepted": false, "likes": null, "poster": "longcw" }, { "contents": "Hello, I have a question about compiling an extension with cuda files. I get the following error: <SCODE>distutils.errors.UnknownFileError: unknown file type '.cu'\n<ECODE> when I include .cu files in the sources: <SCODE>import os\nimport torch\nimport glob\nfrom torch.utils.ffi import create_extension\n\nthis_file = os.path.dirname(__file__)\n\nsources = ['../Library/~.cpp',\n '../Library/~.cpp',\n '../Library/~.cu']\nheaders = ['../Library/~.h']\nhere = os.path.abspath(os.path.dirname(__file__))\nlib_dir = os.path.join(here, '..', 'Library')\ninclude_dirs = [\n os.path.join(lib_dir, '~'),\n os.path.join(lib_dir, 'Math'),\n]\ndefines = [('WITH_CUDA', None)]\nwith_cuda = True\n\nffi = create_extension(\n '_CUDA.~',\n headers=headers,\n sources=sources,\n define_macros=defines,\n relative_to=__file__,\n with_cuda=with_cuda,\n include_dirs = include_dirs,\n extra_compile_args=[\"-fopenmp\"]\n)\n\nif __name__ == '__main__':\n ffi.build()\n from _CUDA import ~\n print ~.__dict__\n<ECODE> Am I doing something wrong?", "isAccepted": false, "likes": null, "poster": "lupoglaz" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>#!/usr/bin/env bash\n\nCUDA_PATH=/usr/local/cuda/\n\ncd layers/reorg/src\necho \"Compiling reorg layer kernels by nvcc...\"\nnvcc -c -o reorg_cuda_kernel.cu.o reorg_cuda_kernel.cu -x cu -Xcompiler -fPIC -arch=sm_52\n\ncd ../\npython build.py\n<ECODE> <SCODE>import os\nimport torch\nfrom torch.utils.ffi import create_extension\n\n\nsources = ['src/reorg_cpu.c']\nheaders = ['src/reorg_cpu.h']\ndefines = []\nwith_cuda = False\n\nif torch.cuda.is_available():\n print('Including CUDA code.')\n sources += ['src/reorg_cuda.c']\n headers += ['src/reorg_cuda.h']\n defines += [('WITH_CUDA', None)]\n with_cuda = True\n\nthis_file = os.path.dirname(os.path.realpath(__file__))\n# print(this_file)\nextra_objects = ['src/reorg_cuda_kernel.cu.o']\nextra_objects = [os.path.join(this_file, fname) for fname in extra_objects]\n\nffi = create_extension(\n '_ext.reorg_layer',\n headers=headers,\n sources=sources,\n define_macros=defines,\n relative_to=__file__,\n with_cuda=with_cuda,\n extra_objects=extra_objects\n)\n\nif __name__ == '__main__':\n ffi.build()\n<ECODE> Hope it helps you.", "isAccepted": false, "likes": 2, "poster": "longcw" }, { "contents": "Sorry for the inconvenience, we’re going to be fixing that in the future.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>CUDA_ADD_LIBRARY(~ STATIC c~CUDAKernels.cu)\nTARGET_LINK_LIBRARIES(~)\n<ECODE> and then link them as an extra link argument: 
<SCODE>...\nextra_link_args=['~.a']\nffi = create_extension(\n '~',\n headers=headers,\n sources=sources,\n define_macros=defines,\n relative_to=__file__,\n with_cuda=with_cuda,\n include_dirs = include_dirs,\n extra_compile_args=[\"-fopenmp\"],\n extra_link_args = extra_link_args\n)\n...\n<ECODE> This workaround does work with a static library. I had problems trying to link a shared library this way (haven’t tried enough though).", "isAccepted": false, "likes": null, "poster": "lupoglaz" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ThibaultGROUEIX" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "chen_dongdong" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Jiang_He" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "GriffinLiang" } ]
false
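The nvcc-then-ffi recipe above was the standard workaround for torch.utils.ffi; later releases added torch.utils.cpp_extension, which hands .cu sources to nvcc itself. A hedged sketch of the setup.py route (file and extension names are placeholders, not the thread's actual sources):
<SCODE>
# setup.py (names below are placeholders for your own sources and extension)
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name='reorg_layer',
    ext_modules=[
        CUDAExtension(
            name='reorg_layer._ext',
            sources=['src/reorg_cuda.cpp', 'src/reorg_cuda_kernel.cu'],  # .cu files go to nvcc automatically
        ),
    ],
    cmdclass={'build_ext': BuildExtension},
)
<ECODE>
For quick experiments, torch.utils.cpp_extension.load() compiles the same sources just-in-time without a setup.py.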
Understand mark_dirty()
null
[ { "contents": "So I read the inline documentation about mark_dirty() here: I don’t quite understand what extra checks are needed for inplace operators. Would be great if the devs can give some hints. Thanks!", "isAccepted": false, "likes": null, "poster": "yzhu" }, { "contents": "If you are doing an in-place operation, and further operate on the original Tensor, the backward gradients might be wrong. Let’s take a small example:", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "yzhu" } ]
false
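A minimal custom Function that modifies its input in place and declares it with mark_dirty(), so autograd knows the tensor's storage was overwritten; the operation is just an illustrative in-place add, following the pattern from the extending-autograd docs:
<SCODE>
import torch

class InplaceAdd(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, value):
        x.add_(value)        # overwrite the input tensor's storage
        ctx.mark_dirty(x)    # declare the in-place modification to autograd
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None   # d(x + value)/dx = 1; no gradient for the plain float

a = torch.randn(3, requires_grad=True)
b = a.clone()                 # a non-leaf copy we are allowed to modify in place
c = InplaceAdd.apply(b, 1.0)
c.sum().backward()
print(a.grad)                 # tensor([1., 1., 1.])
<ECODE>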
Sampled softmax loss
null
[ { "contents": "Hi, Does sampled softmax loss exist in pytorch? I cound not find it. Thanks", "isAccepted": false, "likes": 1, "poster": "beegii" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "beegii" }, { "contents": "We don’t have any such thing in the core. We’ll need to add them. Thanks for the pointer!", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ngimel" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "beegii" }, { "contents": "EDIT: sorry, I see that original link is to a page with a number of different softmax approximations, and NCE is one of them. I personally would be more interested in sampled softmax, as it tends to work better for me.", "isAccepted": false, "likes": 1, "poster": "lopuhin" }, { "contents": "You may also be interested in this implementation: Giving very good results for the LM task.", "isAccepted": false, "likes": null, "poster": "vince62s" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "beegii" }, { "contents": "Any updates on this?? It probably isn’t a priority…but I secretly wish PyTorch has a big development team like TensorFlow and can add these functionalities easily!", "isAccepted": false, "likes": null, "poster": "windweller" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "windweller" } ]
false
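Not sampled softmax itself, but the closest approximation that later landed in core is adaptive softmax, exposed as nn.AdaptiveLogSoftmaxWithLoss. A small sketch with made-up sizes:
<SCODE>
import torch
import torch.nn as nn

# Made-up sizes: 128-dim hidden states over a 50k-word vocabulary, split into frequency clusters.
crit = nn.AdaptiveLogSoftmaxWithLoss(in_features=128, n_classes=50000,
                                     cutoffs=[2000, 10000], div_value=4.0)

hidden = torch.randn(32, 128)                    # a batch of RNN outputs
target = torch.randint(0, 50000, (32,))
result = crit(hidden, target)
print(result.loss)                               # scalar training loss
<ECODE>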
Leaf variable was used in an inplace operation
null
[ { "contents": "", "isAccepted": false, "likes": 5, "poster": "OswinLG" }, { "contents": "Loosely, tensors you create directly are leaf variables. Tensors that are the result of a differentiable operation are not leaf variables For example: <SCODE>w = torch.tensor([1.0, 2.0, 3.0]) # leaf variable\nx = torch.tensor([1.0, 2.0, 3.0], requires_grad=True) # also leaf variable\ny = x + 1 # not a leaf variable\n<ECODE> An in-place operation is something which modifies the data of a variable. For example: <SCODE>x += 1 # in-place\ny = x + 1 # not in place\n<ECODE> <SCODE>x2 = x.clone() # clone the variable\nx2 += 1 # in-place operation\n<ECODE> <SCODE>with torch.no_grad():\n x += 1\n<ECODE>", "isAccepted": false, "likes": 57, "poster": "colesbury" }, { "contents": "Is it bad practice to get around pytorch disallowing in-place operations by assigning to Variable().data?", "isAccepted": false, "likes": null, "poster": "henrye" }, { "contents": "Best regards Thomas", "isAccepted": false, "likes": 7, "poster": "tom" }, { "contents": "Yes, I remember now, messing with backprop and autograd was why I was running into problems with in-place assignment before. Using .data as I am currently for initialising word embeddings seems ok then.", "isAccepted": false, "likes": null, "poster": "henrye" }, { "contents": "Hi, Trying to read the code for optim (I want to implement something a bit differently) and your previous example/explanation of what is a leaf Variable doesn’t seem to be valid anymore. <SCODE>y = x + 1 # not a leaf variable\n<ECODE> Well, here’s output from my termial for the code which you have mentioned: <SCODE>>>> x = torch.autograd.Variable(torch.Tensor([1, 2, 3, 4]))\n>>> x.is_leaf\nTrue\n>>> y = x + 1\n>>> y.is_leaf\nTrue\n>>> y\nVariable containing:\n 2\n 3\n 4\n 5\n[torch.FloatTensor of size (4,)]\n<ECODE> So, can someone please explain what is a leaf Variable, and what is not a leaf variable? Clearly a non-leaf-variable cannot be optimized, but what is it?", "isAccepted": false, "likes": 1, "poster": "Yoni_Keren" }, { "contents": "<SCODE>Came across a similar issue. Reason is requires grad.\n\nx = torch.autograd.Variable(torch.Tensor([1, 2, 3, 4]), requires_grad=True)\nx.is_leaf \n#True\ny = x + 1\ny.is_leaf\n#False<ECODE>", "isAccepted": false, "likes": 7, "poster": "Ed_Beeching" }, { "contents": "Hi, I used to create leaf variable like: y = torch.autograd.Variable(torch.zeros([batch_size, c, h, w]), requires_grad=True) Then I want to assign value to indexed parts of y like below,(y_local is a Variable computed based on other variables and I want to assign the value of y_local to part of the y and ensure that the gradients from y can flow to the y_local.) y.data[:,:,local_x[i]:local_x[i+1],local_y[i]:local_y[i+1]] = y_local.data I am wondering such operation supports the normal gradient backward for the y_local varible?", "isAccepted": false, "likes": 2, "poster": "11185" }, { "contents": "I am also facing similar issue. Please let me know how were you able to resolve it", "isAccepted": false, "likes": null, "poster": "shivangi" }, { "contents": "leaf variable, in essence, is a variable, or a tensor with requires_grad=True. 
So, if a tensor with requires_grad=False, it does not belong to the variable, let alone leaf variable.", "isAccepted": false, "likes": null, "poster": "dreamyun" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "klory" }, { "contents": "Could you please explain why it’s not correct?", "isAccepted": false, "likes": 1, "poster": "klory" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "danmoller" }, { "contents": "I had a similar case. I used: <SCODE>y = torch.zeros([batch_size, c, h, w]), requires_grad=False)\n<ECODE> then I update the value of y according to the value of the network output and then apply a loss function on y and it worked for me.", "isAccepted": false, "likes": null, "poster": "nima_rafiee" }, { "contents": "Is the reason this is not “usually correct” because we could have just initialized it directly with the data that we wanted in the first place instead of doing a in-place op?", "isAccepted": false, "likes": null, "poster": "pinocchio" }, { "contents": "I also agree it shouldn’t be a leaf but Pytorch disagrees with us…why?", "isAccepted": false, "likes": null, "poster": "pinocchio" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "colesbury" }, { "contents": "<SCODE>def inplace_playground():\n import torch\n\n x = torch.tensor([1,2,3.], requires_grad=True)\n y = x + 1\n print(f'x.is_leaf = {x.is_leaf}')\n print(f'y.is_leaf = {y.is_leaf}')\n x += 1\n<ECODE> output: <SCODE>x.is_leaf = True\ny.is_leaf = False\n<ECODE> Thanks!", "isAccepted": false, "likes": null, "poster": "pinocchio" }, { "contents": "Why not? What were the competing semantics? What’s the difficulty in defining the semantics for leafs + in-place ops?", "isAccepted": false, "likes": null, "poster": "pinocchio" }, { "contents": "Perhaps, you could try adding the following code before updating the gradient: <SCODE>with torch.no_grad():\n<ECODE> and it works: <SCODE>N, D_in, H, D_out = 64, 1000, 100, 10\n\nX = torch.randn(N, D_in).cuda()\ny = torch.randn(N, D_out).cuda()\n\n# device = torch.cuda.device('cuda:0')\ndevice = torch.device('cuda:0')\nw1 = torch.randn(D_in, H, requires_grad=True, device=device)\nw2 = torch.randn(H, D_out, requires_grad=True, device=device)\n\nlearning_rate = 1e-6\nfor it in range(10):\n # 1.forward pass\n y_pred = X.mm(w1).clamp(min=0).mm(w2).cuda()\n # 2.compute loss\n loss = (y_pred - y).pow(2).sum().cuda()\n print(f'iter {it}, loss {loss}')\n # Backward pass\n loss.backward()\n # update weights of w1 and w2\n with torch.no_grad():\n w1 -= learning_rate * w1.grad\n w2 -= learning_rate * w2.grad\n w1.grad.zero_()\n w2.grad.zero_()\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "jeremysun1224" } ]
false
Updating PyTorch
null
[ { "contents": "I just wanted to pin this topic, so that it can be used for future reference. <SCODE>conda config --add channels soumith\n<ECODE> <SCODE>conda update pytorch torchvision\n<ECODE> <SCODE>$HOME/anaconda3/lib/python3.5/site-packages/torch\n<ECODE> <SCODE>$HOME/anaconda3/lib/python3.5/site-packages/torchvision-0.1.6-py3.5.egg/torchvision/__init__.py\n<ECODE> but such location does not exist. Nevertheless, my IDE can find those files in that location, when exploring the source code. <SCODE>/data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485445763020/work/torch/lib/THNN/generic/SpatialConvolutionMM.c\n<ECODE>", "isAccepted": false, "likes": 9, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "About 2, that’s kind of not what I see here. <SCODE>>>> f = open(torchvision.__file__)\nTraceback (most recent call last):\n File \"/home/atcold/anaconda3/lib/python3.5/site-packages/IPython/core/interactiveshell.py\", line 2881, in run_code\n exec(code_obj, self.user_global_ns, self.user_ns)\n File \"<ipython-input-95-16f43d841fa3>\", line 1, in <module>\n f = open(torchvision.__file__)\nNotADirectoryError: [Errno 20] Not a directory: '/home/atcold/anaconda3/lib/python3.5/site-packages/torchvision-0.1.6-py3.5.egg/torchvision/__init__.py'\n>>> f2 = open(torch.__file__)\n# succeed\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Oh, maybe I’ve figured it out… <SCODE>atcold@GPU0 ~/anaconda3/lib/python3.5/site-packages $ ls -daF torch*\ntorch/ torch-0.1-py3.5.egg-info/ torchvision-0.1.6-py3.5.egg torchvision.pth\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "Oh yeah, I think it’s zipped in an egg.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "this is no longer working. PackageNotFoundError: Package not found: ‘pytorch’", "isAccepted": false, "likes": 1, "poster": "Hubert" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "antspy" }, { "contents": "<SCODE>conda install pytorch torchvision -c pytorch\n<ECODE>", "isAccepted": false, "likes": 6, "poster": "jdhao" }, { "contents": "For me this did not work. It always tried to downgrade my pytorch and torchvision to some rather ancient version. I installed it again in a separate environment using pip then which worked.", "isAccepted": false, "likes": null, "poster": "omair-kg" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jdhao" }, { "contents": "Tried that aswell. So far only Pip seems to work for me.", "isAccepted": false, "likes": null, "poster": "omair-kg" }, { "contents": "This worked for me! The following NEW packages will be INSTALLED: <SCODE>cudatoolkit: 8.0-3\n<ECODE> The following packages will be UPDATED: <SCODE>pytorch: 0.2.0-py36h53baedd_4cu80 soumith [cuda80] --> 0.3.0-py36_cuda8.0.61_cudnn7.0.3h37a80b5_4 pytorch\ntorchvision: 0.1.9-py36h7584368_1 soumith --> 0.2.0-py36h17b6947_1 pytorch<ECODE>", "isAccepted": false, "likes": 5, "poster": "manozzm" }, { "contents": "If the dependencies are not met, conda will not update to recent pytorch. 
What I did was the following.", "isAccepted": false, "likes": 4, "poster": "shehabk" }, { "contents": "<SCODE>conda config --add channels pytorch\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Atcold" }, { "contents": "<SCODE>>>> import torch\n>>> import os\n>>> os.path.isfile(torch.__file__)\nTrue\n>>> os.path.abspath(torch.__file__)\n'/Users/sebastian/miniconda3/lib/python3.6/site-packages/torch/__init__.py'\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "rasbt" }, { "contents": "Hey guys. What is the recommended way of upgrading to PyTorch V1.0 using conda? After adding the pytorch channel and trying to update this message appears: # All requested packages already installed. Any suggestions?", "isAccepted": false, "likes": null, "poster": "stoddur" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ptrblck" }, { "contents": "Thanks for your reply. How does one go about installing the 1.0 preview? After installing the nightly version from source I end up with version 0.4.1", "isAccepted": false, "likes": null, "poster": "stoddur" } ]
false
Will pytorch be supported on Windows?
null
[ { "contents": "As the problem said, will pytorch be supported on Windows?", "isAccepted": false, "likes": null, "poster": "cumttang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" } ]
false
How to get the df_do as we used in torch7
null
[ { "contents": "<SCODE> optimizer.zero_grad()\n output = model(data)\n loss = criterion(output, target)\n loss.backward()\n optimizer.step()\n<ECODE> So all the network is back-propagated when I call loss.backward() ?", "isAccepted": false, "likes": null, "poster": "dafang_He" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "dafang_He" }, { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "dafang_He" }, { "contents": "You need to make the hook a closure - just redefine it at every step, and use the data in its body: <SCODE>optimizer.zero_grad()\n\noutput = model(data)\ndef hook(d_output):\n return d_output * data.mean() # you can use the data here\noutput.register_hook(hook)\n\nloss = criterion(output, target)\nloss.backward()\noptimizer.step()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Model.zero_grad only fills the grad of parameters with 0
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for the explanations, but if I don’t fill other variable.grad.data to 0, will the grad of parameters that depended on those variables be wrongly estimated? Keeping tracking of all the variables and setting their grad to 0 properly seems quite error prone.", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "No, why would it matter? Gradient of parameters is not a function of the gradient w.r.t. some other Variable.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Do you mean whatever the module is, as long as it’s inherited from base nn.Module class, simply calling the default model.zero_grad() is enough? I mean the error can pass from other Variable to the parameter, if some Variable’s grad is unintentionally accumulated from the last run, the gradient of parameters may also be wrong.", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE> def __init__(self):\n self.out = torch.autograd.Variable(torch.zeros(timestep, batchsize, \n self.W_decode.size()[1]) , requires_grad=True)\n\n def forward(self, input, state=None):\n # input is timesetep*N\n batchsize, timestep = input.size()[1], input.size()[0]\n vec_input = input.view(-1)\n emb = torch.index_select(self.W_emb, 0, vec_input).view(timestep,batchsize, -1) # emb = N*ninp\n inp = matmul(emb, self.W_rnn)\n state = torch.autograd.Variable( torch.zeros(inp.size()[1:])) if state is None else state\n for step in range(inp.size()[0]):\n this_input = inp[step] # N * nhid\n this_input = torch.addmm(this_input, state, self.U_rnn)\n state = F.tanh(this_input + self.b_rnn.expand_as(this_input) )\n self.out[step] = torch.addmm(self.out[step], state, self.W_decode)\n self.out[step] = F.softmax(self.out[step] + self.b_decode.expand_as(out[step]) )\n return self.out\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "<SCODE>def __init__(self):\n self.out = get_new_out()\n\ndef forward(self, input):\n ...\n next_out = get_new_out()\n for step in range(inp.size(0)):\n next_out[step] = ... # an expression containing self.out\n self.out = next_out\n return next_out\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "No, you’re not going to feel any difference in speed. CPU allocations are fast and we have a custom CUDA allocator that caches the memory, so it’s also very fast. I don’t know how you define the safety of Module.zero_grad - it does what it’s meant to do, i.e. zero the grad of parameters.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Also, as I said, if you don’t reallocate the output your graphs will never be freed and that will blow up the memory. 
Don’t cache intermediate things too long, PyTorch philosophy is quite different from Lua torch.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>from torch.autograd import Variable\nx = Variable(torch.ones(2, 2), requires_grad = True)\ny = x + 2 \nz = y * y * 3\nout = z.mean()\nout.backward(retain_variables=True)\nprint(x.grad)\n<ECODE> I get <SCODE>Variable containing:\n 4.5000 4.5000\n 4.5000 4.5000\n[torch.FloatTensor of size 2x2]\n<ECODE> <SCODE>Variable containing:\n 0 0\n 0 0\n[torch.FloatTensor of size 2x2]\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Many thanks, very helpful!", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
Unrolling adversarial networks
null
[ { "contents": "Hi, Is the point that we simulate optimisation of the discriminator for K steps, then use the final error at D to optimise the parameters of D from before we started unrolling? Does this mean we have to store a copy of D’s parameters before we start the unrolling procedure? Does this mean that we end up discarding all the optimisation changes that occurred during unrolling? (Excluding the fact that they are implicitly bundled into the final output from the unrolling). Also, do we unroll for both real and fake data through D, or just fake data? Any help with this would be very much appreciated, cheers!", "isAccepted": false, "likes": null, "poster": "Jordan_Campbell" }, { "contents": "I don’t really know how unrolled GANs work, but as far as I remember they require taking gradient of functions of another gradient, and we don’t support that yet. It’s on our roadmap and we’re actively working on that, but it might take us some time.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Jordan_Campbell" }, { "contents": "this is supported now right? How can it be done?", "isAccepted": false, "likes": null, "poster": "prolearner" }, { "contents": "Best regards Thomas", "isAccepted": false, "likes": null, "poster": "tom" }, { "contents": "Does it support recurrent modules using Pytorch 1.0 now ?", "isAccepted": false, "likes": null, "poster": "lewiskit" } ]
false
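To answer the later "how can it be done" question: gradient-of-gradient is exposed through create_graph=True, which is the ingredient unrolled GANs need to backpropagate through simulated discriminator updates. A toy single-step sketch, not a full unrolled-GAN implementation:
<SCODE>
import torch
import torch.nn as nn

D = nn.Linear(2, 1)                          # stand-in discriminator
fake = torch.randn(8, 2, requires_grad=True) # stands in for a generator's output

# One simulated discriminator update, kept differentiable with create_graph=True.
d_loss = D(fake).mean()
gw, gb = torch.autograd.grad(d_loss, D.parameters(), create_graph=True)
w1 = D.weight - 0.1 * gw                     # functional SGD step on copies of the parameters
b1 = D.bias - 0.1 * gb

# Generator-side objective evaluated against the unrolled discriminator; the backward
# pass reaches through the simulated update (a gradient of a gradient).
g_loss = (fake @ w1.t() + b1).mean()
g_loss.backward()
print(fake.grad.shape)                       # gradients flowed back to the "generator output"
<ECODE>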
Select GPU device through env vars
null
[ { "contents": "", "isAccepted": false, "likes": 4, "poster": "emanjavacas" }, { "contents": "<SCODE>CUDA_VISIBLE_DEVICES=1 python myscript.py\n<ECODE>", "isAccepted": false, "likes": 12, "poster": "fmassa" }, { "contents": "thanks! are there any docs for this? I really couldn’t find it :-s", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "samarth-robo" } ]
false
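The same selection can be done from inside the script by setting the variable before CUDA is first initialised; a small sketch assuming a multi-GPU machine (torch.cuda.set_device() and the torch.cuda.device() context manager are the in-process alternatives the linked docs describe):
<SCODE>
import os
os.environ.setdefault('CUDA_VISIBLE_DEVICES', '1')   # set before CUDA is first initialised

import torch
print(torch.cuda.device_count())   # 1: only the selected card is visible, renumbered as device 0
# torch.cuda.set_device(0)         # in-process alternative once devices are visible
<ECODE>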
Model parameter changes on every load
null
[ { "contents": "Hi, I saved a model of a simple convnet created without using any loops in the class using torch.save(). When I load the same saved model using torch.load() and print parameters param.data iterating through model.parameters(), it prints different values each time while running the code. I used torch.cuda.manual_seed(1234) while training the model. But I guess it should not affect loading the saved model. It might be a silly mistake on my part, but I could not figure it out.", "isAccepted": false, "likes": null, "poster": "Vijay_Rengarajan" }, { "contents": "Edit: There was a mistake in my code that I overwrote the loading with a model = ConvNet() definition. Please close this thread.", "isAccepted": false, "likes": null, "poster": "Vijay_Rengarajan" } ]
false
Adaptive learning rate
null
[ { "contents": "How do I change the learning rate of an optimizer during the training phase? thanks", "isAccepted": false, "likes": 12, "poster": "davidenitti" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "If you want to change the LR we recommend reconstructing the optimizer with new parameters.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Oh, ok sorry ! things move fast <SCODE>def adjust_learning_rate(optimizer, epoch):\n \"\"\"Sets the learning rate to the initial LR decayed by 10 every 30 epochs\"\"\"\n lr = args.lr * (0.1 ** (epoch // 30))\n for param_group in optimizer.param_groups:\n param_group['lr'] = lr\n\n<ECODE>", "isAccepted": false, "likes": 29, "poster": "trypag" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 20, "poster": "ruotianluo" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "lysuhin" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "mderakhshani" }, { "contents": "Keras provides two functions which are fairly straightforward to implement, and everyone loves them: This one reduces LR when gradient is stuck on a plateau for past “X=patience” epochs: This one stops you from burning up your Amazon AWS $$$ credits if your model is not learning anything after “X=patience” epochs: Would PyTorch be open to adding something like this?", "isAccepted": false, "likes": 19, "poster": "FuriouslyCurious" }, { "contents": "Any update on this thread about learning rate decay?", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "In the models I’m training right now I see an increase in the loss when I construct a new optimizer to decay the learning rate.", "isAccepted": false, "likes": 1, "poster": "will" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "Rongzhao_Zhan" }, { "contents": "Hi, I hope this clears things up.", "isAccepted": false, "likes": null, "poster": "chsasank" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "will" }, { "contents": "How about more general situations where it is desired to adaptively adjust any parameters defined in the model? Is it practical to reconstruct optim in every loop? Maybe a mechanism like placeholder? Thanks!", "isAccepted": false, "likes": null, "poster": "lliu25" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "lliu25" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "ncullen93" } ]
false
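The hand-written adjust_learning_rate() above, and the Keras-style ReduceLROnPlateau request later in the thread, correspond to what later shipped in torch.optim.lr_scheduler. A sketch of both, with a toy model standing in for a real one:
<SCODE>
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Same schedule as adjust_learning_rate() above: divide the LR by 10 every 30 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    # ... run one training epoch with optimizer.step() calls, then:
    scheduler.step()

# Metric-driven alternative, close to Keras' ReduceLROnPlateau:
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1, patience=10)
# scheduler.step(val_loss)   # once per epoch, with the monitored quantity
<ECODE>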
Return activations for all the layers
null
[ { "contents": "Hello, Given the following code: <SCODE>def forward(self, x):\n x = x.transpose(2,1)\n x = self.max_pool(x)\n x = x.view(x.size(0), -1)\n x = self.classifier(x)\n<ECODE> Does exist an elegant way to get back the results of each layer? Or I should use a dictionary containing a key for each layer result? Thank you", "isAccepted": false, "likes": null, "poster": "lcelona" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" } ]
false
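One idiomatic answer to the question is to register forward hooks and collect each layer's output into a dict keyed by module name; a sketch with a small stand-in Sequential model rather than the poster's network:
<SCODE>
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 4),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()   # keep a copy of this layer's result
    return hook

for name, module in model.named_modules():
    if name:                                  # skip the top-level container itself
        module.register_forward_hook(save_activation(name))

x = torch.randn(2, 3, 16)
y = model(x)
print({k: v.shape for k, v in activations.items()})
<ECODE>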
NaN when I use batch normalization (BatchNorm1d)
null
[ { "contents": "I made a module that uses the following MLP module: <SCODE>class MLP(nn.Module):\n def __init__(self, size_layers, activation):\n super(MLP, self).__init__()\n self.layers=[]\n self.layersnorm = []\n self.activation=activation\n for i in range(len(size_layers)-1):\n self.layers.append(nn.Linear(size_layers[i], size_layers[i + 1]))\n self.add_module('layers_' + str(i),self.layers[-1])\n\n self.layersnorm.append(nn.BatchNorm1d(size_layers[i + 1]))\n self.add_module('BatchNorm1d_' + str(i), self.layersnorm[-1])\n\n def forward(self, x):\n for i in range(len(self.layers)-1):\n if self.activation=='relu':\n x = F.relu(self.layersnorm[i](self.layers[i](x)))\n elif self.activation=='lrelu':\n x = F.leaky_relu(self.layersnorm[i](self.layers[i](x)))\n elif self.activation=='tanh':\n x = F.tanh(self.layersnorm[i](self.layers[i](x)))\n x = self.layersnorm[-1](self.layers[-1](x))\n return x\n def l1reg(self):\n w=0.\n for i in range(len(self.layers)):\n w = w + torch.sum((self.layers[i].weight).abs())\n return w\n<ECODE> thanks!", "isAccepted": true, "likes": 2, "poster": "davidenitti" }, { "contents": "I can’t see anything obviously wrong with the model, are you sure the test data doesn’t have NaNs inside?", "isAccepted": true, "likes": null, "poster": "apaszke" }, { "contents": "Could you provide a minimal script that reproduces the problem? I can imagine having NaN during training mode if all the elements of the batch are zero, and so the mean and the std over the batch would be zero as well, leading to NaN.", "isAccepted": true, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "davidenitti" }, { "contents": "The docs aren’t there yet, I’ll be writing them today in the afternoon. You can construct it giving it a list of modules and that should work.", "isAccepted": true, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "I also have that problem. I wanted to suggest increasing eps, which temporarily seemed to fixed the issue, but it didn’t. Is there any suggestion how to debug this? My input is fine (no nan’s).", "isAccepted": true, "likes": null, "poster": "smb" }, { "contents": "Hello all, This problem doesn’t occur with BatchNorm2d. I thought it was possibly due to the eps value as someone suggested above, but this wouldn’t explain why it’s ok for 2d cases and why it doesn’t produce NaN’s for the first stddev calculation. EDIT: I presume the NaN isn’t a result of performing 1 / (0 + eps)? Where the 0 arises because it is computing the variance from a single example. For example: <SCODE>input = torch.FloatTensor(1,4).normal_(0,1)\nbn = nn.BatchNorm1d(4)\n\noutput = bn(Variable(input))\n\nprint(\"output ...\\n\", output)\nprint(\"running mean ...\\n\",bn.running_mean)\nprint(\"running var ...\\n\",bn.running_var)\n<ECODE> produces: Have I missed something obvious?", "isAccepted": true, "likes": null, "poster": "Jordan_Campbell" }, { "contents": "", "isAccepted": true, "likes": 4, "poster": "apaszke" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "apaszke" }, { "contents": "So if I’m understanding correctly, the solution is to use Batchnorm2d?", "isAccepted": true, "likes": null, "poster": "kbrown42" }, { "contents": "In case of batchnorm2d and batch size = 1. Does it work for you even in eval() mode? 
I’m currently using batchnorm2d with batch size = 1, but I have to stay in train() mode, otherwise the accuracy drops dramatically.", "isAccepted": true, "likes": 1, "poster": "lhatsk" }, { "contents": "hello, did you have solved this problem? if i should use the BN1d layer", "isAccepted": true, "likes": null, "poster": "cold_wind" }, { "contents": "Hi, As per the batch normalization paper, This is because of the Bessel’s correction as pointed out by Adam So if, you can afford to use batch size > 1, that would solve the NaN problem for you.", "isAccepted": true, "likes": 1, "poster": "Nabarun_Goswami" }, { "contents": "Hey, the totel of my test data is 10000, my batchsize is 32. but also only in model.eval(),the output of bn1d is nan", "isAccepted": true, "likes": null, "poster": "cold_wind" }, { "contents": "In that case there is some other problem, most probably with your data. Batchnorm by itself will not give nan for batch sizes greater than 1. Did you scale your data? If in your training you were using float in range 0-1 and in test if its int 0-65535, your network might blowup.", "isAccepted": true, "likes": null, "poster": "Nabarun_Goswami" }, { "contents": "", "isAccepted": true, "likes": 6, "poster": "liubola" }, { "contents": "<SCODE>pretrain_dict['featureExtract.12.num_batches_tracked']\nOut[52]: tensor(8638, device='cuda:0')\n<ECODE>", "isAccepted": true, "likes": 1, "poster": "Zichun_Zhang" } ]
true
Sample multivariate normal with per-example standard deviation
null
[ { "contents": "Hi, I want to do something similar to this: <SCODE>mu = torch.zeros(5, 2)\nsd = torch.ones(5)\ntorch.normal(mu, sd)\n<ECODE> I noticed in the 1d case it works: <SCODE>mu = torch.zeros(5, 1)\nsd = torch.ones(5)\ntorch.normal(mu, sd)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "solidor" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "solidor" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>mu = Variable(torch.zeros(5, 2))\nsd = Variable(torch.rand(5, 1))\ntorch.normal(mu, sd.expand_as(mu))\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "solidor" }, { "contents": "Yes, expand will be much better! There will be no memory copy in this case.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
Getting error when resizing Variable
null
[ { "contents": "But this gives an error: Is this a bug? The error message I get is:", "isAccepted": false, "likes": null, "poster": "yusuf_isik" }, { "contents": "Resize only accepts integer arguments, you can’t pass in another Variable.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Indexing a Variable with a mask generated from another Variable
null
[ { "contents": "x[y[:,0] > 0] But I assume there should be a much easier way. Thanks.", "isAccepted": false, "likes": 2, "poster": "yusuf_isik" }, { "contents": "Yeah, we’ve added the ability to select based on the long tensor a few days ago. I think it’s in the new binaries, so once you reinstall pytorch this should work: <SCODE>x[(y[:, 0] > 0).nonzero().squeeze()]\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "And in the future, we’re going to support automatic broadcasting so the numpy way should work in some time too.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "That’s very good news. Thanks.", "isAccepted": false, "likes": null, "poster": "yusuf_isik" }, { "contents": "I just reinstalled from scratch , but it cannot find nonzero() function. I am getting the following message: I also have a version that I built from source, and I get the same error from it, too. How can I get the right version? Thanks.", "isAccepted": false, "likes": null, "poster": "yusuf_isik" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Four months passed, we still can not use nonzero() for Variables…", "isAccepted": false, "likes": null, "poster": "acgtyrant" }, { "contents": "You can always open an issue in the main repo", "isAccepted": false, "likes": 2, "poster": "apaszke" } ]
false
GPU lost in training imagenet
null
[ { "contents": "I trained pytorch example resnet18 on imagenet, after about 1 epoch the training hangs and nvidia-smi says GPU lost …", "isAccepted": false, "likes": null, "poster": "stevegu" }, { "contents": "this is not specific to pytorch. it looks like you have either a hardware issue or a NVIDIA driver issue. I suspect hardware / thermal issue.", "isAccepted": false, "likes": 2, "poster": "smth" } ]
false
Fast Tensor access in python?
null
[ { "contents": "In lua torch, we can access a Tensor using luajit-FFI pointer as fast as in C. Do we have similar thing in pytorch?", "isAccepted": false, "likes": 1, "poster": "yzhu" }, { "contents": "no. Python doesn’t have JITting.", "isAccepted": false, "likes": 4, "poster": "smth" }, { "contents": "another maybe more noob question: since we are talking about the access speed here, how’s the overhead of .numpy() operation?", "isAccepted": false, "likes": null, "poster": "yzhu" }, { "contents": "", "isAccepted": false, "likes": 5, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
PyTorch tutorial for Neural transfert of artistic style
null
[ { "contents": "Hi, If someones are interested, I’ve realized this PyTorch tutorial to implement the neural transfer of artistic style developed by Leon Gatys and AL: Any feedback is welcome!", "isAccepted": false, "likes": 6, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Yes, the code works and I give the whole script in the “.py”. I show the wrong code to explain the global idea, then I give the correct version just below (maybe not a good pedagogy…). But I didn’t know that just a list of Variable works, this is much simpler that what I did (I constructed a module with the variable as a parameter). Thanks !", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "If using torch.optim.SGD instead of Adam, the codes work. However, I guessed Adam would give better performance in some cases, and thus tried to use Adam algorithm. Would you mind give an example of using the Adam optimizer? Thank you Adam O(∩_∩)O", "isAccepted": false, "likes": null, "poster": "phenixcx" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "phenixcx" }, { "contents": "<SCODE> Traceback (most recent call last):\n File \"Neural_Style.py\", line 205, in <module>\n style_score += sl.backward()\n File \"Neural_Style.py\", line 105, in backward\n self.loss.backward(retain_variables=retain_variables)\n File \"/usr/local/lib/python3.6/site-packages/torch/autograd/variable.py\", line 146, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/usr/local/lib/python3.6/site-packages/torch/nn/_functions/conv.py\", line 48, in backward\n if self.needs_input_grad[0] else None)\n File \"/usr/local/lib/python3.6/site-packages/torch/nn/_functions/conv.py\", line 119, in _grad_input\n return self._thnn('grad_input', input, weight, grad_output)\n File \"/usr/local/lib/python3.6/site-packages/torch/nn/_functions/conv.py\", line 161, in _thnn\n return impl[fn_name](self, self._bufs[0], input, weight, *args)\n File \"/usr/local/lib/python3.6/site-packages/torch/nn/_functions/conv.py\", line 251, in call_grad_input\n grad_input, weight, *args)\nRuntimeError: Need gradOutput of dimension 4 and gradOutput.size[1] == 64 but got gradOutput to be of shape: [64 x 2401] at /Users/soumith/code/pytorch-builder/wheel/pytorch-src/torch/lib/THNN/generic/SpatialConvolutionMM.c:50<ECODE>", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "Did you update to 0.1.10?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "Yes, I did. 
I thought the problem was that, the input is cloned and resized in GramMatrix module, and the style loss is then computed on it, and as the error occurs at the stage of style loss backward(), so would it be that, the grad of style loss over GramMatrix is a 2 dim tensor, and further the grad over the cloned and resized input is not properly computed?", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "Debug for a while, found the root cause of the error: Variable.data.resize_() -> Variable.resize().", "isAccepted": false, "likes": 1, "poster": "ecolss" }, { "contents": "Thanks for having reported this issue.", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mehdi-shiba" }, { "contents": "Yes, it’s better to use .view", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "leongatys" } ]
false
In place randomization
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "eulerreich" }, { "contents": "you can do: <SCODE>a = torch.zeros(3, 4) # or whatever shape\ntorch.randn(3, 4, out=a)\n<ECODE> But yea we didn’t implement some methods. We just didn’t get around to it.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "FYI, this isn’t implemented yet: <SCODE>>>> torch.cuda.FloatTensor(3,2).random_()\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: 'torch.cuda.FloatTensor' object has no attribute 'random_'\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jnhwkim" } ]
false
How does autograd handle multiple objectives?
null
[ { "contents": "Using the pytorch framework. Suppose you have 4 NN modules of which 2 share weights such that one objective relies on the computation of 3 NN modules (including the 2 that share weights) and the other objective relies on the computation of 2 NN modules of which only 1 belongs to the weight sharing pair, the other module is not used for the first objective. What would the optimisation step in this scenario entail? With efficiency in mind.", "isAccepted": false, "likes": 1, "poster": "Veril" }, { "contents": "In this case, you only have 3 NN modules, and one of them is simply reused. You give it the list of losses and grads. The optimization step is pretty standard, you give the all the modules’ parameters to a single optimizer.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "So just to be clear, specify a single objective that merges (concat) all the sub-objectives and backward() on it? There won’t be any issue regarding going over the same variables twice through different pathways? Thanks.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "Yes. No issues.", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
About the variable length input in RNN scenario
null
[ { "contents": "Hi all, I am recently trying to build a RNN model for some NLP task, during which I found that the RNN layer interface provided by pytorch (no matter what cell type, gru or lstm) doesn’t support masking the inputs. Masking is broadly used in NLP domain for the inputs within a single batch having different length (as inputs are generally bunch of natural language sentences), so just wondering will this be a future feature in pytorch? or I have to find some other way to do the masking myself? Thanks.", "isAccepted": false, "likes": 3, "poster": "rolanchen" }, { "contents": "Yeah, that’s something we’ll need to and plan to figure out quite soon, as it’s an important feature. For now, you could pad the outputs of the network after the EOS token with some special values that would make the loss be equal to 0. Hopefully we’ll have a solution ready this week.", "isAccepted": false, "likes": 6, "poster": "apaszke" }, { "contents": "That’s great! really appreciate your efforts on it", "isAccepted": false, "likes": null, "poster": "rolanchen" }, { "contents": "Padding variable length input works reasonably well on CPU (haven’t tried GPU yet). Here are a few examples with “dynamic batching”. Basically for batch that looks like this: [[0, 0, 1, 1], [1, 1, 1, 1]] Batch size at time steps 0 and 1 will be 1, and at time steps 2 and 3 will be 2. I was surprised that dynamic batching was slower. That being said, there is some tricky indexing and concatenations that might have a nicer implementation.", "isAccepted": false, "likes": null, "poster": "mrdrozdov" }, { "contents": "What is dynamic batching? Just iterating over inputs one step at a time, and slicing the batch if some sequence ends?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>Line # Hits Time Per Hit % Time Line Contents\n==============================================================\n 108 @profile\n 109 def forward(self, x, lengths):\n 110 541 1331 2.5 0.0 batch_size = len(x)\n 111 17853 20234 1.1 0.2 lengths = [len(s) for s in x]\n 112\n 113 541 514 1.0 0.0 outputs = [Variable(torch.zeros(1, self.model_dim).float(), volatile=not self.training)\n 114 17853 231300 13.0 1.9 for _ in range(batch_size)]\n 115\n 116 11522 14014 1.2 0.1 for t in range(max(lengths)):\n 117 10981 19603 1.8 0.2 batch = []\n 118 10981 15608 1.4 0.1 h = []\n 119 10981 14756 1.3 0.1 idx = []\n 120 362373 424946 1.2 3.5 for i, (s, l) in enumerate(zip(x, lengths)):\n 121 351392 809330 2.3 6.7 if l >= max(lengths) - t:\n 122 267925 399322 1.5 3.3 batch.append(s.pop())\n 123 267925 307910 1.1 2.6 h.append(outputs[i])\n 124 267925 300516 1.1 2.5 idx.append(i)\n 125\n 126 10981 316257 28.8 2.6 batch = np.concatenate(np.array(batch).reshape(-1, 1), 0)\n 127 10981 161699 14.7 1.3 emb = Variable(torch.from_numpy(self.initial_embeddings.take(batch, 0)), volatile=not self.training)\n 128 10981 522216 47.6 4.3 h = torch.cat(h, 0)\n 129 10981 2529893 230.4 21.1 h_next = self.rnn(emb, h)\n 130 10981 4748304 432.4 39.5 h_next = torch.chunk(h_next, len(idx))\n 131\n 132 278906 322694 1.2 2.7 for i, o in zip(idx, h_next):\n 133 267925 474999 1.8 4.0 outputs[i] = o\n 134\n 135 541 27823 51.4 0.2 outputs = torch.cat(outputs, 0)\n 136 541 174478 322.5 1.5 h = F.relu(self.l0(F.dropout(outputs, 0.5, self.training)))\n 137 541 152165 281.3 1.3 h = F.relu(self.l1(F.dropout(h, 0.5, self.training)))\n 138 541 25429 47.0 0.2 y = F.log_softmax(h)\n 139 541 585 1.1 0.0 return y\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "WarBean" }, { 
"contents": "For now, you have to use the padding approach in order to take advantage of the substantial speedup afforded by CUDNN’s accelerated RNN kernels. If you only need a unidirectional RNN, you can mask the resulting tensors and remove the effects of the padding completely. If you want variable-sequence-length support with a bidirectional RNN, or would like true dynamic batching that doesn’t even run computations for padding tokens, CUDNN actually supports this internally but PyTorch does not yet have a wrapper (expect one fairly soon). BTW, are there benchmarks on TF Fold? I can’t imagine their repeated concatenations+splits are all that much faster than they’d be in PyTorch.", "isAccepted": false, "likes": 4, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Looking forward to it very much. Also wondering whether this strategy is faster than padding + masking solution. For now, I will use the old-fashion padding (maybe with masking) approach.", "isAccepted": false, "likes": null, "poster": "WarBean" }, { "contents": "Padding + masking might have some advantage on the GPU, because you can use cuDNN RNN kernels that parallelize computation across multiple timesteps, and the more data you give them, the more efficient they’ll get.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>0010011\n0000111\n<ECODE> Where everything marked 1 at a timestep is involved in a batched RNN op. Not clear to me how to get away without using torch.chunk. Here’s another example: <SCODE>001001001001111\n000011100001111\n000001110001111\n000000011111111<ECODE>", "isAccepted": false, "likes": null, "poster": "mrdrozdov" }, { "contents": "When is such format used? I assume that each line is an independent batch element, and never interacts with other ones, right? We do you have these blanks in the data?", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "We’re going to add cuDNN variable length bindings soon.", "isAccepted": false, "likes": 5, "poster": "apaszke" }, { "contents": "I see that you’ve pushed the variable length RNN support to main branch. Does it take care of the bidirectional case?", "isAccepted": false, "likes": null, "poster": "Sandeep42" }, { "contents": "Yes. Feel free to check it out – none of the examples have been updated to use it yet, but we’ll update SNLI and OpenNMT soon.", "isAccepted": false, "likes": 4, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "CodePothunter" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "CodePothunter" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jekbradbury" } ]
false
Loading huge data functionality
null
[ { "contents": "Do you have any plan on implementing big data files loading functionality? Suppose I have 300G data files for training, and I can’t load them all into memory. With it, I don’t need to read all data set into memory at once, and I can load data in parallel fashion. Any plan on similar functionality? Thanks.", "isAccepted": false, "likes": 9, "poster": "Xia_Yandi" }, { "contents": "<SCODE>def load_func(line):\n # a line in 'list.txt\"\n\n # Implement how you load a single piece of data here\n\n # assuming you already load data into src and target respectively\n return {'src': src, 'target': target} # you can return a tuple or whatever you want it to\n\ndef batchify(batch):\n # batch will contain a list of {'src', 'target'}, or how you return it in load_func.\n\n # Implement method to batch the list above into Tensor here\n\n # assuming you already have two tensor containing batched Tensor for src and target\n return {'src': batch_src, 'target': batch_target} # you can return a tuple or whatever you want it to\n\n\ndataset = ListDataset('list.txt', load_func) #list.txt contain list of datafiles, one per line\ndataset = DataLoader(dataset=dataset, batch_size=50, num_workers=8, collate_fn=batchify) #This will load data when needed, in parallel, up to <num_workers> thread.\n\nfor x in dataset: #iterate dataset\n print(x)\n<ECODE> There are surely other way to do it. Hope this helps.", "isAccepted": false, "likes": 8, "poster": "NgPDat" }, { "contents": "<SCODE>class MyDataset(torch.utils.Dataset):\n def __init__(self):\n self.data_files = os.listdir('data_dir')\n sort(self.data_files)\n\n def __getindex__(self, idx):\n return load_file(self.data_files[idx])\n\n def __len__(self):\n return len(self.data_files)\n\n\ndset = MyDataset()\nloader = torch.utils.DataLoader(dset, num_workers=8)\n<ECODE>", "isAccepted": false, "likes": 37, "poster": "apaszke" }, { "contents": "That is so cool! Thanks!", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Morpheus_Hsieh" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Morpheus_Hsieh" }, { "contents": "Yeah, that’s correct too!", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Morpheus_Hsieh" }, { "contents": "Is that CPU memory or GPU memory? Everything should get freed from time to time. Are you just running the example with that single modification?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I wrote something follow your instruction. But it doesn’t work for me. Here is what I do: <SCODE> def _load_hdf5_file(hdf5_file):\n f = h5py.File(hdf5_file, \"r\")\n data = []\n for key in f.keys():\n data.append(f[key])\n return tuple(data)\n\n\n class HDF5Dataset(Dataset):\n def __init__(self, data_files):\n self.data_files = sorted(data_files)\n\n def __getitem__(self, index):\n return _load_hdf5_file(self.data_files[index])\n\n def __len__(self):\n return len(self.data_files)\n\n train_set = HDF5Dataset(train_files) # there is only one file in train_files, i.e. 
train_files = [\"foo_1\"]\n train_loader = DataLoader(dataset=train_set,\n batch_size=train_batch_size,\n shuffle=True,\n num_workers=2)\n<ECODE> And during iteration, I got this error: <SCODE> Traceback (most recent call last):\n File \"/usr/lib/python2.7/multiprocessing/util.py\", line 274, in _run_finalizers\n File \"/usr/lib/python2.7/multiprocessing/util.py\", line 207, in __call__\n File \"/usr/lib/python2.7/shutil.py\", line 239, in rmtree\n File \"/usr/lib/python2.7/shutil.py\", line 237, in rmtree\n OSError: [Errno 24] Too many open files: '/tmp/pymp-Y6oJsO'\n Process Process-1:\n Traceback (most recent call last):\n File \"/usr/lib/python2.7/multiprocessing/process.py\", line 258, in _bootstrap\n File \"/usr/lib/python2.7/multiprocessing/process.py\", line 114, in run\n File \"/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/utils/data/dataloader.py\", line 36, in _worker_loop\n File \"/usr/lib/python2.7/multiprocessing/queues.py\", line 392, in put\n File \"/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/multiprocessing/queue.py\", line 17, in send\n File \"/usr/lib/python2.7/pickle.py\", line 224, in dump\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n File \"/usr/lib/python2.7/pickle.py\", line 554, in save_tuple\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n File \"/usr/lib/python2.7/pickle.py\", line 606, in save_list\n File \"/usr/lib/python2.7/pickle.py\", line 639, in _batch_appends\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n File \"/usr/lib/python2.7/pickle.py\", line 606, in save_list\n File \"/usr/lib/python2.7/pickle.py\", line 639, in _batch_appends\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n File \"/usr/lib/python2.7/pickle.py\", line 606, in save_list\n File \"/usr/lib/python2.7/pickle.py\", line 639, in _batch_appends\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n File \"/usr/lib/python2.7/multiprocessing/forking.py\", line 67, in dispatcher\n File \"/usr/lib/python2.7/pickle.py\", line 401, in save_reduce\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n File \"/usr/lib/python2.7/pickle.py\", line 554, in save_tuple\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n File \"/usr/lib/python2.7/multiprocessing/forking.py\", line 66, in dispatcher\n File \"/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/multiprocessing/reductions.py\", line 116, in reduce_storage\n File \"/usr/lib/python2.7/multiprocessing/reduction.py\", line 145, in reduce_handle\n OSError: [Errno 24] Too many open files\n<ECODE> And the program never terminates. Did I do anything wrong? Thanks By the way, I don’t know if it is appropriate to ask here, how can I post python style code here like you did? 
Thanks.", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I added the line, and I got this error: <SCODE>Process Process-1:\nTraceback (most recent call last):\n File \"/usr/lib/python2.7/multiprocessing/process.py\", line 258, in _bootstrap\n self.run()\n File \"/usr/lib/python2.7/multiprocessing/process.py\", line 114, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/utils/data/dataloader.py\", line 36, in _worker_loop\n data_queue.put((idx, samples))\n File \"/usr/lib/python2.7/multiprocessing/queues.py\", line 392, in put\n return send(obj)\n File \"/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/multiprocessing/queue.py\", line 17, in send\n ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(obj)\n File \"/usr/lib/python2.7/pickle.py\", line 224, in dump\n self.save(obj)\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n f(self, obj) # Call unbound method with explicit self\n File \"/usr/lib/python2.7/pickle.py\", line 554, in save_tuple\n save(element)\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n f(self, obj) # Call unbound method with explicit self\n File \"/usr/lib/python2.7/pickle.py\", line 606, in save_list\n self._batch_appends(iter(obj))\n File \"/usr/lib/python2.7/pickle.py\", line 639, in _batch_appends\n save(x)\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n f(self, obj) # Call unbound method with explicit self\n File \"/usr/lib/python2.7/pickle.py\", line 606, in save_list\n self._batch_appends(iter(obj))\n File \"/usr/lib/python2.7/pickle.py\", line 639, in _batch_appends\n save(x)\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n f(self, obj) # Call unbound method with explicit self\n File \"/usr/lib/python2.7/pickle.py\", line 606, in save_list\n self._batch_appends(iter(obj))\n File \"/usr/lib/python2.7/pickle.py\", line 639, in _batch_appends\n save(x)\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n f(self, obj) # Call unbound method with explicit self\n File \"/usr/lib/python2.7/multiprocessing/forking.py\", line 67, in dispatcher\n self.save_reduce(obj=obj, *rv)\n File \"/usr/lib/python2.7/pickle.py\", line 401, in save_reduce\n save(args)\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n f(self, obj) # Call unbound method with explicit self\n File \"/usr/lib/python2.7/pickle.py\", line 554, in save_tuple\n save(element)\n File \"/usr/lib/python2.7/pickle.py\", line 286, in save\n f(self, obj) # Call unbound method with explicit self\n File \"/usr/lib/python2.7/multiprocessing/forking.py\", line 66, in dispatcher\n rv = reduce(obj)\n File \"/home/ts-yandixia01/.local/lib/python2.7/site-packages/torch/multiprocessing/reductions.py\", line 109, in reduce_storage\n metadata = storage._share_filename_()\nRuntimeError: $ Torch: unable to mmap memory: you tried to mmap 0GB. at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/TH/THAllocator.c:317\n<ECODE> Thanks", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Morpheus_Hsieh" }, { "contents": "That was probably an out of memory error. If the data or the code is public (or if you could just isolate the data loading into a separate script), I could run it myself and make sure it doesn’t leak. 
But we have some tests for that.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Hi, I just wrote a simple demo code to reproduce the error. The code is here: The data is randomly generated, but everything is the same as mine except the actual values. You could run main.py to reproduce the error. Thanks!", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "<SCODE>--- a/word_language_model/main.py\n+++ b/word_language_model/main.py\n@@ -57,8 +57,9 @@ def batchify(data, bsz):\n nbatch = data.size(0) // bsz\n data = data.narrow(0, 0, nbatch * bsz)\n data = data.view(bsz, -1).t().contiguous()\n- if args.cuda:\n- data = data.cuda()\n+\n+ # if args.cuda:\n+ # data = data.cuda()\n return data\n \n eval_batch_size = 10\n@@ -103,6 +104,9 @@ def get_batch(source, i, evaluation=False):\n seq_len = min(args.bptt, len(source) - 1 - i)\n data = Variable(source[i:i+seq_len], volatile=evaluation)\n target = Variable(source[i+1:i+1+seq_len].view(-1))\n+ if args.cuda:\n+ data = data.cuda()\n+ target = target.cuda()\n return data, target\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "Morpheus_Hsieh" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "for the code snippet that you provided. <SCODE>class MyDataset(torch.utils.Dataset):\n def __init__(self):\n self.data_files = os.listdir('data_dir')\n sort(self.data_files)\n\n def __getindex__(self, idx):\n return load_file(self.data_files[idx])\n\n def __len__(self):\n return len(self.data_files)\n<ECODE> Are the file paths stored in self.data_files suppose to represent each batch of data (or data per loop) returned by iterating loader?", "isAccepted": false, "likes": null, "poster": "itzjustricky" }, { "contents": "it is data per instance of the loop", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
How to combine multiple criterions to a loss function?
null
[ { "contents": "<SCODE>def loss_calc(data,targets):\n data = Variable(torch.FloatTensor(data)).cuda()\n targets = Variable(torch.LongTensor(targets)).cuda()\n output= model(data)\n final = output[-1,:,:]\n loss = criterion(final,targets)\n return loss\n<ECODE> <SCODE>def loss_calc(data,targets):\n data = Variable(torch.FloatTensor(data)).cuda()\n targets = Variable(torch.LongTensor(targets)).cuda()\n output= model(data)\n final = output[-1,:,:]\n loss = 0.0\n for b in range(batch_size):\n loss += criterion(final[b], targets[b])\n return loss\n<ECODE> I doesn;t return errors, but the network simply won;t train. This alternative results an error <SCODE>def loss_calc(data,targets):\n data = Variable(torch.FloatTensor(data)).cuda()\n targets = Variable(torch.LongTensor(targets)).cuda() \n output= model(data)\n final = output[-1,:,:]\n loss = []\n for b in range(batch_size):\n loss.append(criterion(final[b], targets[b]))\n loss = torch.sum(loss)\n return loss\n<ECODE> Note, this is a dummy example. If I understand how to fix this, I can apply that to the recursive neural nets. For the final project, the sequence of losses have arbitrary length. For example, sometimes it’s adding 4 losses, other times 6. So it’s not an option to pre-allocate a Tensor and save the losses to the Tensor", "isAccepted": false, "likes": 11, "poster": "robromijnders" }, { "contents": "", "isAccepted": false, "likes": 33, "poster": "apaszke" }, { "contents": "Thank you. That helped a lot.", "isAccepted": false, "likes": null, "poster": "robromijnders" }, { "contents": "@apaszke For eg. I have a network with two outputs and corresponding losses - crossentropy2d and crossentropy. I tried the following but it throws an error about incompatible types. Is there a way around? <SCODE>b = cross_entropy2d(output_x, x_labels)\na = nn.CrossEntropyLoss(output_y, y_labels)\nloss = a + b\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "meetshah1995" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "jekbradbury" }, { "contents": "@jekbradbury Thanks a lot.", "isAccepted": false, "likes": null, "poster": "meetshah1995" }, { "contents": "<SCODE>b = nn.MSELoss(output_x, x_labels)\na = nn.CrossEntropyLoss(output_y, y_labels)\nloss = a + b<ECODE>", "isAccepted": false, "likes": null, "poster": "Bixqu" }, { "contents": "Doing that is fine, it would be: <SCODE>b = nn.MSELoss()(output_x, x_labels)\na = nn.CrossEntropyLoss()(output_y, y_labels)\nloss = a + b\n\nloss.backward()\n<ECODE> Note the additional parentheses, as James mentioned above. This is equivalent to: <SCODE>b = nn.MSELoss()\na = nn.CrossEntropyLoss()\n\nloss_a = a(output_x, x_labels)\nloss_b = b(output_y, y_labels)\n\nloss = loss_a + loss_b\n\nloss.backward()<ECODE>", "isAccepted": false, "likes": 18, "poster": "Jordan_Campbell" }, { "contents": "which way should I take? Thanks.", "isAccepted": false, "likes": 1, "poster": "D-X-Y" }, { "contents": "both give you same result. 
I’d say (1) is simpler.", "isAccepted": false, "likes": 11, "poster": "smth" }, { "contents": "I did this but got the following error : add received an invalid combination of arguments - got (torch.FloatTensor), but expected one of: (float value)\ndidn’t match because some of the arguments have invalid types: (torch.FloatTensor) (torch.cuda.FloatTensor other)\ndidn’t match because some of the arguments have invalid types: (torch.FloatTensor) (torch.cuda.sparse.FloatTensor other)\ndidn’t match because some of the arguments have invalid types: (torch.FloatTensor) (float value, torch.cuda.FloatTensor other) (float value, torch.cuda.sparse.FloatTensor other) Both my loss are of the same type - nn.CrossEntropyLoss() <class ‘torch.autograd.variable.Variable’> Any suggestion on how to resolve this please?", "isAccepted": false, "likes": null, "poster": "BikashgG" }, { "contents": "If you stare down the error message, you see that you have one cuda and one not. Best regards Thomas", "isAccepted": false, "likes": 6, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "BikashgG" }, { "contents": "Is it possible to weight the losses when multiple losses are used and how? Is it correct by using the following code? <SCODE> mse_loss = nn.MSELoss(size_average=True)\n a = weight1 * mse_loss(inp, target1)\n b = weight2 * mse_loss(inp, target2)\n loss = a + b\n\n loss.backward()\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "platero" }, { "contents": "Yup, you can certainly weigh the losses however you see fit. What you’re doing in your example should be fine.", "isAccepted": false, "likes": 3, "poster": "achaiah" }, { "contents": "what if my second loss function requires some computed value from first loss (or even the grad of first loss?) in that case I can’t add two loss together; they must be gradients respectively; and retain_graph=True gives wrong results as well as the intermediate grads not correct", "isAccepted": false, "likes": 1, "poster": "ElleryL" }, { "contents": "thank you very much Jordan! your answer help me to combine my custom loss function with nn.Loss.", "isAccepted": false, "likes": null, "poster": "PeterXiaoGuo" }, { "contents": "Hi BikashgG, I think you could use .type() to change tensor’s type from torch.FloatTensor to torch.LongTensor. I face a similar problem when I use this CrossEntropyLoss(). Check the official document on this loss function, there is a requirement on the type of feed-in tensor. Hope it helps, Peter", "isAccepted": false, "likes": null, "poster": "PeterXiaoGuo" }, { "contents": "What if the losses are computed over different parts of the network, say loss1 is for first 3 layers and loss2 is for first 7 layers (incl. the first 3)? Wouldn’t the sum of losses method also backprop loss1 through layers 4-7? Would calling loss1.backward() and loss2.backward() separately be recommended in that case?", "isAccepted": false, "likes": 2, "poster": "tarun" }, { "contents": "Hope it helps!", "isAccepted": false, "likes": null, "poster": "sid87" } ]
false
On a cpu device, how to load checkpoint saved on gpu device
null
[ { "contents": "Loading this checkpoint on my cpu device gives an error: <SCODE> raise AssertionError(\"Torch not compiled with CUDA enabled\")\nAssertionError: Torch not compiled with CUDA enabled```<ECODE>", "isAccepted": false, "likes": 8, "poster": "Ja-Keoung_Koo" }, { "contents": "<SCODE>torch.load('my_file.pt', map_location=lambda storage, location: 'cpu')\n<ECODE> While this will only map storages from GPU0: <SCODE>torch.load('my_file.pt', map_location={'cuda:0': 'cpu'})\n<ECODE>", "isAccepted": false, "likes": 22, "poster": "apaszke" }, { "contents": "I’m trying to load a GPU-trained model onto a CPU with the code you suggested: <SCODE>torch.load('my_file.pt', map_location=lambda storage, location: 'cpu')\n<ECODE> … and I get this error: <SCODE>Traceback (most recent call last):\n File \"net_predict.py\", line 146, in <module>\n net = torch.load(f_net, map_location=(lambda storage, location: 'cpu'))\n File \"/home/[...]/anaconda2/lib/python2.7/site-packages/torch/serialization.py\", line 248, in load\n return _load(f, map_location, pickle_module)\n File \"/home/[...]/anaconda2/lib/python2.7/site-packages/torch/serialization.py\", line 340, in _load\n tensor = tensor_type._new_with_metadata_file(f, storage)\nAttributeError: type object 'str' has no attribute '_new_with_metadata_file'\n<ECODE> (I replaced my username with […]) Any idea what I’m doing wrong?", "isAccepted": false, "likes": null, "poster": "mromaniuk" }, { "contents": "I’m sorry, my bad. This should work: <SCODE>torch.load('my_file.pt', map_location=lambda storage, loc: storage)\n<ECODE>", "isAccepted": false, "likes": 47, "poster": "apaszke" }, { "contents": "Out of curiosity: could you explain what this does? I’m not sure how it knows to remap storage to CPU, since the lambda returns the storage it got as an argument.", "isAccepted": false, "likes": null, "poster": "mromaniuk" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "<SCODE>import torch\n\nencoder = torch.load('encoder.pt', map_location=lambda storage, loc: storage)\ndecoder = torch.load('decoder.pt', map_location=lambda storage, loc: storage)\n\nencoder.cpu()\ndecoder.cpu()\n<ECODE> How can I load a GPU-trained model on a CPU device (without any GPUs) correctly? Thank you for your great work!", "isAccepted": false, "likes": null, "poster": "cyyyyc123" }, { "contents": "Hey, no problem! I only have a couple more questions:", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Good morning! Thank you very much!", "isAccepted": false, "likes": null, "poster": "cyyyyc123" }, { "contents": "May this is useful to provide some information for solving the problem. Thank you!", "isAccepted": false, "likes": null, "poster": "cyyyyc123" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "cyyyyc123" }, { "contents": "Sorry to reopen the thread. After running the code: I met the same problem as Yangyu met before. The error message shows: I just updated my pytorch to the latest version in the master branch. The version number is 0.1.11+761eef1. Any idea why?", "isAccepted": false, "likes": null, "poster": "gaoking132" }, { "contents": "Hello, I tried to load a snapshot from gpu-training to run it on CPU-mode, but faced with the same problem, that described above. Of course, tried to use given advice, but there is no effect. 
<SCODE>torch.load('./snapshots/cpu_final_snapshot.pth', map_location=lambda storage, loc: storage)\n<ECODE> I have the following traceback: <SCODE>Traceback (most recent call last):\n File \"predict.py\", line 39, in <module>\n params = torch.load('./snapshots/cpu_final_snapshot.pth', map_location=lambda storage, loc: storage)\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/serialization.py\", line 222, in load\n return _load(f, map_location, pickle_module)\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/serialization.py\", line 370, in _load\n result = unpickler.load()\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 279, in __new__\n _lazy_init()\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 96, in _lazy_init\n _check_driver()\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 63, in _check_driver\n raise AssertionError(\"Torch not compiled with CUDA enabled\")\n<ECODE> <SCODE>torch.__version__\n'0.1.10_1'\n<ECODE> Would be appreciated any help.", "isAccepted": false, "likes": null, "poster": "denis_64" }, { "contents": "It seems that I found the problem that causes the error of “invalid combination of arguments”. Yesterday I used the model trained on 0.1.9 version of pytorch, and loaded it to cpu using the latest version of 0.1.11. The error appeared. Today I retrained the model using the latest version of 0.1.11 and loaded also using the latest version. Everything works. So I guess that there are inconsistencies between different versions of pytorch models.", "isAccepted": false, "likes": 4, "poster": "gaoking132" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "Eugenio_Culurciello" }, { "contents": "Is anything wrong with the new version of PyTorch?", "isAccepted": false, "likes": 2, "poster": "hazelnutsgz" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "colllin" }, { "contents": "torch.load(‘save/best_BiLSTMCRF_pos_2019-01-10 12-42-50’,map_location=‘cpu’)", "isAccepted": false, "likes": null, "poster": "Zayd" }, { "contents": "Sorry for reviving this post. I have a closely related question. I want to do the exact same thing, but using the C++ front-end. I.e. I want to save a model, trained using the C++ front-end on GPU, and then load in using the C++ front-end on a CPU device. It is possible? The documentation on torch::load does not give the map_location? Thanks for any help.", "isAccepted": false, "likes": null, "poster": "Willem" }, { "contents": "torch.load(WEIGHTS_FILE, map_location=torch.device(‘cpu’) )", "isAccepted": false, "likes": null, "poster": "Towsif_Ahamed" } ]
false
How to define a new layer with autograd?
null
[ { "contents": "Hi all,", "isAccepted": false, "likes": 3, "poster": "big_tree" }, { "contents": "Sure, this will be handled for you. For example: <SCODE>import torch.nn as nn\nfrom torch.autograd import Variable\n\nclass Gaussian(nn.Module):\n def __init__(self):\n self.a = nn.Parameter(torch.zeros(1))\n self.b = nn.Parameter(torch.zeros(1))\n self.c = nn.Parameter(torch.zeros(1))\n\n def forward(self, x):\n # unfortunately we don't have automatic broadcasting yet\n a = self.a.expand_as(x)\n b = self.b.expand_as(x)\n c = self.c.expand_as(x)\n return a * torch.exp((x - b)^2 / c)\n\nmodule = Gaussian()\nx = Variable(torch.randn(20))\nout = module(x)\nloss = loss_fn(out)\nloss.backward()\n\n# Now module.a.grad should be non-zero.\n<ECODE>", "isAccepted": false, "likes": 11, "poster": "apaszke" }, { "contents": "to update all the parameters?", "isAccepted": false, "likes": null, "poster": "big_tree" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thank you so much for your help!", "isAccepted": false, "likes": null, "poster": "big_tree" }, { "contents": "Now we have defined the Gaussian layer, I want to know how to use it in a network of several layers. For example, <SCODE>class MyNet(nn.Module):\n def __init__(self):\n super(MyNet, self).__init__()\n self.fc1 = nn.Linear(100, 20)\n self.gauss = Gaussian()\n def forward(self, x):\n x = F.relu(self.fc1(x))\n x = self.gauss(x)\n\nnet = MyNet()\n<ECODE> Is the above code correct? Or do we have to set up other things in order to use the newly-defined layer in other network?", "isAccepted": false, "likes": null, "poster": "jdhao" }, { "contents": "the above is correct.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Can I define my own layer with a forward/backward function or I will have to define the layer as a Function so that I can define the backward function too?", "isAccepted": false, "likes": null, "poster": "Soumava_Roy" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Hao_Chu" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "herleeyandi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jdhao" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "herleeyandi" }, { "contents": "Most of the time we do not need to extend PyTorch using numpy. As long as you use the builtin method of Variable, you can only write forward method and backward gradient computation is handled by autograd. So using a composition of builtin Variable method to achieve what you want is more time-saving.", "isAccepted": false, "likes": null, "poster": "jdhao" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "colesbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "herleeyandi" }, { "contents": "Yes, you can do that. Writing new CUDA kernels usually requires a lot of effort. If you can express your layer in terms of existing Tensor operations, then that’s usually a better way to get started. If you can’t do that, then you might have to write new kernels.", "isAccepted": false, "likes": 1, "poster": "colesbury" }, { "contents": "What if the forward function is not differentiable? How should I update the parameters and return the gradient w.r.t inputs?", "isAccepted": false, "likes": null, "poster": "MahdiNazemi" }, { "contents": "I tried the piece of code posted by apaszke, but I got the following error. <SCODE>RuntimeError: bitwise_xor(): functions with out=... 
arguments don't support automatic differentiation, but one of the arguments requires grad.\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jwang9" }, { "contents": "I figured that the problem arises from the power calculation in python. I changed ^2 to **2 and it works now.", "isAccepted": false, "likes": 1, "poster": "jwang9" } ]
false
Reducing LSTM Hidden State Output to 1 Dimension
null
[ { "contents": "<SCODE>>>> rnn = nn.LSTM(1, 100, 4)\n>>> input = Variable(torch.randn(1, 1, 1))\n>>> h0 = Variable(torch.randn(4, 1, 100))\n>>> c0 = Variable(torch.randn(4, 1, 100))\n>>> output, hn = rnn(input, (h0, c0))\n<ECODE> Any ideas?", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "Not sure what 1 in 1 out LSTM is, you want to have 1 of input and output features, while using 100 for hidden size? I think the standard practice is to add a Linear layer on top of the LSTM. If the network learns with hidden states of size 1, but doesn’t with that of size 100, it might mean that the optimization task becomes too complex for a simple task that you’re training the network to perform. Can’t really help you a lot, it depends on a lot of factors.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
LSTM Output Output Range
null
[ { "contents": "", "isAccepted": false, "likes": 2, "poster": "ritchieng" }, { "contents": "Can’t you just multiply the output by 10?", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "<SCODE>from sklearn.preprocessing import MinMaxScaler\n\n# build a transformation to have the transformation result in the range 0..1\nscaler = MinMaxScaler(feature_range=(0, 1)) \n\n# apply the transformation to the target values (note that fit_transform() \n# expects a 2D tensor)\ntransformedTarget = scaler.fit_transform(target)\n\n# perform training\n...\n\n# apply inverse transform\norigOutput = network.forward(...)\noutput = scaler.inverse_transform(origOutput.data.numpy())\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "Andre_Holzner" } ]
false
Multi-GPU error
null
[ { "contents": "After I wrapped my model with DataParallel, this error happened: RuntimeError: Assertion `THCTensor_(checkGPU)(state, 5, input, gradOutput, gradWeight, sorted, indices)’ failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THCUNN/generic/LookupTable.cu:17 My model includes an embedding() layer. Is this caused by embedding()? If so, any suggestion on how to do multi-gpu properly with embedding() layers inside the model? Thanks", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "peak" }, { "contents": "Not yet, the PR still needs some fixes.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "This is merged now. But you need to build from source if you need this change right away.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
What’s the purpose of “retain_variables” in Variable backward function
null
[ { "contents": "How to use “retain_variables” in Variable backward function. I tried the following code:", "isAccepted": false, "likes": 1, "poster": "zhengyun" }, { "contents": "Hi, <SCODE>import torch\nfrom torch.autograd import Variable\nx = Variable(torch.ones(2, 2), requires_grad = True)\ny = x ** 2\ny.backward(torch.ones(2, 2), retain_variables=False)\nprint \"first backward of x is:\"\nprint x.grad\ny.backward(2*torch.ones(2, 2), retain_variables=False)\nprint \"second backward of x is:\"\nprint x.grad<ECODE>", "isAccepted": false, "likes": 2, "poster": "albanD" }, { "contents": "I see. Thanks very much.", "isAccepted": false, "likes": null, "poster": "zhengyun" } ]
false
Sigmoid Belief Networks
null
[ { "contents": "Cheers.", "isAccepted": false, "likes": null, "poster": "GeorgeStam" }, { "contents": "Is there an efficient way to implement this as the full stochastic neuron? Would I call a simple one layer nn recursively, since Torch is dynamic? <SCODE>import numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch.autograd as autograd\nfrom torch.autograd import Variable\n\ntorch.manual_seed(543)\n\nclass Policy(nn.Module):\n def __init__(self):\n super(Policy, self).__init__()\n self.affine1 = nn.Linear(2, 10)\n self.affine2 = nn.Linear(10, 2)\n self.affine = nn.Linear(2, 2)\n self.saved_outputs = []\n self.rewards = []\n def forward(self, x):\n x = F.sigmoid(self.affine1(x))\n action_scores = self.affine2(x)\n return F.softmax(action_scores)\n\nmodel = Policy()\noptimizer = optim.Adam(model.parameters(), lr=1e-2)\n\nx_input = np.array([[1.0,1.0],[1.0,0.0],[0.0,1.0],[0.0,0.0]])\ntarget = np.array([[0.0], [1.0],[1.0],[0.0]])\n\nfor i_episode in range(400):\n for t in range(20):\n ind = np.random.randint(4)\n xin = x_input[ind]\n tar = target[ind]\n\n x_input_Tensor = torch.from_numpy(xin).float().unsqueeze(0)\n probs = model(Variable(x_input_Tensor)) # prob of y\n output = probs.multinomial() # sampled from softmax\n print(xin, tar, output.data.numpy())\n model.saved_outputs.append(output) # action is a torch.LongTensor, 0s and 1s\n\n reward = 1.0*(output.data.numpy() == tar)\n model.rewards.append(reward)\n saved_outputs = model.saved_outputs\n rewards = []\n for r in model.rewards[::-1]:\n R = r\n rewards.insert(0, R)\n rewards = torch.Tensor(rewards)\n rewards = (rewards - rewards.mean()) / rewards.std()\n for output, r in zip(model.saved_outputs, rewards):\n output.reinforce(r)\n\noptimizer.zero_grad()\nautograd.backward(model.saved_outputs, [None for _ in model.saved_outputs])\noptimizer.step()\ndel model.rewards[:]\ndel model.saved_outputs[:]<ECODE>", "isAccepted": false, "likes": null, "poster": "GeorgeStam" }, { "contents": "I think you might have a small bug in your XOR example - you only call the optimizer once after all these iterations. Not sure if that’s what you wanted.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Basically, you subtract from the ‘reward’ something (a control variate) that correlates with it (but does not depend on the sample generating the reward), but to keep the gradient estimate unbiased you add the mean that this term has contributed. The MuProp algorithm uses a control variate based on a Taylor expansion around what it calls the mean field network, but is really the deterministic ‘standard’ neural net (neurons take values of sigmoids without sampling). Below is a single sample SBN that works with the MNIST example. If you change the line \"z1.reinforce(loss_repeated-CV_repeated) \" to just have loss_repeated as the reward, you have standard REINFORCE for an SBN. I am OK with implementing the Taylor expansion, but I’m wondering how I would add back the mean to make the gradient unbiased. Should I use a hook to the parameter gradients to add something to the gradient before the update step? At the moment I call backward() on the deterministic network’s loss inside the forward pass to add this as a hook - can I do this? 
<SCODE> self.w1 = Parameter(torch.Tensor(28 * 28, 200)) #seems to be more flexibility in using parameters than Linear layers\n self.wlast = Parameter(torch.Tensor(200, 10))\n\ndef expected_loss(self, target, forward_result):\n (a1, mu1, z1), (a2, logprobs_out) = forward_result\n return F.nll_loss(logprobs_out, target)\n\ndef expected_loss(self, target, forward_result):\n (a1, mu1, z1), (a2, logprobs_out) = forward_result\n return F.nll_loss(logprobs_out, target)\n\ndef forward(self, x, target):\n x = x.view(-1, 28*28)\n\n a1 = x.mm(self.w1)\n mu1 = F.sigmoid(a1)\n\n z1 = torch.bernoulli(mu1) # first hidden layer samples\n\n alast = z1.mm(self.wlast)\n logprobs_out = F.log_softmax(alast)\n\n expected_loss = self.expected_loss(target, ((a1, mu1, z1), (alast, logprobs_out)))\n '''MuProp Taylor expansion, deterministic forward prop'''\n deta1 = x.mm(self.w1)\n detmu1 = F.sigmoid(deta1)\n detalast = detmu1.mm(self.wlast)\n detlogprobs_out = F.log_softmax(detalast)\n detexpected_loss = self.expected_loss(target, ((a1, mu1, z1), (detalast, detlogprobs_out)))\n\n detexpected_loss.backward() # can I do this in forward???\n control_var = detexpected_loss.data + torch.sum(detmu1.grad.data*(z1.data-detmu1.data))\n loss_repeated = expected_loss.data.repeat(z1.size())\n CV_repeated = control_var.repeat(z1.size())\n z1.reinforce(loss_repeated-CV_repeated) # REINFORCE stochastic layer, control variate included\n #print(self.w1.grad.size())\n cvmu = self.w1.grad.data # Here is where I am confused! Is this gradient from the deterministic network?\n h1 = self.w1.register_hook(lambda grad: grad+cvmu)\n return ((alast, logprobs_out)), expected_loss<ECODE>", "isAccepted": false, "likes": 1, "poster": "GeorgeStam" } ]
false
Clarifying input size to RNN in word_language_model example?
null
[ { "contents": "Hi. I’m trying to understand something… How is this possible? If the model is expecting 20 inputs, shouldn’t it produce an error when you try to send it only 1? Furthermore, when I try to actually send the generation code a sequence of length 20 by creating… <SCODE>input = corpus.test[0:20]\nprint(\"input =\",input)\n<ECODE> Then I get… <SCODE>corpus = data.Corpus(args.data)\nntokens = len(corpus.dictionary)\nhidden = model.init_hidden(1)\n\ndef batchify(data, bsz): # breaks into parallel streams\n nbatch = data.size(0) // bsz\n data = data.narrow(0, 0, nbatch * bsz)\n data = data.view(bsz, -1).t().contiguous()\n if args.cuda:\n data = data.cuda()\n return data\n\neval_batch_size = 10\ntest_data = batchify(corpus.test, eval_batch_size)\n \ndef get_batch(source, i, evaluation=False):\n bptt = 20\n seq_len = min(bptt, len(source) - 1 - i)\n data = Variable(source[i:i+seq_len], volatile=evaluation)\n target = Variable(source[i+1:i+1+seq_len].view(-1))\n return data, target\n\n#input = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)\ninput, targets = get_batch(test_data, 0, evaluation=True)\n<ECODE> Then I when I get to the prediction step (i.e., \" output, hidden = model(input, hidden)\" ), I get the error… PS- I see the documentation for torch.nn.RNN says input is supposed to be a Tensor, but that’s just what I’m sending. It didn’t say anything about needing a Variable or other “matrix”: Thanks!", "isAccepted": false, "likes": null, "poster": "mcskwayrd" }, { "contents": "The confusion comes from a dynamic vs static graph framework. PyTorch constructs the graphs every time, so it doesn’t care in advance what length of the sequence will you be using with the RNN. The only arguments that you have to pass in to the constructor of the RNN are how many features should the input have, and what’s the hidden layer size. Then, you can use sequences of different lengths at every iteration, and it should work just fine. The only note that can lower the memory usage is to forward a fake batch before the training, that’s of the size of the longest sequence. This will allow our CUDA allocator to preallocate memory that can be reused for all (smaller) batches.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for writing back Adam. So this is the “flexible input size” feature I’ve been hearing so much about. Great! <SCODE>eval_batch_size = 10\ntest_data = batchify(corpus.test, eval_batch_size)\n\ndef get_batch(source, i, evaluation=False):\n bptt = 20\n seq_len = min(bptt, len(source) - 1 - i)\n data = Variable(source[i:i+seq_len], volatile=evaluation)\n target = Variable(source[i+1:i+1+seq_len].view(-1))\n return data, target\n\ninput, target = get_batch(test_data, 0, evaluation=True)\n#input = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)\n\ninput = Variable(input, volatile=True)\n<ECODE> But if it’s already a Variable, then I don’t understand why I can’t use it as an input to “model” further below. (?) Because if I don’t include that extra “Variable” re-casting, then still I get “inconsistent tensor size”… What is the source of this inconsistency, if the input length isn’t supposed to matter?", "isAccepted": false, "likes": null, "poster": "mcskwayrd" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Ok, removed the rewrap. 
<SCODE>corpus = data.Corpus(args.data)\nntokens = len(corpus.dictionary)\nhidden = model.init_hidden(1)\n# input = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)\ndef batchify(data, bsz): # breaks into parallel streams\n nbatch = data.size(0) // bsz\n data = data.narrow(0, 0, nbatch * bsz)\n data = data.view(bsz, -1).t().contiguous()\n if args.cuda:\n data = data.cuda()\n return data\neval_batch_size = 10\ntest_data = batchify(corpus.test, eval_batch_size)\ndef get_batch(source, i, evaluation=False):\n bptt = 20\n seq_len = min(bptt, len(source) - 1 - i)\n data = Variable(source[i:i+seq_len], volatile=evaluation)\n target = Variable(source[i+1:i+1+seq_len].view(-1))\n return data, target\ninput, target = get_batch(test_data, 0, evaluation=True)\n<ECODE> Running this version of the code produces an error about hidden size… It seems that it wants the hidden size to somehow follow the batch size, only with no \"L\"s…", "isAccepted": false, "likes": null, "poster": "mcskwayrd" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The only issue is that “output” then ends up being [20x1x10000] instead of [1x1x10000] like the remainder of the code expects. So I grab only the last element of output via output = output[-1] <SCODE>###############################################################################\n# Language Modeling on Penn Tree Bank\n#\n# This file generates new sentences sampled from the language model\n#\n###############################################################################\n\nimport argparse\nimport time\nimport math\n\nimport torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\nimport data\n\nparser = argparse.ArgumentParser(description='PyTorch PTB Language Model')\n\n# Model parameters.\nparser.add_argument('--data', type=str, default='./data/penn',\n help='location of the data corpus')\nparser.add_argument('--checkpoint', type=str, default='./model.pt',\n help='model checkpoint to use')\nparser.add_argument('--outf', type=str, default='generated.txt',\n help='output file for generated text')\nparser.add_argument('--words', type=int, default='1000',\n help='number of words to generate')\nparser.add_argument('--seed', type=int, default=1111,\n help='random seed')\nparser.add_argument('--cuda', action='store_true',\n help='use CUDA')\nparser.add_argument('--temperature', type=float, default=1.0,\n help='temperature - higher will increase diversity')\nparser.add_argument('--log-interval', type=int, default=100,\n help='reporting interval')\nargs = parser.parse_args()\n\n# Set the random seed manually for reproducibility.\ntorch.manual_seed(args.seed)\nif torch.cuda.is_available():\n if not args.cuda:\n print(\"WARNING: You have a CUDA device, so you should probably run with --cuda\")\n else:\n torch.cuda.manual_seed(args.seed)\n\nif args.temperature < 1e-3:\n parser.error(\"--temperature has to be greater or equal 1e-3\")\n\nwith open(args.checkpoint, 'rb') as f:\n model = torch.load(f)\n\nif args.cuda:\n model.cuda()\nelse:\n model.cpu()\n\n\ndef batchify(data, bsz): # breaks into parallel streams\n nbatch = data.size(0) // bsz\n data = data.narrow(0, 0, nbatch * bsz)\n data = data.view(bsz, -1).t().contiguous()\n if args.cuda:\n data = data.cuda()\n return data\n\ncorpus = data.Corpus(args.data)\nntokens = len(corpus.dictionary)\ndef batchify(data, bsz): # breaks into parallel streams\n nbatch = data.size(0) // bsz\n data = data.narrow(0, 0, nbatch * bsz)\n data = data.view(bsz, 
-1).t().contiguous()\n if args.cuda:\n data = data.cuda()\n return data\neval_batch_size = 1\ntest_data = batchify(corpus.test, eval_batch_size)\nhidden = model.init_hidden(1)\ndef get_batch(source, i, evaluation=False):\n bptt = 20\n seq_len = min(bptt, len(source) - 1 - i)\n data = Variable(source[i:i+seq_len], volatile=evaluation)\n target = Variable(source[i+1:i+1+seq_len].view(-1))\n return data, target\n\ninput, target = get_batch(test_data, 0, evaluation=True)\n#input = Variable(torch.rand(1, 1).mul(ntokens).long(), volatile=True)\n#print(\"input = \",input)\n\nif args.cuda:\n input.data = input.data.cuda()\n\nwith open(args.outf, 'w') as outf:\n for i in range(args.words):\n output, hidden = model(input, hidden)\n output = output[-1]\n# print(\"output = \",output)\n word_weights = output.squeeze().data.div(args.temperature).exp().cpu()\n word_idx = torch.multinomial(word_weights, 1)[0]\n input.data.fill_(word_idx)\n word = corpus.dictionary.idx2word[word_idx]\n\n outf.write(word + ('\\n' if i % 20 == 19 else ' '))\n\n if i % args.log_interval == 0:\n print('| Generated {}/{} words'.format(i, args.words))\n\nprint(\" \")<ECODE>", "isAccepted": false, "likes": null, "poster": "mcskwayrd" }, { "contents": "Actually that might not be what you want. You want to pass the large input only once, to initialize the network, and then do the steps one by one. In this example you’ll forward a sequence of 20 words from the data, and then you’ll be feeding each output for 20 steps, and taking the last one as the next input (that will be applied 20 times). You should forward the batch through the network only once and slice off the last hidden state. Then, use that slice with an input of length 1 to generate the data.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mcskwayrd" } ]
false
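Following the last answer, a minimal sketch of the generation loop it describes: prime the hidden state with the whole seed sequence once, then generate token by token from a length-1 input. It reuses the names from the thread (`model`, `corpus`, `get_batch`, `test_data`) and the Variable/volatile API of that PyTorch version; `num_words` and `temperature` are assumed constants.
<SCODE>
prime, _ = get_batch(test_data, 0, evaluation=True)   # e.g. a 20 x 1 seed batch
hidden = model.init_hidden(1)
_, hidden = model(prime, hidden)                       # prime the hidden state once

input = Variable(torch.LongTensor(1, 1), volatile=True)
input.data.fill_(prime.data[prime.size(0) - 1][0])     # start from the last seed word

for i in range(num_words):
    output, hidden = model(input, hidden)              # output is 1 x 1 x ntokens
    word_weights = output.squeeze().data.div(temperature).exp().cpu()
    word_idx = torch.multinomial(word_weights, 1)[0]
    input.data.fill_(word_idx)
    print(corpus.dictionary.idx2word[word_idx])
<ECODE>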
Converting a Variable to a Parameter
vision
[ { "contents": "(Also notice that I’ve changed batch size to 1, is there a way to do this with bigger batches?) Thanks a lot.", "isAccepted": false, "likes": 1, "poster": "Quilby" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks a lot apaszke, I was not aware of these “functionals”. Is this the correct way to do this convolution in a batch manner? (it runs very slow…) <SCODE> y = self.pool(F.relu(self.conv1(y)))\n z = Variable(torch.Tensor(x.size()[0], 16, 10, 10))\n\n for i in range(x.size()[0]):\n z[i,:]= F.conv2d(y[i,:].unsqueeze(0), x[i,:]).squeeze(0)\n\n z = self.pool(F.relu(z))\n z = z.view(-1, 16*5*5)\n<ECODE> (x contains the convolutional weights, y contains the image I want to convolve over, and z is where I put the result) Thanks", "isAccepted": false, "likes": null, "poster": "Quilby" }, { "contents": "Why can’t you compute the convolution with all filters in one go?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Exactly, that is the point of the dynamic filter network. The conv filters for a certain picture are a function of that picture.", "isAccepted": false, "likes": null, "poster": "Quilby" }, { "contents": "You could still separate the convolutions using groups. It’s not going to be super fast, but you could give that a try.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The sizes are: Thanks", "isAccepted": false, "likes": null, "poster": "Quilby" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
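A runnable sketch of the `groups` trick suggested above for dynamic filter networks: fold the batch into the channel dimension so one `F.conv2d` call applies a different, per-sample filter bank to every image. All sizes are illustrative assumptions.
<SCODE>
import torch
import torch.nn.functional as F

N, in_ch, out_ch, k = 4, 6, 16, 5
y = torch.randn(N, in_ch, 10, 10)         # feature maps, one per sample
w = torch.randn(N, out_ch, in_ch, k, k)   # dynamically predicted filters

y_grouped = y.view(1, N * in_ch, 10, 10)
w_grouped = w.view(N * out_ch, in_ch, k, k)
z = F.conv2d(y_grouped, w_grouped, groups=N)   # 1 x (N*out_ch) x 6 x 6
z = z.view(N, out_ch, z.size(2), z.size(3))    # back to per-sample layout
<ECODE>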
Properly make autograd.Function with scalar variable
null
[ { "contents": "<SCODE>class mul_scalar(torch.autograd.Function):\n \"\"\"\n Customized autograd.Function of\n f(T,s) = s * T,\n where T is a fixed Tensor and s is a Variable\n \"\"\"\n\n def forward(self, T, s_var):\n self.save_for_backward(T, s_var)\n return T.mul(s_var[0])\n\n def backward(self, grad_output):\n T, s_var = self.saved_tensors\n return grad_output.mul(s_var[0]), grad_output.dot(T)\n<ECODE> <SCODE>class Net(nn.Module):\n def __init__(self, var=1):\n super(Net, self).__init__()\n self.ms = mul_scalar()\n \n def forward(self, x):\n ...\n self.ms(x, w)\n ...\n<ECODE> <SCODE> def forward(self, x):\n c = Variable(torch.FloatTensor([1]), requires_grad = True)\n ms = mul_scalar()\n z1 = ms(x, c)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Ja-Keoung_Koo" }, { "contents": "<SCODE>def mul_scalar(T, s_var):\n # supposes that T is 1D var\n # need to unsqueeze s_var if not\n return T * s_var.expand_as(T)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks for kind replies. Is it safe to use functions like expand_as, unsqueeze and so on? I wonder that when backprogating, grad_output is automatically resized.", "isAccepted": false, "likes": null, "poster": "Ja-Keoung_Koo" }, { "contents": "Yes, it is safe to use these functions, and they were unit tested so they should be fine.", "isAccepted": false, "likes": 1, "poster": "fmassa" } ]
false
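A tiny illustration of the `expand_as` pattern from the accepted answer, for a scalar Variable multiplying a 2-D tensor; the shapes and values are arbitrary.
<SCODE>
import torch
from torch.autograd import Variable

T = Variable(torch.randn(3, 4))
s = Variable(torch.Tensor([2.0]), requires_grad=True)

out = T * s.unsqueeze(1).expand_as(T)   # s viewed as 1x1, broadcast to 3x4
out.sum().backward()                    # the gradient of s equals the sum of T
<ECODE>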
Problems with weight array of FloatTensor type in loss function
null
[ { "contents": "I have mostly worked on keras with tf backend and sometimes dabbled with torch7. I was intrigued by the pytorch project and wanted to test it out. So, I was trying to run a simple model on a dataset where I loaded my features into a np.float64 array and the target labels into a np.float64 array. Now, PyTorch automatically converted them both to DoubleTensor and that seems okay to me. However, the loss function expects Double Tensors for the Weight and the Bias but apparently it is getting Float Tensors for both of them. I am not sure how to change my code to ensure that the loss function gets what it is expecting. I am putting my model definition below: <SCODE>class ColorizerNet(nn.Module):\n\n\tdef __init__(self):\n\t\tsuper(ColorizerNet, self).__init__()\n\t\tself.layer1 = nn.Conv2d(1, 8, 2, 2)\n\t\tself.layer2 = nn.Conv2d(8, 16, 2, 2)\n\t\tself.layer3 = nn.Conv2d(16, 8, 2, 2)\n\t\tself.layer4 = nn.Conv2d(8, 1, 2, 2)\n\n\n\tdef forward(self, x):\n\t\tx = F.relu(self.layer1(x))\n\t\tx = F.relu(self.layer2(x))\n\t\tx = F.relu(self.layer3(x))\n\t\tx = F.relu(self.layer4(x))\n\t\treturn x\n<ECODE> <SCODE>for epoch in range(num_epochs): # loop over the dataset multiple times\n\n running_loss = 0.0\n for i, data in enumerate(data_loader, 0):\n # get the inputs\n inputs, labels = data\n \n # wrap them in Variable\n inputs, labels = Variable(inputs), Variable(labels)\n \n # zero the parameter gradients\n optimizer.zero_grad()\n \n # forward + backward + optimize\n outputs = model(inputs)\n loss = criterion(outputs, labels)\n loss.backward() \n optimizer.step()\n \n # print statistics\n running_loss += loss.data[0]\n<ECODE> Here is the error I am getting: <SCODE>TypeError: DoubleSpatialConvolutionMM_updateOutput \nreceived an invalid combination of arguments - \ngot (int, torch.DoubleTensor, torch.DoubleTensor, torch.FloatTensor, torch.FloatTensor, torch.DoubleTensor, torch.DoubleTensor, int, int, int, int, int, int), \nbut expected (int state, torch.DoubleTensor input, torch.DoubleTensor output, torch.DoubleTensor weight, [torch.DoubleTensor bias or None], torch.DoubleTensor finput, torch.DoubleTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "Spider101" }, { "contents": "To convert your inputs to float (recommended): <SCODE>inputs, labels = data\ninputs = inputs.float()\nlabels = labels.float()\ninputs, labels = Variable(inputs), Variable(labels)\n<ECODE> To convert your model to double: <SCODE>model = ColorizerNet()\nmodel.double()\n<ECODE> I recommend using floats instead of doubles. It’s the default tensor type in PyTorch. On GPUs, float calculations are much faster than double calculations.", "isAccepted": false, "likes": 7, "poster": "colesbury" }, { "contents": "Yes, I had heard about that. Thank you for pointing out the missing link. I think I will change the inputs to float. Just one follow-up question: why does pytorch convert numpy’s float64 to Double Tensors? If Float Tensors are the go-to type for the language, I would have thought the numpy to torch conversion would maintain that; instead it gets converted to Double Tensor. Could you shed some light on this?", "isAccepted": false, "likes": null, "poster": "Spider101" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 8, "poster": "apaszke" }, { "contents": "Thank you guys! 
This helps a lot and clarifies a lot of doubts I had!", "isAccepted": false, "likes": null, "poster": "Spider101" }, { "contents": "I think there is a little inconsistency, because <SCODE>torch.FloatTensor(np.array([1, 2], dtype=np.float64)) # works\ntorch.FloatTensor(np.array([1, 2], dtype=np.float32)) #works\ntorch.IntTensor(np.array([1, 2], dtype=np.int64)) #fails\ntorch.IntTensor(np.array([1, 2], dtype=np.int32)) #works\n<ECODE>", "isAccepted": false, "likes": null, "poster": "jdhao" } ]
false
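A short illustration of the dtype behaviour discussed in this thread: NumPy defaults to float64, `torch.from_numpy` preserves that, and the cast can happen on either side.
<SCODE>
import numpy as np
import torch

a = np.random.randn(2, 3)        # dtype float64
t = torch.from_numpy(a)          # torch.DoubleTensor
t_float = t.float()              # torch.FloatTensor, matching nn's default weights

a32 = a.astype(np.float32)       # or cast the array first
t32 = torch.from_numpy(a32)      # torch.FloatTensor directly
<ECODE>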
How to Reverse a Torch Tensor
null
[ { "contents": "How to Reverse a Torch Tensor", "isAccepted": false, "likes": 1, "poster": "peak" }, { "contents": "<SCODE>tensor = torch.rand(10) # your tensor\n# create inverted indices\nidx = [i for i in range(tensor.size(0)-1, -1, -1)]\nidx = torch.LongTensor(idx)\ninverted_tensor = tensor.index_select(0, idx)\n<ECODE>", "isAccepted": false, "likes": 8, "poster": "fmassa" }, { "contents": "Hi, it looks like this can’t work on torch.autograd.variable.Variable. Is there any way to reverse a specific dimension in a Variable? Thanks!", "isAccepted": false, "likes": 1, "poster": "qianguih" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "If your use case is to reverse sequences to use in Bidirectional RNNs, I just create a clone and flip using numpy. <SCODE>rNpArr = np.flip(fTensor.numpy(),0).copy() #Reverse of copy of numpy array of given tensor\nrTensor = torch.from_numpy(rNpArr) \n<ECODE>", "isAccepted": false, "likes": 7, "poster": "DiffEverything" }, { "contents": "Can you try something like:", "isAccepted": false, "likes": null, "poster": "Sunil_Sharma" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "aRI0U" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "chorus12" } ]
false
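For reference, the `index_select` recipe from this thread next to `torch.flip`, which later PyTorch releases added and which also works on tensors that require gradients; the example tensor is arbitrary.
<SCODE>
import torch

x = torch.arange(10.)                                         # 0., 1., ..., 9.
idx = torch.arange(x.size(0) - 1, -1, -1, dtype=torch.long)
rev1 = x.index_select(0, idx)                                 # manual reversal
rev2 = torch.flip(x, dims=[0])                                # newer built-in
<ECODE>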
PyTorch version of SpatialFullConvolution
null
[ { "contents": "I am trying to recreate a torch7 model architecture in pytorch. I have used several layers of SpatialFullConvolution and as such, was wondering if there is anything analogous to that in PyTorch. I have not been able to find anything similar by name.", "isAccepted": false, "likes": null, "poster": "Spider101" }, { "contents": "ConvTranspose2d", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Okay. I made the changes but now I am getting a runtime error: <SCODE>RuntimeError: input and target have different number of elements: \ninput[128 x 1 x 128 x 128] has 2097152 elements, while target[128 x 2 x 128 x 128] has 4194304 element\n<ECODE> This is the code for my model architecture: <SCODE>class ColorizerNet(nn.Module):\n\ndef __init__(self):\n\tsuper(ColorizerNet, self).__init__()\n\tself.layer1 = nn.Conv2d(1, 8, 2, 2)\n\tself.layer2 = nn.Conv2d(8, 16, 2, 2)\n\tself.layer3 = nn.ConvTranspose2d(16, 8, 2, 2)\n\tself.layer4 = nn.ConvTranspose2d(8, 1, 2, 2)\n\n\ndef forward(self, x):\n\tx = F.relu(self.layer1(x))\n\tx = F.relu(self.layer2(x))\n\tx = F.relu(self.layer3(x))\n\tx = F.relu(self.layer4(x))\n\treturn x\n<ECODE> Am I making any obvious errors here? If required, I can start a separate thread on this followup error.", "isAccepted": false, "likes": null, "poster": "Spider101" }, { "contents": "It’s an error in the loss function. Your output’s second dimension has a different size (1) than the target’s (2).", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
Flatten the parameters in DataParallel
null
[ { "contents": "I tried the ImageNet example with ResNet152 on 8GPUs but it is much slower than fb.resnet.torch (1.5s vs 0.8s per iter). It’s elegant to implement the Broadcast as an Op/Function. I wonder if it is possible to overlap the communication with computation during forward/backward? Or it is necessary to flatten the parameters in order to improve the efficiency?", "isAccepted": false, "likes": 2, "poster": "Cysu" }, { "contents": "We don’t support flattening the parameters. It’s quite complex and bug prone to do that correctly, and maintain flexibility. The main problem is that with 8 GPUs we’re a bit limited because of Python’s GIL (only one thread can execute Python code at a time). That’s why we’re working on moving more of logic commonly used in the vision networks, as well as some others, to our C backends, so they can proceed without blocking other threads.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
How to create a model with shared weights?
null
[ { "contents": "I want to create a model with sharing weights, for example: given two input A, B, the first 3 NN layers share the same weights, and the next 2 NN layers are for A, B respectively. How to create such model, and perform optimally?", "isAccepted": false, "likes": 6, "poster": "xiaozhun07" }, { "contents": "EDIT: we do support sharing Parameters between modules, but it’s recommended to decompose your model into many pieces that don’t share parameters if possible. We don’t support using the same Parameters in many modules. Just reuse the base for two inputs: <SCODE>class MyModel(nn.Module):\n def __init__(self):\n self.base = ...\n self.head_A = ...\n self.head_B = ...\n\n def forward(self, input1, input2):\n return self.head_A(self.base(input1)), self.head_B(self.base(input2))\n \n<ECODE>", "isAccepted": false, "likes": 20, "poster": "apaszke" }, { "contents": "in your example, what will happen to gradients of self.base? will they be calculated taking into account both input1 and input2?", "isAccepted": false, "likes": null, "poster": "vladimir" }, { "contents": "Yes, you can use the same module multiple times during forward.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 10, "poster": "jekbradbury" }, { "contents": "Yeah they are supported, sorry for this. But it’s still considered better practice to not do it. I’ve updated the answer.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "In this code, he make share modules(‘G_block_share/D_block_share’) out of class, and then use these share modules in different two classes(‘Generator A&B or Discriminator A&B)… This code is right way to share weights between two generator/discriminators?", "isAccepted": false, "likes": null, "poster": "11189" }, { "contents": "Could you please tell us why it is better to not do it? Thanks", "isAccepted": false, "likes": 4, "poster": "N_Hunter" }, { "contents": "Dear Apaszke, thank you for your updates! But I am still a little confused about your answer. Like in your example, you have 3 modules (base, headA, headB), but how could you decompose them into pieces that don’t share parameters? Looking forward to your answer, please! Thank you for your attention.", "isAccepted": false, "likes": 1, "poster": "ZHANGHeng19931123" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Baichuan" }, { "contents": "<SCODE>class MyModel(nn.Module):\n def __init__(self):\n self.base1 = ...\n self.base2=...\n self.head_A = ...\n self.head_B = ...\n\n def forward(self, input1, input2):\n return self.head_A(self.base1(input1)), self.head_B(self.base2(input2))\n \n<ECODE>", "isAccepted": false, "likes": null, "poster": "Baichuan" }, { "contents": "But in this case, how would base1 and base2 share the same weights? It seems like base1 + head_A and base2 + head_B are totally separate models.", "isAccepted": false, "likes": null, "poster": "Hazel" }, { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "dariodematties" }, { "contents": "I think sometimes you have to use weight sharing, like in the case where you want one layer to be the transpose of another. 
For this case, one can do this: <SCODE>shared_w = torch.rand((n_y, n_z))*.2 - .1 # initialize somehow\nself.yzdecoding = nn.Linear(n_y, n_z) # Create shared layers\nself.zydecoding = nn.Linear(n_z, n_y)\nself.yzdecoding.weight = nn.Parameter(shared_w.T) # Share weights\nself.zydecoding.weight = nn.Parameter(shared_w)\n<ECODE> Note that the (n_y, n_z) Linear layer has weights of shape (n_z, n_y), which may not be intuitive at first.", "isAccepted": false, "likes": null, "poster": "Alex_Li" } ]
false
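A runnable version of the shared-base pattern from the accepted answer above; the concrete layer sizes are made up for the example. The point is that calling the same submodule on both inputs shares its weights and accumulates gradients from both paths.
<SCODE>
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.base = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                                  nn.Linear(32, 32), nn.ReLU())
        self.head_A = nn.Linear(32, 5)
        self.head_B = nn.Linear(32, 3)

    def forward(self, input1, input2):
        # the same base (hence the same parameters) processes both inputs
        return self.head_A(self.base(input1)), self.head_B(self.base(input2))

model = MyModel()
out_a, out_b = model(torch.randn(4, 10), torch.randn(4, 10))
<ECODE>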
Distribution Implementations
null
[ { "contents": "", "isAccepted": false, "likes": 5, "poster": "solidor" }, { "contents": "We didn’t plan on adding that, but it seems like a useful thing to have. It’s not going to be a priority for us, but if someone wants to send a PR, then we’ll be happy to merge it in.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I would love to contribute as well. However, I am still getting familiar with the infrastructure of PyTorch. It would be great if someone from the dev could provide a template for implementation (e.g., a Normal distribution), ideally one for the continuous and one for the discrete distribution, as a reference for best practice using PyTorch.", "isAccepted": false, "likes": null, "poster": "junpenglao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "junpenglao" }, { "contents": "<SCODE>def normal_pdf(x, mean, std):\n # compute the pdf here using x, mean and std Variables\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "junpenglao" }, { "contents": "How do you compute the gradients in that case? Right now StochasticFunctions will use REINFORCE.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "A distribution could be implemented as a class containing one Function “self.pdf” and a method sample() <SCODE>class Distribution():\n def __init__(self):\n self.pdf = # autograd Function\n\n def sample():\n # use self.pdf for Monte Carlo algorithm\n<ECODE> ( by the way, how do you use python coloration when providing code in this forum ?)", "isAccepted": false, "likes": 2, "poster": "alexis-jacq" }, { "contents": "When opening up the block append “python” right after the opening backticks:", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "<SCODE>W = torch.rand(nv, nh)); a = torch.rand(nh)); b = torch.rand(nv));\nnet_forward = nn.Sequential(nn.Linear(wieght=W, bias=a), nn.Sigmoid())\nnet_backward = nn.Sequential(nn.Linear(wieght=W, bias=b), nn.SoftMax())\n\n# training:\nfor input in training_set:\n v1 = input # from training set\n\n x = net_forward(v1)\n H1 = distrib.estimation(x, distrib.Bernouilli())\n h1 = distrib.sample(H1)\n\n y = net_backward(h)\n V2 = distrib.estimation(y, distrib.Multinomial())\n v2 = distrib.sample(V2)\n\n x = net_forward(v2)\n H2 =distrib. estimation(x, distrib.Bernouilli())\n h2 = distrib.sample(H2)\n\n # gradient descent step:\n W -= lr * (v1.h1 - v2.h2)\n a -= lr * (h1 - h2)\n b -= lr * (v1 - v2)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "<SCODE>import torch\nfrom torch.autograd import Variable,Function\nimport torch.optim as optim\nfrom scipy import stats\nimport numpy as np\n\ndtype = torch.FloatTensor\ndef get_tau_sd(tau=None, sd=None):\n \"\"\"\n Function port from PyMC3\n\n Find precision and standard deviation\n .. 
math::\n \\tau = \\frac{1}{\\sigma^2}\n Parameters\n ----------\n tau : array-like, optional\n sd : array-like, optional\n Results\n -------\n Returns tuple (tau, sd)\n Notes\n -----\n If neither tau nor sd is provided, returns (1., 1.)\n \"\"\"\n if tau is None:\n if sd is None:\n sd = 1.\n tau = 1.\n else:\n tau = sd**-2.\n\n else:\n if sd is not None:\n raise ValueError(\"Can't pass both tau and sd\")\n else:\n sd = tau**-.5\n\n # cast tau and sd to float in a way that works for both np.arrays\n # and pure python\n tau = 1. * tau\n sd = 1. * sd\n\n return (tau, sd)\n\nclass Normal(Function):\n \n def __init__(self, mu=0, sd=None, tau=None, **kwargs):\n \"\"\"Construct Normal distributions with mean and stddev `loc` and `scale`.\n The parameters `loc` and `scale` must be shaped in a way that supports\n broadcasting (e.g. `loc + scale` is a valid operation).\n Args:\n mu: Floating point tensor; the means of the distribution(s).\n sd: Floating point tensor; the stddevs of the distribution(s).\n Must contain only positive values.\n Raises:\n TypeError: if `loc` and `scale` have different `dtype`.\n \"\"\"\n tau, sd = get_tau_sd(tau=tau, sd=sd)\n self.sd = sd\n self.tau = tau\n\n self.mean = self.median = self.mode = self.mu = mu\n self.variance = 1. / self.tau\n\n super(Normal, self).__init__(**kwargs)\n\n def random(self, size=None):\n standard_normal = torch.randn(size)\n sd = self.sd\n mu = self.mu\n \n muvec = mu.repeat(size)\n sdvec = sd.repeat(size)\n \n return standard_normal.mul_(sdvec).add_(muvec)\n\n def logp(self, value):\n tau = self.tau\n mu = self.mu\n \n muvec = mu.repeat(value.size(0), 1)\n tauvec = tau.repeat(value.size(0), 1)\n \n return (-tauvec * (value - muvec)**2 + torch.log(tauvec / np.pi / 2.)) / 2.\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "junpenglao" }, { "contents": "I think distribution could be implemented as a collection of functions as below: <SCODE>class Distribution:\n\n @staticmethod\n def pdf(self):\n raise NotImplementedError\n\n @staticmethod\n def logpdf(self):\n raise NotImplementedError\n \n @staticmethod\n def cdf(self):\n raise NotImplementedError\n \n @staticmethod\n def qtl(self):\n raise NotImplementedError\n\n @staticmethod\n def rnd(self):\n raise NotImplementedError\n\nclass Gauss(Distribution):\n #scalar version\n @staticmethod\n def logpdf(x, mu, sigma):\n return -0.5 * (x-mu) * (x-mu)/(sigma*sigma) - 0.5 * torch.log(2*math.pi*sigma*sigma)\n\n<ECODE> and it could be used like this: <SCODE>Gauss.logpdf(x, mu, sigma)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "kirk86" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "kirk86" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "stepelu" }, { "contents": "Any advances on that? Highly interested.", "isAccepted": false, "likes": null, "poster": "Andres_Masegosa" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "chssch" } ]
false
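In the spirit of the "just write the density as ordinary tensor ops" suggestion in this thread, here is a minimal normal log-density that is differentiable with respect to its parameters; it uses the current `requires_grad` API rather than explicit Variables.
<SCODE>
import math
import torch

def normal_logpdf(x, mu, sigma):
    var = sigma * sigma
    return -0.5 * (x - mu) ** 2 / var - 0.5 * torch.log(2 * math.pi * var)

mu = torch.zeros(3, requires_grad=True)
sigma = torch.ones(3, requires_grad=True)
x = torch.randn(3)
normal_logpdf(x, mu, sigma).sum().backward()   # gradients flow into mu and sigma
<ECODE>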
Installation problem for python 3.6
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "What’s your OS? Do you have a 64-bit system?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "My OS is Ubuntu 14.04 64-bit", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "<SCODE>wget https://s3.amazonaws.com/pytorch/whl/cu75/torch-0.1.8.post1-cp36-cp36m-linux_x86_64.whl\nmv torch-0.1.8.post1-cp36-cp36m-linux_x86_64.whl torch-0.1.8.post1-cp36-none-linux_x86_64.whl\npip install torch-0.1.8.post1-cp36-none-linux_x86_64.whl\n<ECODE> <SCODE>pip3 --version\npython3 --version\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "yes, the pip needs to be linked to python3.6, because the wheel file is for python 3.6", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Could you tell how can I do that?", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "Doesn’t it work with python 3.5?", "isAccepted": false, "likes": null, "poster": "herlimenezes" } ]
false
Pre-trained network demo
null
[ { "contents": "<SCODE># get input image\nimport skimage.io\nimport os\nfile_name = '26132.jpg'\nif not os.access(file_name, os.R_OK):\n file_URL = 'http://www.zooclub.ru/attach/26000/26132.jpg'\n os.system('wget ' + file_URL)\nimg = skimage.io.imread(file_name)\n\n\n# get model\nimport torchvision\nresnet_18 = torchvision.models.resnet18(pretrained=True)\n\n\n# get classes\nfile_name = 'synset_words.txt'\nif not os.access(file_name, os.W_OK):\n synset_URL = 'https://github.com/szagoruyko/functional-zoo/raw/master/synset_words.txt'\n os.system('wget ' + synset_URL)\nclasses = list()\nwith open(file_name) as class_file:\n for line in class_file:\n classes.append(line.strip().split(' ', 1)[1].split(', ', 1)[0])\nclasses = tuple(classes)\n\n\n# define image transformation\nfrom torchvision import transforms as trn\ncentre_crop = trn.Compose([\n trn.ToPILImage(),\n trn.Scale(256),\n trn.CenterCrop(224),\n trn.ToTensor(),\n trn.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n])\n\n\n# get top 5 probabilities\nfrom torch.autograd import Variable as V\nfrom torch.nn import functional as f\nx = V(centre_crop(img).unsqueeze(0), volatile=True)\nlogit = resnet_18.forward(x)\nh_x = f.softmax(logit).data.squeeze()\nprobs, idx = h_x.sort(0, True)\nfor i in range(0, 5):\n print('{:.3f} -> {}'.format(probs[i], classes[idx[i]]))\n<ECODE> And this is the output <SCODE>0.009 -> bucket\n0.007 -> plunger\n0.006 -> hook\n0.005 -> water bottle\n0.005 -> water jug\n<ECODE> which should be, instead, roughly <SCODE>0.99 -> German shepherd\n0.01 -> malinois\n0.00 -> Norwegian elkhound\n0.00 -> Leonberg\n0.00 -> red wolf\n<ECODE> And this is the input picture.", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "colesbury" }, { "contents": "<SCODE>0.935 -> German shepherd\n0.033 -> Leonberg\n0.031 -> malinois\n0.000 -> Norwegian elkhound\n0.000 -> African hunting dog\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Atcold" } ]
false
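One detail worth adding as a general note (the thread does not state it as the confirmed fix): pretrained classification models should be switched to evaluation mode before inference, otherwise batch norm uses batch statistics and single-image predictions can come out scrambled.
<SCODE>
import torchvision

resnet_18 = torchvision.models.resnet18(pretrained=True)
resnet_18.eval()   # use running batch-norm statistics during inference
<ECODE>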
How to make custom method in nn.Module work with GPUs
vision
[ { "contents": "I’m trying to implement simple res-net like below and it works with CPU. <SCODE>class ResNet(nn.module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv_1 = nn.Conv2d(3, 64, 4, stride=2)\n self.bn_1 = nn.BatchNorm2d(64)\n self.res_1 = self.__res_block(64, [32,32,128], True)\n ...\n def forward(self, x):\n x = self.conv_1(x)\n x = F.relu(self.bn_1(x))\n x = F.max_pool2d(x, 2, 2)\n x = self.res_1(x)\n ...\n\n def __res_block(self, in_channels,\n nb_filters, right=False):\n def __res_base(_in_channels, out_channels,\n kernel_size=1, padding=0):\n def g(x):\n x = nn.Conv2d(in_channels=_in_channels,\n out_channels=out_channels,\n kernel_size=kernel_size,\n padding=padding)(x)\n x = nn.BatchNorm2d(num_features=out_channels)(x)\n return x\n return g\n\n def f(x):\n y = F.relu(__res_base(in_channels, nb_filters[0])(x))\n y = F.relu(__res_base(nb_filters[0], nb_filters[1],\n kernel_size=3, padding=1)(y))\n y = F.relu(__res_base(nb_filters[1], nb_filters[2])(y))\n if right is True:\n x = __res_base(in_channels, nb_filters[2])(x)\n return F.relu(x+y)\n return f\n<ECODE> but it doesn’t work with GPU and throws TypeError, <SCODE>TypeError: _cudnn_convolution_full_forward received an invalid combination of arguments \n- got (torch.cuda.FloatTensor, torch.FloatTensor, torch.FloatTensor, torch.cuda.FloatTensor, tuple, tuple, int, bool), \nbut expected (torch.cuda.RealTensor input, torch.cuda.RealTensor weight, torch.cuda.RealTensor bias, \ntorch.cuda.RealTensor output, std::vector<int> pad, std::vector<int> stride, int groups, bool benchmark)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "moskomule" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "Ho I see,", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "marioviti" } ]
false
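A sketch of one way to restructure the snippet above so that every layer is registered in `__init__` and therefore moved by `net.cuda()`; creating `nn.Conv2d` objects inside `forward` leaves their weights on the CPU, which is what produces the FloatTensor/cuda.FloatTensor mismatch. The channel bookkeeping loosely follows the original code but is otherwise an assumption.
<SCODE>
import torch.nn as nn
import torch.nn.functional as F

class ResBase(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=1, padding=0):
        super(ResBase, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        return self.bn(self.conv(x))

class ResBlock(nn.Module):
    def __init__(self, in_channels, nb_filters, right=False):
        super(ResBlock, self).__init__()
        self.b1 = ResBase(in_channels, nb_filters[0])
        self.b2 = ResBase(nb_filters[0], nb_filters[1], kernel_size=3, padding=1)
        self.b3 = ResBase(nb_filters[1], nb_filters[2])
        self.shortcut = ResBase(in_channels, nb_filters[2]) if right else None

    def forward(self, x):
        y = F.relu(self.b1(x))
        y = F.relu(self.b2(y))
        y = F.relu(self.b3(y))
        if self.shortcut is not None:
            x = self.shortcut(x)
        return F.relu(x + y)
<ECODE>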
Something went wrong while installing via pip
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "JunshengShen" }, { "contents": "It’s a network error. Probably there’s a problem with your internet connection, or with a proxy if you use one.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
PyTorch serialization
null
[ { "contents": "Hi, Awesome library! I’d like to ask if it is possible to save a trained PyTorch model in (\"*.t7\") and read it in Torch. Thanks", "isAccepted": false, "likes": null, "poster": "ablavatski1" }, { "contents": "No, we don’t have such option. PyTorch allows for creating much more complex models than Lua Torch without a need to define a lot of helpers, and because of that it’s not easy to translate models in this direction.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks for the fast reply!", "isAccepted": false, "likes": null, "poster": "ablavatski1" } ]
false
Tensors as items in multiprocessing.queue
null
[ { "contents": "<SCODE>import torch\nimport torch.multiprocessing as mp\n\ndef put_in_q(idx, q):\n q.put(torch.IntTensor(2, 2).fill_(idx))\n # q.put(idx) # works with int, float, str, np.ndarray, but not torch.Tensor\n\nq = mp.Queue()\n\np = mp.Process(target=put_in_q, args=(0, q))\np.start()\n\nx = q.get()\nprint(x)\n\np.join()\n<ECODE> The error I get: <SCODE>Traceback (most recent call last):\n File \"test_torch_queue.py\", line 15, in <module>\n x = q.get()\n File \"/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/queues.py\", line 113, in get\n return _ForkingPickler.loads(res)\n File \"/home/florin/Tools/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py\", line 72, in rebuild_storage_fd\n fd = df.detach()\n File \"/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py\", line 57, in detach\n with _resource_sharer.get_connection(self._id) as conn:\n File \"/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py\", line 87, in get_connection\n c = Client(address, authkey=process.current_process().authkey)\n File \"/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py\", line 493, in Client\n answer_challenge(c, authkey)\n File \"/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py\", line 732, in answer_challenge\n message = connection.recv_bytes(256) # reject large message\n File \"/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py\", line 216, in recv_bytes\n buf = self._recv_bytes(maxlength)\n File \"/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py\", line 407, in _recv_bytes\n buf = self._recv(4)\n File \"/home/florin/Tools/anaconda3/lib/python3.6/multiprocessing/connection.py\", line 379, in _recv\n chunk = read(handle, remaining)\nConnectionResetError: [Errno 104] Connection reset by peer\n<ECODE> Thanks!", "isAccepted": false, "likes": null, "poster": "florin" }, { "contents": "The subprocess needs to be alive at the time when the master process receives the Tensor. There are two ways to fix that: This example should work: <SCODE>import torch\nimport torch.multiprocessing as mp\n\ndef put_in_q(idx, q, evt):\n q.put(torch.IntTensor(2, 2).fill_(idx))\n evt.wait()\n\nq = mp.Queue()\n\nevt = mp.Event()\np = mp.Process(target=put_in_q, args=(0, q, evt))\np.start()\n\nx = q.get()\nevt.set()\nprint(x)\n\np.join()\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "Thank you for the quick reply!", "isAccepted": false, "likes": null, "poster": "florin" } ]
false
Help debugging DenseNet model on CIFAR-10
vision
[ { "contents": "Hi PyTorch community, -Brandon.", "isAccepted": false, "likes": 2, "poster": "bamos" }, { "contents": "One thing is that this: <SCODE>for param_group in optimizer.state_dict()['param_groups']:\n<ECODE> should be replaced with that: <SCODE>for param_group in optimizer.param_groups:\n<ECODE> I know that the first version appears in the ImageNet example, but it no longer works as expected. I can’t see anything wront at a glance, but I’ll try to look more carefully into it sometime.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks for quickly looking! I fixed that. (It doesn’t sound like you expected this to help the convergence – the model’s still not converging as before.)", "isAccepted": false, "likes": null, "poster": "bamos" }, { "contents": "Thanks! My training code does successfully train a known model (VGGnet) on CIFAR-10, so my logic is also that there’s something wrong with my model. However I have also exactly compared my model’s outputs and gradients to the official model. I thought about trying to remove the passthrough connections from my DenseNet implementation to further debug this. However I haven’t tried this yet because I’ve never seen reports of such an architecture (even correctly implemented) converging on CIFAR-10. And if my implementation of this did converge, then it would indicate that there’s a problem of layers that concatenate the input and output. So to directly check if there’s a problem with this kind of operation, I used numdifftools to numerically check the gradients of a single PyTorch layer that concatenated the input to a fully-connected operation. As another idea of breaking the DenseNet into a known architecture, I could start with a ResNet architecture that’s known to converge and then start adding DenseNet features. However these intermediate architecture are not known to converge, so if it doesn’t work, then I won’t know if it’s because of a code bug or something more fundamental.", "isAccepted": false, "likes": 2, "poster": "bamos" }, { "contents": "Adam’s been helping me debug this over Slack today and we’ve solved it! We found a new PyTorch bug with cudnn that comes up with DenseNet-style layers. After fixing this, my DenseNet model converges much better than before, and I’ll update my repo with the current results shortly. My understanding is that Adam will push a short patch to PyTorch master soon. Thanks again for the help, Adam. -Brandon.", "isAccepted": false, "likes": 4, "poster": "bamos" }, { "contents": "The fix is already in master.", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "What was the problem ? Been having some convergence issues with my network which has some layers concatenation just as yours. My problems are not necessarily related to yours, but I’m glad you could figure it out with adam !", "isAccepted": false, "likes": null, "poster": "ClementPinard" }, { "contents": "The problem arised form concatenating outputs of convolution along second dimension. This lead to calling conv’s backward with non-contiguous gradients, and we were overly smart about reusing cuDNN descriptors, so the backend thought that it’s contiguous. The fix is to either disable cuDNN or rebuild pytorch.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" } ]
false
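For anyone hitting the same symptom on an affected build, the workaround mentioned in the answer (disabling cuDNN globally) looks like this:
<SCODE>
import torch.backends.cudnn
torch.backends.cudnn.enabled = False   # fall back to the non-cuDNN kernels
<ECODE>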
Sum of matrices with different dimensions
null
[ { "contents": "I am trying to sum two tensors with dimensions: \na: 10 x 49 x 1024 \nb: 10 x 1024 Thanks", "isAccepted": false, "likes": null, "poster": "lcelona" }, { "contents": "How do you want to add these matrices? They have different numbers of elements.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Don’t hate me. In Tensorflow, the following code works: I’ll replace my code using the following: Correct?", "isAccepted": false, "likes": null, "poster": "lcelona" }, { "contents": "This should do it for you: <SCODE>c = a + b.unsqueeze(1).expand_as(a)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "But your code would work too.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you for your help!", "isAccepted": false, "likes": null, "poster": "lcelona" } ]
false
How to replace Tensor.cmul functionality
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "evcu" }, { "contents": "<SCODE>a = torch.range(0, 99).view(10, 10)\nb = torch.range(0, 99).view(10, 10)\nc = a * b<ECODE>", "isAccepted": false, "likes": 2, "poster": "mrdrozdov" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" } ]
false
How to perform finetuning in PyTorch?
null
[ { "contents": "Can anyone tell me how to do finetuning in pytorch? Suppose, I have loaded the Resnet 18 pretrained model. Now I want to finetune it on my own dataset which contain say 10 classes. How to remove the last output layer and change to as per my requirement?", "isAccepted": false, "likes": 17, "poster": "avijit_dasgupta" }, { "contents": "", "isAccepted": false, "likes": 12, "poster": "apaszke" }, { "contents": "Thanks for your reply.", "isAccepted": false, "likes": null, "poster": "avijit_dasgupta" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "avijit_dasgupta" }, { "contents": "<SCODE>optimizer = torch.optim.SGD([\n {'params': model.conv1.parameters()},\n {'params': model.bn1.parameters()},\n {'params': model.relu.parameters()},\n {'params': model.maxpool.parameters()},\n {'params': model.layer1.parameters()},\n {'params': model.layer2.parameters()},\n {'params': model.layer3.parameters()},\n {'params': model.layer4.parameters()},\n {'params': model.avgpool.parameters()},\n {'params': model.fc.parameters(), 'lr': opt.lr}\n ], lr=opt.lr*0.1, momentum=0.9)\n<ECODE> Is this the correct way of defining different learning rate to different layers of ResNet18? Or is there any other optimized way to do that?", "isAccepted": false, "likes": null, "poster": "avijit_dasgupta" }, { "contents": "This should do it: <SCODE>ignored_params = list(map(id, model.fc.parameters()))\nbase_params = filter(lambda p: id(p) not in ignored_params,\n model.parameters())\n\noptimizer = torch.optim.SGD([\n {'params': base_params},\n {'params': model.fc.parameters(), 'lr': opt.lr}\n ], lr=opt.lr*0.1, momentum=0.9)\n\n<ECODE>", "isAccepted": false, "likes": 44, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "avijit_dasgupta" }, { "contents": "This may also help to learn how to modify layers without changing other layers’ parameters and construct a new model. <SCODE> model = models.vgg16(pretrained=True)\n print list(list(model.classifier.children())[1].parameters())\n mod = list(model.classifier.children())\n mod.pop()\n mod.append(torch.nn.Linear(4096, 2))\n new_classifier = torch.nn.Sequential(*mod)\n print list(list(new_classifier.children())[1].parameters())\n model.classifier = new_classifier\n<ECODE> As for finetuning resnet, it is more easy: <SCODE>model = models.resnet18(pretrained=True)\nmodel.fc = torch.nn.Linear(2048, 2)\n<ECODE>", "isAccepted": false, "likes": 18, "poster": "zhoubinxyz" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "srv902" }, { "contents": "<SCODE>class MyModel(nn.Module):\n def __init__(self, pretrained_model):\n self.pretrained_model = pretrained_model\n self.last_layer = ... 
# create layer\n\n def forward(self, x):\n return self.last_layer(self.pretrained_model(x))\n\npretrained_model = torchvision.models.resnet18(pretrained=True)\nmodel = MyModel(pretrained_model)\n<ECODE>", "isAccepted": false, "likes": 19, "poster": "apaszke" }, { "contents": "Thank you for the help.", "isAccepted": false, "likes": null, "poster": "srv902" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "panovr" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "panovr" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "how can I only replace the last fully-connected layer for fine-tuning and freeze other fully-connected layers? Is the forward the right way to code? Because you give some reference code above: Original fine-tuing code: <SCODE>class FineTuneModel(nn.Module):\n def __init__(self, original_model, arch, num_classes):\n super(FineTuneModel, self).__init__()\n\n if arch.startswith('alexnet') :\n self.features = original_model.features\n self.classifier = nn.Sequential(\n nn.Dropout(),\n nn.Linear(256 * 6 * 6, 4096),\n nn.ReLU(inplace=True),\n nn.Dropout(),\n nn.Linear(4096, 4096),\n nn.ReLU(inplace=True),\n nn.Linear(4096, num_classes),\n )\n self.modelName = 'alexnet'\n elif arch.startswith('resnet') :\n # Everything except the last linear layer\n self.features = nn.Sequential(*list(original_model.children())[:-1])\n self.classifier = nn.Sequential(\n nn.Linear(512, num_classes)\n )\n self.modelName = 'resnet'\n elif arch.startswith('vgg16'):\n self.features = original_model.features\n self.classifier = nn.Sequential(\n nn.Dropout(),\n nn.Linear(25088, 4096),\n nn.ReLU(inplace=True),\n nn.Dropout(),\n nn.Linear(4096, 4096),\n nn.ReLU(inplace=True),\n nn.Linear(4096, num_classes),\n )\n self.modelName = 'vgg16'\n else :\n raise(\"Finetuning not supported on this architecture yet\")\n\n # Freeze those weights\n for p in self.features.parameters():\n p.requires_grad = False\n\n\n def forward(self, x):\n f = self.features(x)\n if self.modelName == 'alexnet' :\n f = f.view(f.size(0), 256 * 6 * 6)\n elif self.modelName == 'vgg16':\n f = f.view(f.size(0), -1)\n elif self.modelName == 'resnet' :\n f = f.view(f.size(0), -1)\n y = self.classifier(f)\n return y<ECODE>", "isAccepted": false, "likes": 1, "poster": "panovr" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I added following lines to imagenet example, using pretrained model of resnet18. <SCODE> for param in model.parameters():\n param.requires_grad = False\n\n # Replace the last fully-connected layer\n # Parameters of newly constructed modules have requires_grad=True by default\n model.fc = torch.nn.Linear(512, 3)\n\n optimizer = torch.optim.SGD(model.fc.parameters(), args.lr,\n momentum=args.momentum,\n weight_decay=args.weight_decay)\n<ECODE> But then I have following error: <SCODE>File \"main.py\", line 234, in train\n loss.backward()\nFile \"/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py\", line 146, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\nRuntimeError: there are no graph nodes that require computing gradients\n<ECODE> I would like to freeze all parameters of original ResNet18 and just learn the last layer with 3 classes. How I should do this correctly? 
Based on information from the forum, this should be the working version.", "isAccepted": false, "likes": null, "poster": "melgor" } ]
false
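A compact sketch combining the pieces from this thread: freeze the pretrained backbone, replace the classifier head, and hand the optimizer only the parameters that still require gradients. The learning rate, momentum and number of classes are placeholder values.
<SCODE>
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False          # freeze the backbone

model.fc = torch.nn.Linear(model.fc.in_features, 3)   # new head trains from scratch

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
<ECODE>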
Runtime error occurs when using .cuda(1)
null
[ { "contents": "Hi all, I try to use pytorch on the 2nd GPU, <SCODE>`a = torch.ones(1).cuda(1)\n b = torch.ones(1).cuda(1)\n c = torch.cat((a,b),0)`\n<ECODE> Then an error comes out: RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/THC/generic/THCTensorCopy.c:65 How can I fix this?", "isAccepted": false, "likes": null, "poster": "big_tree" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "big_tree" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "big_tree" }, { "contents": "Encounter the same problem. I have just updated to the latest version but the error sitll rises. Has it been fixed? If not, is there any workaround?", "isAccepted": false, "likes": null, "poster": "WarBean" }, { "contents": "For now, it seems that I can workaround by using GPU 0.", "isAccepted": false, "likes": null, "poster": "WarBean" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
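One way to keep every tensor involved in an operation on the same non-default GPU is the device context manager, which also makes sure intermediate results are allocated there; whether it sidesteps the old bug discussed above depends on the version, but it is good practice in general.
<SCODE>
import torch

with torch.cuda.device(1):
    a = torch.ones(1).cuda()    # allocated on GPU 1
    b = torch.ones(1).cuda()
    c = torch.cat((a, b), 0)    # output allocated on GPU 1 as well
<ECODE>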
WhiteNoise Layer for DCGAN tutorial
null
[ { "contents": "Hi everyone, Any hint would be welcome and I’m happy to make pull request as an added feature once done.", "isAccepted": false, "likes": null, "poster": "lmoss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I will have another look at the tutorials, thank you in the meantime.", "isAccepted": false, "likes": null, "poster": "lmoss" }, { "contents": "I’m replying to myself, in case anyone else runs into this issue.", "isAccepted": false, "likes": 2, "poster": "lmoss" }, { "contents": "You could also just sample white noise, wrap it in a Variable and add that. I think that would be a more elegant solution.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "supakjk" }, { "contents": "If you want to add the noise in the middle of a network, just use two sequentials.", "isAccepted": false, "likes": 2, "poster": "apaszke" } ]
false
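A small module version of the "sample white noise, wrap it in a Variable and add it" suggestion, so it can be dropped between two `nn.Sequential` blocks; the default standard deviation is an arbitrary choice, and the Variable wrapping follows the API of that PyTorch version.
<SCODE>
import torch.nn as nn
from torch.autograd import Variable

class GaussianNoise(nn.Module):
    def __init__(self, std=0.1):
        super(GaussianNoise, self).__init__()
        self.std = std

    def forward(self, x):
        if not self.training or self.std == 0:
            return x
        noise = Variable(x.data.new(x.size()).normal_(0, self.std))
        return x + noise
<ECODE>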
AttributeError: ‘CudnnRNN’ object has no attribute ‘_nested_output’
null
[ { "contents": "The API is designed this way because I need this API to interact with another code that requires such calls. Simple LSTM (single input with multiple hidden states that are updated) <SCODE>from torch.autograd import Variable\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport numpy as np\nimport matplotlib as mpl\nmpl.use('Agg')\nimport matplotlib.pyplot as plt\n\n\nclass Net(nn.Module):\n def __init__(self, input_size, hidden_size, num_layers, bias, dropout):\n super(Net, self).__init__()\n self.rnn = nn.LSTM(input_size=input_size,\n hidden_size=hidden_size,\n num_layers=num_layers,\n bias=bias,\n dropout=dropout)\n\ndef input_var(i):\n test = np.array([i])\n# print(test.shape)\n # test = np.array([i])\n input_var = test.reshape(1, 1, 1) # (seq_len, batch, input_size)\n input_var = torch.from_numpy(input_var).float()\n return input_var\n\n\ndef label_var(i):\n test = np.array([i*4])\n label_var = test.reshape(1, 1) #\n label_var = torch.from_numpy(label_var).float()\n return label_var\n\n\nclass lstmModule:\n def __init__(self, input_size, hidden_size, num_layers, bias, dropout,\n seq_len, batch_size, meta_lr, n_meta_iter):\n self.input_size = input_size\n self.hidden_size = hidden_size\n self.num_layers = num_layers\n self.bias = bias\n self.dropout = dropout\n self.seq_len = seq_len\n self.batch_size = batch_size\n self.meta_lr = meta_lr\n self.n_meta_iter = n_meta_iter\n\n self.net = Net(input_size=input_size,\n hidden_size=hidden_size,\n num_layers=num_layers,\n bias=bias,\n dropout=dropout)\n\n self.net.cuda()\n\n self.h0 = Variable(torch.randn(self.num_layers,\n self.batch_size,\n self.hidden_size)).cuda()\n\n self.c0 = Variable(torch.randn(self.num_layers,\n self.batch_size,\n self.hidden_size)).cuda()\n\n self.optimizer = optim.Adam(self.net.rnn.parameters(), lr=self.meta_lr)\n\n self.loss_lst = []\n self.loss_lst2 = []\n\n def lstm_forward(self, seq_num, meta_num):\n print('i fed', seq_num)\n def pseudo_loss(output, label):\n return torch.mean(torch.sum(torch.abs(output - label)))\n\n inp = input_var(seq_num)\n input = Variable(inp).cuda()\n\n lab = label_var(seq_num)\n label = Variable(lab).cuda()\n\n if seq_num == 0:\n\n # Ensure clear gradient buffer\n self.optimizer.zero_grad()\n self.loss_tot = [0 for i in range(self.hidden_size)]\n\n # Label concatenation\n self.label_all = label\n\n # LSTM\n output, hn = self.net.rnn(input, (self.h0, self.c0))\n output = 100 * output\n\n op = [output[:, :, i] for i in range(self.hidden_size)]\n\n self.output_all = op\n # print('1 step length:', len(self.output_all))\n self.h, self.c = hn\n print('Done', i)\n else:\n self.label_all = torch.cat((self.label_all, label), 0)\n output, hn = self.net.rnn(input, (self.h, self.c))\n output = 100 * output\n op = [output[:, :, i] for i in range(self.hidden_size)]\n self.h, self.c = hn\n self.output_all = [torch.cat((self.output_all[i], op[i]), 0) for i in range(self.hidden_size)]\n print('Done', i)\n # print('{} step length: {}'.format(i, len(self.output_all)))\n # print('{} step output size: {}'.format(i, output.size()))\n # print(self.output_all[0].size())\n print('-'*10)\n if seq_num == (self.seq_len - 1):\n # Get loss\n self.loss_tot = [self.loss_tot[i] + pseudo_loss(self.output_all[i], self.label_all) for i in range(self.hidden_size)]\n\n # Append loss\n self.loss_lst.append(self.loss_tot[0].cpu().data.numpy()[0])\n self.loss_lst2.append(self.loss_tot[1].cpu().data.numpy()[0])\n\n # Backprop\n print(len(self.loss_tot))\n print(self.loss_tot)\n for k in 
range(self.hidden_size):\n print('backprop', k)\n # print('backprop', k)\n # print(self.loss_tot[k].size())\n self.loss_tot[k].backward(retain_variables=True)\n\n # Update optimizer\n self.optimizer.step()\n\n if seq_num == (self.seq_len - 1) and meta_num == (self.n_meta_iter - 1):\n # print(len(self.loss_lst))\n print('Loss 1', self.loss_tot[0].cpu().data.numpy())\n print('Loss 2', self.loss_tot[1].cpu().data.numpy())\n plt.clf()\n plt.plot()\n plt.title('Loss Curve')\n plt.plot(self.loss_lst, label='Hidden 1')\n plt.plot(self.loss_lst2, label='Hidden 2')\n plt.legend(loc='best')\n plt.savefig('loss.png')\n\n def lstm_check(self, seq_num):\n inp = input_var(seq_num)\n input = Variable(inp).cuda()\n lab = label_var(seq_num)\n label = Variable(lab).cuda()\n\n if seq_num == 0:\n # Ensure clear gradient buffer\n self.optimizer.zero_grad()\n self.loss_tot = [0 for i in range(self.hidden_size)]\n\n # Label concatenation\n self.label_all = label\n\n # LSTM\n output, hn = self.net.rnn(input, (self.h0, self.c0))\n output = 100 * output\n op = [output[:, :, i] for i in range(self.hidden_size)]\n self.output_all = op\n self.h, self.c = hn\n else:\n self.label_all = torch.cat((self.label_all, label), 0)\n output, hn = self.net.rnn(input, (self.h, self.c))\n output = 100 * output\n op = [output[:, :, i] for i in range(self.hidden_size)]\n self.h, self.c = hn\n self.output_all = [torch.cat((self.output_all[i], op[i]), 0) for i\n in\n range(self.hidden_size)]\n\n if seq_num == (self.seq_len - 1):\n print('-' * 10)\n print(self.output_all[0].cpu().data.numpy())\n print(self.label_all.cpu().data.numpy())\n print('-' * 10)\n print(self.output_all[1].cpu().data.numpy())\n print(self.label_all.cpu().data.numpy())\n\nN_meta = 100\nLR_meta = 0.1\nN_seq = 4\nbatch_size = 1\nlayers = 4\ninput_size = 1\nhidden_size = 10\n\n# Initialize and assign class to object once\n# input_size, hidden_size, num_layers, bias, dropout, seq_len, batch_size, meta_lr, n_meta_iter):\nprint 'Initializing LSTM'\nlstm = lstmModule(input_size, hidden_size, layers, True, 0.1, N_seq, batch_size, LR_meta, N_meta)\nprint 'Initialized LSTM'\n\n# Run through meta iterations\nprint 'Training'\nfor j in range(N_meta):\n # Run through each step\n for i in range(N_seq):\n print('i start', i)\n lstm.lstm_forward(i, j)\nprint 'Done Training'\n\n# Check\nprint 'Checking'\nfor i in range(N_seq):\n lstm.lstm_check(i)\nprint 'Done Checking'\n<ECODE> Error: <SCODE>Traceback (most recent call last):\n File \"test.py\", line 202, in <module>\n lstm.lstm_forward(i, j)\n File \"test.py\", line 127, in lstm_forward\n self.loss_tot[k].backward(retain_variables=True)\n File \"/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py\", line 158, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n File \"/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py\", line 208, in backward\n nested_gradients = _unflatten(gradients, self._nested_output)\nAttributeError: 'CudnnRNN' object has no attribute '_nested_output'\n<ECODE> It works in backpropagating one of the hidden state (the first one) but not the second one onwards.", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "We’re aware of that issue. Right now there’s an error in RNNs that occurs when you try to backprop through them multiple times. However, your code doesn’t need to do that, and would be much more efficient if it didn’t. 
Replacing <SCODE>for k in range(self.hidden_size):\n self.loss_tot[k].backward(retain_variables=True)\n<ECODE> with <SCODE>sum(self.loss_tot).backward()\n<ECODE> will be much better. Backproping from a number of losses is equal to backproping from their sum (the gradients are accumulated). Additionally, this will save a lot of computation, because the backward will batch all operations for all the losses and execute them in one go.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks for the prompt reply on this issue. Thankfully your recommendation works. Really appreciate it.", "isAccepted": false, "likes": null, "poster": "ritchieng" } ]
false
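A minimal sketch of the equivalence described in the answer above, written against the current tensor API rather than the old Variable/retain_variables one used in the thread: calling backward() on the sum of several losses accumulates exactly the gradients that separate backward() calls would, in a single pass.
<SCODE>import torch

# Hypothetical toy example (not from the thread): two losses sharing a parameter.
w = torch.randn(3, requires_grad=True)
loss_a = (2 * w).sum()
loss_b = (w ** 2).sum()

# One backward over the sum accumulates d(loss_a)/dw + d(loss_b)/dw.
(loss_a + loss_b).backward()
print(w.grad)  # equals 2 + 2 * w
<ECODE>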
UNet implementation
null
[ { "contents": "I’m still in the process of learning, so I’m not sure my implementation is right. Right now it seems the loss becomes nan quickly, while the network output “pixels” become 0 or 1 seemingly randomly. I’m not sure it is because of my implementation or because of my lack of understanding of the loss (I pass the last layer through a LogSoftmax, and then use NLLLoss2d). <SCODE>class UNetConvBlock(nn.Module):\n def __init__(self, in_size, out_size, kernel_size=3, activation=F.relu):\n super(UNetConvBlock, self).__init__()\n self.conv = nn.Conv2d(in_size, out_size, kernel_size)\n self.conv2 = nn.Conv2d(out_size, out_size, kernel_size)\n self.activation = activation\n\n def forward(self, x):\n out = self.activation(self.conv(x))\n out = self.activation(self.conv2(out))\n\n return out\n\n\nclass UNetUpBlock(nn.Module):\n def __init__(self, in_size, out_size, kernel_size=3, activation=F.relu, space_dropout=False):\n super(UNetUpBlock, self).__init__()\n self.up = nn.ConvTranspose2d(in_size, out_size, 2, stride=2)\n self.conv = nn.Conv2d(in_size, out_size, kernel_size)\n self.conv2 = nn.Conv2d(out_size, out_size, kernel_size)\n self.activation = activation\n\n def center_crop(self, layer, target_size):\n batch_size, n_channels, layer_width, layer_height = layer.size()\n xy1 = (layer_width - target_size) // 2\n return layer[:, :, xy1:(xy1 + target_size), xy1:(xy1 + target_size)]\n\n def forward(self, x, bridge):\n up = self.up(x)\n crop1 = self.center_crop(bridge, up.size()[2])\n out = torch.cat([up, crop1], 1)\n out = self.activation(self.conv(out))\n out = self.activation(self.conv2(out))\n\n return out\n\n\nclass UNet(nn.Module):\n def __init__(self, imsize):\n super(UNet, self).__init__()\n self.imsize = imsize\n\n self.activation = F.relu\n \n self.pool1 = nn.MaxPool2d(2)\n self.pool2 = nn.MaxPool2d(2)\n self.pool3 = nn.MaxPool2d(2)\n self.pool4 = nn.MaxPool2d(2)\n\n self.conv_block1_64 = UNetConvBlock(1, 64)\n self.conv_block64_128 = UNetConvBlock(64, 128)\n self.conv_block128_256 = UNetConvBlock(128, 256)\n self.conv_block256_512 = UNetConvBlock(256, 512)\n self.conv_block512_1024 = UNetConvBlock(512, 1024)\n\n self.up_block1024_512 = UNetUpBlock(1024, 512)\n self.up_block512_256 = UNetUpBlock(512, 256)\n self.up_block256_128 = UNetUpBlock(256, 128)\n self.up_block128_64 = UNetUpBlock(128, 64)\n\n self.last = nn.Conv2d(64, 2, 1)\n\n\n def forward(self, x):\n block1 = self.conv_block1_64(x)\n pool1 = self.pool1(block1)\n\n block2 = self.conv_block64_128(pool1)\n pool2 = self.pool2(block2)\n\n block3 = self.conv_block128_256(pool2)\n pool3 = self.pool3(block3)\n\n block4 = self.conv_block256_512(pool3)\n pool4 = self.pool4(block4)\n\n block5 = self.conv_block512_1024(pool4)\n\n up1 = self.up_block1024_512(block5, block4)\n\n up2 = self.up_block512_256(up1, block3)\n\n up3 = self.up_block256_128(up2, block2)\n\n up4 = self.up_block128_64(up3, block1)\n\n return F.log_softmax(self.last(up4))<ECODE>", "isAccepted": false, "likes": 4, "poster": "GPistre" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Indeed, there’s a typo. A simpler fully convolutional network will also eventually “converge” to nan, albeit more slowly (a few epochs). I was using a similar setup with theano with decent results, so I think my data is ok, but I’m not sure how to troubleshoot. 
I was using the Adam optimizer, but I’ll try with SGD and a small learning rate and go from there if the results are promising.", "isAccepted": false, "likes": null, "poster": "GPistre" }, { "contents": "That’s weird, are you sure the networks are the same and that data is preprocessed correctly? It’s important to find the source of NaNs, that should not happen.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "It appears there was a size mismatch I overlooked between the network output and the target. (8x2x68x68 vs 8x70x70) I think there might be a bug there as it gives an error when the tensors are on cpu, but is silent and outputs something when they are on gpu. Do you want me to open an issue ? That might be where the nans were coming from. I’ll let it train overnight and see. To reproduce the error: <SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\nloss = nn.NLLLoss2d()\n\ntarget = Variable(torch.Tensor(8, 70, 70).random_(0, 1)).long()\noutput = Variable(torch.randn(8, 2, 68, 68))\n\nloss(output, target)\n<ECODE> <SCODE>loss(output.cuda(), target.cuda())\n<ECODE>", "isAccepted": false, "likes": null, "poster": "GPistre" }, { "contents": "Yes please! It seems that this loss is lacking some shape checks on the GPU indeed.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "last_layer <SCODE>Variable containing:\n(0 ,0 ,.,.) = \n 3.5879 3.6678 3.8380 ... 3.1548 3.0576 2.9584\n 3.4753 3.7363 3.8944 ... 2.9736 3.0051 2.9889\n 3.3298 3.3160 3.5382 ... 2.8276 2.8111 2.8584\n ... ⋱ ... \n 3.2416 3.1960 3.2502 ... 90.0304 98.9006 98.5473\n 3.2719 3.2843 3.2724 ... 39.1980 67.7482 73.4172\n 3.2535 3.3880 3.3061 ... 13.5371 25.7164 37.9838\n\n(0 ,1 ,.,.) = \n -3.7768 -3.8683 -4.0629 ... -3.2815 -3.1703 -3.0568\n -3.6481 -3.9466 -4.1274 ... -3.0742 -3.1102 -3.0917\n -3.4816 -3.4659 -3.7200 ... -2.9072 -2.8883 -2.9424\n ... ⋱ ... \n -3.3808 -3.3285 -3.3906 ... -122.8841 -137.5700 -122.2672\n -3.4154 -3.4296 -3.4160 ... -48.6680 -82.5093 -91.6747\n -3.3943 -3.5482 -3.4545 ... -17.6337 -33.1317 -46.8320\n[torch.cuda.FloatTensor of size 1x2x68x68 (GPU 0)]\n<ECODE> F.log_softmax(last_layer) <SCODE>Variable containing:\n(0 ,0 ,.,.) = \n -0.0006 -0.0005 -0.0004 ... -0.0016 -0.0020 -0.0024\n -0.0008 -0.0005 -0.0003 ... -0.0024 -0.0022 -0.0023\n -0.0011 -0.0011 -0.0007 ... -0.0032 -0.0033 -0.0030\n ... ⋱ ... \n -0.0013 -0.0015 -0.0013 ... nan nan nan\n -0.0012 -0.0012 -0.0012 ... 0.0000 0.0000 0.0000\n -0.0013 -0.0010 -0.0012 ... 0.0000 0.0000 0.0000\n\n(0 ,1 ,.,.) = \n -7.3653 -7.5366 -7.9013 ... -6.4379 -6.2299 -6.0176\n -7.1242 -7.6833 -8.0221 ... -6.0501 -6.1175 -6.0829\n -6.8125 -6.7830 -7.2590 ... -5.7381 -5.7027 -5.8038\n ... ⋱ ... \n -6.6237 -6.5260 -6.6420 ... -inf -inf -inf\n -6.6886 -6.7152 -6.6896 ... -87.8659 -inf -inf\n -6.6491 -6.9371 -6.7618 ... -31.1708 -58.8481 -84.8159\n[torch.cuda.FloatTensor of size 1x2x68x68 (GPU 0)]\n<ECODE> F.log_softmax(last_layer.cpu()) <SCODE>Variable containing:\n(0 ,0 ,.,.) = \n -0.0006 -0.0005 -0.0004 ... -0.0016 -0.0020 -0.0024\n -0.0008 -0.0005 -0.0003 ... -0.0024 -0.0022 -0.0023\n -0.0011 -0.0011 -0.0007 ... -0.0032 -0.0033 -0.0030\n ... ⋱ ... \n -0.0013 -0.0015 -0.0013 ... 0.0000 0.0000 0.0000\n -0.0012 -0.0012 -0.0012 ... 0.0000 0.0000 0.0000\n -0.0013 -0.0010 -0.0012 ... -0.0000 0.0000 0.0000\n\n(0 ,1 ,.,.) = \n -7.3653 -7.5366 -7.9013 ... -6.4379 -6.2299 -6.0176\n -7.1242 -7.6833 -8.0221 ... -6.0501 -6.1175 -6.0829\n -6.8125 -6.7830 -7.2590 ... 
-5.7381 -5.7027 -5.8038\n ... ⋱ ... \n -6.6237 -6.5260 -6.6420 ... -212.9145 -236.4706 -220.8145\n -6.6886 -6.7152 -6.6896 ... -87.8659 -150.2575 -165.0919\n -6.6491 -6.9371 -6.7618 ... -31.1708 -58.8481 -84.8159\n[torch.FloatTensor of size 1x2x68x68]<ECODE>", "isAccepted": false, "likes": null, "poster": "GPistre" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "lopuhin" }, { "contents": "Do you have any idea what the problem could be? Do you think it maybe because of missing Batch Norm layers?", "isAccepted": false, "likes": null, "poster": "bodokaiser" }, { "contents": "One thing that is different in my implementation is that I use upsampling instead of transposed convolutions, it worked significantly better in my case. Batch normalization speeds up convergence but is by no means essential, it worked fine without it too. Simpler models also gave okayish results, but UNet was consistently better - in this task the metric was intersection over union, and simple models were giving results in the 0.2-0.3 range (average over 10 classes), while UNet gave 0.4+ without too much tuning.", "isAccepted": false, "likes": 1, "poster": "lopuhin" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "bodokaiser" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Saeed_Izadi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "fmassa" }, { "contents": "The point is that the problem persists even after removing batchnorm. Additionally, I’m using batch size of size 16 in my recent efforts . The same model is working under theano/lasagna implementation!", "isAccepted": false, "likes": null, "poster": "Saeed_Izadi" }, { "contents": "Hey, I already commented on this issue on GitHub but got some new ideas today which might be worth to check out: Good luck!", "isAccepted": false, "likes": null, "poster": "bodokaiser" }, { "contents": "If you share the lasagne and the pytorch code, I can have a look", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "for the 1. I think it is not the problem, because I’m normalizing the groundtuth to the range of {0,1} and the prediction after sigmoid is also in the range [0,1]. The BCELoss also is supposed to work with this range. For 2, yeah maybe there is a problem with ConvTranspose2d. I will replace it with Upsampling and update you.", "isAccepted": false, "likes": null, "poster": "Saeed_Izadi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Saeed_Izadi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" } ]
false
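One of the replies above reports better results from upsampling than from transposed convolutions. Below is a hedged sketch of what such an up-block could look like; the class name and channel sizes are illustrative, and it assumes the encoder uses padded convolutions so the skip tensor already matches the upsampled spatial size.
<SCODE>import torch
import torch.nn as nn

class UpBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.reduce = nn.Conv2d(in_ch, out_ch, kernel_size=1)               # halve the channels
        self.conv = nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1)

    def forward(self, x, skip):
        x = self.reduce(self.up(x))      # upsample, then reduce channels
        x = torch.cat([x, skip], dim=1)  # concatenate the encoder skip connection
        return torch.relu(self.conv(x))

block = UpBlock(128, 64)
out = block(torch.randn(1, 128, 16, 16), torch.randn(1, 64, 32, 32))  # -> (1, 64, 32, 32)
<ECODE>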
Dynamic parameter declaration in forward function
null
[ { "contents": "Declare the parameters in the forward function seems to be a solution because the intermediate results are already known at that point, but the thing is this might make the parameters be declared every time we run the forward function. <SCODE>def conv_relu(input):\n # Create variable named \"weights\", its shape can be infered from input.\n kernel_shape = (64, input.size()[0],3,3)\n weights = nn.Module.get_parameter(\"weights\", kernel_shape,\n initializer='uniform')\n conv = F.conv2d(input, weights)\n return F.relu(conv)\n<ECODE>", "isAccepted": false, "likes": 6, "poster": "ypxie" }, { "contents": "<SCODE>class MyModule(nn.Module):\n def __init__(self):\n # you need to register the parameter names earlier\n self.register_parameter('weight', None)\n\n def forward(self, input):\n if self.weight is None:\n self.weight = nn.Parameter(torch.randn(input.size()))\n return self.weight @ input\n<ECODE>", "isAccepted": false, "likes": 10, "poster": "apaszke" }, { "contents": "In this way, model.cuda() which is usually called before forward() might not work properly. if one more if_else is added to check use_cuda, the code can be unnecessarily long.", "isAccepted": false, "likes": 3, "poster": "ypxie" }, { "contents": "Good point. This would be better then: <SCODE>class MyModule(nn.Module):\n def __init__(self):\n # you need to register the parameter names earlier\n self.register_parameter('weight', None)\n\n def reset_parameters(self, input):\n self.weight = nn.Parameter(input.new(input.size()).normal_(0, 1)) \n\n def forward(self, input):\n if self.weight is None:\n self.reset_parameters(input)\n return self.weight @ input\n<ECODE>", "isAccepted": false, "likes": 9, "poster": "apaszke" }, { "contents": "Thanks for your reply, but I didn’t get why would this help? It seems to me register_parameter just register a None parameter to the parameters list. How could model.cuda() affect a None parameter?", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "How about the following? <SCODE>def cuda(self, device_id=None):\n \"\"\"Moves all model parameters and buffers to the GPU.\n\n Arguments:\n device_id (int, optional): if specified, all parameters will be\n copied to that device\n \"\"\"\n self._cuda = Ture\n self._device_id = device_id\n return self._apply(lambda t: t.cuda(device_id))\n\ndef cpu(self, device_id=None):\n \"\"\"Moves all model parameters and buffers to the CPU.\"\"\"\n self._cuda = False\n return self._apply(lambda t: t.cpu())\n\ndef get_parameter(self, name, shape):\n if not name in self._parameters.keys():\n self._parameters[name] = nn.Parameter(torch.randn(shape))\n var = self._parameters[name] \n fn = lambda t: t.cuda(self._device_id) if self._cuda else lambda t: t.cpu()\n var.data = fn(var.data)\n if var.grad is not None: \n var.grad.data = fn(var.grad.data)\n return var\n\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Also, don’t fiddle with internal fields. It’s not a good idea. 
They’re subject to change without notice.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Ben_Usman" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Amir_Rosenfeld" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "quoniammm" }, { "contents": "<SCODE>self.title_conv = nn.Sequential(\n nn.Conv1d(),\n nn.ReLU(),\n# the kernel_size is changed because the input's length of conv layer is Changeable.\n# Therefore, kernel_size was computed in forward function.\n# Then i want pass it to maxpool layer.How can I implement it?\n nn.MaxPool1d(kernel_size=)\n )\n<ECODE>", "isAccepted": false, "likes": null, "poster": "quoniammm" }, { "contents": "I also have the same problem, since I would like to adapt different kernel sizes according to different input data sizes. Did you solve your problem?", "isAccepted": false, "likes": null, "poster": "karlTUM" }, { "contents": "<SCODE>class DynamicLinear(nn.Module):\n def __init__(self, output_dim):\n super(DynamicLinear, self).__init__()\n self.output_dim = output_dim\n\n def forward(self, inputs):\n if not hasattr(self, '_linear'):\n input_dim = inputs.shape[-1]\n self._linear = nn.Linear(input_dim, self.output_dim)\n return self._linear(inputs)<ECODE>", "isAccepted": false, "likes": null, "poster": "Dave_Kielpinski" }, { "contents": "won’t work properly . same issue about model.cuda()", "isAccepted": false, "likes": null, "poster": "OrNot" }, { "contents": "How to set the bias? I mean F.conv2d using default bias or not? how to make nn.Conv2d dyamic weights in forward and also same effect as calling F.conv2d without bais?", "isAccepted": false, "likes": null, "poster": "jinfagang" }, { "contents": "I think the above therefore results in the parameter not being found in the .parameters() method when an object is instantiated from this class. What is the up-to-date way to do this? (Apologies if I am mistaken and have the wrong end of the stick here).", "isAccepted": false, "likes": null, "poster": "ilanfri" } ]
false
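Putting the pieces of this thread together, here is a self-contained sketch of the lazy-initialization pattern; the super().__init__() call, the initializer, and the class name are filled in by me, and recent PyTorch also ships nn.LazyLinear, which covers the common case.
<SCODE>import torch
import torch.nn as nn

class DeferredLinear(nn.Module):
    def __init__(self, out_features):
        super().__init__()
        self.out_features = out_features
        self.register_parameter('weight', None)  # reserve the name up front

    def forward(self, x):
        if self.weight is None:
            # Create the weight on the first forward pass, on the same device
            # and dtype as the input, so earlier .cuda()/.to() calls are honored.
            w = x.new_empty(self.out_features, x.size(-1)).normal_(0, 0.02)
            self.weight = nn.Parameter(w)
        return x @ self.weight.t()

layer = DeferredLinear(8)
out = layer(torch.randn(4, 16))  # the weight is created here as an 8x16 Parameter
<ECODE>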
How to do mini batch with dynamic computation graph
null
[ { "contents": "Hi all, I am new to framework with dynamic computation graph. I search everywhere but I couldn’t find a reference about how to implement mini-batch with RNN or even tree LSTM with varying length input. So I guess my general problem is how to do mini batch with dynamic computation graph. Thanks.", "isAccepted": false, "likes": 3, "poster": "shijie-wu" }, { "contents": "TreeRNNs are harder. I’ll add an example soon that does this, but the general idea for TreeRNNs is that batching is up to you as the user, and you should split and concatenate when you need to. So if you use a binary tree structure, you can represent it as a shift-reduce parser (see the SPINN paper from Bowman et al) that means you can process multiple trees in parallel by doing preprocessing like this: <SCODE>input:\n tree1: ((ab)c)\n tree2: (d(ef))\npreprocessed input:\n 1 2 3 4 5\n tree1: SHIFT(a) SHIFT(b) REDUCE SHIFT(c) REDUCE\n tree2: SHIFT(d) SHIFT(e) SHIFT(f) REDUCE REDUCE\n<ECODE> and then using advanced indexing to copy all the tokens for SHIFT at each timestep in parallel while concatenating the stack representations for batched REDUCE. Sorry if this is confusing, I promise an example will be up soon. I would add that PyTorch is impressively fast on TreeRNNs even without batching.", "isAccepted": false, "likes": 4, "poster": "jekbradbury" }, { "contents": "Thank you for the pointer. So it’s actually up to the user to design batching mechanism. Thank you all for building pytorch with amazing flexibility and great tutorial. Just curious, is there any plan to release a technical report about the performance of pytorch compared to other framework with support of dynamic computation graph?", "isAccepted": false, "likes": null, "poster": "shijie-wu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "That make sense. Thanks for the great work.", "isAccepted": false, "likes": null, "poster": "shijie-wu" }, { "contents": "examples/snli/train.py", "isAccepted": false, "likes": null, "poster": "rituk" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "supakjk" }, { "contents": "Yes, this kind of padding is a stopgap until PyTorch has full masked RNN support, which is on its way.", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "What if we are facing arbitrary trees rather than binary ones? This can correspond to the childsum treelstm where each node can have different number of children. Is it still possible to batch with the shift-reduce strategy?", "isAccepted": false, "likes": null, "poster": "xuehy" }, { "contents": "It’s possible but a lot harder.", "isAccepted": false, "likes": null, "poster": "jekbradbury" }, { "contents": "I hear that tensorflow-fold is able to batch trees of arbitrary shapes. Is there similar implementations in pytorch? Why no body is trying to make a tool?", "isAccepted": false, "likes": null, "poster": "xuehy" }, { "contents": "for what it’s worth, I could install torchtext using: pip install git+https://github.com/pytorch/text.git (in a virtualenv environment, otherwise try with --user)", "isAccepted": false, "likes": null, "poster": "Andre_Holzner" }, { "contents": "whats an example for a feedforward NN or CNN? I try to index my torch arrays of data and it says I can’t/shouldn’t be using numpy to index things. 
As in: <SCODE>def get_batch(X,Y,M):\n N = len(Y)\n valid_indices = np.array( range(N) )\n batch_indices = np.random.choice(valid_indices,size=M,replace=False)\n batch_xs = X[batch_indices,:]\n batch_ys = Y[batch_indices]\n return batch_xs, batch_ys\n<ECODE> where X and Y are torch tensors (or variables).", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "I think my code runs now, but it seems there has to be a better way than doing: <SCODE>def get_batch2(X,Y,M,dtype):\n X,Y = X.data.numpy(), Y.data.numpy()\n N = len(Y)\n valid_indices = np.array( range(N) )\n batch_indices = np.random.choice(valid_indices,size=M,replace=False)\n batch_xs = torch.FloatTensor(X[batch_indices,:]).type(dtype)\n batch_ys = torch.FloatTensor(Y[batch_indices]).type(dtype)\n return Variable(batch_xs, requires_grad=False), Variable(batch_ys, requires_grad=False)\n<ECODE> <SCODE> #valid_indices = torch.arange(0,N).numpy()\n #valid_indices = np.array( range(N) )\n #batch_indices = np.random.choice(valid_indices,size=M,replace=False)\n #indices = torch.LongTensor(batch_indices)\n #batch_xs, batch_ys = torch.index_select(X_mdl, 0, indices), torch.index_select(y, 0, indices)\n #batch_xs,batch_ys = torch.index_select(X_mdl, 0, indices), torch.index_select(y, 0, indices)\n<ECODE> wonder if this silly moving from numpy to torch is actually slowing my code down! I hope not.", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" } ]
false
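For the plain feed-forward batching question at the end of the thread, a sketch of sampling a minibatch without the numpy round trip, assuming X is an (N, D) tensor and Y an (N,) tensor on the same device:
<SCODE>import torch

def get_batch(X, Y, batch_size):
    # Random permutation on the same device as the data, then advanced indexing.
    idx = torch.randperm(X.size(0), device=X.device)[:batch_size]
    return X[idx], Y[idx]

X, Y = torch.randn(100, 5), torch.randn(100)
xb, yb = get_batch(X, Y, 16)  # xb: (16, 5), yb: (16,)
<ECODE>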
Module.zero_grad() with requires_grad=False for some Parameter?
null
[ { "contents": "It seems that Module.zero_grad() does not like parameters with no grad (source below crashes), but what is the proper way to have model parameters which should not be touched by the backprop, but should benefit from Module’s comfort (cuda() etc.)? <SCODE>import torch\n\nfrom torch import Tensor\nfrom torch.nn.parameter import Parameter\nfrom torch.nn import Module\n\nclass Blah(Module):\n def __init__(self, dim):\n super(Blah, self).__init__()\n self.s = Parameter(torch.rand(1, dim), requires_grad = False)\n self.t = Parameter(torch.rand(1, dim))\n\nblah = Blah(10)\n\nblah.zero_grad()<ECODE>", "isAccepted": false, "likes": null, "poster": "FrancoisFleuret" }, { "contents": "This should do the trick. <SCODE>import torch\n\nfrom torch import Tensor\nfrom torch.nn.parameter import Parameter\nfrom torch.autograd import Variable\nfrom torch.nn import Module\n\nclass Blah(Module):\n def __init__(self, dim):\n super(Blah, self).__init__()\n self.s = Variable(torch.rand(1, dim))\n self.t = Parameter(torch.rand(1, dim))\n\nblah = Blah(10)\n\nblah.zero_grad()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "Using a Variable was my first choice, but then Module.cuda() does not propagate to it. With the use of Variable you suggest, blah.cuda() will convert blah.t.data to torch.cuda.FloatTensor as expected, but will leave blah.s.data unchanged. Wouldn’t it be more consistent that Module.zero_grad() deal with requires_grad=False? <SCODE>def zero_grad(self):\n \"\"\"Sets gradients of all model parameters to zero.\"\"\"\n for p in self.parameters():\n if hasattr(p, 'grad'): p.grad.data.zero_()<ECODE>", "isAccepted": false, "likes": null, "poster": "FrancoisFleuret" }, { "contents": "you can overlod the cuda function and call default cuda method inside it. <SCODE>def cuda(self):\n super(Blah, self).cuda()\n self.s.cuda()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "But ain’t there other functionalities that I will have to fix also ? Persistence in particular ?", "isAccepted": false, "likes": null, "poster": "FrancoisFleuret" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "I must be missing something, I thought registe_buffer was to specify persistent fields, not the ones to be “cudaified”. In any case, I presume it should be self.register(‘s’, self.s) ? The following , with s a Variable <SCODE>import torch\n\nfrom torch import Tensor\nfrom torch.nn.parameter import Parameter\nfrom torch.nn import Module\nfrom torch.autograd import Variable\n\nclass Blah(Module):\n def __init__(self, dim):\n super(Blah, self).__init__()\n # self.s = Parameter(torch.rand(dim), requires_grad = False)\n self.s = Variable(torch.rand(dim))\n self.register_buffer('s', self.s)\n self.t = Parameter(torch.rand(dim))\n\nblah = Blah(10)\n\nblah.zero_grad()\n\nprint('s', type(blah.s.data))\nprint('t', type(blah.t.data))\n\nif torch.cuda.is_available():\n blah.cuda()\n print('s', type(blah.s.data))\n print('t', type(blah.t.data))\n<ECODE> does print <SCODE>s <class 'torch.FloatTensor'>\nt <class 'torch.FloatTensor'>\ns <class 'torch.FloatTensor'>\nt <class 'torch.cuda.FloatTensor'>\n<ECODE> And making s a Tensor instead of a Variable does not help.", "isAccepted": false, "likes": null, "poster": "FrancoisFleuret" }, { "contents": "What about this. 
<SCODE>import torch\n\nfrom torch import Tensor\nfrom torch.nn.parameter import Parameter\nfrom torch.nn import Module\nfrom torch.autograd import Variable\n\nclass Blah(Module):\n def __init__(self, dim):\n super(Blah, self).__init__()\n self.register_buffer('s', torch.rand(dim))\n self.t = Parameter(torch.rand(dim))\n\nblah = Blah(10)\n\nblah.zero_grad()\n\nprint('s', type(blah.s))\nprint('t', type(blah.t.data))\n\nif torch.cuda.is_available():\n blah.cuda()\n print('s', type(blah.s))\n print('t', type(blah.t.data))\n<ECODE> Which outputs the following. <SCODE>/home/atcold/anaconda3/bin/python /home/atcold/Work/buffer.py\ns <class 'torch.FloatTensor'>\nt <class 'torch.FloatTensor'>\ns <class 'torch.cuda.FloatTensor'>\nt <class 'torch.cuda.FloatTensor'>\n\nProcess finished with exit code 0\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "Atcold" }, { "contents": "Shouldn’t you use ?", "isAccepted": false, "likes": null, "poster": "csarofeen" }, { "contents": "I’d also recommend using a buffer for that. However it is a bug, and it should be fixed anyway.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
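A quick check of the buffer behaviour described above, on a current build: buffers follow .cuda()/.to() like parameters do, but they are not returned by parameters(), so zero_grad() and the optimizer never touch them.
<SCODE>import torch
import torch.nn as nn

class Blah(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.register_buffer('s', torch.rand(dim))  # fixed; moves with .cuda()/.to()
        self.t = nn.Parameter(torch.rand(dim))      # trainable

m = Blah(10)
print([n for n, _ in m.named_parameters()])  # ['t']
print([n for n, _ in m.named_buffers()])     # ['s']
m.zero_grad()                                # fine: only 't' is a parameter
<ECODE>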
Error on transpose and view
null
[ { "contents": "<SCODE>import torch\nimport torch.nn.functional as F\ndef softmax(input, axis=1):\n \"\"\" \n Apply softmax on input at certain axis.\n \n Parammeters:\n ----------\n input: Tensor (N*L or rank>2)\n axis: the axis to apply softmax\n \n Returns: Tensor with softmax applied on that dimension.\n \"\"\"\n \n input_size = input.size()\n \n trans_input = input.transpose(axis, len(input_size)-1)\n trans_size = trans_input.size()\n\n input_2d = trans_input.view(-1, trans_size[-1])\n soft_max_2d = F.softmax(input_2d)\n \n soft_max_nd = soft_max_2d.view(*trans_size)\n \n return soft_max_nd.transpose(axis, len(input_size)-1)\n\naa= torch.randn(3,4,4)\nprint aa\n\nsoft_1 = softmax(aa, axis = 1)\nprint soft_1\n<ECODE> gives the following error: <SCODE>File \"/local/anaconda2/lib/python2.7/site-packages/torch/tensor.py\", line 214, in view\n raise ValueError(\"input should be contiguous\")\nValueError: input should be contiguous\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "You can try this: <SCODE>input_2d = trans_input.contiguous().view(-1, trans_size[-1])\n<ECODE>", "isAccepted": false, "likes": null, "poster": "longcw" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
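A stripped-down sketch of the fix: transpose() returns a non-contiguous view, so call .contiguous() before view() (on recent versions .reshape or the dim argument of F.softmax avoids the dance entirely).
<SCODE>import torch
import torch.nn.functional as F

x = torch.randn(3, 4, 4)
t = x.transpose(1, 2)                       # non-contiguous view
flat = t.contiguous().view(-1, t.size(-1))  # works
# t.view(-1, t.size(-1))                    # would raise: tensor is not contiguous
soft = F.softmax(x, dim=1)                  # recent API: softmax along any axis directly
<ECODE>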
PyTorch Resources
Site Feedback
[ { "contents": "PyTorch is relatively new compared to other frameworks and I had issues finding more guides and tutorials.", "isAccepted": false, "likes": 11, "poster": "ritchieng" }, { "contents": "Excellent. Will look into that and hopefully contribute in the future, although I am still using both PyTorch and Tensorflow for my projects.", "isAccepted": false, "likes": 1, "poster": "Ismail_Elezi" }, { "contents": "In less than 1 week I’ve added more than 20 implementations and it’s by far the largest list currently out there on PyTorch. I will grow this list with at least an update every 3 days. I’m a proponent of reproducible research so the section on paper implementation would be rapidly growing.", "isAccepted": false, "likes": null, "poster": "ritchieng" } ]
false
Updating pytorch versions?
null
[ { "contents": "Thanks", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "How can I check the version of pytorch if installed with pip?", "isAccepted": false, "likes": 3, "poster": "kevinzakka" } ]
false
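On the last question, the installed version is exposed on the package itself (pip show torch prints it too):
<SCODE>import torch
print(torch.__version__)  # prints the installed version string
<ECODE>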
What happened to documentation of nn.Sequential()?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "We recommend implementing custom containers if you need anything more complex than passing the data forward (including reshaping). See how torchvision models are implemented.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Sequential docs are now added", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
PReLU and Conv3d bugs with 5d Tensors
null
[ { "contents": "Here is an example of torch.nn.PReLU(num_parameters) acting on a 5d Tensor: <SCODE>out = nn.PReLU(8)(Variable(torch.rand(2,8,16,16,16)))\n<ECODE> The error looks like: <SCODE>RuntimeError: wrong number of input planes at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.8_1486040640754/work/torch/lib/THNN/generic/PReLU.c:49\n<ECODE> Here is an example of torch.nn.Conv3d(bias=False) acting on a 5d Tensor: <SCODE>out = nn.Conv3d(8,16, kernel_size=3, padding=1, bias=False)(Variable(torch.rand(2,8,16,16,16)))\n<ECODE> The error looks like: <SCODE>TypeError: FloatVolumetricConvolutionMM_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor, NoneType, torch.FloatTensor, int, int, int, int, int, int, int, int, int), but expected (int state, torch.FloatTensor input, torch.FloatTensor output, torch.FloatTensor weight, torch.FloatTensor bias, torch.FloatTensor finput, int kT, int kW, int kH, int dT, int dW, int dH, int pT, int pW, int pH)\n<ECODE> Lua Torch had similar problems with 5d Tensors, so maybe this is a back end issue.", "isAccepted": false, "likes": null, "poster": "abweiss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
Number of inputs for a linear layer
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "SenJia" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "SenJia" } ]
false
CUDA Out of Memory
null
[ { "contents": "Barely a few steps through my forward propagation for an LSTM I received an error: <SCODE>THCudaCheck FAIL file=/home/soumith/local/builder/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory\nTraceback (most recent call last):\n File \"external_script.py\", line 1002, in <module>\n final_loss = run()\n File \"external_script.py\", line 584, in run\n optimizer_iter_num, feature, x_dim)\n File \"/home/ubuntu/lstm_special/rnn.py\", line 74, in lstm_forward\n output, hn = self.net.rnn(input, (self.h, self.c))\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py\", line 210, in __call__\n result = self.forward(*input, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/modules/rnn.py\", line 79, in forward\n return func(input, self.all_weights, hx)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py\", line 228, in forward\n return func(input, *fargs, **fkwargs)\n File \"/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py\", line 202, in _do_forward\n flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)\n File \"/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py\", line 218, in forward\n result = self.forward_extended(*nested_tensors)\n File \"/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/rnn.py\", line 180, in forward_extended\n cudnn.rnn.forward(self, input, hx, weight, output, hy)\n File \"/usr/local/lib/python2.7/dist-packages/torch/backends/cudnn/rnn.py\", line 257, in forward\n fn.workspace = torch.cuda.ByteTensor(workspace_size.value)\nRuntimeError: cuda runtime error (2) : out of memory at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.cu:66\n<ECODE> I’ve a 12gb Tesla K80 NVIDIA GPU so this shouldn’t be an issue I believe it has something to do with the variables. 12GB RAM. Checking CUDA is working. <SCODE>>>> torch.cuda.is_available()\nTrue\n>>> torch.cuda.current_stream()\n<torch.cuda.Stream device=0 cuda_stream=0x0>\n>>> torch.cuda.device_count()\n1L\n<ECODE> NVIDIA SMI <SCODE>+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 367.57 Driver Version: 367.57 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. 
|\n|===============================+======================+======================|\n| 0 Tesla K80 Off | 0000:00:1E.0 Off | 0 |\n| N/A 51C P8 27W / 149W | 2MiB / 11439MiB | 0% Default |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n<ECODE> Script <SCODE>from torch.autograd import Variable\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport matplotlib as mpl\nimport numpy as np\n\ntorch.manual_seed(0)\n\n\nclass Net(nn.Module):\n def __init__(self, input_size, hidden_size, num_layers, bias, dropout):\n super(Net, self).__init__()\n self.rnn = nn.LSTM(input_size=input_size,\n hidden_size=hidden_size,\n num_layers=num_layers,\n bias=bias,\n dropout=dropout)\n\n\nclass lstmModule:\n def __init__(self, input_size, hidden_size, num_layers, bias, dropout,\n seq_len, batch_size, meta_lr, n_meta_iter):\n self.input_size = input_size\n self.hidden_size = hidden_size\n self.num_layers = num_layers\n self.bias = bias\n self.dropout = dropout\n self.seq_len = seq_len\n self.batch_size = batch_size\n self.meta_lr = meta_lr\n self.n_meta_iter = n_meta_iter\n\n self.net = Net(input_size=input_size,\n hidden_size=hidden_size,\n num_layers=num_layers,\n bias=bias,\n dropout=dropout)\n\n self.net.cuda()\n\n self.h0 = Variable(torch.randn(self.num_layers,\n self.batch_size,\n self.hidden_size)).cuda()\n\n self.c0 = Variable(torch.randn(self.num_layers,\n self.batch_size,\n self.hidden_size)).cuda()\n\n self.optimizer = optim.Adam(self.net.rnn.parameters(), lr=self.meta_lr)\n\n self.loss_lst = []\n self.loss_lst2 = []\n\n def lstm_forward(self, seq_num, inp, x_dim):\n inp = inp.reshape(1, 1, inp.shape[0]*inp.shape[1])\n inp = torch.from_numpy(inp).float()\n input = Variable(inp).cuda()\n\n if seq_num == 0:\n # Ensure clear gradient buffer\n self.optimizer.zero_grad()\n self.loss_tot = [0 for i in range(self.hidden_size)]\n\n # LSTM\n output, hn = self.net.rnn(input, (self.h0, self.c0))\n output = torch.abs(2 * output)\n op = [output[:, :, i] for i in range(self.hidden_size)]\n self.output_all = op\n self.h, self.c = hn\n return output.cpu().data.numpy()\n else:\n output, hn = self.net.rnn(input, (self.h, self.c))\n output = torch.abs(2 * output)\n op = [output[:, :, i] for i in range(self.hidden_size)]\n self.h, self.c = hn\n self.output_all = [torch.cat((self.output_all[i], op[i]), 0) for i in range(self.hidden_size)]\n return output.cpu().data.numpy()\n\n def lstm_update(self, lab):\n def pseudo_loss(output, label):\n # print('output size', output.size())\n # print('label size', label.size())\n # return torch.mean(torch.abs(output*label))\n return torch.mean(output * label)\n\n lab = torch.from_numpy(lab).float()\n self.label = Variable(lab).cuda()\n\n # Get loss\n self.loss_tot = [\n self.loss_tot[i] + pseudo_loss(self.output_all[i], self.label[:, i]) for i in range(self.hidden_size)]\n\n # Append loss\n self.loss_lst.append(self.loss_tot[0].cpu().data.numpy()[0])\n self.loss_lst2.append(self.loss_tot[1].cpu().data.numpy()[0])\n\n # Backprop\n sum(self.loss_tot).backward()\n\n # Update optimizer\n self.optimizer.step()\n\n return self.loss_lst, self.loss_lst2\n<ECODE>", "isAccepted": false, "likes": 4, "poster": 
"ritchieng" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "If you know you’re not going to use the intermediate Variables for backpropagation, it’s safer to keep references to tensors only.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "What do you mean by keeping references to tensors only?", "isAccepted": false, "likes": 1, "poster": "ritchieng" }, { "contents": "When you do this: <SCODE>self.output_all = op\n<ECODE> <SCODE>self.output_all = [o.data for o in op]\n<ECODE> you’ll only save the tensors i.e. the final values.", "isAccepted": false, "likes": 9, "poster": "apaszke" }, { "contents": "I’m starting to get the hang of how Variables work in PyTorch. Thanks. I’ll try it freeing some variables and let you know.", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "Tried your recommendation that made the process super fast but it prevented me from backpropagating because the resulting loss is not a variable type but a float type (no history). I was running the other CPU version with a larger dataset and this came out: <SCODE> File \"/home/ubuntu/anaconda2/lib/python2.7/site-packages/torch/optim/adam.py\", line 52, in step\n state['exp_avg_sq'] = grad.new().resize_as_(grad).zero_()\nRuntimeError: $ Torch: not enough memory: you tried to allocate 7GB. Buy new RAM! at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.8_1486039719409/work/torch/lib/TH/THGeneral.c:270\n<ECODE> This is weird considering how I’ve more than 60GB RAM. May I know if PyTorch is limiting the amount of RAM somehow?", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "No, it means that the allocation has failed - you didn’t have enough free RAM at that moment. Since you’re running low even on CPU memory it doesn’t seem surprising for me that it also fails on the GPU. 
If you think there’s a bug, it would be very helpful if you could isolate the minimal portion of the code that would allow us to reproduce the error.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>'''\nGPU LSTM\n 1 single input\n 1 single output\n'''\n\nfrom torch.autograd import Variable\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport numpy as np\nimport matplotlib as mpl\nmpl.use('Agg')\nimport matplotlib.pyplot as plt\n\n\nclass Net(nn.Module):\n def __init__(self, input_size, hidden_size, num_layers, bias, dropout):\n super(Net, self).__init__()\n self.rnn = nn.LSTM(input_size=input_size,\n hidden_size=hidden_size,\n num_layers=num_layers,\n bias=bias,\n dropout=dropout)\n\n\ndef input_var(i):\n test = np.array([i])\n# print(test.shape)\n # test = np.array([i])\n input_var = test.reshape(1, 1, 1) # (seq_len, batch, input_size)\n input_var = torch.from_numpy(input_var).float()\n return input_var\n\n\ndef label_var(i):\n test = np.array([i*4])\n label_var = test.reshape(1, 1) #\n label_var = torch.from_numpy(label_var).float()\n return label_var\n\n\nclass lstmModule:\n def __init__(self, input_size, hidden_size, num_layers, bias, dropout,\n seq_len, batch_size, meta_lr, n_meta_iter):\n self.input_size = input_size\n self.hidden_size = hidden_size\n self.num_layers = num_layers\n self.bias = bias\n self.dropout = dropout\n self.seq_len = seq_len\n self.batch_size = batch_size\n self.meta_lr = meta_lr\n self.n_meta_iter = n_meta_iter\n\n self.net = Net(input_size=input_size,\n hidden_size=hidden_size,\n num_layers=num_layers,\n bias=bias,\n dropout=dropout)\n\n self.net.cuda()\n\n self.h0 = Variable(torch.randn(self.num_layers,\n self.batch_size,\n self.hidden_size)).cuda()\n\n self.c0 = Variable(torch.randn(self.num_layers,\n self.batch_size,\n self.hidden_size)).cuda()\n\n self.optimizer = optim.Adam(self.net.rnn.parameters(), lr=self.meta_lr)\n\n self.loss_lst = []\n\n def lstm_forward(self, seq_num, meta_num):\n def pseudo_loss(output, label):\n return torch.mean(torch.sum(torch.abs(output - label)))\n\n inp = input_var(seq_num)\n input = Variable(inp).cuda()\n\n lab = label_var(seq_num)\n label = Variable(lab).cuda()\n\n if seq_num == 0:\n\n # Ensure clear gradient buffer\n self.optimizer.zero_grad()\n self.loss_tot = [0 for i in range(self.hidden_size)]\n\n # Label concatenation\n self.label_all = label\n\n # LSTM\n output, hn = self.net.rnn(input, (self.h0, self.c0))\n output = 100 * output\n\n op = [output[:, :, i] for i in range(self.hidden_size)]\n\n self.output_all = op\n # print('1 step length:', len(self.output_all))\n self.h, self.c = hn\n else:\n self.label_all = torch.cat((self.label_all, label), 0)\n output, hn = self.net.rnn(input, (self.h, self.c))\n output = 100 * output\n op = [output[:, :, i] for i in range(self.hidden_size)]\n self.h, self.c = hn\n self.output_all = [torch.cat((self.output_all[i], op[i]), 0) for i in range(self.hidden_size)]\n\n if seq_num == (self.seq_len - 1):\n # Get loss\n self.loss_tot = [self.loss_tot[i] + pseudo_loss(self.output_all[i], self.label_all) for i in range(self.hidden_size)]\n\n # Append loss\n self.loss_lst.append(sum(self.loss_tot).cpu().data.numpy()[0])\n\n # Backprop\n sum(self.loss_tot).backward()\n\n # Update optimizer\n self.optimizer.step()\n\n if seq_num == (self.seq_len - 1) and meta_num == (self.n_meta_iter - 1):\n # print(len(self.loss_lst))\n print('Loss 1', self.loss_tot[0].cpu().data.numpy())\n print('Loss 2', self.loss_tot[1].cpu().data.numpy())\n 
plt.clf()\n plt.plot()\n plt.title('Loss Curve')\n plt.plot(self.loss_lst, label='Loss Curve')\n plt.legend(loc='best')\n plt.savefig('loss.png')\n\n def lstm_check(self, seq_num):\n inp = input_var(seq_num)\n input = Variable(inp).cuda()\n lab = label_var(seq_num)\n label = Variable(lab).cuda()\n\n if seq_num == 0:\n # Ensure clear gradient buffer\n self.optimizer.zero_grad()\n self.loss_tot = [0 for i in range(self.hidden_size)]\n\n # Label concatenation\n self.label_all = label\n\n # LSTM\n output, hn = self.net.rnn(input, (self.h0, self.c0))\n output = 100 * output\n op = [output[:, :, i] for i in range(self.hidden_size)]\n self.output_all = op\n self.h, self.c = hn\n else:\n self.label_all = torch.cat((self.label_all, label), 0)\n output, hn = self.net.rnn(input, (self.h, self.c))\n output = 100 * output\n op = [output[:, :, i] for i in range(self.hidden_size)]\n self.h, self.c = hn\n self.output_all = [torch.cat((self.output_all[i], op[i]), 0) for i in range(self.hidden_size)]\n\n if seq_num == (self.seq_len - 1):\n print('-' * 10)\n print(self.output_all[0].cpu().data.numpy())\n print(self.label_all.cpu().data.numpy())\n print('-' * 10)\n print(self.output_all[1].cpu().data.numpy())\n print(self.label_all.cpu().data.numpy())\n\nN_meta = 10\nLR_meta = 0.1\nN_seq = 4\nbatch_size = 1\nlayers = 4\ninput_size = 1\nhidden_size = 15000\n\n# Initialize and assign class to object once\n# input_size, hidden_size, num_layers, bias, dropout, seq_len, batch_size, meta_lr, n_meta_iter):\nprint 'Initializing LSTM'\nlstm = lstmModule(input_size, hidden_size, layers, True, 0.1, N_seq, batch_size, LR_meta, N_meta)\nprint 'Initialized LSTM'\n\n# Run through meta iterations\nprint 'Training'\nfor j in range(N_meta):\n print('Meta iteration', j)\n # Run through each step\n for i in range(N_seq):\n lstm.lstm_forward(i, j)\nprint 'Done Training'\n\n# Check\nprint('-' * 10)\nprint 'Checking'\nfor i in range(N_seq):\n lstm.lstm_check(i)\nprint 'Done Checking'\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "And why do you think it should work with a history size of 10000? That requires a lot of stuff to keep around. You can’t expect the framework to work with arbitrarily large inputs, and that size sounds like something that possibly could raise an OOM error.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Was wondering if you’ve any suggestions on saving memory?", "isAccepted": false, "likes": null, "poster": "ritchieng" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "I’m having a similar CUDA memory issue but I’m only running a shallow CNN…I was wondering if anyone could provide any insight as to where this memory leak is? 
My model is pretty small: <SCODE>class RegressionalNet(torch.nn.Module):\n \n def __init__(self):\n super().__init__()\n self.feature_extractor = torch.nn.Sequential(\n \n torch.nn.Conv2d(1,64,5,padding=2), \n torch.nn.ReLU(),\n torch.nn.Conv2d(64,128,5,padding=2,bias=False),\n torch.nn.ReLU(),\n torch.nn.BatchNorm2d(128),\n torch.nn.MaxPool2d(2),\n torch.nn.Conv2d(128,256,5,padding=2,bias=False),\n torch.nn.ReLU(),\n torch.nn.BatchNorm2d(256),\n torch.nn.MaxPool2d(2))\n \n self.classifier = torch.nn.Sequential(\n\t torch.nn.Dropout(0.5),\n torch.nn.Linear(256*16*16,256), # Fully connected layer \n torch.nn.ReLU(), \n torch.nn.Linear(256,1))\n \n def forward(self,x):\n features = self.feature_extractor(x)\n output = self.classifier(features.view(int(x.size()[0]),-1))\n return output\n<ECODE> My training procedure looks as follows: <SCODE>def train(\n model: torch.nn.Module, \n transforms, \n data_path= DATA_PATH, \n val_path= VAL_PATH, \n num_epochs=101, \n batch_size=64, \n verbose=True,\n cube_length=640, img_size=(64, 64), \n loss=torch.nn.MSELoss(), \n lr_schedule=True, initial_lr=1e-3, suffix=\"\"):\n\n data_path = os.path.abspath(data_path)\n val_path = os.path.abspath(val_path)\t\n model = model.train()\n device = torch.device(\"cuda\")\n model = model.to(device).to(torch.float)\n \n \"\"\" LOADING IN THE REAL TRAINING AND VALIDATION DATASETS \"\"\"\n \n loader = DataLoader(FITSCubeDataset(data_path, cube_length, transforms, img_size), \n batch_size=batch_size, shuffle=True) \n validation_loader = DataLoader(FITSCubeDataset(val_path, cube_length, transforms, img_size), \n batch_size=batch_size, shuffle=True) \n \n optim = torch.optim.Adam(model.parameters(), initial_lr)\t\n accuracies, val_accuracies, epochs, val_epochs = [0], [0], [0], [0]\n\t\n for i in range(num_epochs):\n print(\"Epoch %d of %d\" % (i+1, num_epochs))\n _accuracies,_val_accuracies = [],[] \n model.train(True) \n for idx, (batch, target) in enumerate(tqdm(loader)):\n batch = batch.to(device).to(torch.float)\n if isinstance(loss, torch.nn.CrossEntropyLoss):\n target = target.to(device).to(torch.long)\n else:\n target = target.to(device).to(torch.float)\n pred = model(batch).reshape(-1)\n loss_value = loss(pred, target)\n optim.zero_grad()\n loss_value.backward()\n optim.step()\n \n ###Change the error metric here###\n\n _accuracies.append(loss_value) \n epochs.append(i+1) \n mean_accuracy = sum(_accuracies)/len(_accuracies)\n accuracies.append(mean_accuracy)\n print(\"Mean training loss: %f\" % mean_accuracy) \n<ECODE>", "isAccepted": false, "likes": null, "poster": "spacemeerkat" }, { "contents": "You need to do <SCODE>_accuracies.append(loss_value.detach())\n<ECODE> because otherwise you are saving the complete computation graph for every sample which causes the OOM", "isAccepted": false, "likes": 5, "poster": "justusschock" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "spacemeerkat" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "maqboolurrahim" } ]
false
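The recurring pattern in this thread is that appending a loss Variable (today: a tensor with a graph attached) to a Python list keeps its entire computation graph alive. A sketch of the safe bookkeeping in an ordinary training loop; the model and data here are illustrative.
<SCODE>import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
history = []

for step in range(100):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    history.append(loss.detach())  # or loss.item(): keep the value, drop the graph
<ECODE>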
Are only used parameters updated?
null
[ { "contents": "When A is a module with multiple sub-modules, where only a sub-set of the sub-modules are used dependent on the input, for example, A is defined as follows, <SCODE>class A(nn.Module):\n def __init__(self):\n self.common = nn.Linear(100, 50)\n self.module1 = nn.Linear(50,30)\n self.module2 = nn.Linear(50,2)\n def forward(self, input, idx):\n commonOut = self.common(input)\n if idx == 0:\n return self.module1(commonOut)\n else:\n return self.module2(commonOut)\n\nopt = torch.optim.Adam(A.parameters()) \n<ECODE> If it does, is there any way I can get the list of parameters that are actually updated? Thanks", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "No, we don’t do that. There’s no way how the optimizer would know which parameters were or weren’t used, and there’s no possible way to find out about that. It would require us to impose some strict requirements, that don’t really make sense in most use cases, for only minor improvements in other. Why is it such a problem that these parameters are getting updated? Is your script slow? It shouldn’t add a lot of overhead.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Aside from performance considerations, if the error is only being evaluated in terms of the output of a single module, aren’t there other (possibly negative) implications of updating the weights for components unused in the forward pass?", "isAccepted": false, "likes": 1, "poster": "jablongo" } ]
false
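A hedged sketch of how to see which parameters actually received gradients in a given step, using a corrected, self-contained version of the module from the question (the super().__init__() call and the instance used for the check are added by me):
<SCODE>import torch
import torch.nn as nn

class A(nn.Module):
    def __init__(self):
        super().__init__()
        self.common = nn.Linear(100, 50)
        self.module1 = nn.Linear(50, 30)
        self.module2 = nn.Linear(50, 2)

    def forward(self, input, idx):
        out = self.common(input)
        return self.module1(out) if idx == 0 else self.module2(out)

net = A()
net(torch.randn(4, 100), idx=0).sum().backward()
used = [name for name, p in net.named_parameters() if p.grad is not None]
print(used)  # common.* and module1.* only; module2.* never received a gradient
<ECODE>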
Speed benchmark on VGG16
vision
[ { "contents": "I am testing pytorch’s speed on a simple VGG16 benchmark and I have noticed the following timings: Do these timings sound reasonable and is there a reason why Iteration 1 is much faster than the rest ? System specs: Thanks !", "isAccepted": false, "likes": null, "poster": "tdeboissiere" }, { "contents": "Also, you’re measuring the time of CUDA copies, but using pinned memory and async transfers would be faster than the simplest approach.", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "Thanks! Can you point me to the documentation to re-implement the benchmark with pinned memory and async transfers ?", "isAccepted": false, "likes": null, "poster": "tdeboissiere" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" } ]
false
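A sketch of the measurement details hinted at above: CUDA kernels launch asynchronously, so read the clock only after torch.cuda.synchronize(), and host-to-device copies can overlap compute when the source is pinned and the copy is non-blocking. The helper itself is illustrative and assumes a CUDA device.
<SCODE>import time
import torch

def timed_step(model, batch):
    # Pinned memory + non_blocking=True lets the copy overlap with compute.
    batch = batch.pin_memory().cuda(non_blocking=True)
    torch.cuda.synchronize()
    start = time.time()
    out = model(batch)
    torch.cuda.synchronize()  # wait for the asynchronous kernels before timing
    return out, time.time() - start
<ECODE>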
How to assign values to tensor based on index array efficiently?
null
[ { "contents": "<SCODE>tf.TensorArray.scatter(indices, value, name=None)\n\nScatter the values of a Tensor in specific indices of a TensorArray.\n\nArgs:\n indices: A 1-D Tensor taking values in [0, max_value). If the TensorArray is not dynamic, max_value=size().\n value: (N+1)-D. Tensor of type dtype. The Tensor to unpack.\nname: A name for the operation (optional).\n\nReturns:\n A new TensorArray object with flow that ensures the scatter occurs. Use this object all for subsequent operations.\n\nRaises:\n\nValueError: if the shape inference fails.\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Sun_Mingjie" } ]
false
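For reference, a sketch of the in-place indexed assignment PyTorch offers for this kind of pattern: index_copy_ along a dimension, or plain advanced indexing.
<SCODE>import torch

buf = torch.zeros(5, 3)
idx = torch.tensor([0, 2, 4])
vals = torch.arange(9, dtype=torch.float).view(3, 3)

buf.index_copy_(0, idx, vals)  # rows 0, 2 and 4 of buf now hold vals
buf[idx] = vals                # equivalent via advanced indexing
<ECODE>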
Normalization in the mnist example
null
[ { "contents": "", "isAccepted": false, "likes": 10, "poster": "Russel_Russel" }, { "contents": "I think those are the mean and std deviation of the MNIST dataset.", "isAccepted": false, "likes": 1, "poster": "avijit_dasgupta" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "<SCODE># The output of torchvision datasets are PILImage images of range [0, 1].\n# We transform them to Tensors of normalized range [-1, 1]\ntransform=transforms.Compose([transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ])\ntrainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=4, \n shuffle=True, num_workers=2)\n\ntestset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=4, \n shuffle=False, num_workers=2)\n<ECODE> Why should be any different for MNIST dataset? Thanks in advance, David", "isAccepted": false, "likes": 5, "poster": "dlmacedo" }, { "contents": "MNIST is not natural images, it’s data distribution is quite different.", "isAccepted": false, "likes": 7, "poster": "smth" }, { "contents": "What an honor to be replied by you, smth. But the pytorch imagenet example is also very different from 0.5, 0.5, 0.5. <SCODE>normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225])\n\ntrain_loader = torch.utils.data.DataLoader(\n datasets.ImageFolder(traindir, transforms.Compose([\n transforms.RandomSizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n normalize,\n ])),\n batch_size=args.batch_size, shuffle=True,\n num_workers=args.workers, pin_memory=True)\n<ECODE> David", "isAccepted": false, "likes": 3, "poster": "dlmacedo" }, { "contents": "", "isAccepted": false, "likes": 10, "poster": "smth" }, { "contents": "Ok. Thank you very much for your answer. David", "isAccepted": false, "likes": 1, "poster": "dlmacedo" }, { "contents": "Any way you could share the code with which you compute mean/std on the dataset? Do you use the dataset class and iterate over it? Thanks", "isAccepted": false, "likes": 1, "poster": "achaiah" }, { "contents": "Did you figure out the code for calculating the mean and std within pytorch ?", "isAccepted": false, "likes": null, "poster": "padamsethia" }, { "contents": "Should just be able to use the ImageFolder or some other dataloader to iterate over imagenet and then use the standard formulas to compute mean and std. at the channel level E.g., for mean keep 3 running sums, one for the R, G, and B channel values as well as a total pixel count (if you are using Python2 watch for int overflow on the pixel count, could need a different strategy). 
Then simply divide the running sums by the pixel count", "isAccepted": false, "likes": 1, "poster": "adamvest" }, { "contents": "<SCODE>import argparse\nimport os\nimport numpy as np\nimport torchvision\nimport torchvision.transforms as transforms\n\ndataset_names = ('cifar10','cifar100','mnist')\n\nparser = argparse.ArgumentParser(description='PyTorchLab')\nparser.add_argument('-d', '--dataset', metavar='DATA', default='cifar10', choices=dataset_names,\n help='dataset to be used: ' + ' | '.join(dataset_names) + ' (default: cifar10)')\n\nargs = parser.parse_args()\n\ndata_dir = os.path.join('.', args.dataset)\n\nprint(args.dataset)\n\nif args.dataset == \"cifar10\":\n train_transform = transforms.Compose([transforms.ToTensor()])\n train_set = torchvision.datasets.CIFAR10(root=data_dir, train=True, download=True, transform=train_transform)\n #print(vars(train_set))\n print(train_set.train_data.shape)\n print(train_set.train_data.mean(axis=(0,1,2))/255)\n print(train_set.train_data.std(axis=(0,1,2))/255)\n\nelif args.dataset == \"cifar100\":\n train_transform = transforms.Compose([transforms.ToTensor()])\n train_set = torchvision.datasets.CIFAR100(root=data_dir, train=True, download=True, transform=train_transform)\n #print(vars(train_set))\n print(train_set.train_data.shape)\n print(np.mean(train_set.train_data, axis=(0,1,2))/255)\n print(np.std(train_set.train_data, axis=(0,1,2))/255)\n\nelif args.dataset == \"mnist\":\n train_transform = transforms.Compose([transforms.ToTensor()])\n train_set = torchvision.datasets.MNIST(root=data_dir, train=True, download=True, transform=train_transform)\n #print(vars(train_set))\n print(list(train_set.train_data.size()))\n print(train_set.train_data.float().mean()/255)\n print(train_set.train_data.float().std()/255)<ECODE>", "isAccepted": false, "likes": 15, "poster": "dlmacedo" }, { "contents": "So How should I know what mean and std should I use to transfer my images to? it is different for MNIST, CIFAR10, and ImageNEt… Any role that I need to stick with? Thanks", "isAccepted": false, "likes": null, "poster": "isalirezag" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Jing" }, { "contents": "The code is not widely applicable, if the training images are not the same size and in image format, you can not use the code to calculate per channel mean and std", "isAccepted": false, "likes": null, "poster": "jdhao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Royi" }, { "contents": "By normalizing the input, SGD algorithm will work better. If the feature scale is not approximately the same, it will takes longer time to find the minimum.", "isAccepted": false, "likes": null, "poster": "jdhao" }, { "contents": "Put my question differently, after this “Centering” does the Bias of the first layer filter is around 0?", "isAccepted": false, "likes": null, "poster": "Royi" }, { "contents": "Training is more stable and faster when parameters are small. As a fact, none of these first order optimization method guarantees finding minimum for arbitrary network (in fact, they can’t even find it for the simple ones). Therefore, although scaling & offsetting is equivalent to scaling the weights and offsetting bias at first linear layer, normalization proves to often give better results. Moreover, you shouldn’t normalize using every pixel’s mean and std. 
Since conv is an operation on channels, you should just use each channel’s mean and std.", "isAccepted": false, "likes": 2, "poster": "SimonW" }, { "contents": "Do we need tensors to be in the range of [-1,1] or is [0,1] okay? I have my own dataset of RGB images with a range of [0,1]. I manually normalized the dataset but the tensors are still in the range of [0,1]. What is the benefit of transforming the range to [-1,1]?", "isAccepted": false, "likes": 1, "poster": "lkins" } ]
false
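For reference, the running-sum idea described above can be written against a plain DataLoader, which also covers datasets without a `.train_data` array. A minimal sketch, assuming a recent torchvision, a hypothetical image folder path, and images that are already the same size (add a Resize transform otherwise):
<SCODE>
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical folder with class sub-directories of same-sized RGB images.
dataset = datasets.ImageFolder("path/to/train", transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=64, num_workers=2)

channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
n_pixels = 0

for images, _ in loader:
    # images: (B, C, H, W) in [0, 1]; accumulate over batch and spatial dims
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])
    n_pixels += images.size(0) * images.size(2) * images.size(3)

mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()  # Var[x] = E[x^2] - E[x]^2
print(mean, std)
<ECODE>
The resulting per-channel mean and std are what you would then pass to transforms.Normalize.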
Non-deterministic results
null
[ { "contents": "I run the follow code before definition of modules. (My model uses Embeding, Dropout, LSTM, and Linear layers.) <SCODE>torch.manual_seed(1000)\ntorch.backends.cudnn.enabled = False\ntorch.cuda.manual_seed(1000)\n<ECODE> However, the final results are still different for each trial (regardless I use CPU or GPU). Are there any other things I should do to get the deterministic results given the same input? Thanks.", "isAccepted": false, "likes": 2, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "edgarriba" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "edgarriba" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "edgarriba" }, { "contents": "I am trying to understand the role of seeding in pytorch. For example, if I have a model trained with a specific seed, can I say, it will produce the same output for a specific input? While with no seed, its not guaranteed to produce the same output? One thing is also bothering me that if I train without setting any seed, why I would get different output for the same input given that there is no randomness associated with my model?", "isAccepted": false, "likes": null, "poster": "Samrat_Hasan" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Houjing_Huang" } ]
false
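For readers on newer PyTorch versions, the usual reproducibility recipe combines the seeds and cuDNN flags discussed above. A sketch under the assumption of a recent release; flag names have shifted between versions, and DataLoader workers or inherently nondeterministic ops can still introduce variation:
<SCODE>
import random
import numpy as np
import torch

def seed_everything(seed: int = 1000) -> None:
    # Seed every RNG that typically feeds a training run.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # seeds CPU and CUDA generators
    torch.cuda.manual_seed_all(seed)
    # Trade speed for reproducibility in cuDNN convolutions.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

seed_everything(1000)
<ECODE>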
Format of weight parameters in KLDivLoss
null
[ { "contents": "KLDivLoss can take a weight parameter but the docs don’t specify how it should be formatted. What should the format be?", "isAccepted": false, "likes": null, "poster": "mromaniuk" }, { "contents": "It should be a 1D tensor having as many elements as you have classes. We’ll have to add that to the docs, thanks for reporting that!", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Can it? I can see nothing weight related in the docs, nor in the source.", "isAccepted": false, "likes": null, "poster": "nofreewill" } ]
false
Put a new top on ResNet
null
[ { "contents": "I’m trying to replace the last layer of an imagenet trained model. <SCODE>model_imagenet = models.resnet18(pretrained=True)\nmodel_imagenet\n\nclass ResNetNewTop(nn.Module):\n def __init__(self, old_model, num_classes=2):\n super(ResNetNewTop, self).__init__()\n self.bottom = nn.Sequential(*list(old_model.children())[:-1])\n self.top = nn.Linear(512, num_classes) # TODO: input size (512) from bottom\n\n def forward(self, x):\n x = self.bottom(x)\n x = self.top(x) # <<< Errors here\n return x\n\nmodel = ResNetNewTop(model_imagenet, num_classes=2)\nmodel.cuda()\nmodel\n<ECODE> This is throwing RuntimeError: matrix and matrix expected at [snip] /torch/lib/THC/generic/THCTensorMathBlas.cu:235 Can I put Sequential() into a new model directly? Or do I have to loop through the layers and build up the model that way?", "isAccepted": false, "likes": null, "poster": "telesphore" }, { "contents": "Additionally, in case of ResNet, you could just replace the final layer with a new one and it should work I think: <SCODE>model_imagenet.fc = nn.Linear(512, num_classes)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "I do this: <SCODE>model = models.__dict__[opt.arch](pretrained=True) #pretrained=False if you don't want to use pre-trained weights.\nmodel.fc = nn.Linear(512, num_classes)<ECODE>", "isAccepted": false, "likes": null, "poster": "avijit_dasgupta" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Yes. It will only work for ResNets. Your answer is more generalized.", "isAccepted": false, "likes": null, "poster": "avijit_dasgupta" }, { "contents": "Thank you! The diagnosis was spot on. I like the shortcut too; it helped me realize how loosely coupled all of the layers are.", "isAccepted": false, "likes": null, "poster": "telesphore" }, { "contents": "<SCODE>feat = nn.Sequential(*list(resnet.children())[:-1])\nfc1 = nn.Linear(a, b)\nfc1 = nn.Linear(b, c)\n\n<ECODE> the layer names in feat are lost.", "isAccepted": false, "likes": null, "poster": "drcege" } ]
false
How to slice a matrix in PyTorch
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Jie317" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "apaszke" } ]
false
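The replies in this thread lost their bodies during extraction. As a stand-in, a few common slicing patterns that address the question; indexing largely mirrors NumPy:
<SCODE>
import torch

m = torch.arange(12).reshape(3, 4)

row = m[1]            # second row, shape (3 -> 4,)
col = m[:, 2]         # third column, shape (3,)
block = m[0:2, 1:3]   # 2x2 sub-matrix

# Fancy indexing with an index tensor
idx = torch.tensor([0, 2])
rows = m[idx]                   # rows 0 and 2
cols = m.index_select(1, idx)   # columns 0 and 2

# Boolean masking
mask = m > 5
big = m[mask]                   # 1D tensor of the elements greater than 5
<ECODE>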
Runtime error caused by dependency engine?
null
[ { "contents": "Test script: <SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\nclass Net(nn.Module):\n\n def __init__(self, **config):\n super(Net, self).__init__()\n self.config = config\n self.embedding = nn.Embedding(config['vocab_size'], config['embedding_size'])\n self.rnn = nn.LSTM(\n input_size = config['code_size'] + config['embedding_size'],\n hidden_size = config['hidden_size'],\n num_layers = config['num_layers'],\n dropout = config['dropout_ratio'],\n )\n self.linear = nn.Linear(config['hidden_size'], config['vocab_size'])\n self.softmax = nn.Softmax()\n\n def forward(self, code, step):\n batch_size = code.size()[0]\n prev_index = Variable(torch.LongTensor(batch_size).fill_(self.config['beg_index']))\n prev_h = prev_c = Variable(torch.zeros(self.config['num_layers'], batch_size, self.config['hidden_size']))\n logits = []\n for i in range(step):\n prev_vector = self.embedding(prev_index)\n curr_input = torch.cat((code, prev_vector), 1)\n curr_input = curr_input.view(1, *curr_input.size())\n curr_output, (curr_h, curr_c) = self.rnn(curr_input)\n prev_h, prev_c = curr_h, curr_c\n logit = self.linear(curr_output.squeeze())\n prev_index = torch.max(logit, 1)[1].squeeze()\n logits.append(logit)\n shape = (len(logits),) + logits[0].size()\n logit = torch.cat(logits, 0)\n prob = self.softmax(logit).view(*shape)\n return prob\n\nnet = Net(\n code_size = 100,\n hidden_size = 50,\n num_layers = 2,\n dropout_ratio = 0,\n vocab_size = 1000,\n embedding_size = 10,\n beg_index = 1,\n)\ncode = Variable(torch.FloatTensor(14, 100))\nprob = net(code, 10)\nprob.backward(torch.ones(prob.size()))\n<ECODE> Execution result (note the last line): <SCODE>Traceback (most recent call last):\n File \"test.py\", line 50, in <module>\n prob.backward(torch.ones(prob.size()))\n File \"/Users/warbean/anaconda3/envs/py35/lib/python3.5/site-packages/torch/autograd/variable.py\", line 158, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\nRuntimeError: could not compute gradients for some functions (Linear, Linear, Linear, Linear, Linear, Linear, Linear, Linear, Linear)\n<ECODE> <SCODE>THPUtils_assert(not_ready.empty(), \"could not compute gradients for some functions (%s)\", names.c_str());\n<ECODE> Is there something wrong in PyTorch’s dependency engine or in my usage? (I’m not sure whether there is a “dependency engine” concept in PyTorch. I just borrow it from MXNet.)", "isAccepted": false, "likes": null, "poster": "WarBean" }, { "contents": "Yeah we call that backward engine. It’s a bug indeed. Use this workaround for now, and I’ll fix it today: <SCODE>prev_index = Variable(torch.max(logit.data, 1)[1].squeeze())\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I now see that the problem arises because the Linear layer will get gradients only from the last iteration of the loop (it is followed by non-differentiable argmax in all other cases). Not sure if that’s desired, just wanted to give you a heads up.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you for the workaround! It runs without error. However I don’t understand what is “get gradients only from the last iteration of the loop”. What I desire is to propagate gradients into each output step, throughout all the sequence to the beginning. Should I propagate into each output step indivisually? 
Like this: <SCODE>for prob in probs:\n prob.backward(gradient, retain_variables = True)<ECODE>", "isAccepted": false, "likes": null, "poster": "WarBean" }, { "contents": "Sorry, nevermind my comment. I only visualized the graph for a single iteration. It’s all fine.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
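On current PyTorch the workaround above is usually spelled with .detach(). A small sketch of the same idea, feeding the greedy argmax back into the decoder without involving it in autograd (the tensors here are placeholders, not the poster's model):
<SCODE>
import torch
import torch.nn as nn

embedding = nn.Embedding(10, 8)
logit = torch.randn(4, 10, requires_grad=True)   # stand-in for one decoder step's output

# Greedy choice of the next input token. The indices are integer-valued and carry no
# gradient; .detach() makes that explicit (the old workaround rebuilt the Variable
# from .data for the same reason).
prev_index = logit.argmax(dim=1).detach()
prev_vector = embedding(prev_index)              # (4, 8), used as the next step's input
<ECODE>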
Reparameterizing a Parameter
null
[ { "contents": "Hi guys, W_effective[:,:,::2,::2] = W_base And have a 5x5 filter which is effectively a dilated 3x3 filter, and where only the elements that I indexed in that call are parameters that can be updated. Note that I’m not just trying to dilate a filter (the convNd modules already look to have that on lock) but ideally I’d like to be able to arbitrarily parameterize a tensor by indexing. Any help is appreciated! Best, Andy", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "<SCODE>class MyModule(nn.Module):\n def __init__(self):\n self.large_linear = nn.Linear(100, 100)\n\n def forward(self):\n x = F.sigmoid(self.large_linear(x))\n W_small = self.large_linear.weight[:10]\n b_small = self.large_linear.bias[:10]\n return F.linear(x, W_small, b_small)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks! Hmm, so including the paramterization in forward() makes sense so that the new param is a part of the graph…might it instead be easier to initialize a Parameter, set the static values to zero, then specifically index in the backprop to only update the desired indices? Best, Andy", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Alright, so I’ve dug further into this and found some interesting things. TL; DR: I solved it for my use-case, but I might have stumbled onto some bugs. Don’t use in-place operations, bad things happen. I managed to achieve what I was going for by creating the main weight parameter, w, and two intermediate variables, I initialize both W and M in the init method of the module, then call W[M] = w during the forward() method, and convolve using the modified W. This works, but for some reason when I do things this way it results in training starting out fast, then progressively slowing down (starting out at around 5 batches/s and dropping to 0.2 batches/s over the course of the first 1000 batches). It also throws an error about non-leaf variables not yet being serializable when I train to use torch.save, presumably because I’m creating some naughty nodes that I shouldn’t be. My initial suspicion was that I was creating additional subgraphs that weren’t being deleted, or not freeing memory appropriately (more on that in a moment), but the memory usage in this case was constant. Investigating further, I found that if I replaced W[M] = w with W.masked_copy_(M,w), I get an error after the first batch saying that I need to use\"retain_variables=True\" if I want to backpropagate a second time through the graph. The error method is a bit confusing here, as I am only calling backward() once. My intuition is that the above error occurs because I’m calling an in-place method in forward(), which seems to be against best pytorch practice at the moment, so whatever variables autograd would need to do the backprop aren’t getting saved. Calling backward(retain_variables=True) results in the same behavior as using W[M]=w; it works, but it progressively slows down throughout training. I’m still not sure on what’s causing the slowing–my best guess is that some part of the graph isn’t getting appropriately freed in such a way that rather than creating multiple subgraphs that take up memory, it’s just propping through the same graph element an increasing number of times on each successive iteration. 
Anyhow, I did manage to get things working for my use-case–rather than trying to do a masked copy or in-place operation, I just instantiate W as a full-rank tensor and drop W*M into the F.conv2d call. Works great and is about twice as fast as using the dilation parameter (presumably because it allows for the use of the cuDNN backend). Here’s a code snippet with which I’m currently getting ~80-100% speedup over using the dilation keyword for my use-case. Note that this currently prevents you from saving with torch.save due to a “can’t serialize non-leaf variables yet” dealio. in init: in forward: Sorry for the wall of text, but hopefully this will prove enlightening and thorough if other people come along with similar issues on in-place ops. For the record, I’m using the build provided by conda install and am on python 2.7 (my attempts at building from source crash, sigh). I tested these with Cuda7.5 on a GTX980 and Cuda8.0 on a Titan X. Best, Andy", "isAccepted": false, "likes": null, "poster": "ajbrock" }, { "contents": "This also causes the serialization failure - you can only save leaf Variables, but your W keeps hold of the history, so it’s not a leaf! That’s probably because PyTorch has to pick a more memory-intensive backend for dilated convolutions. If you can send me a diff that I could test, I could confirm that it’s because of this.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I nominate Adam for king of speed replies, wow! So I tried two variations on repackage_hidden in the forward() method, both of which solve this issue: I was only calling backward() a single time, hence my initial confusion; I mostly found this interesting because it only occurred when switching to masked_copy_ from a direct indexing call. Thanks, Andy", "isAccepted": false, "likes": 1, "poster": "ajbrock" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
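The "in init:" / "in forward:" snippets in the post above did not survive extraction. A minimal sketch of the approach it describes, assuming current PyTorch: keep a full-size weight, register a fixed 0/1 mask as a buffer, and multiply the two inside forward so only the unmasked taps receive gradient:
<SCODE>
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedDilatedConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Full 5x5 weight; only the positions of an effective dilated 3x3 filter are unmasked.
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 5, 5) * 0.01)
        mask = torch.zeros(out_ch, in_ch, 5, 5)
        mask[:, :, ::2, ::2] = 1.0
        self.register_buffer("mask", mask)  # fixed, not a learnable parameter

    def forward(self, x):
        # Zeroed positions contribute nothing and receive zero gradient.
        return F.conv2d(x, self.weight * self.mask, padding=2)

layer = MaskedDilatedConv(3, 8)
out = layer(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 8, 32, 32])
<ECODE>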
Pretrained resnet model converted from caffe
null
[ { "contents": "I’m using resnet to do feature extraction. I’m assuming the current resnet provided in model zoo is converted from fb.resnet.torch. Are you planning to convert the caffe model into pytorch version? (From my own experience, it seems the caffe one is better.)", "isAccepted": false, "likes": 1, "poster": "ruotianluo" }, { "contents": "the current model is trained from scratch and matches accuracy to fb.resnet.torch model", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I realize that. The features just don’t work as well as the features extracted from the caffe model.", "isAccepted": false, "likes": 3, "poster": "ruotianluo" } ]
false
Construct new tensor on correct device based on input
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Hi! I’m sorry to reply to an old post but I thought replying here would be better than starting a new topic. <SCODE>device = 1\nX = torch.rand(10).cuda(device) # or X = torch.rand(10)\n<ECODE> How to make code below agnostic to whether X is on CPU/GPU and which device? <SCODE># Fill with numpy data\ndata = np.array([1, 2, 3]) # For example indices calculated by custom Python procedure\nX.new().long().new(data) # Works on CPU, error on GPU?\nX.new().long().new(*data.shape).copy_(torch.from_numpy(data)) # Works but verbose\nX.new(data.astype(float)).long() # Inefficient, lossy?\n# X.new(data) # Error\n\n# Or fill with zeros\nX.new(10).long().zero_() # Works, but inefficient?\nX.new().long().new(10).zero_() # Works, but verbose?<ECODE>", "isAccepted": false, "likes": 1, "poster": "wouter" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wouter" }, { "contents": "What is the new alternative for this use case?", "isAccepted": false, "likes": null, "poster": "bgobbi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shlomia" }, { "contents": "This creates very ugly (and slow) code such as <SCODE> if std.is_cuda:\n eps = torch.FloatTensor(std.size()).cuda().normal_()\n else:\n eps = torch.FloatTensor(std.size()).normal_()\n<ECODE> Isn’t there a better way?", "isAccepted": false, "likes": null, "poster": "Noam_Salomonski" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Noam_Salomonski" } ]
false
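The .new()-based idioms above predate the 0.4 tensor/device API. On current PyTorch the device-agnostic versions are usually written with new_* helpers or an explicit device= argument; a short sketch:
<SCODE>
import torch
import numpy as np

X = torch.rand(10)                 # may or may not live on a GPU
# X = torch.rand(10, device="cuda:1")

# Zeros with the same dtype and device as X
z = X.new_zeros(10)

# Integer indices on X's device, filled from numpy
data = np.array([1, 2, 3])
idx = torch.as_tensor(data, dtype=torch.long, device=X.device)

# Fresh noise matching X's device
eps = torch.randn(X.shape, device=X.device)
<ECODE>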
How to get the class names to class label mapping
null
[ { "contents": "I am using ResNet-18 for classification purpose. I have used dataloader to load the data. But how get the label to class name mapping? Does it load in alphabetical order?", "isAccepted": false, "likes": 1, "poster": "avijit_dasgupta" }, { "contents": "For future reference: <SCODE>def find_classes(dir):\n classes = os.listdir(dir)\n classes.sort()\n class_to_idx = {classes[i]: i for i in range(len(classes))}\n return classes, class_to_idx<ECODE>", "isAccepted": false, "likes": null, "poster": "avijit_dasgupta" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "zym1010" }, { "contents": "Something like this: <SCODE>dataset = datasets.ImageFolder(data_folder, transform=data_transforms)\nprint(dataset.class_to_idx)\n<ECODE>", "isAccepted": false, "likes": 9, "poster": "Ray_Luo" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "vipin14119" } ]
false
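class_to_idx maps names to labels; the label-to-name direction asked about in the title is just the inverted dictionary. A sketch assuming an ImageFolder-style dataset at a hypothetical path:
<SCODE>
from torchvision import datasets, transforms

dataset = datasets.ImageFolder("path/to/data", transform=transforms.ToTensor())

# ImageFolder sorts the class folder names, so labels follow alphabetical order.
class_to_idx = dataset.class_to_idx                 # e.g. {'cat': 0, 'dog': 1}
idx_to_class = {v: k for k, v in class_to_idx.items()}

label = 1
print(idx_to_class[label])                          # e.g. 'dog'
<ECODE>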
Improving the Performance of Fully Connected Neural Networks by Out-of-Place Matrix Transpose
null
[ { "contents": "If these results are to be believed it’s perhaps worth looking into.", "isAccepted": false, "likes": null, "poster": "Veril" }, { "contents": "Thanks for the reference! We’ll look into it.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Register_backward_hook on nn.Sequential
null
[ { "contents": "<SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\na = nn.Sequential(nn.Linear(5,3), nn.Tanh(), nn.Linear(3,2))\n\ndef hookFunc(module, gradInput, gradOutput):\n\tprint(len(gradInput))\n\tfor v in gradInput:\n\t\tprint v\na.register_backward_hook(hookFunc)\n\ninput = Variable(torch.randn(4,5))\noutput = a(input)\n\ntarget = torch.FloatTensor(4,2).fill_(1)\noutput.backward(target)\n<ECODE> <SCODE>Variable containing:\n-0.5910 -0.7340 -0.4239\n-0.5910 -0.7340 -0.4239\n[torch.FloatTensor of size 2x3]\n\nVariable containing:\n 4\n 4\n[torch.FloatTensor of size 2]\n<ECODE>", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "Yes, you should never modify any arguments given to the hook in-place. If you want to replace grad input, you can do out-of-place operations on it, and return new values from the hook.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "Like that: <SCODE>x = Variable(torch.randn(5, 5), requires_grad=True)\ny = x + 2\ny.register_hook(lambda grad: grad * 2)\ny.sum().backward()\nx.grad # is now filled with 2\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "But remember about that container hook problem. Do this only on primitive modules or Variables.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "It works in the same way for modules: <SCODE>module.register_backward_hook(lambda module, grad_i, grad_o: grad_i * -1)\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Qizhe_Xie" }, { "contents": "That approach worked in my case.", "isAccepted": false, "likes": 1, "poster": "supakjk" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "WERush" }, { "contents": "<SCODE>class DDReLU(nn.Module):\n def __init__(self):\n super(DDReLU, self).__init__()\n self.threshold = nn.Parameter(torch.rand(1), requires_grad=True)\n self.register_backward_hook(lambda module, grad_i, grad_o: (grad_i[0], grad_i[1]*0.01))\n #self.threshold.data.fill_(0.1)\n self.ReLU = nn.ReLU(True)\n\n def forward(self, x):\n print(self.threshold.data[0])\n return self.ReLU(x + self.threshold) - self.threshold\n #return self.ReLU(x) + self.threshold\n<ECODE> Is the code above fine to change the relative learning rate of the new parameter? By relative learning rate, I mean: The parameter created has a learning rate that is 0.01 times the one used to the other model’s parameters.", "isAccepted": false, "likes": null, "poster": "dlmacedo" } ]
false
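On recent PyTorch versions, register_full_backward_hook is the supported variant of this, and the hook receives and returns tuples of gradients. A sketch of a gradient-scaling hook; the multiply-by-(-1) "gradient reversal" from the thread works the same way, just with a different factor:
<SCODE>
import torch
import torch.nn as nn

layer = nn.Linear(5, 3)

def scale_grad_input(module, grad_input, grad_output):
    # grad_input / grad_output are tuples; return a tuple of the same length.
    return tuple(g * 0.5 if g is not None else None for g in grad_input)

handle = layer.register_full_backward_hook(scale_grad_input)

x = torch.randn(4, 5, requires_grad=True)
layer(x).sum().backward()
print(x.grad)        # gradients flowing out of the layer are halved

handle.remove()
<ECODE>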
About pytorch update
null
[ { "contents": "Hi all, Anybody know what this means?", "isAccepted": false, "likes": null, "poster": "big_tree" }, { "contents": "Since the model structure is actually defined by the code, we’ve implemented a simple mechanism that saves the source code of each Module that you save, and when it’s loaded it compares it with the source that’s used to reinstantiate it. It’s meant to warn you that the model might now work differently, if you change your model code in the meantime. I didn’t realize it also worked for built in modules, we’ll have to disable that. You can safely ignore the warning, and overwriting the old checkpoint with a new one will make it disappear. Also, we recommend serializing only the state dict.", "isAccepted": false, "likes": 5, "poster": "apaszke" }, { "contents": "Thanks for your help!", "isAccepted": false, "likes": null, "poster": "big_tree" } ]
false
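"Serializing only the state dict", as recommended above, looks roughly like this in practice (the filename is arbitrary):
<SCODE>
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))

# Save only the parameters and buffers, not the pickled module objects.
torch.save(model.state_dict(), "checkpoint.pth")

# Later: rebuild the model from code, then load the weights.
model2 = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
model2.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
model2.eval()
<ECODE>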
RuntimeError: could not compute gradients for some functions (CudnnRNN)
null
[ { "contents": "Hi, Unfortunately getting the following behavior: <SCODE> File \"/home/jd/pytorch/examples/translation/translate_gpu.py\", line 296, in trainEpochs\n loss = train(input_variable, target_variable, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion)\n\n File \"/home/jd/pytorch/examples/translation/translate_gpu.py\", line 247, in train\n loss.backward()\n\n File \"/home/jd/anaconda/lib/python2.7/site-packages/torch/autograd/variable.py\", line 158, in backward\n self._execution_engine.run_backward((self,), (gradient,), retain_variables)\n\nRuntimeError: could not compute gradients for some functions (CudnnRNN)\n<ECODE> It works on the CPU. Any hints as to what might be causing this behavior? The RuntimeError sadly doesn’t mention which operations couldn’t get the gradient computed. Is there a list of supported/unsupported ops I could reference somewhere? Thanks,", "isAccepted": false, "likes": 1, "poster": "ponythewhite" }, { "contents": "I don’t think it’s your fault. We’ll have to look into that. Thanks for the report.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks! Should I create an issue on Github, or will you? Would be great if I could track the status. Best,", "isAccepted": false, "likes": null, "poster": "ponythewhite" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Awesome, thanks! Would love to help with this, but can’t do much in C++.", "isAccepted": false, "likes": null, "poster": "ponythewhite" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thank you! I also ran into this but forgot to report it.", "isAccepted": false, "likes": null, "poster": "spro" }, { "contents": "Got 0.1.9 hot off the press and I can confirm it works, approx 3.5x speedup on my GPU. Any ideas for making it faster? Is it a bad idea to make new Variables in the training loop?", "isAccepted": false, "likes": null, "poster": "spro" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "And no, creating Variables is very very cheap, so you can do it at every step without any problem.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "spro" }, { "contents": "Yup, that’s been fixed! 
The fix initially introduced another bug, but that’s fixed as well.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "<SCODE>class Linear(torch.autograd.Function):\n\n def forward(self, input, weight, bias):\n self.eps = 1e-16\n self.Z = weight.t()[None, :, :] * input[:, :, None]\n self.Zs = self.Z.sum(dim=1, keepdim=True)\n if bias is not None:\n self.Zs += bias[None, None, :]\n return self.Zs.squeeze(dim=1)\n\n def backward(self,R):\n print('linear')\n return (R[:, None, :] * self.Z / (self.Zs + (2 * (self.Zs >= 0) - 1) * self.eps)).sum(dim=2)\n\n<ECODE> and run <SCODE>model.readout.backward(out)\n<ECODE> Unfortunately, it reported as follows <SCODE>Traceback (most recent call last):\n\n File \"G:\\Explainer\\_Geo_Exp_inter\\infectionLRP.py\", line 117, in <module>\n print(model.readout.backward(out))\n\n File \"D:\\Anaconda3\\lib\\site-packages\\torch\\tensor.py\", line 195, in backward\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\n\n File \"D:\\Anaconda3\\lib\\site-packages\\torch\\autograd\\__init__.py\", line 98, in backward\n allow_unreachable=True) # allow_unreachable flag\n\nRuntimeError: could not compute gradients for some functions\n<ECODE> It has confused me for a few days and I don’t really kown what cause it and how to fix it. Yingxin Wu", "isAccepted": false, "likes": null, "poster": "Yingxin_Wu" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "albanD" } ]
false
Discussion on pyTorch packages: which ones do you use?
null
[ { "contents": "Hello everyone! This is my first post in the forums. I am a new user, starting my Torch and pyTorch experience and I am very excited to do so! I am still going over documentation, tutorials and such and considering further options. One thing I thought I would ask the more experienced people in the forum concerns packages. There’s quite a few packages in pyTorch (this is one of my favourite aspects of Torch actually!) and I was wondering if people would like to ‘pass the torch’ in a sense sharing their experiences with them. Which package was the best in loading data from a csv file? Which packages do you use more often when dealing with images? Which packages do you use more often when dealing with text and NLP? Which packages do you use for visualization of model performance and understanding of your results? Which packages do you think are missing and could be developed to further enhance pyTorch? Thanks in advance for any comments that might come! See you down the Torch road!", "isAccepted": false, "likes": null, "poster": "Theodore" } ]
false
Are there any recommended methods to clone a model?
null
[ { "contents": "I’m interested to clone a model for various reasons (makes it easy to stack untied versions of the same model for instance). Any recommended methods for doing so?", "isAccepted": false, "likes": 7, "poster": "mrdrozdov" }, { "contents": "", "isAccepted": false, "likes": 31, "poster": "apaszke" }, { "contents": "Thanks! Seems like a reasonable approach.", "isAccepted": false, "likes": 1, "poster": "mrdrozdov" }, { "contents": "Ended up going with something like: <SCODE>model_clone = model_cls(**kwargs)\nmodel_clone.load_state_dict(copy.deepcopy(original_model.state_dict()))\n<ECODE> Works well for me!", "isAccepted": false, "likes": 4, "poster": "mrdrozdov" }, { "contents": "No need to deepcopy the state dict. Its contents won’t be assigned, but copied into the clone.", "isAccepted": false, "likes": 9, "poster": "apaszke" }, { "contents": "I have a situation where I am copying weights back and forth between two instances of the same model. Unfortunately, copy.deepcopy is not working for me. I am having to do: <SCODE>mp = list(model.parameters())\nmcp = list(model_copy.parameters())\nn = len(mp)\nfor i in range(0, n):\n mp[i].data[:] = mcp[i].data[:]\n<ECODE> While this is fine, I wonder why deepcopy function is not working.", "isAccepted": false, "likes": 3, "poster": "rk.epfl" }, { "contents": "Did you figure it out? I am having the same problem", "isAccepted": false, "likes": null, "poster": "damaru" }, { "contents": "I did not find a different way when I had this problem. But more recently, I used the “copy_” operation to copy weights layer by layer from one model to another: <SCODE>mydict = mymodel.state_dict()\nlayer_names = list(mydict)\n\n# Now, to copy values for a particular layer using the name or index of it:\n\nmydict[layer_names[index_of_layer]].copy_(some_data_with_matching_shape)\n<ECODE> If there is a better way, I would be happy to learn.", "isAccepted": false, "likes": 2, "poster": "rk.epfl" }, { "contents": "What happens if I do this: <SCODE>hNetModel = Model()\n for trainBatch, trainLabels in hTrainLoader:\n <Train the Model by a Function>\n modelEvaluationMetric = hNetModel(Validation)\n if(modelEvaluationMetric < bestModelMetric):\n hBestModel = hNetModel\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Royi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "<SCODE>hNet in NetList\nhNet = TrainNet(hNetModel)\nmodelEvaluationMetric = hNetModel(Validation)\n if(modelEvaluationMetric < bestModelMetric):\n hBestModel = hNet\n<ECODE> I thought at least when something gets back from a function it is a different copy of it (Yea, I’m not so experienced with Python).", "isAccepted": false, "likes": null, "poster": "Royi" }, { "contents": "just to make your answer clear you mean: <SCODE>new_mdl = copy.deepcopy(model)\n<ECODE> right?", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "why is deep copy not working for you? in what way is it not working compared to what u expected?", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "Does something inspired from: or not work for you?", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "Hi, copy.deepcopy(model) works fine for me in previous PyTorch versions, but as I’m migrating to version 0.4.0, it seems to break. It seems to have something to do with torch.device. How should I do cloning properly in version 0.4.0? 
(torch.device device) (str type, int index)", "isAccepted": false, "likes": null, "poster": "X-czh" }, { "contents": "Deepcopy is not working for me. I have a function train(model) which returns the trained model, model_trained = train(model_untrained). However as result both are trained at the end, but I want the model_untrained to be unchanged. So I tried to deep-copy the model_untrained inside the function before the training loop, but It is not working – the model is not trained correctly. Any idea why is it happening?", "isAccepted": false, "likes": null, "poster": "kuzand" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "ptrblck" }, { "contents": "Yes I am training the copied model. You are right about the optimizer, I was passing the original model parameters to it. Thanks for spotting it!", "isAccepted": false, "likes": 1, "poster": "kuzand" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "shivamsaboo17" }, { "contents": "<SCODE>import pickle\ncopyed_model = pickle.loads(pickle.dumps(model))\n<ECODE>", "isAccepted": false, "likes": 6, "poster": "52153d3b7bd2d9066500" } ]
false
How can I make custom nn.Module with max pooling index?
null
[ { "contents": "I would like to custom nn.Module including max unpooling module. For example, if I declare MyModule1 and 2 which outputs the pooling index beside output signal and takes the pooling index (pool_idx), respectively, as below. <SCODE>class MyModule1(nn.Module):\n def __init__(self):\n super(MyModule1, self).__init__()\n self.conv = nn.Conv2d(64, 3, 4)\n self.pool = nn.MaxPool2d(3, 3, return_indices = True)\n\n def foward(self, x):\n x = self.convl(x)\n x, pool_idx = self.pool(x)\n return x, pool_idx\n\nclass MyModule2(nn.Module):\n def __init__(self, pool_idx):\n super(MyModule, self).__init__()\n self.pool_idx = pool_idx\n self.conv = nn.ConvTransposed2d(3, 64, 4)\n self.unpool = nn.MaxUnpool2d(3,3)\n\n def forward(self, x):\n x = self.unpool(x, self.pool_idx)\n x = self.conv(x)\n return x\n<ECODE> For example, <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.m1 = MyModule1()\n self.m2 = MyModule2()\n\n def forward(self, x):\n x, pool_idx1, pool_idx2 = self.m1(x)\n x = self.m2(x, pool_idx1, pool_idx2)\n return x\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" }, { "contents": "You shouldn’t save the pooling index in the constructor. This should work: <SCODE>class MyModule1(nn.Module):\n def __init__(self):\n super(MyModule1, self).__init__()\n self.conv = nn.Conv2d(64, 3, 4)\n self.pool = nn.MaxPool2d(3, 3, return_indices = True)\n\n def foward(self, x):\n x = self.convl(x)\n x, pool_idx = self.pool(x)\n return x, pool_idx\n\nclass MyModule2(nn.Module):\n def __init__(self):\n super(MyModule2, self).__init__()\n self.conv = nn.ConvTransposed2d(3, 64, 4)\n self.unpool = nn.MaxUnpool2d(3,3)\n\n def forward(self, x, pool_idx):\n x = self.unpool(x, pool_idx)\n x = self.conv(x)\n return x\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.m1 = MyModule1()\n self.m2 = MyModule2()\n\n def forward(self, x):\n x, pool_idx = self.m1(x)\n x = self.m2(x, pool_idx)\n return x\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Seungyoung_Park" } ]
false
Some confusions about nn.Linear that bothered me ;-(
null
[ { "contents": "Linear layer requires 2D tensor. Does it mean that I got to use ‘view’ function to reshape the previous output every time I call it? But for some experiments BN and RELU are both needed afterwards. They both require 4D, for which I need to call reshape function again and again… and I can’t find one corresponding reshape function for nn.Sequence()? Is there any plan to add it?", "isAccepted": false, "likes": null, "poster": "hbkunn" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "thank you for replying me! I’m so sorry for not noticing BatchNorm1d in the document. Now my confusion is perfectly solved~", "isAccepted": false, "likes": 1, "poster": "hbkunn" }, { "contents": "that’s reasonable enough~ Thanks for your replying!", "isAccepted": false, "likes": null, "poster": "hbkunn" } ]
false
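The ‘view’ reshaping mentioned above is typically a one-liner between the convolutional and fully connected parts, with BatchNorm1d operating on the flattened features. A sketch:
<SCODE>
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, 10)
        self.bn = nn.BatchNorm1d(10)

    def forward(self, x):              # x: (B, 3, 32, 32)
        x = torch.relu(self.conv(x))   # (B, 8, 32, 32)
        x = x.view(x.size(0), -1)      # flatten to (B, 8*32*32)
        return torch.relu(self.bn(self.fc(x)))

net = SmallNet()
print(net(torch.randn(4, 3, 32, 32)).shape)   # torch.Size([4, 10])
<ECODE>
Newer releases also provide nn.Flatten, which can be dropped directly into an nn.Sequential and covers the reshape-module request in the original question.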
Prefix parameter names in saved model if trained by multi-GPU?
null
[ { "contents": "Hi, While doing inference I only use one GPU so the model failed to load the latter model file because the parameter names are not matching. I am wondering why parameter names are prepend the prefix? Can I trim the prefix and still use the model?", "isAccepted": false, "likes": 2, "poster": "ming" }, { "contents": "Yes, you can just remove the prefix: <SCODE>state_dict = {k.partition('model.')[2]: v for k,v in state_dict}\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks for the reply! Removing prefix works. Curious why the prefix is needed? It creates inconvenience when we want to resume training a single-GPU-trained-model with multi-GPU, or pass a multi-GPU trained model to inference code that only uses one GPU? Also it looks the pretrained resnet doesn’t have ‘module.’ prefix, does it mean that they were trained on single GPU?", "isAccepted": false, "likes": 1, "poster": "ming" }, { "contents": "No, they probably had the prefixes trimmed before serialization.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "<SCODE>state_dict =checkpoint['model_dict']\nstate_dict = {k.partition('model.')[2]: v for k,v in state_dict}\nmodel.load_state_dict(state_dict)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "lunasdejavu" }, { "contents": "{k.partition(‘module.’)[2]:state_dict[k] for k in state_dict.keys()}", "isAccepted": false, "likes": 3, "poster": "Lynx_Commando" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pepeportolo" } ]
false
Trainable Variable in Loss Function and Matrix Multiplication
null
[ { "contents": "Hi, could you check the example with trainable-variable? I was thinking about two cases: implementing a Linear layer without torch.nn. implementing the Loss layer with learable parameter (ex. Center Loss where mean-center is trainable parameters). Such stuff are easy in TesnotFlow, but I’m not sure if I coded it correctly in PyTorch Could you check if the idea of using PyTorch here is correct? Or I should change sth here.", "isAccepted": false, "likes": 2, "poster": "melgor" }, { "contents": "Hi! There are some issues with your first script: you need to define linear1 and linear2 as nn.Parameter instead of Variable, if not model.parameters() won’t see them just for information, nn.Linear implements it as torch.mm(input, weight.t() 61, as in torch7 BTW you might want to divide the loss by the batch size, that’s what’s the default in pytorch", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "I’ve implemented the same loss, however, the centers learned are nearly the same. I don’t know where is the problem.", "isAccepted": false, "likes": null, "poster": "waitwaitforget" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jxgu1016" }, { "contents": "A more easily implenment of center loss. <SCODE>class CenterLoss(nn.Module):\n\n def __init__(self,center_num,feature_dim):\n super(CenterLoss,self).__init__()\n self.center_num = center_num\n self.feature_dim = feature_dim\n self.center_features = nn.Parameter(torch.Tensor(self.center_num,self.feature_dim))\n nn.init.normal(self.center_features.data,mean=0,std=0.1)\n\n def forward(self,x,label):\n B = x.size()[0]\n center = torch.index_select(self.center_features,0,label)\n diff = x-center\n loss = diff.pow(2).sum() / B\n return loss\n<ECODE>", "isAccepted": false, "likes": null, "poster": "CKboss" }, { "contents": "I had the same issue. My parameters learned are the same. Did you solve it and how? Thanks!", "isAccepted": false, "likes": null, "poster": "lelegu" } ]
false
Select specific columns of each row in a torch Tensor
null
[ { "contents": "There’s probably a simple way to do this, but owing to my noobness I do not know who to do this in PyTorch. Basically, let’s say I have a torch tensor like so: m = Variable(torch.randn(4,2)) Furthermore, I have a bunch of indices, given by inds, where inds is: <SCODE>inds\nVariable containing\n 1\n 1\n 0\n 0\n [torch.LongTensor of size 4]\n<ECODE> How can I do that? Thanks!", "isAccepted": false, "likes": 10, "poster": "Kalamaya" }, { "contents": "<SCODE>m = torch.randn(4,2)\nids = torch.Tensor([1,1,0,0]).long()\nprint(m.gather(1, ids.view(-1,1)))\n<ECODE>", "isAccepted": false, "likes": 28, "poster": "fmassa" }, { "contents": "Thank you that’s exactly what I needed. Why cant we use normal slicing as in numpy?", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "Not really, this would require passing in a 2D list.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "magnus_w" }, { "contents": "A follow up question: is the columns selection in Pytorch differentiable? Say the column index is obtained though a network, and the loss is computed based on the selected columns from the matrix. Will the back propagation go through?", "isAccepted": false, "likes": 7, "poster": "Qiqin_Dai" }, { "contents": "Yes indexing is fully differentiable in PyTorch! Obviously this only back-propagates into the indexed elements with the other elements receiving zero gradient. This is necessary with some techniques such as Policy Gradients however.", "isAccepted": false, "likes": 2, "poster": "Jan_Konopka" }, { "contents": "How would you do the same when assigning to an existing tensor? Example: <SCODE>tensor.gather(-1, indices) = new_values_tensor\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "Enamex" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "usman095" } ]
false
Convert CUDA variable to numpy
null
[ { "contents": "How to convert cuda variables to numpy?", "isAccepted": false, "likes": 2, "poster": "pronics2004" }, { "contents": "<SCODE>cuda_tensor = torch.rand(5).cuda()\nnp_array = cuda_tensor.cpu().numpy()\n<ECODE>", "isAccepted": false, "likes": 15, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 10, "poster": "apaszke" }, { "contents": "<SCODE>import cv2\nimg=cv2.imread('xx.jpg')\nimg=cv2.resize(img,(32,32))\nimg = img.transpose((2,0,1))\nimg=np.expand_dims(img,axis=0)\nimg=img/255.0\nimg=torch.FloatTensor(img)\nimg=Variable(img)\nimg = img.cuda()\n\n output=net(img)\n\nprint(output)\nprint('---------------------------------------------------')\n\nnp_array = output.cpu()\nprint (np_array)\nprint('---------------------------------------------------')\n\nnp_array = np_array.numpy()\nprint (np_array)\nprint('---------------------------------------------------')\n<ECODE> <SCODE>np_array = np_array.numpy()\n\n\nAttributeError: numpy<ECODE>", "isAccepted": false, "likes": null, "poster": "Adel_Saleh1" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "spruceb" }, { "contents": "Thanks … fixed that", "isAccepted": false, "likes": null, "poster": "Adel_Saleh1" }, { "contents": "Hello, what’s wrong with my model <SCODE>m=torch.nn.Softmax()\nmodel.eval()\npreds = model(image)\ntemps=preds.cpu()\nprob=torch.max(m(temps)*100)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ahmedmazari" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pvskand" }, { "contents": "Is there a reason that the numpy() call doesn’t check if the tensor is on the gpu and simply do .cpu().numpy()?", "isAccepted": false, "likes": null, "poster": "dhpollack" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "smth" }, { "contents": "In my case I also had to use .detach() <SCODE>\noutput.cpu().detach().numpy()\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "Mona_Jalal" }, { "contents": "<SCODE>loss_semi_adv_value += loss_semi_adv.data.cpu().numpy()[0]/args.lambda_semi_adv\nIndexError: too many indices for array\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Golnoush123" } ]
false
Initialize a new network from a sub-network
null
[ { "contents": "What is the best way to initialize a new network from a sub-network of larger network (saved using torch.save(net.state_dict())?", "isAccepted": false, "likes": null, "poster": "pronics2004" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" } ]
false
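One common pattern for this, shown here as a sketch rather than a definitive recipe, is to copy over whichever saved parameters match the new model by name and shape (the checkpoint filename and the placeholder architecture are hypothetical):
<SCODE>
import torch
import torch.nn as nn

# Placeholder for the new, smaller network; substitute the real architecture here.
new_model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))

saved_state = torch.load("big_model.pth", map_location="cpu")  # hypothetical checkpoint

own_state = new_model.state_dict()
# Keep only entries that exist in the new model with matching shapes.
filtered = {k: v for k, v in saved_state.items()
            if k in own_state and v.shape == own_state[k].shape}

own_state.update(filtered)
new_model.load_state_dict(own_state)
print("initialized %d of %d tensors from the checkpoint" % (len(filtered), len(own_state)))
<ECODE>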
CUDA memory continuously increases when net(images) is called in every iteration
null
[ { "contents": "", "isAccepted": false, "likes": 4, "poster": "Kalamaya" }, { "contents": "This is because pytorch will build a the graph again and again, and all the intermediate states will be stored.", "isAccepted": false, "likes": 9, "poster": "ruotianluo" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "I think that did it! Ugh! I spent two days on this! >< Thanks though… so what was going on here?", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "<SCODE>for i in xrange(100):\n out = net(input) \n loss = someLossFunction(out)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "The graph won’t be re-made. The 100 loops basically create 100 graphs. The 100 graphs share the same input and parameters, but all the intermediate variables of 100 graphs (although they could be the same) are saved separately.", "isAccepted": false, "likes": 1, "poster": "ruotianluo" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Kalamaya" }, { "contents": "<SCODE>for i in xrange(100):\n out = net(input)\n loss += someLossFunction(input) # BAD, because it keeps continuing the graph over the for-loop\n\n loss = someLossFunction(input) # this is fine\n\n loss = someLossFunction(input)\n total_loss += loss.data[0] # this is fine\n<ECODE>", "isAccepted": false, "likes": 12, "poster": "smth" }, { "contents": "Im just asking for the sake of understanding: Is what is happening, let me put it differently: Are you saying that this statement here, will make two graphs that are identical to each other? loss = someLossFunction(input1) + someLossFunction(input2) Is my conclusion correct?", "isAccepted": false, "likes": 1, "poster": "Kalamaya" }, { "contents": "yes, your conclusion is correct.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "To clarify for RNN users that might come across this, we do actually want to keep the graph around for backprop-through-time. (Right? Or is there a better way?)", "isAccepted": false, "likes": null, "poster": "spro" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "This should be written in the tutorial!", "isAccepted": false, "likes": 2, "poster": "heihei" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "hmishfaq" }, { "contents": "hi, can the total_loss here be back propagated?", "isAccepted": false, "likes": 3, "poster": "11112" }, { "contents": "this solved my problem", "isAccepted": false, "likes": null, "poster": "Felix_Lessange" }, { "contents": "This solved my problem too. I didn’t know call loss instead of loss.data[0] will cause rewriting the graph.", "isAccepted": false, "likes": 2, "poster": "ddeng" }, { "contents": "This is really important. Everyone learning PyTorch should know this at the first place.", "isAccepted": false, "likes": null, "poster": "zhaoethz" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Albert_Christianto" } ]
false
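On current PyTorch the loss.data[0] idiom from the thread is written loss.item(); the point is the same either way: accumulate a plain Python number, not the graph-carrying tensor. A sketch:
<SCODE>
import torch
import torch.nn as nn

net = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

total_loss = 0.0
for _ in range(100):
    x, y = torch.randn(16, 10), torch.randn(16, 1)
    optimizer.zero_grad()
    loss = criterion(net(x), y)
    loss.backward()
    optimizer.step()
    total_loss += loss.item()   # a Python float: no graph is kept across iterations

print(total_loss / 100)
<ECODE>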
Elegant way to transpose a variable
null
[ { "contents": "For example, A Tensor with shape [2, 3, 4] -> [4, 2, 3] In numpy, we could directly use np.transpose(Tensor, [2, 0, 1]). However, in pytorch, I could not find a elegant way to do it. Thanks for your help.", "isAccepted": false, "likes": null, "poster": "meijieru" }, { "contents": "torch.Tensor.permute…", "isAccepted": false, "likes": 2, "poster": "ruotianluo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "meijieru" } ]
false
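For completeness, permute mirrors np.transpose for arbitrary dimension orders (torch.transpose only swaps two dimensions). The result is a strided view, so a .contiguous() call may be needed before .view(); a short sketch:
<SCODE>
import torch

x = torch.randn(2, 3, 4)
y = x.permute(2, 0, 1)           # shape becomes (4, 2, 3)
print(y.shape)

# permute returns a non-contiguous view; make it contiguous before reshaping with .view()
z = y.contiguous().view(4, -1)
print(z.shape)                    # torch.Size([4, 6])
<ECODE>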
Convert int into one-hot format
null
[ { "contents": "Hi all. <SCODE>for batch_idx, (x, y) in enumerate(train_loader):\n y_onehot = y.numpy()\n y_onehot = (np.arange(num_labels) == y_onehot[:,None]).astype(np.float32)\n y_onehot = torch.from_numpy(y_onehot)\n<ECODE> However, I notice that the it gets slower each iteration, and I doubt it’s these code which might request new memory each iteration that makes the code slower. Thanks!", "isAccepted": false, "likes": 8, "poster": "Response777" }, { "contents": "", "isAccepted": false, "likes": 6, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Response777" }, { "contents": "Hi, <SCODE>import torch\n\nbatch_size = 5\nnb_digits = 10\n# Dummy input that HAS to be 2D for the scatter (you can use view(-1,1) if needed)\ny = torch.LongTensor(batch_size,1).random_() % nb_digits\n# One hot encoding buffer that you create out of the loop and just keep reusing\ny_onehot = torch.FloatTensor(batch_size, nb_digits)\n\n# In your for loop\ny_onehot.zero_()\ny_onehot.scatter_(1, y, 1)\n\nprint(y)\nprint(y_onehot)\n<ECODE>", "isAccepted": false, "likes": 56, "poster": "albanD" }, { "contents": "Thanks, that is exactly what I need!", "isAccepted": false, "likes": 4, "poster": "Response777" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Nadav_Bhonker" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Response777" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "And also for future readers just to reiterate what user moskomule says- cross entropy and neg. log-likelihood losses in pytorch do NOT require one-hot encodings, so you can just use the normal target vector.", "isAccepted": false, "likes": 2, "poster": "ncullen93" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "How about overriding the default nn.Embedding weights data with torch.eye ? <SCODE>emb = nn.Embedding(10, 10) \nemb.weight.data = torch.eye(10)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "rajarsheem" }, { "contents": "Additionaly, zero + scatter a few ones will be much faster than copying whole rows, of which most values are 0 anyway.", "isAccepted": false, "likes": 10, "poster": "apaszke" }, { "contents": "<SCODE>batch_size=10\ny = torch.LongTensor(batch_size,5,5).random_() % 3#3 classes,5x5 img\ny_onehot = torch.FloatTensor(batch_size,3, 5,5)#I want the one hot going through the chans dim\ny_onehot.zero_()\nones=torch.ones(y.size())\n\ny_onehot.scatter_(1,y,ones)\n<ECODE> Could you help me with this? Thanks!", "isAccepted": false, "likes": null, "poster": "rogetrullo" }, { "contents": "oh never mind, I just found that it works if I add a singleton dimension so that y and y_onehot have the same NUMBER of dimensions…", "isAccepted": false, "likes": 1, "poster": "rogetrullo" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "zeng" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "Sorry, I write wrong, it is also not available for cuda.LongTensor.", "isAccepted": false, "likes": null, "poster": "zeng" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "Thank you, my problem. I ignored the y_onehot type is not a cuda tensor.", "isAccepted": false, "likes": null, "poster": "zeng" }, { "contents": "Thank you! I got my cvae implemented with this tip.", "isAccepted": false, "likes": null, "poster": "kakrafoon" } ]
false
Gradient computation with index_select (for recursive neural networks)
null
[ { "contents": "Hi all, My cell looks something like this: <SCODE>import torch\nimport torch.nn.functional as F\nimport torch.nn as nn\n\nclass ReNNCell(nn.Module):\ndef __init__(self, dim):\n super(ReNNCell, self).__init__()\n self.dim = dim\n self.W = nn.Linear(dim*2, dim)\n self.W_score = nn.Linear(dim, 1)\n\ndef forward(self, inputs):\n assert(inputs.size()[0] == 1), 'we expect batch size = 1'\n rep = F.relu(self.W(inputs))\n score = F.relu(self.W_score(rep))\n return rep, score\n<ECODE> <SCODE>tensors = [Variable(torch.randn(1,3)), Variable(torch.randn(1,3)), Variable(torch.randn(1,3)), Variable(torch.randn(1,3))]\n\ncats = []\nfor i in range(1,4,2):\n cats.append(torch.cat([tensors[i-1], tensors[i]],1))\n\ncell = ReNNCell(3,2)\noutputs = [cell(c) for c in cats]\nreps = [o[0] for o in outputs]\nscores = [o[1] for o in outputs]\nscores = torch.cat(scores, 1)\nreps = torch.cat(reps, 0)\nmax_score, max_index = torch.max(scores, 1)\nmax_index = torch.squeeze(max_index, 1)\nmax_score = torch.squeeze(max_score, 1)\n<ECODE> Based on this selection I will compute some dummy loss. <SCODE>crit = torch.nn.MSELoss()\nloss = crit(reps.index_select(0,max_index), Variable(torch.ones(1,3)))\n<ECODE> <SCODE>RuntimeError: could not compute gradients for some functions (Threshold, Threshold)\n<ECODE> Could anybody point me in the right direction? Is this a bug or do I perhaps have to somehow compute the gradients myself?", "isAccepted": false, "likes": null, "poster": "elPistolero" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "albanD" }, { "contents": "Yes it does! So index_select somehow discards the Variable state (or is it squeeze)?", "isAccepted": false, "likes": null, "poster": "elPistolero" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "albanD" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "elPistolero" } ]
false