Columns: title (string, 15-126 chars), category (string, 3 classes), posts (list of {contents, isAccepted, likes, poster}), answered (bool, 2 classes)
About the Site Feedback category
Site Feedback
[ { "contents": "Discussion about this site, its organization, how it works, and how we can improve it.", "isAccepted": false, "likes": null, "poster": "system" }, { "contents": "Is it possible to have more categories other than those specified? Similar to tags on stackoverflow where people can create their own tags that then appear as suggestions.", "isAccepted": false, "likes": null, "poster": "shaun" }, { "contents": "For instant, a question of the first kind would be something like: “How could one change the learning rate of of the optimizer as a function of epoch number?” A question of the second kind would be something like: “How would you chose architecture for a network for CIFAR10 which is small, namely less than 100,000 parameters?”", "isAccepted": false, "likes": null, "poster": "Royi" } ]
false
Pytorch is awesome!
null
[ { "contents": "Can’t wait for the open source release!", "isAccepted": false, "likes": 7, "poster": "ebetica" } ]
false
About the vision category
vision
[ { "contents": "Topics related to either pytorch/vision or vision research related topics", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "I created my model from scratch . I tried with batch_size 32 , it threw me this error. I tried with batch_size 1 , I got 320% accuracy (something wrong !). The error went away. so there is some dimensional mismatch some where, I am lost! thanks a million !!! for your help and advise", "isAccepted": false, "likes": null, "poster": "bunnisiva" }, { "contents": "can we see your Model ?", "isAccepted": false, "likes": null, "poster": "JoBot_CoBot" } ]
false
About the reinforcement-learning category
reinforcement-learning
[ { "contents": "A section to discuss RL implementations, research, problems", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "lakehanne" }, { "contents": "I had the same problem, it only reached at 199 at my env, then back and force…", "isAccepted": false, "likes": null, "poster": "elynn" }, { "contents": "I got it work the first few times I ran it, but later without any changes the same situation happened as you mentioned. Very weird. Thought it’s using the same random seed thought out.", "isAccepted": false, "likes": null, "poster": "howard50b" }, { "contents": "Dear all, One interesting thing that I am observing is that, after a few training iterations (one match is enough), my Q-network starts outputting zeros regardless of the input state! Initially, I thought there was a bug in my code, but now I think that it somehow makes sense. In Pong, the obtained reward is almost always zero (except in the frames where we score or concede a goal) and the Bellman equation is: Q(s,a) = reward + GAMMA * max_a’ (Q(s’,a’)) so, every time we get a zero reward, the Bellman equation is easily satisfied if Q(s,a) = max_a’ (Q(s’,a’)) = 0. That’s why I think my Q-network is basically learning to output zeros regardless of the input… Any hints on how I can overcome this issue? I am following the exact same methodology as in DeepMind’s paper, including the network architecture and the preprocessing.", "isAccepted": false, "likes": null, "poster": "dpernes" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mathematics" }, { "contents": "Hi everyone, If any of you could help to clarify this uncertainty, it would be much appreciated. Thank you.", "isAccepted": false, "likes": null, "poster": "cruzas" } ]
false
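The Bellman backup dpernes writes out can be computed directly. A minimal sketch of the target computation with terminal masking (the tensor names are hypothetical; the Q-networks and replay buffer are assumed to exist elsewhere):

<SCODE>
import torch

GAMMA = 0.99

def dqn_target(reward, q_next, done, gamma=GAMMA):
    # Bellman target: r + gamma * max_a' Q(s', a'); the bootstrap term is
    # zeroed at episode ends so scoring frames still carry signal.
    return reward + gamma * q_next.max(1)[0] * (1 - done)
<ECODE>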
Request: Tutorial for deploying on cloud based virtual machine
null
[ { "contents": "I don’t have a GPU computer. Is there a tutorial or best practices for using PyTorch on a cloud-based virtual machine with GPU capabilities?", "isAccepted": false, "likes": 1, "poster": "rcmckee" }, { "contents": "PyTorch does not NEED GPUs to function. It works great on CPUs as well. That said, if you want to use a cloud based VM with GPUs, checkout Amazon EC2, Nimbix or Azure which all provide decent GPU instances.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "I just installed PyTorch on AWS g2.2xlarge machine with the default Ubuntu AWS image (ami-e13739f6). Here’s what I did: <SCODE>sudo apt-get update\nsudo apt-get install -y gcc make python3-pip linux-image-extra-`uname -r` linux-headers-`uname -r` linux-image-`uname -r`\nwget http://us.download.nvidia.com/XFree86/Linux-x86_64/375.39/NVIDIA-Linux-x86_64-375.39.run\nchmod 755 NVIDIA-Linux-x86_64-375.39.run\nsudo ./NVIDIA-Linux-x86_64-375.39.run -a\nsudo nvidia-smi -pm 1 # enable persistence mode for faster CUDA start-up\n<ECODE> And then install NumPy and PyTorch <SCODE>pip3 install numpy ipython\npip3 install https://download.pytorch.org/whl/cu75/torch-0.1.10.post2-cp35-cp35m-linux_x86_64.whl \npip3 install torchvision\n<ECODE> Now PyTorch works with CUDA <SCODE>ipython3\n>>> import torch\n>>> torch.randn(5, 5).cuda()\n 0.8154 0.9884 -0.7032 0.8225 0.5738\n-1.0872 1.0991 0.5105 -1.2160 0.3384\n-0.0405 0.2946 0.3753 -1.9461 0.0952\n 1.6247 -0.8727 -0.6441 -0.8109 1.7622\n 1.2141 1.3939 -1.2827 -0.3837 -0.0731\n[torch.cuda.FloatTensor of size 5x5 (GPU 0)]\n<ECODE>", "isAccepted": false, "likes": 16, "poster": "colesbury" }, { "contents": "Thank you! I especially like your warning about the error message! Very helpful!", "isAccepted": false, "likes": null, "poster": "rcmckee" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "mjdietzx" }, { "contents": "Thanks, it is great help!", "isAccepted": false, "likes": 1, "poster": "QianXuX" } ]
false
Convert/import Torch model to PyTorch
null
[ { "contents": "Hi, Great library! I’d like to ask if it is possible to import a trained Torch model to PyTorch… Thanks", "isAccepted": false, "likes": 4, "poster": "miliadis" }, { "contents": "As of today, you can deserialize Lua’s .t7 files into PyTorch containing Tensors, numbers, tables, nn models (no nngraph), strings. Here’s an example of saving a tensor in Torch and loading it back in PyTorch <SCODE>\nth> a = torch.randn(10)\n [0.0027s]\nth> torch.save('a.t7', a)\n [0.0010s]\nth> a\n-1.4479\n 1.3707\n 0.5663\n-1.0590\n 0.0706\n-1.6495\n-1.0805\n 0.8277\n-0.4595\n 0.1237\n[torch.DoubleTensor of size 10]\n\n [0.0033s]\n<ECODE> <SCODE>In [1]: import torch\n\nIn [2]: from torch.utils.serialization import load_lua\n\nIn [3]: a = load_lua('a.t7')\n\nIn [4]: a\nOut[4]:\n\n-1.4479\n 1.3707\n 0.5663\n-1.0590\n 0.0706\n-1.6495\n-1.0805\n 0.8277\n-0.4595\n 0.1237\n[torch.DoubleTensor of size 10]\n<ECODE> Here’s an example of loading a 2 layer sequential neural network: <SCODE>th> a = nn.Sequential():add(nn.Linear(10, 20)):add(nn.ReLU())\n [0.0001s]\nth> a\nnn.Sequential {\n [input -> (1) -> (2) -> output]\n (1): nn.Linear(10 -> 20)\n (2): nn.ReLU\n}\n [0.0001s]\nth> torch.save('a.t7', a)\n [0.0008s]\nth>\n<ECODE> <SCODE>In [5]: a = load_lua('a.t7')\n\nIn [6]: a\nOut[6]:\nnn.Sequential {\n [input -> (0) -> (1) -> output]\n (0): nn.Linear(10 -> 20)\n (1): nn.ReLU\n}\n\nIn [7]: a.__class__\nOut[7]: torch.legacy.nn.Sequential.Sequential\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "smth" }, { "contents": "Hi, Is there a simple way to convert a torch.legacy.nn module into a torch.nn module ?", "isAccepted": false, "likes": 5, "poster": "alexis-jacq" }, { "contents": "No, unfortunately we don’t have an automatic converter at the moment.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "But it should be quite simple to add.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "<SCODE>>>> n=load_lua('/Users/eugenioculurciello/Dropbox/shared/models/enet128-demo-46/model.net')\n>>> n.forward(torch.FloatTensor(1,3,128,128))\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/Module.py\", line 32, in forward\n return self.updateOutput(input)\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/Sequential.py\", line 35, in updateOutput\n currentOutput = module.updateOutput(currentOutput)\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/ConcatTable.py\", line 12, in updateOutput\n self.output = [module.updateOutput(input) for module in self.modules]\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/ConcatTable.py\", line 12, in <listcomp>\n self.output = [module.updateOutput(input) for module in self.modules]\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/SpatialMaxPooling.py\", line 33, in updateOutput\n if self.indices is None:\nAttributeError: 'SpatialMaxPooling' object has no attribute 'indices'\n<ECODE> also on an AlexNet: <SCODE>>>> n=load_lua('/Users/eugenioculurciello/Dropbox/shared/models/elab-alexowt-46/model.net')\n>>> n\nnn.Sequential {\n [input -> (0) -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> (15) -> (16) -> (17) -> (18) -> (19) -> output]\n (0): nn.SpatialConvolution(3 -> 64, 11x11, 4, 4, 2, 2)\n (1): nn.ReLU\n (2): nn.SpatialMaxPooling(3x3, 2, 2)\n (3): nn.SpatialConvolution(64 -> 192, 5x5, 1, 1, 2, 2)\n (4): nn.ReLU\n (5): 
nn.SpatialMaxPooling(3x3, 2, 2)\n (6): nn.SpatialConvolution(192 -> 384, 3x3, 1, 1, 1, 1)\n (7): nn.ReLU\n (8): nn.SpatialConvolution(384 -> 256, 3x3, 1, 1, 1, 1)\n (9): nn.ReLU\n (10): nn.SpatialConvolution(256 -> 256, 3x3, 1, 1, 1, 1)\n (11): nn.ReLU\n (12): nn.SpatialMaxPooling(3x3, 2, 2)\n (13): nn.View(9216)\n (14): nn.Linear(9216 -> 4096)\n (15): nn.ReLU\n (16): nn.Linear(4096 -> 4096)\n (17): nn.ReLU\n (18): nn.Linear(4096 -> 46)\n (19): nn.SoftMax\n}\n>>> n.forward(torch.FloatTensor(1,3,224,224))\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/Module.py\", line 32, in forward\n return self.updateOutput(input)\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/Sequential.py\", line 35, in updateOutput\n currentOutput = module.updateOutput(currentOutput)\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/Linear.py\", line 43, in updateOutput\n assert input.dim() == 2\nAssertionError\n>>> n.forward(torch.FloatTensor(3,224,224))\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/Module.py\", line 32, in forward\n return self.updateOutput(input)\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/Sequential.py\", line 35, in updateOutput\n currentOutput = module.updateOutput(currentOutput)\n File \"/usr/local/lib/python3.6/site-packages/torch/legacy/nn/Linear.py\", line 43, in updateOutput\n assert input.dim() == 2\nAssertionError\n<ECODE> Can you explain how to run old networks here? Or how to go about doing that?", "isAccepted": false, "likes": null, "poster": "Eugenio_Culurciello" }, { "contents": "For Alexnet just do: (instead of nn.View(9216)) For enet it’s trickier. There are several errors in the pytorch legacy code. Then you have again the View issue at the end. After all these changes, it will run.", "isAccepted": false, "likes": 2, "poster": "mvitez" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Sure, I will. This forum wants 20 characters to answer.", "isAccepted": false, "likes": 2, "poster": "mvitez" }, { "contents": "Modify your Torch according to this PR <SCODE>function patch(m)\n if torch.type(m) == 'nn.Padding' and m.nInputDim == 3 then\n m.dim = m.dim+1\n m.nInputDim = 4\n end\n if torch.type(m) == 'nn.View' and #m.size == 1 then\n newsize = torch.LongStorage(2)\n newsize[1] = 1\n newsize[2] = m.size[1]\n m.size = newsize\n end\n if m.modules then\n for i =1,#m.modules do\n patch(m.modules[i])\n end\n end\nend\n\nrequire 'nn'\nnet = torch.load('model.net')\npatch(net)\ntorch.save('model2.net',net)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "mvitez" }, { "contents": "Can ‘load_lua’ works for model with cudnn layers ? I am getting some serialization issues in read_lua_file.py. I had to use cudnn.convert(model,nn) and load_lua.", "isAccepted": false, "likes": 1, "poster": "HarshaVardhanP" }, { "contents": "No, it doesn’t support cudnn layers at the moment.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "clcarwin" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pronics2004" }, { "contents": "I think the serialization of GPU models might not be implemented. 
A quick workaround would be to load the checkpoint in Lua, cast it to CPU float, and try loading in PyTorch then.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "When using load_lua, I got an error AttributeError: type object ‘FloatStorage’ has no attribute ‘from_buffer’, how can I solve this problem?", "isAccepted": false, "likes": 1, "poster": "liuhantang" }, { "contents": "Is there any chance your model contains CUDA tensors?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I got the same problem. Any solutions?", "isAccepted": false, "likes": null, "poster": "melody-rain" }, { "contents": "Yes, do you have any CUDA tensors in your model?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks, that solves my problem. Previously I used cudnn.convert(model, nn), but did not use model:float(), so there was still cuda tensor.", "isAccepted": false, "likes": 1, "poster": "liuhantang" } ]
false
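Putting apaszke's workaround together: cast the model to CPU float in Lua first (net:float()), save it, then load it through the legacy deserializer. A minimal sketch (the file name is hypothetical):

<SCODE>
import torch
from torch.utils.serialization import load_lua

net = load_lua('model_float.t7')  # returns a torch.legacy.nn module
net.evaluate()                    # legacy counterpart of eval()
out = net.forward(torch.FloatTensor(1, 3, 224, 224))
<ECODE>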
Roadmap for torch and pytorch
null
[ { "contents": "why did you choose to create this interface for python now, it did not seem a priority for the community (however I understand this will probably increase the adoption rate) ? as a torch user, I invested some time to learn how lua and torch operate, I guess it will operate the same way in the near future, can you give us some insights about this ? again as a torch user, would you see some big pros to move from torch to pytorch, without sacrifying performance for example ? maybe sharing your vision of how pytorch will be used at Facebook or Twitter ?", "isAccepted": false, "likes": 1, "poster": "trypag" }, { "contents": "hi pierre, Continue to develop things in whichever frontend you want (Torch or PyTorch) and you can be assured that the Lua side of things will be maintained. we have done some thorough benchmarks and found this to be not the case, especially for the Torch ecosystem where we cross the C boundaries all the time (LuaJIT traces stop everytime you cross the C boundary). recurrent nets, weight sharing and memory usage will be big positives with PyTorch compared to Torch, while retaining the flexibility of interfacing with C and the current speed of Torch.", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "Hi all, Also, I’ve tried to find a roadmap for the future (2017+) of any of these implementations on top of the torch computing framework and got not much. I do see some of the ideas proposed in the previous roadmap for torch coming to life (like tutorials, standardized datasets, a nice forum to hang out), so I must point out to the big elephant in the room: which one will take the focus on now? Pytorch or Torch7? Don’t get me wrong, I much prefer python’s libraries because they are more standardized and mature than Lua’s, and Lua lacks many of the key functionality for some of these tasks (scipy, matplotlib, etc.). But this wasn’t that big of a deal due to some libraries like “fb.python” that allowed the use of some functionalities from python to be used with Lua, but I realise that python is a better choice for a research/developing platform compared to Lua. Another thing I would like to clarify from the devs is this: what is the big advantage of pytorch regarding tensorflow? Or shall I say, what makes pytorch stand out of the rest of the “competition”?", "isAccepted": false, "likes": 2, "poster": "farrajota" }, { "contents": "Hi, could you further elaborate on “memory usage will be big positives with PyTorch compared to Torch”, any benchmark/example scenario?", "isAccepted": false, "likes": null, "poster": "rouniuyizu" }, { "contents": "Hi, You say retaining the current speed of Torch. From my takeaway of minpy, the dynamic definition of the graph in python would hurt speed, is that also true in pytorch. And when would you do start working on benchmarks.", "isAccepted": false, "likes": null, "poster": "ruotianluo" }, { "contents": "There is a slight impact on the perf (within 5% in most cases), but we know of a couple things that can speed things up further. Remember it’s still beta.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "You can try running the examples. We’re usually seeing 30-50% memory usage improvements. Benchmarks coming soon.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "That’s an old README, it has changed a bit since then.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
But what about Hugh Perkins?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Jeremy_Godenz" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
CFFI for interfacing C functions
null
[ { "contents": "There are many benefits of CFFI, just to mention few: Abstract the Python version (CPython2, CPython3, PyPy). Better control over when and why the C compilation occurs, and more standard ways to write setuptools-based setup.py 3 files. Keep all the Python-related logic in Python so that you don’t need to write much C code.", "isAccepted": false, "likes": null, "poster": "robert.zaremba" }, { "contents": "Are you asking about our library (TH, THC, etc.) wrappers or about the extensions?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "robert.zaremba" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jdily" } ]
false
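For readers unfamiliar with the ABI-level usage robert.zaremba is advocating, here is a self-contained cffi example, independent of PyTorch's own wrappers:

<SCODE>
from cffi import FFI

ffi = FFI()
ffi.cdef('int printf(const char *format, ...);')  # declare the C signature
C = ffi.dlopen(None)                              # open the C standard library
C.printf(b'hello from C via cffi\n')
<ECODE>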
Import/Export models
null
[ { "contents": "Meanwhile, do you have already any internal data structure to read the models definition?", "isAccepted": false, "likes": 2, "poster": "edgarriba" }, { "contents": "We’re open to discussion about formats for sharing parameters and model definitions between libraries. If there’s going to be a larger initiative with some of the frameworks adopting a new format, we’ll likely join and try to help. Right now our serialization is very torch specific and we didn’t design it having in mind interoperability with other software. If you need to export model weights I encourage using state_dict and some more common formats like HDF5 or protobuf.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "is there a suggested new format to export in pytorch for use in other framework and custom compilers?", "isAccepted": false, "likes": null, "poster": "Eugenio_Culurciello" } ]
false
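A minimal sketch of the HDF5 route suggested above (requires h5py; the file name is arbitrary):

<SCODE>
import h5py
from torchvision import models

model = models.alexnet(pretrained=True)

# One HDF5 dataset per parameter, keyed by its state_dict name.
with h5py.File('alexnet_weights.h5', 'w') as f:
    for name, tensor in model.state_dict().items():
        f.create_dataset(name, data=tensor.cpu().numpy())
<ECODE>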
Is new serialization format documented?
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "jsenellart-systran" }, { "contents": "I can write up the specs, but they’re much more complicated now. Serialized files are actually tars containing 4 other files - one listing tensors, one listing storages, one with storage data and one with system info (long size, endianness, etc.). I think for this use case it would by much simpler to take advantage of some more standardized format, with a C library supporting loading from it (such as HDF5 or protobuf). Do you think that would be ok?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "edgarriba" }, { "contents": "Thanks for your answer. yes I think this would be great - but cannot you use torch native serialization since you have them at hand in the TH* libraries - the 4 containers you are talking about could be simply put in a lua table - and this would avoid dependencies with other libraries? I understand that the pytorch objects (variables) would not be compatible with lua modules objects but at least would be readable.", "isAccepted": false, "likes": null, "poster": "jsenellart-systran" }, { "contents": "No we can’t. These are not continers, these four are binary blobs. In Lua torch serialization has been written from scratch, in PyTorch we depend on pickle to save the objects, and while it allows us to have very robust serialization that conforms to python standards, it gives us less control, and is unreadable by Lua. I don’t think it’s worth changing that. It’s a complex matter, and conforming even to some of the old Lua standards would hurt usability for all Python users. HDF5 is quite widespread and will provide you with a way to save weights in Python and load them in Lua.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jsenellart-systran" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I think in the future we’ll also have to figure out some kind of format that allows for exporting graphs. This will be needed for Caffe2 and TF model exports.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Let me rephrase that to be sure to correctly understand :).", "isAccepted": false, "likes": null, "poster": "jsenellart-systran" }, { "contents": "Yes, because in Lua torch nngraph objects are the same all the time. PyTorch builds the graph dynamically, so every iteration uses fresh Python objects to represent the graph, and the recipe for the graph construction is the model’s code.", "isAccepted": false, "likes": 2, "poster": "apaszke" } ]
false
Does it support Multi-GPU card on a single node?
null
[ { "contents": "Hi, Now I am using keras with a simplified interface of data parallisim sync version of parallelism. Now I wish to know if pytorch has similar function, seems like PyTorch is trying to advocate in “Speed”, so it had better support Multi-GPU just for a single node.", "isAccepted": false, "likes": 1, "poster": "Pab_Peter" }, { "contents": "Hi Shawn, Yes we support multi-GPU on a single machine. Check out our examples: Also check out corresponding documentation:", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "rasbt" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Hmm, have you figured out how to use DataParallel? I cant for the life of me get it to work! :-/", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "The codes are written as follows: <SCODE>model = models.resnet18(pretrained=True) \nmodel =torch.nn.DataParallel(model).cuda()\nx=x.cuda(async=True)# there is no difference no matter whether we include async=True or not\nyt=yt.cuda(async=True)#\noutput = model(x)\n<ECODE> When using two GPUs, the output is recorded as follows: <SCODE>+------------------------------------------------------+\n| NVIDIA-SMI 352.79 Driver Version: 352.79 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 Tesla M40 Off | 0000:06:00.0 Off | 0 |\n| 0% 56C P0 74W / 250W | 2440MiB / 11519MiB | 99% Default |\n+-------------------------------+----------------------+----------------------+\n| 1 Tesla M40 Off | 0000:87:00.0 Off | 0 |\n| 0% 37C P0 87W / 250W | 1854MiB / 11519MiB | 97% Default |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| 0 16788 C python 1874MiB |\n| 0 56331 C python 298MiB |\n| 0 58531 C python 207MiB |\n| 1 16788 C python 1797MiB |\n+-----------------------------------------------------------------------------+\n<ECODE> When using one GPU, the output is recorded as follows: <SCODE>+------------------------------------------------------+\n| NVIDIA-SMI 352.79 Driver Version: 352.79 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. 
|\n|===============================+======================+======================|\n| 0 Tesla M40 Off | 0000:06:00.0 Off | 0 |\n| 0% 71C P0 233W / 250W | 3878MiB / 11519MiB | 99% Default |\n+-------------------------------+----------------------+----------------------+\n| 1 Tesla M40 Off | 0000:87:00.0 Off | 0 |\n| 0% 26C P8 18W / 250W | 55MiB / 11519MiB | 0% Default |\n+-------------------------------+----------------------+----------------------+\n\n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| 0 33037 C python 3312MiB |\n| 0 56331 C python 298MiB |\n| 0 58531 C python 207MiB |\n+-----------------------------------------------------------------------------+\n<ECODE> How can we improve the efficiency using two GPUs?", "isAccepted": false, "likes": null, "poster": "phenixcx" }, { "contents": "I don’t know what code are you using to benchmark that, but the numbers seem quite off. Multi-GPU on 2 GPUs should be pretty much the same as with Lua Torch right now (which is fast).", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "phenixcx" }, { "contents": "If you have very small batches or a model that can’t even ully utilize a single GPU, using many GPUs will only add communication overhead, without benefits.", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "phenixcx" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "wangg12" }, { "contents": "Can you add some comments to these examples?", "isAccepted": false, "likes": null, "poster": "mfa" } ]
false
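A minimal DataParallel sketch matching the answers above; per apaszke's remark, the per-step batch (64 here, a made-up number) has to be large enough to amortize the inter-GPU communication:

<SCODE>
import torch
import torch.nn as nn
from torch.autograd import Variable
from torchvision import models

model = models.resnet18()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # splits each input batch along dim 0
model.cuda()

x = Variable(torch.randn(64, 3, 224, 224).cuda())
out = model(x)  # per-GPU outputs are gathered back on GPU 0
<ECODE>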
Import nn module written in Lua?
null
[ { "contents": "Thanks", "isAccepted": false, "likes": null, "poster": "zzz" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Eugenio_Culurciello" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
2017 and we still can’t support AMD hardware, why?
null
[ { "contents": "Not all of us own NV hardware. Some of us want to use these tools at home without having to make a new purchase or purchase some AWS time. Not all of us are research scientists with fat grants allowing us to buy the latest NV hardware. When will the vendor lock-in stop? Surely “free” tools should be open to all.", "isAccepted": false, "likes": 2, "poster": "yaxattax" }, { "contents": "Lots of work need to be done to reach Nvidia’s performance. Cudnn is a huge step for performance that AMD does not have yet. There was OpenCL support for torch, but pragmatically everyone just turned to CUDA devices. If someone is willing to make a cudnn equivalent maybe it would change.", "isAccepted": false, "likes": null, "poster": "ClementPinard" }, { "contents": "Please see this thread for more details. We are waiting for HIP to be released from AMD.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Felix_Lessange" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "gvskalyan" }, { "contents": "Did you have a good experience using ROCm?", "isAccepted": false, "likes": null, "poster": "Madegomez" } ]
false
Package manager choice: conda vs. pip
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "This is because conda manages dependencies that are not just python dependencies. For example with conda, we get tighter control over which BLAS is installed.", "isAccepted": false, "likes": 3, "poster": "smth" }, { "contents": "How can I deal with the problem? Thank you beforehand.", "isAccepted": false, "likes": null, "poster": "phenixcx" }, { "contents": "It looks as if your script ran out of memory.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I guessed the problem is caused by the imcompatibility of OpenBLAS with Pytorch, because the problem is totally solved by reinstalling Pytorch with the command ‘conda install pytorch torchvision -c soumith’. It is heard that Pytorch compiled from the sources of github cannot well control OpenBLAS.", "isAccepted": false, "likes": null, "poster": "phenixcx" }, { "contents": "I haven’t heard about problems with OpenBLAS before, but I don’t think it’s really a PyTorch bug, since we’re only calling its functions. If it can’t manage its threads properly, there’s nothing we can do about it. I’d recommend using MKL.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "phenixcx" } ]
false
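If switching to the MKL build is not an option, capping CPU-side threading is a low-risk mitigation for hangs like the one described; this is a hedged workaround, not a fix for the underlying OpenBLAS behavior:

<SCODE>
import torch

torch.set_num_threads(1)        # limit intra-op CPU parallelism
print(torch.get_num_threads())
<ECODE>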
How to do backward() for a net with multiple outputs?
null
[ { "contents": "e.g., in Torch 7 I have a net with the last module an nn.ConcatTable, then I make the gradOutputs a table of tensors and do the net:backward(inputs, gradOutputs) How to do similar things with pytorch? I tried to backward() for each output, but it complained that backward should not be called multiple times?", "isAccepted": false, "likes": 2, "poster": "pengsun" }, { "contents": "It takes a list of variables and a list of gradients (one for each variable).", "isAccepted": false, "likes": 7, "poster": "colesbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pengsun" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pengsun" }, { "contents": "Thanks for the report, we’ll fix that!", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
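Concretely, colesbury's suggestion looks like this (toy variables for illustration; one gradient tensor is supplied per output):

<SCODE>
import torch
from torch.autograd import Variable

x = Variable(torch.randn(4), requires_grad=True)
out1 = x * 2
out2 = x ** 2

# A single backward pass through both heads.
torch.autograd.backward([out1, out2], [torch.ones(4), torch.ones(4)])
print(x.grad)
<ECODE>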
Why can’t I see .grad of an intermediate variable?
null
[ { "contents": "Hi! I am loving this framework… Since Im a noob, I am probably not getting something, but I am wondering why I cant get the gradient of an intermediate variable with .grad? Here is an example of what I mean: <SCODE>xx = Variable(torch.randn(1,1), requires_grad = True)\nyy = 3*xx\nzz = yy**2\nzz.backward()\nxx.grad # This is ok\nyy.grad # This gives 0! \nzz.grad # This should give 1! \n<ECODE> So I get the correct result for xx.grad, but why does yy.grad show 0, as does zz.grad? How can I get the yy.grad value in this case? Thanks!", "isAccepted": false, "likes": 21, "poster": "Kalamaya" }, { "contents": "Hi Kalamaya, By default, gradients are only retained for leaf variables. non-leaf variables’ gradients are not retained to be inspected later. This was done by design, to save memory. <SCODE>from __future__ import print_function\nfrom torch.autograd import Variable\nimport torch\n\nxx = Variable(torch.randn(1,1), requires_grad = True)\nyy = 3*xx\nzz = yy**2\n\nyy.register_hook(print)\nzz.backward()\n<ECODE> Output: <SCODE>Variable containing:\n-3.2480\n[torch.FloatTensor of size 1x1]\n<ECODE>", "isAccepted": false, "likes": 42, "poster": "smth" }, { "contents": "The only way I have been able to really extract the gradient however is via a global variable at the moment. This is because the function I pass in (apparently) only allows me to pass in one argument, and that is reserved for the yy.grad. What I mean is given here: <SCODE>yGrad = torch.zeros(1,1)\ndef extract(xVar):\n\tglobal yGrad\n\tyGrad = xVar\t\n\nxx = Variable(torch.randn(1,1), requires_grad = True)\nyy = 3*xx\nzz = yy**2\n\nyy.register_hook(extract)\n\n#### Run the backprop:\nprint (yGrad) # Shows 0.\nzz.backward()\nprint (yGrad) # Show the correct dzdy\n<ECODE> So here, I am able to extract the yy.grad, BUT, I can only do so with a global variable, which I would rather not do. Is there a simpler way? Many thanks.", "isAccepted": false, "likes": 3, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "mrdrozdov" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Kalamaya" }, { "contents": "<SCODE>grads = {}\ndef save_grad(name):\n def hook(grad):\n grads[name] = grad\n return hook\n\nx = Variable(torch.randn(1,1), requires_grad=True)\ny = 3*x\nz = y**2\n\n# In here, save_grad('y') returns a hook (a function) that keeps 'y' as name\ny.register_hook(save_grad('y'))\nz.register_hook(save_grad('z'))\nz.backward()\n\nprint(grads['y'])\nprint(grads['z'])\n<ECODE>", "isAccepted": false, "likes": 31, "poster": "apaszke" }, { "contents": "Just for my own knowledge, am I to understand that, given what I am trying to do, the only ways we have are i) global variables, and ii) closures? Thanks again.", "isAccepted": false, "likes": 1, "poster": "Kalamaya" }, { "contents": "I’d say that these are the most obvious ways, but you could probably come up with more sophisticated solutions too. As I said, the best one depends on the specific use case, and it’s hard to provide a one that fits all. I find using closures like above to be ok, others will find something else better.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks again. I will process it tonight and reply back here for my exact use case. 
Thanks again!", "isAccepted": false, "likes": 1, "poster": "Kalamaya" }, { "contents": "Aren’t the gradients of internal nodes necessary for doing backprop?", "isAccepted": false, "likes": 3, "poster": "EvanZ" }, { "contents": "Yes, they are, but as soon as they have been used and are not necessary anymore, they are freed to save memory", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "While I understand why this design decision was made, are there any plans to make it easier to save the gradients of intermediate variables? For example, it’d be nice if something like this was supported: <SCODE>from torch.autograd import Variable\nimport torch\n\nxx = Variable(torch.randn(1,1), requires_grad = True)\nyy = 3*xx\nyy.require_grad = True # <-- Override default behavior\nzz = yy**2\n\nzz.backward()\n\n# do something with yy.grad\n<ECODE> It seems like it’d be easier to let variables keep track of their own gradients rather than having to keep track of them with my own closures. Then if I want to analyze the gradients of my variables (leaf or not), I can do something like <SCODE>do_something_with_data_and_grad_of(xx)\ndo_something_with_data_and_grad_of(yy)\n<ECODE>", "isAccepted": false, "likes": 7, "poster": "vitchyr" }, { "contents": "Is it possible to get the gradients of a torch.nn.Linear module using the way you suggested or am I limited to capturing gradients by defining Variables? Would this work for convolutions or recurrent layers?", "isAccepted": false, "likes": 1, "poster": "chenjus" }, { "contents": "Is it possible to create a (torch.autograd) flag in order to save all the variable’s gradients?", "isAccepted": false, "likes": null, "poster": "miguelvr" }, { "contents": "", "isAccepted": false, "likes": 16, "poster": "yusaku" }, { "contents": "For example, how to get yy’s grad_output from zz1 part? <SCODE>xx = Variable(torch.randn(1,1), requires_grad = True)\nyy = 3*xx\nzz1 = yy**2\nzz2 = yy**2\n\nyy.register_hook(print)\n(zz1+zz2).backward()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "blackyang" }, { "contents": "Ha, I recently did exactly this. Not sure if its the best way, but I did:", "isAccepted": false, "likes": null, "poster": "SimonW" }, { "contents": "Great, thanks! That’s also what in my mind, basically we need a dummy variable. BTW, is there something like nn.Identity() in torch? I didn’t find it", "isAccepted": false, "likes": null, "poster": "blackyang" } ]
false
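Later releases also added retain_grad() for exactly this use case; it keeps a non-leaf's gradient without writing a hook (version-dependent, so the hook approach above remains the portable option):

<SCODE>
import torch
from torch.autograd import Variable

xx = Variable(torch.randn(1, 1), requires_grad=True)
yy = 3 * xx
yy.retain_grad()  # ask autograd to keep this non-leaf gradient
zz = yy ** 2
zz.backward()
print(yy.grad)    # populated, no hook needed
<ECODE>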
Input preparation
null
[ { "contents": "Hello all, For instance, the fallowing code results in error: <SCODE>(class model above)\n self.affine1(10, 100)\n(...)\n\nx = np.random.random(10)\nipt = torch.from_numpy(x)\nprobs = model(Variable(ipt))\n<ECODE> Then <SCODE>TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.DoubleTensor, torch.FloatTensor), but expected one of:\n * (torch.DoubleTensor mat1, torch.DoubleTensor mat2)\n(...)\n<ECODE> How is the proper data preparation in pytorch? Thanks in advance and looking forward to use pytorch (the performance of the cart pole in openai gym was better than with other frameworks). Obs: As I didn’t saw any other mechanic-ish question topic, nor a question flag, hope that is not off topic for the forum.", "isAccepted": false, "likes": 1, "poster": "gabrieldlm" }, { "contents": "hi there. In this particular case, thanks for your feedback. We will improve our onboarding process, by making more newbie friendly tutorials. <SCODE>ipt = torch.from_numpy(x)\nipt = ipt.float()\n<ECODE>", "isAccepted": false, "likes": 9, "poster": "smth" }, { "contents": "Please, let me know of anything you think it should be included, so that I can better plan the structure of my lessons.", "isAccepted": false, "likes": 3, "poster": "Atcold" }, { "contents": "Thanks for the reply, but after changing the tensor type to float I get the fallowing error: <SCODE>RuntimeError: matrices expected, got 1D, 2D tensors at /Users/soumith/anaconda/conda-bld/pytorch0.1.6_1484755992574/work/torch/lib/TH/generic/THTensorMath.c:857\n<ECODE> <SCODE>print ipt\n(...)\n[torch.FloatTensor of size 10]<ECODE>", "isAccepted": false, "likes": null, "poster": "gabrieldlm" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "gabrieldlm" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "henrye" } ]
false
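Both errors in this thread (the dtype mismatch, then the 1D-vs-2D input) can be avoided in one line. A sketch reusing the thread's names (model is the module defined above):

<SCODE>
import numpy as np
import torch
from torch.autograd import Variable

x = np.random.random(10)

# Cast to float32 (nn layers default to FloatTensor) and add a batch
# dimension, since nn.Linear expects input of shape (batch, features).
ipt = torch.from_numpy(x.astype(np.float32)).view(1, -1)
probs = model(Variable(ipt))
<ECODE>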
Help core dumped problem!
null
[ { "contents": "", "isAccepted": false, "likes": 1, "poster": "Chun_Li" }, { "contents": "Thank you. It seems to be specific to AMD Athlon CPU. I will take a look.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thanks for the response. BTW, I forgot mention to OS of my computer, which is Linux Ubuntu 16.10.", "isAccepted": false, "likes": null, "poster": "Chun_Li" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Chun_Li" }, { "contents": "what is the result of the command: <SCODE>which gcc<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "This gcc comes from anaconda2, not the gcc comes from ubuntu 16.10.", "isAccepted": false, "likes": null, "poster": "Chun_Li" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Chun_Li" }, { "contents": "Summary of the solution on my last post. I think this case can be closed if the above solution is verified by the pytorch team.", "isAccepted": false, "likes": 1, "poster": "Chun_Li" }, { "contents": "Great to hear that it works for you now! We’re aware that building from source will solve the problem, yet it would be good to have binary packages that work on any CPU architecture. We’ll keep the issue open util it’s fixed.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Vijay_Dubey" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I built from source and problem is gone", "isAccepted": false, "likes": null, "poster": "buildabeast" }, { "contents": "processor\t: 1", "isAccepted": false, "likes": null, "poster": "buildabeast" } ]
false
Model.cuda() takes a long time
null
[ { "contents": "Even for a small model, calling its cuda() method takes minutes to finish. Is it normal? If yes what it does behind the scene that takes so long? Thanks!", "isAccepted": false, "likes": 4, "poster": "ming" }, { "contents": "We’re aware that there are problems with binary packages we’ve been shipping, that only appear on certain GPU architectures. Would you mind sharing what’s your GPU? I believe this has been already fixed, so a reinstall might help you.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I upgraded pytorch from 0.1.6-py35_19 to 0.1.6-py35_22. It still took 2.5 minutes to run my_small_model.cuda(). I am using Conda Python 3.5 env, GTX 1080 with cuda 8.0. Thanks for your quick reply!", "isAccepted": false, "likes": null, "poster": "ming" }, { "contents": "i will update back when this is fixed. I hope to get a fix out today or tomorrow.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ming" }, { "contents": "I am using Conda Python 2.7 env, GTX 1080 with cuda 8.0 too. And pytorch was installed by binary packages. Even when learning, it was much slower than lua’s torch.nn. Is this problem related to my GPU? Or is it just a matter of my code? Thanks.", "isAccepted": false, "likes": null, "poster": "bellatoris" }, { "contents": "During learning, there is a part where data calls cuda (). Does data.cuda() have same problem? Thanks", "isAccepted": false, "likes": null, "poster": "bellatoris" }, { "contents": "This should be fixed now if you use the command (website is updated): <SCODE>conda install pytorch torchvision cuda80 -c soumith\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "smth" }, { "contents": "Tested the new release and it works great. Thanks a lot for the super responsive fix!", "isAccepted": false, "likes": null, "poster": "ming" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "chenyuntc" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "stepelu" }, { "contents": "Hello I just encountered this problem as well. I am running pytorch on a cluster. Here is the information about the sysmtem and code I am running: Thanks in advance for any help!!!", "isAccepted": false, "likes": null, "poster": "jtang10" }, { "contents": "I don’t want to hijack the thread, but at least the title fits. The case here is that the home fold where my script lives is NFS mounted from another server. When comes to model.cuda(), pytorch takes some time to move something over the NFS link. But only my script is on NFS. Both pytorch under conda and the data is on local disk. The question is is there a way(or simple modification) to ask pytorch to use a local path to do compiling work if all the hypothesis is correct?", "isAccepted": false, "likes": null, "poster": "pipehappy1" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "<SCODE>torch 0.2.0+59e0472 <pip>\npytorch 0.2.0 py36hf0d2509_4cu75 soumith\n<ECODE> Running a simple script like this gives me the following results: <SCODE>import torch\nfrom datetime import datetime\n\nfor i in range(10):\n x = torch.randn(3, 4)\n t1 = datetime.now()\n x.cuda()\n print(i, datetime.now() - t1)\n<ECODE> <SCODE>0 0:06:24.108245\n1 0:00:00.000110\n2 0:00:00.000055\n3 0:00:00.000048\n4 0:00:00.000046\n5 0:00:00.000046\n6 0:00:00.000044\n7 0:00:00.000044\n8 0:00:00.000044\n9 0:00:00.000044\n<ECODE> Sorry that I’m new to PyTorch so I might be doing something incorrectly. 
Thanks in advance.", "isAccepted": false, "likes": null, "poster": "Cognac" }, { "contents": "In my experience installing torchvision after installing conda overrides the pytorch source install. However you can re-install pytorch from source and you’ll be using the latest pytorch. <SCODE>import torch\ntorch.__version__\ntorch.version.cuda\ntorch.version.cudnn\n<ECODE> ?", "isAccepted": false, "likes": null, "poster": "richard" }, { "contents": "Thanks for the immediate response! <SCODE>>>> import torch\n>>> torch.__version__\n'0.2.0_4'\n>>> torch.version.cuda\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: module 'torch.version' has no attribute 'cuda'\n>>> torch.version.cudnn\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nAttributeError: module 'torch.version' has no attribute 'cudnn'\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Cognac" }, { "contents": "Interesting. You’re definitely not on pytorch master, you’re on the version that comes with conda. If you want to build from source again (to see if your problem will go away on master, but I don’t know if it will), you can try the following: (I’m assuming you have the pytorch source code somewhere): <SCODE>pip uninstall torch \npip uninstall torch # yes, this is intentional\ncd pytorch\npython setup.py install\n<ECODE>", "isAccepted": false, "likes": null, "poster": "richard" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Cognac" }, { "contents": "I’m trying to get torch removed from your pip list. I think you’ll still see pytorch installed via conda – you shouldn’t remove that because I believe removing that will uninstall torchvision as well.", "isAccepted": false, "likes": null, "poster": "richard" } ]
false
How to transform Variable into numpy?
null
[ { "contents": "When I used pytorch, I met a problem that I can not transform Variable into numpy. When I try to use torch.Tensor or transform it into torch.FloatTensor, but it’s fail. So how can I to solve this problem?", "isAccepted": false, "likes": 6, "poster": "cumttang" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "edgarriba" }, { "contents": "", "isAccepted": false, "likes": 49, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "cumttang" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "Morpheus_Hsieh" }, { "contents": "", "isAccepted": false, "likes": 10, "poster": "Morpheus_Hsieh" }, { "contents": "Yes, it’s working: (Variable(x).data).cpu().numpy()", "isAccepted": false, "likes": 10, "poster": "Robi" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "blitu12345" }, { "contents": "Is there a function which handles both CUDA and CPU at once?", "isAccepted": false, "likes": null, "poster": "Royi" }, { "contents": "what will type if cuda tenso is fp16 than after converting it will numpy(fp32) or numpy(fp16)?", "isAccepted": false, "likes": null, "poster": "Nitin286roxs" } ]
false
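A small helper for Variables that covers the CPU-and-CUDA question above; the dtype is preserved by the conversion, so an fp16 (Half) tensor comes back as a float16 array:

<SCODE>
def to_numpy(v):
    # Take the underlying tensor, move it to host memory, then convert.
    # Works whether the Variable lives on the CPU or a GPU.
    return v.data.cpu().numpy()
<ECODE>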
What’s the easiest way to clean gradients before saving a net?
null
[ { "contents": "<SCODE>torch.save(net.state_dict(), 'net_name.pth')\n<ECODE> right?", "isAccepted": false, "likes": null, "poster": "pengsun" }, { "contents": "<SCODE>torch.save(net.state_dict(), 'net_name.pth')\n<ECODE>", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pengsun" } ]
false
Load a saved model
null
[ { "contents": "and how do you forward with this model file?", "isAccepted": false, "likes": 1, "poster": "Eugenio_Culurciello" }, { "contents": "<SCODE>x = torch.rand(1, 3, 224, 224)\nxVar = torch.autograd.Variable(x)\nres18(xVar)\n\n--> \n\nVariable containing:\n-0.4374 -0.3994 -0.5249 ... -0.5333 1.4113 0.9452\n[torch.FloatTensor of size 1x1000]\n<ECODE> Let me know if it works.", "isAccepted": false, "likes": 3, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "phenixcx" } ]
false
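The matching load side, assuming the state_dict was saved as in the previous thread (the file name is hypothetical, and the architecture must match the one that was saved):

<SCODE>
import torch
from torch.autograd import Variable
from torchvision import models

model = models.resnet18()
model.load_state_dict(torch.load('net_name.pth'))  # restore the weights

out = model(Variable(torch.rand(1, 3, 224, 224)))
<ECODE>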
Copying nn.Modules without shared memory
null
[ { "contents": "", "isAccepted": false, "likes": 4, "poster": "lenny" }, { "contents": "no, what you need to do is to send them model to the new process, and then do: <SCODE>import copy\nmodel = copy.deepcopy(model)\n<ECODE> share_memory() only shares the memory ahead of time (in case you want to reuse the shared memory for example)", "isAccepted": false, "likes": 7, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "Then could you give me an example of using load_state_dict() to copy a module?", "isAccepted": false, "likes": null, "poster": "Juna" }, { "contents": "He is referring to saving the state_dict (for instance into a file), sending it over to another process, then loading it with load_state_dict", "isAccepted": false, "likes": null, "poster": "JosueCom" } ]
false
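Both options side by side; the load_state_dict variant asked about above needs a freshly constructed module of the same architecture:

<SCODE>
import copy
import torch.nn as nn

a = nn.Linear(10, 10)

b = copy.deepcopy(a)               # independent copy, nothing shared

c = nn.Linear(10, 10)              # same architecture, new storage
c.load_state_dict(a.state_dict())  # copy the parameter values over
<ECODE>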
Why passing in tensor instead of Variable when backward?
null
[ { "contents": "Hi, I just realized Variable.grad field is also a Variable, it thus seems more natural to pass gradients as a Variable when doing backward (currently it must be a tensor). Any reason for such an interface?", "isAccepted": false, "likes": 1, "poster": "pengsun" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pengsun" } ]
false
Why can’t I change the padding and strides in Conv2d?
null
[ { "contents": "Hi, perhaps I am missing something, but I cannot seem to figure out how to change the padding and stride amounts in the Conv2d method. Thanks", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "You can pass them as arguments to the Module constructor e.g. <SCODE>nn.Conv2d(16, 32, kernel_size=3, padding=(5, 3))\n<ECODE> Alternatively, if you need to change them at runtime, I’d suggest using the functional interface: <SCODE>import torch.nn.functonal as F\n\n...\n\nF.conv2d(input, self.weight, self.bias, kernel_size=3, padding=(x, y))\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The change at run-time is intriguing … I didn’t even know people did that. How could you guarantee the same dimensionality if you changed padding amounts during run time though?", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "I don’t know if people do that, but it might be reasonable if you have variably sized inputs, or are doing something completely new.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Very interesting - I mean I know that in the context of RNNs we can have variable length inputs etc, since the back-prop can be unrolled N times, but I didnt realize that we could have variable length weights/parameters etc… I’ll read more about it, (and any sources on this are appreciated), but the top question in my mind is how do you guarantee that in a dynamic graph, when we change the dimensionality of weights, that those are new weights are even trained? Does that make sense?", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "For example conv layers can be applied to variably sized inputs, with a fixed set of weights.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you will take a look!", "isAccepted": false, "likes": null, "poster": "Kalamaya" } ]
false
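On the dimensionality question: a conv output's spatial size is floor((n + 2*padding - kernel) / stride) + 1, so padding can be chosen at runtime to keep sizes fixed. A sketch with made-up shapes:

<SCODE>
import torch
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.randn(1, 16, 28, 28))
weight = Variable(torch.randn(32, 16, 3, 3))  # (out_ch, in_ch, kH, kW)

# floor((28 + 2*1 - 3) / 1) + 1 = 28, so padding=1 preserves the size.
out = F.conv2d(x, weight, stride=1, padding=1)
print(out.size())  # (1, 32, 28, 28)
<ECODE>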
Asynchronous parameter updates?
reinforcement-learning
[ { "contents": "<SCODE>-- in main thread: shared parameters\nparams, _ = sharedNet:getParameters()\n\n-- in worker thread: its own gradParameters\ntNet = sharedNet:clone()\n_, gradParams = tNet:getParameters()\n\n-- in worker thread: stuff\n\n-- in worker thread: updating shared parameters with its own gradParameters\nfunction feval() return nil, gradParams end\noptim.rmsprop(feval, params, sharedStates)\n<ECODE> Thanks in advance!", "isAccepted": false, "likes": 1, "poster": "pengsun" }, { "contents": "I have already implemented A3C in pytorch, and it works just fine. When you get a copy with everything shared in the subprocess just do this to break the gradient sharing, and use the optimizer as you’d normaly do: <SCODE>for param in model.parameters():\n param.grad.data = param.grad.data.clone()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE># after every several steps (e.g., 5 or 20)\nfor t_param, shared_param in zip(t_model.parameters(), shared_model.parameters()):\n t_param.data.copy_(shared_param.data)\n<ECODE> Or did you find it not critical to the accuracy in your implementation? (Previously, I strictly follow the paper and I can reproduce the scoring curve for breakout as in the paper’s figure)", "isAccepted": false, "likes": null, "poster": "pengsun" }, { "contents": "<SCODE>t_model.load_state_dict(shared_state_dict)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pengsun" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Anurup_Arvind" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "It converges on PongDeterministic-v3 in 10 minutes with 16 threads. However, it work poorly on PongDeterministic-v3. Could you please take a look? I wonder whether I got asynchronous updates in PyTorch right.", "isAccepted": false, "likes": null, "poster": "Ilya_Kostrikov" }, { "contents": "I don’t think I’d recommend reusing the same grad tensors in multiple Variables, but apart form that I can’t see anything wrong at a glance.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The problem was in the architecture that I used initially, seems to work on Breakout now. Reusing the same grad tensor might cause some problems?", "isAccepted": false, "likes": null, "poster": "Ilya_Kostrikov" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "AttributeError: ‘float’ object has no attribute ‘backward’ Any guesses what’s caused it? Apart from my custom environment, the only difference to your code is I’ve remove the conv layers, and used a 2 layer LSTM - which is the model I’m using in TensorFlow. As a performance improvement have you tried concatenating the previous action, reward, and a timestep counter onto the end of the state as 3 scalars - I noticed a significant improvement in my TF implementation when I do this.", "isAccepted": false, "likes": 1, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Which I’m working through, which seems closer to my TF implementation. Hopefully I’ll figure it out soon. At the moment my environment code, (I guess like yours), is stuck in the middle of a larger project, and is not stand alone, yet. This is actually the reason why I’m moving away from TF. 
Hopefully, I’ll be able to wrap my environment into something like an OpenAI env, and be able to open source it on Github within a few days, and get back to you. It’s a nice example of graph optimisation/travelling salesman, so it should be of interest to quite a few people. All the best, Aj", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "About Variables, you only need to use the for parts that will require differentiation, and it’s best to not use them for everything else. If you have any questions about any specific cases, I’ll be happy to answer.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "you mean right?", "isAccepted": false, "likes": null, "poster": "onlytailei" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "denizs" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Hung_Nguyen" } ]
false
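The gradient hand-off that makes these asynchronous updates work is usually written as below; this is a common pattern from public A3C implementations, not an official API (note that _grad is an internal attribute):

<SCODE>
def ensure_shared_grads(model, shared_model):
    # Point the shared model's .grad tensors at this worker's gradients
    # once; afterwards a shared optimizer can be stepped from the worker.
    for param, shared_param in zip(model.parameters(),
                                   shared_model.parameters()):
        if shared_param.grad is not None:
            return
        shared_param._grad = param.grad
<ECODE>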
How to extract features of an image from a trained model
null
[ { "contents": "Hi all,", "isAccepted": false, "likes": 19, "poster": "big_tree" }, { "contents": "<SCODE>torchvision.models.resnet18(pretrained=True)\n<ECODE>", "isAccepted": false, "likes": 7, "poster": "apaszke" }, { "contents": "You can either reconstruct the classifier once the model was instantiated, as in the following example: <SCODE>import torch\nimport torch.nn as nn\nfrom torchvision import models\n\nmodel = models.alexnet(pretrained=True)\n\n# remove last fully-connected layer\nnew_classifier = nn.Sequential(*list(model.classifier.children())[:-1])\nmodel.classifier = new_classifier\n<ECODE> Or, if instead you want to extract other parts of the model, you might need to recreate the model structure, and reusing the parts of the pre-trained model in the new model. <SCODE>import torch\nimport torch.nn as nn\nfrom torchvision import models\n\noriginal_model = models.alexnet(pretrained=True)\n\nclass AlexNetConv4(nn.Module):\n def __init__(self):\n super(AlexNetConv4, self).__init__()\n self.features = nn.Sequential(\n # stop at conv4\n *list(original_model.features.children())[:-3]\n )\n def forward(self, x):\n x = self.features(x)\n return x\n\nmodel = AlexNetConv4()\n<ECODE>", "isAccepted": false, "likes": 90, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "big_tree" }, { "contents": "Is there a way to get values from multiple layers with one forward pass (for neural style transfer etc.)? In Tensorflow it would be along the lines of <SCODE>features = sess.run([net.conv1, net.conv2, net.conv3])\n<ECODE>", "isAccepted": false, "likes": 7, "poster": "kpar" }, { "contents": "<SCODE>class Net(nn.Module):\n def __init__(self):\n self.conv1 = nn.Conv2d(1, 1, 3)\n self.conv2 = nn.Conv2d(1, 1, 3)\n self.conv3 = nn.Conv2d(1, 1, 3)\n\n def forward(self, x):\n out1 = F.relu(self.conv1(x))\n out2 = F.relu(self.conv2(out1))\n out3 = F.relu(self.conv3(out2))\n return out1, out2, out3\n<ECODE>", "isAccepted": false, "likes": 29, "poster": "fmassa" }, { "contents": "That looks like it’ll do the trick. Is there a convenient way to fetch the intermediate values when the forward behavior is defined by nn.Sequential()? It seems like right now the only way to compose multiple responses would be to split off all the individual layers and forward the values manually in forward(). Essentially what I want to do is take an existing network (e.g. VGG) and just pick some responses of some layers (conv1_1, pool1, pool2, etc.) and concatenate them into a feature vector.", "isAccepted": false, "likes": 7, "poster": "kpar" }, { "contents": "You could write your own sequential version that keeps track of all intermediate results in a list. 
Something like <SCODE>class SelectiveSequential(nn.Module):\n def __init__(self, to_select, modules_dict):\n super(SelectiveSequential, self).__init__()\n for key, module in modules_dict.items():\n self.add_module(key, module)\n self._to_select = to_select\n \n def forward(self, x):\n outputs = []\n for name, module in self._modules.items():\n x = module(x)\n if name in self._to_select:\n outputs.append(x)\n return outputs\n<ECODE> And then you could use it like <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.features = SelectiveSequential(\n ['conv1', 'conv3'],\n {'conv1': nn.Conv2d(1, 1, 3),\n 'conv2': nn.Conv2d(1, 1, 3),\n 'conv3': nn.Conv2d(1, 1, 3)}\n )\n\n def forward(self, x):\n return self.features(x)\n<ECODE>", "isAccepted": false, "likes": 31, "poster": "fmassa" }, { "contents": "Thank you for your help", "isAccepted": false, "likes": null, "poster": "lcelona" }, { "contents": "<SCODE>features = models.alexnet(pretrained=True).features\n\nclass Feature_extractor(nn.Module):\n def forward(self, input):\n self.feature = input.clone()\n return input\n\nnew_net = nn.Sequential().cuda() # the new network\n\ntarget_layers = [\"conv_1\", \"conv_2\", \"conv_4\"] # layers you want to extract\n\ni = 1\nfor layer in list(features):\n if isinstance(layer, nn.Conv2d):\n name = \"conv_\"+str(i)\n new_net.add_module(name, layer)\n\n if name in target_layers:\n new_net.add_module(\"extractor_\"+str(i), Feature_extractor())\n\n i += 1\n\n if isinstance(layer, nn.ReLU):\n name = \"relu_\"+str(i)\n new_net.add_module(name, layer)\n\n if isinstance(layer, nn.MaxPool2d):\n name = \"pool_\"+str(i)\n new_net.add_module(name, layer)\n\nnew_net(your_image)\nprint(new_net.extractor_3.feature)<ECODE>", "isAccepted": false, "likes": 1, "poster": "alexis-jacq" }, { "contents": "If you really want to do something like that, I’d recommend this: <SCODE>class FeatureExtractor(nn.Module):\n def __init__(self, submodule, extracted_layers):\n super(FeatureExtractor, self).__init__()\n self.submodule = submodule\n self.extracted_layers = extracted_layers\n\n def forward(self, x):\n outputs = []\n for name, module in self.submodule._modules.items():\n x = module(x)\n if name in self.extracted_layers:\n outputs += [x]\n return outputs + [x]\n<ECODE>", "isAccepted": false, "likes": 18, "poster": "apaszke" }, { "contents": "Wow! Good to know that… Thanks! That would be really useful!", "isAccepted": false, "likes": 1, "poster": "alexis-jacq" }, { "contents": "<SCODE>x = Variable(torch.rand(1, 3, 224, 224))\nh_x = resnet_18.forward(x)\nlast_view = h_x.creator.previous_functions[0][0]\nlast_pool = last_view.previous_functions[0][0]\nembedding = last_pool.saved_tensors[0]\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "Atcold" }, { "contents": "Why do you want to inspect the graph? A new instance is going to be created at every forward.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "<SCODE>for name, module in self.submodule._modules.items():\n x = module(x)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "We managed to quickly write what we currently have because getting TensorBoard to take 2d values / histograms was relatively straightforward; however, adding integration for more complex data types is not a one-day job.
After the ICML deadline we’ll give it a look.", "isAccepted": false, "likes": 1, "poster": "edran" } ]
false
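A hook-based variant of the approaches in the thread above also works without rewriting the model: register_forward_hook grabs intermediate outputs during a normal forward pass. A minimal sketch — the AlexNet layer indices below are assumptions about torchvision's features ordering, so print the model to verify them:

<SCODE>
import torch
from torch.autograd import Variable
from torchvision import models

model = models.alexnet(pretrained=True)
features = {}

def save_output(name):
    # build a hook that stores the module's output under `name`
    def hook(module, input, output):
        features[name] = output
    return hook

# attach hooks to the layers of interest inside model.features
layers = list(model.features.children())
layers[3].register_forward_hook(save_output('conv2'))   # assumed index
layers[8].register_forward_hook(save_output('conv4'))   # assumed index

x = Variable(torch.randn(1, 3, 224, 224))
model(x)  # one forward pass fills the `features` dict
print(features['conv2'].size(), features['conv4'].size())
<ECODE>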
Broadcasting? Or alternative solutions
null
[ { "contents": "Hi, I really like the library so far. However, I was wondering if broadcasting is on the roadmap, and what would be your current suggestion for work-arounds? <SCODE>x = Variable(torch.Tensor([[1.0, 1.0], \n [2.0, 2.0], \n [3.0, 3.0], \n [4.0, 4.0], \n [5.0, 5.0], \n [6.0, 6.0]]))\n\nweights = Variable(torch.zeros(2, 1))\nbias = Variable(torch.zeros(1))\n\nnet_input = x.mm(weights) + bias\n<ECODE> A workaround would be to add 1s to the input tensor, I guess: <SCODE>x = Variable(torch.Tensor([[1.0, 1.0, 1.0], \n [1.0, 2.0, 2.0], \n [1.0, 3.0, 3.0], \n [1.0, 4.0, 4.0], \n [1.0, 5.0, 5.0], \n [1.0, 6.0, 6.0]]))\n\nweights = Variable(torch.zeros(3, 1))\n\nnet_input = x.mm(weights)\n<ECODE> What would be your thoughts on that?", "isAccepted": false, "likes": 2, "poster": "rasbt" }, { "contents": "Adding broadcasting to most operations is definitely on our roadmap, and will be hopefully ready quite soon. Since it’s so often asked for we’ll probably reprioritize that and have it implemented soonish. For now there are two solutions - one that works already, another one that will be implemented very soon (presented in the same order): You can do broadcasting by manually adding singleton dimensions and expanding along them. This doesn’t do any memory copy, and only does some stride tricks: <SCODE>net_input = x.mm(weights)\nnet_input += bias.unsqueeze(0).expand_as(net_input)\n<ECODE> <SCODE>import torch.nn.functional as F\noutput = F.linear(input, weights, bias) # bias is optional\n<ECODE>", "isAccepted": false, "likes": 7, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "rasbt" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "<SCODE>x = torch.randn(1)\ny = torch.randn(2, 3, 4, 5, 6)\nprint(y + x.expand_as(y))\n<ECODE>", "isAccepted": false, "likes": 6, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jphoward" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Let me know if you need more info - I’m not sure I’ve done a great job of explaining!", "isAccepted": false, "likes": null, "poster": "jphoward" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
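To put the two suggested workarounds side by side, here is a small sketch; note that F.linear expects its weight as (out_features, in_features), so the (2, 1) weight from the question is transposed first, and F.linear itself is only available once that functional landed:

<SCODE>
import torch
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.randn(6, 2))
weights = Variable(torch.zeros(2, 1))
bias = Variable(torch.zeros(1))

# workaround 1: add a singleton dim and expand (stride tricks, no copy)
out1 = x.mm(weights)
out1 = out1 + bias.unsqueeze(0).expand_as(out1)

# workaround 2: the functional linear, with the weight transposed
out2 = F.linear(x, weights.t(), bias)

print((out1 - out2).abs().max())  # should print 0
<ECODE>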
Understanding how torch.nn.Module works
null
[ { "contents": "Essentially, I want to reproduce the results I get when I do it “manually:” <SCODE>from torch.autograd import Variable\nimport torch\n\n\nx = Variable(torch.Tensor([[1.0, 1.0], \n [1.0, 2.1], \n [1.0, 3.6], \n [1.0, 4.2], \n [1.0, 6.0], \n [1.0, 7.0]]))\ny = Variable(torch.Tensor([1.0, 2.1, 3.6, 4.2, 6.0, 7.0]))\nweights = Variable(torch.zeros(2, 1), requires_grad=True)\n\n\nfor i in range(5000):\n\n net_input = x.mm(weights)\n loss = torch.mean((net_input - y)**2)\n loss.backward()\n weights.data.add_(-0.0001 * weights.grad.data)\n \n if loss.data[0] < 1e-3:\n break\n\nprint('n_iter', i)\nprint(loss.data[0])\n<ECODE> Output: <SCODE>n_iter 1188\n0.0004487129335757345\n<ECODE> Now, running the following <SCODE>import torch.nn.functional as F\n\nclass Model(torch.nn.Module):\n \n def __init__(self):\n super(Model, self).__init__()\n self.weights = Variable(torch.zeros(2, 1), \n requires_grad=True)\n \n def forward(self, x):\n net_input = x.mm(self.weights)\n return net_input\n \nmodel = Model()\ncriterion = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=0.001)\n<ECODE> raises an error: <SCODE> ---------------------------------------------------------------------------\n IndexError Traceback (most recent call last)\n <ipython-input-258-3bcb3a8408d2> in <module>()\n 15 model = Model()\n 16 criterion = torch.nn.MSELoss()\n ---> 17 optimizer = torch.optim.SGD(model.parameters(), lr=0.001)\n\n /Users/Sebastian/miniconda3/envs/pytorch/lib/python3.5/site-packages/torch/optim/sgd.py in __init__(self, params, lr, momentum, dampening, weight_decay)\n 24 defaults = dict(lr=lr, momentum=momentum, dampening=dampening,\n 25 weight_decay=weight_decay)\n ---> 26 super(SGD, self).__init__(params, defaults)\n 27 \n 28 def step(self, closure=None):\n\n /Users/Sebastian/miniconda3/envs/pytorch/lib/python3.5/site-packages/torch/optim/optimizer.py in __init__(self, params, defaults)\n 25 self.state = defaultdict(dict)\n 26 self.param_groups = list(params)\n ---> 27 if not isinstance(self.param_groups[0], dict):\n 28 self.param_groups = [{'params': self.param_groups}]\n 29 \n\n IndexError: list index out of range\n<ECODE> I tried that: <SCODE>class Model(torch.nn.Module):\n \n def __init__(self):\n super(Model, self).__init__()\n self.weights = Variable(torch.zeros(2, 1), \n requires_grad=True)\n self.fc = torch.nn.Linear(2, 1)\n \n def forward(self, x):\n return x\n \nmodel = Model()\ncriterion = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=0.0001)\n\nfor i in range(5000):\n optimizer.zero_grad()\n outputs = model(x)\n \n loss = criterion(outputs, y)\n loss.backward() \n\n optimizer.step()\n \nprint(loss.data[0])\n<ECODE> but now, I am getting an error about the dimensions of the input: <SCODE> ---------------------------------------------------------------------------\n RuntimeError Traceback (most recent call last)\n <ipython-input-259-c6bb483f3953> in <module>()\n 28 outputs = model(x)\n 29 \n ---> 30 loss = criterion(outputs, y)\n 31 loss.backward()\n 32 \n\n /Users/Sebastian/miniconda3/envs/pytorch/lib/python3.5/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\n 208 \n 209 def __call__(self, *input, **kwargs):\n --> 210 result = self.forward(*input, **kwargs)\n 211 for hook in self._forward_hooks.values():\n 212 hook_result = hook(self, input, result)\n\n /Users/Sebastian/miniconda3/envs/pytorch/lib/python3.5/site-packages/torch/nn/modules/loss.py in forward(self, input, target)\n 21 _assert_no_grad(target)\n 22 
backend_fn = getattr(self._backend, type(self).__name__)\n ---> 23 return backend_fn(self.size_average)(input, target)\n 24 \n 25 \n\n /Users/Sebastian/miniconda3/envs/pytorch/lib/python3.5/site-packages/torch/nn/_functions/thnn/auto.py in forward(self, input, target)\n 39 output = input.new(1)\n 40 getattr(self._backend, update_output.name)(self._backend.library_state, input, target,\n ---> 41 output, *self.additional_args)\n 42 return output\n 43 \n\n RuntimeError: input and target have different number of elements: input[6 x 2] has 12 elements, while target[6] has 6 elements at /Users/soumith/anaconda/conda-bld/pytorch-0.1.6_1484801351127/work/torch/lib/THNN/generic/MSECriterion.c:12\n<ECODE>", "isAccepted": false, "likes": 10, "poster": "rasbt" }, { "contents": "Both your examples have small errors inside: You have a bug in forward - your model always just returns the input. Try this: <SCODE>class Model(torch.nn.Module):\n \n def __init__(self):\n super(Model, self).__init__()\n self.fc = torch.nn.Linear(2, 1)\n \n def forward(self, x):\n return self.fc(x) # it was just x there\n<ECODE>", "isAccepted": false, "likes": 12, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "rasbt" }, { "contents": "Huh, I think we’re missing the docs on Parameters, I’ll note that and make sure it’s added soon. Sorry for the confusion.", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "Hi, thanks for clarifying the difference between Variable and Parameter in this thread. It might be slightly off topic, but I have a question about the code used here. The very first code written by rasbt, which implemented OLS regression manually using only Variable, looks perfectly fine to me, and I got exactly the same output. However, the loss is not monotonically decreasing for some unknown reason. Am I missing something? By the way, after replacing the Variable in rasbt’s second code with a Parameter, the loss decreases monotonically as usual.", "isAccepted": false, "likes": 1, "poster": "tanemaki" }, { "contents": "Yeah, same issue here.
The first example has some issues with the minimization: <SCODE>from torch.autograd import Variable\nimport torch\n\n\nx = Variable(torch.Tensor([[1.0, 1.0], \n [1.0, 2.1], \n [1.0, 3.6], \n [1.0, 4.2], \n [1.0, 6.0], \n [1.0, 7.0]]))\ny = Variable(torch.Tensor([1.0, 2.1, 3.6, 4.2, 6.0, 7.0]))\nweights = Variable(torch.zeros(2, 1), requires_grad=True)\n\nloss1 = []\n\nfor i in range(5000):\n\n net_input = x.mm(weights)\n loss = torch.mean((net_input - y)**2)\n loss.backward()\n weights.data.add_(-0.0001 * weights.grad.data)\n \n loss1.append(loss.data[0])\n\nprint('n_iter', i)\nprint(loss.data[0])\n\nplt.plot(range(5000), loss1)\n<ECODE> The 2nd example works well, though: <SCODE>import torch.nn.functional as F\nfrom torch.nn import Parameter\n\nclass Model(torch.nn.Module):\n \n def __init__(self):\n super(Model, self).__init__()\n self.weights = Parameter(torch.zeros(2, 1), \n requires_grad=True)\n \n def forward(self, x):\n net_input = x.mm(self.weights)\n return net_input\n \nmodel = Model()\ncriterion = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=0.001)\nloss2 = []\n\nfor i in range(5000):\n optimizer.zero_grad()\n outputs = model(x)\n \n loss = criterion(outputs, y)\n loss2.append(loss.data[0])\n loss.backward() \n\n optimizer.step()\n \nplt.plot(range(5000), loss2)\n<ECODE> After standardizing ex1 and lowering the learning rate further, it does somewhat work, though: <SCODE>from torch.autograd import Variable\nimport torch\n\n\nx = torch.Tensor([[1.0, 1.0], \n [1.0, 2.1], \n [1.0, 3.6], \n [1.0, 4.2], \n [1.0, 6.0], \n [1.0, 7.0]])\n\nx = (x - x.mean()) / x.max()\nx = Variable(x)\n\ny = torch.Tensor([1.0, 2.1, 3.6, 4.2, 6.0, 7.0])\ny = (y - y.mean()) / y.max()\ny = Variable(y)\n\nweights = Variable(torch.zeros(2, 1), requires_grad=True)\n\nloss1 = []\n\nfor i in range(5000):\n\n net_input = x.mm(weights)\n loss = torch.mean((net_input - y)**2)\n loss.backward()\n weights.data.add_(-0.00000001 * weights.grad.data)\n \n loss1.append(loss.data[0])\n\nprint('n_iter', i)\nprint(loss.data[0])\n\nplt.plot(range(5000), loss1)\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "rasbt" }, { "contents": "", "isAccepted": false, "likes": 6, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "rasbt" }, { "contents": "Good questions - I've been trying to grok it too!", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "tanemaki" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "Regarding the symmetry: Yeah, in this case zero weights wouldn’t matter like you said", "isAccepted": false, "likes": 1, "poster": "rasbt" }, { "contents": "Hmm, well why do you have to zero them out?
You are just computing new gradients, so shouldn’t the old ones just be over-written?", "isAccepted": false, "likes": 1, "poster": "Kalamaya" }, { "contents": "No, they’re accumulated.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "hmishfaq" }, { "contents": "Yes, that’s what I meant.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Alex_Choy" }, { "contents": "It’s hard to follow what your question is, because the title is vague and the question details are too long.", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "Hi, I would like to come back to this example and understand why there is a difference between the two loss functions: loss1 = torch.mean((y - y_pred)**2.0) \nloss2 = mse_loss(y_pred, y), where mse_loss = nn.MSELoss()\n Here is my complete code: <SCODE>x = Variable(torch.FloatTensor([[1.0, 1.0], \n [1.0, 2.1], \n [1.0, 3.6], \n [1.0, 4.2], \n [1.0, 6.0], \n [1.0, 7.0]]))\ny = Variable(torch.FloatTensor([1.0, 2.1, 3.6, 4.2, 6.0, 7.0]))\n\nmse_loss = nn.MSELoss()\nweights = Variable(torch.zeros((2, 1)).float(), requires_grad=True)\n\nn = 500\nloss1 = []\nloss2 = []\nweights_grads_history = np.zeros((n, 2))\nweights_history = np.zeros((n, 2))\nlearning_rate = 0.0001\n\nfor i in range(n):\n\n y_pred = x.mm(weights)\n loss = torch.mean((y - y_pred)**2.0)\n loss_ = mse_loss(y_pred, y) \n loss1.append(loss.data[0])\n loss2.append(loss_.data[0])\n # Compute gradients\n loss.backward()\n \n # Update parameters\n weights_grads_history[i, :] = weights.grad.data.numpy()[:, 0]\n weights_history[i, :] = weights.data.numpy()[:, 0]\n weights.data.sub_(weights.grad.data * learning_rate)\n\n # You need to clear the existing gradients though, else gradients will be accumulated to existing gradients \n weights.grad.data.zero_()\n\n \nprint(\"n_iter\", i)\nprint(loss1[-1], loss2[-1])\nprint(\"y_pred: \", y_pred)\nprint(\"Weights : \", weights)\n\nplt.figure(figsize=(12, 4))\nplt.subplot(131)\nplt.plot(range(n), loss1, label='loss1')\nplt.plot(range(n), loss2, label='loss2')\n_ = plt.legend()\nplt.subplot(132)\nplt.plot(range(n), weights_grads_history[:, 0], label='W_grad_1')\nplt.plot(range(n), weights_grads_history[:, 1], label='W_grad_2')\nplt.legend()\nplt.subplot(133)\nplt.plot(range(n), weights_history[:, 0], label='W_1')\nplt.plot(range(n), weights_history[:, 1], label='W_2')\n_ = plt.legend()\n<ECODE> Thanks", "isAccepted": false, "likes": 1, "poster": "vfdev-5" }, { "contents": "Hi, Your code should work when you change the following line: <SCODE>loss = torch.mean((y - y_pred.view(-1))**2.0)<ECODE>", "isAccepted": false, "likes": 4, "poster": "ptrblck" } ]
false
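The crux of the thread condensed into a few lines: only nn.Parameter attributes are registered on a module, so only they are returned by model.parameters() and seen by the optimizer. A quick sketch of that point:

<SCODE>
import torch
import torch.nn as nn
from torch.autograd import Variable

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        # registered: shows up in model.parameters()
        self.weights = nn.Parameter(torch.zeros(2, 1))
        # NOT registered: a plain Variable is invisible to optimizers
        self.scratch = Variable(torch.zeros(2, 1), requires_grad=True)

    def forward(self, x):
        return x.mm(self.weights)

model = Model()
print(len(list(model.parameters())))  # 1 -- only the Parameter is found
<ECODE>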
Concatenate two tensors for each timestep
null
[ { "contents": "<SCODE>def forward(...):\n ...\n z = Variable(torch.zeros(y.size(0), y.size(1), 1024))\n for i in range(z.size(0)):\n z[i] = torch.cat((y[i], x), 1)\n\n z = self.decoderGRU(z, init_decoder_GRU)\n decoded = self.classLinear(z.view(z.size(0)*z.size(1), z.size(2))) #error here.\n ...\n<ECODE> Error message: <SCODE>AttributeError: 'tuple' object has no attribute 'view'\n<ECODE> What is the best way to do this?", "isAccepted": false, "likes": 2, "poster": "NgPDat" }, { "contents": "If so, this should do the trick for you and will be much more efficient: <SCODE># Replace the for loop with this line. It will expand x to be of size seqLen x batch x feature1\n# and concatenate it with y in one go. \nz = torch.cat([y, x.unsqueeze(0).expand(seqLen, *x.size())], 2)\n<ECODE> <SCODE>z = self.decoderGRU(z, init_decoder_GRU)[0]\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thank you. Both solve my problem. Never thought of using expand that way.", "isAccepted": false, "likes": null, "poster": "NgPDat" }, { "contents": "Hi, I wonder if it’s convenient for you to share your encoder-decoder code with me, as I am trying to implement a similar network?", "isAccepted": false, "likes": null, "poster": "chifish" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "amy" } ]
false
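For reference, the accepted trick in runnable form; the sizes here are made up, but the pattern is the same: expand the per-sequence vector along the time axis (no copy) and concatenate on the feature dimension:

<SCODE>
import torch
from torch.autograd import Variable

seq_len, batch, feat_y, feat_x = 5, 4, 7, 3
y = Variable(torch.randn(seq_len, batch, feat_y))  # per-timestep features
x = Variable(torch.randn(batch, feat_x))           # one vector per sequence

x_rep = x.unsqueeze(0).expand(seq_len, batch, feat_x)
z = torch.cat([y, x_rep], 2)
print(z.size())  # seq_len x batch x (feat_y + feat_x)
<ECODE>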
Model zoo categories
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "Eugenio_Culurciello" }, { "contents": "i’m not sure i understand your question", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "the list of category names that correspond to each of the output neurons of CNN. Even for ImageNet, they need to specified.", "isAccepted": false, "likes": null, "poster": "Eugenio_Culurciello" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "rasbt" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Eugenio_Culurciello" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Eugenio_Culurciello" } ]
false
Are tables like concattable or paralleltable present in torch
null
[ { "contents": "Are there utilities like concattable, paralleltable in pytorch??", "isAccepted": false, "likes": 1, "poster": "gsp-27" }, { "contents": "No, but you can easily achieve that using autograd. e.g. for concat table: <SCODE>class MyModel(nn.Module):\n def __init__(self):\n self.submodule1 = MySubmodule1()\n self.submodule2 = MySubmodule2()\n\n def forward(self, x):\n # this is equivalent to ConcatTable containing both submodules,\n # but is much more readable\n concat = [self.submodule1(x), self.submodule2(x)]\n # do some other processing on concat...\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "I tried the following <SCODE>class MyModel(nn.Module):\n def __init__(self, submod_num,submod_var):\n self.submod_pool = []\n for i in range(submod_num):\n self.submod_pool += [MySubmodule(submod_var[i])]\n\n def forward(self,X):\n outPool = []\n for submod in self.submod_pool:\n outPool += [submod(X)]\n\n return torch.cat(outPool,dim)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "thongnguyen" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Amir_Ghodrati" }, { "contents": "I solved it by this: <SCODE> model= nn.Sequential(*layer_list)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "thongnguyen" } ]
false
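The same autograd-first style covers ParallelTable as well (the i-th module consumes the i-th input). A sketch with made-up layer sizes:

<SCODE>
import torch
import torch.nn as nn
from torch.autograd import Variable

class ParallelBranches(nn.Module):
    # rough stand-in for legacy nn.ParallelTable
    def __init__(self):
        super(ParallelBranches, self).__init__()
        self.branch1 = nn.Linear(10, 4)
        self.branch2 = nn.Linear(20, 4)

    def forward(self, x1, x2):
        return self.branch1(x1), self.branch2(x2)

m = ParallelBranches()
out1, out2 = m(Variable(torch.randn(3, 10)), Variable(torch.randn(3, 20)))
print(out1.size(), out2.size())
<ECODE>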
Non-legacy View module?
null
[ { "contents": "There doesn’t currently appear to be any non-legacy View module in pytorch’s torch.nn module. Any reason for this? While obviously not essential, it’s convenient when porting over existing Torch networks. Would be happy to submit a PR.", "isAccepted": false, "likes": 1, "poster": "lebedov" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "That method directly resizes the associated tensor instance; having a View module would enable one to directly add it to the sequence of modules in a network rather than having to explicitly call it. For example: <SCODE>class MyNetwork(torch.nn.Module):\n def __init__(self):\n super(MyNetwork, self).__init__()\n modules = [nn.Conv2d(1, 10, 3),\n nn.MaxPool2d(2, 2),\n ...,\n View(100),\n nn.Linear(100, 2)]\n for i, m in enumerate(modules):\n self.add_module(str(i), m)\n def forward(self, x):\n for child in self.children():\n x = child(x)\n return x<ECODE>", "isAccepted": false, "likes": 2, "poster": "lebedov" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "lebedov" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Duly noted. Thanks for the feedback.", "isAccepted": false, "likes": null, "poster": "lebedov" }, { "contents": "Sorry to come back to this discussion after a month or so, but I indeed miss some convenient way of flattening out a layer inside of Sequential. I am breaking this operation into two pieces as suggested, a Sequential model that outputs a image shape, and then in the forward() method, I flatten the results.", "isAccepted": false, "likes": null, "poster": "juliohm" }, { "contents": "<SCODE>class Flatten(nn.Module):\n def __init__(self):\n super(Flatten, self).__init__()\n\n def forward(self, x):\n x = x.view(x.size(0), -1)\n return x\n\nmodel = nn.Sequential(\n nn.Conv2d(3, 1, 3, 1, 1),\n Flatten(),\n nn.Linear(24*24, 1)\n )\n\nx = Variable(torch.randn(10, 3, 24, 24))\nmodel(x)\n<ECODE>", "isAccepted": false, "likes": 8, "poster": "ptrblck" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "juliohm" } ]
false
“wheel not supported” for pip install
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "eulerreich" }, { "contents": "Not sure if that’s a problem in your case, but I had a similar issue (or a similar message) when I accidentally tried to install a package into a wrong environment (a py 3.5 wheel into a 3.6 env). Have you checked that your instance has Python 2.7 in your env and that the default pip is the pip of that python install?", "isAccepted": false, "likes": 4, "poster": "rasbt" }, { "contents": "torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform. Exception information: Traceback (most recent call last): File \"/usr/lib/python2.7/dist-packages/pip/basecommand.py\", line 122, in main status = self.run(options, args) File \"/usr/lib/python2.7/dist-packages/pip/commands/install.py\", line 257, in run InstallRequirement.from_line(name, None)) File \"/usr/lib/python2.7/dist-packages/pip/req.py\", line 168, in from_line raise UnsupportedWheel(\"%s is not a supported wheel on this platform.\" % wheel.filename) UnsupportedWheel: torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.", "isAccepted": false, "likes": null, "poster": "Nadav_Bhonker" }, { "contents": "We dont support 32-bit platforms. can you tell me what the output of: <SCODE>uname -a\nlsb_release -a\n<ECODE>", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Yes I have Python 2.7, and the pip is the pip for that python.", "isAccepted": false, "likes": null, "poster": "eulerreich" }, { "contents": "I get the same error with the same wheel, on a rhel6 system (ugh). I’m working in a virtualenv without root. <SCODE>pip install https://s3.amazonaws.com/pytorch/whl/cu75/torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl\n torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.\n Storing debug log for failure in /u/tsercu/.pip/pip.log\npip --version\n pip 1.5.6 from /opt/share/Python-2.7.9/lib/python2.7/site-packages (python 2.7)\nlsb_release -a\n LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch\n Distributor ID: RedHatEnterpriseServer\n Description: Red Hat Enterprise Linux Server release 6.6 (Santiago)\n Release: 6.6\n Codename: Santiago<ECODE>", "isAccepted": false, "likes": null, "poster": "tomsercu" }, { "contents": "Output of uname -a Output of lsb_release -a", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "<SCODE>$ cat ~/.pip/pip.log\ntorch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.\nException information:\nTraceback (most recent call last):\n File \"/opt/share/Python-2.7.9/lib/python2.7/site-packages/pip/basecommand.py\", line 122, in main\n status = self.run(options, args)\n File \"/opt/share/Python-2.7.9/lib/python2.7/site-packages/pip/commands/install.py\", line 257, in run\n InstallRequirement.from_line(name, None))\n File \"/opt/share/Python-2.7.9/lib/python2.7/site-packages/pip/req.py\", line 167, in from_line\n raise UnsupportedWheel(\"%s is not a supported wheel on this platform.\" % wheel.filename)\nUnsupportedWheel: torch-0.1.6.post22-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.<ECODE>", "isAccepted": false, "likes": null, "poster": "tomsercu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": 
"", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Hi everyone. This should be fixed now for all of you. The new command is available on the website, but pasting here for convenience: <SCODE>pip install https://s3.amazonaws.com/pytorch/whl/cu75/torch-0.1.6.post22-cp27-none-linux_x86_64.whl \npip install torchvision\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "That fixed it for me.", "isAccepted": false, "likes": null, "poster": "eulerreich" }, { "contents": "Will this enable me to use CUDA ? My driver is 8.0 on LINUX…", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I am facing the same problem on ubuntu. Help me please", "isAccepted": false, "likes": null, "poster": "Paarulakan" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "acgtyrant" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "acgtyrant" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
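For context, the cp27mu tag in the failing wheel's name denotes a wide-unicode (UCS4) CPython 2.7 build, which is why the renamed cp27-none wheel resolved it. A few standard-library lines to check what your interpreter actually is before picking a wheel:

<SCODE>
import sys, platform, struct

print(sys.version_info[:2])      # must match the cpXX tag, e.g. (2, 7)
print(platform.machine())        # these wheels need x86_64
print(struct.calcsize('P') * 8)  # 64 means a 64-bit interpreter
print(sys.maxunicode)            # 1114111 -> 'mu' (UCS4); 65535 -> 'm' (UCS2)
<ECODE>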
JoinTable-like operation when combining CNNs
null
[ { "contents": "<SCODE>class Net(nn.Module):\ndef __init__(self):\n super(Net, self).__init__()\n self.embed = nn.Embedding(vocab.size(), 300)\n #self.embed.weight = Parameter( torch.from_numpy(vocab.get_weights().astype(np.float32))) \n self.conv_3 = nn.Conv2d(1, 50, kernel_size=(3, 300),stride=(1,1))\n self.conv_4 = nn.Conv2d(1, 50, kernel_size=(4, 300),stride=(1,1))\n self.conv_5 = nn.Conv2d(1, 50, kernel_size=(5, 300),stride=(1,1))\n self.decoder = nn.Linear(50 * 3, len(labels))\n\n\ndef forward(self, x):\n e1 = self.embed(x)\n x = F.dropout(e1, p=0.2)\n x = e1.view(x.size()[0], 1, 50, 300)\n cnn_3 = F.relu(F.max_pool2d(self.conv_3(x), (maxlen - 3 + 1, 1)))\n cnn_4 = F.relu(F.max_pool2d(self.conv_4(x), (maxlen - 4 + 1, 1)))\n cnn_5 = F.relu(F.max_pool2d(self.conv_5(x), (maxlen - 5 + 1, 1)))\n x = torch.cat([e.unsqueeze(0) for e in [cnn_3, cnn_4, cnn_5]]) \n x = x.view(-1, 50 * 3)\n return F.log_softmax(self.decoder(F.dropout(x, p=0.2)))<ECODE>", "isAccepted": false, "likes": null, "poster": "pchandar" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
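For the JoinTable part specifically, concatenating the pooled branch outputs along the channel dimension (rather than stacking them with unsqueeze) is usually what is wanted; a sketch using the sizes from the question:

<SCODE>
import torch
from torch.autograd import Variable

batch = 8
# pooled outputs of the three conv branches, (batch, 50, 1, 1) each
cnn_3 = Variable(torch.randn(batch, 50, 1, 1))
cnn_4 = Variable(torch.randn(batch, 50, 1, 1))
cnn_5 = Variable(torch.randn(batch, 50, 1, 1))

x = torch.cat([cnn_3, cnn_4, cnn_5], 1)  # JoinTable over channels
x = x.view(x.size(0), -1)                # (batch, 150), ready for the decoder
print(x.size())
<ECODE>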
Inferring shape via flatten operator
null
[ { "contents": "Is there a flatten-like operator to calculate the shape of a layer output. An example would be transitioning from a conv layer to linear layer. In all the examples I’ve seen thus far this seems to be manually calculated, ex: <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = F.relu(self.fc2(x))\n return F.log_softmax(x)\n<ECODE> What would be idiomatic torch to calculate 320?", "isAccepted": true, "likes": 8, "poster": "domluna" }, { "contents": "<SCODE>bs = 5\nx = torch.rand(bs, 3, 224, 224)\nx = x.view(x.size(0), -1)\n<ECODE>", "isAccepted": true, "likes": 5, "poster": "fmassa" }, { "contents": "", "isAccepted": true, "likes": 1, "poster": "domluna" }, { "contents": "<SCODE>import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\nclass Net(nn.Module):\n def __init__(self, input_shape=(1, 28, 28)):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n\n n_size = self._get_conv_output(input_shape)\n \n self.fc1 = nn.Linear(n_size, 50)\n self.fc2 = nn.Linear(50, 10)\n\n # generate input sample and forward to get shape\n def _get_conv_output(self, shape):\n bs = 1\n input = Variable(torch.rand(bs, *shape))\n output_feat = self._forward_features(input)\n n_size = output_feat.data.view(bs, -1).size(1)\n return n_size\n\n def _forward_features(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n return x\n\n def forward(self, x):\n x = self._forward_features(x)\n x = x.view(x.size(0), -1)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = F.relu(self.fc2(x))\n return F.log_softmax(x)\n<ECODE>", "isAccepted": true, "likes": 8, "poster": "fmassa" }, { "contents": "you can try functional api which is truly dynamic.", "isAccepted": true, "likes": null, "poster": "ypxie" }, { "contents": "Hm yeah I did something similar. You definitely have to know the input size. 
The only other option is to write a general function which calculates the shape using conv rules without having to actually run the graph… <SCODE>class Network(nn.Module):\n \n def __init__(self, input_size=(3,32,32)):\n super(Network, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(3,32,3),\n nn.ReLU(),\n nn.Conv2d(32,32,3),\n nn.ReLU(),\n nn.MaxPool2d((3,3))\n )\n self.flat_fts = self.get_flat_fts(input_size, self.features)\n self.classifier = nn.Sequential(\n nn.Linear(self.flat_fts, 100),\n nn.Dropout(p=0.2),\n nn.ReLU(),\n nn.Linear(100,10),\n nn.LogSoftmax()\n )\n \n def get_flat_fts(self, in_size, fts):\n f = fts(Variable(torch.ones(1,*in_size)))\n return int(np.prod(f.size()[1:]))\n \n def forward(self, x):\n fts = self.features(x)\n flat_fts = fts.view(-1, self.flat_fts)\n out = self.classifier(flat_fts)\n return out\n<ECODE>", "isAccepted": true, "likes": 1, "poster": "ncullen93" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "ncullen93" }, { "contents": "Yes, that’s what I do in my example.", "isAccepted": true, "likes": 1, "poster": "fmassa" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "apaszke" }, { "contents": "Actually no, we don’t need that. I think it’s too much work and maintenance for very small gains. You probably want to do that extra forward once.", "isAccepted": true, "likes": null, "poster": "apaszke" }, { "contents": "This should be a feature, seems weird to do a forward pass.", "isAccepted": true, "likes": 7, "poster": "abhigenie92" }, { "contents": "Keras and Lasagne can do it very easily.", "isAccepted": true, "likes": 1, "poster": "xueliang_liu" }, { "contents": "Can you please tell me what is bs here?", "isAccepted": true, "likes": null, "poster": "parichehr" }, { "contents": "bs is for batch_size, which is then below refered to as x.size(0).", "isAccepted": true, "likes": null, "poster": "lkhphuc" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Xingdong_Zuo" }, { "contents": "guys any update about this feature? I also wonder to know the answer to Xingdong_Zuo question. Thanks!", "isAccepted": true, "likes": null, "poster": "mlearnx" }, { "contents": "", "isAccepted": true, "likes": 3, "poster": "iacob" } ]
true
Simple L2 regularization?
null
[ { "contents": "Hi, does simple L2 / L1 regularization exist in pyTorch? I did not see anything like that in the losses. I guess the way we could do it is simply have the data_loss + reg_loss computed, (I guess using nn.MSEloss for the L2), but is an explicit way we can use it without doing it this way? Thanks", "isAccepted": false, "likes": 5, "poster": "Kalamaya" }, { "contents": "Hi, <SCODE>l1_crit = nn.L1Loss(size_average=False)\nreg_loss = 0\nfor param in model.parameters():\n reg_loss += l1_crit(param)\n\nfactor = 0.0005\nloss += factor * reg_loss\n<ECODE> Note that this might not be the best way of enforcing sparsity on the model though.", "isAccepted": false, "likes": 48, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 6, "poster": "Kalamaya" }, { "contents": "Yeah, that’s been added there as an optimization, as L2 regularization is often used.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "what do you recommend which would be a better way to enforce sparsity instead of L1?", "isAccepted": false, "likes": 1, "poster": "ecolss" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ecolss" }, { "contents": "example: <SCODE>xx = nn.Parameter(torch.from_numpy(np.ones((3,3))))\nl1_crit = nn.L1Loss()\nl1_crit(xx)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "ncullen93" }, { "contents": "<SCODE>xx = nn.Parameter(torch.from_numpy(np.ones((3,3))))\ntarget = Variable(torch.from_numpy(np.zeros((3,3))))\nl1_crit = nn.L1Loss()\nl1_crit(xx, target)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "fmassa" }, { "contents": "I have two questions about L1 regularization: \nHow do we backpropagate for L1 regularization? I wonder it because the term is not differentiable.\n \nWhere can I see the implementation of L1 regularization? In the following link, there is only pass.\n", "isAccepted": false, "likes": 2, "poster": "Ja-Keoung_Koo" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "gwg" }, { "contents": "If you add the L1 regularization to the loss as I explained in a previous post, the gradients will be handled by autograd automatically.", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "yibo" }, { "contents": "<SCODE>batch_loss = loss(input=y_pred,target=batch_ys)\nbatch_loss += lambda*R\n<ECODE> ?", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "hughperkins" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "Neta_Zmora" }, { "contents": "<SCODE>weight_p, bias_p = [],[]\nfor name, p in model.named_parameters():\n if 'bias' in name:\n bias_p += [p]\n else:\n weight_p += [p]\n\noptim.SGD(\n [\n {'params': weight_p, 'weight_decay':1e -5},\n {'params': bias_p, 'weight_decay':0}\n ],\n lr=1e-2, momentum=0.9\n)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "shirui-japina" }, { "contents": "We do regularization to handle the high variance problem (overfitting). It’s good if the regularization includes all the learnable parameters (both weight and bias). 
But since bias is only a single parameter out of the large number of parameters, it’s usually not included in the regularization; and exclusion of bias hardly affects the results.", "isAccepted": false, "likes": null, "poster": "Deepak" } ]
false
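Condensing the thread: L2 is usually delegated to the optimizer's weight_decay, while an L1 penalty is easiest to write directly on the parameters instead of going through nn.L1Loss (which needs a target). A sketch:

<SCODE>
import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Linear(10, 2)
criterion = nn.MSELoss()

# L2 via the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

x = Variable(torch.randn(4, 10))
y = Variable(torch.randn(4, 2))

# manual L1 on the weights only; biases are left unregularized
l1_penalty = 0
for name, param in model.named_parameters():
    if 'bias' not in name:
        l1_penalty = l1_penalty + param.abs().sum()

loss = criterion(model(x), y) + 0.0005 * l1_penalty
optimizer.zero_grad()
loss.backward()
optimizer.step()
<ECODE>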
Problems with target arrays of int (int32) types in loss functions
null
[ { "contents": "(On a side note, would you recommend using doubles and longs over floats and ints performance-wise?) <SCODE>TypeError: FloatClassNLLCriterion_updateOutput \nreceived an invalid combination of arguments - \ngot (int, torch.FloatTensor, torch.IntTensor, torch.FloatTensor, bool, NoneType, torch.FloatTensor), \nbut expected (int state, torch.FloatTensor input, torch.LongTensor target, torch.FloatTensor output, bool sizeAverage, [torch.FloatTensor weights or None], torch.FloatTensor total_weight)\n<ECODE> I was wondering if this is desired behavior (i.e., that the the loss function expects LongTensors)? <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 20, kernel_size=5)\n self.conv2 = nn.Conv2d(20, 32, kernel_size=5)\n self.conv2_drop = nn.Dropout2d(p=0.5)\n self.fc1 = nn.Linear(800, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), kernel_size=2, stride=2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 800)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = F.relu(self.fc2(x))\n return F.log_softmax(x)\n\nmodel = Net()\nif torch.cuda.is_available():\n model.cuda()\n\noptimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\n\n\nbatch_size=64\nmodel.train()\nfor step in range(1000):\n offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n data = train_data32[offset:(offset + batch_size), :, :, :]\n target = train_labels[offset:(offset + batch_size)]\n print('orig data type', data.dtype)\n print('orig data type', target.dtype)\n if torch.cuda.is_available():\n data, target = data.cuda(), target.cuda()\n data, target = Variable(torch.from_numpy(data)), Variable(torch.from_numpy(target))\n optimizer.zero_grad()\n print('input batch dim:', data.size(), 'type', )\n output = model(data)\n print('output batch dim:', output.size())\n print('target batch dim:', target.size())\n loss = F.nll_loss(output, target.long())\n loss.backward()\n optimizer.step()\n break\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "rasbt" }, { "contents": "Hi, I’ll try to address your questions in the following points. For training neural networks, using float is more than enough precision-wise, so no need for double. For keeping track of indices, int32 might not be enough for large models, so int64 (long) is preferred. That’s probably one of the reasons why we use long whenever we pass indices to functions (including NLLLoss) Note that you can also convert a numpy array to a tensor using torch.Tensor(numpy_array), and you can specify the type of the output tensor you want, in your case torch.LongTensor(numpy_array). This constructor does not share memory with the numpy array, so it’s slower and less memory efficient than the from_numpy equivalent. You can get the type of the tensor by passing no arguments to the type function, so tensor.type() returns the type of the tensor, and you can do things like <SCODE>tensor = torch.rand(3).double()\nnew_tensor = torch.rand(5).type(tensor.type())\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "fmassa" }, { "contents": "it’s on our list of things to do to allow Int labels as well, but right now it is expected behavior to ask for LongTensors as labels. 
You can simply get the class name: <SCODE>x = torch.randn(10)\nprint(x.__class__)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Thanks for the responses, again, that’s very helpful and useful to know.", "isAccepted": false, "likes": null, "poster": "rasbt" } ]
false
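In practice, when labels arrive as int32 NumPy arrays, cast them to int64 once before handing them to the loss; both routes below end in the LongTensor that NLLLoss expects:

<SCODE>
import numpy as np
import torch

train_labels = np.array([0, 2, 1, 3], dtype=np.int32)

# route 1: cast the numpy array, then wrap it
target = torch.from_numpy(train_labels.astype(np.int64))

# route 2: wrap first, then cast the tensor
target2 = torch.from_numpy(train_labels).long()

print(target.type(), target2.type())  # torch.LongTensor, twice
<ECODE>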
Why use nn.Conv2D over nn.functional.conv2d?
null
[ { "contents": "Hi, I stumbled upon this question because I am trying to have control over how my convolutional weights are initialized. The API for both of those however seems different. For the former: torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) For the latter: class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True) So, my questions are the following: Thanks", "isAccepted": false, "likes": 2, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "smth" } ]
false
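Since the question was motivated by controlling initialization: with the functional interface you own the weight tensor yourself, so you can initialize it however you like and pass it in at call time. A sketch (the shapes follow the (out_channels, in_channels, kH, kW) convention):

<SCODE>
import torch
import torch.nn.functional as F
from torch.autograd import Variable

x = Variable(torch.randn(1, 3, 8, 8))

# hand-initialized weight and bias, owned by your code, not by a module
weight = Variable(torch.randn(6, 3, 3, 3) * 0.01, requires_grad=True)
bias = Variable(torch.zeros(6), requires_grad=True)

out = F.conv2d(x, weight, bias, stride=1, padding=1)
print(out.size())  # (1, 6, 8, 8)
<ECODE>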
Are GPU and CPU random seeds independent?
null
[ { "contents": "<SCODE>torch.manual_seed(args.seed)\nif args.cuda:\n torch.cuda.manual_seed(args.seed)\n<ECODE> torch.manual_seed(args.seed) sufficient?", "isAccepted": false, "likes": 3, "poster": "rasbt" }, { "contents": "It is sufficient for CPU determinism, but it won’t affect the GPU PRNG state. We’ve been thinking about merging these two, and we’ll probably do so in the future.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks! And merging them in future sounds like a good idea.", "isAccepted": false, "likes": null, "poster": "rasbt" }, { "contents": "Agree, less chance for mysterious inconsistency for programmers who’re not aware of this.", "isAccepted": false, "likes": null, "poster": "Alex_Choy" } ]
false
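Until the two PRNGs are merged, seeding both explicitly looks like this (manual_seed_all covers the multi-GPU case):

<SCODE>
import torch

seed = 42
torch.manual_seed(seed)                  # CPU generator
if torch.cuda.is_available():
    torch.cuda.manual_seed(seed)         # current GPU
    torch.cuda.manual_seed_all(seed)     # all GPUs
<ECODE>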
Equivalent of np.reshape() in PyTorch?
null
[ { "contents": "", "isAccepted": false, "likes": 4, "poster": "Kalamaya" }, { "contents": "<SCODE>>>> import torch\n>>> t = torch.ones((2, 3, 4))\n>>> t.size()\ntorch.Size([2, 3, 4])\n>>> t.view(-1, 12).size()\ntorch.Size([2, 12])<ECODE>", "isAccepted": false, "likes": 12, "poster": "rasbt" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "A caveat is that it has to be contiguous, but that matches numpy as far as I know.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "tachim" }, { "contents": "Since people have asked about this several times – yes. You just have to define the module version yourself: <SCODE>class View(nn.Module):\n def __init__(self, *shape):\n super(View, self).__init__()\n self.shape = shape\n def forward(self, input):\n return input.view(*shape)\n\nsequential_model = nn.Sequential([Linear(10, 20), View(-1, 5, 4)])\n<ECODE>", "isAccepted": false, "likes": 14, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "mderakhshani" }, { "contents": "You can use permute in pytorch to specify its order of reshaping. <SCODE>t = torch.rand((2, 3, 4))\nt = t.permute(1, 0, 2)\n<ECODE> this can reshape its order", "isAccepted": false, "likes": 6, "poster": "SherlockLiao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "cosmozhang1988" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "colesbury" }, { "contents": "Thank you!!!", "isAccepted": false, "likes": null, "poster": "cosmozhang1988" }, { "contents": "why is this not called reshape?", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "My guess is that it’s been done to be consistent with torch rather than numpy, which makes sense. However, yeah, some of the naming conventions are a bit annoying for people coming from NumPy, not Torch", "isAccepted": false, "likes": null, "poster": "rasbt" }, { "contents": "I think the torchTensor.view() is same as to np.reshape() after my experiment. And torchTesnor.permute is same as to np.transpose. But I have a question about what is the use of np.reshape. Because in image processing domain, reshape will change the structure information of the image and that is fatal.", "isAccepted": false, "likes": null, "poster": "Eric_K" }, { "contents": "In image processing field, we should use permute in my opinion then what is the meaning of view()'s existence? Thank you!", "isAccepted": false, "likes": null, "poster": "Eric_K" }, { "contents": "<SCODE>linear_input = conv_output.view(batch_size, channels_out*height_out*width_out)\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "jpeg729" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "Elias_Vansteenkiste" } ]
false
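On the contiguity caveat mentioned above: view never copies memory, so it refuses tensors whose layout no longer matches their shape (e.g. after a transpose); calling contiguous() first makes a compact copy that view accepts:

<SCODE>
import torch

t = torch.randn(3, 4)
tt = t.t()                        # transpose: same storage, permuted strides
# tt.view(12)                     # would raise: tensor is not contiguous
flat = tt.contiguous().view(12)   # copy into contiguous memory, then view
print(flat.size())
<ECODE>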
Visual watcher when training/evaluating or tensorboard equivalence?
null
[ { "contents": "Hi, As leveraging python ecosystem seems one of the main reasons for switching to pytorch from Torch 7, would you provide guidance to visually overseeing training/evaluating with 3rd party Python libraries that is hard to do in pure Torch 7 ? For example,", "isAccepted": false, "likes": null, "poster": "pengsun" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pengsun" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "edgarriba" }, { "contents": "Hi, noob question here - might be especially relevant for people moving over from TensorFlow? I just wonder if anyone has got Crayon working for multi-threads? I can’t figure it out? I keep getting this error <SCODE>Traceback (most recent call last):\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/multiprocessing/process.py\", line 249, in _bootstrap\n self.run()\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/multiprocessing/process.py\", line 93, in run\n self._target(*self._args, **self._kwargs)\n File \"/home/ajay/PythonProjects/PyT_Neural_Arch_Search_v1_2/train_v1.py\", line 201, in train\n foo.get_scalar_values(\"Mean Reward\")\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/site-packages/pycrayon/crayon.py\", line 167, in get_scalar_values\n return json.loads(r.text)\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/json/__init__.py\", line 354, in loads\n return _default_decoder.decode(s)\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/json/decoder.py\", line 339, in decode\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n File \"/home/ajay/anaconda3/envs/pyphi/lib/python3.6/json/decoder.py\", line 357, in raw_decode\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\n<ECODE> It works fine through when I run the following from the interpreter, <SCODE>from pycrayon import CrayonClient\ncc = CrayonClient( hostname=\"http://127.0.1.1\" , port=8889)\nfoo = cc.create_experiment(\"train_\" + str(rank))\n\nfoo.add_scalar_value(\"Mean Reward\", mean_reward, step = episode_count)\nfoo.add_scalar_value(\"Max Reward\" , max_reward, step = episode_count)\n\nfoo.get_scalar_values(\"Mean Reward\")\nfoo.get_scalar_values(\"Max Reward\")\n<ECODE>", "isAccepted": false, "likes": null, "poster": "AjayTalati" }, { "contents": "Nevermind, it works without the extra lines, <SCODE>foo.get_scalar_values(\"Mean Reward\")\nfoo.get_scalar_values(\"Max Reward\")\n<ECODE> Also I needed to give distinct date time’s for each of the threads, <SCODE>cc = CrayonClient( hostname=\"http://127.0.1.1\" , port=8889)\nexp_name = \"train_\" + str(rank) + \"_\" + datetime.now().strftime('train_%m-%d_%H-%M') # prevents server errors ??\nfoo = cc.create_experiment(exp_name)<ECODE>", "isAccepted": false, "likes": 1, "poster": "AjayTalati" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ibadami" } ]
false
Variable cannot be compared with other parameters?
null
[ { "contents": "When I try to find out some Variable result who is the biggest. However, I found that “Variable” can not compare with other parameters, such as ndarray or FloatTensor. Thus, is there any thing wrong with Variable, or what should I do?", "isAccepted": false, "likes": null, "poster": "cumttang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Oh, thanks. Maybe there is another way to solve this problem for transforming Variable into Ndarray. However, I think to use comparison operators for variable is more convenient.", "isAccepted": false, "likes": null, "poster": "cumttang" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "cumttang" } ]
false
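A concrete workaround in the meantime: compare on the underlying tensor (no gradient flows through a comparison anyway), or use torch.max to locate the largest entry. A sketch:

<SCODE>
import torch
from torch.autograd import Variable

a = Variable(torch.Tensor([1.0, 3.0, 2.0]))

mask = a.data > 1.5                  # ByteTensor of 0/1 flags
value, index = torch.max(a.data, 0)  # largest entry and its position
print(mask)
print(value, index)
<ECODE>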
Parallelism via CPU
null
[ { "contents": "<SCODE>sess = tf.Session(config=tf.ConfigProto(\n intra_op_parallelism_threads=NUM_THREADS))<ECODE>", "isAccepted": false, "likes": null, "poster": "rasbt" }, { "contents": "Hi,", "isAccepted": false, "likes": 2, "poster": "fmassa" }, { "contents": "Sounds awesome! Thanks!", "isAccepted": false, "likes": null, "poster": "rasbt" } ]
false
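For reference while the documentation catches up, the knob that corresponds to TensorFlow's intra_op_parallelism_threads is torch.set_num_threads, which caps the thread pool used by CPU ops:

<SCODE>
import torch

torch.set_num_threads(4)         # use at most 4 intra-op threads on CPU
print(torch.get_num_threads())
<ECODE>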
Cannot install via pip on LINUX?
null
[ { "contents": "I am getting the following error when I try to install on my 64-bit Ubuntu 14.04 LTS system. The command I run is: And the error I get is: After that it says: For additional context, I have made a virtual environment (called pytorch) first, and then within that environment, I tried to run the install command.", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" } ]
false
Weight initialization
null
[ { "contents": "How I can initialize variables, say Xavier initialization?", "isAccepted": false, "likes": 23, "poster": "Hamid" }, { "contents": "", "isAccepted": false, "likes": 6, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 12, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "michael_k" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "Does it make any sense what I am saying? If it does not, I can try better and with an example.", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "You first define your name check function, which applies selectively the initialisation. <SCODE>def weights_init(m):\n classname = m.__class__.__name__\n if classname.find('Conv') != -1:\n xavier(m.weight.data)\n xavier(m.bias.data)\n<ECODE> <SCODE>net = Net() # generate an instance network from the Net class\nnet.apply(weights_init) # apply weight init\n<ECODE>", "isAccepted": false, "likes": 14, "poster": "Atcold" }, { "contents": "A less Lua way of doing that would be to check if some module is an instance of a class. This is the recommended way: <SCODE>def weights_init(m):\n if isinstance(m, nn.Conv2d):\n xavier(m.weight.data)\n xavier(m.bias.data)\n<ECODE>", "isAccepted": false, "likes": 29, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "<SCODE>size = m.weight.size() # returns a tuple\nfan_out = size[0] # number of rows\nfan_in = size[1] # number of columns\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Atcold" }, { "contents": "my code: <SCODE>def weight_init(m): \n\tif isinstance(m, nn.Linear):\n\t\tsize = m.weight.size()\n\t\tfan_out = size[0] # number of rows\n\t\tfan_in = size[1] # number of columns\n\t\tvariance = np.sqrt(2.0/(fan_in + fan_out))\n\t\tm.weight.data.normal_(0.0, variance)\n\n\nclass Residual(nn.Module):\n\tdef __init__(self,dropout, shape, negative_slope, BNflag = False):\n\t\tsuper(Residual, self).__init__()\n\t\tself.dropout = dropout\n\t\tself.linear1 = nn.Linear(shape[0],shape[1])\n\t\tself.linear2 = nn.Linear(shape[1],shape[0])\n\t\tself.dropout = nn.Dropout(self.dropout)\n\t\tself.BNflag = BNflag\n\t\tself.batch_normlization = nn.BatchNorm1d(shape[0])\n\t\tself.leakyRelu = nn.LeakyReLU(negative_slope = negative_slope , inplace=False)\n\n\tdef forward(self, X):\n\t\tx = X\n\t\tif self.BNFlag:\n\t\t\tx = self.batch_normlization(x)\n\t\tx = self.leakyRelu(x)\n\t\tx = self.dropout(x)\n\t\tx = self.linear1(x)\n\t\tif self.BNFlag:\n\t\t\tx = self.batch_normlization(x)\n\t\tx = self.leakyRelu(x)\n\t\tx = self.dropout(x)\n\t\tx = self.linear2(x)\n\t\tx = torch.add(x,X)\n\t\treturn x\n\t\t\n\t\t\nclass FullyCN(nn.Module):\n\tdef __init__(self, args):\n\t\tsuper(FullyCN, self).__init__()\n\t\tself.numlayers = arg.sm-num-hidden-layers\n\t\tself.learning-rate= args.sm-learning-rate\n\t\tself.dropout = arg.sm-dropout-prob\n\t\tself.BNflag = args.sm-bn\n\t\tself.shape = [args.sm-input-size,args.sm-num-hidden-units]\t\t\n\t\tself.res = Residual(self.dropout,self.shape,args.sm_act_param,self.self.BNflag)\n\t\tself.res(weight_init)\n\t\tself.res-outpus = []\n\n\tdef forward(self,X):\n\t\tself.res-outpus.append(self.res(X))\n\t\tfor i in 
range(self.numlayers):\n\t\t\tself.res-outpus.append(self.res(self.res-outpus[-1]))\n\t\treturn self.res-outpus[-1]<ECODE>", "isAccepted": false, "likes": 2, "poster": "Hamid" }, { "contents": "sorry about confusion", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "Correct.", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "But I have called weight_init once for the class while I call linear layers in a for loop (i.e., there are multiple sets of variables).", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "<SCODE>net = Residual() # generate an instance network from the Net class\nnet.apply(weights_init) # apply weight init\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 9, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Hamid" } ]
false
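Putting the pieces of the thread together into one runnable sketch; the xavier helper below is a hand-rolled Glorot-normal fill (later releases ship ready-made initializers under torch.nn.init, so treat this as illustrative):

<SCODE>
import math
import torch
import torch.nn as nn

def xavier(tensor):
    # Glorot-style normal fill for a 2-D weight, in-place
    fan_out, fan_in = tensor.size(0), tensor.size(1)
    std = math.sqrt(2.0 / (fan_in + fan_out))
    tensor.normal_(0, std)

def weights_init(m):
    if isinstance(m, nn.Linear):
        xavier(m.weight.data)
        m.bias.data.zero_()  # biases don't need xavier

net = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))
net.apply(weights_init)  # runs weights_init on every submodule
<ECODE>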
Cuda() failed with “The NVIDIA driver on your system is too old”
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "rohjunha" }, { "contents": "That actually sounds like a docker issue. When PyTorch runs, we check with the CUDA driver API if the driver is sufficient, and we report back what it says… Are you using NVIDIA-docker? because you need that to run docker images with GPU support.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "rohjunha" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kaiumba" } ]
false
High memory usage while training
null
[ { "contents": "<SCODE>class testNet(nn.Module):\n def __init__(self):\n super(testNet, self).__init__()\n self.rnn = nn.RNN(input_size=200, hidden_size=1000, num_layers=1)\n self.linear = nn.Linear(1000, 100)\n\n def forward(self, x, init):\n x = self.rnn(x, init)[0]\n y = self.linear(x.view(x.size(0)*x.size(1), x.size(2)))\n return y.view(x.size(0), x.size(1), y.size(1))\n\nnet = testNet()\ninit = Variable(torch.zeros(1, 4, 1000))\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)\n\ntotal_loss = 0.0\nfor i in range(10000): #10000 mini-batch\n input = Variable(torch.randn(1000, 4, 200)) #Seqlen = 1000, batch_size = 4, feature = 200\n target = Variable(torch.LongTensor(4, 1000).zero_())\n\n optimizer.zero_grad()\n output = net(input, init)\n loss = criterion(output.view(-1, output.size(2)), target.view(-1))\n loss.backward()\n optimizer.step()\n total_loss += loss[0]\n\nprint(total_loss)\n<ECODE> I expect memory usage not increasing per mini-batch. What might be the problem? (Correct me if my script is wrong)", "isAccepted": false, "likes": 1, "poster": "NgPDat" }, { "contents": "I’m trying to reproduce your results. So far, my run is pretty stable at around 105MB, after 400 mini-batches, I will wait for some time.", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 14, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "NgPDat" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "NgPDat" } ]
false
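The likely culprit in loops like the one above is accumulating the loss Variable itself: indexing it with loss[0] returned a Variable in old releases, which kept the entire graph of that iteration alive. A one-line sketch of the fix (loss.item() is the later API; loss.data[0] was the 0.x equivalent):

<SCODE>
total_loss += loss.item()  # a plain Python float: no reference to the graph survives
<ECODE>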
Train Neural network problem in reinforcement learning
null
[ { "contents": "<SCODE>self.value_optimizer.zero_grad()\npreValue = self.value_net(state).data\nnextValue = self.value_net(next_state).data\nexpectedValue = (self.gamma * nextValue) + reward\n\npreValue = Variable(preValue)\nexpectedValue = Variable(expectedValue)\nloss = F.smooth_l1_loss(preValue, expectedValue)\nloss.backward()\nself.value_optimizer.step()\n\n**self._execution_engine.run_backward((self,), (gradient,), retain_variables)**\n**RuntimeError: there are no graph nodes that require computing gradients**\n<ECODE>", "isAccepted": false, "likes": null, "poster": "cumttang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "<SCODE>self.value_optimizer.zero_grad()\n# Here, when you unpack the data, you detach the data from the graph\n# No backpropagation through the model is possible, because you got rid\n# of the reference to the grpah.\npreValue = self.value_net(state).data\nnextValue = self.value_net(next_state).data\nexpectedValue = (self.gamma * nextValue) + reward\n\n# Here, you repack the tensors in Variables, but the history of\n# operations is not retained - they are leaf Variables.\n# Also you didn't specify that they require gradients (they don't\n# by default).\npreValue = Variable(preValue)\nexpectedValue = Variable(expectedValue)\nloss = F.smooth_l1_loss(preValue, expectedValue)\n# At this point your while graph looks like this - no model there:\n# preValue expectedValue\n# \\ /\n# smooth_f1_loss\n# |\n# loss\nloss.backward()\nself.value_optimizer.step()\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "I have tried it, but it seems not work.", "isAccepted": false, "likes": null, "poster": "cumttang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "cumttang" }, { "contents": "Try this: <SCODE>self.value_optimizer.zero_grad()\npreValue = self.value_net(state)\nnextValue = self.value_net(next_state).detach() # don't backprop this way\nexpectedValue = (self.gamma * nextValue) + reward\nloss = F.smooth_l1_loss(preValue, expectedValue)\nloss.backward()\nself.value_optimizer.step()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I have tried it, but this problem occurs. <SCODE>**expectedValue = (self.gamma * nextValue) + reward**\n<ECODE>", "isAccepted": false, "likes": null, "poster": "cumttang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
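The error remaining at the expectedValue line is typically a type mismatch: nextValue is a Variable while reward is a plain number or Tensor. A hedged sketch of the two usual fixes, continuing the snippet above:

<SCODE>
# if reward is a plain Python number, scalar arithmetic broadcasts:
expectedValue = self.gamma * nextValue + float(reward)
# if reward is a Tensor, wrap it so both operands are Variables:
expectedValue = self.gamma * nextValue + Variable(reward)
<ECODE>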
How to install PyTorch such that can use GPUs
null
[ { "contents": "EDIT: I use net.cuda() and this seems to work. However when I try to run my script I get this error: (If I do not try to run net.cuda() my script works find on the CPU).", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "NgPDat" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "Nevermind, I think it works now: I had forgotten to do the same for the labels as well. Thanks!", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "Hi, <SCODE>python translate.py -model ../onmt-model/onmt_model_en_fr_b1M-261c69a7.pt -src ../test.txt -output ../test.tok\n<ECODE> , but failed with the following error: <SCODE>Traceback (most recent call last):\n File \"translate.py\", line 116, in <module>\n main()\n File \"translate.py\", line 55, in main\n translator = onmt.Translator(opt)\n File \"/root/OpenNMT-py/onmt/Translator.py\", line 11, in __init__\n checkpoint = torch.load(opt.model)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py\", line 222, in load\n return _load(f, map_location, pickle_module)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py\", line 355, in _load\n return legacy_load(f)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py\", line 300, in legacy_load\n obj = restore_location(obj, location)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py\", line 85, in default_restore_location\n result = fn(storage, location)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/serialization.py\", line 67, in _cuda_deserialize\n return obj.cuda(device_id)\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/_utils.py\", line 56, in _cuda\n with torch.cuda.device(device):\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/cuda/__init__.py\", line 136, in __enter__\n _lazy_init()\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/cuda/__init__.py\", line 96, in _lazy_init\n _check_driver()\n File \"/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/cuda/__init__.py\", line 70, in _check_driver\n http://www.nvidia.com/Download/index.aspx\"\"\")\nAssertionError:\nFound no NVIDIA driver on your system. Please check that you\nhave an NVIDIA GPU and installed a driver from\nhttp://www.nvidia.com/Download/index.aspx\n<ECODE> It looks having no GPU support. Yes, I’m using a VM, so don’t have GPU. Is this right? 
<SCODE>Traceback (most recent call last):\n File \"translate.py\", line 123, in <module>\n main()\n File \"translate.py\", line 56, in main\n translator = onmt.Translator(opt)\n File \"/root/OpenNMT-py/onmt/Translator.py\", line 12, in __init__\n checkpoint = torch.load(opt.model)\n File \"/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py\", line 229, in load\n return _load(f, map_location, pickle_module)\n File \"/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py\", line 362, in _load\n return legacy_load(f)\n File \"/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py\", line 307, in legacy_load\n obj = restore_location(obj, location)\n File \"/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py\", line 85, in default_restore_location\n result = fn(storage, location)\n File \"/root/anaconda3/lib/python3.6/site-packages/torch/serialization.py\", line 67, in _cuda_deserialize\n return obj.cuda(device_id)\n File \"/root/anaconda3/lib/python3.6/site-packages/torch/_utils.py\", line 57, in _cuda\n with torch.cuda.device(device):\n File \"/root/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 129, in __enter__\n _lazy_init()\n File \"/root/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 89, in _lazy_init\n _check_driver()\n File \"/root/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 56, in _check_driver\n raise AssertionError(\"Torch not compiled with CUDA enabled\")\nAssertionError: Torch not compiled with CUDA enabled\n<ECODE> Could anyone help? Thanks a lot!", "isAccepted": false, "likes": null, "poster": "lifengd" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "@smth Hi Smth, Do you mean the Python edition of OpenNMT require GPU? Not ok with CPU only on VM? And seems there is no such a control flag from the command help: translate.py Thanks!", "isAccepted": false, "likes": null, "poster": "lifengd" } ]
false
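Both tracebacks come from deserializing a checkpoint whose storages were saved on a GPU. The standard fix is to remap the storages to the CPU at load time via torch.load's map_location argument:

<SCODE>
# load a GPU-trained checkpoint on a CPU-only machine
checkpoint = torch.load(opt.model, map_location=lambda storage, loc: storage)
<ECODE>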
Serving Model Trained with PyTorch
null
[ { "contents": "Tensorflow has Tensorflow Serving. I know pytorch is a framework in its early stages, but how do people serve models trained with pytorch. Must it be from Python? I’m specifically looking to serve from C++.", "isAccepted": false, "likes": 4, "poster": "abc" }, { "contents": "We don’t have a way to serve models from C++ right now, and it’s not a priority for us at this stage. There are many things like distributed training and double backward that we’ll be implementing first. Sorry!", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Would you say that pytorch was built with serving in mind, e.g. for an API, or more for research purposes?", "isAccepted": false, "likes": null, "poster": "abc" }, { "contents": "We’re more research oriented. We’re rather thinking of creating tools to export models to frameworks that are more focused on production usage like Caffe2 and TensorFlow.", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "Also you mentioned double backward. This is the first I’ve heard of it. I found a paper by Yann LeCun on double backpropagation, but was wondering whether it’s common to use such a method.", "isAccepted": false, "likes": null, "poster": "abc" }, { "contents": "Hi, I’m playing with a possible solution for serving from C based on TH and THNN. It’ll be limited to statically compilable graphs of course. I should have something to share in the not so distant future.", "isAccepted": false, "likes": 5, "poster": "lantiga" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 5, "poster": "lantiga" }, { "contents": "Sure that sounds cool. It doesn’t seem hacky, it’s just a graph compiler. It’s a very good start, and will likely be capable of producing small binaries. Let us know when there’s going to be any progress or in case you have any trouble. We’ll definitely showcase your solution somewhere.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Eugenio_Culurciello" }, { "contents": "Making progress. As soon as I get the first MNIST example to compile I’ll share what I have.", "isAccepted": false, "likes": 1, "poster": "lantiga" }, { "contents": "We need to deploy pytorch models to e.g. Android, so we need a method to export a model. This is my starting point. Can you please tell me if I am on the right way or if I am doing something totally stupid? 
<SCODE>import sys\nimport torch\nfrom torch import nn\nfrom torchvision import models\nfrom torch.utils.serialization import load_lua\n\ndef dump(f):\n\ts = str(f.__class__)\n\tsys.stdout.write(s[s.rfind('.')+1:-2]+'(')\n\tfor fa in f.previous_functions:\n\t\tif isinstance(fa[0], torch.autograd.Function):\n\t\t\tdump(fa[0])\n\t\t\tsys.stdout.write(',')\n\t\tif isinstance(fa[0], torch.nn.parameter.Parameter):\n\t\t\tsys.stdout.write('param,')\n\t\telif isinstance(fa[0], torch.autograd.Variable):\n\t\t\tsys.stdout.write('input,')\n\tsys.stdout.write(')')\n\n\nclass MyNet(nn.Module):\n def __init__(self):\n super(MyNet, self).__init__()\n self.conv1 = nn.Conv2d(3, 16, kernel_size=1, bias=False)\n self.bn1 = nn.BatchNorm2d(16)\n self.conv2 = nn.Conv2d(3, 16, kernel_size=1, bias=True)\n\n def forward(self, x):\n return self.bn1(self.conv1(x))+self.conv2(x)\n \n\n#net = models.alexnet()\n#net=load_lua('model.net') #Legacy networks won't work (no support for Variables)\nnet = MyNet()\ninput=torch.autograd.Variable(torch.zeros(1,3,128,128))\noutput=net.forward(input)\ndump(output.creator)\nprint('')\n<ECODE> The output for the simple MyNet will be <SCODE>Add(BatchNorm(ConvNd(input,param,),param,param,),ConvNd(input,param,param,),)\n<ECODE> Thanks", "isAccepted": false, "likes": null, "poster": "mvitez" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Yes, please. I have just sent an invitation request for slack to soumith.", "isAccepted": false, "likes": null, "poster": "mvitez" }, { "contents": "I’m not sure how profoundly things will have to be reworked with the upcoming changes in autograd, but it’s fun anyway.", "isAccepted": false, "likes": 3, "poster": "lantiga" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "lantiga" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "Great, very nice work, thank you.", "isAccepted": false, "likes": 1, "poster": "mvitez" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 5, "poster": "Eugenio_Culurciello" } ]
false
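The export path sketched in this thread eventually became TorchScript. Assuming a later PyTorch with torch.jit, a traced model can be saved from Python and loaded from C++ via torch::jit::load; a minimal sketch:

<SCODE>
import torch
import torchvision

model = torchvision.models.resnet18()
model.eval()
example = torch.randn(1, 3, 224, 224)     # tracing runs the model on a sample input
traced = torch.jit.trace(model, example)  # records the executed graph
traced.save("model.pt")                   # loadable from libtorch in C++
<ECODE>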
Check `gradInput`s
null
[ { "contents": "<SCODE>net = Net()\nh = net(x)\nJ = loss(h, y)\nJ.backward()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "In that case Variable hooks (on the output) should do it.", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
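A sketch of the Variable-hook suggestion: registering a hook on the output makes the gradient flowing into it visible during backward() without touching the module internals:

<SCODE>
def print_grad(grad):
    print('dJ/dh norm:', grad.norm())

h = net(x)
h.register_hook(print_grad)  # called with dJ/dh during the backward pass
J = loss(h, y)
J.backward()
<ECODE>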
Sharing optimizer between processes
null
[ { "contents": "I am wondering if it is possible to share optimizer between different threads. To be specific, when optimizer.step() is applied, modified state of the optimizer should be available for all processes.", "isAccepted": false, "likes": null, "poster": "delgado" }, { "contents": "i cant think of a way to do this, because the optimizer also has python scalars. What are you trying to do?", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "delgado" }, { "contents": "No, I think most of the optimizer state is kept in Tensors, so it should be possible to share it. I’m not 100% now, I’ll need to take a look at RMSprop and can confirm it tomorrow.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks. I will also look at the code and see if I can find some hack.", "isAccepted": false, "likes": null, "poster": "delgado" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "ypxie" }, { "contents": "It’s not. You need to guard it with a mutex yourself", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Just a quick follow-up for those wanting to implement A3C. A3C uses a Hogwild! update style, which means updates are made lock-free. Workers possibly overwrite each other updates at times, but that’s OK. But other than that, it is doable, and not too hard.", "isAccepted": false, "likes": 1, "poster": "mimoralea" } ]
false
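A sketch of the pattern A3C implementations settled on: materialise the optimizer state eagerly and move it into shared memory before forking workers. The state keys below are RMSprop's real ones with default arguments; the Python step counter stays per-process, which is exactly the scalar caveat raised above:

<SCODE>
import torch
import torch.optim as optim

class SharedRMSprop(optim.RMSprop):
    def __init__(self, params, lr=1e-4):
        super(SharedRMSprop, self).__init__(params, lr=lr)
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'] = 0  # plain int: not shared across processes
                state['square_avg'] = torch.zeros_like(p.data).share_memory_()
<ECODE>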
RuntimeError: expected a Variable argument, but got function
null
[ { "contents": "Hello, I get this weird error while running my code. <SCODE>class Residual(nn.Module):\n\tdef __init__(self,dropout, shape, negative_slope, BNflag = False):\n\t\tsuper(Residual, self).__init__()\n\t\tself.dropout = dropout\n\t\tself.linear1 = nn.Linear(shape[0],shape[1])\n\t\tself.linear2 = nn.Linear(shape[1],shape[0])\n\t\tself.dropout = nn.Dropout(self.dropout)\n\t\tself.BNflag = BNflag\n\t\tself.batch_normlization = nn.BatchNorm1d(shape[0])\n\t\tself.leakyRelu = nn.LeakyReLU(negative_slope = negative_slope , inplace=False)\n\n\tdef forward(self, X):\n\t\tx = X\n\t\tif self.BNflag:\n\t\t\tx = self.batch_normlization(x)\n\t\tx = self.leakyRelu(x)\n\t\tx = self.dropout(x)\n\t\tx = self.linear1(x)\n\t\tif self.BNflag:\n\t\t\tx = self.batch_normlization(x)\n\t\tx = self.leakyRelu(x)\n\t\tx = self.dropout(x)\n\t\tx = self.linear2(x)\n\t\tx = torch.add(x,X)\n\t\treturn x\n\nres = Residual(0.5,[100,200],0.2,False)\n<ECODE> I haven’t passed any data yet.", "isAccepted": false, "likes": null, "poster": "Hamid" }, { "contents": "Never mind. problem Solved", "isAccepted": false, "likes": null, "poster": "Hamid" } ]
false
How to check if Model is on cuda
null
[ { "contents": "", "isAccepted": true, "likes": 11, "poster": "napsternxg" }, { "contents": "As replied on the github issues, an easy way is: <SCODE>next(model.parameters()).is_cuda # returns a boolean\n<ECODE>", "isAccepted": true, "likes": 56, "poster": "smth" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "mikolchon" }, { "contents": "I tested it right now, and it works even in pytorch v0.4", "isAccepted": true, "likes": null, "poster": "thadar" }, { "contents": "But why is it even necessary? If a model is on cuda and you call model.cuda() it should be a no-op and if the model is on cpu and you call model.cpu() it should also be a no-op.", "isAccepted": true, "likes": null, "poster": "justusschock" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "mikolchon" }, { "contents": "Personally, I develop and debug 99% of the code on macOS, and then sync it over to a headless cluster, which is why this pattern is useful to me, for example. Yes, e.g., you can now specify the device 1 time at the top of your script, e.g., <SCODE>device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\") \n<ECODE> and then for the model, you can use <SCODE>model = model.to(device)\n<ECODE> The same applies also to tensors, e.g,. <SCODE>for features, targets in data_loader:\n features = features.to(device)\n targets = targets.to(device)<ECODE>", "isAccepted": true, "likes": 9, "poster": "rasbt" }, { "contents": "", "isAccepted": true, "likes": 2, "poster": "chiragraman" }, { "contents": "You can get the device by: <SCODE>next(network.parameters()).device\n<ECODE>", "isAccepted": true, "likes": 14, "poster": "Martin_Ennemoser" }, { "contents": "why doesn’t: <SCODE>self.device\n<ECODE> <SCODE>(Pdb) self.device\n*** torch.nn.modules.module.ModuleAttributeError: 'MyNN' object has no attribute 'device'\n<ECODE>", "isAccepted": true, "likes": 1, "poster": "Brando_Miranda" }, { "contents": "", "isAccepted": true, "likes": 8, "poster": "ptrblck" }, { "contents": "", "isAccepted": true, "likes": null, "poster": "Brando_Miranda" }, { "contents": "Yes, that’s correct.", "isAccepted": true, "likes": 1, "poster": "ptrblck" } ]
true
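A common follow-up pattern for giving a module a device attribute of its own; MyNN and the buffer name are illustrative, and this assumes PyTorch 0.4+ where tensors carry a .device:

<SCODE>
import torch
import torch.nn as nn

class MyNN(nn.Module):
    def __init__(self):
        super(MyNN, self).__init__()
        self.fc = nn.Linear(10, 2)
        # an empty buffer travels with .to()/.cuda() and records the device
        self.register_buffer('device_tracker', torch.empty(0))

    @property
    def device(self):
        return self.device_tracker.device
<ECODE>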
Dropout functional API, advantages/disadvantages?
null
[ { "contents": "I saw in one of the examples that the functional API was used to implement dropout for a conv layer but not for the fully connected layer. Was wondering if this has a specific reason? I pasted the code below <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = F.relu(self.fc2(x))\n return F.log_softmax(x)\n<ECODE> Am I correct that this is the same as the following code? <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc1_drop = nn.Dropout(p=0.5) # added this line\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = self.fc1_drop(x) # added this line\n #x = F.dropout(x, training=self.training) # removed this line\n x = F.relu(self.fc2(x))\n return F.log_softmax(x)\n<ECODE>", "isAccepted": false, "likes": 4, "poster": "rasbt" }, { "contents": "No, they’re equivalent and they expand to calls to the same autograd functions. The same applies for all other layers - both functional and non-functional versions work exactly the same, it’s only a matter of personal taste.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "In that tutorial I probably missed the dropout module and that’s why I didn’t change it to functional. I find using the functional API for parameter-less operations and modules for everything containing parameters the best.", "isAccepted": false, "likes": 5, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "surag" }, { "contents": "After I got back into PyTorch for some research projects lately, I adopted the habit of using the functional API for everything that does not have parameters (in the sense of weights, biases, etc.) Regarding dropout, I think there’s no issue with that as you can specify the training/eval mode via e.g., <SCODE>x = F.dropout(out, p=dropout_prob, training=self.training)\n<ECODE> PS: I don’t use nn.Sequential, though, because I usually like to be able access intermediate states conveniently", "isAccepted": false, "likes": 1, "poster": "rasbt" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "two_four" } ]
false
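One practical difference is still worth remembering with the functional form: it does not see model.eval() unless you forward the flag yourself, which is why the examples pass training=self.training:

<SCODE>
# inside a module's forward(), after model.eval() was called:
x = F.dropout(x)                          # still drops: training defaults to True
x = F.dropout(x, training=self.training)  # tracks train()/eval() like nn.Dropout
<ECODE>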
RNN for sequence prediction
null
[ { "contents": "Hello, Previously I used keras for CNN and so I am a newbie on both PyTorch and RNN. In keras you can write a script for an RNN for sequence prediction like, <SCODE>in_out_neurons = 1\nhidden_neurons = 300\n\nmodel = Sequential() \nmodel.add(LSTM(hidden_neurons, batch_input_shape=(None, length_of_sequences, in_out_neurons), return_sequences=False)) \nmodel.add(Dense(in_out_neurons)) \nmodel.add(Activation(\"linear\")) \n<ECODE> but when it comes to PyTorch I don’t know how to implement it. I directly translate the code above into below, but it doesn’t work. <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.rnn1 = nn.GRU(input_size=seq_len,\n hidden_size=128,\n num_layers=1)\n self.dense1 = nn.Linear(128, 1)\n\n def forward(self, x, hidden):\n x, hidden = self.rnn1(x, hidden)\n x = self.dense1(x)\n return x, hidden\n\n def init_hidden(self, batch_size):\n weight = next(self.parameters()).data\n return Variable(weight.new(128, batch_size, 1).zero_())\n<ECODE> how can I implement something like the keras code? thank you.", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "Apart from this, your module looks good to me!", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "and For example, if you input a sequence <SCODE>[[[ 0.1, 0.2]],\n [[ 0.1, 0.2]],\n [[ 0.3, 0.1]]]\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "Thanks a lot for your help, finally the code below works, <SCODE>import torch\nimport torch.nn as nn\nfrom torch.autograd import Variable\n\nfeatures = 1\nseq_len = 10\nhidden_size = 128\nbatch_size = 32\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.rnn1 = nn.GRU(input_size=features,\n hidden_size=hidden_size,\n num_layers=1)\n self.dense1 = nn.Linear(hidden_size, 1)\n\n def forward(self, x, hidden):\n x, hidden = self.rnn1(x, hidden)\n x = x.select(1, seq_len-1).contiguous()\n x = x.view(-1, hidden_size)\n x = self.dense1(x)\n return x, hidden\n\n def init_hidden(self):\n weight = next(self.parameters()).data\n return Variable(weight.new(1, batch_size, hidden_size).zero_())\n\nmodel = Net()\nmodel.cuda()\nhidden = model.init_hidden()\n\nX_train_1 = X_train[0:batch_size].reshape(seq_len,batch_size,features)\ny_train_1 = y_train[0:batch_size]\nmodel.zero_grad()\nT = torch.Tensor\nX_train_1, y_train_1 = T(X_train_1), T(y_train_1)\nX_train_1, y_train_1 = Variable(X_train_1).cuda(), Variable(y_train_1).cuda()\n\noutput, hidden = model(X_train_1, Variable(hidden.data))\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "moskomule" }, { "contents": "Thanks for your helping, like I wrote above the script works, “literally” but the loss doesn’t decrease over the epochs, so give me some advice. 
I think the related parts are, <SCODE>class Net(nn.Module):\n def __init__(self, features, cls_size):\n super(Net, self).__init__()\n self.rnn1 = nn.GRU(input_size=features,\n hidden_size=hidden_size,\n num_layers=1)\n self.dense1 = nn.Linear(hidden_size, cls_size)\n\n def forward(self, x, hidden):\n x, hidden = self.rnn1(x, hidden)\n x = x.select(0, maxlen-1).contiguous()\n x = x.view(-1, hidden_size)\n x = F.softmax(self.dense1(x))\n return x, hidden\n\n def init_hidden(self, batch_size=batch_size):\n weight = next(self.parameters()).data\n return Variable(weight.new(1, batch_size, hidden_size).zero_())\n\ndef var(x):\n x = Variable(x)\n if cuda:\n return x.cuda()\n else:\n return x\n\nmodel = Net(features=features, cls_size=len(chars))\nif cuda:\n model.cuda()\ncriterion = nn.CrossEntropyLoss()\noptimizer = optim.Adam(model.parameters(), lr=lr)\n\ndef train():\n model.train()\n hidden = model.init_hidden()\n for epoch in range(len(sentences) // batch_size):\n X_batch = var(torch.FloatTensor(X[:, epoch*batch_size: (epoch+1)*batch_size, :]))\n y_batch = var(torch.LongTensor(y[epoch*batch_size: (epoch+1)*batch_size]))\n model.zero_grad()\n output, hidden = model(X_batch, var(hidden.data))\n loss = criterion(output, y_batch)\n loss.backward()\n optimizer.step()\n\nfor epoch in range(nb_epochs):\n train()\n<ECODE> the input is “one-hot” vector and I tried changing its learning rate but the result is the same.", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "I’m not sure, it’s hard to spot bugs in code that you can’t run. Why do you do this: <SCODE>x = x.select(0, maxlen-1).contiguous()\n<ECODE> Don’t you want to return predictions for the whole sequence? It seems to me that you’re only taking the last output.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "\nreturn_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "If you have the full code available somewhere I can take a look.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "OK, thanks. Does interfere in back propagation?", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "How would they interfere? They both should be ok.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Right. 
So now, <SCODE>class Net(nn.Module):\n    ...\n    def forward(self, x, hidden):\n        x, hidden = self.rnn1(x, hidden)\n        x = x.select(0, maxlen-1).contiguous()\n        x = x.view(-1, hidden_size)\n        x = F.relu(self.dense1(x))\n        x = F.log_softmax(self.dense2(x))\n        return x, hidden\n...\ncriterion = nn.NLLLoss()\n...\ndef train():\n    model.train()\n    hidden = model.init_hidden()\n    for epoch in range(len(sentences) // batch_size):\n        X_batch = var(torch.FloatTensor(X[:, epoch*batch_size: (epoch+1)*batch_size, :]))\n        y_batch = var(torch.LongTensor(y[epoch*batch_size: (epoch+1)*batch_size]))\n        model.zero_grad()\n        output, hidden = model(X_batch, var_pair(hidden))\n        loss = criterion(output, y_batch)\n        loss.backward()\n        optimizer.step()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "How to understand this fact? Thank you!", "isAccepted": false, "likes": null, "poster": "CodePothunter" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "I see. Thank you very much!", "isAccepted": false, "likes": null, "poster": "CodePothunter" }, { "contents": "Hi, Thank you", "isAccepted": false, "likes": null, "poster": "osm3000" } ]
false
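For readers puzzling over the select call above: with the default layout the GRU output is (seq_len, batch, hidden), so taking the last time step is plain indexing in current releases (older ones needed select, hence the code in the thread):

<SCODE>
out, hidden = self.rnn1(x, hidden)  # out: (seq_len, batch, hidden_size)
last = out[-1]                      # (batch, hidden_size), same as out.select(0, seq_len - 1)
y = self.dense1(last)
<ECODE>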
How to get the graph of your DNN drawn?
null
[ { "contents": "Hi all, I am wondering if there is an easy plug-and-play way in order for me to have a nice visualization of my DNN that I have designed? Im after something simple to be honest, just something where I can see the various layers connected to each other to get a sense of what I have coded. Thanks", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ido4848" }, { "contents": "For the moment, we need to wait until the new autograd is merged into master (which should happen next week). Once that’s done, one can start working on a tensorboard integration, as the underlying graph representation from PyTorch shouldn’t change much", "isAccepted": false, "likes": 2, "poster": "fmassa" }, { "contents": "Any updates on the tensorboard integration? How can I contribute?", "isAccepted": false, "likes": 1, "poster": "varun-suresh" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "hhsecond" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "fmassa" }, { "contents": "Sounds good, i’ll wait for that then. Thanks", "isAccepted": false, "likes": null, "poster": "hhsecond" }, { "contents": "Do we have any plans to build own visualization tool based on jit.trace or ONNX? I and my colleagues had built one basic visualizer and we were planning to make a better one. If somebody is working on it already, would like to discuss.", "isAccepted": false, "likes": null, "poster": "hhsecond" } ]
false
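For what eventually became available: the third-party torchviz package renders the autograd graph from any output, which covers the simple plug-and-play case asked about here (assumes pip install torchviz plus a Graphviz binary):

<SCODE>
from torchviz import make_dot

y = model(x)
dot = make_dot(y, params=dict(model.named_parameters()))
dot.render('network', format='pdf')  # writes network.pdf
<ECODE>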
Why is input used as var name in examples and pytorch convention!
null
[ { "contents": "A discussion is needed for adopting common python specific convention to pytorch.", "isAccepted": false, "likes": null, "poster": "napsternxg" }, { "contents": "I know that many people will consider that a “bad practice”, but I seriously don’t think that giving up readability for the sake of satisfying a rule that doesn’t make a lot of sense in this context is worth it.", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Sure, I was just suggesting that not overwriting reserved symbols will bring an increase in the quality of the examples (however small that increase is).", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "If someone sends a PR I’ll accept it, but it’s not a priority for us now, so we’ll do it later otherwise.", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Data set loader questions
null
[ { "contents": "I have downloaded and split the CIFAR 10 data using the given torch.datasets api, now I want to separate out training and validate data, say I want out 50000 sample 49000 as training example and 1000 as validation examples, how do I achieve this?? Also say I keep the batch size as 64, then in the case where I have 50000 training samples last batch will not have the same number of samples as the other batches, how is this case handled??", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "gsp-27" } ]
false
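A sketch of the usual answer: carve the 50000 training indices into two samplers, and note the final short batch is served as-is unless you pass drop_last=True (cifar_train stands for the torchvision dataset object already created):

<SCODE>
import torch
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler

indices = torch.randperm(50000).tolist()
train_sampler = SubsetRandomSampler(indices[:49000])
val_sampler = SubsetRandomSampler(indices[49000:])

train_loader = DataLoader(cifar_train, batch_size=64, sampler=train_sampler)
val_loader = DataLoader(cifar_train, batch_size=64, sampler=val_sampler)
# the last batch simply holds the remaining 49000 % 64 samples;
# pass drop_last=True to discard it instead
<ECODE>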
Using previous output values as part of a loss function
reinforcement-learning
[ { "contents": "I am trying to implement TRPO, and I need the gradient of the network parameters w.r.t. the KL divergence between the current action distribution and the action distribution after the parameters are changed. What is the best way to implement this? How do I make sure action_distribution0 doesn’t get backpropagated through?", "isAccepted": false, "likes": null, "poster": "yoonholee" }, { "contents": "<SCODE>action_distribution0 = model(state).detach()\n# detach() blocks the gradient\n# also action_distribution0 doesn't require grad now, so it can be a loss target\n\n# =========================================\n# change the parameters of the network here\n# =========================================\n\nKL(Network(state), action_distribution0).backward()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "This worked great, thanks!", "isAccepted": false, "likes": null, "poster": "yoonholee" }, { "contents": "<SCODE>fake = netG(noise)\nfake = fake.detach() # this is not an inplace op, so need to re-assign?\noutput = netD(fake)<ECODE>", "isAccepted": false, "likes": null, "poster": "tomsercu" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
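To answer the last question in the thread directly: detach() is out-of-place, so either keep the re-assignment or inline the call:

<SCODE>
fake = netG(noise)
output = netD(fake.detach())  # detach() returns a new Variable; use its return value
<ECODE>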
Proper way to do gradient clipping?
null
[ { "contents": "Is there a proper way to do gradient clipping, for example, with Adam? It seems like that the value of Variable.data.grad should be manipulated (clipped) before calling optimizer.step() method. I think the value of Variable.data.grad can be modified in-place to do gradient clipping. Is it safe to do? Also, Is there a reason that Autograd RNN cells have separated biases for input-to-hidden and hidden-to-hidden? I think this is redundant and has a some overhead.", "isAccepted": false, "likes": 14, "poster": "kim.seonghyeon" }, { "contents": "The reason for that is that it has a nice user facing API where you have both weight tensors exposed. Also, it opens up a possibility of doing batched matrix multiply on the inputs for all steps, and then only applying the hidden-to-hidden weights (it’s not yet added there). If you measure the overhead and prove us that it can be implemented in a clean and fast way, we’ll happily accept a PR or change it.", "isAccepted": false, "likes": 6, "poster": "apaszke" }, { "contents": "I have tested nn.LSTM against simple LSTM implementation and found almost no difference in the performance. Maybe I overestimated the overhead of the additional addition with simple guess. Thank you!", "isAccepted": false, "likes": null, "poster": "kim.seonghyeon" }, { "contents": "If you’re running on GPU you’ll also likely see great speedups from using cuDNN LSTM implementation.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "kim.seonghyeon" }, { "contents": "In the sense, does optimizer itself call backward? In which case, the code below should pass optimizer to the clip function right? <SCODE> optimizer.zero_grad()\n output, hidden = model(data, hidden)\n loss = criterion(output.view(-1, ntokens), targets)\n loss.backward()\n clipped_lr = lr * clip_gradient(model, clip)\n for p in model.parameters():\n p.data.add_(-clipped_lr, p.grad.data)\n \n optimizer.step()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "dmadeka1" }, { "contents": "", "isAccepted": false, "likes": 8, "poster": "apaszke" }, { "contents": "Maybe I’m doing something wrong here, but using gradient clipping like <SCODE>nn.utils.clip_grad_norm(model.parameters(), clip)\nfor p in model.parameters():\n p.data.add_(-lr, p.grad.data)\n<ECODE> <SCODE>Epoch: 1/10... Step: 10... Loss: 4.4288\nEpoch: 1/10... Step: 20... Loss: 4.4274\nEpoch: 1/10... Step: 30... Loss: 4.4259\nEpoch: 1/10... Step: 40... Loss: 4.4250\nEpoch: 1/10... Step: 50... Loss: 4.4237\nEpoch: 1/10... Step: 60... Loss: 4.4223\nEpoch: 1/10... Step: 70... Loss: 4.4209\nEpoch: 1/10... Step: 80... Loss: 4.4193\nEpoch: 1/10... Step: 90... Loss: 4.4188\nEpoch: 1/10... Step: 100... Loss: 4.4174\n<ECODE> And without gradient clipping, everything else equal: <SCODE>Epoch: 1/10... Step: 10... Loss: 3.2837\nEpoch: 1/10... Step: 20... Loss: 3.1901\nEpoch: 1/10... Step: 30... Loss: 3.1512\nEpoch: 1/10... Step: 40... Loss: 3.1296\nEpoch: 1/10... Step: 50... Loss: 3.1170\nEpoch: 1/10... Step: 60... Loss: 3.0758\nEpoch: 1/10... Step: 70... Loss: 2.9787\nEpoch: 1/10... Step: 80... Loss: 2.9104\nEpoch: 1/10... Step: 90... Loss: 2.8271\nEpoch: 1/10... Step: 100... Loss: 2.6813\n<ECODE> There is probably something I don’t understand, but I’m just switching out those two bits of code.", "isAccepted": false, "likes": null, "poster": "mcleonard" }, { "contents": "Maybe you’re clipping them to very small values. 
It’s a possible effect.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "The one that comes with nn.utils clips in proportion to the magnitude of the gradients. Thus you’d like to make sure it is not too small for your particular model, as Adam said (I think :p). The old-fashioned way of clipping/clamping is <SCODE>def gradClamp(parameters, clip=5):\n    for p in parameters:\n        p.grad.data.clamp_(min=-clip, max=clip)<ECODE>", "isAccepted": false, "likes": 2, "poster": "WendyShang" }, { "contents": "for people trying to just get an answer quickly: <SCODE>torch.nn.utils.clip_grad_norm(mdl_sgd.parameters(), clip)\n<ECODE> or with in-place clamp: <SCODE>W.grad.data.clamp_(-clip, clip)\n<ECODE> also similar Q:", "isAccepted": false, "likes": 40, "poster": "Brando_Miranda" }, { "contents": "I thought nn.utils.clip_grad_norm(model.parameters(), clip) was supposed to finish the job. Can someone give a more explicit explanation? Is it because after I use gradient clipping, I may not use the Adam optimizer?", "isAccepted": false, "likes": 5, "poster": "ntubertchen" }, { "contents": "Regarding the code you ask about: <SCODE>for p in model.parameters():\n    p.data.add_(-lr, p.grad.data)\n<ECODE> <SCODE>p.data = p.data + (-lr * p.grad.data)\n<ECODE> In other words, this performs a similar function to optimizer.step(), using the gradients to update the model parameters, but without the extra sophistication of a torch.optim.Optimizer. If you use the above code, then you should not use an optimizer (and vice-versa).", "isAccepted": false, "likes": 16, "poster": "Neta_Zmora" }, { "contents": "Best regards Thomas", "isAccepted": false, "likes": 20, "poster": "tom" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "hughperkins" }, { "contents": "Thanks. Where does this go in relation to forward and backward propagation?", "isAccepted": false, "likes": 2, "poster": "Jared_77" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jukiewiczm" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jukiewiczm" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "aaronlelevier" } ]
false
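On where clipping goes relative to forward and backward propagation: after the gradients exist and before the optimizer consumes them. A sketch with the later in-place-named API (clip_grad_norm_; early releases spell it without the trailing underscore):

<SCODE>
optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()                                          # gradients now populated
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # rescale them in place
optimizer.step()                                         # update with clipped grads
<ECODE>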
Text classification unexpectedly slow
null
[ { "contents": "I wrote a simple demo for short text classification but the speed is unexpectedly slow. When I tried to find out where the bottleneck is, it turns out to be intractable. At first, the bottleneck is this line: However, after commenting out the above line, it slows down at these lines in get_batch() function: Is there any problem in the code? I ran this script on GPU (Titan X) with cuda 8.0, python 2.7, ubuntu 16 and pytorch was installed by pip. The data was randomly generated. The code is attached below:", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "If you want to reliably benchmark the compute time of your model do this: <SCODE>torch.cuda.synchronize()\nstart = # get start time\noutput = model(input)\ntorch.cuda.synchronize()\nend = # get end time\n<ECODE> Can you expect what “unexpectedly slow” means? What is the runtime and what did you expect? It seems that you’re using a convolutional kernel size of 3x300, and that will be incredibly costly to compute (especially with 200 output channels).", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "For the slow part, I time the training on one batch and print it in the code. It shows that the training speed is around 13 instances per sec. With larger graph using tensorflow, the same machine can easily achieve thousands of instances per sec. In the pytorch code above, every batch takes about 8 sec. Thanks.", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "Try to use more batches, the first one will be always slower, because lot of time will be spent allocating memory that will be cached for subsequent runs. Are the batch sizes and model parameters exactly the same as in tensorflow? I can try to run your model tomorrow, but it would be helpful if you could also provide me with the code for tensorflow implementation.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "The 8 sec is not the first batch, it is on average. The batch size for tensorflow is 1024. There are four parallel convolutional layers for my tensorflow model, the kernel sizes are 1X250, 3X250, 4X250, 5X250. And then some hidden layers. That is a much larger graph than the example I showed above. Sorry, the tensorflow code has too many dependencies, I don’t think I can simply show it here. Thank you very much!", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "And what is the time per batch or element that you can achieve in TensorFlow?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "here is the code: Thank you very much!", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "Seems like that there are some problems or pitfalls in Conv2d on GPU. When I used in_channels = 1, out_channels = 200, kernel height = 3, kernel width = 300 (original setting) then backward was very slow. I changed it to in channels = 300, out channels = 200, kernel height = 1, kernel width = 3 then model works well. (I got 9500/sec on GTX 1080 8) ) This seems like GPU-specific problem because no problem occurred when ran on CPU.", "isAccepted": false, "likes": null, "poster": "kim.seonghyeon" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "If it is a GPU-specific problem, then why tensorflow code works well with the same kernel size? 
Thanks", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "So there is no way to speed it up other than reshaping the input? Thanks", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "It’s not a GPU specific problem, we’re probably just using different cuDNN calls to select algorithms, and for some reason they fail to pick the fast ones for this use case. I don’t know if there’s no other way, it will take me some time until I properly benchmark it and see what’s the issue. If you want to use it now, just reshape the input.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thank you very much!", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "With cudnn disabled the training runs at 1k samples/sec for me. With it slows down to 12.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "<SCODE>torch.backends.cudnn.benchmark = True\n<ECODE> This is a flag that controls cudnn benchmark mode and should be used only if every forward pass uses the same data dimensions. After this your code runs @ 7k/s for training and 64k/s for test on my machine.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "If I set this: What do you mean by same data dimension? Thanks.", "isAccepted": false, "likes": null, "poster": "Xia_Yandi" } ]
false
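A sketch of the reshape workaround described above: treating the 300-dim embedding as input channels turns the costly 3x300 kernel into a 1x3 one (dimension names follow the thread's example):

<SCODE>
# before: input (batch, 1, seq_len, 300) with nn.Conv2d(1, 200, (3, 300))
x = x.squeeze(1).transpose(1, 2).unsqueeze(2)  # -> (batch, 300, 1, seq_len)
conv = nn.Conv2d(in_channels=300, out_channels=200, kernel_size=(1, 3))
out = conv(x)                                  # (batch, 200, 1, seq_len - 2)
<ECODE>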
Usage of Numpy and gradients
null
[ { "contents": "Hi, I was wondering if gradients are automatically computed if Numpy is in the Forward() function of nn.module, i.e, torch tensors are converted to numpy arrays, a numpy op is applied and then we convert it back to torch tensors. Is there any implication of doing so? Thanks!", "isAccepted": false, "likes": null, "poster": "ytay017" }, { "contents": "Hi,", "isAccepted": false, "likes": 1, "poster": "fmassa" } ]
false
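The standard answer, sketched with the later-style autograd.Function API: gradients do not flow through numpy ops, so either stay in torch ops or wrap the numpy leg in a custom Function and supply the derivative by hand (np.sin is a stand-in for an arbitrary numpy op; CPU tensors assumed):

<SCODE>
import numpy as np
import torch

class NumpySin(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        result = np.sin(x.detach().numpy())  # the numpy leg, outside autograd
        return torch.from_numpy(result)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        return grad_output * torch.cos(x)    # hand-written derivative of sin

y = NumpySin.apply(torch.randn(3, requires_grad=True))
y.sum().backward()
<ECODE>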
Matrix-vector multiply (handling batched data)
null
[ { "contents": "<SCODE># (batch x inp)\nv = torch.randn(5, 15)\n# (inp x output)\nM = torch.randn(15, 20)\n<ECODE> Compute: <SCODE># (batch x output)\nout = torch.Tensor(5, 20)\nfor i, batch_v in enumerate(v):\n out[i] = (batch_v * M).t()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "I think these are the only Py3 versions that we support.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I was wondering… Matrix batched-vector multiply is just matrix-matrix multiply, but what about the inverse situation: Batched-matrix vector multiply?", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "No, there’s not. But you could do it by reshaping your vector to look like a matrix (no memory copy there, just stride tricks): <SCODE>M_batch = ... # batch x dim1 x dim2\nv = ... # dim2\nv_view = v.unsqueeze(0).expand(-1, len(v)).unsqueeze(2) # batch x dim2 x 1\noutput = M_batch.bmm(v_view) # batch x dim1 x 1\n# optionally output.squeeze(2) to get batch x dim1 output\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "Alright, thanks for the info. Given efficient reshaping capabilities, all this operations can easily be implemented by the user.", "isAccepted": false, "likes": null, "poster": "emanjavacas" }, { "contents": "I got it to work passing the batch size instead of -1. <SCODE>v.unsqueeze(0).expand(M_batch.size(0), len(v)).unsqueeze(2)\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "emanjavacas" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
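Later releases reduce this to a one-liner: torch.matmul broadcasts a batched matrix against a plain vector directly. A sketch:

<SCODE>
M_batch = torch.randn(4, 5, 3)  # batch x dim1 x dim2
v = torch.randn(3)              # dim2
out = torch.matmul(M_batch, v)  # (4, 5): v is treated as a column vector per batch
<ECODE>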
Second order derivatives?
null
[ { "contents": "and I need the gradients of the network parameters w.r.t (flat_grad * x) In the process of flattening the gradients, I had to convert everything into a numpy array, which broke the backprop chain. How can I solve this problem?", "isAccepted": false, "likes": null, "poster": "yoonholee" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
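Once double backward landed, the flattening no longer has to leave torch (which is what breaks the chain); a sketch of the usual TRPO-style pattern, with loss and x as in the question:

<SCODE>
grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
flat_grad = torch.cat([g.view(-1) for g in grads])  # stays on the autograd graph
(flat_grad * x).sum().backward()                    # second-order gradients w.r.t. parameters
<ECODE>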
Implementing Squeezenet
null
[ { "contents": "Hi, I am trying to implement Squeezenet and train it on Cifar10 data, I have got the code ready but there seems to be some problem, my training set accuracy never increases though the loss function graph makes sense. <SCODE>class fire(nn.Module):\ndef __init__(self, inplanes, squeeze_planes, expand_planes):\n super(fire, self).__init__()\n self.conv1 = nn.Conv2d(inplanes, squeeze_planes, kernel_size=1, stride=1)\n self.relu1 = nn.ReLU(inplace=True)\n self.conv2 = nn.Conv2d(squeeze_planes, expand_planes, kernel_size=1, stride=1)\n self.conv3 = nn.Conv2d(squeeze_planes, expand_planes, kernel_size=3, stride=1, padding=1)\n self.relu2 = nn.ReLU(inplace=True)\n\n # using MSR initilization\n for m in self.modules():\n if isinstance(m, nn.Conv2d):\n n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels\n m.weight.data.normal_(0, math.sqrt(2./n))\n\ndef forward(self, x):\n x = self.conv1(x)\n x = self.relu1(x)\n out1 = self.conv2(x)\n out2 = self.conv3(x)\n out = torch.cat([out1, out2], 1)\n out = self.relu2(out)\n return out<ECODE>", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "The model definition looks correct to me, so it’s probably some other bug.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thanks should I upload the training and the complete model code as well?", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "Where do you want to upload it?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I wrote it by mistake. Did you get a chance to look at it? Is there any mistake?", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "Though it gives me poor accuracy but no segfault.", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I think the learning rate itself was the issue, it must be doing division by zero somewhere, I changed the learning rate and now it seems to be working fine.", "isAccepted": false, "likes": null, "poster": "gsp-27" }, { "contents": "No, segfaults are never caused by zero division. 
If you get a small repro script I can fix it.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "is there a chance they are caused by not zeroing the grad parameters? The script that I used for training is: <SCODE>def paramsforepoch(epoch):\n    p = dict()\n    regimes = [[1, 18, 1e-3, 5e-4],\n               [19, 29, 5e-3, 5e-4],\n               [30, 43, 1e-3, 0],\n               [44, 52, 5e-4, 0],\n               [53, 1e8, 1e-4, 0]]\n    for i, row in enumerate(regimes):\n        if epoch >= row[0] and epoch <= row[1]:\n            p['learning_rate'] = row[2]\n            p['weight_decay'] = row[3]\n    return p\n\navg_loss = list()\nfig1, ax1 = plt.subplots()\nfig2, ax2 = plt.subplots()\n# train the model\n# TODO: Compute training accuracy and test accuracy\n# TODO: train it on some data and see if it overfits.\n# TODO: train the data on final model\n\n# create a temporary optimizer\noptimizer = optim.SGD(net.parameters(), lr=args.learning_rate, momentum=args.momentum, weight_decay=0.0005)\n\ndef adjustlrwd(p):\n    for param_group in optimizer.state_dict()['param_groups']:\n        param_group['lr'] = p['learning_rate']\n        param_group['weight_decay'] = p['weight_decay']\n\n# train the network\ndef train(epoch):\n\n    # set the optimizer for this epoch\n    if epoch > 0 or epoch > 18 or epoch > 29 or epoch > 43 or epoch > 52:\n        p = paramsforepoch(epoch)\n        print(\"Configuring optimizer with lr={:.3f} and weight_decay={:.3f}\".format(p['learning_rate'], p['weight_decay']))\n        adjustlrwd(p)\n    ###########################################################################\n\n    global avg_loss\n    correct = 0\n    net.train()\n    for b_idx, (data, targets) in enumerate(train_loader):\n        # trying to overfit a small data\n        if b_idx == 100:\n            break\n\n        if args.cuda:\n            data, targets = data.cuda(), targets.cuda()  # .cuda() is not in-place\n        # convert the data and targets into Variable and cuda form\n        data, targets = Variable(data), Variable(targets)\n\n        # train the network\n        optimizer.zero_grad()\n        scores = net.forward(data)\n        loss = F.nll_loss(scores, targets)\n\n        # compute the accuracy\n        pred = scores.data.max(1)[1] # get the index of the max log-probability\n        correct += pred.eq(targets.data).cpu().sum()\n\n        avg_loss.append(loss.data[0])\n        loss.backward()\n        optimizer.step()\n\n        if b_idx % args.log_schedule == 0:\n            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n                epoch, (b_idx+1) * len(data), len(train_loader.dataset),\n                100. * (b_idx+1) / 100, loss.data[0]))\n            # also plot the loss, it should go down exponentially at some point\n            ax1.plot(avg_loss)\n            fig1.savefig(\"Squeezenet_loss.jpg\")\n\n    # now that the epoch is completed plot the accuracy\n    accuracy = correct / 6400.0\n    print(\"training accuracy ({:.2f}%)\".format(100*accuracy))\n    ax2.plot(100*accuracy)\n    fig2.savefig(\"Training-test-acc.jpg\")\n<ECODE>", "isAccepted": false, "likes": null, "poster": "gsp-27" } ]
false
Different results with same input
null
[ { "contents": "Hi, I have this output : EDIT : I have the same problem if I do two forward pass with a batch of 1", "isAccepted": false, "likes": null, "poster": "maxgreat" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "apaszke" } ]
false
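The usual cause of run-to-run differences on identical input is a layer with train-time randomness or batch statistics (dropout, batch norm). A sketch of the check; torch.no_grad() is the later API, older code used volatile=True:

<SCODE>
model.eval()             # fixes dropout masks and uses running BN statistics
with torch.no_grad():
    out1 = model(x)
    out2 = model(x)      # should now match out1 for the same x
<ECODE>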
Titan X Maxwell (old) issues
null
[ { "contents": "I can confirm we can run Torch7 on those machines with no issues. Also on the new Pascal Titan X, we do not have this problem. Maybe an issues with driver? Anyone else has the same issue?", "isAccepted": false, "likes": null, "poster": "Eugenio_Culurciello" }, { "contents": "We don’t support Python 3.4 Do you still have this issue with Python 3.5?", "isAccepted": false, "likes": null, "poster": "colesbury" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Eugenio_Culurciello" } ]
false
Example on how to use batch-norm?
null
[ { "contents": "TLDR: What exact size should I give the batch_norm layer here if I want to apply it to a CNN? output? In what format? I have a two-fold question: Is this the correct intended usage? Maybe an example of the syntax for it’s usage with a CNN? Thanks.", "isAccepted": false, "likes": 4, "poster": "Kalamaya" }, { "contents": "Hi, as for 1st question, <SCODE>class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(in_channels=1, out_channels=10,\n kernel_size=5,\n stride=1)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_bn = nn.BatchNorm2d(20)\n self.dense1 = nn.Linear(in_features=320, out_features=50)\n self.dense1_bn = nn.BatchNorm1d(50)\n self.dense2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_bn(self.conv2(x)), 2))\n x = x.view(-1, 320) #reshape\n x = F.relu(self.dense1_bn(self.dense1(x)))\n x = F.relu(self.dense2(x))\n return F.log_softmax(x)\n<ECODE> worked.", "isAccepted": false, "likes": 21, "poster": "moskomule" }, { "contents": "On an unrelated note, one thing I noticed though is that you are doing ReLUs AFTER your max_pool, however the canonical way it is done is usually the reverse. (ReLU and then max-pool).", "isAccepted": false, "likes": 5, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "moskomule" }, { "contents": "", "isAccepted": false, "likes": 25, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "Kalamaya" }, { "contents": "Sure, it’s not a general thing of course, it’s only leveraging a property of the max operator.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "What do you do during test time? How do you set the forward prop so that it does not update the weights of the batch_norm module? Via eval()", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "At test time, I would like to freeze both the weights, (lambda and beta), as well as freeze the running averages that is has computed. (Ostensibly because it has a good estimate for those from training already). So I basically expect that I would want all 4 of those values frozen.", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks!", "isAccepted": false, "likes": 1, "poster": "Keven_Wang" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "cslxiao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Chahrazad" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "cslxiao" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "I have a pretrained model whose parameters are available as csv files. This model has batch norm layers which has got weight, bias, mean and variance parameters. I want to copy these parameters to layers of a similar model I have created in pytorch. But the Batch norm layer in pytorch has only two parameters namely weight and bias. 
How do I deal with mean and variance so that during eval all these four parameters are used?", "isAccepted": false, "likes": null, "poster": "bala" }, { "contents": "How to set the BatchNorm layers to eval() mode?", "isAccepted": false, "likes": 1, "poster": "igreen" }, { "contents": "<SCODE>model.eval()\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "smth" }, { "contents": "I have been using BN to train autoencoders over a large number of image patches (50K/image) of different architectures recently. There is indeed a gotcha whenever BN is used with the dataset, as follows. After training a long time (70 epochs or more with 4K batches each), the validation loss suddenly increases significantly and never comes back, while the training loss remains stable. Decreasing the learning rate only postpones the phenomenon. The trained model at this point is not usable if model.eval() is called, as it is supposed to be. But if the output is normalized to the regular pixel range, the results seem alright. After several trials, the cause is likely the default epsilon, which may be too small (1e-5) for long-term stability. However, increasing the epsilon leads to a slightly higher validation loss. Alternatively, the running mean and var computed by pytorch under the hood may have something to do with it, since fixing BN in training mode also alleviates the issue at inference time. In view of the popularity of BN, I am a bit surprised everyone seems happy with it, but is it really the case?", "isAccepted": false, "likes": null, "poster": "farleylai" } ]
false
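For the csv question above: the running statistics are buffers rather than Parameters, so they can be filled directly. A sketch, assuming w, b, mean and var are numpy arrays read from the files (copy_ casts dtypes as needed):

<SCODE>
bn = nn.BatchNorm2d(20)
bn.weight.data.copy_(torch.from_numpy(w))
bn.bias.data.copy_(torch.from_numpy(b))
bn.running_mean.copy_(torch.from_numpy(mean))  # buffers, not Parameters
bn.running_var.copy_(torch.from_numpy(var))
bn.eval()  # use the loaded statistics at inference time
<ECODE>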
Adding a scalar?
null
[ { "contents": "Dumb question, but how do I make a scalar Variable? I’d like to add a trainable parameter to a vector, but I keep getting size-mismatch problems. <SCODE># Works, but I can't make the 2 a \n# parameter, so I can't do gradient descent on it\nVariable(torch.from_numpy(np.ones(5))) + 2 \n<ECODE> Thanks!", "isAccepted": true, "likes": null, "poster": "ririw" }, { "contents": "<SCODE>x = Variable(torch.from_numpy(np.ones(5)))\ny = Variable(torch.Tensor([2]).double()) # numpy is double by default\nz = x + y.expand(x.size())\n<ECODE>", "isAccepted": true, "likes": 4, "poster": "fmassa" }, { "contents": "Great, thanks for that.", "isAccepted": true, "likes": null, "poster": "ririw" } ]
true
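A follow-up sketch for later readers: if the scalar itself should be trainable, it can be an nn.Parameter, and with broadcasting in current PyTorch the explicit expand is no longer needed (a minimal sketch, not the original Variable-era API):
<SCODE>
import torch
import torch.nn as nn

class AddScalar(nn.Module):
    def __init__(self):
        super(AddScalar, self).__init__()
        # single learnable scalar, registered with the module
        self.offset = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + self.offset  # broadcasts over x, no size mismatch

x = torch.ones(5)
print(AddScalar()(x))  # the offset now receives gradients
<ECODE>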
List of nn.Module in a nn.Module
null
[ { "contents": "<SCODE>class testNet(nn.Module):\n def __init__(self, input_dim, hidden_dim, step=1):\n super(testNet, self).__init__()\n self.linear = nn.Linear(100, 100) #dummy module\n self.linear_combines1 = []\n self.linear_combines2 = []\n for i in range(step):\n self.linear_combines1.append(nn.Linear(input_dim, hidden_dim))\n self.linear_combines2.append(nn.Linear(hidden_dim, hidden_dim))\n\nnet = testNet(128, 256, 3)\nprint(net) #Won't print what is in the list\nnet.cuda() #Won't send the module in the list to gpu\n<ECODE> What is the intended correct way to do this?", "isAccepted": false, "likes": 8, "poster": "NgPDat" }, { "contents": "Hi, <SCODE>import torch\nimport torch.nn as nn\n\nclass ListModule(nn.Module):\n def __init__(self, *args):\n super(ListModule, self).__init__()\n idx = 0\n for module in args:\n self.add_module(str(idx), module)\n idx += 1\n\n def __getitem__(self, idx):\n if idx < 0 or idx >= len(self._modules):\n raise IndexError('index {} is out of range'.format(idx))\n it = iter(self._modules.values())\n for i in range(idx):\n next(it)\n return next(it)\n\n def __iter__(self):\n return iter(self._modules.values())\n\n def __len__(self):\n return len(self._modules)\n\nclass testNet(nn.Module):\n def __init__(self, input_dim, hidden_dim, step=1):\n super(testNet, self).__init__()\n self.linear = nn.Linear(100, 100) #dummy module\n linear_combines1 = []\n linear_combines2 = []\n for i in range(step):\n linear_combines1.append(nn.Linear(input_dim, hidden_dim))\n linear_combines2.append(nn.Linear(hidden_dim, hidden_dim))\n self.linear_combines1 = ListModule(*linear_combines1)\n self.linear_combines2 = ListModule(*linear_combines2)\n\nnet = testNet(128, 256, 3)\nprint(net)\nnet.cuda()\n\nprint(net.linear_combines1[0])\nprint(len(net.linear_combines2))\nfor i in net.linear_combines1:\n print(i.weight.data.type())\n<ECODE>", "isAccepted": false, "likes": 6, "poster": "fmassa" }, { "contents": "<SCODE>class AttrProxy(object):\n \"\"\"Translates index lookups into attribute lookups.\"\"\"\n def __init__(self, module, prefix):\n self.module = module\n self.prefix = prefix\n\n def __getitem__(self, i):\n return getattr(self.module, self.prefix + str(i))\n\n\nclass testNet(nn.Module):\n def __init__(self, input_dim, hidden_dim, steps=1):\n super(testNet, self).__init__()\n self.steps = steps\n for i in range(steps):\n self.add_module('i2h_' + str(i), nn.Linear(input_dim, hidden_dim))\n self.add_module('h2h_' + str(i), nn.Linear(hidden_dim, hidden_dim))\n self.i2h = AttrProxy(self, 'i2h_')\n self.h2h = AttrProxy(self, 'h2h_')\n\n def forward(self, input, hidden):\n # here, use self.i2h[t] and self.h2h[t] to index \n # input2hidden and hidden2hidden modules for each step,\n # or loop over them, like in the example below\n # (assuming first dim of input is sequence length)\n for inp, i2h, h2h in zip(input, self.i2h, self.h2h):\n hidden = F.tanh(i2h(input) + h2h(hidden))\n return hidden\n<ECODE>", "isAccepted": false, "likes": 12, "poster": "apaszke" }, { "contents": "Thank you! Both ideas are great. I took some time to incorporate two ideas together. 
And here is my take on it: <SCODE>class ListModule(object):\n #Should work with all kind of module\n def __init__(self, module, prefix, *args):\n self.module = module\n self.prefix = prefix\n self.num_module = 0\n for new_module in args:\n self.append(new_module)\n\n def append(self, new_module):\n if not isinstance(new_module, nn.Module):\n raise ValueError('Not a Module')\n else:\n self.module.add_module(self.prefix + str(self.num_module), new_module)\n self.num_module += 1\n\n def __len__(self):\n return self.num_module\n\n def __getitem__(self, i):\n if i < 0 or i >= self.num_module:\n raise IndexError('Out of bound')\n return getattr(self.module, self.prefix + str(i))\n\n\nclass testNet(nn.Module):\n def __init__(self, input_dim, hidden_dim, steps=1):\n super(testNet, self).__init__()\n self.steps = steps\n self.i2h = ListModule(self, 'i2h_')\n self.h2h = ListModule(self, 'h2h_')\n for i in range(steps):\n self.i2h.append(nn.Linear(input_dim, hidden_dim))\n self.h2h.append(nn.Linear(hidden_dim, hidden_dim))\n\n def forward(self, input, hidden):\n for inp, i2h, h2h in zip(input, self.i2h, self.h2h):\n hidden = F.tanh(i2h(inp) + h2h(hidden))\n return hidden\n\nnet = testNet(128, 256, 3)\nprint(net)\nnet.cuda()\ninp = Variable(torch.randn(3, 4, 128)).cuda()\ninit = Variable(torch.randn(4, 256)).cuda()\nout = net(inp, init)\n<ECODE>", "isAccepted": false, "likes": 8, "poster": "NgPDat" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "NgPDat" }, { "contents": "Yup, just wanted to wait until I write the docs before posting here. They should be up today or tomorrow.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Atcold" }, { "contents": "The docs aren’t merged yet. I’ve been working on bug fixes.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "<SCODE> self.W = Variable(w_init, requires_grad=True)\n self.mod_list = torch.nn.ModuleList([self.W])<ECODE>", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "A linear “module” without a bias is consider a module when its simply just a Variable (matrix with some dimension). I don’t see the difference of that and my suggestion. Something like: <SCODE>torch.nn.Linear(D_in,D_out,bias=False)<ECODE>", "isAccepted": false, "likes": null, "poster": "Brando_Miranda" }, { "contents": "<SCODE>def linear(input):\n x = Variable(torch.rand(2, 2))\n return input\n<ECODE> and <SCODE>def linear(input):\n x = Variable(torch.rand(2, 2))\n return torch.matmul(input, x)\n<ECODE>", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "BestCookies" } ]
false
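Since this thread was written, nn.ModuleList has become the built-in container for exactly this pattern; a sketch of the same network using it:
<SCODE>
import torch
import torch.nn as nn

class testNet(nn.Module):
    def __init__(self, input_dim, hidden_dim, steps=1):
        super(testNet, self).__init__()
        # ModuleList registers every entry, so print(net), net.cuda()
        # and net.parameters() all see the layers
        self.i2h = nn.ModuleList(
            [nn.Linear(input_dim, hidden_dim) for _ in range(steps)])
        self.h2h = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(steps)])

    def forward(self, input, hidden):
        for inp, i2h, h2h in zip(input, self.i2h, self.h2h):
            hidden = torch.tanh(i2h(inp) + h2h(hidden))
        return hidden

net = testNet(128, 256, 3)
print(net)  # now prints all six Linear layers
<ECODE>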
When will the conda package be updated?
null
[ { "contents": "I found some features like upsampling_nearest are in the github but not in the conda package. Is there a timeline when the conda package will be updated?", "isAccepted": false, "likes": null, "poster": "junonia" }, { "contents": "the conda package was updated yesterday evening with the 0.1.7 release which has the upsampling_nearest available", "isAccepted": false, "likes": null, "poster": "smth" }, { "contents": "Thanks! I need to use install instead update to get the new package.", "isAccepted": false, "likes": null, "poster": "junonia" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Atcold" }, { "contents": "conda update pytorch torchvision -c soumith", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "<SCODE>conda config --add channels soumith\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "Atcold" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "McLawrence" } ]
false
Read data from the GPU?
null
[ { "contents": "Thanks!", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "tomsercu" } ]
false
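For readers looking for the concrete call: reading a CUDA tensor back on the host is a device-to-host copy via .cpu(); from there .numpy() gives a NumPy view. A minimal sketch:
<SCODE>
import torch

x = torch.randn(3, 3).cuda()  # tensor living in GPU memory
y = x.cpu()                   # synchronous copy back to host memory
arr = y.numpy()               # NumPy view sharing the CPU tensor's storage
print(arr.mean())
<ECODE>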
Understanding graphs and state
null
[ { "contents": "There are two questions remaining - the second question is more important. 1) No guarantee that second backward will fail? <SCODE>x = Variable(torch.ones(2,3), requires_grad=True)\ny = x.mean(dim=1).squeeze() + 3 # size (2,)\nz = y.pow(2).mean() # size 1\ny.backward(torch.ones(2))\nz.backward() # should fail! But only fails on second execution\ny.backward(torch.ones(2)) # still fine, though we're calling it for the second time\nz.backward() # this fails (finally!)\n<ECODE> 2) Using a net twice on the same input Variable makes a new graph with new state? <SCODE>out = net(inp)\nout2 = net(inp) # same input\nout.backward(torch.ones(1,1,2,2))\nout2.backward(torch.ones(1,1,2,2)) # doesnt fail -> has a different state than the first fw pass?!\n<ECODE> Am I right to think that fw-passing the same variable twice constructs a second graph, keeping the state of the first graph around? The problem I see with this design is that often (during testing, or when you detach() to cut off gradients, or anytime you add an extra operation just for monitoring) there’s just a fw-pass on part of the graph - so is that state then kept around forever and just starts consuming more memory on every new fw-pass of the same variable? I understand that the volatile flag is probably introduced for this problem and I see it’s used during testing in most example code. \nfake = netG(noise).detach() to avoid bpropping through netG https://github.com/pytorch/examples/blob/master/dcgan/main.py#L216 43\n test on non-volatile variables: https://github.com/pytorch/examples/blob/master/super_resolution/main.py#L74 19\n If you finetune only top layers of a feedforward net, bottom layers see only fw-passes But in general, if I understand this design correctly, this means anytime you have a part of a network which isn’t backpropped through, you need to supply volatile flag? Then when you use that intermediate volatile variable in another part of the network which is backpropped through, you need to re-wrap and turn volatile off?", "isAccepted": false, "likes": 6, "poster": "tomsercu" }, { "contents": "Now, here’s some description on when do we keep the state around:", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Thanks for the elaborate answer, the graph going out of scope with the output variable is the essential part I was missing here. Let me know if you think it’s useful to make the notebook with your answer into a full tutorial, I think these autograd graph/state mechanics are a bit underdocumented atm. Or maybe some explanation could be added to the autograd note in the docs.", "isAccepted": false, "likes": null, "poster": "tomsercu" }, { "contents": "I agree the materials we have right now aren’t very detailed but we didn’ thave a lot of time to expand them. If you’d have a moment to write that down and submit a notebook or a PR to the notes I’ll merge them. Thanks!", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "pietromarchesi" }, { "contents": "Hi, <SCODE>fake = netG(inputs)\nLoss1 = criterion(netD1(fake), real_label)\nLoss2 = criterion(netD2(fake), real_label)\nLoss3 = criterion(netD3(fake), real_label)\nLoss_other = criterion_other(fake, target)\nLoss = Loss1 + Loss2 + Loss3 + Loss_other\nLoss.backward()\n<ECODE> does this create the graphs for each of the netDs? 
Will be wrong if I did <SCODE>Loss1 = criterion(netD1(fake).detach(), real_label)\nLoss2 = criterion(netD2(fake).detach(), real_label)\nLoss3 = criterion(netD3(fake).detach(), real_label)\nLoss_other = criterion_other(fake, target)\nLoss = Loss1 + Loss2 + Loss3 + Loss_other\nLoss.backward()\n<ECODE> to save some memory, since I don’t need to backprop through netD? Will there be any difference in backpropping?", "isAccepted": false, "likes": null, "poster": "Nabarun_Goswami" }, { "contents": "Hi, if I understand you correctly, you want to train netD1 with loss1, netD3 with loss2 ,… and netG with loss_other ? <SCODE>Loss1 = criterion(netD1(fake.detach()), real_label)\nLoss2 = criterion(netD2(fake.detach()), real_label)\nLoss3 = criterion(netD3(fake.detach()), real_label)\nLoss_other = criterion_other(fake, target)\nLoss = Loss1 + Loss2 + Loss3 + Loss_other\nLoss.backward()\n<ECODE> , where you detach the fake, thus loss1, etc can be propagated back through netD1 etc (but still not though netG, if you want to propagate through netG and not netD1 you can try to set .requires_grad=False for all paramters in netD1, but not sure if it will work, since it only works on leaves).", "isAccepted": false, "likes": null, "poster": "dzimm" }, { "contents": "Hi, Actually no, don’t want to train netD* at all, all the losses are for training the netG, and all netD* are fixed. The last point of your reply kind of hit the point, I want to propagate through netG without propagating through netD*. .requires_grad = False looks promising, but I am not sure either, would be great if someone can clarify on that.", "isAccepted": false, "likes": null, "poster": "Nabarun_Goswami" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "tomsercu" }, { "contents": "Adam, you said that:", "isAccepted": false, "likes": null, "poster": "riccardosamperna" } ]
false
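A note on the first question in this thread: if you really do need to call backward twice through the same part of the graph, you can ask autograd to keep the buffers alive. In current releases the flag is retain_graph=True (it was retain_variables=True in the Variable era); a sketch with today's tensor API:
<SCODE>
import torch

x = torch.ones(2, 3, requires_grad=True)
y = x.mean(dim=1) + 3
z = y.pow(2).mean()

z.backward(retain_graph=True)  # keep intermediate state for another pass
z.backward()                   # second backward now succeeds
print(x.grad)                  # gradients from both calls accumulate here
<ECODE>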
Help clarifying repackage_hidden in word_language_model
null
[ { "contents": "<SCODE>def repackage_hidden(h):\n \"\"\"Wraps hidden states in new Variables, to detach them from their history.\"\"\"\n if type(h) == Variable:\n return Variable(h.data)\n else:\n return tuple(repackage_hidden(v) for v in h)\n<ECODE> I dont think I fully understand what the “history” includes, can somebody helps clarify this? Thanks!", "isAccepted": false, "likes": 9, "poster": "WenchenLi" }, { "contents": "", "isAccepted": false, "likes": 24, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "jekbradbury" }, { "contents": "", "isAccepted": false, "likes": 7, "poster": "apaszke" }, { "contents": "So we do not need to repackage hidden state when making predictions ,since we don’t do a BPTT ?", "isAccepted": false, "likes": null, "poster": "LaceyChen17" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "jdhao" }, { "contents": "<SCODE>def repackage_hidden(h):\n \"\"\"Wraps hidden states in new Variables, to detach them from their history.\"\"\"\n if type(h) == Variable:\n return Variable(h.data, requires_grad=True)\n else:\n return tuple(repackage_hidden(v) for v in h)\n<ECODE> Thanks.", "isAccepted": false, "likes": null, "poster": "ratishsp" }, { "contents": "it has already been updated to be compatible with the latest PyTorch version: <SCODE>def repackage_hidden(h):\n \"\"\"Wraps hidden states in new Tensors, to detach them from their history.\"\"\"\n if isinstance(h, torch.Tensor):\n return h.detach()\n else:\n return tuple(repackage_hidden(v) for v in h)\n<ECODE>", "isAccepted": false, "likes": 3, "poster": "alphadl" } ]
false
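For readers wondering where this function sits: it is called once per batch so that backpropagation stops at the start of the current BPTT window instead of unrolling to the beginning of training. A self-contained sketch of the pattern (not the exact example code):
<SCODE>
import torch
import torch.nn as nn

def repackage_hidden(h):
    if isinstance(h, torch.Tensor):
        return h.detach()
    return tuple(repackage_hidden(v) for v in h)

lstm = nn.LSTM(10, 20)
hidden = (torch.zeros(1, 4, 20), torch.zeros(1, 4, 20))
for step in range(3):                  # one BPTT window per iteration
    hidden = repackage_hidden(hidden)  # cut the graph at the window border
    inp = torch.randn(5, 4, 10)        # (seq_len, batch, features)
    out, hidden = lstm(inp, hidden)
    out.sum().backward()               # only reaches back through this window
<ECODE>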
CUDA out of memory error
null
[ { "contents": "I ran my net with a large minibatch on the GPU with problem, and then ctrl-c out of it. However when I try to re-run my script I get: RuntimeError: cuda runtime error (2) : out of memory at /home/soumith/local/builder/wheel/pytorch-src/torch/lib/THC/generic/THCStorage.cu:66 Nothing changed and it just worked before. Is the memory on the GPU not being released? Is there a way to force the GPU memory to release its memory before a run? Thanks.", "isAccepted": false, "likes": 2, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "nvidia-smi shows me using 8GB out of the 12 GB… What puzzles me is that literally just stopped with a large minibatch size that training just fine, but then when I re-start the python script it craps out.", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "Kalamaya" }, { "contents": "Hi, Can you give me some suggestions? Thank you so much.", "isAccepted": false, "likes": null, "poster": "mjchen611" }, { "contents": "And when I set num_workers = 0,the (RAM, but not GPU) memory not increases largely with the increase of epoch… Can you give me some suggestions or instructions about the problem? Thank you so much.", "isAccepted": false, "likes": null, "poster": "mjchen611" }, { "contents": "i face the same error…is there a way to fin the variables which are getting cached on gpu", "isAccepted": false, "likes": 1, "poster": "Harikrishna.Vydana" } ]
false
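Some generic mitigations that have helped with errors like the above (a sketch; whether they apply depends on the training loop):
<SCODE>
import torch

model = torch.nn.Linear(10, 10).cuda()
inp = torch.randn(4, 10).cuda()

total_loss = 0.0
out = model(inp)
loss = out.sum()
total_loss += loss.item()  # .item() extracts a number, so the accumulator
                           # does not keep the whole graph alive
loss.backward()

del out, loss              # drop references to large intermediates
torch.cuda.empty_cache()   # return cached blocks to the driver so other
                           # processes (and nvidia-smi) see them freed
<ECODE>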
Build your own loss function in PyTorch
null
[ { "contents": "<SCODE>X = np.asarray([[0.6946, 0.1328], [0.6563, 0.6873], [0.8184, 0.8047], [0.8177, 0.4517], \n [0.1673, 0.2775], [0.6919, 0.0439], [0.4659, 0.3032], [0.3481, 0.1996]], dtype=np.float32)\nX = torch.from_numpy(X)\ny = np.asarray((1,3,2,2,3,1,2,3), dtype=np.float32)\ny = torch.from_numpy(y)\n\ndef similarity(i, j):\n ''' This function defines the similarity between vectors i and j\n inputs: i, j - vectors of the same length\n sigma - the denumerator parameter\n output: sim - similarity value (real number from 0 to 1) '''\n \n dist = torch.norm(i - j) \n return dist\n\ndef similarity_matrix(mat):\n ''' This function creates the similarity matrix of a dataset\n input: mat - dataset in matrix format\n sigma - a paramter which defines similarity\n output: simMatrix - the similarity matrix '''\n \n a = mat.size()\n a = a[0]\n simMatrix = torch.zeros((a,a))\n for i in xrange(a):\n for j in xrange(a):\n simMatrix[i][j] = similarity(mat[i], mat[j]) \n return simMatrix \n\ndef convert_y(y):\n n = y.size()\n n = n[0]\n converted_y = torch.zeros((n, n))\n for i in xrange(n):\n for j in xrange(n):\n if y[i] == y[j]:\n converted_y[i, j] = 1\n return converted_y\n\ndef customized_loss(X, y):\n X_similarity = similarity_matrix(X)\n association = convert_y(y)\n loss_num = torch.sum(torch.mul(X_similarity, association))\n loss_all = torch.sum(X_similarity)\n loss_denum = loss_all - loss_num\n loss = loss_num/loss_denum\n return loss\n\nloss = customized_loss(X, y)\nprint(loss)\n<ECODE> Now, of course, considering that I am going to use it as the final layer, of the neural net, I would need to compute the gradients of it and then use them in the backpropagation. Explaining the function a bit: I first transform the input data space into a kind of similarity matrix (0 it means the data being the same, the higher the number in ij-th entry, the higher is the dissimilarity). Then in order to find the intra-cluster loss, I multiply this matrix with a 0/1 matrix, where the ij-th entry is 1 if the element i and j are in the same cluster, 0 otherwise. The intra-cluster loss is find similarity, and finally, we just divide the two losses. My questions are: Thanks for any answer, or possible hint.", "isAccepted": false, "likes": 17, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": 16, "poster": "apaszke" }, { "contents": "Thanks a lot! I rewrote everything using Torch, so now, it should work if I use loss.backward(X, y)? Advises about writing my own autograd function and/or computing the similarity more efficiently are very welcome. It is definitely something that I need to do later, but for now I need just a simple version of this working. Edit - It looks that it works by rewriting the final function as: <SCODE>def customized_loss(X, y):\n X_similarity = Variable(similarity_matrix(X), requires_grad = True)\n association = Variable(convert_y(y), requires_grad = True)\n temp = torch.mul(X_similarity, association)\n loss_num = torch.sum(torch.mul(X_similarity, association))\n loss_all = torch.sum(X_similarity)\n loss_denum = loss_all - loss_num\n loss = loss_num/loss_denum\n return loss\n<ECODE> All is good for now, thanks again!", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": 4, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I think that I got lost now. 
Rewriting an another time the function (in probably a more readable way): <SCODE>X = Variable(torch.Tensor([[0.6946, 0.1328], \n [0.6563, 0.6873], \n [0.8184, 0.8047], \n [0.8177, 0.4517], \n [0.1673, 0.2775], \n [0.6919, 0.0439],\n [0.4659, 0.3032],\n [0.3481, 0.1996]]))\n\ny = Variable(torch.Tensor([1.0, 3.0, 2.0, 2.0, 3.0, 1.0, 2.0, 3.0]))\n\n\ndef customized_loss(X, y):\n \n def similarity_matrix(mat):\n a = mat.size()\n a = a[0]\n simMatrix = Variable(torch.zeros(a,a), requires_grad = True)\n for i in xrange(a):\n for j in xrange(a):\n simMatrix[i][j] = torch.norm(mat[i] - mat[j]) \n return simMatrix \n \n def convert_y(y):\n a = y.size()\n a = a[0]\n converted_y = Variable(torch.zeros(a,a), requires_grad = True)\n for i in xrange(n):\n for j in xrange(n):\n if y[i] == y[j]:\n converted_y[i, j] = 1\n return converted_y\n\n X_similarity = similarity_matrix(X)\n association = convert_y(y)\n loss_num = torch.sum(torch.mul(X_similarity, association))\n loss_all = torch.sum(X_similarity)\n loss_denum = loss_all - loss_num\n loss = loss_num/loss_denum\n return loss\n\nloss = customized_loss(X, y)\n<ECODE> As far as I can see, everything now is done in Variables (from beginning to the end). We are giving X and y (which are variables) to the function, and then everything is done in Variables. The only other variables that I need to define is simMatrix in the similarity_matrix function, and there I am having this error: <SCODE>RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it.\n<ECODE> Of course, the same thing happens in convert_y function when I create the converted_y Variable. And I have no clue, what is going wrong, while googling this error doesn’t show any result. … You already spent some time here, so thanks for that, but in case you can guide me how to fix this problem (or writing it if it is a quick fix) it would be awesome. From the pyTorch tutorial about the Variables it is not clear to me what I am doing wrong (haven’t ever used Torch). I guess that the problem is that I am implicitly creating a new Variable in the middle of the graph, but is there any way around it? Is there a solution around this?", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "<SCODE># (x - y)^2 = x^2 - 2*x*y + y^2\ndef similarity_matrix(mat):\n # get the product x * y\n # here, y = x.t()\n r = torch.mm(mat, mat.t())\n # get the diagonal elements\n diag = r.diag().unsqueeze(0)\n diag = diag.expand_as(r)\n # compute the distance matrix\n D = diag + diag.t() - 2*r\n return D.sqrt()\n<ECODE>", "isAccepted": false, "likes": 13, "poster": "fmassa" }, { "contents": "About, if you are not backpropagating through y part…I am a bit confused. Essentially, the algorithm is: While I do not need to backprop through y, Y is multiplied with X (and Y comes from y), so I think that I need to backprop through Y, right? Cheers!", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "<SCODE>def convert_y2(y):\n s = y.size(0)\n y_expand = y.unsqueeze(0).expand(s, s)\n Y = y_expand.eq(y_expand.t())\n return Y\n<ECODE>", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "@fmassa You’re absolutely right. About your function, it returns a ByteTensor which means that Y cannot be multiplied with X (which is a FloatTensor). 
There should be something that allows casting a Tensor to some other type, right?", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "<SCODE>a = torch.ByteTensor([0,1,0])\nb = a.float() # converts to float\nc = a.type('torch.FloatTensor') # converts to float as well\n<ECODE> Possible shortcuts for the conversion are the following: .byte() .short() .char() .int() .long() .float() .double \n.half() # for cuda only at the moment", "isAccepted": false, "likes": 2, "poster": "fmassa" }, { "contents": "Excellent! Thanks a lot! I think that I still need to fully understand how these functions work (read them in details), but everything is working now. Of course, the ANN isn’t working (a lot of NANs immediately after the first iteration), but that is something that I need to investigate and see the gradients’ values.", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "@fmassa I am using the version that is written on this thread, and is giving NaNs after the first iteration. Typically, when I got Nans in the past it was either an error in differentiation or a large training rate. Both of these cannot be this time because it is automatic differentiation and I am using extremely small training rates (3e-7 in Adam, while typically I use 3e-4). Anyway, I will have to read the documentation to find how to print the gradients, which might give me an idea on what is going wrong. And then, if I have problems, I will make an another thread. Thanks for everything!", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "You can use hooks e.g.: <SCODE>x = Variable(torch.randn(5, 5), requires_grad=True)\ny = Variable(torch.randn(5, 5), requires_grad=True)\nz = x + y\n# this will work only in Python3\nz.register_hook(lambda g: print(g)) \n# if you're using Python2 do this:\n# def pring_grad(g):\n# print g\n# z.register_hook(print_grad)\nq = z.sum()\nq.backward()\n<ECODE>", "isAccepted": false, "likes": 10, "poster": "apaszke" }, { "contents": "Excellent, I will try it and see what is going wrong.", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "About NaNs is your results: it is related to Loss Function. I was implementing sth like that in TensorFlow and I get NaNs too. Based on experiments, it look like the gradient of diagonal in similarity matrix cause NaNs. I modified to skip diagonal (so take just right upper triangle matrix, as left down is just the same).", "isAccepted": false, "likes": null, "poster": "melgor" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "edgarriba" }, { "contents": "@apaszke loss.register_hook(lambda g: print(g)) I get: <SCODE>Variable containing:\n1\n[torch.FloatTensor of size 1]\n\nVariable containing:\n1\n[torch.FloatTensor of size 1]\n<ECODE> which isn’t very helpful. Now, on my example, if I want all the gradients which are computed on my loss function, how can I use register_hook to do so?", "isAccepted": false, "likes": null, "poster": "Ismail_Elezi" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
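Pulling the pieces of this thread together, the whole loss can be written without Python loops. A sketch in current PyTorch, using the pairwise-distance trick and the label-equality mask from the replies above (the small clamp guards against the NaN gradients at zero distance mentioned later in the thread):
<SCODE>
import torch

def pairwise_distances(mat):
    # (x - y)^2 = x^2 - 2*x*y + y^2, evaluated for all pairs at once
    r = torch.mm(mat, mat.t())
    diag = r.diag().unsqueeze(0).expand_as(r)
    return (diag + diag.t() - 2 * r).clamp(min=1e-12).sqrt()

def customized_loss(X, y):
    D = pairwise_distances(X)
    same = y.unsqueeze(0).eq(y.unsqueeze(1)).float()  # 1 where labels match
    intra = (D * same).sum()   # intra-cluster distances
    inter = D.sum() - intra    # everything else
    return intra / inter

X = torch.rand(8, 2, requires_grad=True)
y = torch.tensor([1., 3., 2., 2., 3., 1., 2., 3.])
customized_loss(X, y).backward()
<ECODE>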
nn.Module with multiple inputs
null
[ { "contents": "the same image at multiple resolutions is used different images are used give multiple inputs to the nn.Module\n join the fc layers together I am following the example of imagenet, which looks like this : <SCODE>class SimpleConv(nn.Module):\n def __init__(self, num_classes):\n super(SimpleConv, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=3, stride=2),\n nn.Conv2d(64, 192, kernel_size=5, padding=2),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=3, stride=2),\n nn.Conv2d(192, 384, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(384, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.Conv2d(256, 256, kernel_size=3, padding=1),\n nn.ReLU(inplace=True),\n nn.MaxPool2d(kernel_size=3, stride=2),\n )\n self.classifier = nn.Sequential(\n nn.Dropout(),\n nn.Linear(256 * 6 * 6, 4096),\n nn.ReLU(inplace=True),\n nn.Dropout(),\n nn.Linear(4096, 4096),\n nn.ReLU(inplace=True),\n nn.Linear(4096, num_classes),\n )\n\n def forward(self, x):\n x = self.features(x)\n x = x.view(x.size(0), 256 * 6 * 6)\n x = self.classifier(x)\n return x\n\n\ndef simple_conv(pretrained=False, num_classes=140):\n model = SimpleConv(num_classes)\n # if pretrained:\n # model.load_state_dict(model_zoo.load_url(model_urls['alexnet']))\n return model\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "trypag" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "trypag" }, { "contents": "Hi, <SCODE>class SimpleConv(nn.Module):\n def __init__(self):\n super(SimpleConv, self).__init__()\n self.features = nn.Sequential(\n nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=1),\n nn.ReLU(inplace=True),\n )\n\n def forward(self, x, y):\n x1 = self.features(x)\n x2 = self.features(y)\n x = torch.cat((x1, x2), 1)\n return x\n<ECODE>", "isAccepted": false, "likes": 23, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "No, there’s not. We don’t recommend that. Just write a custom container, with two sequential parts and a reshape in the middle. See torchvision models.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Would someone please explain what this function does? Does a pre-trained model means that I can just use its weights right away without doing anything? Thanks for the help. <SCODE>def simple_conv(pretrained=False, num_classes=140):\n model = SimpleConv(num_classes)\n # if pretrained:\n # model.load_state_dict(model_zoo.load_url(model_urls['alexnet']))\n return model<ECODE>", "isAccepted": false, "likes": null, "poster": "Russel_Russel" }, { "contents": "Yes, pretrained models are ones that have been trained by someone earlier and that you can use in different applications.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "vodp" }, { "contents": "What do you mean with: ? Can you show an example of usage in the train loop? Thanks a lot", "isAccepted": false, "likes": null, "poster": "arcticriki" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Joy_Chopra" }, { "contents": "Hi there, Sorry for jumping in, but I am also trying to write modules that can accept multiple inputs. 
Since I want them to be flexible with regards to the number of inputs, I tried passing a list of Variables() to the forward() method. Here is an example among other similar modules: <SCODE>class Addition(Aggregation):\n \"\"\"\n Add two input tensors, return a single output tensor of same dimensions. If input and output have different sizes,\n use largest in each dimension and zero-pad or interpolate (spatial dimensions), or convolve with a 1x1 filter\n (number of channels)\n \"\"\"\n def __init__(self, in_channels: list, pad_or_interpolate: str = 'pad', pad_mode: str = 'replicate', \n interpolate_mode: str = 'nearest'):\n\n assert pad_or_interpolate in ['pad', 'interpolate'], \\\n \"Error: Unknown value for `pad_or_interpolate` {}\".format(pad_or_interpolate)\n\n super(Addition, self).__init__()\n self.ch_align = ChannelAlignment(in_channels) # use 1x1 convolution to align n_channels\n\n if pad_or_interpolate == 'pad':\n self.sz_align = partial(self.align_sizes_pad, mode=pad_mode)\n else: \n self.sz_align = partial(self.align_sizes_interpolate, mode=interpolate_mode)\n\n def forward(self, inputs: list):\n \"\"\"\n Performs element-wise sum of inputs. If they have different dimensions, they are first adjusted to\n common dimensions by 1/ padding or interpolation (h and w axes) and/or 2/ 1x1 convolution.\n :param inputs: List of torch input tensors of dimensions (N, C_i, H_i, W_i)\n :return: A single torch Tensor of dimensions (N, max(C_i), max(H_i), max(W_i)), containing the element-\n wise sum of the input tensors (or their size-adjusted variants)\n \"\"\"\n inputs = self.sz_align(inputs) # Perform size alignment\n inputs = self.ch_align(inputs) # Perform channel alignment\n stacked = torch.stack(inputs, dim=4) # stack inputs along an extra axis (will be removed when summing up)\n \n return torch.sum(stacked, 4, keepdim=True).squeeze(4)\n<ECODE> However I am getting weird errors: Models using these modules do not train if they are more than a few layers deep (accuracy does not increase and loss is “infinite”) Sometimes they crash and I get error messages such as <SCODE>RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED\n<ECODE> or sometimes: <SCODE> File \"C:\\Users\\Luc\\Miniconda3\\envs\\pytorch\\lib\\site-packages\\torch\\autograd\\__init__.py\", line 90, in backward\n allow_unreachable=True) # allow_unreachable flag\nRuntimeError: CUDA error: an illegal memory access was encountered\n<ECODE> <SCODE> File \"C:\\Users\\Luc\\Miniconda3\\envs\\pytorch\\lib\\site-packages\\torchsummary\\torchsummary.py\", line 19, in hook\n summary[m_key][\"input_shape\"] = list(input[0].size())\nAttributeError: 'list' object has no attribute 'size'\n<ECODE> (even though testing the forward pass with a simple tensor returns no error). I am trying to generate CNNs automatically so it has a lot of boilerplate code which makes it difficult for me to provide a simple reproducible example, but I hope you can assist! Many thanks", "isAccepted": false, "likes": null, "poster": "lucf" } ]
false
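A usage sketch for the accepted pattern above, restated so it runs standalone in current PyTorch:
<SCODE>
import torch
import torch.nn as nn

class SimpleConv(nn.Module):  # the two-input module from the reply above
    def __init__(self):
        super(SimpleConv, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, x, y):
        return torch.cat((self.features(x), self.features(y)), 1)

net = SimpleConv()
out = net(torch.randn(1, 1, 8, 8), torch.randn(1, 1, 8, 8))
print(out.size())  # torch.Size([1, 2, 8, 8]) after the channel concat
<ECODE>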
Inplace matrix modification
null
[ { "contents": "", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "Yes it is. This should work: <SCODE>L = Variable(torch.Tensor(i_size, j_size))\n# it's important to not specify requires_grad=True\n# it makes sense - you don't need grad w.r.t. original L content,\n# because it will be overwritten.\nfor i in range(i_size):\n for j in range(j_size):\n L[i, j] = # compute the value here\n<ECODE> But beware, it might be very very slow! Not only because you’ll be looping over elements in Python, but also because it will involve a lot of autograd ops to compute this, and there’s a constant overhead associated with each one. It’s not a huge problem if you’re doing relatively expensive computation like matrix multiplication or convolution, but for simple ops it can be more expensive than the computation alone.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "<SCODE>loss = 0\nL = Variable(torch.Tensor(i_size, j_size))\nW = Parameter(torch.Tensor(20, 20)) #fake size\nfor t in range(time_step):\n for i in range(i_size):\n for j in range(j_size):\n L[i, j] = func(L[i,j]) # a function of wrt old L\n out = get_output(L, W) #output computed from L and weights W\n loss += loss_func(out, label[t])\n\n<ECODE> In this case, can I still get gradient of loss wrt the weights W using autograd? It seems that L is overwritten at each timestep.", "isAccepted": false, "likes": null, "poster": "ypxie" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Tahnks! That’s exactly what I want to know.", "isAccepted": false, "likes": null, "poster": "ypxie" } ]
false
Getting indices of top-k/smallest-k elements of a Variable
null
[ { "contents": "Thanks.", "isAccepted": false, "likes": 1, "poster": "yusuf_isik" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
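For the record, the call being asked about is torch.topk, which works on autograd tensors as well; the returned values keep their gradient history while the indices are plain integer tensors:
<SCODE>
import torch

x = torch.randn(10, requires_grad=True)
top_vals, top_idx = torch.topk(x, k=3)                 # 3 largest elements
low_vals, low_idx = torch.topk(x, k=3, largest=False)  # 3 smallest elements
top_vals.sum().backward()  # gradients flow back to the selected entries
print(top_idx, low_idx)
<ECODE>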
Repackage_hidden function in word_language_model example
null
[ { "contents": "Can you pls explain what this function do? <SCODE>def repackage_hidden(h):\n \"\"\"Wraps hidden states in new Variables, to detach them from their history.\"\"\"\n if type(h) == Variable:\n return Variable(h.data)\n else:\n return tuple(repackage_hidden(v) for v in h)<ECODE>", "isAccepted": false, "likes": null, "poster": "erogol" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "I see thank you. Is this a general practice for BPTT in pytorch or just memory efficiency?", "isAccepted": false, "likes": null, "poster": "erogol" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
Torch.cat() throws error?
null
[ { "contents": "When running these code: I got an error message: But similar code like this: or this: works well. I’m totally confused. I really need to extract some elements from one tensor and cat them into another tensor. What should I do? Thanks!", "isAccepted": false, "likes": null, "poster": "Xiaochen_Li" }, { "contents": "I’m not sure why does it work for regular tensors, I’ll need to look into that, but I think it’s an unintended behaviour.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "But got this:", "isAccepted": false, "likes": null, "poster": "Xiaochen_Li" }, { "contents": "<SCODE>>>> import torch\n>>> x = torch.range(1, 2)\n>>> y = torch.range(3, 4)\n>>> torch.stack([x, y], 0)\n\n 1 2\n 3 4\n[torch.FloatTensor of size 2x2]\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" } ]
false
Parameters not automatically registered in module
null
[ { "contents": "e.g. inside the init of a module: model.parameters() does not contains the parameters", "isAccepted": false, "likes": null, "poster": "davidenitti" }, { "contents": "Hi,", "isAccepted": false, "likes": null, "poster": "fmassa" } ]
false
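The registration rules in one sketch: a tensor is only picked up by model.parameters() when it is assigned as an nn.Parameter attribute (or placed in an nn.ParameterList); a plain Python list hides it:
<SCODE>
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.w = nn.Parameter(torch.randn(3, 3))   # registered
        self.ws = nn.ParameterList(
            [nn.Parameter(torch.randn(3)) for _ in range(2)])  # registered
        self.hidden = [nn.Parameter(torch.randn(3))]  # plain list: NOT registered

model = Model()
print(len(list(model.parameters())))  # 3; the list entry is missing
<ECODE>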
PyTorch only using one GPU?
null
[ { "contents": "From nvidia-smi, I can see that during training, my pyTorch script is only using one GPU. Is there a way to make it use the others? (I have two). Thanks.", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Now in my main script, I call: I took a look at the documentation for the DataParallel, and what I currently do is this: Now this seems to run, BUT, it complains that it doesnt know what “forward” is. (The forward member function that it). So I am confused as to what exactly I should be putting through this “DataParallel” exactly… every member function of my DNN?.. Thanks", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
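The usual resolution, sketched: wrap the whole module once and then call the wrapper exactly like the original model; its forward splits each batch across the visible GPUs. Anything other than forward is reached through .module:
<SCODE>
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 10), nn.ReLU()).cuda()  # stand-in network
model = nn.DataParallel(net)   # replicates across all visible GPUs
inp = torch.randn(8, 10).cuda()
out = model(inp)               # call the wrapper itself, not model.forward(...)
print(model.module[0].weight.size())  # other attributes live on .module
<ECODE>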
Python 2.7, 3.5, and 3.6
null
[ { "contents": "Dear All, Thanks!", "isAccepted": false, "likes": null, "poster": "sungtae" }, { "contents": "py3 versions (3.5 and 3.6) also have CUDA multiprocessing available. Apart from that feature difference, they are within epsilon of performance with 2.7.", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "sungtae" }, { "contents": "I changed from 2.7 to 3.6 but am having problems with the model zoo. The Imagenet example will download pre-trained weights using a hickle file. Unfortunately it seems that the hickle module is not compatible with Python V 3.* due to the way strings are handled, which makes it difficult to use the pre-trained examples. Does anybody know a way around this, I don’t want to have to go back to 2.7 if I can avoid it John", "isAccepted": false, "likes": null, "poster": "fromLittleAcorns" } ]
false
PyTorch with VizDoom sample
reinforcement-learning
[ { "contents": "Thank you for the great tool! It is a real time saver due to autograd. You have provided nice docs and quite useful examples. It turns out Torch is more memory efficient for my models though.", "isAccepted": false, "likes": null, "poster": "andrew" }, { "contents": "this is pretty great!", "isAccepted": false, "likes": null, "poster": "smth" } ]
false
PyTorch for CUDA older than 7.5
null
[ { "contents": "PS I have old laptop with GeForce 330M and the newest available driver for Ubuntu 16.04 is nvidia-340. So the max version of CUDA I can use is 6.5.", "isAccepted": false, "likes": null, "poster": "Artsiom_Chapialiou" }, { "contents": "The minimum CUDA version we want to support is 7.0. You can try building from source, but we dont plan to support 6.5", "isAccepted": false, "likes": 1, "poster": "smth" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Chen-Wei_Xie" } ]
false
Model produces random output when using loops to define hidden layers
null
[ { "contents": "Hi, I’m adapting the DCGAN models from the PyTorch examples and had a problem saving and loading the parameters. Specifically, when I saved the parameters and then read them back in at a later stage, I would get a different output for a consistent input. I debugged and found what I believed to be the problem, however I’m not sure of the cause. I am using a loop to define hidden layers of the MLP (I’m not using the conv nets): <SCODE>self.nhl = 4\nself.nh = [2, 2, 2, 2]\nself.fc = []\nfor i in range(self.nhl-1):\n self.fc.append(nn.Linear(self.nh[i], self.nh[i+1]))\n<ECODE> which gave the following output on repeated executions of the program (with a model saved earlier): <SCODE> jordan [ src ] $ python main.py \n\nFirst layer --> 0.0268119424582 0.0\nhidden layer --> 0.0\nhidden layer --> 0.558199584484\nhidden layer --> 0.203323662281\noutput --> 0.305809795856 -0.139149516821 0.538217186928\n\n jordan [ src ] $ python main.py \nFirst layer --> 0.0268119424582 0.0\nhidden layer --> 0.0\nhidden layer --> 0.45568972826\nhidden layer --> 0.0742047131062\noutput --> 0.425929844379 -0.366338938475 0.542093634605\n \njordan [ src ] $ python main.py \nFirst layer --> 0.0268119424582 0.0\nhidden layer --> 0.0\nhidden layer --> 0.0\nhidden layer --> 0.460717827082\noutput --> 0.345742940903 -0.0932856351137 0.694804787636\n<ECODE> Note that the outputs at each of the layers changes on each execution (I confirmed the parameters loaded are all consistent on each iteration). Forward is implemented as: <SCODE> x = F.relu(self.input(_input))\n\n print 'First layer -->', x.data[0][0], x.data[0][1]\n for i in range(self.nhl-1):\n x = F.relu(self.fc[i]((x)))\n print 'hidden layer -->', x.data[0][0]\n\n output = self.output(x)\n<ECODE> If I unroll the loops then it produces consistent output. Is there a bug in my loop or is there a reason you can’t use loops to define hidden layers?", "isAccepted": false, "likes": null, "poster": "Jordan_Campbell" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "NgPDat" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Jordan_Campbell" } ]
false
Cuda Numpy operations in custom Modules
null
[ { "contents": "Hi, Just converting the model and input to cuda results in “RuntimeError: numpy conversion for FloatTensor is not supported” error at “result = abs(rfft2(numpy_input)) line”. <SCODE>class BadFFTFunction(Function):\n \n def forward(self, input):\n numpy_input = input.numpy()\n result = abs(rfft2(numpy_input))\n return torch.FloatTensor(result)\n \n def backward(self, grad_output):\n numpy_go = grad_output.numpy()\n result = irfft2(numpy_go)\n return torch.FloatTensor(result)<ECODE>", "isAccepted": false, "likes": null, "poster": "Vijay_Rengarajan" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Okay, thanks! I will check.", "isAccepted": false, "likes": null, "poster": "Vijay_Rengarajan" } ]
false
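Following the advice above, the Function can shuttle the data to the CPU for the NumPy FFT and back; a sketch that keeps the thread's old-style Function interface (slow because of the transfers, and deprecated in recent PyTorch in favour of static forward/backward methods):
<SCODE>
import torch
from torch.autograd import Function
from numpy.fft import rfft2, irfft2

class BadFFTFunction(Function):

    def forward(self, input):
        numpy_input = input.cpu().numpy()  # NumPy only sees host memory
        result = abs(rfft2(numpy_input))
        output = torch.from_numpy(result).float()
        return output.cuda() if input.is_cuda else output

    def backward(self, grad_output):
        numpy_go = grad_output.cpu().numpy()
        result = irfft2(numpy_go)
        grad = torch.from_numpy(result).float()
        return grad.cuda() if grad_output.is_cuda else grad
<ECODE>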
Choose backends
null
[ { "contents": "It is not obvious to me how to select different backends. In torch, you can mix layers from cudnn and cunn easily. Is there an easy way to do the same in pytorch?", "isAccepted": false, "likes": null, "poster": "junonia" }, { "contents": "Just out of curiosity, what’s the reason why you need to select a particular backend?", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Thanks for the information. Sometimes I extract extra information from the layers. For example, for maxpooling layer, I have cases that make use of the indices in the cunn implementation. The cudnn implementation does not have the indices variable available. So in cases there is no corresponding cudnn implementation, cunn option will be selected for that layer?", "isAccepted": false, "likes": null, "poster": "junonia" }, { "contents": "Yes, if you request the pooling function/module to return indices and cuDNN doesn’t support it, it will be automatically ignored. We’re always picking the fastest backend that supports all specified options.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Just to clarify, if I have a network with some layers supported by cudnn, but if the net contains a layer that cudnn does not support (e.g. FractionalMaxPooling), will the entire net be run using cunn or just the FractionalMaxPooling layer?", "isAccepted": false, "likes": null, "poster": "junonia" }, { "contents": "The decision is made on a per-operation basis. When we have a choice of multiple backends for a given op, we filter out these that don’t support the options you chose and pick the fastest one. For example cuDNN 5 doesn’t support dilated convolutions. If your model has both dilated and regular convolutions, THCUNN will be used for dilated and cuDNN for regular ones. This has the upside of always running at the max speed, across many different systems, that have only a subset of backends (e.g. no CUDA support), based on a single model definition.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "I’d like to use CUDNN libraries for deterministic modules but not for non-deterministic modules such as CNN libraries. If I can choose backends for each module, I would be able to get some degree of speedup while keeping determinic results. You don’t really have a plan to allow manually choosing backends?", "isAccepted": false, "likes": null, "poster": "supakjk" }, { "contents": "We want to add the ability to pick the backends in a more fine-grained manner, but we haven’t discussed any solutions yet.", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Any update on this? Is there a way to choose conv algorithms?", "isAccepted": false, "likes": null, "poster": "Shubhankar" } ]
false
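For the determinism question raised above: current releases expose global cuDNN switches, which are coarser than a per-module choice but usually enough:
<SCODE>
import torch

torch.backends.cudnn.enabled = False  # bypass cuDNN entirely

# or keep cuDNN but restrict it to deterministic algorithms:
torch.backends.cudnn.enabled = True
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False  # benchmarking picks algorithms
                                        # that may differ between runs
<ECODE>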
DataParallel and cuda with multiple inputs
null
[ { "contents": "Hi, <SCODE> for i, (input, target) in enumerate(train_loader):\n # measure data loading time\n data_time.update(time.time() - end)\n\n target = target.cuda(async=True)\n input_25 = torch.autograd.Variable(input[0])\n input_51 = torch.autograd.Variable(input[1])\n input_75 = torch.autograd.Variable(input[2])\n\n target_var = torch.autograd.Variable(target)\n\n # compute output\n output = model(patch25=input_25, patch51=input_51, patch75=input_75)\n<ECODE> So I tried to just move the network to the GPU: <SCODE># basic_conv() returns a nn.Module\nnet = basic_conv().cuda()\n<ECODE> Then I get this error that I cannot interpret myself : <SCODE>Traceback (most recent call last):\n File \"read_network.py\", line 29, in <module>\n net(patch25=in25, patch51=in51, patch75=in75)\n File \"/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 210, in __call__\n result = self.forward(*input, **kwargs)\n File \"/mnt/hdd/code/scripts/simple_conv.py\", line 30, in forward\n x_25 = self.conv2d_25_5(x['patch25'])\n File \"/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 210, in __call__\n result = self.forward(*input, **kwargs)\n File \"/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/modules/conv.py\", line 235, in forward\n self.padding, self.dilation, self.groups)\n File \"/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/functional.py\", line 37, in conv2d\n return f(input, weight, bias) if bias is not None else f(input, weight)\n File \"/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/_functions/conv.py\", line 33, in forward\n output = self._update_output(input, weight, bias)\n File \"/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/_functions/conv.py\", line 88, in _update_output\n return self._thnn('update_output', input, weight, bias)\n File \"/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/_functions/conv.py\", line 147, in _thnn\n return impl[fn_name](self, self._bufs[0], input, weight, *args)\n File \"/home/ganaye/deps/miniconda3/lib/python3.5/site-packages/torch/nn/_functions/conv.py\", line 225, in call_update_output\n bias, *args)\nTypeError: FloatSpatialConvolutionMM_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.cuda.FloatTensor, torch.cuda.FloatTensor, torch.FloatTensor, torch.FloatTensor, int, int, int, int, int, int), but expected (int state, torch.FloatTensor input, torch.FloatTensor output, torch.FloatTensor weight, [torch.FloatTensor bias or None], torch.FloatTensor finput, torch.FloatTensor fgradInput, int kW, int kH, int dW, int dH, int padW, int padH)\n<ECODE> It seems the error is coming from here, this is the forward call of my network: <SCODE> def forward(self, **x):\n # patch of size 25\n x_25 = self.conv2d_25_5(x['patch25'])\n x_25 = F.max_pool2d(x_25, 2, stride=1, padding=0)\n<ECODE> Help !!", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "Ok…", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Sorry, I was meaning DataParallel instead of DataLoader. I would like to give multiple inputs to the DataParallel. I will probably need to modify my network for each input to be distributed to a DataParallel. 
Can you explain this code extracted from the imagenet example : <SCODE> if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):\n model.features = torch.nn.DataParallel(model.features)\n model.cuda()\n else:\n model = torch.nn.DataParallel(model).cuda()\n<ECODE> Is there a specific reason to separate the classifier and the features in the alexnet and vgg models ? Why not giving the whole model to DataParallel, like in the resnet model ? Thanks", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "I implement the data_parallel with two inputs, but it does not work <SCODE>This is the functional version of the DataParallel module. \n\nArgs: \n module: the module to evaluate in parallel\n input: input to the module\n device_ids: GPU ids on which to replicate module\n output_device: GPU location of the output Use -1 to indicate the CPU.\n (default: device_ids[0])\nReturns:\n a Variable containing the result of module(input) located on\n output_device\n\"\"\"\nif not device_ids:\n return module(input1, input2) \n\nif output_device is None:\n output_device = device_ids[0]\n\nreplicas = replicate(module, device_ids)\ninput1s = scatter(input1, device_ids)\ninput2s = scatter(input2, device_ids)\nreplicas = replicas[:len(input1s)]\noutputs = parallel_apply2(replicas, input1s, input2s)\nreturn gather(outputs, output_device)\n<ECODE> <SCODE>lock = threading.Lock()\nresults = {}\n\ndef _worker(module, input1, input2, results, lock):\n var_input1 = input1\n var_input2 = input2\n while not isinstance(var_input1, Variable):\n var_input1 = var_input1[0]\n while not isinstance(var_input2, Variable):\n var_input2 = var_input2[0]\n try: \n with torch.cuda.device_of(var_input1):\n output = module(input1, input2) \n with lock:\n results[input1] = output\n except Exception as e:\n with lock:\n results[input1] = e\n\nthreads = [threading.Thread(target=_worker,\n args=(module, input1, input2, results, lock))\n for module, input1, input2 in zip(modules, input1s, input2s)]<ECODE>", "isAccepted": false, "likes": null, "poster": "peak" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" } ]
false
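As the thread works out, nn.DataParallel scatters every tensor argument of forward along the batch dimension, so a multi-input module usually needs no custom parallel code. A sketch:
<SCODE>
import torch
import torch.nn as nn

class TwoInput(nn.Module):
    def __init__(self):
        super(TwoInput, self).__init__()
        self.fc = nn.Linear(20, 5)

    def forward(self, a, b):
        return self.fc(torch.cat((a, b), dim=1))

model = nn.DataParallel(TwoInput().cuda())
a = torch.randn(8, 10).cuda()
b = torch.randn(8, 10).cuda()
out = model(a, b)  # both tensors are split across the GPUs and re-gathered
<ECODE>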
Directory for Table Layers in PyTorch
null
[ { "contents": "", "isAccepted": false, "likes": 11, "poster": "amdegroot" }, { "contents": "This is great! Thanks for putting this together", "isAccepted": false, "likes": 1, "poster": "colesbury" }, { "contents": "Yeah no problem, love doing this stuff, hopefully it helps!", "isAccepted": false, "likes": null, "poster": "amdegroot" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Mata_Fu" }, { "contents": "This is exactly what i was looking for. I was stuck on how to implement a nn.SplitTable (Lua) with pytorch", "isAccepted": false, "likes": null, "poster": "Dr_Dumbenstein" } ]
false
Delete variable from the graph
null
[ { "contents": "It could be a useful feature to allow deletion of variable from the computational graph, when back-propagating before that point is no longer necessary to save space and allow for ‘persistent’ computational structures. Deletion means release memory held by earlier variables and set new leaf variables. A example use case could be running a RNN for infinite steps and but only need gradient for K steps back computed at every single step, and it feels like there should be a way to do this without the need of constructing a new graph every time when gradient is needed.", "isAccepted": false, "likes": 2, "poster": "jzhoup" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Because it may avoid some redundant computations in some cases? Say if I want to compute gradient for every step of RNN while looking K steps back for the gradient, I may not want to construct a new graph from K steps back at every step because lots of the forward computations are already done before. Is this right?", "isAccepted": false, "likes": 2, "poster": "jzhoup" }, { "contents": "Ok, so as far as I understand, you want to do k steps with your RNN, and then move that k-sized window one step forward, and compute backward for every move? So effectively (nearly) every path would get backproped through k times?", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Yes. I guess there are other use cases where subpaths will be backproped through multiple times, and it will be great if we can keep the graph and don’t have to reconstruct the whole graph with every backward call. This deviates a little from my original question but if we allow the graph to persist and grow then it can come to a point that forgetting some history, possibly by allowing deleting variables and reassigning leaf variables, becomes necessary and thus the question I was asking.", "isAccepted": false, "likes": null, "poster": "jzhoup" }, { "contents": "No, we haven’t thought about this use case. Is it something you’re doing or just an example?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Should be straightforward to add though.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "I don’t have a working model that works like this but I work with very long sequences that is not practical to backprop all the way through, and this is one way I would try. I do think such flexibility is going to be helpful for enabling new types of models and I hope you would agree. Thanks!", "isAccepted": false, "likes": 2, "poster": "jzhoup" }, { "contents": "Yes it is. We’ve discussed it and will add it soon.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Great! You guys are awesome : )", "isAccepted": false, "likes": 1, "poster": "jzhoup" }, { "contents": "Has this been implemented? I have currently an implementation in which it would be great to backpropagate every output of an LSTM for K steps backwards instead of detaching the hidden state every K steps. I’m not sure of how would updates to the parameters look like, but it would be interesting to explore e.g. for the char-rnn.", "isAccepted": false, "likes": null, "poster": "garibarba" }, { "contents": "Hi. I have the same need. I want to do truncated backpropagation through time, and want to stop the gradient propagation past the hidden state from T time-steps ago. 
My previous approach was just to buffer the last T inputs and do a full computation of the forward pass from t-T to t, but that involves a lot of repeated computation! I’m looking for a way that I can just take the hidden state variable from T steps ago, and tell torch “don’t backpropagate past this variable!”. Is there a workaround that allows this?", "isAccepted": false, "likes": 1, "poster": "petered" }, { "contents": "For now I’ll try to implement something to crop the graph, I’ll probably make a new conversation about this.", "isAccepted": false, "likes": null, "poster": "EMarquer" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "garibarba" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Konpat_Ta_Preechakul" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "Konpat_Ta_Preechakul" } ]
false
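Until finer-grained graph surgery lands, the workaround available today is to cut the graph with detach() every K steps; gradients then flow at most K steps back (this is the simple truncation, not the sliding-window variant discussed above):
<SCODE>
import torch
import torch.nn as nn

rnn = nn.RNNCell(4, 8)
h = torch.zeros(1, 8)
K = 5

for t, x in enumerate(torch.randn(100, 1, 4)):
    if t % K == 0:
        h = h.detach()  # forget history older than the current window
    h = rnn(x, h)

h.sum().backward()  # reaches back at most K steps
<ECODE>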
How to save your net in pytorch?
null
[ { "contents": "I think it’s the torch.save(aNet, ‘myNet’), and this doesn’t complain, but I cant seem to load it… :-/", "isAccepted": false, "likes": null, "poster": "Kalamaya" }, { "contents": "Code from the imagenet example : Loading: <SCODE>if args.resume:\n if os.path.isfile(args.resume):\n print(\"=> loading checkpoint '{}'\".format(args.resume))\n checkpoint = torch.load(args.resume)\n args.start_epoch = checkpoint['epoch']\n best_prec1 = checkpoint['best_prec1']\n model.load_state_dict(checkpoint['state_dict'])\n print(\"=> loaded checkpoint '{}' (epoch {})\"\n .format(args.evaluate, checkpoint['epoch']))\n<ECODE> Saving: <SCODE>torch.save({\n 'epoch': epoch + 1,\n 'arch': args.arch,\n 'state_dict': model.state_dict(),\n 'best_prec1': best_prec1,\n }, 'checkpoint.tar' )\n<ECODE>", "isAccepted": false, "likes": 5, "poster": "trypag" }, { "contents": "Does this loading recover the last updated learning rate ?", "isAccepted": false, "likes": null, "poster": "rajarsheem" }, { "contents": "Add the following to the state you save <SCODE>\"optim_state\": optim.state_dict(),\n<ECODE> and load like this <SCODE>optimizer.load_state_dict(checkpoint['optim_state'])<ECODE>", "isAccepted": false, "likes": null, "poster": "jpeg729" } ]
false
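The failure described in the first post is usually because torch.save(aNet, 'myNet') pickles the whole module, so loading requires the exact class definition to be importable at load time; saving the state_dict avoids this. A minimal sketch, with Net as an assumed toy model:
<SCODE>
import torch
import torch.nn as nn

class Net(nn.Module):  # assumed placeholder model
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = Net()
torch.save(model.state_dict(), 'net.pth')   # save parameters only

model2 = Net()                               # class must be defined at load time
model2.load_state_dict(torch.load('net.pth'))
model2.eval()
<ECODE>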
Mask tensors from indexes (e.g. from multinomial sampling)
null
[ { "contents": "Hi, I am looking for a ‘good’ way to transform matrix of indexes to a onehot matrix (line by line). For example,", "isAccepted": false, "likes": 1, "poster": "ludc" }, { "contents": "<SCODE>>>> x = torch.LongTensor([[1, 3], [2, 4]])\n>>> torch.zeros(2, 5).scatter_(1, x, 1)\n\n 0 1 0 1 0\n 0 0 1 0 1\n[torch.FloatTensor of size 2x5]\n<ECODE>", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "1112" }, { "contents": "You could use this code snippet to get all indices of the value you are comparing to: <SCODE>x = torch.LongTensor([[1, 3], [2, 4]])\nres = torch.zeros(2, 5).scatter_(1, x, 1)\n(res == 1.).nonzero()\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "ptrblck" } ]
false
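For reference, later PyTorch versions also ship a built-in for the common case; a quick sketch reproducing the multi-hot result above (scatter_ remains the more general tool when the output width differs from the index layout):
<SCODE>
import torch
import torch.nn.functional as F

x = torch.tensor([[1, 3], [2, 4]])
onehot = F.one_hot(x, num_classes=5)   # shape (2, 2, 5): one one-hot row per index
multihot = onehot.sum(dim=1)           # shape (2, 5): matches the scatter_ output
<ECODE>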
Documentation for the C API
null
[ { "contents": "", "isAccepted": false, "likes": 2, "poster": "longcw" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Thank you for your reply. I have found it in the source code.", "isAccepted": false, "likes": null, "poster": "longcw" }, { "contents": "float *int input_data = THFloatTensor_data(input)", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "ncullen93" }, { "contents": "<SCODE>long offset = input->storageOffset+i*input->stride[0]+j*input->stride[1];\nfloat val = THFloatStorage_get(input->storage, offset);\n\n<ECODE>", "isAccepted": false, "likes": null, "poster": "Frank_Lee" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "fmassa" }, { "contents": "", "isAccepted": false, "likes": 5, "poster": "lupoglaz" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Ah ok ! thanks! Now I’m starting to understand better how you create a python library with C code. Also, this link is really helpfull: I don’t know if somewhere else you provide the list of all the “torch.x” available functions.", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Wow. How did I miss that", "isAccepted": false, "likes": null, "poster": "alexis-jacq" }, { "contents": "I think the doc for the C api would be nice, there’s so much super cool functionality that gets a bit obscured by the macro concatenation in the C source and the source being scattered across files plus navigation tools getting confused by the macros (I really like the design of the API though and how the macros are used, it was clearly designed with maximum simplicity in mind which is awesome). Also incremental updates in release notes on what’s new in the C API would help keeping track of changes. I could also see how added overhead of something like this might be an opportunity cost. Just an idea.", "isAccepted": false, "likes": 1, "poster": "Andrei_Pokrovsky" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "jinserk" } ]
false
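The offset arithmetic quoted in the thread is the same addressing scheme the Python tensor API exposes, which is a convenient way to sanity-check the C side. A small sketch (a contiguous 2-D tensor is assumed, so storage_offset() is 0):
<SCODE>
import torch

t = torch.arange(12., dtype=torch.float32).reshape(3, 4)
i, j = 1, 2

# Mirrors the C computation above:
#   long offset = input->storageOffset + i*input->stride[0] + j*input->stride[1];
offset = t.storage_offset() + i * t.stride(0) + j * t.stride(1)

# Both read the same element of the underlying storage.
assert t.reshape(-1)[offset].item() == t[i, j].item()
<ECODE>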
Does select and narrow return a view or copy
null
[ { "contents": "Hi, Generating patches positions is not really heavy on memory, but extracting is such an expensive operation, it grows linearly and eventually blows 32 GB of RAM. I would like to understand what is done wrong. To give you some idea, here is the logic behind the code: <SCODE>class PatchExtractor(data.Dataset):\n def __init__(self, root, patch_size, transform=None,\n target_transform=None):\n self.root = root\n self.patch_size = patch_size\n self.transform = transform\n self.target_transform = target_transform\n # extract all patch positions\n self.dataset = make_dataset(root,\n patch_size)\n if loader is None:\n self.loader = Loader()\n else:\n self.loader = loader\n\n def __getitem__(self, index):\n path, args, target = self.dataset[index]\n\n # img is a tensor returned by a succession of narrow and select\n img = self.loader.load(path, args, self.patch_size)\n\n if self.transform is not None:\n img = self.transform(img)\n if self.target_transform is not None:\n target = self.target_transform(target)\n\n return img, target\n\n def __len__(self):\n return len(self.dataset)\n\n<ECODE> <SCODE>self.images[path].select(0, position[0]) \\\n .narrow(0, y-border_width, patch_size) \\\n .narrow(1, z-border_width, patch_size)\n<ECODE> From my opinion, it seems everytime a patch is loaded and transfered to cuda, it is not freed after usage. I don’t store those values, I use the same train loop as in the imagenet example. The fact that a large bunch of my memory is being freed right after an epoch ends (ie when testing starts) guides me to that observation. <SCODE>def train(train_loader, model, criterion, optimizer, epoch):\n batch_time = AverageMeter()\n data_time = AverageMeter()\n losses = AverageMeter()\n top1 = AverageMeter()\n top5 = AverageMeter()\n\n # switch to train mode\n model.train()\n\n end = time.time()\n for i, (input, target) in enumerate(train_loader):\n # measure data loading time\n data_time.update(time.time() - end)\n\n target = target.cuda(async=True)\n input_25 = torch.autograd.Variable(input[0]).cuda()\n input_51 = torch.autograd.Variable(input[1]).cuda()\n input_75 = torch.autograd.Variable(input[2]).cuda()\n\n target_var = torch.autograd.Variable(target)\n\n # compute output\n output = model(patch25=input_25, patch51=input_51, patch75=input_75)\n loss = criterion(output, target_var)\n\n # debug loss value\n # print('raw loss is {loss.data[0]:.5f}\\t'.format(loss=loss))\n\n # measure accuracy and record loss\n prec1, prec5 = accuracy(output.data, target, topk=(1, 5))\n losses.update(loss.data[0], input[0].size(0))\n top1.update(prec1[0], input[0].size(0))\n top5.update(prec5[0], input[0].size(0))\n\n # compute gradient and do SGD step\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n # measure elapsed time\n batch_time.update(time.time() - end)\n end = time.time()\n\n if i % args.print_freq == 0:\n print('Epoch: [{0}][{1}/{2}]\\t'\n 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n 'Data {data_time.val:.3f} ({data_time.avg:.3f})\\t'\n 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\\t'\n 'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(\n epoch, i, len(train_loader), batch_time=batch_time,\n data_time=data_time, loss=losses, top1=top1, top5=top5))\n<ECODE> Thank you for the feedback", "isAccepted": false, "likes": 1, "poster": "trypag" }, { "contents": "", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Also, 
if you’re using DataLoader with many workers, you might want to use a lower number. Each worker probably has its own copy of the dataset in memory, and the patches it extracts accumulate in the queue, because extraction is a fast operation.", "isAccepted": false, "likes": 3, "poster": "apaszke" }, { "contents": "It’s the CPU memory that is blowing up; the GPU memory stays stable. To give you an idea, in test mode I am at around 9 GB used, while in training mode it’s around 20 GB. The thing is that memory usage in test mode is stable at 8-9 GB, but during training the RAM is slowly eaten, batch after batch, constantly growing until the next test step, where it goes back to 8-9 GB.", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "However, reducing the number of workers from 8 to 2 did reduce the memory footprint by half, at the cost of increased data loading time. Still, even when lowering the number of workers, the memory usage keeps growing (more slowly) from batch to batch, until it reaches a stable point. I guess you have implemented some kind of maximum size for the queue, and that limit is being reached. This is both good, since usage becomes constant, and frustrating, because it shows a problem I have no idea how to solve. Reducing the batch size should also help, right, since batches are prepared on the CPU side? I am currently at 4096 with 3 inputs of 75x75, 51x51, 25x25. Here is my current train loop: <SCODE>for i, (input, target) in enumerate(train_loader):\n # measure data loading time\n data_time.update(time.time() - end)\n\n target = target.cuda(async=True)\n input_25 = torch.autograd.Variable(input[0]).cuda()\n input_51 = torch.autograd.Variable(input[1]).cuda()\n input_75 = torch.autograd.Variable(input[2]).cuda()\n\n target_var = torch.autograd.Variable(target)\n\n # compute output\n output = model(patch25=input_25, patch51=input_51, patch75=input_75)\n loss = criterion(output, target_var)\n\n # debug loss value\n # print('raw loss is {loss.data[0]:.5f}\\t'.format(loss=loss))\n\n # measure accuracy and record loss\n prec1, prec5 = accuracy(output.data, target, topk=(1, 5))\n losses.update(loss.data[0], input[0].size(0))\n top1.update(prec1[0], input[0].size(0))\n top5.update(prec5[0], input[0].size(0))\n\n # compute gradient and do SGD step\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n # measure elapsed time\n batch_time.update(time.time() - end)\n end = time.time()\n\n if i % args.print_freq == 0:\n print('Epoch: [{0}][{1}/{2}]\\t'\n 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n 'Data {data_time.val:.3f} ({data_time.avg:.3f})\\t'\n 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\\t'\n 'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(\n epoch, i, len(train_loader), batch_time=batch_time,\n data_time=data_time, loss=losses, top1=top1, top5=top5))\n\n del input_25\n del input_51\n del input_75\n del input\n gc.collect()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "I have observed that instantiating the DataLoader costs around 4.2 GB of CPU memory; this is where all patch positions are extracted and stored. It is just a list of 15 million tuples. Does that mean it will cost around 4.2x4 GB if I have 4 processes building the batches?", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "Yes, exactly.
We have an upper bound on the queue size, so that if the workers are fast, they won’t fill up the memory, and it scales linearly with each worker. If you find that 1 or 2 workers already saturate the queue, there’s no point in using more of them. Also, as I said, remember that if you’re using an in-memory dataset, each worker is likely to keep its own copy, so the memory usage will be quite high. You could try to load the images lazily, or use some kind of in-memory database that all workers contact for the data.", "isAccepted": false, "likes": 2, "poster": "apaszke" }, { "contents": "Also, what is the reason that validation mode is far more efficient on CPU memory than training? On the CPU side this looks basically the same to me, I mean the data processing is identical, yet training uses 10 GB more RAM. The only differences are: <SCODE># instead of model.train()\nmodel.eval()\n\n# using volatile\ninput_25 = torch.autograd.Variable(input[0], volatile=True).cuda()\n<ECODE>", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "Can you show me how you instantiate both training and validation data loaders?", "isAccepted": false, "likes": null, "poster": "apaszke" }, { "contents": "Yes, sure. I have focused my work on reducing the DataLoader footprint. I found that using a list of ints was really expensive: Python ints are 28 bytes each. By using a numpy array I can reduce my RAM footprint from 1 GB to 80 MB. I will see if it scales well on a server. <SCODE> train_loader = torch.utils.data.DataLoader(\n medfolder.MedFolder(traindir, file_extensions, patch_size, label_map, file_map,\n transform=Compose([\n transforms.CenterSquareCrops([25, 51, 75]),\n transforms.Unsqueeze(0)\n ])),\n batch_size=args.batch_size, shuffle=True,\n num_workers=args.workers, pin_memory=True)\n\n val_loader = torch.utils.data.DataLoader(\n medfolder.MedFolder(valdir, file_extensions, patch_size, label_map, file_map,\n transform=Compose([\n transforms.CenterSquareCrops([25, 51, 75]),\n transforms.Unsqueeze(0)\n ])),\n batch_size=args.batch_size, shuffle=False,\n num_workers=args.workers, pin_memory=True)\n<ECODE> The only difference is that the validation dataset is not shuffled. I will give you some updates on this tomorrow. I think I am nearing the end of this problem.", "isAccepted": false, "likes": null, "poster": "trypag" }, { "contents": "Ugh, nice finding! That should probably do it. Note that PyTorch tensors also keep their data packed, so you can use them instead of numpy indices too. Just be careful with pinned memory when the system is already under high memory pressure.
Pinned memory is never swapped out, so it can possibly freeze or crash the system if there’s too much of it.", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "<SCODE>x = torch.autograd.Variable(torch.zeros(batchSize,3,2*imgW,2*imgW)).cuda(0)\nyt = torch.autograd.Variable(torch.LongTensor(batchSize).zero_()).cuda(0)\nmodel = models.resnet18(pretrained=True).cuda(0)\nfor epoch in range(0,epochs):\n correct=0\n\n for t in range(0, trainSize, batchSize):\n idx=0\n for i in range(t, maxSize): # batch of input x and target yt assigned from local patches of a large image\n x[idx,:,:,:]=image[:, i-imgW:i+imgW, i-imgW:i+imgW]\n yt[idx]=torch.from_numpy(Class[i])\n idx=idx+1\n optimizer.zero_grad()\n output = model(x)\n loss = criterion(m(output), yt)\n pred = output.data.max(1)[1]\n correct += pred.eq(yt.data).sum()\n loss.backward()\n optimizer.step()\n<ECODE> Thank you in advance for your guidance.", "isAccepted": false, "likes": null, "poster": "phenixcx" }, { "contents": "<SCODE>\nmodel = models.resnet18(pretrained=True).cuda(0)\nfor epoch in range(0,epochs):\n correct=0\n\n for t in range(0, trainSize, batchSize):\n x_data = []\n y_data = []\n for i in range(t, maxSize): # batch of input x and target yt assigned from local patches of a large image\n x_data.append(image[:, i-imgW:i+imgW, i-imgW:i+imgW])\n y_data.append(torch.from_numpy(Class[i]))\n x = torch.autograd.Variable(torch.stack(x_data, 0).cuda(0))\n yt = torch.autograd.Variable(torch.cat(y_data, 0).cuda(0))\n optimizer.zero_grad()\n output = model(x)\n loss = criterion(m(output), yt)\n pred = output.data.max(1)[1]\n correct += pred.eq(yt.data).sum()\n loss.backward()\n optimizer.step()\n<ECODE>", "isAccepted": false, "likes": 1, "poster": "apaszke" }, { "contents": "Got it! Thank you very much! I have read the related examples and noticed that the Variable is created inside the loop; I did not know it was necessary to do it like this. Thank you for your help again, Adam!", "isAccepted": false, "likes": null, "poster": "phenixcx" } ]
false
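To state the answer to the title question explicitly: select and narrow return views that share storage with the original tensor, so nothing is copied until you ask for it. A small sketch demonstrating this, and how to materialize an independent patch with .clone() when one is needed:
<SCODE>
import torch

img = torch.zeros(3, 8, 8)
patch = img.select(0, 0).narrow(0, 2, 4).narrow(1, 2, 4)

patch.fill_(1.)       # writes through to img, because patch is a view
print(img[0, 2, 2])   # tensor(1.)

independent = patch.clone()  # clone() allocates new storage
independent.fill_(2.)
print(img[0, 2, 2])   # still tensor(1.): img is unaffected
<ECODE>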
Verifying proper usage of batch_norm and eval()
null
[ { "contents": "I want to make sure that I am utilizing the batchnorm functionality properly: I understand how to code up the layers to perform the batchnorm, however one thing I want to make sure that I am doing right, is basically putting the net into “evaluation” mode, so that the parameters of the batch_norm do not keep getting updated, when I am say, periodically running through the validation set, or when I am done and running over new data etc. Is this correct? Thank you.", "isAccepted": false, "likes": 1, "poster": "Kalamaya" }, { "contents": "", "isAccepted": false, "likes": 3, "poster": "Jordan_Campbell" } ]
false
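A minimal sketch of the pattern confirmed above, toggling modes around a validation pass (the model, loader, and criterion are assumed placeholders):
<SCODE>
import torch

def validate(model, val_loader, criterion):
    model.eval()                  # batch norm now uses its running statistics
    total_loss = 0.0
    with torch.no_grad():         # also skip gradient bookkeeping
        for inputs, targets in val_loader:
            outputs = model(inputs)
            total_loss += criterion(outputs, targets).item()
    model.train()                 # restore training mode afterwards
    return total_loss / len(val_loader)
<ECODE>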