QuestionId | AnswerCount | Tags | CreationDate | AcceptedAnswerId | OwnerUserId | Title | Body | answers
---|---|---|---|---|---|---|---|---|
29,399,067 | 1 |
<lua><torch>
|
2015-04-01T19:20:41.137
| 29,400,464 | 563,762 |
How to get a prediction using Torch7
|
<p>I'm still familiarizing myself with Torch and so far so good. I have however hit a dead end that I'm not sure how to get around: how can I get Torch7 (or more specifically the dp library) to evaluate a single input and return the predicted output?</p>
<p>Here's my setup (basically the dp demo):</p>
<pre></pre>
|
[
{
"AnswerId": "29400464",
"CreationDate": "2015-04-01T20:45:20.397",
"ParentId": null,
"OwnerUserId": "49985",
"Title": null,
"Body": "<p>You have two options. </p>\n\n<p>One. Use the encapsulated <a href=\"https://github.com/torch/nn/blob/master/doc/module.md#module\" rel=\"nofollow\">nn.Module</a> to forward your <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor\" rel=\"nofollow\">torch.Tensor</a>:</p>\n\n<pre><code>mlp = model:toModule(datasource:trainSet():sub(1,2))\nmlp:float()\ninput = torch.FloatTensor(1, 1, 32, 32) -- replace this with your input\noutput = mlp:forward(input)\n</code></pre>\n\n<p>Two. Encapsulate your torch.Tensor into a <a href=\"http://dp.readthedocs.org/en/latest/view/index.html#dp.ImageView\" rel=\"nofollow\">dp.ImageView</a> and forward that through your <a href=\"http://dp.readthedocs.org/en/latest/model/index.html#dp.Model\" rel=\"nofollow\">dp.Model</a> :</p>\n\n<pre><code>input = torch.FloatTensor(1, 1, 32, 32) -- replace with your input\ninputView = dp.ImageView('bchw', input)\noutputView = mlp:forward(inputView, dp.Carry{nSample=1})\noutput = outputView:forward('b')\n</code></pre>\n"
}
] |
29,412,658 | 0 |
<machine-learning><neural-network><gradient-descent><torch><conv-neural-network>
|
2015-04-02T12:11:25.753
| null | 1,084,163 |
torch - LookupTable and gradient update
|
<p>I am trying to implement a neural network with multiple layers. I am trying to understand if what I have done is correct and if not, how do I debug this. The way I do it is, I define my neural network in the following manner (I initialise the lookuptable layer with some prior learned embeddings):</p>
<pre></pre>
<p>Now, to train the network, I loop through every training example and for every example I call gradUpdate() which has this code (this is straight from the examples):</p>
<pre></pre>
<p>The findGrad function is just an implementation of WARP loss which returns the gradient w.r.t. the output. Is this all I need? I assume this will backpropagate and update the parameters of all the layers. To check this, I trained this network and saved the model. Then I loaded the model and did:</p>
<pre></pre>
<p>Now, I checked vector[1] and lookuptable.weight[1] and they were the same. I can't understand why the weights in the lookup table layer did not get updated. What am I missing here?</p>
<p>Looking forward to your replies!</p>
|
[] |
29,434,671 | 1 |
<python><neural-network><computer-vision><deep-learning><caffe>
|
2015-04-03T15:02:32.747
| null | 2,539,909 |
Caffe network getting very low loss but very bad accuracy in testing
|
<p>I'm somewhat new to caffe, and I'm getting some strange behavior. I'm trying to use fine tuning on the bvlc_reference_caffenet to accomplish an OCR task.</p>
<p>I've taken their pretrained net, changed the last FC layer to the number of output classes that I have, and retrained. After a few thousand iterations I'm getting loss rates of ~.001 and an accuracy of over 90 percent when the network runs its test phase. That said, when I try to run my network on data myself, I get awful results, not exceeding 7 or 8 percent.</p>
<p>The code I'm using to run the net is: </p>
<pre></pre>
<p>Any thoughts on why this performance might be so poor?</p>
<p>Thanks!</p>
<p>PS: Some additional information which may or may not be of use. When classifying as shown below, the classifier really seems to favor certain classes. Even though I have a 101-class problem, it seems to only assign a max of 15 different classes.</p>
<p>PPS: I'm also fairly certain I'm not overfitting. I've been testing this along the way with snapshots and they all exhibit the same poor results.</p>
|
[
{
"AnswerId": "29776104",
"CreationDate": "2015-04-21T15:06:11.770",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>Your code for testing the model you posted seem to miss some components:</p>\n\n<ol>\n<li>It seems like you did not subtract the image's mean.</li>\n<li>You did not swap channels from RGB to BGR.</li>\n<li>You did not scale the inputs to [0..255] range.</li>\n</ol>\n\n<p>Looking at similar instances of <code>caffe.Classifier</code> you may see something like:</p>\n\n<pre><code>net = caffe.Classifier('bvlc_reference_caffenet/deploy.prototxt',\n 'bvlc_reference_caffenet/caffenet_train_iter_28000.caffemodel', \n mean = NP.load( 'ilsvrc_2012_mean.npy' ),\n input_scale=1.0, raw_scale=255,\n channel_swap=(2,1,0),\n image_dims=(227, 227, 1))\n</code></pre>\n\n<p>It is <strong>crucial</strong> to have the same input transformation in test as in training.</p>\n"
}
] |
29,434,729 | 1 |
<lua><machine-learning><resize><torch>
|
2015-04-03T15:05:25.537
| 29,436,069 | 470,433 |
Torch Resize Tensor
|
<p>How do I resize a Tensor in Torch? Methods documented in <a href="https://github.com/torch/torch7/blob/master/doc/tensor.md#resizing" rel="noreferrer">https://github.com/torch/torch7/blob/master/doc/tensor.md#resizing</a> do not seem to work.</p>
<pre></pre>
<p>Why doesn't this approach work? </p>
|
[
{
"AnswerId": "29436069",
"CreationDate": "2015-04-03T16:32:40.753",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>You need to do:</p>\n\n<pre><code>images:resize(...)\n</code></pre>\n\n<p>What you did:</p>\n\n<pre><code>images.resize(...)\n</code></pre>\n\n<p>images.resize does not pass the current tensor as the first argument.</p>\n\n<p><code>images:resize(...)</code> is equivalent to <code>images.resize(images, ...)</code></p>\n"
}
] |
29,436,556 | 1 |
<windows><theano>
|
2015-04-03T17:04:00.827
| 29,449,430 | 147,530 |
How to modify .theanorc so that nvcc uses the -m64 flag during compilation?
|
<p>I followed the steps at <a href="http://deeplearning.net/software/theano/install_windows.html#install-windows" rel="nofollow">http://deeplearning.net/software/theano/install_windows.html#install-windows</a> to install theano but am running into problems. One of them is that, by default, using the settings from that page, my machine tries to compile theano in 32-bit mode, as I see the following when I try to import theano in the Python shell (note the output below):</p>
<pre></pre>
<p>It then runs into problems, as it cannot find a library which indeed does not exist on my machine under the 32-bit libs (does this file exist in the 32-bit folder on another user's system?). My libraries are stored under the 64-bit folder, and I would therefore like to compile in 64-bit mode. To do that I changed my <code>.theanorc</code> to:</p>
<pre></pre>
<p>but this does not give the desired effect; it is still trying to compile in 32-bit mode:</p>
<pre></pre>
<p>Does anyone know the proper syntax to modify <code>.theanorc</code> so that nvcc uses the <a href="http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/#axzz3WB624BPV" rel="nofollow">-m64 flag</a> during compilation?</p>
<blockquote>
<p>--machine {32|64} -m Specify 32-bit vs. 64-bit architecture.</p>
<p>Allowed values for this option: 32, 64.</p>
</blockquote>
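<p>For reference, a plausible <code>.theanorc</code> snippet for this (assuming the <code>[nvcc]</code> section accepts a <code>flags</code> entry, which depends on the Theano version) would be:</p>
<pre><code>[nvcc]
flags = -m64
</code></pre>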
|
[
{
"AnswerId": "29449430",
"CreationDate": "2015-04-04T17:34:12.860",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>The problem is that your PYTHON is a 32bit python. We do not support mixing python, g++ and nvcc bit size. THis mean they all must be 32 bit or 64 bit.</p>\n\n<p>Make sure to install all of them as 64 bit.</p>\n\n<p>If you want to try to support this mixed case, check in the theano/sandbox/cuda/nvcc_compiler.py, it is there we do the compilation. Here we add the -m32 flags:</p>\n\n<p><a href=\"https://github.com/Theano/Theano/blob/master/theano/sandbox/cuda/nvcc_compiler.py#L324\" rel=\"nofollow\">https://github.com/Theano/Theano/blob/master/theano/sandbox/cuda/nvcc_compiler.py#L324</a></p>\n\n<p>If you make this work, a PR with the requested change to Theano would be welcome.</p>\n"
}
] |
29,438,171 | 1 |
<windows><theano>
|
2015-04-03T19:03:18.313
| 29,438,391 | 147,530 |
How to specify a windows directory that has spaces in it in .theanorc
|
<p>I need to use the <code>compiler_bindir</code> flag to specify the directory that contains the compiler to <code>.theanorc</code>. If I specify it like this:</p>
<pre></pre>
<p>I get an error while importing theano in Python that complains about spaces. What is the proper way to specify this directory in <code>.theanorc</code>? Note that I do not want to edit my <code>PATH</code>.</p>
|
[
{
"AnswerId": "29438391",
"CreationDate": "2015-04-03T19:21:38.480",
"ParentId": null,
"OwnerUserId": "3760780",
"Title": null,
"Body": "<p>Try the following, if you haven't yet, it seems to work for other people:</p>\n\n<pre><code>[nvcc]\ncompiler_bindir=C:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\VC\\bin\n</code></pre>\n\n<p>See also <a href=\"https://stackoverflow.com/a/26073714\">this</a> long Stackoverflow guide about installing Theano on Windows.</p>\n"
}
] |
29,456,838 | 1 |
<image-processing><similarity><caffe>
|
2015-04-05T11:36:24.087
| null | 3,966,933 |
Finding similar Images
|
<p>I want to find images similar to another image. After researching, I found two methods. The first was to represent the image by its attributes, like </p>
<p></p>
<p>but the limitation of this method is that I will not be able to get an exhaustive dataset with all the features marked.</p>
<p>The second approach I found was to extract features and do feature mapping.
So I decided to use deep convolutional neural networks with Caffe, so that by using any of the existing models I could learn the features and then perform feature matching or some other operation. I just wanted to ask for general advice: what other methods are good and worth a try? And since I am just starting out with Caffe, can anyone give a general guideline on how to approach the problem with Caffe?
Thanks in advance</p>
<p>I looked at pHash. I was just curious: it will find images which are the same up to minor intensity variations and other small changes, but will it also work for semantically similar images? For example, for a t-shirt with blue and red stripes, would it give a shirt with black and white stripes as similar, and would it consider things like the length of the shirt, collar style, etc.?</p>
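<p>For illustration, a minimal pycaffe sketch of the feature-matching idea (the model file names and the 'fc7' layer name are assumptions, not a specific model):</p>
<pre><code>import numpy as np
import caffe

# assumed model files; replace with a Model Zoo network of your choice
net = caffe.Classifier('deploy.prototxt', 'weights.caffemodel',
                       channel_swap=(2, 1, 0), raw_scale=255)

def features(path, layer='fc7'):  # 'fc7' is an assumed layer name
    net.predict([caffe.io.load_image(path)], oversample=False)
    return net.blobs[layer].data[0].copy()

# smaller euclidean distance ~ more similar images
print(np.linalg.norm(features('a.jpg') - features('b.jpg')))
</code></pre>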
|
[
{
"AnswerId": "31133097",
"CreationDate": "2015-06-30T08:19:58.280",
"ParentId": null,
"OwnerUserId": "809993",
"Title": null,
"Body": "<p>It's true, that it's been empirically shown, that the euclidean distance between the features extracted using ConvNets is closer for images of the same class, while farther for images of different classes - but it's important to understand what kind of similarity you're looking for.</p>\n\n<p>One can define many types of similarity measures, and the type of features you use (in the case of ConvNets, the type of data it was trained on) affects the kind of similar images you'll get. For instance, maybe given an image of a dog, you want to find other pictures of dogs but not specifically that exact dog, alternatively, maybe you have a picture of a church and you want to find another image of the exact same church but from a different angle - these are two very different problems, with different methods you can use to solve them.</p>\n\n<p>One particular kind of convolutional neural networks you can look at, are Siamese Network, which are built to learn similarities between two images, given a dataset of pairs of images with the labels same/not_same. You can look for implementation in Caffe for this method <a href=\"http://caffe.berkeleyvision.org/gathered/examples/siamese.html\" rel=\"nofollow\">here</a>.</p>\n\n<p>A different method, is to take a ConvNet trained on ImageNet data (<a href=\"https://github.com/BVLC/caffe/wiki/Model-Zoo\" rel=\"nofollow\">see here for options</a>), and use the python/matlab interface to classify images, and then extract the second to last layer, and use that as the representation for that image. Now you can just take the euclidean distance of those representations and this would be your similarity measure.</p>\n\n<p>Unrelated to Caffe, you can also use \"old school\" methods of feature matching, included in open source libraries like OpenCV (<a href=\"http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html\" rel=\"nofollow\">an example tutorial of such method</a>). </p>\n"
}
] |
29,458,330 | 1 |
<lua><torch>
|
2015-04-05T14:28:50.427
| null | 563,762 |
Size mismatch in function addmv in Torch7
|
<p>I'm working on a small Torch7/ Lua script to create and train a neural network, but I'm running into errors. Any ideas?</p>
<p>Here's my code:</p>
<pre></pre>
<p>Here's the error:</p>
<pre></pre>
<p>And here's a sample of my data:</p>
<pre></pre>
<p>Basically my aim is to create a deep neural network linking the frequency of words used in a sentence to a user rating of either "positive", "negative" or "neutral" (my outputs, which are binary). Please also let me know if my thinking is correct on this.</p>
<p>Thank you!</p>
|
[
{
"AnswerId": "29459592",
"CreationDate": "2015-04-05T16:38:36.680",
"ParentId": null,
"OwnerUserId": "563762",
"Title": null,
"Body": "<p>Found the problem!</p>\n\n<p>The issue was that I was giving the wrong sizes when creating the network. I was passing in \"inputs:size(1)\" instead of \"inputs:size(2)\". Here's the fix</p>\n\n<pre><code>mlp:add(nn.Linear(inputs:size(2), HUs))\nmlp:add(nn.Tanh())\nmlp:add(nn.Linear(HUs, outputs:size(2)))\n</code></pre>\n\n<p>Feel like I'm slowly starting to get the hang of Lua/ Torch! Score</p>\n"
}
] |
29,474,646 | 1 |
<file><lua><load><torch>
|
2015-04-06T15:50:25.323
| 29,474,911 | 1,082,019 |
How to load a table from file in Torch / Lua?
|
<p>Very simple operation. I have a file that contains a table made of N rows and 6 columns, and I would like to load it in a table in my Torch / Lua script.</p>
<p>The data file looks this way:</p>
<pre></pre>
<p>And I would like to load it in a table where, for example, <code>t[1][1]</code> contains <code>chr22</code>, and so on.</p>
<p>How can I do it? Thanks</p>
|
[
{
"AnswerId": "29474911",
"CreationDate": "2015-04-06T16:07:30.727",
"ParentId": null,
"OwnerUserId": "1442917",
"Title": null,
"Body": "<p>You can use <code>string.match</code> to parse individual line into a table and use <code>io.lines</code> to iterate over lines in the file:</p>\n\n<pre><code>-- script.lua\nlocal t, patt = {}, (\"(%w+)%s+\"):rep(5)..\"(%w+)\"\nfor line in io.lines() do\n if not line:find(\"^chromNameA\") then\n table.insert(t, {line:match(patt)})\n end\nend\nprint(#t, t[1][1], t[1][6]) -- prints `5 chr22 16678717`\n\n-- file.txt\nchromNameA startA endA chromNameB startB endB\nchr22 16867980 16868130 chr22 16669675 16678717\nchr22 16867980 16868130 chr22 16685348 16701095\nchr22 16867980 16868130 chr22 16723869 16739035\nchr22 16867980 16868130 chr22 16748016 16750787\nchr22 16867980 16868130 chr22 16750788 16755877\n\n-- execution: lua script.lua <file.txt\n</code></pre>\n\n<p>You can then start the script as <code>lua script.lua <file.txt</code> and it should produce a table with the structure you want.</p>\n"
}
] |
29,481,458 | 1 |
<cuda><gpu><caffe>
|
2015-04-06T23:46:57.310
| 29,481,562 | 2,297,751 |
A single program appear on two GPU card
|
<p>I have multiple GPU cards (NO.0, NO.1, ...), and every time I run a <a href="http://caffe.berkeleyvision.org/" rel="nofollow noreferrer">caffe</a> process on the NO.1 or NO.2 ... card (any card except NO.0), it uses up 73MiB on the NO.0 card.</p>
<p>For example, in the figure below, process 11899 uses 73MiB on the NO.0 card but actually runs on the NO.1 card. </p>
<p><img src="https://i.stack.imgur.com/lEqay.png" alt="multi_gpus"></p>
<p>Why? Can I disable this feature?</p>
|
[
{
"AnswerId": "29481562",
"CreationDate": "2015-04-06T23:58:37.810",
"ParentId": null,
"OwnerUserId": "1695960",
"Title": null,
"Body": "<p>The CUDA driver is like an operating system. It will reserve memory for various purposes when it is active. Certain features, such as <a href=\"https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-unified-memory-programming-hd\" rel=\"nofollow\">managed memory</a>, may cause substantial side-effect allocations to occur (although I don't think this is the case with Caffe). And its even possible that the application itself may be doing some explicit allocations on those devices, for some reason.</p>\n\n<p>If you want to prevent this, one option is to use the <code>CUDA_VISIBLE_DEVICES</code> <a href=\"https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars\" rel=\"nofollow\">environment variable</a> when you launch your process.</p>\n\n<p>For example, if you want to prevent CUDA from doing anything with card \"0\", you could do something like this (on linux):</p>\n\n<pre><code>CUDA_VISIBLE_DEVICES=\"1,2\" ./my_application ...\n</code></pre>\n\n<p>Note that the enumeration used above (the CUDA enumeration) is the same enumeration that would be reported by the <code>deviceQuery</code> sample app, but not necessarily the same enumeration reported by <code>nvidia-smi</code> (the NVML enumeration). You may need to experiment or else run <code>deviceQuery</code> to determine which GPUs you want to use, and which you want to exclude.</p>\n\n<p>Also note that using this option actually affects the devices that are visible to an application, and will cause a re-ordering of device enumeration (the device that was previously \"1\" will appear to be enumerated as device \"0\", for example). So if your application is multi-GPU aware, and you are selecting specific devices for use, you may need to change the specific devices you (or the application) are selecting, when you use this environment variable.</p>\n"
}
] |
29,490,289 | 0 |
<lua><caffe><torch>
|
2015-04-07T11:16:57.920
| null | 470,433 |
Torch Caffe Lua - How to get Flickr Style Example to work?
|
<p>Using the <a href="https://github.com/szagoruyko/torch-caffe-binding" rel="nofollow">Torch Caffe Binding</a>. I want to make a prediction on the <a href="http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html" rel="nofollow">Flickr Style example</a>. I've got the trained model and the code below. How do I have to alter the code to make it work?</p>
<pre></pre>
<p>The output is now a FloatTensor [torch.FloatTensor of dimension 20x1x1] containing 20 NaN values.</p>
|
[] |
29,492,848 | 1 |
<machine-learning><deep-learning><regression><caffe><unsupervised-learning>
|
2015-04-07T13:29:15.030
| null | 1,913,583 |
Add regression layer to caffe
|
<p>I have implemented a smile detection system based on deep learning. The bottom layer is the output of the system and has 10 outputs according to the amount of the person's smile.<br>
I want to convert these ten outputs to a single numeric output in the range of 1 to 10 with a regression layer.<br>
How can I do this in caffe?<br>
Thanks</p>
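<p>A hedged sketch of what such a regression head could look like in prototxt (the bottom blob name is an assumption):</p>
<pre><code>layer {
  name: "regress"
  type: "InnerProduct"
  bottom: "smile10"        # assumed name of the existing 10-output blob
  top: "pred"
  inner_product_param { num_output: 1 }
}
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "pred"
  bottom: "label"
  top: "loss"
}
</code></pre>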
|
[
{
"AnswerId": "32623945",
"CreationDate": "2015-09-17T06:37:44.003",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>In order to convert the 10 outputs you have to a single one you need an <code>\"InnerProduct\"</code> layer with 10 inputs and a single output. To train this layer you also need to add a loss layer suitable for regression on top of the 10 output layer you already have.<br>\nSuch loss layers can be, e.g., <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/layers/euclidean_loss_layer.cpp\" rel=\"nofollow\">Euclidean loss layer</a> or Ross Girshick's <a href=\"https://github.com/rbgirshick/caffe-fast-rcnn/blob/fast-rcnn/src/caffe/layers/smooth_L1_loss_layer.cpp\" rel=\"nofollow\">smooth L1 loss layer</a>.</p>\n"
}
] |
29,514,223 | 1 |
<deep-learning><caffe>
|
2015-04-08T12:10:02.487
| null | 1,452,257 |
Dropout layer ignored - "Warning: had one or more problems upgrading V1LayerParameter"
|
<p>I'm trying to add a dropout layer based on the ImageNet example (see code below). However, it seems to be ignored: it's not printed as being part of the network when I train the model, and I get the warning message below. I have the newest version of caffe installed. What should I do to include it properly?</p>
<p><strong>Dropout layer:</strong></p>
<pre></pre>
<p><strong>Training log:</strong></p>
<pre></pre>
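<p>For comparison, the same dropout layer in the current protobuf syntax (blob names taken from the answer below) would be:</p>
<pre><code>layer {
  name: "drop4"
  type: "Dropout"
  bottom: "fc4"
  top: "fc4"
  dropout_param { dropout_ratio: 0.5 }
}
</code></pre>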
|
[
{
"AnswerId": "32201237",
"CreationDate": "2015-08-25T10:12:15.157",
"ParentId": null,
"OwnerUserId": "5263750",
"Title": null,
"Body": "<p>I had the exact same problem and stumbled upon this question:</p>\n\n<p><a href=\"https://stackoverflow.com/questions/30253520/what-does-attempting-to-upgrade-input-file-specified-using-deprecated-transform\">What does 'Attempting to upgrade input file specified using deprecated transformation parameters' mean?</a></p>\n\n<p>As far as I know, in caffe's attempt to upgrade your layer architecture,\ncaffe has already found some layers that are written in the new protobuf syntax. Thus, it skips upgrading your old syntax and therefore ignores the Dropout layer.\nBut I am not 100% sure about this, as it's just my assumption, when taking a short look into the caffe code.</p>\n\n<p>For me the solution was to change it as follows:</p>\n\n<pre><code>layers {\n name: \"drop4\"\n type: DROPOUT\n bottom: \"fc4\"\n top: \"fc4\"\n dropout_param {\n dropout_ratio: 0.5\n }\n}\n</code></pre>\n\n<p>Hope it works =)</p>\n"
}
] |
29,529,959 | 2 |
<caffe><lmdb>
|
2015-04-09T04:44:56.380
| 29,552,948 | 465,509 |
Caffe: Understanding expected lmdb datastructure for blobs
|
<p>I'm trying to understand how data is interpreted in Caffe.
For that I've taken a look at the <a href="http://caffe.berkeleyvision.org/gathered/examples/mnist.html">MNIST Tutorial</a>.
Looking at the input data definition:</p>
<pre></pre>
<p>I've now looked at the mnist_train_lmdb and taken one of the entries (shown in hex):</p>
<pre></pre>
<p>(I've added the line breaks here to be able to see the '7' digit.)</p>
<p>Now my question is, <strong>where is this format described?</strong> Or, put differently, where is it defined that the first 36 bytes are some sort of header and the last 8 bytes have some label correspondence?</p>
<p><strong>How would I go about constructing my own data?</strong>
Neither <a href="http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html">Blob Tutorial</a> nor <a href="http://caffe.berkeleyvision.org/tutorial/layers.html">Layers Definition</a> give much away about required formats. <em>My intention is not to use image data, but time series</em></p>
<p>Thanks!</p>
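<p>For the time-series use case, a minimal sketch of serializing a Datum into LMDB with pycaffe (the sample shape and key naming are assumptions):</p>
<pre><code>import lmdb
import numpy as np
import caffe

x = np.random.randn(1, 1, 100)           # one sample: C=1, H=1, W=100 (assumed)
datum = caffe.proto.caffe_pb2.Datum()
datum.channels, datum.height, datum.width = x.shape
datum.float_data.extend(x.flatten().tolist())  # float_data suits non-image data
datum.label = 0

env = lmdb.open('train_lmdb', map_size=int(1e9))
with env.begin(write=True) as txn:
    txn.put(b'00000000', datum.SerializeToString())
</code></pre>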
|
[
{
"AnswerId": "29552948",
"CreationDate": "2015-04-10T03:40:49.967",
"ParentId": null,
"OwnerUserId": "465509",
"Title": null,
"Body": "<p>I realized that protocol buffers must come into play here. So I tried to deserialize it against some of the types defined in <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto\">caffe.proto</a>.</p>\n\n<p><strong>Datum</strong> seems to be the perfect fit:</p>\n\n<pre><code>{Caffe.Datum}\n Channels: 1\n Data: {byte[784]}\n Encoded: false\n FloatData: Count = 0\n Height: 28\n Label: 7\n Width: 28\n</code></pre>\n\n<p>So the answer is simply: <strong>It's a serialized representation of a 'Datum' typed instance as defined per <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto\">caffe.proto</a></strong></p>\n\n<p><em>Btw. since english is not my native language I had to first realize that \"Datum\" is a singular form of \"data\"</em></p>\n\n<p>When it comes to using your own data, it's structured as follows:</p>\n\n<blockquote>\n <p>The conventional blob dimensions for data are number N x channel\n K x height H x width W. Blob memory is row-major in layout so the last\n / rightmost dimension changes fastest. For example, the value at index\n (n, k, h, w) is physically located at index ((n * K + k) * H + h) * W\n + w.</p>\n</blockquote>\n\n<p>See <a href=\"http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html\">Blobs, Layers, and Nets: anatomy of a Caffe model</a> for reference</p>\n"
},
{
"AnswerId": "30793520",
"CreationDate": "2015-06-12T00:22:35.943",
"ParentId": null,
"OwnerUserId": "4561745",
"Title": null,
"Body": "<p>I can try to answer your second question. Since Caffe only takes data in a bunch of selected formats like lmdb, hdf5 etc., it is best to convert (or generate - in case of synthetic data) your data to these formats. Following links can help you in this. If you have trouble with <code>import hdf5</code>in Python, then you may refer to <a href=\"https://stackoverflow.com/questions/30769048/error-in-creating-lmdb-database-file-in-python-for-caffe?answertab=votes#tab-top\">this</a> page.</p>\n\n<p><a href=\"http://deepdish.io/2015/04/28/creating-lmdb-in-python/\" rel=\"nofollow noreferrer\">Creating an LMDB file in Python</a></p>\n\n<p><a href=\"https://stackoverflow.com/questions/5466971/fastest-way-to-write-hdf5-file-with-python\">Writing an HDF5 file in Python</a> </p>\n\n<p><a href=\"http://download.nexusformat.org/sphinx/examples/h5py/index.html\" rel=\"nofollow noreferrer\">HDF5 more examples</a></p>\n"
}
] |
29,540,592 | 2 |
<cuda><neural-network><theano><deep-learning><conv-neural-network>
|
2015-04-09T13:57:04.340
| null | 3,778,898 |
Why does my dropout function in Theano slow down convolution greatly?
|
<p>I am learning Theano. I wrote a simple dropout function as below:</p>
<pre></pre>
<p>When I apply this function to the input of the first two convolutional layers, the average time spent on each image increases from 0.5ms to about 2.5ms! Does anyone know what could be the reason for such a drastic slowdown?</p>
<p>I am using a GTX 980 card with cuDNN installed.</p>
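<p>A minimal sketch of the GPU-friendly variant hinted at in the accepted answer below, using MRG_RandomStreams so the mask is sampled on the device (the inverted-dropout scaling is an assumption):</p>
<pre><code>import theano
import theano.tensor as T
from theano.sandbox.rng_mrg import MRG_RandomStreams

srng = MRG_RandomStreams(seed=1234)

def drop(x, p=0.5):
    # the mask is generated on the GPU, avoiding a host-to-device copy per call
    mask = srng.binomial(n=1, p=1.0 - p, size=x.shape,
                         dtype=theano.config.floatX)
    return x * mask / (1.0 - p)
</code></pre>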
|
[
{
"AnswerId": "30299367",
"CreationDate": "2015-05-18T09:21:21.447",
"ParentId": null,
"OwnerUserId": "3778898",
"Title": null,
"Body": "<p>RandomStream only works on the CPU. Therefore <code>mask</code> has to be copied from CPU to GPU every time drop is called, which is the reason for the drastic slow down. To avoid this, I now use <a href=\"http://deeplearning.net/software/theano/tutorial/examples.html#other-implementations\" rel=\"noreferrer\">random stream implementation which works on GPU</a>. </p>\n"
},
{
"AnswerId": "33594732",
"CreationDate": "2015-11-08T13:56:58.903",
"ParentId": null,
"OwnerUserId": "5538861",
"Title": null,
"Body": "<p>It seems like a similar problem as mine (<a href=\"https://stackoverflow.com/questions/33592778/lasagne-dropoutlayer-does-not-utilize-gpu-efficiently\">Lasagne dropoutlayer does not utilize GPU efficiently</a>). \nHave you checked that you code sets <code>cuda_enabled = True</code> somewhere? otherwise you can manually set it in line 93 of <a href=\"https://github.com/Theano/Theano/blob/master/theano/sandbox/cuda/__init__.py\" rel=\"nofollow noreferrer\">https://github.com/Theano/Theano/blob/master/theano/sandbox/cuda/<strong>init</strong>.py</a>.\nI know this is not a elegant solution but it solved my problem for now. :)</p>\n"
}
] |
29,564,360 | 2 |
<lua><machine-learning><deep-learning><torch>
|
2015-04-10T14:44:48.083
| 29,564,910 | 1,367,788 |
Bug encountered When running Google's Deep Q Network Code
|
<p>Google's Deep Q Network for Atari Games is here.</p>
<p><a href="https://github.com/rahular/deepmind-dqn" rel="nofollow">https://github.com/rahular/deepmind-dqn</a></p>
<p>When I run it with GPU setting</p>
<pre></pre>
<p>I had this error</p>
<pre class="lang-html prettyprint-override"></pre>
<p>The code that caused this issue is in this file <a href="https://github.com/rahular/deepmind-dqn/blob/master/dqn/convnet.lua" rel="nofollow">https://github.com/rahular/deepmind-dqn/blob/master/dqn/convnet.lua</a></p>
<p>and it is in this function </p>
<pre class="lang-html prettyprint-override"></pre>
<p>The <code>net:add(convLayer(</code> call is on line 22.</p>
<p>I used the GPU setting, so it seems </p>
<pre></pre>
<p>caused convLayer to be nil.</p>
<p>Does anyone know why nn.SpatialConvolutionCUDA returns nil?</p>
|
[
{
"AnswerId": "29622297",
"CreationDate": "2015-04-14T08:14:59.757",
"ParentId": null,
"OwnerUserId": "1367788",
"Title": null,
"Body": "<p>Found the solution.</p>\n\n<p>using this github branch</p>\n\n<p><a href=\"https://github.com/soumith/deepmind-atari\" rel=\"nofollow\">https://github.com/soumith/deepmind-atari</a></p>\n\n<p>After cloning this branch, then install cutorch and cunn using luarocks.</p>\n\n<p>Now you can run the code.</p>\n"
},
{
"AnswerId": "29564910",
"CreationDate": "2015-04-10T15:11:21.013",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>Did the code originally come with GPU support, or did you add it yourself?</p>\n\n<p>You should replaced the depreceated layers, i.e. replace:</p>\n\n<pre><code>net:add(nn.Transpose({1,2},{2,3},{3,4}))\nconvLayer = nn.SpatialConvolutionCUDA\n</code></pre>\n\n<p>with</p>\n\n<pre><code>convLayer = nn.SpatialConvolution\n</code></pre>\n\n<p>Check the documentation for the layers.</p>\n\n<p>Edit: Use <a href=\"https://github.com/soumith/deepmind-atari\" rel=\"nofollow\">this branch</a>, I fixed it for GPU support.</p>\n"
}
] |
29,579,280 | 1 |
<python><theano><deep-learning>
|
2015-04-11T14:57:28.623
| 29,579,908 | 2,161,754 |
is TensorSharedVariable in theano initilized twice in function?
|
<p>In Theano, once a shared variable is initialized in a function, it will never be initialized again, even if the function is called repeatedly. Am I right?</p>
<pre></pre>
<p>In the code above, the exp_sqr_grads variable and the exp_sqr_ups variable will not be initialized with zeros again when the sgd_updates_adadelta function is called a second time, correct? </p>
|
[
{
"AnswerId": "29579908",
"CreationDate": "2015-04-11T15:59:16.350",
"ParentId": null,
"OwnerUserId": "3489247",
"Title": null,
"Body": "<p>Shared variables are not static, if that is what you mean. My understanding of your code:</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\nglobal_list = []\n\ndef f():\n\n a = np.zeros((4, 5), dtype=theano.config.floatX)\n b = theano.shared(a)\n global_list.append(b)\n</code></pre>\n\n<p>Copy and paste this into an IPython and then try:</p>\n\n<pre><code>f()\nf()\n\nprint global_list\n</code></pre>\n\n<p>The list will contain two items. They are not the same object:</p>\n\n<pre><code>In [9]: global_list[0] is global_list[1]\nOut[9]: False\n</code></pre>\n\n<p>And they don't refer to the same memory: Do</p>\n\n<pre><code>global_list[0].set_value(np.arange(20).reshape(4, 5).astype(theano.config.floatX))\n</code></pre>\n\n<p>Then</p>\n\n<pre><code>In [20]: global_list[0].get_value()\nOut[20]: \narray([[ 0., 1., 2., 3., 4.],\n [ 5., 6., 7., 8., 9.],\n [ 10., 11., 12., 13., 14.],\n [ 15., 16., 17., 18., 19.]])\n\nIn [21]: global_list[1].get_value()\nOut[21]: \narray([[ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.]])\n</code></pre>\n\n<p>Having established that initializing shared variables several times leads to different variables, here is how to update a shared variable using a function. We re-use the established shared variables:</p>\n\n<pre><code>s = global_list[1]\nx = T.scalar(dtype=theano.config.floatX)\ng = theano.function([x], [s], updates=[(s, T.inc_subtensor(s[0, 0], x))])\n</code></pre>\n\n<p><code>g</code> now increments the top left value of <code>s</code> by <code>x</code> at every call:</p>\n\n<pre><code>In [7]: s.get_value()\nOut[7]: \narray([[ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.]])\n\nIn [8]: g(1)\nOut[8]: \n[array([[ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.]])]\n\nIn [9]: s.get_value()\nOut[9]: \narray([[ 1., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.]])\n\nIn [10]: g(10)\nOut[10]: \n[array([[ 1., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.]])]\n\nIn [11]: s.get_value()\nOut[11]: \narray([[ 11., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.]])\n</code></pre>\n"
}
] |
29,584,422 | 2 |
<macos><lua><ipython><canopy><torch>
|
2015-04-11T23:58:09.753
| null | 4,611,375 |
Installing Torch7. iPython installation error (mac)
|
<p>I'm trying to install Torch7 on my Mac; however, the installation halts at this point:</p>
<pre></pre>
<p>Not sure what it means. Further above I received the following output</p>
<pre></pre>
<p>However iPython is installed as I can confirm:</p>
<pre></pre>
<p><em>Attempting to locate the .bashrc file</em></p>
<pre></pre>
<p></p>
|
[
{
"AnswerId": "29595574",
"CreationDate": "2015-04-12T22:50:44.853",
"ParentId": null,
"OwnerUserId": "726839",
"Title": null,
"Body": "<p>First, IPython may be installed but not seen by the install process.</p>\n\n<p>If you enter <code>which ipython</code> at a shell prompt it will tell you where it is installed. Then <code>echo $PATH</code> will display your PATH variable which should contain the directory that contains IPython. If it isn't then you will have to edit the PATH statement in your .bashrc file to add it.</p>\n\n<p>Second, <code>.bashrc</code> is a file that is run by the shell when it starts up and should be in your home directory so enter <code>cd</code> at a shell prompt and you will be there. Then use a text editor such as Text Edit to edit it. In your case you don't have a <code>.bashrc</code> file, instead some things are being set in a file called <code>.profile</code>. You should probably check the contents of that.</p>\n\n<p>Finally, I wouldn't run any of these commands from within IPython. Only run them (and the Torch install process) from the shell.</p>\n\n<p>Further, I notice you have a file with the name \"anaconda\" in it. Have you installed 'Anaconda'?</p>\n"
},
{
"AnswerId": "34121829",
"CreationDate": "2015-12-06T19:44:32.507",
"ParentId": null,
"OwnerUserId": "5647398",
"Title": null,
"Body": "<p>i had a similar problem and solved it, maybe it could help others.</p>\n\n<p>Here was the end of the second installation script, and the command \"th\" wasn't working: </p>\n\n<pre><code>Not updating your shell profile.\nYou might want to\nadd the following lines to your shell profile:\n\n. /Users/myusername/torch/install/bin/torch-activate\n</code></pre>\n\n<p>This article explains how your shell profile is organized: <a href=\"https://serverfault.com/questions/110065/what-profile-is-my-current-shell-using\">https://serverfault.com/questions/110065/what-profile-is-my-current-shell-using</a></p>\n\n<p>I realized in my user folder /Users/myusername/ i had a \".bash_profile\" file, i pasted the line \". /Users/myusername/torch/install/bin/torch-activate\" inside but didn't work (command \"th\" no recognized in terminal).</p>\n\n<p><strong>So in the same /Users/myusername/ folder i created a \".profile\" file</strong> and pasted the line \". /Users/myusername/torch/install/bin/torch-activate\" inside.</p>\n\n<p>Then the command \"th\" works fine ;)</p>\n"
}
] |
29,585,037 | 1 |
<c++><opencv><visual-studio-2013><cuda><caffe>
|
2015-04-12T01:43:13.620
| null | 4,778,446 |
Caffe Compilation on Windows - Compiles Successfully But 0xc000007b Error
|
<p>I have followed the steps at <a href="https://initialneil.wordpress.com/2015/01/11/build-caffe-in-windows-with-visual-studio-2013-cuda-6-5-opencv-2-4-9/" rel="nofollow">this URL</a> to compile Caffe for windows. The compilation succeeds but I am unable to run the generated EXE file. Also, when I downloaded the Git branch, there was already a caffe.exe file listed there. When I tried to run the precompiled file, I also got this error: "The application was unable to start correctly (0xc000007b). Click OK to close the application". This is the same error I get on my compiled binary.</p>
<p>Please help me. I am running Windows 7 x64. I suspect the problem could be creeping in somewhere because I have a 32-bit MinGW, so maybe it is trying to use the 32-bit libraries.</p>
<p>Right now, I have my configuration set to build 64-bit. I feel like one of the problems could be that CUDA is trying to build 32-bit. I just don't know what's causing this. Even stranger, why am I not able to run the precompiled caffe.exe that I found when I downloaded this? I get the exact same error, which makes me feel like it isn't my compilation process; there is something else going on.</p>
<p>Thank you for your help</p>
<p>OK - I ran the dependency walker. I identified the following issues:</p>
<p><strong>Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module.</strong></p>
<p><strong>Error: Modules with different CPU types were found.</strong></p>
<p>LIBGCC_S_DW2-1.DLL</p>
<p>LIBGFORTRAN-3.DLL</p>
<p>are x86 but the rest of them are x64. Where can I get the 64 bit DLLs from?</p>
<p>Also,
API-MS-WIN-APPMODEL-RUNTIME-L1-1-0.DLL</p>
<p>API-MS-WIN-CORE-WINRT-ERROR-L1-1-0.DLL</p>
<p>API-MS-WIN-CORE-WINRT-L1-1-0.DLL</p>
<p>API-MS-WIN-CORE-WINRT-ROBUFFER-L1-1-0.DLL</p>
<p>API-MS-WIN-CORE-WINRT-STRING-L1-1-0.DLL</p>
<p>API-MS-WIN-SHCORE-SCALING-L1-1-1.DLL</p>
<p>DCOMP.DLL</p>
<p>IESHIMS.DLL</p>
<p>Are listed as being not found (the system cannot find the file specified).</p>
|
[
{
"AnswerId": "29585079",
"CreationDate": "2015-04-12T01:49:00.903",
"ParentId": null,
"OwnerUserId": "3325075",
"Title": null,
"Body": "<p>This is a dependency error for Windows applications--in others words, you're probably missing a DLL file. Use <a href=\"http://www.dependencywalker.com/\" rel=\"nofollow\">dependency walker</a> to help find out which DLL files you're missing.</p>\n"
}
] |
29,593,373 | 1 |
<macos><opencv><cmake><anaconda><caffe>
|
2015-04-12T19:00:03.583
| null | 4,777,983 |
caffe make error undefined reference x86_64
|
<p><a href="https://stackoverflow.com/questions/29585180/caffe-make-error-in-mac-os-10-9">https://stackoverflow.com/questions/29585180/caffe-make-error-in-mac-os-10-9</a></p>
<p>Mac OS 10.9, Cuda 6.5, Opencv 2.4.11, Anaconda..</p>
<p>By removing -limgcodecs, I am getting the same error as before-</p>
<pre></pre>
<p>Also I tried to install opencv 3.0: </p>
<p>$ brew install --devel opencv</p>
<p>but it gives the following CMake error:</p>
<pre></pre>
<p>Any help appreciated.</p>
<hr>
<p>Finally I installed opencv3 using cmake and not through brew. But now there seems to be the same error (undefined symbols), but with leveldb:</p>
<pre></pre>
<p>I even tried to re-install leveldb but the problem persists. Any help appreciated</p>
|
[
{
"AnswerId": "29657441",
"CreationDate": "2015-04-15T17:58:18.453",
"ParentId": null,
"OwnerUserId": "4777983",
"Title": null,
"Body": "<p>I had to revert back to opencv 2.4.11. Uninstalled brew and all its dependencies. Loaded Cuda 7.0. Also added the ENV flags in dependencies with -stdlib=libc++ option. This solved the problem for me, although I still don't know the cause.</p>\n"
}
] |
29,606,856 | 2 |
<python><c++><dllimport><caffe>
|
2015-04-13T13:43:07.753
| null | 1,972,060 |
How to import caffe module in Python?
|
<p>I have built a .dll of _caffe.cpp on Windows (Release, x64).</p>
<p>I changed the extension from .dll to .pyd and tried to import it in Python:</p>
<pre></pre>
<p>Does this mean some dependency module is missing, one which was included in the Visual Studio project where I built this DLL?</p>
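<p>For context, importing pycaffe typically requires its build output directory on the module search path; a hedged sketch (the path below is an assumption about the build tree):</p>
<pre><code>import sys
sys.path.insert(0, r'C:\caffe\Build\x64\Release\pycaffe')  # assumed location
import caffe
</code></pre>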
|
[
{
"AnswerId": "40053725",
"CreationDate": "2016-10-15T00:19:53.260",
"ParentId": null,
"OwnerUserId": "6779334",
"Title": null,
"Body": "<p>For windows : </p>\n\n<p>Adding <code>/caffe/Build/x64/Release/pycaffe</code> to system path(<code>path</code>) works for me, and I think the best way to do it is :</p>\n\n<ol>\n<li>New a system variable : <code>PYTHON_PKG = /caffe/Build/x64/Release/pycaffe;</code></li>\n<li>Include <code>PYTHON_PKG</code> in <code>path</code> : <code>path = %PYTHON_PKG%; %OtherDirs%</code></li>\n</ol>\n\n<p>After I did this, I get PKG missing <code>google.internal</code>, then I did <code>pip install google.internal</code> in <code>CMD</code>. It works.</p>\n"
},
{
"AnswerId": "30886477",
"CreationDate": "2015-06-17T08:41:16.677",
"ParentId": null,
"OwnerUserId": "4973198",
"Title": null,
"Body": "<p>You need to add Python Caffe to PYTHONPATH. For example:\n<strong>export PYTHONPATH=$PYTHONPATH:/home/username/caffe/python</strong></p>\n"
}
] |
29,649,623 | 1 |
<python-2.7><memory-management><bigdata><theano>
|
2015-04-15T11:59:10.483
| 29,650,507 | 1,357,690 |
Out of memory when creating a Theano shared variable with borrow=True
|
<p>I'm trying to allocate a really big dataset (~28GB of RAM in an ndarray) into theano shared variables, using borrow=True to avoid replicating the memory. In order to do so, I'm using the following function:</p>
<pre></pre>
<p>In order to avoid data conversions, prior to saving the arrays to disk I already defined them to be in the correct format (afterwards filling them and dumping them into disk with np.save()):</p>
<pre></pre>
<p>It seems, though, that Theano tries to replicate the memory anyway, giving me the following error:</p>
<p><em>Error allocating 25594500000 bytes of device memory (out of memory). Driver report 3775729664 bytes free and 4294639616 bytes total.</em></p>
<p>Theano is configured to work on the GPU (GTX 970).</p>
|
[
{
"AnswerId": "29650507",
"CreationDate": "2015-04-15T12:41:20.407",
"ParentId": null,
"OwnerUserId": "1357690",
"Title": null,
"Body": "<p>Instead of using <code>theano.shared</code>, it is possible to use <code>theano.tensor._shared</code> to force the data to be allocated into CPU memory. The fixed code ends up like this:</p>\n\n<pre><code>def load_dataset(path):\n # Load dataset from memory\n data_f = np.load(path+'train_f.npy')\n data_t = np.load(path+'train_t.npy')\n\n # Split into training and validation\n return (\n (\n theano.tensor._shared(data_f[:-1000, :], borrow=True),\n theano.tensor._shared(data_t[:-1000, :], borrow=True)\n ), (\n theano.tensor._shared(data_f[-1000:, :], borrow=True),\n theano.tensor._shared(data_t[-1000:, :], borrow=True)\n )\n )\n</code></pre>\n"
}
] |
29,658,790 | 2 |
<python><opencv><neural-network><deep-learning><caffe>
|
2015-04-15T19:11:19.843
| null | 2,631,051 |
how to test mnist on my own dataset images
|
<p>I'm trying to test MNIST using my own dataset of digit images.<br>
I wrote a Python script for that, but it gives an error at line 16 of the code; I'm not able to send an image for testing. Any suggestions are welcome. Thanks in advance.</p>
<pre></pre>
|
[
{
"AnswerId": "29776555",
"CreationDate": "2015-04-21T15:24:35.953",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>Why don't you use the python wrapper class <code>Classifier</code>?</p>\n\n<pre><code>net = caffe.Classifier( MODEL_FILE, PRETRAINED )\nnet.predict( [input_image], oversmaple=False )\n</code></pre>\n\n<p>I'm not 100% sure, but I think LeNeT model expect gray scale image, you might need to read the image</p>\n\n<pre><code>input_image = caffe.io.load_image(IMAGE_FILE, color=False)\n</code></pre>\n"
},
{
"AnswerId": "34608567",
"CreationDate": "2016-01-05T09:48:37.843",
"ParentId": null,
"OwnerUserId": "988709",
"Title": null,
"Body": "<pre><code>import caffe\nimport os\n\nmodel_file = '../examples/mnist/lenet.prototxt'\npretrained_file = '../examples/mnist/lenet_iter_10000.caffemodel'\nnet = caffe.Classifier(model_file, pretrained_file, image_dims=(28, 28), raw_scale=255)\nscore = net.predict([caffe.io.load_image('img/1.bmp', color=False)], oversample=False)\nprint score\n</code></pre>\n\n<p>This code work for me, the output is like that:</p>\n\n<pre><code>...\n[[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]]\n</code></pre>\n"
}
] |
29,662,576 | 1 |
<lua><statistics><correlation><torch>
|
2015-04-15T23:13:33.293
| 29,662,849 | 1,082,019 |
Torch / Lua, how to pair two arrays together into a table?
|
<p>I need to use the Pearson correlation coefficient in my Torch / Lua program.
This is the function:</p>
<pre></pre>
<p>This function expects an input table of pairs that can be iterated with the pairs() function.
I tried to give it the right input table, but I was not able to get anything working.</p>
<p>I tried with:</p>
<pre></pre>
<p>But unfortunately it does not work. It computes pairs between the first elements of b, while I want it to compute the correlation between a and b.
How can I do this?</p>
<p>Can you provide a good object that may work as input to the math.pearson() function?</p>
|
[
{
"AnswerId": "29662849",
"CreationDate": "2015-04-15T23:37:50.427",
"ParentId": null,
"OwnerUserId": "1442917",
"Title": null,
"Body": "<p>If you are using <a href=\"https://github.com/attractivechaos/klib/blob/master/lua/klib.lua\" rel=\"nofollow\">this implementation</a>, then I think it expects a different parameter structure. You should be passing a table of two-value tables, as in:</p>\n\n<pre><code>local z = {\n {27, 0.0001}, {29, 0.001}, {45, 0.32132}, {98, 0.0001}, {1293, 0.0009}\n}\nprint(math.pearson(z))\n</code></pre>\n\n<p>This prints <code>-0.25304101592759</code> for me.</p>\n"
}
] |
29,667,795 | 2 |
<cuda><makefile><osx-mavericks><dylib><caffe>
|
2015-04-16T07:13:10.263
| null | 4,777,983 |
Segmentation fault in caffe
|
<p>Mac OS 10.9, OpenCV 2.4.11, CUDA 7.0,
All env flags set to libc++</p>
<pre></pre>
<p>As this could be related to library environment variables, here are all my env variables:
$DYLD_LIBRARY_PATH = /usr/local/cuda/lib</p>
<p>$LD_LIBRARY_PATH = /usr/local/cuda/lib:/opt/intel/composer_xe_2015.2.132/mkl/lib/</p>
<p>$DYLD_FALLBACK_LIBRARY_PATH =
/usr/local/cuda/lib:/Developer/NVIDIA/CUDA-7.0/lib:/Users/deepsamal/anaconda/lib:/usr/local/lib:/usr/lib:/opt/intel/composer_xe_2015.2.132/mkl/lib/:</p>
<p>As both</p>
<pre></pre>
<p>are running without error, the library is linked but not getting loaded dynamically.</p>
<p>I can't figure out what the reason could be.</p>
<p>Any help appreciated.</p>
<p>EDIT: I tried to see the runtime link of the libcudart library.</p>
<pre></pre>
<p>EDIT: I tried to find all soft links to libcaffe.so, it seems the paths to cuda libs and cudnn are not resolved and that seems to be the problem.</p>
<pre></pre>
|
[
{
"AnswerId": "29671149",
"CreationDate": "2015-04-16T09:51:16.960",
"ParentId": null,
"OwnerUserId": "1662497",
"Title": null,
"Body": "<p>There is a terrible mess with OSX tools at execution time because of the DYLD_LIBRARY_PATH\nI think this kind of hack should work : </p>\n\n<pre><code>DYLD_LIBRARY_PATH=''; make runtest\n</code></pre>\n"
},
{
"AnswerId": "30596127",
"CreationDate": "2015-06-02T12:29:29.320",
"ParentId": null,
"OwnerUserId": "27943",
"Title": null,
"Body": "<p>Setting the <code>DYLD_FALLBACK_LIBRARY_PATH</code> variable fixed this for me at least. I just had to add the <code>/usr/local/cuda/lib</code> as the first path (as you have done)</p>\n\n<pre><code>export DYLD_FALLBACK_LIBRARY_PATH=/usr/local/cuda/lib:/usr/local/lib:/usr/lib:/Developer/NVIDIA/CUDA-7.0/lib:\n</code></pre>\n\n<p>or as documented <a href=\"https://groups.google.com/forum/#!msg/caffe-users/BkMspfbYLIE/Pi6SUVANAKUJ\" rel=\"nofollow\">here</a>.</p>\n"
}
] |
29,669,774 | 1 |
<matlab><machine-learning><neural-network><deep-learning><caffe>
|
2015-04-16T08:51:54.537
| null | 4,795,605 |
Extract filters and biases using caffe
|
<p>I want to extract the filters and biases from my own caffemodel (no need to visualize the features), and I want to save them to a file.
I used the MATLAB interface to do this: </p>
<p>My workaround:</p>
<pre></pre>
<p>But some errors happened:</p>
<pre></pre>
<p>I don't know how to solve this problem.
Can anyone help me? Please and thanks.</p>
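<p>As an alternative to the MATLAB wrapper, a minimal pycaffe sketch for dumping filters and biases (the file names follow the answer below and are assumptions):</p>
<pre><code>import caffe

net = caffe.Net('super_resolution_train_test.prototxt',
                'super_resolution_iter_1000.caffemodel', caffe.TEST)
for name, blobs in net.params.items():
    W = blobs[0].data                 # the layer's filters
    print(name, W.shape)
    if len(blobs) > 1:                # biases, if the layer has them
        print(name, 'bias', blobs[1].data.shape)
</code></pre>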
|
[
{
"AnswerId": "29789189",
"CreationDate": "2015-04-22T06:13:25.893",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>You need to give an additional argument for <code>'init'</code> the phase name.<br>\nI believe in your case</p>\n\n<pre><code> caffe('init',...\n 'supe_resolution_train_test.prototxt',...\n 'super_resolution_iter_1000.caffemodel',...\n 'test');\n</code></pre>\n"
}
] |
29,691,735 | 1 |
<python><theano><deep-learning>
|
2015-04-17T06:18:49.067
| 29,705,705 | 2,161,754 |
How to find which variable is float64 when trying to use GPU in Theano
|
<p>In Theano, when using the GPU, the variables have to be float32. I checked that all my variables are float32, but I still get the error below.</p>
<pre></pre>
<p>It seems that some variables are still float64; my question is how to locate the place where I use a float64 variable.</p>
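<p>Building on the accepted answer below, a minimal sketch of turning the flag on (available from Theano 0.7):</p>
<pre><code>import theano
theano.config.warn_float64 = 'raise'  # or 'warn', 'pdb', 'ignore'
# equivalently, from the shell: THEANO_FLAGS='warn_float64=raise' python train.py
</code></pre>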
|
[
{
"AnswerId": "29705705",
"CreationDate": "2015-04-17T17:23:51.617",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>With Theano 0.7, you can use the Theano flags: warn_float64. You can give him one of those values: 'ignore', 'warn', 'raise', 'pdb'.</p>\n\n<p>This allow you to easily find where float64 are created.</p>\n"
}
] |
29,698,136 | 1 |
<caffe>
|
2015-04-17T11:40:40.067
| 29,699,422 | 894,903 |
Caffe HDF5 neural network basic example fails to parse the model file
|
<p>I am trying to build a minimal example of a neural network with HDF5 data that I have prepared from a CSV file using the caffe libraries.</p>
<p>My prototxt is as follows: [wine_train.prototxt]</p>
<pre></pre>
<p>and my solver is as follows:</p>
<pre></pre>
<p>Each time I get the following error:</p>
<pre></pre>
<p>What exactly does the error say and how do I resolve it?</p>
<h1>Update</h1>
<p>This seems to run:</p>
<pre></pre>
<h2>UPDATE2</h2>
<p>Virtually identical models: one works, the other doesn't. I can't explain why!</p>
<p>Works:</p>
<pre></pre>
<p>Doesn't:</p>
<pre></pre>
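<p>Per the accepted answer below (use <code>layer</code> with a string <code>type</code>, not <code>layers</code>), a hedged sketch of a minimal HDF5 data layer in the current syntax (the source list file name is an assumption):</p>
<pre><code>layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "wine_train_h5_list.txt"   # text file listing .h5 paths (assumed)
    batch_size: 32
  }
}
</code></pre>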
|
[
{
"AnswerId": "29699422",
"CreationDate": "2015-04-17T12:36:33.140",
"ParentId": null,
"OwnerUserId": "3391810",
"Title": null,
"Body": "<p>I ran into the same problem yesterday. You might need to check that your caffe version is up-to-date. They changed the protobuf definition quite heavily. In your case, \"type\" used to be an enum, now it just takes a string.</p>\n\n<p><strong><em>[Update]</em></strong> From the discussion in the comments: The answer is to not use \"layers\" but \"layer\". layers was probably present in some old / outdated example.</p>\n"
}
] |
29,703,182 | 1 |
<c++><ios><xcode><caffe>
|
2015-04-17T15:16:54.683
| 31,537,251 | 4,733,134 |
How to build Caffe framework XCode 6.2, iOS 8.3 environment
|
<p>I am working on building the Caffe framework for iOS. I used the Caffe master source and make files to build the framework.</p>
<p>I changed the OS target in CMake GUI config as "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk"</p>
<p>While running Xcode to build the project I got the error messages below:
/Users/Macpro_ios_v2/Caffe_iOS/src/caffe/common.cpp:1:10: 'glog/logging.h' file not found
"boost/thread.h" file not found
I included usr/local/include and opt/local/include in the search paths in the build phase. </p>
<p>When I run the same Xcode project for OS X, it works fine and generates the libs perfectly. If I change the target to iPhone OS, I get the above error.</p>
<p>Please help me to fix the above issue, and please help me configure the make list in Caffe master for iPhone.</p>
<p>I have the Caffe dylib and static lib (.a) for OS X. Is it possible to link Mac OS X libraries in an iOS project?</p>
|
[
{
"AnswerId": "31537251",
"CreationDate": "2015-07-21T11:01:38.803",
"ParentId": null,
"OwnerUserId": "1217590",
"Title": null,
"Body": "<p>cloudVision,</p>\n\n<p>You should install OpenCV2 from here before you compile the caffe-ios-sample:\n<a href=\"http://docs.opencv.org/doc/tutorials/introduction/ios_install/ios_install.html\" rel=\"nofollow\">http://docs.opencv.org/doc/tutorials/introduction/ios_install/ios_install.html</a></p>\n"
}
] |
29,707,174 | 2 |
<python><machine-learning><theano><deep-learning>
|
2015-04-17T18:47:42.160
| 29,729,742 | 2,824,962 |
Theano/Lasagne/Nolearn Neural Network Image Input
|
<p>I am working on image classification tasks and decided to use Lasagne + Nolearn for neural network prototyping.
All standard examples, like MNIST digit classification, run well, but problems appear when I try to work with my own images.</p>
<p>I want to use 3-channel images, not grayscale.
Here is the code where I'm trying to get arrays from the images:</p>
<pre></pre>
<p>Here is the code of NN and its fitting:</p>
<pre></pre>
<p>I receive exceptions like this one:</p>
<pre></pre>
<p><strong>So, in which format do you "feed" your networks with image data?</strong>
Thanks for answers or any tips!</p>
|
[
{
"AnswerId": "30922572",
"CreationDate": "2015-06-18T18:03:25.130",
"ParentId": null,
"OwnerUserId": "3502360",
"Title": null,
"Body": "<p>If you're doing classification you need to modify a couple of things:</p>\n\n<ol>\n<li>In your code you have set <code>regression = True</code>. To do classification remove this line.</li>\n<li>Ensure that your input shape matches the shape of X if want to input 3 distinct channels</li>\n<li><p>Because you are doing classification you need the output to use a softmax nonlinearity (at the moment you have the identity which will not help you with classification)</p>\n\n<pre><code>X, y = simple_load(\"new\")\n\nX = np.array(X)\ny = np.array(y)\n\nnet1 = NeuralNet(\n layers=[ # three layers: one hidden layer\n ('input', layers.InputLayer),\n ('hidden', layers.DenseLayer),\n ('output', layers.DenseLayer),\n ],\n # layer parameters:\n input_shape=(None, 3, 256, 256), # TODO: change this\n hidden_num_units=100, # number of units in hidden layer\n output_nonlinearity=lasagne.nonlinearities.softmax, # TODO: change this\n output_num_units=len(y), # 30 target values\n\n # optimization method:\n update=nesterov_momentum,\n update_learning_rate=0.01,\n update_momentum=0.9,\n\n max_epochs=400, # we want to train this many epochs\n verbose=1,\n</code></pre>\n\n<p>)</p></li>\n</ol>\n"
},
{
"AnswerId": "29729742",
"CreationDate": "2015-04-19T12:18:00.923",
"ParentId": null,
"OwnerUserId": "2824962",
"Title": null,
"Body": "<p>I also asked it in lasagne-users forum and Oliver Duerr helped me a lot with code sample:\n<a href=\"https://groups.google.com/forum/#!topic/lasagne-users/8ZA7hr2wKfM\" rel=\"nofollow\">https://groups.google.com/forum/#!topic/lasagne-users/8ZA7hr2wKfM</a></p>\n"
}
] |
29,723,803 | 1 |
<python><sympy><theano>
|
2015-04-18T22:50:58.273
| null | 3,608,005 |
How to evaluate and compile such a function using theano or autowrap?
|
<p>I would like to evaluate functions of similar form (more complicated) in sympy.</p>
<pre></pre>
<p>where all variables are vectors of length . The evaluation will take place at every timestep of an optimization routine; as a consequence, I would like to implement it efficiently. Most probably it would be best to compile these functions, but the autowrap module gives me strange errors.</p>
<p>What works:</p>
<pre></pre>
<p>I can evaluate this expression directly in sympy:</p>
<pre></pre>
<p>Gives me (as expected):</p>
<p></p>
<p>But I failed at compiling this expression. I have been reading blogs and manuals for hours, but either I am too tired or I did not find the proper information. Any help is strongly appreciated.</p>
<p>EDIT: In response to Eric: I do not know how to implement the sum of a vector in theano or autowrap. I tried different versions using lambda functions and got various errors. Maybe the most reproducible one has to do with the dimension of the inputs:</p>
<pre></pre>
<p>If I try to compile a simple expression using autowrap:</p>
<pre></pre>
<p>I obtain:</p>
<pre></pre>
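<p>For the Theano route, a minimal sketch of summing over vectors (a stand-in for the real expression, not the code above):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

x = T.vector('x')
y = T.vector('y')
f = theano.function([x, y], T.sum((x - y) ** 2))

a = np.ones(3, dtype=theano.config.floatX)
b = np.zeros(3, dtype=theano.config.floatX)
print(f(a, b))  # prints 3.0
</code></pre>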
|
[
{
"AnswerId": "34983147",
"CreationDate": "2016-01-25T00:01:39.867",
"ParentId": null,
"OwnerUserId": "467314",
"Title": null,
"Body": "<p>There are mulitple issues with your approach. 1) You can't mix in numpy functions with SymPy types and then expect the code generators to work with the mixed types. The code generators in SymPy work with SymPy types only. You can pass in external functions that map to SymPy functions. For example, you can use sympy.Sum() and then write a mapping from SymPy.Sum() to numpy.sum() for the code generator to consume. 2) Eq() is not supported in code generators as far as I know. 3) The indexed types do very specific things when passed to autowrap. You'll need to read the docs carefully about them.</p>\n"
}
] |
29,735,197 | 1 |
<theano>
|
2015-04-19T20:04:32.660
| null | 1,382,757 |
does Theano's symbolic matrix inverse "know" about structured matrices?
|
<p>In my application I am computing the inverse of a block tridiagonal matrix A - will Theano's matrix inverse account for that structure (by using a more efficient matrix inverse algorithm)?</p>
<p>Further, I only need the diagonal and first off diagonal blocks of the resulting inverse matrix. Is there a way of preventing Theano from computing the remaining blocks? </p>
<p>Generally, I'm curious whether it would be worth implementing a forward/backward block tridiagonal matrix inverse algorithm myself.</p>
|
[
{
"AnswerId": "29934743",
"CreationDate": "2015-04-29T05:01:06.163",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>As of April 2015 Theano matrix inverse function won't do it directly:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/library/tensor/nlinalg.html#theano.tensor.nlinalg.MatrixInverse\" rel=\"nofollow\">http://deeplearning.net/software/theano/library/tensor/nlinalg.html#theano.tensor.nlinalg.MatrixInverse</a></p>\n\n<p>Theano do not have many optimization and function related to that type of methods. It partially wrap what is under numpy.linalg (most of it) and some of scipy.linalg:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/library/tensor/slinalg.html\" rel=\"nofollow\">http://deeplearning.net/software/theano/library/tensor/slinalg.html</a></p>\n\n<p>So you are better in the short term to do it with numpy/scipy directly.</p>\n\n<p>If you want to add those feature to Theano, this can be done. But it need someone with the time and willingness to do it.</p>\n"
}
] |
29,739,796 | 1 |
<machine-learning><theano><conv-neural-network><pylearn>
|
2015-04-20T05:18:32.140
| 29,765,165 | 2,388,116 |
How to use leaky ReLus as the activation function in hidden layers in pylearn2
|
<p>I am using the pylearn2 library to design a CNN. I want to use leaky ReLUs as the activation function in one layer. Is there any way to do this using pylearn2? Do I have to write a custom function for it, or does pylearn2 have built-in functions for that? If so, how do I write the custom code? Can anyone please help me out here?</p>
|
[
{
"AnswerId": "29765165",
"CreationDate": "2015-04-21T07:04:21.387",
"ParentId": null,
"OwnerUserId": "1742064",
"Title": null,
"Body": "<p><a href=\"https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/models/mlp.py#L2849\" rel=\"nofollow\" title=\"ConvElemwise\">ConvElemwise</a> super-class is a generic convolutional elemwise layer. Among its subclasses <a href=\"https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/models/mlp.py#L3305\" rel=\"nofollow\" title=\"ConvRectifiedLinear\">ConvRectifiedLinear</a> is a convolutional rectified linear layer that uses <a href=\"https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/models/mlp.py#L2718\" rel=\"nofollow\" title=\"RectifierConvNonlinearity\">RectifierConvNonlinearity</a> class.</p>\n\n<p>In the <code>apply()</code> method:</p>\n\n<pre><code> p = linear_response * (linear_response > 0.) + self.left_slope *\\\n linear_response * (linear_response < 0.)\n</code></pre>\n\n<p>As <a href=\"http://cs231n.github.io/neural-networks-1/\" rel=\"nofollow\">this</a> gentle review points out:</p>\n\n<blockquote>\n <p>... Maxout neuron (introduced recently by <a href=\"http://www-etud.iro.umontreal.ca/~goodfeli/maxout.html\" rel=\"nofollow\">Goodfellow et al.</a>) that generalizes the ReLU and its leaky version.</p>\n</blockquote>\n\n<p>Examples are <a href=\"https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/models/maxout.py#L956\" rel=\"nofollow\">MaxoutLocalC01B</a> or <a href=\"https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/models/maxout.py#L515\" rel=\"nofollow\">MaxoutConvC01B</a>.</p>\n\n<p>The reason for lack of answer in <a href=\"https://groups.google.com/forum/#!topic/pylearn-users/JaSbTVNvrWo\" rel=\"nofollow\">pylearn2-user</a> may be that <a href=\"http://deeplearning.net/software/pylearn2/\" rel=\"nofollow\">pylearn2</a> is mostly written by researches at <a href=\"https://sites.google.com/a/lisa.iro.umontreal.ca/mila/\" rel=\"nofollow\">LISA lab</a> and, thus, the threshold for point 13 in <a href=\"http://deeplearning.net/software/pylearn2/faq.html\" rel=\"nofollow\">FAQ</a> may be high.</p>\n"
}
] |
29,774,413 | 1 |
<python><theano><deep-learning>
|
2015-04-21T13:59:51.910
| 29,774,499 | 492,372 |
What is the borrow parameter in Theano
|
<p>I see this following line of code:</p>
<pre></pre>
<p>In the above line, what is the borrow parameter exactly? What is the advantage of adding that there? FYI, train_set_x is basically a matrix that was generated using the theano.shared method.</p>
|
[
{
"AnswerId": "29774499",
"CreationDate": "2015-04-21T14:02:56.647",
"ParentId": null,
"OwnerUserId": "28169",
"Title": null,
"Body": "<p><a href=\"http://deeplearning.net/software/theano/tutorial/aliasing.html#borrowing-when-creating-shared-variables\">This part of the documentation</a> seems relevant:</p>\n\n<blockquote>\n <p>By default (<code>s_default</code>) and when explicitly setting <code>borrow=False</code>, the shared variable we construct gets a <em>deep</em> copy of np_array. So changes we subsequently make to <code>np_array</code> have no effect on our shared variable.</p>\n</blockquote>\n\n<p>Setting it to <code>True</code> can then be assumed to make a shallow copy, effectively letting you \"borrow\" access to the memory.</p>\n"
}
] |
29,774,793 | 1 |
<python><caffe>
|
2015-04-21T14:15:15.650
| 29,775,072 | 894,903 |
TypeError Python Class
|
<p>I have a library (caffe) that has the following definition:</p>
<pre></pre>
<p>When I try to call it using I get the following error:</p>
<pre></pre>
<p>I checked <a href="https://stackoverflow.com/questions/9698614/super-raises-typeerror-must-be-type-not-classobj-for-new-style-class">SO-9698614</a>,<a href="https://stackoverflow.com/questions/489269/python-super-raises-typeerror-why?lq=1">SO-576169</a> and <a href="https://stackoverflow.com/questions/489269/python-super-raises-typeerror-why?lq=1">SO-489269</a> but they did not lead to a solution. My class is a new type class and I could not see why it was not working. </p>
<h1>Full trace:</h1>
<pre></pre>
|
[
{
"AnswerId": "29775072",
"CreationDate": "2015-04-21T14:25:49.673",
"ParentId": null,
"OwnerUserId": "100297",
"Title": null,
"Body": "<p>Somehow you managed to bind <code>NetSpec</code> to <code>None</code> somewhere:</p>\n\n<pre><code>>>> super(None, object())\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: must be type, not None\n</code></pre>\n\n<p>The error indicates that the <code>NetSpec</code> global is bound to <code>None</code>.</p>\n\n<p>You could also bypass the <code>NetSpec.__setattr__</code> method by going directly to the instance <code>__dict__</code> attribute:</p>\n\n<pre><code>class NetSpec(object):\n def __init__(self):\n self.__dict__['tops'] = OrderedDict()\n</code></pre>\n\n<p>From the code you shared it could be that this is the culprit:</p>\n\n<pre><code>from .layers import layers, params, NetSpec\n</code></pre>\n\n<p>This imports <code>caffe.layers</code> but rebinds <code>caffe.layers</code> to the <code>Layers()</code> instance. This <em>could</em> then trigger Python to delete the module again as there are no other references to it yet (depending on when and how the <code>sys.modules</code> reference is created), causing all globals to be rebound to <code>None</code> (including <code>NetSpec</code>).</p>\n"
}
] |
29,776,502 | 3 |
<python-2.7><installation><windows-8.1><anaconda><theano>
|
2015-04-21T15:22:22.780
| 29,855,776 | 4,815,525 |
Installing Theano on windows 8.1 with Anaconda: setting the system path configuration script
|
<p>I’m trying to install Theano on windows 8.1 64 with Anaconda following step by step the guide provided here: <a href="http://theano.readthedocs.org/en/latest/install_windows.html" rel="nofollow">http://theano.readthedocs.org/en/latest/install_windows.html</a>.
I get stuck with the environment configuration script 'env.bat' needed to configure the system path.
The example refers to a WinPython distribution, but I’m installing with Anaconda and don’t know how to configure that specific line. </p>
<p>This is the example for WinPython:</p>
<pre></pre>
<p>Which directory should I set here after the CALL considering I’m using Anaconda? Struggling quite a lot, could anyone please help?</p>
<p>EDIT: please note that SCISOFT is the directory where WinPython is installed in the tutorial, the author says: "The script assumes that you installed WinPython distribution, update the winpython line otherwise." and that is what I'm not able to do because it is not specified what to point at.</p>
<p>I'm including the whole .bat, though I have no problem with the other settings:</p>
<pre></pre>
|
[
{
"AnswerId": "30539474",
"CreationDate": "2015-05-29T21:38:15.660",
"ParentId": null,
"OwnerUserId": "4853662",
"Title": null,
"Body": "<p>I had similar problems and so I compiled a solid guide on how to install Theano on Windows 8.1 x64, using WinPython x64 and CUDA 7 with MS Visual Studio 2012 - CPU and GPU both set up.</p>\n\n<p><a href=\"http://machinelearning.berlin/?p=383\" rel=\"nofollow\">http://machinelearning.berlin/?p=383</a></p>\n\n<p>Hope this helps.</p>\n"
},
{
"AnswerId": "29862969",
"CreationDate": "2015-04-25T08:40:27.653",
"ParentId": null,
"OwnerUserId": "3140336",
"Title": null,
"Body": "<p>next winpython should have theano working out of the box <a href=\"https://groups.google.com/forum/#!topic/theano-users/lta_34FXIwg\" rel=\"nofollow\">https://groups.google.com/forum/#!topic/theano-users/lta_34FXIwg</a></p>\n"
},
{
"AnswerId": "29855776",
"CreationDate": "2015-04-24T19:20:56.010",
"ParentId": null,
"OwnerUserId": "161801",
"Title": null,
"Body": "<p>I don't know what is in the WinPython env.bat, but you probably can just delete it. You likely just need to make sure that Anaconda is on the PATH. </p>\n"
}
] |
29,780,804 | 2 |
<data-structures><lua><torch>
|
2015-04-21T18:53:46.230
| null | 1,082,019 |
In Torch / Lua, is there a command to analyze an object (like str() in R)?
|
<p>I have to analyze some objects and their structure in my Torch / Lua script.
I would like to use a command that behaves like <a href="https://stat.ethz.ch/R-manual/R-devel/library/utils/html/str.html" rel="nofollow">str() in R</a>.</p>
<p>Do you have any suggestion?</p>
|
[
{
"AnswerId": "29781146",
"CreationDate": "2015-04-21T19:13:49.087",
"ParentId": null,
"OwnerUserId": "1442917",
"Title": null,
"Body": "<p>You may want to use a <a href=\"http://lua-users.org/wiki/TableSerialization\" rel=\"nofollow\">serializer</a> to represent complex data structures in a readable way. There is <a href=\"https://github.com/torch/torch7/blob/master/doc/serialization.md\" rel=\"nofollow\">torch.serialize</a> function, but it doesn't produce human-readable output. I've written <a href=\"https://github.com/pkulchenko/serpent\" rel=\"nofollow\">Serpent serializer and pretty-printer</a> that supports some of the options that <code>str()</code> has, like the max nesting level for tables or the max number of elements in a table. It also supports custom formatters, which allows you to modify the output to some degree.</p>\n"
},
{
"AnswerId": "29797140",
"CreationDate": "2015-04-22T12:09:43.590",
"ParentId": null,
"OwnerUserId": "1243636",
"Title": null,
"Body": "<p>I like this module: <a href=\"https://github.com/kikito/inspect.lua\" rel=\"nofollow\">https://github.com/kikito/inspect.lua</a></p>\n\n<p><code>luarocks install inspect</code></p>\n\n<p>then import it like this</p>\n\n<p><code>local inspect = require 'inspect'</code></p>\n\n<p>output may be something like this:</p>\n\n<pre><code>assert(inspect(setmetatable({a=1}, {b=2}) == [[{\n a = 1\n <metatable> = {\n b = 2\n }\n}]]))\n</code></pre>\n\n<p>common usage:</p>\n\n<pre><code>print(inspect(myobj))\n</code></pre>\n"
}
] |
29,788,075 | 2 |
<python><neural-network><deep-learning><caffe><glog>
|
2015-04-22T04:56:59.090
| 29,788,785 | 1,452,257 |
Setting GLOG_minloglevel=1 to prevent output in shell from Caffe
|
<p>I'm using Caffe, which is printing a lot of output to the shell when loading the neural net.<br>
I'd like to suppress that output, which supposedly can be done by setting <code>GLOG_minloglevel=1</code> when running the Python script. I've tried doing that using the following code, but I still get all the output from loading the net. How do I suppress the output correctly?</p>
<pre></pre>
|
[
{
"AnswerId": "31350273",
"CreationDate": "2015-07-10T21:06:25.190",
"ParentId": null,
"OwnerUserId": "3006372",
"Title": null,
"Body": "<p>I was able to get <a href=\"https://stackoverflow.com/a/29788785/1714410\">Shai's solution</a> to work, but only by executing that line in Python <em>before</em> calling </p>\n\n<pre><code>import caffe\n</code></pre>\n"
},
{
"AnswerId": "29788785",
"CreationDate": "2015-04-22T05:49:51.057",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>To supress the output level you need to <strong>increase</strong> the loglevel to at least 2</p>\n\n<pre><code> os.environ['GLOG_minloglevel'] = '2' \n</code></pre>\n\n<p>The levels are</p>\n\n<p>0 - debug<br>\n 1 - info (still a LOT of outputs)<br>\n 2 - warnings<br>\n 3 - errors </p>\n\n<hr>\n\n<p><strong>Update:</strong><br>\nSince this flag is <em>global</em> to <code>caffe</code>, it must be set <em>prior</em> to importing of <code>caffe</code> package (as pointed out by <a href=\"https://stackoverflow.com/a/31350273/1714410\">jbum</a>). Once the flag is set and <code>caffe</code> is imported the behavior of the GLOG tool cannot be changed.</p>\n"
}
] |
29,802,027 | 0 |
<torch>
|
2015-04-22T15:21:17.523
| null | 4,820,103 |
Torch: Model fast when learning/testing, slow when using it
|
<p>I have an issue using a learned model with torch.</p>
<p>I followed this howto <a href="http://code.cogbits.com/wiki/doku.php?id=tutorial_supervised" rel="nofollow">http://code.cogbits.com/wiki/doku.php?id=tutorial_supervised</a> to train a model. Everything is fine, my model was trained and I get correct results when I use my model. But it's slow!</p>
<p>The testing part for training looks like this:</p>
<pre></pre>
<p>I have the following speed recorded during testing:</p>
<p></p>
<p>(Of course it varies, but it's ~12ms.)</p>
<p>I want to use the learned model on other images, so I did this in a simple new script:</p>
<pre></pre>
<p>The time spent is much higher; I get the following output: </p>
<p>I tested with more than one image, always with the resizing and normalization outside clock's measurements, and I always have > 200ms / image.</p>
<p>I don't understand what I'm doing wrong and why my code is much slower than during the training/testing. </p>
<p>Thanks !</p>
|
[] |
29,805,179 | 1 |
<matlab><caffe><matcaffe>
|
2015-04-22T17:43:16.757
| null | 2,191,652 |
Error using caffe Invalid input size
|
<p>I tried to train my own neural net using my own image database as described in </p>
<p><a href="http://caffe.berkeleyvision.org/gathered/examples/imagenet.html" rel="nofollow">http://caffe.berkeleyvision.org/gathered/examples/imagenet.html</a></p>
<p>However when I want to check the neural net after training on some standard images using the matlab wrapper I get the following output / error:</p>
<pre></pre>
<p>I used the matlab wrapper before to extract cnn features based on a pretrained model. It worked. So I don't think the input size of my images is the problem (They are converted to the correct size internally by the function "prepare_image").</p>
<p>Has anyone an idea what could be the error?</p>
|
[
{
"AnswerId": "29824882",
"CreationDate": "2015-04-23T13:25:51.700",
"ParentId": null,
"OwnerUserId": "2191652",
"Title": null,
"Body": "<p>Found the solution: I was referencing the wrong \".prototxt\" file (Its a little bit confusing because the files are quite similar.\nSo for computing features using the matlab wrapper one needs to reference the following to files in \"matcaffe_demo.m\":</p>\n\n<pre><code>models/bvlc_reference_caffenet/deploy.prototxt\nmodels/bvlc_reference_caffenet/MyModel_caffenet_train_iter_450000.caffemodel\n</code></pre>\n\n<p>where \"MyModel_caffenet_train_iter_450000.caffemodel\" is the only file needed which is created during training.</p>\n\n<p>In the beginning I was accidently referencing </p>\n\n<pre><code>models/bvlc_reference_caffenet/MyModel_train_val.prototxt\n</code></pre>\n\n<p>which was the \".prototxt\" file used for training.</p>\n"
}
] |
29,814,641 | 2 |
<python><theano><deep-learning>
|
2015-04-23T05:42:00.847
| 29,971,142 | 4,231,726 |
Regression with lasagne: error
|
<p>I am trying to run a regression with lasagne/nolearn. I am having trouble finding documentation on how to do that (I'm new to deep learning in general).</p>
<p>Starting off with a simple network (one hidden layer)</p>
<pre></pre>
<p>I get the following error:</p>
<pre></pre>
<p>Thanks!..</p>
|
[
{
"AnswerId": "29971142",
"CreationDate": "2015-04-30T15:00:01.150",
"ParentId": null,
"OwnerUserId": "576676",
"Title": null,
"Body": "<p>Make sure that you're using versions of nolearn and Lasagne that are <strong>known to work together</strong>.</p>\n\n<p>Say you've been following the <a href=\"http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/\" rel=\"nofollow\">Using convolutional neural nets to detect facial keypoints tutorial</a>. Then the right thing to do is to install the dependencies from <a href=\"https://raw.githubusercontent.com/dnouri/kfkd-tutorial/master/requirements.txt\" rel=\"nofollow\">this requirements.txt file</a>, like so:</p>\n\n<pre><code>pip uninstall Lasagne\npip uninstall nolearn\npip install -r https://raw.githubusercontent.com/dnouri/kfkd-tutorial/master/requirements.txt\n</code></pre>\n\n<p>If, however, you're using nolearn from Git master, then make sure you install the Lasagne version that's in the <a href=\"https://github.com/dnouri/nolearn/blob/master/requirements.txt\" rel=\"nofollow\">requirements.txt file found there</a>:</p>\n\n<pre><code>pip uninstall Lasagne\npip install -r https://raw.githubusercontent.com/dnouri/nolearn/master/requirements.txt\n</code></pre>\n"
},
{
"AnswerId": "29905390",
"CreationDate": "2015-04-27T20:31:50.423",
"ParentId": null,
"OwnerUserId": "4839389",
"Title": null,
"Body": "<p>Not sure what version of nolearn and lasagne you are using. I did notice that you have <code>y</code> as being of shape <code>(137,)</code>. From my usage this needs to be <code>(137, 1)</code> to work for your case, and, in general, dim 2 needs to match the <code>output_num_units</code>. </p>\n\n<p>Try <code>y.reshape((-1, 1))</code>.</p>\n\n<p>If this doesn't work it may be a Python 3 compatibility issue.</p>\n"
}
] |
29,818,284 | 2 |
<nlp><convolution><theano><deep-learning>
|
2015-04-23T08:49:59.773
| 33,602,229 | 2,161,754 |
how to make the image_shape dynamic in the convolution in Theano
|
<p>I tried to process a tweets dataset using a CNN in Theano. Unlike images, the length of tweets (corresponding to the image shape) is variable, so the shape of each tweet is different. However, in Theano, the convolution requires the shape information to be constant values. So my question is: is there some way to make the image_shape dynamic?</p>
|
[
{
"AnswerId": "29837099",
"CreationDate": "2015-04-24T01:21:39.723",
"ParentId": null,
"OwnerUserId": "3098048",
"Title": null,
"Body": "<p>Convolutional neural networks are really better suited to processing images.</p>\n\n<p>For processing tweets, you might want to read about recursive neural networks.</p>\n\n<p><a href=\"http://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf\" rel=\"nofollow\">http://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf</a></p>\n"
},
{
"AnswerId": "33602229",
"CreationDate": "2015-11-09T03:45:52.947",
"ParentId": null,
"OwnerUserId": "5149834",
"Title": null,
"Body": "<p>Kalchbrenner et. al (2015) implemented an CNN that accepts dynamic length input and pools them into k elements. If there are less than k elements to begin with, the remaining are zero-padded. Their experiments with sentence classification show that such networks successfully represent grammatical structures.</p>\n\n<p>For details check out:</p>\n\n<ul>\n<li>the paper (<a href=\"http://arxiv.org/pdf/1404.2188v1.pdf\" rel=\"nofollow\">http://arxiv.org/pdf/1404.2188v1.pdf</a>)</li>\n<li>Matlab code (link on page 2 of the paper)</li>\n<li>suggestion for DCNNs for Theano/Keras (<a href=\"https://github.com/fchollet/keras/issues/373\" rel=\"nofollow\">https://github.com/fchollet/keras/issues/373</a>)</li>\n</ul>\n"
}
] |
29,825,174 | 1 |
<python><list><neural-network><caffe>
|
2015-04-23T13:37:42.157
| 29,825,383 | 639,973 |
python list not showing full elements
|
<p>I'm inserting images into Decaf, and want to extract features from the 6th, 7th, and 8th layers. The 6th and 7th are supposed to be 4096-dimensional, and the 8th is supposed to be 1000.</p>
<p>I'm assuming that the generated output functions like a list, and want to record each element in a separate text file as follows:</p>
<pre></pre>
<p>The f8 file correctly has 1000 values, but the f6 and f7 text files have something like the following:</p>
<pre></pre>
<p>The dots in the middle are literally like that. What happened to all the numbers? Do those dots signify something? Some kind of abridgement?
Is this something that has to do with decaf or python?</p>
|
[
{
"AnswerId": "29825383",
"CreationDate": "2015-04-23T13:45:02.973",
"ParentId": null,
"OwnerUserId": "190597",
"Title": null,
"Body": "<p>It looks like <code>feat6</code> is a NumPy array.\nIf so, instead of </p>\n\n<pre><code>f6name = fname+'-f6.txt'\nf6 = open(f6name,'w')\nfor f in feat6:\n f6.write(str(f))\n f6.write('\\t')\nf6.close()\n</code></pre>\n\n<p>use</p>\n\n<pre><code>import numpy as np\n\nf6name = fname+'-f6.txt'\nnp.savetxt(f6name, feat6, delimiter='\\t')\n</code></pre>\n\n<p>This won't include the brackets (<code>[</code> and <code>]</code>), but that is usually more desireable as it makes parsing the data easier.</p>\n\n<hr>\n\n<p>The <code>str</code> representation of NumPy arrays includes ellipses when the number of elements in the array exceed <code>threshold</code> which by default NumPy sets to 1000. You can change this by <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html\" rel=\"nofollow\">setting <code>threshold</code> to some higher number</a>: </p>\n\n<pre><code>import numpy as np\nnp.set_printoptions(threshold=10**6)\n</code></pre>\n\n<p>With this change, <code>str(f)</code> would return a stringified version of <code>f</code> without ellipses as long as <code>f.size</code> is less than 10**6.</p>\n\n<p>While this explains why you are seeing ellipses, I don't recommend using <code>np.set_printoptions</code> here since <code>np.savetxt</code> solves your problem more simply.</p>\n"
}
] |
29,825,687 | 2 |
<machine-learning><theano><pylearn><lstm>
|
2015-04-23T13:56:22.307
| null | 1,259,448 |
implementing LSTM with theano scan, way slower then using loops
|
<p>I am using Theano/Pylearn2 to implement an LSTM model inside my own network. However, I've found that Theano scan is much, much slower than using plain loops. I used the Theano profiler </p>
<pre></pre>
<p>and the Ops,</p>
<pre></pre>
<p>So lots and lots of time is spent on Scan (which is kind of expected, but I didn't expect it to be so slow). </p>
<p>The main body of my code is</p>
<pre></pre>
<p>And I wrote my scan as:</p>
<pre></pre>
<p>One thing I've noticed is that Theano's scan seems to use a Python implementation (?). Is that the reason why this is ridiculously slow, or did I do something wrong? Why does Theano use a Python implementation of Scan instead of a C one?</p>
<p>(I said using loops is faster, but only at runtime; for a large model I didn't manage to compile the loop-based version within a reasonable amount of time.)</p>
|
[
{
"AnswerId": "38457559",
"CreationDate": "2016-07-19T11:43:02.800",
"ParentId": null,
"OwnerUserId": "4862162",
"Title": null,
"Body": "<p>It takes time for Theano developers to implement scan and gradient-of-scan using C and GPU, because it is much more complicated than other functions. That is why when you profile it, it shows GpuElemwise, GpuGemv, GpuDot22, etc., but you don't see a GpuScan or GpuGradofScan.</p>\n\n<p>Meanwhile, you can only fall back to for loops.</p>\n"
},
{
"AnswerId": "31328630",
"CreationDate": "2015-07-09T21:34:01.867",
"ParentId": null,
"OwnerUserId": "2938232",
"Title": null,
"Body": "<p>This was asked a while ago but I had/have the same problem. Answer is that scan is slow on the GPU.</p>\n\n<p>See: <a href=\"https://github.com/Theano/Theano/issues/1168\" rel=\"nofollow\">https://github.com/Theano/Theano/issues/1168</a></p>\n"
}
] |
29,828,908 | 2 |
<python><github><neural-network><theano>
|
2015-04-23T16:08:06.533
| null | 2,423,116 |
Theano warning: The same cache key is associated to different modules
|
<p>I've been using Lasagne for a while to run Neural Networks. I had installed it by downloading the repo from github and then doing .
Today I tried updating to the latest version. This is what I've done:</p>
<p>-rename the previous lasagne folder to lasagne_old.</p>
<p>-create a new lasagne folder with the new repo</p>
<p>-</p>
<p>The install completed fine.
However as soon as I try running the usual Neural Networks it starts giving errors:</p>
<pre></pre>
<p>How could I fix this? And, moving forward, what is the right way to update a package from a repo?</p>
|
[
{
"AnswerId": "35456839",
"CreationDate": "2016-02-17T12:32:30.097",
"ParentId": null,
"OwnerUserId": "4530440",
"Title": null,
"Body": "<p>Use the command <a href=\"http://deeplearning.net/software/theano/install.html#updating-theano\" rel=\"nofollow\"><code>theano-cache clear</code></a>. I had a similar problem and it solved it. Hope it's helpful</p>\n\n<blockquote>\n <p>If you installed NumPy/SciPy with yum/apt-get, updating NumPy/SciPy with pip/easy_install is not always a good idea. This can make Theano crash due to problems with BLAS (but see below). The versions of NumPy/SciPy in the distribution are sometimes linked against faster versions of BLAS. Installing NumPy/SciPy with yum/apt-get/pip/easy_install won’t install the development package needed to recompile it with the fast version. This mean that if you don’t install the development packages manually, when you recompile the updated NumPy/SciPy, it will compile with the slower version. This results in a slower Theano as well. To fix the crash, you can clear the Theano cache like this:</p>\n</blockquote>\n\n<pre><code>theano-cache clear\n</code></pre>\n"
},
{
"AnswerId": "42933636",
"CreationDate": "2017-03-21T16:58:59.777",
"ParentId": null,
"OwnerUserId": "3881080",
"Title": null,
"Body": "<pre><code>$ theano-cache clear\n</code></pre>\n\n<p>and optionally</p>\n\n<pre><code>$ theano-cache purge\n</code></pre>\n"
}
] |
29,842,935 | 1 |
<python><machine-learning><neural-network><deep-learning><caffe>
|
2015-04-24T08:55:36.763
| null | 4,827,684 |
Mean shape incompatible with input shape -- CAFFE Classification Error in IO.PY
|
<p>I installed Caffe on an Ubuntu 14.04 virtual server with CUDA installed (without the driver) using <a href="https://github.com/BVLC/caffe/wiki/Ubuntu-14.04-VirtualBox-VM" rel="nofollow">this link</a>.</p>
<p>The classification step: </p>
<pre></pre>
<p>yields a traceback. </p>
<p>I followed the steps described by user2696499 in <a href="https://stackoverflow.com/questions/28692209/using-gpu-despite-setting-cpu-only-yielding-unexpected-keyword-argument/28979649#28979649">this SO thread</a>.<br>
However, the variable <code>ins</code> is not defined. How or where do I define it?</p>
|
[
{
"AnswerId": "29887772",
"CreationDate": "2015-04-27T05:22:08.210",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p><code>ins</code> stands for the input shape and is equal to <code>in_shape[1:]</code>.</p>\n"
}
] |
29,871,102 | 1 |
<lua><torch>
|
2015-04-25T21:47:01.440
| 29,872,347 | 1,546,029 |
In trepl or luajit, how can I find the source code of a library I'm using?
|
<p>Let's say I'm working with a Lua library I installed using luarocks, and I want to see the definition of a function from that library. In IPython I could use </p>
<blockquote>
<blockquote>
<p>??function_name</p>
</blockquote>
</blockquote>
<p>to see the definition in the terminal, in matlab I could use </p>
<blockquote>
<blockquote>
<p>which function_name</p>
</blockquote>
</blockquote>
<p>then use my editor to look at the path returned by which. How could I do something similar to find the function definition for a lua library?</p>
|
[
{
"AnswerId": "29872347",
"CreationDate": "2015-04-26T00:35:43.063",
"ParentId": null,
"OwnerUserId": "805875",
"Title": null,
"Body": "<p>In 'plain' Lua/JIT, you can say <a href=\"http://www.lua.org/manual/5.1/manual.html#pdf-debug.getinfo\" rel=\"nofollow noreferrer\"><code>debug.getinfo</code></a><code>( func )</code> and will get a table containing (among others) the fields <code>short_src</code>, <code>source</code> and <code>linedefined</code>.</p>\n\n<p>For Lua functions, <code>short_src</code> will be the filename or <code>stdin</code> if it was defined in the REPL. (<code>source</code> has a slightly different format, filenames are prefixed with an <code>@</code>, a <code>=</code> prefix is used for C functions or stuff defined interactively, and for <code>load</code>ed functions, it will be the actual string that was loaded.)</p>\n\n<p>You can pack that up in a function like</p>\n\n<pre><code>function sourceof( f )\n local info = debug.getinfo( f, \"S\" )\n return info.short_src, info.linedefined\nend\n</code></pre>\n\n<p>or maybe even start an editor and point it there, e.g. (for vim)</p>\n\n<pre><code>function viewsource( f )\n -- get info & check it's actually from a file\n local info = debug.getinfo( f, \"S\" )\n local src, line = info.source, info.linedefined\n if src == \"=[C]\" then return nil, \"Is a C function.\" end\n local path = src:match \"^@(.*)$\"\n if path then\n -- start vim (or an other editor if you adapt the format string)\n return os.execute( (\"vim -fR %q +%d\"):format( path, line ) )\n end\n return nil, \"Was defined at run time.\"\nend\n</code></pre>\n\n<p>And just for fun, here's yet another version that returns the code if it can find it somewhere. (This will also work for functions that have been generated at run time, e.g. by calling <code>load</code>, and where no source file exists. You could also work in the other direction by dumping the <code>load</code>ed snippet into a temp file and opening that…)</p>\n\n<pre><code>-- helper to extract the source block defining the function\nlocal function funclines( str, line1, lineN, filename )\n -- if linedefined / lastlinedefined are 0, this is the main chunk's function\n if line1 == 0 and lineN == 0 then\n filename = filename and filename..\" (main chunk)\"\n or \"(chunk defined at runtime)\"\n return \"-- \"..filename..\"\\n\"..str\n end\n -- add line info to file name or use placeholder\n filename = filename and filename..\":\"..line1 or \"(defined at runtime)\"\n -- get the source block\n local phase, skip, grab = 1, line1-1, lineN-(line1-1)\n local ostart, oend -- these will be the start/end offsets\n if skip == 0 then phase, ostart = 2, 0 end -- starts at first line\n for pos in str:gmatch \"\\n()\" do\n if phase == 1 then -- find offset of linedefined\n skip = skip - 1 ; if skip == 0 then ostart, phase = pos, 2 end \n else -- phase == 2, find offset of lastlinedefined+1\n grab = grab - 1 ; if grab == 0 then oend = pos-2 ; break end\n end\n end\n return \"-- \"..filename..\"\\n\"..str:sub( ostart, oend )\nend\n\nfunction dumpsource( f )\n -- get info & line numbers\n local info = debug.getinfo( f, \"S\" )\n local src, line, lastline = info.source, info.linedefined, info.lastlinedefined\n -- can't do anything for a C function\n if src == \"=[C]\" then return nil, \"Is a C function.\" end\n if src == \"=stdin\" then return nil, \"Was defined interactively.\" end\n -- for files, fetch the definition\n local path = src:match \"^@(.*)$\"\n if path then\n local f = io.open( path )\n local code = f:read '*a' \n f:close( )\n return funclines( code, line, lastline, path )\n end\n -- otherwise `load`ed, so `source`/`src` _is_ the 
source\n return funclines( src, line, lastline )\nend\n</code></pre>\n\n<p>A closing remark: If you paste code into a Lua/JIT REPL, <code>local</code>s disappear between definitions, because every line (or minimal complete group of lines) is its own chunk. The common fix (that you probably know) is to wrap everything into a block as <code>do</code>*paste*<code>end</code>, but an alternative is to <code>load[[</code>*paste*<code>]]()</code> (possibly with more <code>=</code>s like <code>[===[</code> and <code>]===]</code>.) If you paste this way, the above <code>dumpsource</code> (or any other function using <code>debug.getinfo</code>) will then be able to get the source of the function(s). This also means that if you defined a nice function but it's gone from the history and the scroll buffer, you can recover it in this way (<em>if</em> you defined it by <code>load</code>ing and not directly feeding the interpreter). Saving the source in a file will then also be possible without copy-pasting and not require editing out the <code>>></code> prompts.</p>\n"
}
] |
29,882,271 | 1 |
<lua><gnuplot><torch>
|
2015-04-26T19:13:37.800
| 30,022,005 | 1,546,029 |
Lua Error: "Gnuplot terminal is not set"
|
<p>In LuaJIT or in the Torch REPL, I run the commands
and I get the error "Gnuplot terminal is not set".</p>
<p>I tried using gnuplot.setterm() with some guesses such as 'x11' and 'qt' as arguments, but get the error "gnuplot does not seem to have this term". Is there somewhere I can get a list of terminal emulators/graphics backends available to gnuplot? Or alternatively, are these errors indicative of some other problem?</p>
|
[
{
"AnswerId": "30022005",
"CreationDate": "2015-05-04T03:04:19.540",
"ParentId": null,
"OwnerUserId": "1546029",
"Title": null,
"Body": "<p>It turns out that you get this error if you don't have the <a href=\"http://www.gnuplot.info/download.html\" rel=\"nofollow\">Gnuplot executable</a> installed. </p>\n\n<p>I didn't check for this problem before because gnuplot.lua (v. 5.1) has an error check for the case of that executable being unavailable - on line 145 - but for some reason it failed to catch the problem. </p>\n"
}
] |
29,893,793 | 1 |
<conditional-statements><theano>
|
2015-04-27T10:55:00.327
| null | 1,215,787 |
Theano The number of values on the `then` branch should have the same number of variables as the `else` branch
|
<p>The title is the error message when I use ifelse in theano. </p>
<p>Say, I have a python snippet like this that I want to wirte in theano:</p>
<pre></pre>
<p>Now if I write in theano:</p>
<pre></pre>
<p>it gave the error in the title. How can I remedy this error?</p>
<p>Thank you </p>
|
[
{
"AnswerId": "33180736",
"CreationDate": "2015-10-16T22:57:32.610",
"ParentId": null,
"OwnerUserId": "2805751",
"Title": null,
"Body": "<p><code>ifelse</code> needs to evaluate each branch of the condition at compile, your error is because the list you return on the false condition contains zero theano variables but the <code>calculate()</code> function returns at least one. I believe there are other checks on the datatype returned, but can't remember off the top of my head. For the code you posted, if <code>calculate</code> always returns a scaler value:</p>\n\n<p><code>rval = ifelse(condition, calculate(a,b,c), 0.)</code></p>\n\n<p>should compile</p>\n"
}
] |
29,948,507 | 1 |
<lua><luajit><torch>
|
2015-04-29T15:41:29.960
| 29,949,224 | 1,546,029 |
How to specify startup file for Torch REPL
|
<p>I'd like to define some command line convenience functions to be run every time I start the Torch REPL. For example,</p>
<p></p>
<p>and things of that nature. How can I have functions like this added to the namespace every time I start the REPL?</p>
<p>I web searched "luajit|torch|trepl startup|rc file" but couldn't find any leads on this.</p>
|
[
{
"AnswerId": "29949224",
"CreationDate": "2015-04-29T16:15:27.313",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>you can alias your th repl to take a default -l parameter:</p>\n\n<pre><code>alias thnew='th -lmyadditions '\n</code></pre>\n\n<p>where myadditions.lua is your file to be executed that is placed in your lua path.</p>\n"
}
] |
29,951,454 | 1 |
<jpeg><macports><libjpeg><libjpeg-turbo><torch>
|
2015-04-29T18:17:17.947
| null | 3,870,045 |
issues installing libjpeg for mac (macports)
|
<p>I use a MacBook (with Yosemite). I am quite new to MacPorts.</p>
<p>However, i need to install libjpeg (so i can use torch7 fully), but get this error When i try:</p>
<pre></pre>
<p>result:</p>
<pre></pre>
<p>I don't quite know what to do, as I don't think I can uninstall jpeg without problems.</p>
|
[
{
"AnswerId": "29959632",
"CreationDate": "2015-04-30T05:25:14.020",
"ParentId": null,
"OwnerUserId": "726106",
"Title": null,
"Body": "<p>You can deactivate the conflicting port first with:</p>\n\n<pre><code>sudo port -f deactivate jpeg\nsudo port install libjpeg-turbo\n</code></pre>\n"
}
] |
29,955,356 | 1 |
<lua><luajit><torch>
|
2015-04-29T22:00:18.860
| 29,955,728 | 1,546,029 |
How to change working directory in Torch REPL
|
<p>Title says it all, how can one change the working directory inside the Torch REPL? I tried using calls to os.execute('cd some_dir') but this doesn't work, as demonstrated here.</p>
<p>
--prints: /home/user/Code<br>
--prints: true exit 0<br>
-- prints: /home/user/Code</p>
<p>where pwd() is a convenience function that calls os.execute('pwd').</p>
|
[
{
"AnswerId": "29955728",
"CreationDate": "2015-04-29T22:31:31.700",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>Install the lfs package (probably already installed, if not \"luarocks install luafilesystem\")</p>\n\n<p>Then,</p>\n\n<pre><code>lfs=require 'lfs'\nlfs.chdir(newdir)\n</code></pre>\n\n<p>Also, in torch REPL, you can execute shell commands with a $ prefix\nExample:</p>\n\n<pre><code>th> $ls\n</code></pre>\n"
}
] |
29,962,365 | 1 |
<python-2.7><anaconda><theano>
|
2015-04-30T08:08:52.630
| null | 4,786,629 |
Not being able to run Theano on windows XP 32 with Anaconda
|
<p>I’m trying to run Theano on windows XP 32 with Anaconda (Python 2.7). I installed Theano following the steps provided in the Anaconda section here: <a href="http://deeplearning.net/software/theano/install_windows.html" rel="nofollow">http://deeplearning.net/software/theano/install_windows.html</a>. Everything went fine but when I try to run the script import theano I get this error message:</p>
<p>ImportError: DLL load failed: The specified procedure could not be found. Struggling quite a bit, could anyone please help?</p>
|
[
{
"AnswerId": "35789344",
"CreationDate": "2016-03-04T06:24:28.847",
"ParentId": null,
"OwnerUserId": "1653637",
"Title": null,
"Body": "<p>I had the same issue. Now I can use Theano on Windows XP 32bits with Anaconda.</p>\n\n<p>With dependency walker, we found when 'import theano', the error 'ImportError' caused by LIBSTDC++6.DLL(a g++ DLL) cannot import the non-exists symbol 'vsnprintf' in 'MSVCRT.DLL'.</p>\n\n<p>The key to fix this issue is, replace the mingw installed by 'conda install mingw' (rev 4.7.0). Install mingw g++ (rev 4.9.3) with '<a href=\"https://sourceforge.net/projects/mingw/\" rel=\"nofollow\">mingw-get-setup.exe</a>' from sourceforge instead. </p>\n\n<p>Make user you PATH environment has 'C:\\MinGW\\bin;C:\\MinGW\\lib;' (default install location) to use the correct g++.</p>\n\n<p>Good luck!</p>\n"
}
] |
29,971,874 | 2 |
<python><loops><theano>
|
2015-04-30T15:30:48.233
| null | 854,585 |
Python theano with index computed inside the loop
|
<p>I have installed the Theano library for increasing the speed of a computation, so that I can use the power of a GPU.</p>
<p>However, inside the inner loop of the computation a new index is calculated, based on the loop index and corresponding values of a couple of arrays.</p>
<p>That calculated index is then used to access an element of another array, which, in turn, is used for another calculation.</p>
<p>Is this too complicated to expect any significant speedups from Theano?</p>
<p>So let me rephrase my question, the other way round.
Here is an example GPU code snippet. Some initialisations are left out for brevity. Can I translate this to Python/Theano without increasing computation times considerably?</p>
<pre></pre>
<p>{ </p>
<pre></pre>
|
[
{
"AnswerId": "30126362",
"CreationDate": "2015-05-08T14:32:53.797",
"ParentId": null,
"OwnerUserId": "1318689",
"Title": null,
"Body": "<p>GPUs aren't great at random access memory when working with their global memory. I've not used Theano before but if your arrays all fit in local memory - this would be fast as random accesses aren't a problem there. If it is global memory though it is hard to anticipate what performance would be but it would be a far cry from it's full power. On another note, is something about this computation even parallelizable? GPUs only really do well when there's alot of these things going on concurrently.</p>\n"
},
{
"AnswerId": "30501611",
"CreationDate": "2015-05-28T08:51:51.717",
"ParentId": null,
"OwnerUserId": "2100935",
"Title": null,
"Body": "<p>No, I see nothing which cannot be done using Tensors instead of a for-loop. This should mean that you might see an increase in speed, but this will really depend on the application. You have an overhead of python+theano as well, especially coming from c-like code.</p>\n\n<p>So, instead of</p>\n\n<pre><code>for (m = 0; m < M; ++m)\n{\n unsigned int ind2 = 3 * m;\n\n float diff_x = x - some_pos[ind2];\n float diff_y = y - some_pos[ind2 + 1];\n float diff_z = z - some_pos[ind2 + 2];\n\n float distance = sqrtf(diff_x * diff_x\n + diff_y * diff_y\n + diff_z * diff_z);\n\n unsigned int dist = rintf(distance/some_factor);\n ind3 = m * another_factor + dist;\n\n cuComplex some_element = data[ind3];\n}\n</code></pre>\n\n<p>You could do something like (of the top of my head)</p>\n\n<pre><code>diff_xyz = T.Tensor([x,y,z]).dimshuffle('x',0) - some_pos.reshape(-1,3)\ndistance = T.norm(diff_xyz)\ndist = T.round(distance/some_factor)\ndata = data.reshape(another_factor,-1)\nsome_elements = data[:,dist]\n</code></pre>\n\n<p>See? No more loops, therefore a GPU can parallellize this.</p>\n\n<blockquote>\n <p>However, inside the inner loop of the computation a new index is calculated, based on the loop index and corresponding values of a couple of arrays. (...) Is this too complicated to expect any significant speedups from Theano?</p>\n</blockquote>\n\n<p>In general: this can be optimized, as long as the loop index has a linear relation with the index needed, by using tensors instead of loops. It however needs a bit of creativity and massaging to get right.</p>\n\n<p>Non-linear relations are also possible using <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor._tensor_py_operators.take\" rel=\"nofollow\">Tensor.take()</a>, but I don't dare to vouch for its speed on GPU. My gut-feeling always told me to stay away from it, as it is probably too flexible to optimize nicely. However, it is possible to use when there are no alternatives.</p>\n"
}
] |
29,985,453 | 2 |
<python><ubuntu-14.04><keras>
|
2015-05-01T10:40:45.597
| 29,985,665 | 4,262,897 |
Linux error when installing Keras
|
<p>I am getting this error, which is strange to me, when installing Keras on an Ubuntu server:</p>
<pre></pre>
<p>Any ideas how to fix this issue?</p>
<p>I've downloaded the Keras repository from <a href="https://github.com/fchollet/keras" rel="nofollow">https://github.com/fchollet/keras</a> and used this command to install it:</p>
<pre></pre>
<p>My Linux specifications are:</p>
<ul>
<li><em>Distributor ID:</em> Ubuntu</li>
<li><em>Description:</em> Ubuntu 14.04.2 LTS</li>
<li><em>Release:</em> 14.04</li>
<li><em>Codename:</em> trusty</li>
</ul>
|
[
{
"AnswerId": "29985665",
"CreationDate": "2015-05-01T10:59:08.237",
"ParentId": null,
"OwnerUserId": "2929337",
"Title": null,
"Body": "<p>You need to install the <a href=\"https://launchpad.net/ubuntu/+source/hdf5\" rel=\"nofollow\">hdf5</a> package to get the headers you need.</p>\n"
},
{
"AnswerId": "42403835",
"CreationDate": "2017-02-22T22:53:30.930",
"ParentId": null,
"OwnerUserId": "1605761",
"Title": null,
"Body": "<p>Real Error is :</p>\n\n<blockquote>\n <p>\"In file included from /tmp/easy_install-qQggXs/h5py-2.5.0/h5py/defs.c:287:0:\n /tmp/easy_install-qQggXs/h5py-2.5.0/h5py/api_compat.h:27:18: fatal error: hdf5.h: No such file or directory\n #include \"hdf5.h\" \"</p>\n</blockquote>\n\n<p><em>This error says that header file hdf5.h is missing.</em></p>\n\n<p>Run the following command to install header file:</p>\n\n<pre><code>sudo apt-get install libhdf5-dev\n</code></pre>\n\n<p>Please note that to install h5py package, run following command :</p>\n\n<pre><code>sudo pip install h5py\n</code></pre>\n\n<p>Hope this solves your problem</p>\n"
}
] |
29,990,173 | 0 |
<python><ubuntu-14.04><theano><deep-learning>
|
2015-05-01T15:45:21.067
| null | 4,262,897 |
Python keras, nolearn, import error
|
<p>I am getting this error when I am trying to import nolearn for theano or keras.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="false" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"></pre>
</div>
</div>
With Keras the same type.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="false" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"></pre>
</div>
</div>
</p>
<p>I thought I had this sorted out by this <a href="https://stackoverflow.com/questions/29985453/linux-error-when-installing-keras?noredirect=1#comment48090541_29985453">Linux error when installing Keras</a>, but reality was different.</p>
<p>Any ideas?</p>
<p>This is the import part of the program.</p>
<pre></pre>
|
[] |
29,993,665 | 1 |
<python><theano>
|
2015-05-01T19:22:11.727
| 33,098,347 | 3,979,919 |
Python theano.scan taps argument
|
<p>I'm trying desperately to understand the taps argument
in the theano.scan function. Unfortunately I'm
not able to come up with a specific question.</p>
<p>I just don't understand the "taps" mechanism.
Well, OK. I know in which order the sequences
are passed to the function, but I don't know
the meaning. For example (I borrowed this code from
another Question <a href="https://stackoverflow.com/questions/26718812/python-theano-scan-function">Python - Theano scan() function</a>):</p>
<pre></pre>
<p>Setting taps to -1 does make sense to me. As far as I understand
it's the same as not setting the taps value and the whole vector 'x0'
is being passed to the addf function. x0 will then be added
with the "step" parameter (int 2 which will be broadcasted to the same size).
In the next iteration the result [4, 5] will be the input and so on
which yields the following output: </p>
<pre></pre>
<p>Setting taps to -3 however yields the following output:</p>
<pre></pre>
<p>I don't have any explanation for how the scan function creates this
output. Why is it just a list now?
The "print(a1)" turns out to be as expected </p>
<pre></pre>
<p>Although I know that this is the value that a1 should have,
I do not know how to interpret it. What is the t-3 th value
of x0?
The theano documentation
doesn't seem to be all that detailed about the taps argument...
so hopefully one of you guys will be.</p>
<p>Thx</p>
|
[
{
"AnswerId": "33098347",
"CreationDate": "2015-10-13T08:59:52.313",
"ParentId": null,
"OwnerUserId": "642776",
"Title": null,
"Body": "<p>To better understand the use of <code>taps</code>, you should first understand how <code>scan</code> uses the <code>outputs_info</code> argument altogether and how the provided values for it (<code>initial</code> to be exact) change the nature of the result.</p>\n\n<p><code>scan</code> expects you to provide the type of output you expect from this operation (unless of course you dont have any initial values to provide and simply mention <code>None</code>, in which case it will start the first round {<code>step</code>} and the output is not passed as a parameter to the <code>fn</code> in the successive rounds).</p>\n\n<p>So <code>scan</code> is used for iterative reduction over the provided <code>sequences</code>. This means that at <code>step</code> <em>n</em> (and with no <code>taps</code> specified for either <code>sequences</code> or <code>outputs_info</code>), the given <code>fn</code> will be applied to the <em>n</em>th elements of each of the <code>sequences</code> along with the output(s) generated by the previous (<em>n-1</em> th) <code>step</code>. Hence the default value of <code>taps</code> for <code>sequences</code> is <code>0</code> and for <code>outputs_info</code> is <code>-1</code>.</p>\n\n<p>Another way to look at it would be to consider all the sequences to consist of slices across their respective first dimension. So for a particular step, the current slice(s) of the <code>sequence(s)</code> and the output slice of the previous step are passed to <code>fn</code> and the computed output is added to the results as a new slice which would then be used for the next <code>step</code>. It is obvious that each of the output slices would be of the same shape. And if you are providing an initial slice as part of <code>outputs_info</code> then it should also have the same shape as that produced by the application of <code>fn</code>. In your example, if <code>output_info=[dict(initial=x0)]</code>, it would take <code>[2, 3]</code> as the first slice and use it for the first <code>step</code> as the argument <code>a1</code> to <code>addf</code>.</p>\n\n<p>But quite often in signal processing (and elsewhere) you need more than just the last data points in time as causal information. Here I have used time just as a way to represent <code>steps</code>. Anyway, this is where <code>taps</code> is useful and helps in indicating exactly which data points from the <code>sequences</code> and <code>results</code> have to be used for the current <code>step</code>. In your example, this means that for the current <code>step</code> the 3rd last output should be passed to <code>fn</code>.</p>\n\n<p>And this is where you need to be careful in describing <code>initial</code> for <code>outputs_info</code>. Because scan will first split the <code>initial</code> value into slices along the fist dimension. <strong>Then the first slice among this set of slices would be considered the earliest slice</strong> (3rd last in your example) <strong>required to compute the output of the first <code>step</code></strong>.</p>\n\n<p>Lets assume in your example, <code>taps=[-2]</code> and <code>input = [2, 3]</code>. In this case, scan will split the input into slices and use the first slice (the value 2 here) as the argument <code>a1</code> to <code>addf</code>. The resulting value 4 would be added to the output and for the next step, the slices would include [2, 3, 4] of which the value 3 is on the second last (-2) tap. And so on. 
However, with <code>taps=[-3]</code> and the same <code>input</code>, there is one value missing which is like saying that you had collected the values at times (t-3) and (t-2) but didnt collect the value at (t-1).</p>\n\n<p>So if you reckon your output to be of a certain shape, and you require multiple taps of the output beyond -1, then the value of <code>initial</code> should be a list of elements of the required output shape <strong>and</strong> have <strong>exactly</strong> as many such elements as would be required to retrieve the earliest slice.</p>\n\n<p>TLDR:\nIn your example, if you want to get 2d vectors as the result of each <code>step</code> and are using <code>taps=[-3]</code>, then <code>input</code> should be a list of 3 such 2d vectors. If you want to get single valued results, then <code>input</code> should be a list with 3 integers. A list with 2 integers does not make sense in this context at all. It would only make sense if <code>taps</code> is either -2 or -1 or <code>[-2, -1]</code>.</p>\n"
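<p>A runnable sketch of the TLDR, reusing the question's <code>addf</code> (the 6-step count is an arbitrary choice for illustration):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

def addf(a1):
    # a1 is the output produced 3 steps ago (a scalar slice here)
    return a1 + 2

x0 = T.fvector('x0')  # must hold 3 values: the outputs at t-3, t-2, t-1
outputs, _ = theano.scan(addf,
                         outputs_info=[dict(initial=x0, taps=[-3])],
                         n_steps=6)
f = theano.function([x0], outputs)
print(f(np.asarray([2, 3, 4], dtype=np.float32)))
# steps 1-3 consume the initial slices, step 4 reuses the step-1 result:
# [ 4.  5.  6.  6.  7.  8.]
</code></pre>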
}
] |
29,995,397 | 2 |
<python><c++><windows><anaconda><theano>
|
2015-05-01T21:27:34.567
| 30,019,264 | 4,380,945 |
compilation failure when running theano - windows 8.1 64 bit with Anaconda python distribution
|
<p>I am running lasagne/nolearn, which uses theano.</p>
<p>It has been particularly difficult to install and compile theano. The following compilation error happens after installing a 64 bit g++ compiler.</p>
<p>Help is much appreciated. Thanks!</p>
<p>Problem occurred during compilation with the command line below:</p>
<pre></pre>
|
[
{
"AnswerId": "35199686",
"CreationDate": "2016-02-04T11:21:50.617",
"ParentId": null,
"OwnerUserId": "868972",
"Title": null,
"Body": "<p>For those using WinPython and mingw, here some additional information:</p>\n\n<p>1) Don't even bother trying the mingw32 package when using a 64bit Windows, immediately go for mingw64. That stops the above shown error from occuring</p>\n\n<p>2) For WinPython, the .theanorc or .theanorc.txt file must not be in your home directory, but in the WinPython/settings directory!</p>\n\n<p>3) Make things easier by using linux style path separators such as, for the g++ flag, cxx=d:/dev/mingw-w64/mingw64/bin/g++.exe</p>\n\n<p>4) nvcc needs the windows path to have included the cl.exe file, which can in general be found in you VS installation under something like (VSPATH)/VC/bin</p>\n\n<p>Regards,\nG.</p>\n"
},
{
"AnswerId": "30019264",
"CreationDate": "2015-05-03T20:52:36.507",
"ParentId": null,
"OwnerUserId": "4380945",
"Title": null,
"Body": "<p>I found what the problem was and would like to post the solution. This particular problem was caused because the file libpythonxx.a file was missing in the same directory where you find the pythonxx.dll file (in my case python27.dll and thus I created the libpython27.a file.</p>\n\n<p>A noble soul posted all steps necessary to install theano at <a href=\"http://rosinality.ncity.net/doku.php?id=python:installing_theano\" rel=\"noreferrer\">http://rosinality.ncity.net/doku.php?id=python:installing_theano</a> (in Korean and English). To generate such file, you copy the pythonxx.dll file to a temporary directory and type the following commands in the windows console:</p>\n\n<p>gendef pythonXX.dll</p>\n\n<p>dlltool --as-flags=--64 -m i386:x86-64 -k --output-lib libpythonXX.a --input-def pythonXX.def</p>\n\n<p>Then you paste the generated libpythonxx.a file in the same directory as the pythonxx.dll file.</p>\n\n<p>In windows, you usually find this file under C:\\Windows\\System3 but if you are using anaconda as I am, you will find it under?</p>\n\n<p>C:\\Users\\xxxxx\\Anaconda\\libs, xxxxx being your user.</p>\n"
}
] |
30,023,117 | 2 |
<neural-network><torch>
|
2015-05-04T05:25:07.223
| 30,036,454 | 3,468,673 |
torch7 : print matrix in text file with lines longer than 80 characters
|
<p>I'm trying to retrieve the parameters of this NN:</p>
<pre></pre>
<p>using this code:</p>
<pre></pre>
<p>When saving the output .lua file into a text file using this command line:</p>
<pre></pre>
<p>I get the whole weight matrix wrapped into segments of six columns each (Columns 1 to 6 ... Columns 193 to 198 ... Columns 199 to 200).</p>
<p>Is there any way to prevent the text from being wrapped and display the weight matrix in only one block?</p>
<p>Thank you.</p>
|
[
{
"AnswerId": "30025344",
"CreationDate": "2015-05-04T08:02:22.897",
"ParentId": null,
"OwnerUserId": "2929337",
"Title": null,
"Body": "<p>I think what you actually want to do is to save the parameters so that you can load them back in later? In that case, look at this:</p>\n\n<p><a href=\"https://github.com/torch/torch7/blob/master/doc/serialization.md\" rel=\"nofollow\">https://github.com/torch/torch7/blob/master/doc/serialization.md</a></p>\n"
},
{
"AnswerId": "30036454",
"CreationDate": "2015-05-04T17:42:30.233",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<pre><code>printT = function(t)\n t = t:view(-1)\n for i=1,t:nElement() do\n io.write(t[i] .. ',')\n end\nend\n\nprintT(mlp:get(1).weight)\nprintT(mlp:get(1).bias)\n\nprintT(mlp:get(3).weight)\nprintT(mlp:get(3).bias)\n\nprintT(mlp:get(5).weight)\nprintT(mlp:get(5).bias)\n</code></pre>\n"
}
] |
30,033,096 | 2 |
<machine-learning><neural-network><deep-learning><caffe><gradient-descent>
|
2015-05-04T14:47:04.150
| 33,711,600 | 562,769 |
What is `lr_policy` in Caffe?
|
<p>I'm just trying to find out how I can use <a href="http://caffe.berkeleyvision.org/">Caffe</a>. To do so, I took a look at the different files in the examples folder. There is one option I don't understand:</p>
<pre></pre>
<p>Possible values seem to be:</p>
<ul>
<li></li>
<li></li>
<li></li>
<li></li>
<li></li>
<li> </li>
</ul>
<p>Could somebody please explain those options?</p>
|
[
{
"AnswerId": "30045244",
"CreationDate": "2015-05-05T05:55:11.887",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>It is a common practice to decrease the learning rate (lr) as the optimization/learning process progresses. However, it is not clear how exactly the learning rate should be decreased as a function of the iteration number.</p>\n\n<p>If you use <a href=\"https://github.com/NVIDIA/DIGITS\" rel=\"noreferrer\">DIGITS</a> as an interface to Caffe, you will be able to visually see how the different choices affect the learning rate.</p>\n\n<p><strong>fixed:</strong> the learning rate is kept fixed throughout the learning process.</p>\n\n<hr>\n\n<p><strong>inv:</strong> the learning rate is decaying as ~<code>1/T</code><br>\n<img src=\"https://i.stack.imgur.com/LScLY.png\" alt=\"enter image description here\"></p>\n\n<hr>\n\n<p><strong>step:</strong> the learning rate is piecewise constant, dropping every X iterations<br>\n<img src=\"https://i.stack.imgur.com/W5h6j.png\" alt=\"enter image description here\"> </p>\n\n<hr>\n\n<p><strong>multistep:</strong> piecewise constant at arbitrary intervals<br>\n<img src=\"https://i.stack.imgur.com/DW0qa.png\" alt=\"enter image description here\"></p>\n\n<hr>\n\n<p>You can see exactly how the learning rate is computed in the function <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/solvers/sgd_solver.cpp#L27\" rel=\"noreferrer\"><code>SGDSolver<Dtype>::GetLearningRate</code></a> (<em>solvers/sgd_solver.cpp</em> line ~30).</p>\n\n<hr>\n\n<p>Recently, I came across an interesting and unconventional approach to learning-rate tuning: <a href=\"http://arxiv.org/abs/1506.01186\" rel=\"noreferrer\">Leslie N. Smith's work \"No More Pesky Learning Rate Guessing Games\"</a>. In his report, Leslie suggests to use <code>lr_policy</code> that alternates between decreasing and <em>increasing</em> the learning rate. His work also suggests how to implement this policy in Caffe.</p>\n"
},
{
"AnswerId": "33711600",
"CreationDate": "2015-11-14T18:04:27.833",
"ParentId": null,
"OwnerUserId": "2211907",
"Title": null,
"Body": "<p>If you look inside the <code>/caffe-master/src/caffe/proto/caffe.proto</code> file (you can find it online <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto#L157-L172\">here</a>) you will see the following descriptions:</p>\n\n<pre><code>// The learning rate decay policy. The currently implemented learning rate\n// policies are as follows:\n// - fixed: always return base_lr.\n// - step: return base_lr * gamma ^ (floor(iter / step))\n// - exp: return base_lr * gamma ^ iter\n// - inv: return base_lr * (1 + gamma * iter) ^ (- power)\n// - multistep: similar to step but it allows non uniform steps defined by\n// stepvalue\n// - poly: the effective learning rate follows a polynomial decay, to be\n// zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power)\n// - sigmoid: the effective learning rate follows a sigmod decay\n// return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize))))\n//\n// where base_lr, max_iter, gamma, step, stepvalue and power are defined\n// in the solver parameter protocol buffer, and iter is the current iteration.\n</code></pre>\n"
}
] |
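<p>As a quick illustration of the policies described in both answers, here is a hypothetical Python re-implementation of the formulas quoted from <code>caffe.proto</code> above (Caffe's actual implementation is the C++ <code>SGDSolver<Dtype>::GetLearningRate</code> function referenced in the first answer):</p>

<pre><code>import math

def get_learning_rate(policy, it, base_lr, gamma=None, power=None,
                      stepsize=None, max_iter=None):
    # 'it' is the current iteration number
    # ('multistep' is omitted: it needs the list of stepvalue boundaries)
    if policy == 'fixed':
        return base_lr
    if policy == 'step':
        return base_lr * gamma ** (it // stepsize)
    if policy == 'exp':
        return base_lr * gamma ** it
    if policy == 'inv':
        return base_lr * (1 + gamma * it) ** (-power)
    if policy == 'poly':
        return base_lr * (1 - float(it) / max_iter) ** power
    if policy == 'sigmoid':
        return base_lr * (1.0 / (1 + math.exp(-gamma * (it - stepsize))))
    raise ValueError('unknown lr_policy: %s' % policy)

# e.g. 'step': the rate drops by a factor of 10 every 10000 iterations
print(get_learning_rate('step', 25000, base_lr=0.01, gamma=0.1, stepsize=10000))
# -> 0.0001
</code></pre>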
30,034,492 | 1 |
<python><python-3.x><theano>
|
2015-05-04T15:58:06.107
| 30,177,516 | 1,311,255 |
does nolearn/lasagne support python 3
|
<p>I am working with the neural net implementation in nolearn/lasagne as mentioned <a href="http://nbviewer.ipython.org/github/ottogroup/kaggle/blob/master/Otto_Group_Competition.ipynb">here</a>.<br>
However I get the following error:<br>
<br>
I figured out that the problem is in Python 3.<br>
Does nolearn/lasagne support Python 3? If not, is there any workaround?</p>
|
[
{
"AnswerId": "30177516",
"CreationDate": "2015-05-11T20:57:51.337",
"ParentId": null,
"OwnerUserId": "576676",
"Title": null,
"Body": "<p>You seem to be using an older version of nolearn. Try the current master from Github with these commands:</p>\n\n<p><code>pip uninstall nolearn\npip install https://github.com/dnouri/nolearn/archive/master.zip#egg=nolearn\n</code></p>\n\n<p>Here's the tests in master running with both Python 2.7 and 3.4: <a href=\"https://travis-ci.org/dnouri/nolearn/builds/61806852\" rel=\"noreferrer\">https://travis-ci.org/dnouri/nolearn/builds/61806852</a></p>\n"
}
] |
30,035,581 | 2 |
<macos><caffe>
|
2015-05-04T16:55:07.570
| 30,051,717 | 1,348,187 |
Installing Caffe (deep learning) issues
|
<p>I have been able to install Caffe but I had a lot of issues and that's because I didn't follow the instructions very well.</p>
<p>I have a Mac OSx and I'm reading the OSx guide for installation.</p>
<p>In this point:</p>
<p><img src="https://i.stack.imgur.com/07Akr.png" alt="enter image description here"></p>
<p>when I type <code>hdf5</code> I get: </p>
<blockquote>
<p>"hdf5: command not found"</p>
</blockquote>
<p>I've tried to install it in several different ways, but I'm still getting:</p>
<blockquote>
<p>"hdf5: command not found"</p>
</blockquote>
<p>Does anyone have any clue? </p>
<p>Thank you very much.</p>
<hr>
<p>According to the answer of @mattias, the binaries in my install directory are:</p>
<p><img src="https://i.stack.imgur.com/YVQEk.png" alt="enter image description here"></p>
|
[
{
"AnswerId": "30036375",
"CreationDate": "2015-05-04T17:38:04.387",
"ParentId": null,
"OwnerUserId": "1197616",
"Title": null,
"Body": "<p>You can install hdf5 from source. I just tested on OS X 10.9.5.</p>\n\n<pre><code>wget http://www.hdfgroup.org/ftp/HDF5/current/src/hdf5-1.8.14.tar\n</code></pre>\n\n<p>Unpack,</p>\n\n<pre><code>tar zxfv hdf5-1.8.14.tar\n</code></pre>\n\n<p>Enter directory</p>\n\n<pre><code>cd hdf5-1.8.14\n</code></pre>\n\n<p>And then,</p>\n\n<pre><code>./configure --prefix=/usr/local/hdf5 # or where you want it\nmake\nsudo make install\n</code></pre>\n\n<p>Then you have it installed in /usr/local/hdf5.</p>\n\n<p>Good luck!</p>\n"
},
{
"AnswerId": "30051717",
"CreationDate": "2015-05-05T11:26:51.103",
"ParentId": null,
"OwnerUserId": "1348187",
"Title": null,
"Body": "<p><code>hdf5</code> is not a command or anything else. The documentation is just bad, it has to be: </p>\n\n<p><code>brew tap homebrew/science hdf5 opencv</code></p>\n\n<hr>\n\n<p>So, what I mean is, we have to install <code>hdf5</code> and then link it to Caffe. But executing hdf5 is not what the guide meant.</p>\n"
}
] |
30,045,306 | 2 |
<neural-network><torch>
|
2015-05-05T06:00:09.663
| 30,053,109 | 3,468,673 |
torch7 : how to connect the neurons of the same layer?
|
<p>Is it possible to implement, using torch, an architecture that connects the neurons of the same layer?</p>
|
[
{
"AnswerId": "30053109",
"CreationDate": "2015-05-05T12:35:15.047",
"ParentId": null,
"OwnerUserId": "2929337",
"Title": null,
"Body": "<p>What you describe is called a recurrent neural network. Note that it needs quite different type of structure, input data, and training algorithms to work well.</p>\n\n<p>There is the <a href=\"https://github.com/Element-Research/rnn\" rel=\"nofollow\"><strong>rnn</strong></a> library for Torch to work with recurrent neural networks.</p>\n"
},
{
"AnswerId": "30050671",
"CreationDate": "2015-05-05T10:37:46.820",
"ParentId": null,
"OwnerUserId": "677824",
"Title": null,
"Body": "<p>Yes, it's possible. Torch has everything that other languages have: logical operations, reading/writing operations, array operations. That's all what needed for implementing any kind of neural network. If to take into account that torch has usage of CUDA you can even implement neural network which can work faster then some C# or java implementations. Performance improvement can depend from number of if/else during one iteration</p>\n"
}
] |
30,055,969 | 1 |
<python><theano><pymc3>
|
2015-05-05T14:36:27.167
| null | 4,866,786 |
How to handle shape of pymc3 Deterministic variables
|
<p>I've been working on getting a hierarchical model of some psychophysical behavioral data up and running in pymc3. I'm incredibly impressed with things overall, but after trying to get up to speed with Theano and pymc3 I have a model that mostly works, however has a couple problems. </p>
<p>The code is built to fit a parameterized version of a Weibull to seven sets of data. Each trial is modeled as a binary Bernoulli outcome, while the thresholds act as the y values which are used to fit a Gaussian function for height, width, and elevation (a, c, and d on a typical Gaussian). </p>
<p>Using the parameterized Weibull seems to be working nicely, and it is now hierarchical for the slope of the Weibull while the thresholds are fit separately for each chunk of data. However, the output I'm getting from k and y_est leads me to believe they may not be the correct size, and unlike the probability distributions, it doesn't look like I can specify shape (unless there's a theano way to do this that I haven't found, though from what I've read specifying shape in theano is tricky). </p>
<p>Ultimately, I'd like to use y_est to estimate the gaussian height or width; however, the output right now results in an incredible mess that I think originates with size problems in y_est and k. Any help would be fantastic - the code below should simulate some data and is followed by the model. The model does a nice job fitting each individual threshold and getting the slopes, but falls apart when dealing with the rest. </p>
<p>Thanks for having a look - I'm super impressed with pymc3 so far!</p>
<p>EDIT: Okay, so the shape output by y_est.tag.test_value.shape looks like this</p>
<pre></pre>
<p>I think this is where I'm running into trouble, though it may just be poorly constructed on my part. k has the right shape (one k value per unique_xval). y_est is outputting an entire set of data (101x7) instead of a single estimate (one y_est per unique_xval) for each difficulty level. Is there some way to specify that y_est gets specific subsets of df_y_vals to control this?</p>
<pre></pre>
|
[
{
"AnswerId": "30130650",
"CreationDate": "2015-05-08T18:35:08.493",
"ParentId": null,
"OwnerUserId": "359944",
"Title": null,
"Body": "<p>I'm not sure exactly what you want to do when you say \"Is there some way to specify that y_est get specific subsets of df_y_vals to control this\". Can you describe for each y_est value what values of df_y_vals are you supposed to use? What's the shape of df_y_vals? What's the shape of y_est supposed to be? (7,)? </p>\n\n<p>I suspect what you want is to index into df_y_vals using <a href=\"http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html\" rel=\"nofollow\">numpy advanced indexing</a>, which works the same in PyMC as in numpy. Its hard to say exactly without more information.</p>\n"
}
] |
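<p>To make the advanced-indexing suggestion above concrete, here is a minimal numpy sketch, assuming (as the question's EDIT suggests) an array shaped like the (101, 7) output of y_est and one wanted row index per difficulty level; the index values themselves are made up:</p>

<pre><code>import numpy as np

df_y_vals = np.arange(101 * 7).reshape(101, 7)   # same shape as the y_est output above
rows = np.array([3, 10, 42, 50, 77, 90, 100])    # hypothetical row index per column

# advanced indexing pairs each row index with a column index -> shape (7,)
picked = df_y_vals[rows, np.arange(7)]
print(picked.shape)  # (7,)
</code></pre>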
30,056,139 | 1 |
<c><lua><neural-network><luajit><torch>
|
2015-05-05T14:43:03.273
| 30,058,467 | 4,866,811 |
How to read Torch Tensor from C
|
<p>I have to train a convolutional neural network using the Torch framework and then write the same network in C.
To do so, I have to somehow read the learned parameters of the net from my C program, but I can't find a way to convert the Torch Tensors or write them to a file so as to make them readable in C.
Ideally, I want to convert the Tensors into arrays of double in C. </p>
<p>Does anyone know how to do that? Thanks in advance :)</p>
|
[
{
"AnswerId": "30058467",
"CreationDate": "2015-05-05T16:29:50.343",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<blockquote>\n <p>I can't find a way to convert or write to a file the Torch Tensors to make them readable in C. Ideally, I want to convert the Tensors into arrays of double in C.</p>\n</blockquote>\n\n<p>The most basic (and direct) way is to directly <code>fread</code> in C the data you have previously written into a binary file. In such a case you would typically concatenate the weights and biases (if any) for each layer.</p>\n\n<p>On the Lua/Torch side you can use the <a href=\"https://github.com/torch/torch7/blob/master/doc/file.md\" rel=\"noreferrer\">File</a> utilities to literally <code>fwrite</code> each tensor data. For example here is a basic function that does that:</p>\n\n<pre><code>local fwrite = function(tensor, file)\n if not tensor then return false end\n local n = tensor:nElement()\n local s = tensor:storage()\n return assert(file:writeDouble(s) == n)\nend\n</code></pre>\n\n<p>For example if <code>m</code> refers to a <code>torch/nn</code> module containing weights you would use it as follow:</p>\n\n<pre><code>local file = torch.DiskFile(\"net.bin\", \"w\"):binary()\nfwrite(m.weight, file)\nfwrite(m.bias, file)\n</code></pre>\n\n<p>Of course you need to write your own logic to make sure you <code>fwrite</code> and concatenate all the weights from all your layers. On the C side, in addition to <code>net.bin</code>, you also need to know the structure of your network (nb. layers, parameters like kernel size, etc) to know how many block of <code>double</code>-s to <code>fread</code>.</p>\n\n<p>As an example (in Lua) you can have a look at <a href=\"https://github.com/jhjin/overfeat-torch\" rel=\"noreferrer\">overfeat-torch</a> (non official project) that illustrates how to read such a plain binary file: see the <a href=\"https://github.com/jhjin/overfeat-torch/blob/09e10a6818ee8e079d923c63a9afc55d86ac4515/run.lua#L83-L100\" rel=\"noreferrer\">ParamBank</a> tool.</p>\n\n<p>Keep in mind that a robust solution would consist in using a proper binary serialization format like <a href=\"http://msgpack.org/\" rel=\"noreferrer\">msgpack</a> or <a href=\"https://developers.google.com/protocol-buffers/\" rel=\"noreferrer\">Protocol Buffers</a> that would make this export/import process clean and portable.</p>\n\n<p>--</p>\n\n<p>Here is a toy example:</p>\n\n<pre><code>-- EXPORT\nrequire 'nn'\n\nlocal fwrite = function(tensor, file)\n if not tensor then return false end\n local n = tensor:nElement()\n local s = tensor:storage()\n return assert(file:writeDouble(s) == n)\nend\n\nlocal m = nn.Linear(2, 2)\n\nprint(m.weight)\nprint(m.bias)\n\nlocal file = torch.DiskFile(\"net.bin\", \"w\"):binary()\nfwrite(m.weight, file)\nfwrite(m.bias, file)\n</code></pre>\n\n<p>Then in C:</p>\n\n<pre><code>/* IMPORT */\n#include <stdio.h>\n#include <stdlib.h>\n#include <assert.h>\n\nint\nmain(void)\n{\n const int N = 2; /* nb. neurons */\n\n double *w = malloc(N*N*sizeof(*w)); /* weights */\n double *b = malloc(N*sizeof(*w)); /* biases */\n\n FILE *f = fopen(\"net.bin\", \"rb\");\n assert(fread(w, sizeof(*w), N*N, f) == N*N);\n assert(fread(b, sizeof(*w), N, f) == N);\n fclose(f);\n\n int i, j;\n for (i = 0; i < N; i++)\n for (j = 0; j < N; j++)\n printf(\"w[%d,%d] = %f\\n\", i, j, w[N*i+j]);\n\n for (i = 0; i < N; i++)\n printf(\"b[%d] = %f\\n\", i, b[i]);\n\n free(w);\n free(b);\n\n return 0;\n}\n</code></pre>\n"
}
] |
30,074,670 | 1 |
<python><matlab><numpy><matplotlib><theano>
|
2015-05-06T10:53:28.913
| 30,074,835 | 4,870,118 |
Why does matplotlib imshow() display a transposed image?
|
<p>I have a matrix of images created in Matlab which I will be using as input to a convolutional neural network I am coding in Theano. I've imported the matrix into Python, and on inspection it appears identical to the one created in Matlab. When displaying the images in Matlab they appear correctly; however, when using matplotlib the images are transposed. Does anyone know the cause of this?</p>
<p>Code for matlab:</p>
<pre></pre>
<p>Code for matplotlib:</p>
<pre></pre>
<p>I would post the images but I'm new to stackoverflow and I don't have enough reputation yet.</p>
<p>Cheers,</p>
|
[
{
"AnswerId": "30074835",
"CreationDate": "2015-05-06T11:01:13.367",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>Matlab and python has a different way to store arrays in memory. Matlab saves an array column-first, while python uses row-first method.<br>\nConsider, for example, a 2-by-2 matrix </p>\n\n<pre><code>M = [1, 2\n 3, 4]\n</code></pre>\n\n<p>In memory, matlab save the matrix as <code>[1 3 2 4]</code> while python's order is <code>[1 2 3 4]</code>. This effect causes your image to be transposed.</p>\n\n<p>Consider transposing the images in Matlab prior to saving them - this way the data is stored in memory in the same order as in python.</p>\n"
}
] |
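<p>A minimal sketch of the suggested workaround on the Python side, transposing before display instead of (or in addition to) transposing in Matlab before saving; the file and variable names are assumptions:</p>

<pre><code>import scipy.io
import matplotlib.pyplot as plt

mat = scipy.io.loadmat('images.mat')   # hypothetical .mat file
img = mat['img']                       # hypothetical 2-D image variable

plt.imshow(img.T, cmap='gray')         # .T undoes the row/column swap
plt.show()
</code></pre>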
30,085,991 | 1 |
<python><theano>
|
2015-05-06T19:36:51.620
| null | 887,074 |
How do I print the value of a theano variable whenever it is evaluated?
|
<p>I'm using lasagne and theano to build a convolutional neural network, and I'm having issues trying to follow the print-debugging examples in <a href="http://deeplearning.net/software/theano/tutorial/debug_faq.html#how-do-i-step-through-a-compiled-function" rel="nofollow">http://deeplearning.net/software/theano/tutorial/debug_faq.html#how-do-i-step-through-a-compiled-function</a></p>
<p>My function looks like this with G and Y being theano tensors</p>
<pre></pre>
<p>so the output ave_loss should be a symbolic expression that when compiled and executed with input data will result in computing the average loss over a batch of training images. </p>
<p>What I want to do is put a symbolic print expression in here so that whenever the ave_loss is computed it prints the contents of G. </p>
<p>But right now I'm stuck just trying to get something to print before and after </p>
<pre></pre>
<p>The above code does not work, and I'm not really sure how to manipulate theano.function to make it work. </p>
<p>What I'm attempting to do is create an identity function that accepts G and returns G without modifying it, but prints pre_func and post_func along the way. </p>
<p>How do you use theano.function (or theano.printing.Print) to accomplish this?</p>
|
[
{
"AnswerId": "30157136",
"CreationDate": "2015-05-10T22:04:49.927",
"ParentId": null,
"OwnerUserId": "3979919",
"Title": null,
"Body": "<p>Unfortunately I can't help you with the printing approach since\nI have never used the print myself. But.. wouldn't it\nbe possible to return G together with the ave_loss.\nThen you can look at the contents...</p>\n\n<p>something like: </p>\n\n<pre><code>def loss_function(self, G, Y_):\n ...\n G = dbgfunc()\n ...\n return ave_loss, G\n\n\nG = T.matrix('G') \nY_ = T.matrix('Y')\n\nave_loss, G_prime = loss_function(G, Y_)\n\nf = function([G, _Y], [ave_loss, G_prime])\n\nprint( f(...) )\n</code></pre>\n\n<p>EDIT:</p>\n\n<p>I just saw that the contents of <strong>G</strong> don't seem to be changing.\nWhy do you want to print it anyways. Since the print also prevents\nTheano from some optimisations if I remember correctly.</p>\n"
}
] |
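<p>For the record, <code>theano.printing.Print</code> can do exactly what the question describes: it acts as an identity op that prints its input every time the compiled graph evaluates it. A minimal sketch, with the real loss expression replaced by a stand-in sum:</p>

<pre><code>import theano
import theano.tensor as T
from theano.printing import Print

G = T.matrix('G')
G_printed = Print('G contents:')(G)   # identity with a printing side effect
ave_loss = G_printed.sum()            # stand-in for the real loss expression

f = theano.function([G], ave_loss)
f([[1.0, 2.0], [3.0, 4.0]])           # prints "G contents: ..." and returns 10.0
</code></pre>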
30,099,876 | 1 |
<python><theano><theano-cuda>
|
2015-05-07T11:34:07.420
| null | 3,155,703 |
theano installation (windows 64bit python 2)
|
<p>I started an installation of theano.</p>
<p>My computer specs:
- OS : Windows 7 64bit
- Graphics card : NVIDIA GeForce GT 630
- CPU : AMD FX-8120</p>
<p>I installed theano following the installation guide on deeplearning.net.
(<a href="http://deeplearning.net/software/theano/install_windows.html#configure-theano-for-gpu-use" rel="nofollow">http://deeplearning.net/software/theano/install_windows.html#configure-theano-for-gpu-use</a>)</p>
<p>I successfully finished the installation process below.</p>
<p>Visual Studio 2010 -> Windows Software Development Kit version 7.1 -> CUDA -> Microsoft Visual C++ Complier for Python 2.7 (adding header) -> TDM GCC -> WinPython-64bit-2.7.9.4 -> env.bat -> Theano setup</p>
<p>When I create a test file (below) and test it, it executes successfully.</p>
<p>-------test file----------------------------------</p>
<pre></pre>
<hr>
<p><strong>BUT when I add .theanorc.txt</strong>
---.theanorc.txt--------------------------------</p>
<pre></pre>
<hr>
<p>It gives me an error like this (below):</p>
<p><a href="https://www.dropbox.com/s/gjspcpaz4hkeep8/11.PNG?dl=0" rel="nofollow">https://www.dropbox.com/s/gjspcpaz4hkeep8/11.PNG?dl=0</a></p>
<p>I have no problem with CUDA-devicequery & nvidia-smi.exe</p>
<p>-------------------------DeviceQuery--------------------------</p>
<pre></pre>
<p>-------------------------nvidia-smi.exe--------------------------</p>
<pre></pre>
<p>Please help me..</p>
|
[
{
"AnswerId": "30116189",
"CreationDate": "2015-05-08T05:06:17.800",
"ParentId": null,
"OwnerUserId": "3140336",
"Title": null,
"Body": "<p>You should use a \".theanorc\" file, not a \".theanorc.txt\".</p>\n\n<p>You may also write your question on the theano-users group: <a href=\"https://groups.google.com/forum/#!forum/theano-users\" rel=\"nofollow\">https://groups.google.com/forum/#!forum/theano-users</a></p>\n\n<p>Next Winpython release will include theano+mingwpy, so it may the reduce complexity of setup.</p>\n"
}
] |
30,127,415 | 1 |
<cuda><caffe>
|
2015-05-08T15:24:08.933
| null | 4,879,546 |
Caffe cuDNN R1 compile error
|
<p>I'm trying to compile Caffe with cuDNN-6.5-R1 enabled on Ubuntu 14.04. CUDA version is 7.0. </p>
<p>I copied the cudnn.h header file in /usr/local/cuda/include and the cudnn libraries in /usr/local/cuda/lib64, then "sudo ldconfig", and when I do "make all", I get the following error:</p>
<pre></pre>
<p>I checked other posts of people who had the same problem as me, but they were all using R2, not R1.</p>
|
[
{
"AnswerId": "30139904",
"CreationDate": "2015-05-09T12:27:32.957",
"ParentId": null,
"OwnerUserId": "562440",
"Title": null,
"Body": "<p>For detailed instructions on how to install CUDA, cuDNN and Caffe take a look at \"<a href=\"http://www.joyofdata.de/blog/gpu-powered-deeplearning-with-nvidia-digits/\" rel=\"nofollow\">GPU Powered DeepLearning with NVIDIA DIGITS on EC2</a>\".</p>\n\n<p>The reason why \"they were all using R2\" BTW is b/c that is simply a requirement.</p>\n"
}
] |
30,132,036 | 1 |
<python><numpy><scipy><theano><deep-learning>
|
2015-05-08T20:05:58.653
| 30,508,530 | 588,373 |
Log Determinant in Theano Loss Function
|
<p>I'm using Theano (python package for deep learning), but I'm very new to it and I'm running into an issue with a term in my loss function. The term involves taking the logarithm of the determinant of a matrix; the matrix is a function of a layer of hidden units in my network.
I import Tensor, and Tensor.nlinalg:</p>
<pre></pre>
<p>and then stick this term in my loss function:</p>
<pre></pre>
<p>but when I attempt to train it I get the following exception and traceback:</p>
<pre></pre>
<p>Can anyone offer any advice?
Cheers,
Mike</p>
|
[
{
"AnswerId": "30508530",
"CreationDate": "2015-05-28T13:50:34.437",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p><code>theano.tensor.nlinalg.Det</code> in the <a href=\"http://deeplearning.net/software/theano/library/tensor/nlinalg.html\" rel=\"nofollow\">linear algebra package</a> is an operation class, not an operation function. You need to first init an instance of the class and then apply it to the node representing your matrix. For example,</p>\n\n<pre><code>import numpy\n\nimport theano\nimport theano.tensor.nlinalg\n\nx = theano.tensor.matrix('x', dtype=theano.config.floatX)\np = theano.shared(numpy.array([[2, 0], [0, 3]], dtype=theano.config.floatX))\ny = theano.dot(x, p)\nc = theano.tensor.log(theano.tensor.nlinalg.Det()(y))\ng = theano.grad(c, x)\n\nprint theano.printing.pp(g)\n</code></pre>\n\n<p>Note the difference between <code>theano.tensor.nlinalg.Det()(y)</code> and <code>theano.tensor.nlinalg.Det(y)</code>.</p>\n"
}
] |
30,135,101 | 2 |
<machine-learning><matrix-multiplication><theano><matrix-inverse><caffe>
|
2015-05-09T01:46:44.290
| 31,756,649 | 3,733,814 |
Whether to use Caffe or Theano for Moore-Penrose Pseudo inverse?
|
<p>I need to use (in an application) an Extreme Learning Machine (ELM), which is highly optimised for multiple CPUs or GPUs. As ELM's main computation involves the Moore-Penrose pseudoinverse and matrix multiplication, what would be the best option to implement ELM, Theano or Caffe?</p>
<p>Secondly, Is it possible to implement a new learning algorithm(ELM) in Caffe using its python interface ?</p>
|
[
{
"AnswerId": "30139872",
"CreationDate": "2015-05-09T12:24:53.663",
"ParentId": null,
"OwnerUserId": "562440",
"Title": null,
"Body": "<p>As far as Google is concerned Caffe won't help you with \"Extreme Learning Machines\".</p>\n\n<blockquote>\n <p>Secondly, Is it possible to implement a new learning algorithm(ELM) in Caffe using its python interface ?</p>\n</blockquote>\n\n<p>No, that is not possible. You will have to implement new layers and algorithms in C++. Afterwards you can deal with them via Python.</p>\n\n<p>For a primer on Caffe, check out <a href=\"http://www.joyofdata.de/blog/neural-networks-with-caffe-on-the-gpu/\" rel=\"nofollow\">\"Neural Nets with Caffe Utilizing the GPU\"</a>.</p>\n"
},
{
"AnswerId": "31756649",
"CreationDate": "2015-07-31T23:29:27.107",
"ParentId": null,
"OwnerUserId": "1157452",
"Title": null,
"Body": "<p>If I were you I would use Theano, not Caffe.\nCaffe is <strong>not</strong> programmed around a general-purpose matrix library so with Caffe you would be trying to use a screwdriver to open a beer basically. \nIf you definitively feel like using C++ look into MrShadow or any other GPU-based matrix libraries.</p>\n\n<p>... or simply use Theano with Python.</p>\n\n<p>I'm not a big fan of Python and Theano takes some time to master but it's extremely convenient. </p>\n\n<p>Also there are one or two ELM libraries for Python you can use as a reference, that's a huge plus when you need to test your own implementation.</p>\n\n<p><a href=\"https://github.com/dclambert/Python-ELM\" rel=\"nofollow\">https://github.com/dclambert/Python-ELM</a></p>\n\n<p><a href=\"https://github.com/acba/elm\" rel=\"nofollow\">https://github.com/acba/elm</a></p>\n\n<p>I haven't used them so I can't elaborate on their status but something is better than nothing.</p>\n\n<p>You can also take a look at Keras and Lasagne, both are neural network libraries built on top of Theano. Just like Caffe, they will not help much with ELMs but they will get you started with Theano+nnets. Then all you have to do is create your own ELM layers.</p>\n"
}
] |
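<p>As a rough illustration of the Theano route suggested above, here is a toy sketch of the ELM output-weight step built around the Moore-Penrose pseudoinverse; the shapes and the fixed random hidden layer are assumptions, not a full ELM implementation:</p>

<pre><code>import numpy as np
import theano
import theano.tensor as T
import theano.tensor.nlinalg

X_np = np.random.randn(100, 10)   # 100 samples, 10 features (assumed)
Y_np = np.random.randn(100, 1)
W_np = np.random.randn(10, 50)    # fixed random hidden-layer weights, 50 units

X = T.dmatrix('X')
Y = T.dmatrix('Y')
W = theano.shared(W_np, name='W')

H = T.nnet.sigmoid(T.dot(X, W))        # hidden-layer activations
beta = T.dot(T.nlinalg.pinv(H), Y)     # Moore-Penrose step: beta = pinv(H) Y

solve = theano.function([X, Y], beta)
print(solve(X_np, Y_np).shape)         # (50, 1)
</code></pre>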
30,143,308 | 4 |
<macos><lua><torch>
|
2015-05-09T18:00:03.727
| null | 3,698,971 |
Torch / Lua after installation is not working
|
<p>I followed the approach below in order to install Torch on my machine (Mac).</p>
<p><a href="http://torch.ch/docs/getting-started.html#_" rel="noreferrer">http://torch.ch/docs/getting-started.html#_</a></p>
<p>When I am done with the installation, I type:</p>
<p><code>th</code></p>
<p>or <code>luarocks install image</code></p>
<p>in order to load the Torch interpreter or to make updates to the Lua packages. It says "command not found". Do you have any idea how I can resolve this issue? </p>
|
[
{
"AnswerId": "32927986",
"CreationDate": "2015-10-03T22:08:01.333",
"ParentId": null,
"OwnerUserId": "2479672",
"Title": null,
"Body": "<p>Have you updated your PATH? It should include something like </p>\n\n<blockquote>\n <p>/home/user/torch/install/bin</p>\n</blockquote>\n"
},
{
"AnswerId": "33497300",
"CreationDate": "2015-11-03T10:54:06.520",
"ParentId": null,
"OwnerUserId": "3698971",
"Title": null,
"Body": "<p>I have resolved the issue. I have deleted torch and I have installed it again. I have updated my PATH, and I have ran the <code>$ luarocks install image</code> command. After all of these, I was able to ran <code>$ th</code> command and in general torch.</p>\n"
},
{
"AnswerId": "34711978",
"CreationDate": "2016-01-10T23:22:16.113",
"ParentId": null,
"OwnerUserId": "5771204",
"Title": null,
"Body": "<p>I faced the same issue and following this post deleted and reinstalled everything. However in the end what helped was adding /home/user/torch/install/bin/ to the PATH variable. </p>\n"
},
{
"AnswerId": "40624114",
"CreationDate": "2016-11-16T04:29:46.970",
"ParentId": null,
"OwnerUserId": "4422034",
"Title": null,
"Body": "<p>If you're on a <strong><em>Mac</em></strong> using the <strong><em>bash terminal</em></strong>, make sure that you've permanently added <code>/Users/you/torch/install/bin</code> to your <strong>PATH</strong>.</p>\n\n<p>To do this:</p>\n\n<ol>\n<li><p>Navigate in your terminal to the root directory by running the command:</p>\n\n<pre><code>$ cd\n</code></pre></li>\n<li><p>Using the text editor of your choice (emacs, vim, etc.) open the <strong>.bash_profile</strong> file for editing. For example:</p>\n\n<pre><code>$ emacs .bash_profile\n</code></pre></li>\n<li><p>Add the following line to the end of the file (replacing 'you' with your Mac username):</p>\n\n<pre><code>PATH=$PATH\\:/Users/you/torch/install/bin ; export PATH\n</code></pre></li>\n<li><p>Save and exit the text editor\n<br>\n<br></p></li>\n<li><p>Source the changes by running:</p>\n\n<pre><code>$ source .bash_profile\n</code></pre></li>\n<li><p>Check that your PATH has been updated (look for <code>/Users/you/torch/install/bin</code> in the string returned):</p>\n\n<pre><code>$ echo $PATH\n</code></pre></li>\n<li><p>To make sure it has been changed permanently, completely quit Terminal, open it and run <code>echo $PATH again</code></p></li>\n<li><p>Now try <code>th</code> and it should run Torch!</p></li>\n</ol>\n\n<p><br>\nFor more help on PATH:\n<a href=\"https://kb.iu.edu/d/acar\" rel=\"noreferrer\">https://kb.iu.edu/d/acar</a></p>\n\n<p>The Torch installation (at least for me) added the line <code>. /Users/jb/torch/install/bin/torch-activate</code> to my <strong>.profile</strong> file, not <strong>.bash_profile</strong>. I tried adding that exact line to .bash_profile but it didn't work, so based on the recommendations here I got rid of the trailing directory and such.</p>\n"
}
] |
30,170,508 | 1 |
<compilation><cuda><nvcc><caffe>
|
2015-05-11T14:35:27.553
| 30,186,905 | 3,532,255 |
caffe Debug build: stray '"' character in nvcc command
|
<p>I am trying to build my C++ application that uses caffe, in Debug mode, VS2013 Community, x64. To be able to build a version that does not need CUDA to run, I wrapped each .cu file as indicated below:</p>
<pre></pre>
<p>The project was built and ran fine in CPU_ONLY mode.
Undefining the CPU_ONLY flag, the project builds and runs OK in Release mode, but in Debug,
I am getting the following error when trying to compile the *.cu files:</p>
<pre></pre>
<p>where COMMAND is the nvcc compiler call below, split across lines for readability.</p>
<pre></pre>
<p>The project was able to build successfully in debug mode before adding the CPU_ONLY flags.
Any ideas?</p>
|
[
{
"AnswerId": "30186905",
"CreationDate": "2015-05-12T09:33:06.513",
"ParentId": null,
"OwnerUserId": "3532255",
"Title": null,
"Body": "<p>Turns out it was a typo.\nIn project properties->Debug->CUDA C/C++->Device, instead of\n <code>compute_30,sm_30</code></p>\n\n<p>I had</p>\n\n<pre><code>`compute_30, sm_30`\n</code></pre>\n\n<p>that is, with a space separator.</p>\n"
}
] |
30,173,144 | 1 |
<python><optimization><machine-learning><scipy><theano>
|
2015-05-11T16:40:51.863
| 30,173,316 | 1,544,186 |
SciPy Conjugate Gradient Optimisation not invoking callback method after each iteration
|
<p>I followed the tutorial <a href="http://deeplearning.net/tutorial/code/logistic_cg.py" rel="nofollow">here</a> in order to implement logistic regression using theano. The aforementioned tutorial uses SciPy's fmin_cg optimisation procedure. Among the important arguments to that function are: the objective/cost function to be minimised, a user-supplied initial guess of the parameters, a function which provides the derivative of the cost function at the current point, and an optional user-supplied callback function, called after each iteration.</p>
<p>The training function is defined as follows:</p>
<pre></pre>
<p>What the above code does is basically go through all the minibatches in the training dataset, for each minibatch calculate the average batch cost (i.e. the average of the cost function applied to each of the training samples in the minibatch) and average the cost over all the batches. It might be worth pointing out that the cost for each individual batch is calculated by a compiled theano function.</p>
<p>To me, it seems that the callback function is being called arbitrarily, and not after every iteration as the SciPy documentation claims.</p>
<p>Here is the output I received after modifying the training function and the callback by adding "train" and "callback" prints respectively.</p>
<pre></pre>
<p>My question is, since each call to train_fn is intended to be a training epoch, how do I change the behaviour so that the callback is invoked after every call to train_fn?</p>
|
[
{
"AnswerId": "30173316",
"CreationDate": "2015-05-11T16:50:45.197",
"ParentId": null,
"OwnerUserId": "577088",
"Title": null,
"Body": "<p>Each call to <code>train_fn</code> is <em>not</em> necessarily a single training epoch. I'm not exactly sure how <code>fmin_cg</code> is implemented, but in general, <a href=\"http://en.wikipedia.org/wiki/Conjugate_gradient_method\" rel=\"nofollow\">conjugate gradient methods</a> may call the cost or gradient function more than once per minimziation step. This is (as far as I understand it) required sometimes to find the conjugate vector relative to the previous step taken.<sup>1</sup></p>\n\n<p>So your callback is being called every time <code>fmin_cg</code> takes a <em>step</em>. If you need a function to be called every time the cost or gradient function is called, you can just put the call inside the relevant function.</p>\n\n<p><sup>\n1. Edit: At least when they are <em>nonlinear</em> methods, as <a href=\"http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.fmin_cg.html\" rel=\"nofollow\"><code>fmin_cg</code></a> is. The wikipedia page suggests that vanilla conjugate gradient (CG) methods may not require multiple calls, but I think they aren't as suitable for optimizing nonlinear functions. The CG code that I've seen -- which I guess must have been for nonlinear CG -- definitely involved at least one line search per step. That could certainly call for multiple evaluations of the gradient function.</sup></p>\n"
}
] |
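<p>A small self-contained sketch of the answer's suggestion - counting calls inside the cost and gradient functions rather than relying on the per-step callback - on a toy quadratic problem:</p>

<pre><code>import numpy as np
from scipy.optimize import fmin_cg

calls = {'f': 0, 'g': 0, 'callback': 0}

def f(x):                 # cost function
    calls['f'] += 1
    return (x ** 2).sum()

def g(x):                 # gradient of the cost
    calls['g'] += 1
    return 2 * x

def cb(xk):               # called once per minimization step
    calls['callback'] += 1

fmin_cg(f, np.ones(3), fprime=g, callback=cb, disp=False)
print(calls)  # typically more 'f'/'g' calls than 'callback' calls
</code></pre>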
30,184,994 | 3 |
<python><theano>
|
2015-05-12T07:58:11.140
| 30,510,063 | 1,367,788 |
How can I change device used of theano
|
<p>I tried to change the device used in a theano-based program. </p>
<pre></pre>
<p>However I got error</p>
<pre></pre>
<p>I wonder what the best way is to change gpu to gpu1 in code?</p>
<p>Thanks</p>
|
[
{
"AnswerId": "36279339",
"CreationDate": "2016-03-29T08:38:41.243",
"ParentId": null,
"OwnerUserId": "679886",
"Title": null,
"Body": "<p>Another possibility which worked for me was setting the environment variable in the process, before importing theano:</p>\n\n<pre><code>import os \nos.environ['THEANO_FLAGS'] = \"device=gpu1\" \nimport theano\n</code></pre>\n"
},
{
"AnswerId": "33579079",
"CreationDate": "2015-11-07T04:18:40.533",
"ParentId": null,
"OwnerUserId": "160698",
"Title": null,
"Body": "<p>Remove the \"device\" config in .theanorc, then in your code:</p>\n\n<pre><code>import theano.sandbox.cuda\ntheano.sandbox.cuda.use(\"gpu0\")\n</code></pre>\n\n<p>It works for me.</p>\n\n<p><a href=\"https://groups.google.com/forum/#!msg/theano-users/woPgxXCEMB4/l654PPpd5joJ\" rel=\"nofollow\">https://groups.google.com/forum/#!msg/theano-users/woPgxXCEMB4/l654PPpd5joJ</a></p>\n"
},
{
"AnswerId": "30510063",
"CreationDate": "2015-05-28T14:54:22.620",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>There is no way to change this value in code running in the same process. The best you could do is to have a \"parent\" process that alters, for example, the <code>THEANO_FLAGS</code> environment variable and spawns children. However, the method of spawning will determine which environment the children operate in.</p>\n\n<p>Note also that there is no way to do this in a way that maintains a process's memory through the change. You can't start running on CPU, do some work with values stored in memory then change to running on GPU and continue running using the values still in memory from the earlier (CPU) stage of work. The process must be shutdown and restarted for a change of device to be applied.</p>\n\n<p>As soon as you <code>import theano</code> the device is fixed and cannot be changed within the process that did the import.</p>\n"
}
] |
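<p>A minimal sketch of the parent-process approach described in the accepted answer: set <code>THEANO_FLAGS</code> in each child's environment before the child imports theano (<code>worker.py</code> is a hypothetical script that does the actual work):</p>

<pre><code>import os
import subprocess

for device in ('gpu0', 'gpu1'):
    # each child picks up its own device setting when it imports theano
    env = dict(os.environ, THEANO_FLAGS='device=%s' % device)
    subprocess.Popen(['python', 'worker.py'], env=env)
</code></pre>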
30,195,837 | 3 |
<multithreading><machine-learning><neural-network><caffe><openblas>
|
2015-05-12T15:52:00.420
| null | 2,751,512 |
How to use multi CPU cores to train NNs using caffe and OpenBLAS
|
<p>I have recently been learning deep learning and my friend recommended caffe. After installing it with OpenBLAS, I followed the <a href="http://caffe.berkeleyvision.org/gathered/examples/mnist.html" rel="nofollow">MNIST task</a> tutorial in the docs. But later I found it was super slow and only one CPU core was working.</p>
<p>The problem is that the servers in my lab don't have GPU, so I have to use CPUs instead.</p>
<p>I Googled this and got some pages like <a href="https://github.com/BVLC/caffe/pull/80" rel="nofollow">this</a>. I tried the suggestions there, but caffe still used only one core.</p>
<p>How can I make caffe use multi CPUs?</p>
<p>Many thanks.</p>
|
[
{
"AnswerId": "44172173",
"CreationDate": "2017-05-25T04:16:20.217",
"ParentId": null,
"OwnerUserId": "3698136",
"Title": null,
"Body": "<p>While building OpenBLAS, you have to set the flag USE_OPENMP = 1 to enable OpenMP support. Next set Caffe to use OpenBLAS in the Makefile.config, please export the number of threads you want to use during runtime by setting up OMP_NUM_THREADS=n where n is the number of threads you want.</p>\n"
},
{
"AnswerId": "34971873",
"CreationDate": "2016-01-24T03:27:34.973",
"ParentId": null,
"OwnerUserId": "2488716",
"Title": null,
"Body": "<p>I found that this method works:</p>\n\n<p>When you build the caffe, in your make command, do use this for 8 cores: \n<code>make all -j8</code> and\n<code>make pycaffe -j8</code></p>\n\n<p>Also, make sure \n <code>OPENBLAS_NUM_THREADS=8</code> \nis set.</p>\n\n<p><a href=\"https://stackoverflow.com/questions/31395729/how-to-enable-multithreading-with-caffe\">This</a> question has a full script for the same.</p>\n"
},
{
"AnswerId": "37351642",
"CreationDate": "2016-05-20T16:34:31.593",
"ParentId": null,
"OwnerUserId": "3701279",
"Title": null,
"Body": "<p>@Karthik. That also works for me. One interesting discovery that I made was that using 4 threads reduces forward/backward pass during the caffe timing test by a factor of 2. However, increasing the thread count to 8 or even 24 results in f/b speed that is less than what I get with OPENBLAS_NUM_THREADS=4.\nHere are times for a few thread counts (tested on NetworkInNetwork model).</p>\n\n<p>[#threads] [f/b time in ms]<br>\n1 223<br>\n2 150<br>\n4 113<br>\n8 125<br>\n12 144 </p>\n\n<p>For comparison, on a Titan X GPU the f/b pass took 1.87 ms.</p>\n"
}
] |
30,197,510 | 1 |
<python><optimization><neural-network><theano>
|
2015-05-12T17:17:46.437
| 30,222,321 | 1,563,927 |
Theano: how to efficiently undo/reverse max-pooling
|
<p>I'm using Theano 0.7 to create a <a href="http://deeplearning.net/tutorial/lenet.html" rel="nofollow">convolutional neural net</a> which uses <strong><a href="http://deeplearning.net/tutorial/lenet.html#maxpooling" rel="nofollow">max-pooling</a></strong> (i.e. shrinking a matrix down by keeping only the local maxima).</p>
<p>In order to "undo" or "reverse" the max-pooling step, one method is to store the locations of the maxima as auxiliary data, then simply recreate the un-pooled data by making a big array of zeros and using those auxiliary locations to place the maxima in their appropriate locations.</p>
<p>Here's how I'm currently doing it:</p>
<pre></pre>
<blockquote>
<p><em>(By the way, in this case I have a 3D tensor, and it's only the third axis that gets max-pooled. People who work with image data might expect to see two dimensions getting max-pooled.)</em></p>
</blockquote>
<p>The output is:</p>
<pre></pre>
<p>This method <strong>works</strong> but it's a <strong>bottleneck</strong>, taking most of my computer's time (I think the set_subtensor calls might imply cpu<->gpu data copying). So: can this be implemented more efficiently?</p>
<p>I suspect there's a way to express this as a single set_subtensor call, which may be faster, but I don't see how to get the tensor indexing to broadcast properly.</p>
<hr>
<p><strong>UPDATE:</strong> I thought of a way of doing it in one call, by working on the flattened tensors:</p>
<pre></pre>
<p>However, this is still not good efficiency-wise because when I run this (added on to the end of the above script) I find out that the Cuda libraries can't currently do the integer index manipulation efficiently:</p>
<pre></pre>
|
[
{
"AnswerId": "30222321",
"CreationDate": "2015-05-13T18:10:49.343",
"ParentId": null,
"OwnerUserId": "3489247",
"Title": null,
"Body": "<p>I don't know whether this is faster, but it may be a little more concise. See if it is useful for your case.</p>\n\n<pre><code>import numpy as np\nimport theano\nimport theano.tensor as T\n\nminibatchsize = 2\nnumfilters = 3\nnumsamples = 4\nupsampfactor = 5\n\ntotalitems = minibatchsize * numfilters * numsamples\n\ncode = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples))\n\nauxpos = np.arange(totalitems).reshape((minibatchsize, numfilters, numsamples)) % upsampfactor \nauxpos += (np.arange(4) * 5).reshape((1,1,-1))\n\n# first in numpy\nshp = code.shape\nupsampled_np = np.zeros((shp[0], shp[1], shp[2] * upsampfactor))\nupsampled_np[np.arange(shp[0]).reshape(-1, 1, 1), np.arange(shp[1]).reshape(1, -1, 1), auxpos] = code\n\nprint \"numpy output:\"\nprint upsampled_np\n\n# now the same idea in theano\nencoded = T.tensor3()\npositions = T.tensor3(dtype='int64')\nshp = encoded.shape\nupsampled = T.zeros((shp[0], shp[1], shp[2] * upsampfactor))\nupsampled = T.set_subtensor(upsampled[T.arange(shp[0]).reshape((-1, 1, 1)), T.arange(shp[1]).reshape((1, -1, 1)), positions], encoded)\n\nprint \"theano output:\"\nprint upsampled.eval({encoded: code, positions: auxpos})\n</code></pre>\n"
}
] |
30,198,926 | 2 |
<neural-network><computer-vision><deep-learning><caffe><image-segmentation>
|
2015-05-12T18:36:33.810
| 32,453,021 | 562,769 |
Can Caffe classify pixels of an image directly?
|
<p>I would like to classify pixels of an image as "is street" or "is not street". I have some training data from the <a href="http://www.cvlibs.net/datasets/kitti/eval_road.php">KITTI dataset</a> and I have seen that Caffe has an <a href="http://caffe.berkeleyvision.org/tutorial/layers.html#images">ImageData</a> layer type.
The labels are there in the form of images of the same size as the input image.</p>
<p>Besides Caffe, my first idea to solve this problem was by giving image patches around the pixel which should get classified (e.g. 20 pixels to the top / left / right / bottom), resulting in 41×41=1681 features per pixel I want to classify.<br>
However, if I could tell caffe how to use the labels without having to create those image patches manually (and the ImageData layer type seems to suggest that it is possible) I would prefer that.</p>
<p>Can Caffe classify pixels of an image directly? What would such a prototxt network definition look like? How do I give Caffe the information about the labels?</p>
<p>I guess the input layer would be something like</p>
<pre></pre>
<p>However, I am not sure what some of the parameters mean exactly. Is the cropping really centered? How does caffe deal with the corner pixels? What are the remaining parameters good for?</p>
|
[
{
"AnswerId": "30207142",
"CreationDate": "2015-05-13T06:23:57.330",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>Can Caffe classify pixels? in theory I think the answer is Yes. I didn't try it myself, but I don't think there is anything stopping you from doing so.</p>\n\n<p><strong>Inputs:</strong><br>\nYou need two <code>IMAGE_DATA</code> layers: one that loads the RGB image and another that loads the <em>corresponding</em> label-mask image. Note that if you use <code>convert_imageset</code> utility you cannot shuffle each set independently - you won't be able to match an image to its label-mask. </p>\n\n<p>An <code>IMAGE_DATA</code> layer has two \"tops\" one for \"data\" and one for \"label\" I suggest you set the \"label\"s of both input layers to the index of the image/label-mask and add a utility layer that verifies that the indices <em>always</em> matches, this will prevent you from training on the wrong label-masks ;)</p>\n\n<p>Example:</p>\n\n<pre><code>layer {\n name: \"data\"\n type: \"ImageData\"\n top: \"data\"\n top: \"data-idx\"\n # paramters...\n}\nlayer {\n name: \"label-mask\"\n type: \"ImageData\"\n top: \"label-mask\"\n top: \"label-idx\"\n # paramters...\n}\nlayer {\n name: \"assert-idx\"\n type: \"EuclideanLoss\"\n bottom: \"data-idx\"\n bottom: \"label-idx\"\n top: \"this-must-always-be-zero\"\n}\n</code></pre>\n\n<p><strong>Loss layer:</strong><br>\nNow, you can do whatever you like to the input data, but eventually to get pixel-wise labeling you need pixel-wise loss. Therefore, you must have your last layer (before the loss) produce a prediction with the <strong>same</strong> width and height as the <code>\"label-mask\"</code> Not all loss layers knows how to handle multiple labels, but <code>\"EuclideanLoss\"</code> (for example) can, therefore you should have a loss layer something like</p>\n\n<pre><code>layer {\n name: \"loss\"\n type: \"EuclideanLoss\"\n bottom: \"prediction\" # size on image\n bottom: \"label-mask\"\n top: \"loss\"\n}\n</code></pre>\n\n<p>I think <code>\"SoftmaxWithLoss\"</code> has a newer version that can be used in this scenario, but you'll have to check it our yourself. In that case <code>\"prediction\"</code> should be of shape 2-by-h-by-w (since you have 2 labels).</p>\n\n<p><strong>Additional notes:</strong><br>\nOnce you set the input size in the parameters of the <code>\"ImageData\"</code> you fix the sizes of all blobs of the net. You must set the label size to the same size. You must carefully consider how you are going to deal with images of different shape and sizes.</p>\n"
},
{
"AnswerId": "32453021",
"CreationDate": "2015-09-08T08:37:54.717",
"ParentId": null,
"OwnerUserId": "1179925",
"Title": null,
"Body": "<p>Seems you can try <a href=\"http://www.cs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf\" rel=\"noreferrer\">fully convolutional networks for semantic segmentation</a></p>\n\n<p>Caffe was cited in this paper: <a href=\"https://github.com/BVLC/caffe/wiki/Publications\" rel=\"noreferrer\">https://github.com/BVLC/caffe/wiki/Publications</a></p>\n\n<p>Also here is the model:\n<a href=\"https://github.com/BVLC/caffe/wiki/Model-Zoo#fully-convolutional-semantic-segmentation-models-fcn-xs\" rel=\"noreferrer\">https://github.com/BVLC/caffe/wiki/Model-Zoo#fully-convolutional-semantic-segmentation-models-fcn-xs</a></p>\n\n<p>Also this presentation can be helpfull:\n<a href=\"http://tutorial.caffe.berkeleyvision.org/caffe-cvpr15-pixels.pdf\" rel=\"noreferrer\">http://tutorial.caffe.berkeleyvision.org/caffe-cvpr15-pixels.pdf</a></p>\n"
}
] |
30,200,589 | 2 |
<python><c++><logging><caffe>
|
2015-05-12T20:14:33.703
| null | 1,452,257 |
Disable and renable logging created from C++ module in Python
|
<p>I'm using a deep learning library, Caffe, which is written in C++ and has an interface to Python. One of my commands creates a lot of unnecessary output to the log and I would really like to remove that by temporarily disabling logging.</p>
<p>Caffe uses GLOG, and I've tried using its minimum log level setting to only log important messages. However, that didn't work. I've also tried using the Python logging module to shut down all logging temporarily using the code below, which didn't work either.</p>
<pre></pre>
|
[
{
"AnswerId": "31236798",
"CreationDate": "2015-07-06T01:51:04.470",
"ParentId": null,
"OwnerUserId": "1170917",
"Title": null,
"Body": "<p>You likely need to set the log level environmental variable before you start Python. Or at leastt this worked for me:</p>\n\n<p>GLOG_minloglevel=3 python script.py</p>\n\n<p>Which silenced loading messages.</p>\n"
},
{
"AnswerId": "34735198",
"CreationDate": "2016-01-12T03:39:23.500",
"ParentId": null,
"OwnerUserId": "4139842",
"Title": null,
"Body": "<p><code>GLOG_minloglevel=3</code> ,only by executing that line in Python before calling</p>\n\n<p>so,you can try</p>\n\n<pre><code>os.environ[\"GLOG_minloglevel\"] =\"3\"\nimport caffe\n</code></pre>\n"
}
] |
30,205,464 | 1 |
<makefile><protocol-buffers><homebrew><caffe><matcaffe>
|
2015-05-13T04:02:42.667
| null | 568,145 |
protobuf installed using brew but not found in build process
|
<p><strong>Background</strong></p>
<p>Yesterday I built <a href="http://caffe.berkeleyvision.org/installation.html" rel="nofollow">Caffe</a> and had no problems with its dependencies.</p>
<p>Today I had problems building the Caffe Matlab wrappers due to protobuf dependencies not being found. So I rebuilt Caffe from a clean state, hoping that would fix the problem.</p>
<p>Now the Caffe build is complaining about the protobuf dependency.</p>
<p>The error output is given at the bottom of this question.</p>
<p>Between the original (successful) build and the failed build, I needed to install the protobuf Python package to allow the Caffe python wrappers to import protobuf, as python complained about not being able to find the protobuf package. That was the only "change" involving protobuf prior to the failed Caffe rebuild.</p>
<p>I have tried reinstalling protobuf using brew, but this did not help.</p>
<p>So essentially the chronology of events relating to protobuf is as follows:</p>
<pre></pre>
<p>Whenever protobuf was not found, Homebrew nevertheless showed that protobuf (2.6.1) was installed.</p>
<p><strong>Question</strong></p>
<p>Can someone please explain why protobuf is not being found when it is clearly installed?</p>
<p>What is particularly confusing is the fact that it was found initially (during the original, successful build) and now it is not being found despite following the same approach.</p>
<p>Here is the error output:</p>
<pre></pre>
|
[
{
"AnswerId": "33409948",
"CreationDate": "2015-10-29T09:09:13.580",
"ParentId": null,
"OwnerUserId": "3585651",
"Title": null,
"Body": "<p>Did you build protobuf from source? I had a similar problem due to compiling protobuf and my project with differing versions of libc++. </p>\n\n<p>I solved it by adding c++11 to CXXFLAGS.</p>\n"
}
] |
30,225,633 | 1 |
<python><numpy><theano>
|
2015-05-13T21:39:27.067
| null | 3,998,122 |
Cross Entropy for batch with Theano
|
<p>I am attempting to implement an RNN and have output predictions p_y of shape (batch_size, time_points, num_classes). I also have a target_output of shape (batch_size, time_points), where the value at a given index of target_output is an integer denoting the class (a value between 0 and num_classes-1). How can I index p_y with target_output to get the probabilities of the given classes that I need to compute cross-entropy?</p>
<p>I'm not even sure how to do this in numpy. The expression p_y[target_output] does not give the desired results.</p>
|
[
{
"AnswerId": "30434928",
"CreationDate": "2015-05-25T09:17:48.823",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>You need to use advanced indexing (search for \"advanced indexing\" <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html\" rel=\"nofollow\">here</a>). But Theano advanced indexing behaves differently to numpy so knowing how to do this in numpy may not be all that helpful!</p>\n\n<p>Here's a function which does this for my setup, but note that the order of my dimensions differs from yours. I use (time points, batch_size, num_classes). This also assumes you want to use the 1-of-N categorical cross-entropy variant. You may not want sequence length padding either.</p>\n\n<pre><code>def categorical_crossentropy_3d(coding_dist, true_dist, lengths):\n # Zero out the false probabilities and sum the remaining true probabilities to remove the third dimension.\n indexes = theano.tensor.arange(coding_dist.shape[2])\n mask = theano.tensor.neq(indexes, true_dist.reshape((true_dist.shape[0], true_dist.shape[1], 1)))\n predicted_probabilities = theano.tensor.set_subtensor(coding_dist[theano.tensor.nonzero(mask)], 0.).sum(axis=2)\n\n # Pad short sequences with 1's (the pad locations are implicitly correct!)\n indexes = theano.tensor.arange(predicted_probabilities.shape[0]).reshape((predicted_probabilities.shape[0], 1))\n mask = indexes >= lengths\n predicted_probabilities = theano.tensor.set_subtensor(predicted_probabilities[theano.tensor.nonzero(mask)], 1.)\n\n return -theano.tensor.log(predicted_probabilities)\n</code></pre>\n"
}
] |
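<p>For the (batch_size, time_points, num_classes) layout in the question, the same advanced-indexing idea looks as follows in numpy (keeping in mind, per the answer above, that Theano's advanced indexing behaves somewhat differently):</p>

<pre><code>import numpy as np

batch_size, time_points, num_classes = 2, 3, 4
p_y = np.random.rand(batch_size, time_points, num_classes)
target_output = np.random.randint(0, num_classes, (batch_size, time_points))

# one probability per (batch, time) position -> shape (batch_size, time_points)
probs = p_y[np.arange(batch_size)[:, None],
            np.arange(time_points)[None, :],
            target_output]

cross_entropy = -np.log(probs).mean()
print(probs.shape, cross_entropy)
</code></pre>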
30,236,070 | 2 |
<python><theano><pymc3>
|
2015-05-14T11:21:04.037
| 30,241,668 | 3,568,242 |
PyMC3 & Theano - Theano code that works stop working after pymc3 import
|
<p>Some simple theano code that works perfectly stops working when I import pymc3.</p>
<p>Here are some snippets in order to reproduce the error:</p>
<pre></pre>
<p>And I get the following error for each of the previous snippets:</p>
<pre></pre>
<p>Any ideas?
Thanks in advance</p>
|
[
{
"AnswerId": "38948567",
"CreationDate": "2016-08-15T02:31:58.787",
"ParentId": null,
"OwnerUserId": "1716733",
"Title": null,
"Body": "<p>Solution proposed <a href=\"https://github.com/snake-charmer-devs/snake-charmer/issues/18\" rel=\"nofollow\">here</a> lasts a bit longer than setting the flag. In your shell type:</p>\n\n<pre><code>theano-cache purge\n</code></pre>\n"
},
{
"AnswerId": "30241668",
"CreationDate": "2015-05-14T15:49:11.160",
"ParentId": null,
"OwnerUserId": "2288595",
"Title": null,
"Body": "<p>I think this is related to <code>pymc3</code> setting <code>theano.config.compute_test_value = 'raise'</code>: <a href=\"https://github.com/pymc-devs/pymc3/blob/master/pymc3/model.py#L395\" rel=\"noreferrer\">https://github.com/pymc-devs/pymc3/blob/master/pymc3/model.py#L395</a> </p>\n\n<p>You can explicitly set <code>theano.config.compute_test_value</code> back to <code>'ignore'</code> to get rid of the error.</p>\n"
}
] |
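<p>A minimal sketch of the workaround from the accepted answer:</p>

<pre><code>import theano
import theano.tensor as T
import pymc3  # as a side effect, sets theano.config.compute_test_value = 'raise'

theano.config.compute_test_value = 'ignore'  # restore the usual behaviour

x = T.vector('x')    # works again without a test value attached
y = (x ** 2).sum()
</code></pre>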
30,247,061 | 3 |
<python><convolution><theano>
|
2015-05-14T20:47:34.070
| 30,247,267 | 4,013,571 |
How can I get a 1D convolution in theano
|
<p>The only function I can find is for 2D convolutions <a href="http://deeplearning.net/software/theano/library/tensor/signal/conv.html#theano.tensor.signal.conv.conv2d" rel="noreferrer">described here</a>...</p>
<p>Is there an optimised 1D function?</p>
|
[
{
"AnswerId": "32147403",
"CreationDate": "2015-08-21T18:46:55.723",
"ParentId": null,
"OwnerUserId": "742616",
"Title": null,
"Body": "<p>Just to be a bit more specific, I found this to work nicely:</p>\n\n<pre><code>conv2d = T.signal.conv.conv2d\n\nx = T.dmatrix()\ny = T.dmatrix()\nveclen = x.shape[1]\n\nconv1d_expr = conv2d(x, y, image_shape=(1, veclen), border_mode='full')\n\nconv1d = theano.function([x, y], outputs=conv1d_expr)\n</code></pre>\n\n<p><code>border_mode = 'full'</code> is optional.</p>\n"
},
{
"AnswerId": "30247127",
"CreationDate": "2015-05-14T20:51:58.577",
"ParentId": null,
"OwnerUserId": "4013571",
"Title": null,
"Body": "<p>It looks as though this is <a href=\"https://github.com/Theano/Theano/issues/2028\" rel=\"nofollow\">in development</a>.\nI've realised I can use the <code>conv2d()</code> function by specifying either width or height as 1...</p>\n\n<p>For the function <code>conv2d()</code>, the parameter <code>image_shape</code> takes a list of length 4 containing:</p>\n\n<pre><code>([number_images,] height, width)\n</code></pre>\n\n<p>by setting <code>height=1</code> or <code>width=1</code> it forces it to a 1D convolution.</p>\n"
},
{
"AnswerId": "30247267",
"CreationDate": "2015-05-14T21:01:27.147",
"ParentId": null,
"OwnerUserId": "3928385",
"Title": null,
"Body": "<p>While I believe there's no <code>conv1d</code> in theano, Lasagne (a neural network library on top of theano) has several implementations of Conv1D layer. Some are based on <code>conv2d</code> function of theano with one of the dimensions equal to 1, some use single or multiple dot products. I would try all of them, may be a dot-product based ones will perform better than <code>conv2d</code> with <code>width=1</code>.</p>\n\n<p><a href=\"https://github.com/Lasagne/Lasagne/blob/master/lasagne/theano_extensions/conv.py\" rel=\"nofollow\">https://github.com/Lasagne/Lasagne/blob/master/lasagne/theano_extensions/conv.py</a></p>\n"
}
] |
30,253,520 | 1 |
<c++><machine-learning><neural-network><deep-learning><caffe>
|
2015-05-15T07:05:59.953
| null | 562,769 |
What does 'Attempting to upgrade input file specified using deprecated transformation parameters' mean?
|
<p>I am currently trying to train my first net with Caffe. I get the following output:</p>
<pre></pre>
<p>What does </p>
<blockquote>
<p>Attempting to upgrade input file specified using deprecated transformation parameters</p>
</blockquote>
<p>mean? Where exactly did I use something deprecated? What should I use instead? </p>
|
[
{
"AnswerId": "30285502",
"CreationDate": "2015-05-17T09:41:34.193",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>Recently, input transformation (scaling/cropping etc.) was separated from the IMAGE_DATA layer into a separate object: data transformer. This change affected the protobuffer syntax and the syntax of the IMAGE_DATA layer.</p>\n\n<p>It appears as if your <code>first_net.prototxt</code> is in the old format and Caffe converts it for you to the new format. </p>\n\n<p>You can do this conversion manually yourself using <code>./build/tools/upgrade_net_proto_text</code> (for prototxt files) and <code>./build/tools/upgrade_net_proto_binary</code> (for binaryproto files). </p>\n"
}
] |
30,259,911 | 1 |
<machine-learning><neural-network><deep-learning><caffe><matcaffe>
|
2015-05-15T12:46:16.540
| 30,285,844 | 2,191,652 |
Object categories of pretrained imagenet model in caffe
|
<p>I'm using the pretrained imagenet model provided with the caffe (CNN) library. I can output a 1000-dim vector of object scores for any image using this model.<br>
However, I don't know what the actual object categories are. Has someone found a file where the corresponding object categories are listed?</p>
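<p>For reference, a minimal Python sketch of mapping the 1000 scores to names, assuming the standard <code>synset_words.txt</code> that ships with Caffe's ilsvrc12 helper data (the path and the random scores below are placeholders):</p>
<pre><code>import numpy as np

# data/ilsvrc12/synset_words.txt is fetched by get_ilsvrc_aux.sh
with open('synset_words.txt') as f:
    labels = [line.strip().split(' ', 1)[1] for line in f]

scores = np.random.rand(1000)  # stand-in for the net's 1000-dim output
print(labels[int(np.argmax(scores))])
</code></pre>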
|
[
{
"AnswerId": "30285844",
"CreationDate": "2015-05-17T10:24:13.113",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>You should look for the file <code>'synset_words.txt'</code> it has 1000 line each line provides a description of a different class.</p>\n\n<p>For more information on how to get this file (and some others you might need) you can read <a href=\"http://caffe.berkeleyvision.org/gathered/examples/imagenet.html\" rel=\"nofollow\">this</a>.</p>\n\n<hr>\n\n<p>If you want all the labels to be ready-for-use in Matlab, you can read the txt file into a cell array (a cell per class):</p>\n\n<pre><code>C = textread('/path/to/synset_words.txt','%s','delimiter','\\n');\n</code></pre>\n"
}
] |
30,276,809 | 1 |
<lua><torch>
|
2015-05-16T14:22:59.913
| 30,277,067 | 1,546,029 |
How to permanently add directory to Lua search path?
|
<p>What's a simple way to <em>permanently</em> add a directory to the Lua search path?</p>
|
[
{
"AnswerId": "30277067",
"CreationDate": "2015-05-16T14:48:48.967",
"ParentId": null,
"OwnerUserId": "107090",
"Title": null,
"Body": "<p>Set the corresponding <a href=\"http://www.lua.org/manual/5.3/#env\" rel=\"nofollow\">environment variable</a> or rebuild Lua after adding your path to the <a href=\"http://www.lua.org/source/5.3/luaconf.h.html#LUA_PATH_DEFAULT\" rel=\"nofollow\">source</a>. </p>\n"
}
] |
30,288,203 | 1 |
<lua><torch>
|
2015-05-17T14:58:13.070
| 30,288,867 | 1,715,101 |
Torch tensor equivalent function to matlab's "find"?
|
<p>In a nutshell, I would like to know if there is a tensor command in torch that gives me the indices of elements in a tensor that satisfy a certain criteria.</p>
<p>Here is matlab code that illustrates what I would like to be able to do in torch:</p>
<pre></pre>
<p>I understand that I could do this in torch using a for loop, but is there some equivalent to matlab's find command that would allow me to do this more compactly?</p>
|
[
{
"AnswerId": "30288867",
"CreationDate": "2015-05-17T15:53:41.517",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<pre><code>x[x:gt(5)] = 0\n</code></pre>\n\n<p>In general there are x:gt :lt :ge :le :eq</p>\n\n<p>There is also the general :apply function tha takes in an anonymous function and applies it to each element.</p>\n"
}
] |
30,288,443 | 1 |
<python><optimization><neural-network><theano>
|
2015-05-17T15:18:50.870
| null | 1,606,150 |
Theano. Flatten all the parameters for later optimization
|
<p>I opened the following issue with my problem:
<a href="https://github.com/Theano/Theano/issues/2920" rel="nofollow">https://github.com/Theano/Theano/issues/2920</a></p>
<p>Basically, I want to prepare all my model's parameters for future optimization, and I need to flatten and concatenate all my variables in order to find the derivative later. But I am stuck.</p>
<p>If somebody knows how to solve it, please, share your wisdom :)</p>
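<p>To make the intended pattern concrete, a minimal sketch with two shared parameters and a toy cost (names and sizes here are arbitrary assumptions):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

W = theano.shared(np.zeros((3, 4)), name='W')
b = theano.shared(np.zeros(4), name='b')
x = T.dvector('x')

cost = T.sum((T.dot(x, W) + b) ** 2)
params = [W, b]               # a plain Python list of shared variables
grads = T.grad(cost, params)  # one gradient tensor per parameter

updates = [(p, p - 0.01 * g) for p, g in zip(params, grads)]
step = theano.function([x], cost, updates=updates)
print(step(np.ones(3)))
</code></pre>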
|
[
{
"AnswerId": "30415477",
"CreationDate": "2015-05-23T17:12:04.140",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>Parameters are usually Theano shared variables, not symbolic variables. The result of using <code>T.concatenate</code> is going to be a symbolic expression but I suspect you don't want that. Try just passing an ordinary Python list of parameters into <code>T.grad</code>.</p>\n\n<p>So try,</p>\n\n<pre><code>self.theta = [self.hidden_layer.theta, self.log_layer.theta]\n</code></pre>\n\n<p>and</p>\n\n<pre><code>grad_theta = T.grad(cost_var, classifier.theta)\n</code></pre>\n"
}
] |
30,302,520 | 1 |
<python><numpy><parallel-processing><theano>
|
2015-05-18T11:52:29.460
| 30,302,918 | 738,017 |
Parallelize operations for each cell in a numpy array
|
<p>I am trying to figure out which is the best way to parallelize the execution of a single operation for each cell in a 2D numpy array.</p>
<p>In particular, I need to do a bitwise operation for each cell in the array.</p>
<p>This is what I do using a single loop:</p>
<pre></pre>
<p>I found a way to do the same as above using the <code>numpy.vectorize</code> method:</p>
<pre></pre>
<p>However, using vectorize doesn't seem to improve performance.</p>
<p>I read about <em>numexpr</em> in <a href="https://stackoverflow.com/a/11460119/738017">this answer on StackOverflow</a>, where <em>Theano</em> and <em>Cython</em> are also cited. <em>Theano</em> in particular seems like a good solution, but I cannot find examples that fit my case.</p>
<p>So my question is: which is the best way to improve the above code, using parallelization and possibly GPU computation? May someone post some sample code to do this?</p>
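<p>For reference, a hedged sketch of the same elementwise bitwise computation as a compiled Theano function; the shift-and-mask is rewritten with integer division and modulo (equivalent for non-negative integers) to stay within operations Theano certainly supports, and the array size is illustrative:</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

v = T.imatrix('v')
expr = (v // (2 ** 7)) % 256   # same as (v >> 7) & 255 for non-negative ints
f = theano.function([v], expr)  # compiled once, runs as native code

data = np.random.randint(0, 2 ** 31 - 1, size=(1024, 1024)).astype(np.int32)
out = f(data)
</code></pre>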
|
[
{
"AnswerId": "30302918",
"CreationDate": "2015-05-18T12:12:29.030",
"ParentId": null,
"OwnerUserId": "4367286",
"Title": null,
"Body": "<p>I am not familiar with bitwise operations but this here gives me the same result as your code and is vectorized. </p>\n\n<pre><code>import numpy as np\n\n# make sure it is a numpy.array\nv = np.array(v)\n\n# vectorized computation\nN = (v >> 7) & 255\n</code></pre>\n"
}
] |
30,305,891 | 1 |
<python><numpy><theano>
|
2015-05-18T14:27:51.757
| 30,309,565 | 2,213,825 |
Convert einsum computation to dot product to be used in Theano
|
<p>I have just recently learned about <code>numpy.einsum</code> and quickly became addicted to it. But it seems that theano doesn't have an equivalent function, so I need to convert my code to theano somehow. How can I write the following computation in theano?</p>
<pre></pre>
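<p>For reference, a hedged sketch of how this contraction could look in Theano using <code>dimshuffle</code> and <code>T.dot</code>, assuming the pattern is <code>'ijk,lj->ilk'</code> (the shapes below are placeholders):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

a = T.tensor3('a')  # shape (i, j, k)
b = T.matrix('b')   # shape (l, j)

# contract over j, then reorder the axes to (i, l, k)
out = T.dot(a.dimshuffle(0, 2, 1), b.T).dimshuffle(0, 2, 1)
f = theano.function([a, b], out)

A = np.random.rand(3, 5, 4).astype(theano.config.floatX)
B = np.random.rand(6, 5).astype(theano.config.floatX)
print(np.allclose(f(A, B), np.einsum('ijk,lj->ilk', A, B)))  # True
</code></pre>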
|
[
{
"AnswerId": "30309565",
"CreationDate": "2015-05-18T17:35:13.150",
"ParentId": null,
"OwnerUserId": "110026",
"Title": null,
"Body": "<p>You only need to rearrange your axes to get this to work:</p>\n\n<pre><code>>>> import numpy as np\n>>> a = np.random.rand(3, 4, 5)\n>>> b = np.random.rand(5, 6)\n>>> np.allclose(np.einsum('ikj,jl->ikl', a, b), np.dot(a, b))\n</code></pre>\n\n<p>So with that in mind:</p>\n\n<pre><code>>>> a = np.random.rand(3, 5, 4)\n>>> b = np.random.rand(6, 5)\n>>> out_ein = np.einsum('ijk,lj->ilk', a, b)\n>>> out_dot = np.transpose(np.dot(np.transpose(a, (0, 2, 1)),\n... np.transpose(b, (1, 0))),\n... (0, 2, 1))\n>>> np.allclose(out_ein, out_dot)\n</code></pre>\n"
}
] |
30,325,108 | 2 |
<c++><layer><deep-learning><caffe>
|
2015-05-19T12:04:11.647
| 30,394,784 | 213,615 |
Caffe layer creation failure
|
<p>I'm trying to load, in TEST phase, a network configuration which has a memory data layer first and then a convolution layer. The MemoryData layer creation succeeds,
but the convolution layer's creation fails at the following location:</p>
<pre></pre>
<p>The printed error is: </p>
<blockquote>
<p>F0519 14:54:12.494139 14504 layer_factory.hpp:77] Check failed:
registry.count(type) == 1 (0 vs. 1) Unknown layer type: Convolution
(known types: MemoryData)</p>
</blockquote>
<p>The registry has one entry only, indeed with MemoryData.
When stepping into the registry creation functions, it looks like it is first (and last, since this is a singleton) called from </p>
<pre></pre>
<p>in memory_data_layer.cpp.</p>
<p>I see similar calls for the other supported layers, but it looks like they are never called.
How could I solve it?</p>
<p>Thanks!</p>
|
[
{
"AnswerId": "30394784",
"CreationDate": "2015-05-22T10:50:11.197",
"ParentId": null,
"OwnerUserId": "561794",
"Title": null,
"Body": "<p>This error occurs when trying to link caffe statically to an executable. You need to pass extra linker flags to make sure that layer registration code gets included.</p>\n\n<p>If you are using cmake take a look at Targets.cmake:</p>\n\n<pre><code>###########################################################################################\n# Defines global Caffe_LINK flag, This flag is required to prevent linker from excluding\n# some objects which are not addressed directly but are registered via static constructors\nif(BUILD_SHARED_LIBS)\n set(Caffe_LINK caffe)\nelse()\n if(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"Clang\")\n set(Caffe_LINK -Wl,-force_load caffe)\n elseif(\"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"GNU\")\n set(Caffe_LINK -Wl,--whole-archive caffe -Wl,--no-whole-archive)\n endif()\nendif()\n</code></pre>\n\n<p>And then where you create your target:</p>\n\n<pre><code># target\nadd_executable(${name} ${source})\ntarget_link_libraries(${name} ${Caffe_LINK})\n</code></pre>\n\n<p>A quick solution would be to build and link caffe as a shared lib instead of static.</p>\n\n<p>Also see <a href=\"https://groups.google.com/forum/#!topic/caffe-users/Py6IwMQvtqo/discussion\" rel=\"noreferrer\">this post</a>.</p>\n\n<p>Just to complete this for MSVC compilation on Windows:\nUse <a href=\"https://msdn.microsoft.com/en-US/library/bxwfs976(v=vs.120).aspx\" rel=\"noreferrer\">/OPT:NOREF</a> or <a href=\"https://msdn.microsoft.com/en-us/library/2s3hwbhs(v=VS.100).aspx\" rel=\"noreferrer\">/INCLUDE</a> linker options on the target executable or dll.</p>\n"
},
{
"AnswerId": "32119735",
"CreationDate": "2015-08-20T13:34:15.130",
"ParentId": null,
"OwnerUserId": "2559219",
"Title": null,
"Body": "<p>Replace <code>-l$(PROJECT)</code> with <code>$(STATIC_LINK_COMMAND)</code> in your Makefile in the appropriate places, and remove the now unnecessary runtime load path: <code>-Wl,-rpath,$(ORIGIN)/../lib</code>. </p>\n"
}
] |
30,330,394 | 1 |
<command-line><homebrew><caffe>
|
2015-05-19T15:51:27.600
| 30,389,640 | 2,438,538 |
What does the `--fresh` option do in Brew?
|
<p>While following installation instructions (e.g., for <a href="http://caffe.berkeleyvision.org/install_osx.html" rel="nofollow">caffe</a> for os x), I run into the <code>--fresh</code> flag for <a href="http://brew.sh/" rel="nofollow">homebrew</a>. For example,</p>
<pre></pre>
<p>However, I see no documentation about what <code>--fresh</code> does, and I don't find it in the source code for homebrew. What does this flag do? (Or what did it used to do?)</p>
|
[
{
"AnswerId": "30389640",
"CreationDate": "2015-05-22T06:20:14.090",
"ParentId": null,
"OwnerUserId": "2438538",
"Title": null,
"Body": "<p>I found an old <a href=\"https://github.com/Homebrew/homebrew/issues/26979\" rel=\"nofollow\">github issue</a> describing the behavior of <code>--fresh</code>.</p>\n\n<p>The flag was meant to ensure packages would be installed without any previously set compile-time options (like <code>--with-python</code>), but it was removed because it didn't do anything:</p>\n\n<pre><code>commit 64744646e9be93dd758ca5cf202c6605accf4deb\nAuthor: Jack Nagel <jacknagel@gmail.com>\nDate: Sat Jul 5 19:28:15 2014 -0500\n\n Remove remaining references to \"--fresh\"\n\n This option was removed in 8cdf4d8ebf439eb9a9ffcaa0e455ced9459e1e41\n because it did not do anything.\n</code></pre>\n"
}
] |
30,331,476 | 2 |
<matlab><caffe><matcaffe>
|
2015-05-19T16:43:39.957
| 30,376,249 | 2,191,652 |
How to adapt Caffe Matlab wrapper for a network trained on Mnist?
|
<p>I successfully trained my Caffe net on the mnist database following <a href="http://caffe.berkeleyvision.org/gathered/examples/mnist.html" rel="nofollow">http://caffe.berkeleyvision.org/gathered/examples/mnist.html</a></p>
<p>Now I want to test the network with my own images using the Matlab wrapper.</p>
<p>Therefore in "matcaffe.m" im loading the file "lenet.prototxt" which is not used for training but which seems to be suited for testing. It is referencing a input size of 28 x 28 pixels:</p>
<pre></pre>
<p>Therefore I adapted the "prepare_image" function in "matcaffe.m" accordingly. It now looks like this:</p>
<pre></pre>
<p>This converts the input image to a [1 x 1 x 28 x 28], 4-dim, grayscale image. But Matlab is still complaining:</p>
<pre></pre>
<p>Does somebody have experience with testing the trained mnist net on his own data?</p>
|
[
{
"AnswerId": "30376249",
"CreationDate": "2015-05-21T14:08:48.380",
"ParentId": null,
"OwnerUserId": "2191652",
"Title": null,
"Body": "<p>Finally I found the full solution:\nThis how to predict a digit of your own input image using the matcaffe.m (Matlab wrapper) for Caffe</p>\n\n<ol>\n<li>In \"matcaffe.m\": One has to reference the file \"caffe-master/examples/mnist/lenet.prototxt\"</li>\n<li>Adapt the file \"lenet.prototxt\" as pointed out by mprat: Change the entry input_dim to <code>input_dim: 1</code></li>\n<li>Use the follwing adaptation to the subfunction \"prepare_image\" in matcaffe.m:</li>\n</ol>\n\n<p>(Input can be an rgb image of any size)</p>\n\n<pre><code>function image = prepare_image(im)\n\nIMAGE_DIM = 28;\n\n% If input image is too big , is rgb and of type uint8:\n% -> resize to fixed input size, single channel, type float\n\nim = rgb2gray(im);\nim = imresize(im, [IMAGE_DIM IMAGE_DIM], 'bilinear');\nim = single(im);\n\n% Caffe needs a 4D input matrix which has single precision\n% Data has to be scaled by 1/256 = 0.00390625 (like during training)\n% In the second last line the image is beeing transposed!\nimages = zeros(1,1,IMAGE_DIM,IMAGE_DIM);\nimages(1,1,:,:) = 0.00390625*im';\nimages = single(images);\n</code></pre>\n"
},
{
"AnswerId": "30331542",
"CreationDate": "2015-05-19T16:47:48.007",
"ParentId": null,
"OwnerUserId": "2773607",
"Title": null,
"Body": "<p>The reason you are having that error (input size does not match) is that the network prototxt is expecting a batch of 64 images. The lines</p>\n\n<pre><code>input_dim: 64\ninput_dim: 1\ninput_dim: 28\ninput_dim: 28\n</code></pre>\n\n<p>Mean that the network is expecting a batch of 64 grayscale, 28 by 28 images. If you keep all your MATLAB code the same and change that first line to </p>\n\n<pre><code>input_dim: 1\n</code></pre>\n\n<p>Your problem should go away.</p>\n"
}
] |
30,356,427 | 1 |
<python><ipython><caffe>
|
2015-05-20T17:24:14.643
| null | 562,769 |
Why do I get 'ImportError: dynamic module does not define init function (PyInit__caffe)' in ipython notebook while it works for Python?
|
<p>I try to get the <a href="https://github.com/BVLC/caffe/blob/master/examples/classification.ipynb" rel="nofollow"></a> from the <a href="https://github.com/BVLC/caffe" rel="nofollow"></a> to work. When I Python 2.7.8 via console, it works. I can and (after a few seconds) its just finished. No error message. No need to append something to the .</p>
<p>When I start the example mentioned above and execute the first Python cell, I get an error. To make it simpler, I added a cell with only which gives me:</p>
<pre></pre>
<p>What is the problem here? </p>
<p>Could I perhaps have the wrong version?</p>
<pre></pre>
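<p>To narrow things down, a small diagnostic cell one can run in the notebook (purely illustrative; it only inspects the kernel's interpreter and search path):</p>
<pre><code>import sys
print(sys.version)     # does the kernel match the console's Python 2.7.8?
print(sys.executable)
print(sys.path)        # is the caffe python directory on the path?
</code></pre>
<p>A <code>PyInit__caffe</code> error usually means a Python 3 interpreter is trying to load a module built for Python 2, so checking which interpreter the notebook kernel actually runs seems worthwhile.</p>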
|
[
{
"AnswerId": "30886398",
"CreationDate": "2015-06-17T08:37:59.667",
"ParentId": null,
"OwnerUserId": "4973198",
"Title": null,
"Body": "<p>When you work with caffe in the ipython session. You should add caffe root folder to sys.path. From my experience, Ipython doesn't load PYTHONPATH variable like using Python in console.</p>\n"
}
] |
30,360,980 | 2 |
<lua><neural-network><deep-learning><torch>
|
2015-05-20T21:48:49.370
| null | 1,082,019 |
Torch, "size mismatch" in StochasticGradient function training
|
<p>I'm implementing a deep neural network in Torch7 with a dataset made of two torch.Tensor() objects.
The first is made of 12 elements (completeTable), the other one is made of 1 element (presentValue).
Each dataset row is an array of these two tensors:</p>
<pre></pre>
<p>Everything works for the neural network training and testing.
But now I want to switch and use only half of the 12 elements of completeTable, that is, only 6 elements (firstChromRegionProfile).</p>
<pre></pre>
<p>If I run the same neural network architecture with this new dataset, it does not work. <strong>It says that the trainer:train(dataset_firstChromRegion) function cannot work because of "size mismatch".</strong></p>
<p>Here's my neural network function:</p>
<pre></pre>
<p>Here's the error log:</p>
<pre></pre>
|
[
{
"AnswerId": "30402775",
"CreationDate": "2015-05-22T17:38:57.957",
"ParentId": null,
"OwnerUserId": "1082019",
"Title": null,
"Body": "<p>I was surprisingly able to fix my problem, by eliminating the line: </p>\n\n<pre><code>act_function = nn.Tanh();\n</code></pre>\n\n<p>and consequently by replacing any occurrences of <code>act_function</code> with <code>nn.Tanh()</code></p>\n\n<p>I do not why, but know everything works...\nSo the lesson is: never assign an activation function to a variable (!?).</p>\n"
},
{
"AnswerId": "30531799",
"CreationDate": "2015-05-29T14:10:41.920",
"ParentId": null,
"OwnerUserId": "4850610",
"Title": null,
"Body": "<p>All of your activation layers share the same <code>nn.Tanh()</code> object. That is the problem. Try something like this instead:</p>\n\n<pre><code>act_function = nn.Tanh\nperceptron:add( act_function() )\n</code></pre>\n\n<p>Why?</p>\n\n<p>To perform a backward propagation step, we have to compute a gradient of the layer w.r.t. its input. In our case: </p>\n\n<blockquote>\n <p>tanh'(input) = 1 - <strong>tanh(input)</strong>^2</p>\n</blockquote>\n\n<p>One can notice that <strong>tanh(input)</strong> = <strong>output</strong> of the layer's forward step. You can store this output inside the layer and use it during backward pass to speed up training. This is exactly what happens inside <code>nn</code> library:</p>\n\n<pre><code>// torch/nn/generic/Tanh.c/Tanh_updateGradInput:\n\nfor(i = 0; i < THTensor_(nElement)(gradInput); i++)\n {\n real z = ptr_output[i];\n ptr_gradInput[i] = ptr_gradOutput[i] * (1. - z*z);\n }\n</code></pre>\n\n<p>Output sizes of your activation layers don't match, so error occurs. Even if they did, it would lead to wrong result.</p>\n\n<p>Sorry about my English.</p>\n"
}
] |
30,361,755 | 1 |
<lua><openmp><torch>
|
2015-05-20T22:56:27.843
| 30,368,097 | 219,603 |
How to disable omp in Torch nn package?
|
<p>Specifically I would like to not use omp when the size of the input tensor is small. I have a small script to test the run time.</p>
<pre class="lang-lua prettyprint-override"></pre>
<p>If the input size is 10, then my basic log softmax function runs much faster:</p>
<pre></pre>
<p>But once it is 10,000,000, omp really helps a lot:</p>
<pre></pre>
<p>So I suspect that the omp overhead is very high. If my code has to call log softmax several times with small inputs (say, a tensor size of only 3), it will cost too much time. Is there a way to manually disable omp usage in some cases (but not always)?</p>
|
[
{
"AnswerId": "30368097",
"CreationDate": "2015-05-21T08:12:31.513",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<blockquote>\n <p>Is there a way to manually disable omp usage in some cases (but not always)?</p>\n</blockquote>\n\n<p>If you really want to do that one possibility is to use <a href=\"https://github.com/torch/torch7/blob/57ff61e/utils.c#L196\" rel=\"nofollow\"><code>torch.setnumthreads</code></a> and <a href=\"https://github.com/torch/torch7/blob/57ff61e/utils.c#L197\" rel=\"nofollow\"><code>torch.getnumthreads</code></a> like that:</p>\n\n<pre><code>local nth = torch.getnumthreads()\ntorch.setnumthreads(1)\n-- do something\ntorch.setnumthreads(nth)\n</code></pre>\n\n<p>So you can monkey-patch <code>nn.LogSoftMax</code> as follow:</p>\n\n<pre><code>nn.LogSoftMax.updateOutput = function(self, input)\n local nth = torch.getnumthreads()\n torch.setnumthreads(1)\n local out = input.nn.LogSoftMax_updateOutput(self, input)\n torch.setnumthreads(nth)\n return out\nend\n</code></pre>\n"
}
] |
30,362,370 | 1 |
<python><theano>
|
2015-05-21T00:02:44.470
| null | 2,822,004 |
What is the purpose of creating symbolic variables in Theano?
|
<p>In Theano, variables are written as 'symbols':</p>
<pre></pre>
<p>From reading the documentation, it is implied that the reason we create these symbols may be due to the fact that these variables are compiled into C code. But I'm not sure if this is the case, much less the only reason for using symbolic variables. </p>
<p>What is the purpose of creating symbolic variables in Theano? What can they do that an out-of-the-box assignment in Python can't?</p>
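<p>For context, the minimal define-then-compile pattern looks like this (the expression itself is an arbitrary toy example):</p>
<pre><code>import theano
import theano.tensor as T

a = T.dscalar('a')
b = T.dscalar('b')
out = a ** 2 + 2 * a * b + b ** 2  # builds a symbolic graph; nothing is computed yet

f = theano.function([a, b], out)   # compiled once, reusable many times
print(f(3.0, 4.0))                 # 49.0
</code></pre>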
|
[
{
"AnswerId": "30500909",
"CreationDate": "2015-05-28T08:18:13.377",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>The Theano web site opens with</p>\n\n<blockquote>\n <p>Theano is a Python library that allows you to define, optimize, and\n evaluate mathematical expressions involving multi-dimensional arrays\n efficiently.</p>\n</blockquote>\n\n<p>which seems like quite a good summary of what it does but perhaps not why it does it.</p>\n\n<p>One of the main features of Theano is its symbolic differentiation feature. That is, given a symbolic mathematical expression, Theano can automatically differentiate the expression with respect to some variable within the expression, i.e. it can automatically determine the gradient of the expression along some dimension(s) of interest.</p>\n\n<p>For example, if <code>y=x**2</code> (where <code>**</code> is the power operator) then the gradient of <code>y</code> with respect to <code>x</code> is <code>dy/dx = 2*x</code>. Theano can do this automatically:</p>\n\n<pre><code>import theano\nimport theano.tensor\nx = theano.tensor.scalar('x')\ny = x ** 2\ntheano.printing.pp(y)\ndy_dx = theano.grad(y, x)\ntheano.printing.pp(dy_dx)\n</code></pre>\n\n<p>If you run this code you'll see the output</p>\n\n<pre><code>(x ** TensorConstant{2})\n((fill((x ** TensorConstant{2}), TensorConstant{1.0}) * TensorConstant{2}) * (x ** (TensorConstant{2} - TensorConstant{1})))\n</code></pre>\n\n<p>The <code>fill</code> operation just creates a tensor of the correct shape (a scalar in this case) filled with ones and <code>TensorConstant{a}</code> is just the number <code>a</code> so this can be simplified to</p>\n\n<pre><code>(x ** 2) == x ** 2\n((1 * 2) * (x ** (2 - 1))) == (2 * (x ** 1)) == 2 * x\n</code></pre>\n\n<p>As expected.</p>\n\n<p>Clearly this is not particularly helpful for such a simple mathematical expression, but now imagine you're constructing an arbitrarily large and complex mathematical expression for which the gradient is not so immediately obvious, as is often the case in neural network research.</p>\n\n<p>But there's more. Theano can do the above for expressions that involve operations over not just scalars but also vectors, matrices, or any other tensor.</p>\n\n<p>Besides symbolic differentiation, Theano's symbolic approach offers other significant benefits:</p>\n\n<ul>\n<li>Theano can compile symbolic mathematical expressions into executable code. 
It uses various techniques that can yield executable code that can run faster than plain Python code.</li>\n<li>Theano programs can often be switched between running on CPU and GPU with no code changes whatsoever.</li>\n<li>When running on CPU, Theano makes full use of numpy BLAS and OpenMP facilities, when available, to parallelize the execution over multiple CPU cores.</li>\n<li>When running on GPU, Theano makes full use of of the many cores on modern GPUs to parallelize the most costly operations (most notably matrix multiplication).</li>\n<li>Theano's compiler is an optimizing compiler in that it can change the expression (also known as the computation graph) using various transforms that maintain the semantics of the expression (it still computes the same results given the same inputs) but achieving various performance and, crucially, <a href=\"http://mathworld.wolfram.com/NumericalStability.html\" rel=\"nofollow\">numerical stability</a> gains.</li>\n</ul>\n\n<p>Plain Python does not offer any of the above and using numpy alone only offers some of the parallelization features (via BLAS and OpenMP).</p>\n\n<p>More on this topic <a href=\"http://deeplearning.net/software/theano/introduction.html\" rel=\"nofollow\">in the Theano documentation</a>.</p>\n\n<p>P.S. this is an expansion of @eickenberg brief comment to the question.</p>\n"
}
] |
30,365,455 | 1 |
<python><theano>
|
2015-05-21T05:43:51.617
| 30,366,957 | 4,013,571 |
Python - theano.scan() - return function values without feeding back into the loop
|
<p>Is it possible to return values calculated within the scan function without feeding them back into the scan function?</p>
<p>e.g.</p>
<pre></pre>
<p>which gives</p>
<pre></pre>
<p>However, I would like to output both of the values without feeding the latter value back into the scan function:</p>
<pre></pre>
<p>to give something like:</p>
<pre></pre>
<p><em>(I would just like to note that this problem is trivial but I would like to use the idea for debugging scan functions and checking output values)</em></p>
|
[
{
"AnswerId": "30366957",
"CreationDate": "2015-05-21T07:15:38.913",
"ParentId": null,
"OwnerUserId": "2929337",
"Title": null,
"Body": "<p>You need to specify the <code>outputs_info</code> to be <code>None</code> for the output which you don't want to feed back into <code>f</code>. For further information, see <a href=\"http://deeplearning.net/software/theano/library/scan.html\" rel=\"nofollow\">the <code>scan</code> documentation.</a> Below is an example which should do what you want.</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nimport numpy as np\n\ntheano.config.exception_verbosity='high'\ntheano.config.optimizer='None'\n\ndef f(seq_v, prev_v):\n return seq_v*prev_v, seq_v+1\n\na = T.vector('a')\n\nini = T.constant(1, dtype=theano.config.floatX)\n\nresult, updates = theano.scan(fn=f,\n outputs_info=[ini,None],\n sequences=[a])\n\nfn = theano.function(inputs=[a], outputs=result)\n\nA = np.arange(1,5, dtype=T.config.floatX)\nout = fn(A)\n\nprint('Values:\\nf:\\t{}'.format(out))\n</code></pre>\n\n<p>Output:</p>\n\n<pre><code>Values:\nf: [array([ 1., 2., 6., 24.], dtype=float32), array([ 2., 3., 4., 5.], dtype=float32)]\n</code></pre>\n"
}
] |
30,367,443 | 1 |
<python><canopy><theano><scikits>
|
2015-05-21T07:39:06.387
| null | 2,991,243 |
Theano package of Python (Canopy) returns an error: cannot import name gof
|
<p>This is the error returned when using theano in Canopy:</p>
<pre></pre>
<p>How can I solve this problem? I installed it using the package manager and also installed the developer version, on Windows 8.1.</p>
|
[
{
"AnswerId": "30367924",
"CreationDate": "2015-05-21T08:03:40.933",
"ParentId": null,
"OwnerUserId": "531222",
"Title": null,
"Body": "<p>You can read the history of this bug here:</p>\n\n<p><a href=\"https://github.com/Theano/Theano/issues/2406\" rel=\"nofollow\">https://github.com/Theano/Theano/issues/2406</a></p>\n\n<p>It says</p>\n\n<blockquote>\n <p>The problem appears to be that the compilation is trying to link with\n the static library libpython2.7.a, instead of the dynamic version of\n the library (which should be something like libpython2.7.so).</p>\n</blockquote>\n\n<p>and later</p>\n\n<blockquote>\n <p>Hey,thanks for your reply, I did remake python and do altinstall\n ,solved this problem.</p>\n</blockquote>\n\n<p>You should rebuild Python and it'll be fine.</p>\n\n<p><strong>UPDATE</strong></p>\n\n<p>I think you should get rid of Canopy and install Python manually. Canopy seems to be a ready-built distribution of Python and some scientific packages and I guess Theano will not work in it at this moment.</p>\n\n<p>Google for instructions how to install python on windows (I never used windows systems), for example this one: <a href=\"http://www.anthonydebarros.com/2014/02/16/setting-up-python-in-windows-8-1/\" rel=\"nofollow\">http://www.anthonydebarros.com/2014/02/16/setting-up-python-in-windows-8-1/</a></p>\n\n<p>And then install all required packages via <code>pip</code>.</p>\n"
}
] |
30,369,287 | 1 |
<python><theano>
|
2015-05-21T09:09:00.943
| 30,423,839 | 2,213,825 |
Theano is missing signal.conv module
|
<p>My theano doesn't have the <code>signal.conv</code> module</p>
<pre></pre>
<p>My theano version is '0.7.0'. I tried to upgrade and it tells me that I am already up-to-date. How can I get the conv module?</p>
<p>PS: I even updated to the dev version and still no <code>conv</code> module!!</p>
<p>If I print the module path I get the path to the file. In the same folder I have the files conv.py and downsample.py. I can successfully import <code>downsample</code> but not <code>conv</code>.</p>
<p>---- Installing on a Virtualenv ----</p>
<p>I tried to reproduce the error on a virtualenv:</p>
<pre></pre>
<p>I am on Ubuntu 14.04 64 bits, Python 2.7.6</p>
|
[
{
"AnswerId": "30423839",
"CreationDate": "2015-05-24T13:06:12.330",
"ParentId": null,
"OwnerUserId": "1423333",
"Title": null,
"Body": "<p>as written in the comment above, i think this is caused by <code>tensor</code> not implicitly importing <code>signal</code> or even <code>signal.conv</code>, hence you have to do the import yourself to use it:</p>\n\n<pre><code>In [1]: import theano\n\nIn [2]: theano.tensor\nOut[2]: <module 'theano.tensor' from '/usr/local/lib/python2.7/site-packages/theano/tensor/__init__.pyc'>\n</code></pre>\n\n<p>As you can see importing <code>theano</code> also gets us the <code>theano.tensor</code> module, but as <code>tensor.__init__.py</code> doesn't import <code>signal</code> for example, the following does not work:</p>\n\n<pre><code>In [3]: theano.tensor.signal\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n<ipython-input-3-53b46c46cb25> in <module>()\n----> 1 theano.tensor.signal\n\nAttributeError: 'module' object has no attribute 'signal'\n\nIn [4]: theano.tensor.signal.conv\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n<ipython-input-4-b2a3482abaed> in <module>()\n----> 1 theano.tensor.signal.conv\n\nAttributeError: 'module' object has no attribute 'signal'\n</code></pre>\n\n<p>After importing the submodule it does:</p>\n\n<pre><code>In [5]: import theano.tensor.signal.conv\n\nIn [6]: theano.tensor.signal\nOut[6]: <module 'theano.tensor.signal' from '/usr/local/lib/python2.7/site-packages/theano/tensor/signal/__init__.pyc'>\n\nIn [7]: theano.tensor.signal.conv\nOut[7]: <module 'theano.tensor.signal.conv' from '/usr/local/lib/python2.7/site-packages/theano/tensor/signal/conv.pyc'>\n</code></pre>\n"
}
] |