Column            Dtype            Min    Max
QuestionId        int64            388k   59.1M
AnswerCount       int64            0      47
Tags              string (length)  7      102
CreationDate      string (length)  23     23
AcceptedAnswerId  float64          388k   59.1M
OwnerUserId       float64          184    12.5M
Title             string (length)  15     150
Body              string (length)  12     29.3k
answers           list (length)    0      47
32,132,388
1
<arrays><python-2.7><numpy><slice><theano>
2015-08-21T04:32:22.717
32,134,610
5,106,953
Looping through slices of Theano tensor
<p>I have two 2D Theano tensors; suppose, for the sake of example, that both have shape (1, 50). Now, to compute their mean squared error, I simply run:</p> <pre></pre> <p>However, what I wanted to do was construct a new tensor that consists of their mean squared error in chunks of 10. In other words, since I'm more familiar with NumPy, what I had in mind was to create the following tensor M in Theano:</p> <pre></pre> <p>Now, since Theano doesn't have for loops, but instead uses scan (of which map is a special case), I thought I would try the following:</p> <pre></pre> <p>However, this does not seem to work, as I get the error:</p> <blockquote> <p>only integers, slices (<code>:</code>), ellipsis (<code>...</code>), numpy.newaxis (<code>None</code>) and integer or boolean arrays are valid indices</p> </blockquote> <p>Is there a way to loop through the slices using theano.scan (or map)? Thanks in advance, as I'm new to Theano!</p>
[ { "AnswerId": "32134610", "CreationDate": "2015-08-21T07:24:42.720", "ParentId": null, "OwnerUserId": "2902280", "Title": null, "Body": "<p>Similar to what can be done in <code>numpy</code>, a solution would be to reshape your (1, 50) tensor to a (1, 10, 5) tensor (or even a (10, 5) tensor), and then to compute the mean along the second axis.</p>\n\n<p>To illustrate this with numpy, suppose I want to compute means by slices of 2</p>\n\n<pre><code>x = np.array([0, 2, 0, 4, 0, 6])\nx = x.reshape([3, 2])\nnp.mean(x, axis=1)\n</code></pre>\n\n<p>outputs</p>\n\n<pre><code>array([ 1., 2., 3.])\n</code></pre>\n" } ]
32,141,732
0
<matlab><regression><caffe><matcaffe>
2015-08-21T13:36:03.877
null
3,698,878
Caffe regression wrong no. of outputs in final layer
<p>I am doing regression using a fine-tuned net, along the lines of the caffe flickr-style example. I have changed the num outputs in the last layer to 1, but when testing it on an image using the matlab wrapper function, it returns 10 outputs corresponding to a single image when it should return only 1.</p> <p>This is my final layer:</p> <pre></pre> <p>As one can see, the num_output is 1; the problem is that when I test the fine-tuned net on an image using the matlab wrapper, it gives 10 output labels instead of 1. Can anyone help me understand what is happening here? Thanks in advance.</p>
[]
32,142,970
2
<machine-learning><neural-network><theano><deep-learning><shogun>
2015-08-21T14:31:16.440
32,161,588
1,377,127
How does the SHOGUN Toolbox convolutional neural network compare to Caffe and Theano?
<p>I'm interested in implementing a convolutional neural network in my C++ program where I'm tracking tagged insects (I'm also using OpenCV). I see people mention Caffe, Torch and Theano a lot but I haven't heard the CNN in the SHOGUN Toolbox discussed. Does this CNN work well and would anyone recommend it if you're working in C++? I've used Theano via scikit-neuralnetwork in Python to test out some images and that worked really well, except unfortunately Theano is Python-only.</p>
[ { "AnswerId": "32161588", "CreationDate": "2015-08-22T23:00:39.263", "ParentId": null, "OwnerUserId": "5226958", "Title": null, "Body": "<p>The difference lies in the speed. cnn is computationally expensive, so a GPU implementation is at least 10 times faster than CPU. caffe and theano provide seamless integration of calling either CPU or GPU, which may not be easy for you to implement without much GPU programming experience.</p>\n\n<p>Other factors may exist including a unified interface for multiplayer, stochastic gradient descent, and etc. but I think speed issue is most crucial among all these factors.</p>\n" }, { "AnswerId": "34257516", "CreationDate": "2015-12-13T23:07:24.527", "ParentId": null, "OwnerUserId": "5675011", "Title": null, "Body": "<p>Shogun also has GPU support of some of the operations used in the NN code. This is work in progress though. At this point in time, other libraries might be faster. We mostly built these networks in there in order to be able to easily compare them to the other algorithms in the toolbox.</p>\n\n<p>The advantage, however, is that you can use it from a large number of languages (while internally, C++ code is executed) -- useful if you don't want to use python.</p>\n\n<p>Here are some IPython notebooks that you could use as a basis to compare:</p>\n\n<ul>\n<li><a href=\"http://www.shogun-toolbox.org/static/notebook/current/autoencoders.html\" rel=\"nofollow\">autoencoders for denoising and classification</a></li>\n<li><a href=\"http://www.shogun-toolbox.org/static/notebook/current/neuralnets_digits.html\" rel=\"nofollow\">(convolution) networks for digit classification</a></li>\n</ul>\n\n<p>We appreciate any experience to be shared. Shogun is in constant development and especially the NNs attract a lot of people to work on them, so expect things to change. If you are interested in helping GPU-fying Shogun, please let us know.</p>\n" } ]
32,148,440
1
<centos><makefile><protocol-buffers><caffe>
2015-08-21T20:02:19.540
null
1,556,092
how to address "make: protoc: Command not found"
<p>I'm installing Caffe on a CentOS system over which I do not have administrative privileges. When I attempt to compile, I encounter the following message:</p> <pre></pre> <p>What I have done so far is the following:</p> <pre></pre> <p>How should I address this error?</p>
[ { "AnswerId": "32150726", "CreationDate": "2015-08-21T23:34:12.630", "ParentId": null, "OwnerUserId": "4518274", "Title": null, "Body": "<p>Since you lack administrative privileges you can either</p>\n\n<ul>\n<li>ask your admin to install the <a href=\"http://rpm.pbone.net/index.php3/stat/4/idpl/23552166/dir/centos_6/com/protobuf-2.5.0-16.1.x86_64.rpm.html\" rel=\"nofollow\">protobuf</a> and <a href=\"http://rpm.pbone.net/index.php3/stat/4/idpl/23552167/dir/centos_6/com/protobuf-compiler-2.5.0-16.1.x86_64.rpm.html\" rel=\"nofollow\">protobuf-compiler</a> packages.</li>\n<li>compile it yourself and install the binaries in your <code>~/bin</code> directory.</li>\n</ul>\n\n<p>For the latter, this page (<em><a href=\"http://tech.yipp.ca/linux/install-google-protocol-buffers-linux/\" rel=\"nofollow\">Install google protocol buffers (protoc, protobuf) on CentOS 6 (linux)</a></em>) hints that it may be as simple as using the <code>--prefix=$HOME</code> option on the configure script.</p>\n" } ]
32,149,975
1
<python><numpy><fft><theano><ifft>
2015-08-21T22:10:30.477
32,165,217
5,106,953
Inverse FFT in Theano
<p>I know that <a href="https://github.com/Theano/Theano/blob/master/theano/tensor/fourier.py" rel="nofollow">Theano's FFT</a> is essentially a wrapper around NumPy's. However, I was wondering if the inverse FFT was implemented? Namely, is there something like an ifft, equivalent to numpy.fft.ifft?</p> <p>I noticed that <a href="https://github.com/Theano/Theano/blob/master/theano/sandbox/fourier.py" rel="nofollow">this</a> has it, but I'm not sure how complete or reliable it is for doing what I want. Perhaps someone with a better understanding of Theano can weigh in here.</p> <p>Also, if I were to use this sandbox Fourier module, how would I go about doing it? Simply calling its ifft on a 1D tensor returns the error:</p> <pre></pre> <p>Is there a way to fix this?</p>
[ { "AnswerId": "32165217", "CreationDate": "2015-08-23T09:38:46.893", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>I can't comment on the robustness of the code but Theano as a whole is still in development (version 0.7) and this code is in the <code>sandbox</code> which should, I believe, be considered even less robust than the rest of Theano.</p>\n\n<p>It's clear that this FFT operation is incomplete because it is currently incapable of computing gradients (note the TODO comments). If you need gradients then sorry, this operation isn't going to help (maybe you could finish it off and submit the enhancement?)</p>\n\n<p>This implementation is just a shim around numpy's implementation so if numpy's implementation is sufficiently complete and reliable for doing what you want then this Theano shim probably is as well.</p>\n\n<p>Note that because this just wraps numpy, it won't run on the GPU and if you're mixing this operation with other GPU enabled operations and running on a GPU then you're going to have a slowdown due to data being copied backwards and forwards between main and GPU memories.</p>\n\n<p>To use this operation you'd do this:</p>\n\n<pre><code>import theano\nimport theano.sandbox.fourier as tsf\n\ntsf.ifft(frames=..., n=..., axis=...)\n</code></pre>\n" } ]
32,151,251
3
<python><theano><lasagne><nolearn>
2015-08-22T01:01:15.150
null
865,662
How to define a cost function in nolearn, lasagne?
<p>I'm building a neural network in nolearn, a Theano-based library that uses lasagne.</p> <p>I don't understand how to define my own cost function.</p> <p>The output layer is only 3 neurons, and I want the network to be mostly sure when it gives 1 or 2, but otherwise, if it isn't really sure of 1 or 2, to simply give back 0.</p> <p>So, I came up with a cost function (it will need tuning) where the cost is double for 1 and 2 compared to 0, but I can't understand how to tell this to the network.</p> <pre></pre> <p>This is the code for the update, but how do I tell SGD to use my cost function instead of its own?</p> <p><strong>EDIT:</strong> The full net code is:</p> <pre></pre> <p><strong>EDIT:</strong> Error when using the custom objective:</p> <pre></pre>
[ { "AnswerId": "38563992", "CreationDate": "2016-07-25T09:22:00.583", "ParentId": null, "OwnerUserId": "6632332", "Title": null, "Body": "<p>I used a custom loss function in a classification task and thought i'd share that with you too. I basically wanted different emphasis on training data depending on the label.</p>\n\n<pre><code>import lasagne\nimport theano.tensor as T\nimport theano\n\ndef weighted_crossentropy(predictions, targets):\n\n weights_per_label = theano.shared(lasagne.utils.floatX([0.2, 0.4, 0.4]))\n weights = weights_per_label[targets] #returns a targets-shaped weight matrix\n loss = lasagne.objectives.aggregate(T.nnet.categorical_crossentropy(predictions, targets), weights=weights)\n return loss\n\nnet = NeuralNet(\n # layers and parameters\n objective_loss_function=weighted_crossentropy,\n # ...\n )\n</code></pre>\n\n<p><a href=\"https://github.com/dnouri/nolearn/issues/109\" rel=\"nofollow\" title=\"Weighted loss\">This</a> is where I found how to implement it.</p>\n" }, { "AnswerId": "32198644", "CreationDate": "2015-08-25T08:03:29.400", "ParentId": null, "OwnerUserId": "2902280", "Title": null, "Body": "<p>When you instantiate your neural network, you can pass a custom loss function that you've defined previously:</p>\n\n<pre><code>import theano.tensor as T\nimport numpy as np\nfrom nolearn.lasagne import NeuralNet\n# I'm skipping other inputs for the sake of concision\n\ndef multilabel_objective(predictions, targets):\n epsilon = np.float32(1.0e-6)\n one = np.float32(1.0)\n pred = T.clip(predictions, epsilon, one - epsilon)\n return -T.sum(targets * T.log(pred) + (one - targets) * T.log(one - pred), axis=1)\n\nnet = NeuralNet(\n # your other parameters here (layers, update, max_epochs...)\n # here are the one you're interested in:\n objective_loss_function=multilabel_objective,\n custom_score=(\"validation score\", lambda x, y: np.mean(np.abs(x - y)))\n )\n</code></pre>\n\n<p>As you can see, it's also possible to define a custom score (using the keyword <code>custom_score</code>)</p>\n" }, { "AnswerId": "32163561", "CreationDate": "2015-08-23T05:40:31.227", "ParentId": null, "OwnerUserId": "650654", "Title": null, "Body": "<p>See the following example (taken from <a href=\"https://github.com/Lasagne/Lasagne/blob/master/lasagne/updates.py\" rel=\"nofollow\">here</a>) that specifies its own loss function:</p>\n\n<pre><code>import lasagne\nimport theano.tensor as T\nimport theano\nfrom lasagne.nonlinearities import softmax\nfrom lasagne.layers import InputLayer, DenseLayer, get_output\nfrom lasagne.updates import sgd, apply_momentum\nl_in = InputLayer((100, 20))\nl1 = DenseLayer(l_in, num_units=3, nonlinearity=softmax)\nx = T.matrix('x') # shp: num_batch x num_features\ny = T.ivector('y') # shp: num_batch\nl_out = get_output(l1, x)\nparams = lasagne.layers.get_all_params(l1)\nloss = T.mean(T.nnet.categorical_crossentropy(l_out, y))\nupdates_sgd = sgd(loss, params, learning_rate=0.0001)\nupdates = apply_momentum(updates_sgd, params, momentum=0.9)\ntrain_function = theano.function([x, y], updates=updates)\n</code></pre>\n\n<p>Coincidentally, this code also has three units in the output layer.</p>\n" } ]
32,158,870
1
<python><theano>
2015-08-22T17:26:45.363
32,165,112
4,305,160
Theano shared updating last element in python
<p>I have a shared variable persistent_vis_chain which is being updated by a theano function, where the update comes from a theano.scan. But that's not the problem, just back story.</p> <p>My shared variable looks like D = [image1, ... , imageN], where each image is [x1,x2,...,x784].</p> <p>What I want to do is take the average of all the images and put it into the last image, imageN. That is, I want to sum the values across all images except the last one, which will result in [s1,s2,...,s784]; then I want to set imageN = [s1/len(D), s2/len(D), ..., s784/len(D)].</p> <p>So my problem is that I do not know how to do this with theano.shared, and maybe with my understanding of theano functions and doing this computation with symbolic variables. Any help would be greatly appreciated.</p>
[ { "AnswerId": "32165112", "CreationDate": "2015-08-23T09:27:13.570", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>If you have <code>N</code> images, each of shape <code>28x28=784</code> then, presumably, your shared variable has shape <code>(N,28,28)</code> or <code>(N,784)</code>? This method should work with either shape.</p>\n\n<p>Given <code>D</code> is your shared variable containing your image data. If you want to get the average image then <code>D.mean(keepdims=True)</code> will give it to you symbolically.</p>\n\n<p>It's unclear if you want to change the final image to equal the mean image (sounds like a strange thing to do), or if you want to add a further <code>N+1</code>'th image to the shared variable. For the former you could do something like this:</p>\n\n<pre><code>D = theano.shared(load_D_data())\nD_update_expression = do_something_with_scan_to_get_D_update_expression(D)\nupdates = [(D, T.concatenate(D_update_expression[:-1],\n D_update_expression.mean(keepdims=True)))]\nf = theano.function(..., updates=updates)\n</code></pre>\n\n<p>If you want to do the latter (add an additional image), change the <code>updates</code> line as follows:</p>\n\n<pre><code>updates = [(D, T.concatenate(D_update_expression,\n D_update_expression.mean(keepdims=True)))]\n</code></pre>\n\n<p>Note that this code is intended as a guide. It may not work as it stands (e.g. you may need to mess with the <code>axis=</code> parameter in the <code>T.concatenate</code> command).</p>\n\n<p>The point is that you need to construct a symbolic expression explaining what the new value for D looks like. You want it to be a combination of the updates from scan plus this additional average thing. <code>T.concatenate</code> allows you to combine those two parts together.</p>\n" } ]
32,159,129
1
<python><python-2.7><keras>
2015-08-22T17:55:45.833
null
5,106,953
Error importing Keras layer
<p>I am having issues importing a new layer (let's call it "newlayer" for the sake of simplicity) in Keras.</p> <p>I recently upgraded my Keras version using:</p> <pre></pre> <p>because my older install of Keras did not have newlayer. The interesting thing I notice, though, is that when I upgrade, Keras gets installed in the ./Python/2.7/site-packages directory. So when I cd to that directory and import newlayer, it works fine.</p> <p>However, when I am in my home directory and I import newlayer, it does not work (I get "ImportError: cannot import name newlayer").</p> <p>Is there a reason for this? Maybe I installed Keras to my home directory a while back and it is using that version? I tried searching my home directory for a Keras installation and it's not installed there at all. More importantly, is there a way to fix this instead of having to cd into ./Python/2.7/site-packages each time?</p>
[ { "AnswerId": "35438432", "CreationDate": "2016-02-16T16:55:01.083", "ParentId": null, "OwnerUserId": "3990607", "Title": null, "Body": "<p>Make sure that pip is setup properly for the version of python which you are using. </p>\n\n<p>You can do for example</p>\n\n<pre><code>curl -O https://bootstrap.pypa.io/get-pip.py\npython2.7 get-pip.py\n</code></pre>\n\n<p>to re-install pip.</p>\n\n<p>and then:</p>\n\n<pre><code>pip-2.7 install --upgrade git+git://github.com/fchollet/keras.git\n</code></pre>\n" } ]
32,169,085
1
<matlab><image-processing><caffe><nvidia-digits>
2015-08-23T16:50:59.153
32,218,651
1,661,607
Input must have 4 axes, corresponding to (num, channels, height, width)
<p>Not sure if this is an issue or not - but I have been searching for days and cannot seem to figure it out!</p> <p>Any image I try to classify individually using DIGITS seems to run okay. However, when using the "classify many images" button, the network crashes with the error in the title - a bug? I don't even know what the hell it is.</p> <p>I'm entirely new to caffe and DIGITS, and as I said I've spent days googling this problem and can't seem to figure it out. What is the 5th dimension on the image, and if I do actually have 5D images, how do I convert them to 4D?</p>
[ { "AnswerId": "32218651", "CreationDate": "2015-08-26T05:31:33.063", "ParentId": null, "OwnerUserId": "1661607", "Title": null, "Body": "<p>Turns out it was a bug - Posted on git and they flagged it officially and patched - to anyone else experiencing this problem, update your DIGITS files</p>\n" } ]
32,171,043
3
<python><gcc><theano>
2015-08-23T20:12:35.580
32,204,900
2,430,739
Error running theano: LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)
<p>I followed the directions on <a href="https://www.kaggle.com/c/otto-group-product-classification-challenge/forums/t/13973/a-few-tips-to-install-theano-on-windows-64-bits/87880" rel="nofollow noreferrer">https://www.kaggle.com/c/otto-group-product-classification-challenge/forums/t/13973/a-few-tips-to-install-theano-on-windows-64-bits/87880</a> (with OpenBLAS) to install Theano with Python 3.4, on 64-bit Windows 7.</p> <p>Theano seemed to install without error, but when I try to run a test program (or just "import theano" in python) I get an error the core of which seems to be:</p> <pre></pre> <p>How do I "configure" gcc/glibc correctly?</p> <p>I looked at several other questions on this error but haven't found a solution.</p> <ul> <li><a href="https://stackoverflow.com/questions/648482/a-trivial-python-swig-error-question">A trivial Python SWIG error question</a></li> <li><a href="https://stackoverflow.com/questions/26879158/mingw-compiler-for-pip-after-cannot-find-vcvarsall-bat-error-still-does-not-w/27220505#27220505">MinGW compiler for pip after &quot;cannot find vcvarsall.bat&quot; error, still does not work</a></li> </ul>
[ { "AnswerId": "32204900", "CreationDate": "2015-08-25T13:08:53.150", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>This error message is strongly indicative of using Theano with Cygwin. The solution is to use MinGW instead. If you have both installed then make sure MinGW appears before Cygwin in the <code>PATH</code> environment variable.</p>\n" }, { "AnswerId": "43254960", "CreationDate": "2017-04-06T12:15:31.727", "ParentId": null, "OwnerUserId": "3257826", "Title": null, "Body": "<p><code>conda install theano</code> is all you need to do now.</p>\n" }, { "AnswerId": "45971053", "CreationDate": "2017-08-30T23:18:19.533", "ParentId": null, "OwnerUserId": "5057446", "Title": null, "Body": "<p>Check if gcc is installed first.If not make sure you install it.</p>\n\n<p>If gcc already exists abd still facing the issue, make sure you are using the right bits for Theano and Python, like 64-bit in your case.\nIn my case I installed 32-bit anaconda python on 64-bit OS which caused the issue. re-installing the right version fixed it. </p>\n" } ]
32,171,378
1
<c++><caffe>
2015-08-23T20:47:33.480
32,171,424
2,439,854
error while loading shared libraries: libcaffe.so
<p>I am trying to write a simple C++ app which uses Caffe.</p> <p>This is part of my makefile:</p> <pre></pre> <p>The program compiles successfully, but when I try to run the result I get the following error:</p> <pre></pre> <p>But the file is clearly at the location ../Caffe/caffe/build/lib, which I have included. Can anyone help me out here?</p>
[ { "AnswerId": "32171424", "CreationDate": "2015-08-23T20:53:20.737", "ParentId": null, "OwnerUserId": "200291", "Title": null, "Body": "<p>When you link, it includes a little note in the executable to the dynamic linker that “hey, I need <code>libcaffe.so</code>!” but it doesn’t say where to find it. When you run the program, you may need to give the dynamic linker some extra information, saying “hey, when you’re looking for libraries, check here too!”, and you can do by setting the <code>LD_LIBRARY_PATH</code> environment variable to the directory containing <code>libcaffe.so</code> before running your program.</p>\n" } ]
32,171,454
1
<python><image><caffe>
2015-08-23T20:56:03.700
32,171,560
2,539,274
Caffe, how to run classify.py for a set of images
<p>I installed Caffe on Linux successfully. Then I failed to make it work with Matlab, so I installed it with Python following the tutorial of <a href="http://radar.oreilly.com/2014/07/how-to-build-and-run-your-first-deep-learning-network.html" rel="nofollow">Pete Warden</a>. However, I had never used Python before; I just ran the command and it worked.</p> <p>My question is how can I test a set of images rather than a single image? I tried to read images from the test directory as follows</p> <pre></pre> <p>but each time it returns</p> <blockquote> <p>Error; Syntax incorrect</p> </blockquote> <p>I just work intuitively as in Matlab. Do I need to compile before using it? Is the passing of arguments correct?</p> <p>Thank you in advance.</p>
[ { "AnswerId": "32171560", "CreationDate": "2015-08-23T21:08:55.940", "ParentId": null, "OwnerUserId": "5113071", "Title": null, "Body": "<p>well, this works for files in a dir</p>\n\n<pre><code>mypath = './'\nfiles = [ f for f in listdir(mypath) if isfile(join(mypath,f)) ]\nfor f in files:\n print join(mypath,f)\n</code></pre>\n\n<p>so perhaps you should modify yours to something like</p>\n\n<pre><code>import os\nfrom os.path import isfile, join\n\nmypath = './example/images/'\nfiles = [ f for f in listdir(mypath) if isfile(join(mypath,f)) ]\nfor f in files:\n cmd = \"python python/classify.py --print_results %s foo\" % join(mypath,f)\n os.system(cmd)\n</code></pre>\n" } ]
32,177,764
2
<machine-learning><neural-network><deep-learning><caffe><gradient-descent>
2015-08-24T08:33:15.017
32,178,158
1,714,410
What is `weight_decay` meta parameter in Caffe?
<p>Looking at an example <a href="https://github.com/BVLC/caffe/blob/tutorial/examples/cifar10/cifar10_full_solver.prototxt#L15" rel="noreferrer">solver prototxt</a>, posted on BVLC/caffe git, there is a training meta parameter</p> <pre></pre> <p>What does this meta parameter mean? And what value should I assign to it?</p>
[ { "AnswerId": "32178158", "CreationDate": "2015-08-24T08:55:19.120", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>The <code>weight_decay</code> meta parameter govern the regularization term of the neural net.</p>\n\n<p>During training a regularization term is added to the network's loss to compute the backprop gradient. The <code>weight_decay</code> value determines how dominant this regularization term will be in the gradient computation. </p>\n\n<p>As a rule of thumb, the more training examples you have, the weaker this term should be. The more parameters you have (i.e., deeper net, larger filters, larger InnerProduct layers etc.) the higher this term should be.</p>\n\n<p>Caffe also allows you to choose between <code>L2</code> regularization (default) and <code>L1</code> regularization, by setting</p>\n\n<pre><code>regularization_type: \"L1\"\n</code></pre>\n\n<p>However, since in most cases weights are small numbers (i.e., <code>-1&lt;w&lt;1</code>), the <code>L2</code> norm of the weights is significantly smaller than their <code>L1</code> norm. Thus, if you choose to use <code>regularization_type: \"L1\"</code> you might need to tune <code>weight_decay</code> to a significantly smaller value.</p>\n\n<p>While learning rate may (and usually does) change during training, the regularization weight is fixed throughout.</p>\n" }, { "AnswerId": "32178151", "CreationDate": "2015-08-24T08:54:54.623", "ParentId": null, "OwnerUserId": "987599", "Title": null, "Body": "<p>Weight decay is a regularization term that penalizes big weights.\nWhen the weight decay coefficient is big the penalty for big weights is also big, when it is small weights can freely grow.</p>\n\n<p>Look at this answer (not specific to caffe) for a better explanation:\n<a href=\"https://stats.stackexchange.com/questions/29130/difference-between-neural-net-weight-decay-and-learning-rate\">Difference between neural net <em>\"weight decay\"</em> and <em>\"learning rate\"</em></a>.</p>\n" } ]
32,179,981
1
<predict><deep-learning><mse><keras>
2015-08-24T10:24:58.630
null
5,142,261
MSE loss is always 0 when using Keras for topic prediction
<p>My input is a 200-dim vector, generated by taking the mean of the word2vec vectors of all words in an article; my output is a 50-dim vector, generated from the LDA results for an article. I want to use MSE as the loss function, but the value of the loss is always 0. My code is as follows:</p> <pre></pre> <p>The screen output is as follows: <a href="https://i.stack.imgur.com/euVdr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/euVdr.png" alt="enter image description here"></a></p> <p>Can anyone tell me why? Thanks!</p>
[ { "AnswerId": "32272087", "CreationDate": "2015-08-28T13:21:38.377", "ParentId": null, "OwnerUserId": "3403018", "Title": null, "Body": "<p>First, is your output a one-hot vector of predicted classes? IE: class one is [1, 0, 0, ...] and class two is [0, 1, 0, 0, ...].</p>\n\n<p>If so, then using softmax activation at the output layer is acceptable and you are doing a classification problem. If you are doing a classification problem (one-hot output) you cannot use MSE as the loss, use categorical cross-entropy. </p>\n\n<p>Softmax scales the output so that the number given is a predicted probability of a certain class. Wikipedia here: <a href=\"https://en.wikipedia.org/wiki/Softmax_function\" rel=\"noreferrer\">https://en.wikipedia.org/wiki/Softmax_function</a></p>\n\n<p>If you are expecting the output vector to be real numbers then you need to use linear activation on your output neurons.</p>\n" } ]
32,185,649
1
<python><executable><theano>
2015-08-24T15:06:13.623
32,334,133
4,305,160
How do I compile a theano network into a self-contained executable?
<p>We would like to enter a competition where we would need to submit our network in the form of an executable. In one of the subcompetitions, the network will be trained by the judges, e.g.</p> <p>CASE 1:</p> <pre></pre> <p>CASE 2: and in another, we would submit a trained network that will just receive an input file, e.g.</p> <pre></pre> <p>How do I compile our theano-built network (whether it's an RBM or other variation) into something that can be packaged in this way without requiring the people on the other end to actually install theano?</p>
[ { "AnswerId": "32334133", "CreationDate": "2015-09-01T14:23:35.270", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Can the judge install a VM or a Docker image with Theano and your stuff installed? This would be easier in the short term.</p>\n\n<p>We don't have exactly what you ask, but there is work to generate a shared library for a Theano function. This is not yet merged in Theano itself. I started this and Alberto Orlandi is continuing this. He have a working version on a branch. See this branch:</p>\n\n<p><a href=\"https://github.com/AlOa/Theano/commits/embed_theano\" rel=\"nofollow\">https://github.com/AlOa/Theano/commits/embed_theano</a></p>\n\n<p>Also this mailing list discussion:</p>\n\n<p><a href=\"https://groups.google.com/d/topic/theano-users/JtxIequsHNg/discussion\" rel=\"nofollow\">https://groups.google.com/d/topic/theano-users/JtxIequsHNg/discussion</a></p>\n\n<p>He is working to make a PR for this. But the current version do not do exactly what you want and need some work from you for the shared variables and to make this work.</p>\n\n<p>What this branch do is generate a C++ shared library that depend on Python and NumPy, but not Theano.</p>\n" } ]
32,188,392
1
<lua><neural-network><torch>
2015-08-24T17:41:09.487
32,231,558
5,260,975
Expecting a contiguous tensor error with nn.Sum
<p>I have a 2x16x3x10x10 tensor that I feed into my network. My network has two parts that work in parallel. The first part takes the 16x3x10x10 matrix and computes the sum over the last two dimensions, returning a 16x3 tensor. The second part is a convolutional neural network that produces a 16x160 tensor. Whenever I try to run this model, I get the following error:</p> <pre></pre> <p>Here is the relevant part of the model:</p> <pre></pre> <p>The code works when the input tensor is 2x1x3x10x10, but not when the tensor is 2x16x3x10x10.</p> <p>Edit: I only just realized that this happens when I do model:backward and not model:forward. Here is the relevant code:</p> <pre></pre> <p>x is a 2x16x3x10x10 tensor and dE_dy is 16x2.</p>
[ { "AnswerId": "32231558", "CreationDate": "2015-08-26T16:07:35.697", "ParentId": null, "OwnerUserId": "4850610", "Title": null, "Body": "<p>This is a flaw in <code>torch.nn</code> library. To perform a backward step, <code>nn.Parallel</code> splits <code>gradOutput</code> it receives from higher module into pieces and sends them to its parallel submodules. Splitting are done effectively without copying memory, and thus those pieces are <a href=\"https://stackoverflow.com/a/4059454\">non-contiguous</a> (unless you split on the 1st dimension).</p>\n\n<pre><code>local first_part = nn.Parallel(1,2)\n-- ^\n-- Merging on the 2nd dimension; \n-- Chunks of splitted gradOutput will not be contiguous\n</code></pre>\n\n<p>The problem is that <code>nn.Sum</code> cannot work with non-contiguous <code>gradOutput</code>. I haven't got a better idea than to make changes to it: </p>\n\n<pre><code>Sum_nc, _ = torch.class('nn.Sum_nc', 'nn.Sum')\nfunction Sum_nc:updateGradInput(input, gradOutput)\n local size = input:size()\n size[self.dimension] = 1\n -- modified code:\n if gradOutput:isContiguous() then\n gradOutput = gradOutput:view(size) -- doesn't work with non-contiguous tensors\n else\n gradOutput = gradOutput:resize(size) -- slower because of memory reallocation and changes gradOutput\n -- gradOutput = gradOutput:clone():resize(size) -- doesn't change gradOutput; safer and even slower\n end\n --\n self.gradInput:resizeAs(input)\n self.gradInput:copy(gradOutput:expandAs(input))\n return self.gradInput\nend \n\n[...]\n\nsums = nn.Sequential()\nsums:add(nn.Sum_nc(3)) -- &lt;- will use torch.view\nsums:add(nn.Sum_nc(3)) -- &lt;- will use torch.resize\n</code></pre>\n" } ]
32,202,249
0
<python><numpy><theano>
2015-08-25T10:59:46.213
null
3,042,790
Different results of cost function in theano and numpy
<p>I'm getting different results when calculating a negative log likelihood of a simple two layer neural net in theano and numpy.</p> <p>This is the numpy code:</p> <pre></pre> <p>where model is a function that generates initial parameters and X is the input array.</p> <pre></pre> <p>I'm getting a result of 1.3819194609246772, which is the correct value for the loss function. However, my Theano code yields a value of 1.3715655944645178.</p> <pre></pre> <p>I'm already getting wrong results when calculating the values in the output layer. I'm not really sure what the problem could be. Does the shared function keep the type of numpy array that is used as the value argument or is that converted to float32? Can anybody tell me what I'm doing wrong in the theano code?</p> <p>EDIT: The problem seems to occur in the hidden layer after applying the ReLU function. Here's the comparison in the results between theano and numpy in each layer:</p> <pre></pre> <p>I got the idea of using the switch() function for the ReLU layer from this post: <a href="https://stackoverflow.com/questions/26497564/theano-hiddenlayer-activation-function">Theano HiddenLayer Activation Function</a> and I don't really see how that function is different from the equivalent numpy code: z_3 = np.maximum(0, z_2)?! Solution to the first problem: T.switch(first_layer > 0,0,first_layer) sets all the values greater than 0 to 0 => it should be T.switch(first_layer &lt; 0,0,first_layer).</p> <p>EDIT2: The gradients that theano calculates significantly differ from the numerical gradients I was given; this is my implementation:</p> <pre></pre> <p>This is an assignment for the Convolutional Neural Networks class that was offered by Stanford earlier this year, and I think it's safe to say that their numerical gradients are probably correct. I could post the code of their numerical implementation, though, if required.</p> <p>Using a relative error the following way:</p> <pre></pre> <p>Calculating the numerical gradients using the eval_numerical_gradient method that was provided by the course gives the following relative errors for the gradients:</p> <pre></pre> <p>These are too large for W1 and W2; the relative error should be less than 1e-8. Can anybody explain this or help in any way?</p>
[]
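A small sketch confirming the corrected condition from the question's edit against the numpy expression it is compared to:

```python
import numpy as np
import theano
import theano.tensor as T

z = T.matrix('z')
relu = T.switch(z < 0, 0, z)  # zero out negatives, keep positives
f = theano.function([z], relu)

z_val = np.array([[-1., 2.], [3., -4.]], dtype=theano.config.floatX)
assert np.allclose(f(z_val), np.maximum(0, z_val))
```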
32,203,687
1
<c++><machine-learning><neural-network><deep-learning><caffe>
2015-08-25T12:11:20.253
null
1,377,127
What does the "TEST" variable do in Caffe example code?
<p>In the C++ <a href="https://github.com/BVLC/caffe/blob/master/examples/cpp_classification/classification.cpp" rel="nofollow">example provided by Caffe</a> they use the variable "TEST" on line 56. I haven't seen it referenced before or after so I was wondering if anyone knew what the variable does?</p>
[ { "AnswerId": "32203950", "CreationDate": "2015-08-25T12:23:44.617", "ParentId": null, "OwnerUserId": "5089383", "Title": null, "Body": "<p>I didn't download the whole project to find the definition of that constant (not variable) but the comments say there are two constants TEST and TRAIN indicating the \"phase\".</p>\n\n<p>They must be defined (and maybe commented further) in one of the many Caffe hpp files.</p>\n" } ]
32,204,866
1
<c++><opencv><machine-learning><deep-learning><caffe>
2015-08-25T13:06:53.727
32,206,293
1,377,127
Converting OpenCV grayscale Mat to Caffe blob
<p>I've been <a href="https://github.com/jyegerlehner/caffe/blob/nat_img_autoenc_parallel/tools/prop-patches.cpp" rel="nofollow">following an example</a> I was referred to on how to convert an OpenCV Mat into a Caffe blob I could make predictions from. From what I understand, the first section scales the image and then initialises the Caffe blob:</p> <pre></pre> <p>Then, the OpenCV Mat "patch" is converted into "input_blob". I've changed this part because I've loaded in my image in grayscale instead of colour.</p> <pre></pre> <p>Finally, I'm not too sure what this section does - if I already have my OpenCV Mat converted to a Caffe blob, why do I need to push back on the "input" vector and pass it to the net? Can't I pass input_blob directly into the net to get my prediction back?</p> <pre></pre>
[ { "AnswerId": "32206293", "CreationDate": "2015-08-25T14:08:52.813", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>You need to <code>push_back</code> your <code>input_blob</code> in order to pass it to <code>net</code> since <code>net</code> is expecting its input as a <em><code>std::vector</code></em> of <code>Blobs</code> (in principle, there may be <code>net</code>s that need more than a single input blob to produce output).<br>\nNote that you are not copying <code>input_blob</code> into the input vector, but rather passing a pointer to it.</p>\n" } ]
32,208,193
1
<opencv><machine-learning><neural-network><deep-learning><caffe>
2015-08-25T15:33:43.590
32,211,796
1,377,127
How to modify Caffe network input for C++ API?
<p>I'm trying to use the MNIST Caffe example via the C++ API, but I'm having a bit of trouble working out how to restructure the network prototxt file I'll deploy after training. I've trained and tested the model with the original file (<a href="https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt" rel="nofollow">lenet_train_test.prototxt</a>), but when I want to deploy it and make predictions like in the <a href="https://github.com/BVLC/caffe/blob/master/examples/cpp_classification/classification.cpp" rel="nofollow">C++ and OpenCV example</a>, I realise I have to modify the input section to make it similar to the <a href="https://github.com/BVLC/caffe/blob/master/models/bvlc_reference_caffenet/deploy.prototxt" rel="nofollow">deploy.prototxt</a> file they have.</p> <p>Can I replace the information in the training and testing layers of the lenet_train_test.prototxt with this section of the deploy.prototxt file?</p> <pre></pre> <p>The images I'll be passing for classification to the network will be grayscale and 24*24 pixels, and I'll also want to scale them as was done with the MNIST dataset, so could I modify the section to this?</p> <pre></pre> <p>I'm not entirely sure where the first "dim: 10" comes from, though.</p>
[ { "AnswerId": "32211796", "CreationDate": "2015-08-25T18:49:25.470", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>In order to \"convert\" you <em>train_val</em> prototxt to a <em>deploy</em> one you remove the input data layers (reading your train/val data) and replacing them with the declaration</p>\n\n<pre><code>name: \"CaffeNet\"\ninput: \"data\"\ninput_shape {\n dim: 10\n dim: 1\n dim: 24\n dim: 24\n}\n</code></pre>\n\n<p>Note that the <em>deploy</em> prototxt does not have two phases for train and test only a single flavor.<br>\nReplacing the input data layer with this declaration basically tells caffe that you are responsible of supplying the data, and the net should allocate space for inputs of this size.</p>\n\n<p>Regarding scale: once you <em>deploy</em> your net, the net has no control over the inputs - it does not read the data for you as the input data layers in the <em>train_val</em> net. Therefore, you'll have to scale the input data yourself <strong>before</strong> feeding it to the network. You can use the <a href=\"https://github.com/BVLC/caffe/blob/master/include/caffe/data_transformer.hpp\" rel=\"nofollow\">DataTransformer</a> class to help you transform your input blobs in the same way they were transformed during training.</p>\n\n<p>Regarding the first <code>dim: 10</code>: every Blob (i.e., data/parameters storage unit) in caffe has 4 dimensions: batch-size, channels, height and width. This parameter actually means the net should allocate space for batches of 10 inputs at a time.<br>\nThe \"magic\" number 10 comes from the way googlenet and other competitors in ILSVRC challenge used to classify images: they classified 10 crops from each image and averaged the outputs to produce better classification results.</p>\n" } ]
32,208,646
1
<python-2.7><python-import><theano><lasagne><nolearn>
2015-08-25T15:56:31.980
null
4,801,125
nolearn 0.5 is not compatible with lasagne 0.1 or 0.2?
<p>When I want to import:</p> <pre></pre> <p>I always get this error:</p> <pre></pre> <p>My Theano version is 0.7.0.</p>
[ { "AnswerId": "32209459", "CreationDate": "2015-08-25T16:37:00.723", "ParentId": null, "OwnerUserId": "4801125", "Title": null, "Body": "<p>I uninstalled nolearn and lasagne:</p>\n\n<pre><code>pip uninstall nolearn\npip uninstall lasagne\n</code></pre>\n\n<p>And then run following in command line:</p>\n\n<pre><code>pip install -r https://raw.githubusercontent.com/dnouri/nolearn/master/requirements.txt https://github.com/dnouri/nolearn/archive/master.zip#egg=nolearn\n</code></pre>\n\n<p>Now it works. </p>\n" } ]
32,211,530
2
<python-2.7><theano><nolearn>
2015-08-25T18:33:46.427
null
4,801,125
Theano TensorType error
<p>When using nolearn to implement multi-label classification, I get this error:</p> <blockquote> <p>'Bad input argument to theano function with name "/Users/lm/Documents/anaconda/lib/python2.7/site-packages/nolearn/lasagne/base.py:391" at index 1(0-based)', 'TensorType(float32, matrix) cannot store a value of dtype int64 without risking loss of precision. If you do not mind this loss, you can: 1) explicitly cast your data to float32, or 2) set "allow_input_downcast=True" when calling "function".', array([[0, 0, 0, ..., 0, 0, 1],</p> </blockquote>
[ { "AnswerId": "44017652", "CreationDate": "2017-05-17T06:55:38.230", "ParentId": null, "OwnerUserId": "5695374", "Title": null, "Body": "<p>In my case all I did was change the <code>floatX</code> flag (under <code>[global]</code>) to on the .theanorc file from : </p>\n\n<pre><code>[global]\nfloatX = float64\n</code></pre>\n\n<p>to: </p>\n\n<pre><code>[global]\nfloatX = float32\n</code></pre>\n\n<p><em>Notice that the 64 at the end was replaced by the 32.</em></p>\n" }, { "AnswerId": "32211597", "CreationDate": "2015-08-25T18:37:45.730", "ParentId": null, "OwnerUserId": "2902280", "Title": null, "Body": "<p>As told in the error message, you need to convert your input and output to the appropriate type (if you do not fear losing precision).</p>\n\n<pre><code>input = input.astype(np.float32)\noutput = output.astype(np.float32)\n</code></pre>\n\n<p>should work</p>\n\n<p>Note: even if you do this, the error might remain if you have a <code>BatchIterator</code> which transforms your data (and by inadvertance uses <code>float64</code> again). The solution is the same: inside the <code>BatchIterator</code>, cast the data to <code>float32</code> right before returning it.</p>\n" } ]
32,216,638
1
<opencv><machine-learning><neural-network><deep-learning><caffe>
2015-08-26T01:33:29.647
32,217,915
1,377,127
Scaling OpenCV Mat for Caffe
<p>I've been following the <a href="http://caffe.berkeleyvision.org/gathered/examples/mnist.html" rel="nofollow">Caffe MINST example</a> and trying to deploy a test of the trained model with C++ where I use OpenCV to read in the images. In the example, they mention how for the training and test images they</p> <blockquote> <p>scale the incoming pixels so that they are in the range [0,1). Why 0.00390625? It is 1 divided by 256.</p> </blockquote> <p>I've heard how there's a DataTransformer class in Caffe you can use to scale your images, but if I multiplied each pixel in the OpenCV Mat object by 0.00390625 would this give the same result?</p>
[ { "AnswerId": "32217915", "CreationDate": "2015-08-26T04:17:23.577", "ParentId": null, "OwnerUserId": "2589776", "Title": null, "Body": "<p>The idea is right. But remember to convert your OpenCV Mats to float or double type before scaling.</p>\n\n<p>Something like:</p>\n\n<pre><code>cv::Mat mat; // assume this is one of your images (grayscale)\n\n/* convert it to float */\nmat.convertTo(mat, CV_32FC1); // use CV_32FC3 for color images\n\n/* scaling here */\nmat = mat * 0.00390625;\n</code></pre>\n\n<hr>\n\n<p><strong>Update #1</strong>: Converting and scaling can also simply be done in one line, i.e. </p>\n\n<pre><code>cv::Mat mat; // assume this is one of your images (grayscale)\n\n/* convert and scale here */\nmat.convertTo(mat, CV_32FC1, 0.00390625);\n</code></pre>\n" } ]
32,218,132
1
<theano>
2015-08-26T04:41:12.587
32,224,804
1,401,278
theano: finding the indices of one tensor's elements in a second tensor
<p>I can't seem to find a solution for this. Given two theano tensors a and b, I want to find the indices of elements in b within the tensor a. This example will help, say a = [1, 5, 10, 17, 23, 39] and b = [1, 10, 39], I want the result to be the indices of the b values in tensor a, i.e. [0, 2, 5].</p> <p>After spending some time, I thought the best way would be to use scan; here is my shot at the minimal example.</p> <pre></pre> <p>I am getting the error:</p> <pre></pre> <p>in the return statement line. Further, the output needs to be a single tensor of shape b and I am not sure if this would get the desired result. Any suggestion would be helpful.</p>
[ { "AnswerId": "32224804", "CreationDate": "2015-08-26T10:58:04.203", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>It all depends on how big your arrays will be. As long as it fits in memory you can proceed as follows</p>\n\n<pre><code>import numpy as np\nimport theano\nimport theano.tensor as T\n\naa = T.ivector()\nbb = T.ivector()\n\nequality = T.eq(aa, bb[:, np.newaxis])\nindices = equality.nonzero()[1]\n\nf = theano.function([aa, bb], indices)\n\na = np.array([1, 5, 10, 17, 23, 39], dtype=np.int32)\nb = np.array([1, 10, 39], dtype=np.int32)\n\nf(a, b)\n\n# outputs [0, 2, 5]\n</code></pre>\n" } ]
32,218,466
4
<c++><machine-learning><neural-network><deep-learning><caffe>
2015-08-26T05:15:23.917
32,218,586
1,377,127
Compiling Caffe C++ Classification Example
<p>I recently modified the Caffe <a href="https://github.com/BVLC/caffe/blob/master/examples/cpp_classification/classification.cpp" rel="noreferrer">C++ classification example file</a> and I am trying to recompile it. However, I'm having trouble linking a simple g++ compilation to the .hpp files in the include directory. I know this is a basic question but I can't seem to work it out - can someone help me work out how to compile this program? The compilation looks like this now:</p> <pre></pre> <p>But I'm getting this error:</p> <pre></pre> <p>I'm running this on a machine without Nvidia GPUs so when I looked at the device_alternate.hpp file I realised this is calling a lot of cuda-related .hpp files as well which don't exist.</p>
[ { "AnswerId": "32297533", "CreationDate": "2015-08-30T14:23:20.910", "ParentId": null, "OwnerUserId": "5282027", "Title": null, "Body": "<p>Change Makefile.config to set <code>CPU_ONLY := 1</code></p>\n\n<pre><code># CPU-only switch (uncomment to build without GPU support).\nCPU_ONLY := 1\n</code></pre>\n" }, { "AnswerId": "32218586", "CreationDate": "2015-08-26T05:25:55.033", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Usually, in order to help the compiler locate header files you need to add <a href=\"http://www.rapidtables.com/code/linux/gcc/gcc-i.htm\" rel=\"noreferrer\"><code>-I /path/to/include/folder</code></a> option to the compilation line:</p>\n\n<pre><code>~$ g++ -I /path/to/caffe/include myfile.cpp\n</code></pre>\n" }, { "AnswerId": "32295890", "CreationDate": "2015-08-30T11:08:25.693", "ParentId": null, "OwnerUserId": "2430986", "Title": null, "Body": "<p>If you want to build custom files in caffe, there are two ways</p>\n\n<p><strong>The easy way</strong> </p>\n\n<ul>\n<li>Make the necessary changes and keep the file ( in your case - classification.cpp ) inside a directory ( say test ) in examples folder in th\ne caffe root directory.</li>\n<li>run <code>make</code>. This will automatically add the necessary cxxflags and ldflags and compile your code and place the executable in the build/examples/test folder. This also ensure the flag <strong>CPU_ONLY</strong> is set ( as mentioned in the <em>Makefile.config</em> )</li>\n</ul>\n\n<p><strong>The Hard way</strong></p>\n\n<ul>\n<li>Run make without the pretty print option ( mentioned inside <em>Makefile.config</em> ). You will be able to see the compile and link options used to build the examples and tools. You can copy and paste these options ( and make necessary changes to relative paths if used ) to compile your file</li>\n</ul>\n\n<p>Hope this helps</p>\n\n<p><strong>Edit</strong>\nAs the op requested an easy way, it can be done as follows</p>\n\n<p>This is a very <strong>minimal example</strong> and I encourage the OP to refer to full online documentation and example of cmake usage.</p>\n\n<ul>\n<li>Requirements\n\n<ul>\n<li>Caffe needs to be built with <strong>cmake</strong> - Relatively easy as the current master branch has CMakeLists and everything defined. 
Use the Cmake-gui or ccmake to set your options</li>\n</ul></li>\n</ul>\n\n<p>Now, I am assuming you have a project structure as follows.</p>\n\n<pre><code>-project \n - src \n - class1.cpp\n - CMakeLists.txt ( to be added )\n - include\n - class1.hpp\n\n - main.cpp\n - CMakeLists.txt ( to be added )\n</code></pre>\n\n<p>The CMakeLists.txt ( src ) needs to contain (<em>at minimum</em>) the following lines,</p>\n\n<pre><code>cmake_minimum_required(VERSION 2.8)\nfind_package(OpenCV REQUIRED) # Optional in case of dependency on opencv \nadd_library( c1 class1.cpp )\n</code></pre>\n\n<p><strong>Note:</strong> In case class1 depends on other external libraries, the path to headers must be included using <code>include_directories</code>.</p>\n\n<p>The CMakeLists.txt ( outermost ) needs to contain the following at <em>minimum</em></p>\n\n<pre><code>cmake_minimum_required(VERSION 2.8)\nPROJECT(MyProject)\n\nfind_package(OpenCV REQUIRED)\nfind_package(Caffe REQUIRED)\n\ninclude_directories( \"${PROJECT_SOURCE_DIR}/include\" )\nadd_subdirectory( src )\n\ninclude_directories( \"$Caffe_INCLUDE_DIRS}\" )\nadd_executable(MyProject main.cpp)\n\ntarget_link_libraries( MyProject ${OpenCV_LIBS} c1 ${Caffe_LIBRARIES} ) \n</code></pre>\n\n<p>Now, the following commands from inside the project directory will create the executable <code>MyProject</code> inside the <code>build</code> folder.</p>\n\n<pre><code>mkdir build\ncd build\ncmake ..\nmake\n</code></pre>\n\n<p>You can then run your program with <code>./MyProject (arguments)</code></p>\n\n<p><strong>EDIT 2</strong></p>\n\n<p>Satisfying the requirement of building caffe with CMake is very important for this to work. You need to configure and generate the Makefiles using CMake. Use <code>cmake-gui</code> or <code>ccmake</code> for this purpose so that you can set your options like CPU_ONLY, etc.</p>\n\n<p>You should create a build directory inside caffe and execute the following for a basic setup</p>\n\n<pre><code>mkdir build\ncd build\ncmake ..\nmake -jX #X is the number of threads your CPU can handle\n</code></pre>\n\n<p>Now, the .cmake directory in your $HOME folder consists of the following\n<code>/home/user/.cmake/packages/Caffe/&lt;random_string&gt;</code> file. This file points to the install location of caffe ( which is our build directory )</p>\n\n<p>Now, the find_package command should run without errors for your other projects. 
And since you are using CMake you can keep your project folder outside the Caffe folder ( and it is better to keep it outside as the make process of caffe will try to build your files but it will fail )</p>\n\n<p><strong>Note</strong>: In case that the error persists, you can manually set the <em>Caffe_DIR</em> during cmake configuration.</p>\n" }, { "AnswerId": "54447936", "CreationDate": "2019-01-30T19:13:43.140", "ParentId": null, "OwnerUserId": "6327658", "Title": null, "Body": "<p>In <code>./caffe/include/caffe/util/device_alternate.hpp</code> you have</p>\n\n<pre><code>...\n\n#ifdef CPU_ONLY // CPU-only Caffe.\n\n...\n\n#else // Normal GPU + CPU Caffe.\n\n#include &lt;cublas_v2.h&gt;\n\n...\n</code></pre>\n\n<p>This means it will try to include <code>cublas_v2.h</code> unless <code>CPU_ONLY</code> flag is defined.</p>\n\n<p>Since your machine doesn't have Nvidia GPU, you have to define <code>CPU_ONLY</code> flag during compilation with <code>-DCPU_ONLY=1</code></p>\n\n<p>So your full compilation command should look like</p>\n\n<pre><code>g++ -DCPU_ONLY=1 -I /home/jack/caffe/include classification.cpp -o classify\n</code></pre>\n" } ]
32,218,712
1
<python><numpy><machine-learning><neural-network><keras>
2015-08-26T05:37:10.530
32,389,326
2,605,604
Neural network dimension mis-match
<p>I have a neural network setup for the MNIST digits dataset in Keras that looks like this:</p> <pre></pre> <p>features_train is of shape (1000,784), labels_train is (1000,1), and both are numpy arrays. I want 784 input nodes, 200 hidden, and 9 output nodes to classify the digits.</p> <p>I keep getting an input dimension mismatch error:</p> <pre></pre> <p>I'm trying to identify where my dimensions may be incorrect but I'm not seeing it. Can anyone see the problem?</p>
[ { "AnswerId": "32389326", "CreationDate": "2015-09-04T02:50:28.077", "ParentId": null, "OwnerUserId": "2605604", "Title": null, "Body": "<p>I've been training 2 class classification models for so long that I'm used to dealing with labels that are just single values. For this problem (classifying more than 1 outcome) I just had to change the labels to be vectors themselves.</p>\n\n<p>This solved my problem:</p>\n\n<pre><code>from keras.utils.np_utils import to_categorical\n\nlabels_train = to_categorical(labels_train)\n</code></pre>\n" } ]
32,225,388
2
<neural-network><deep-learning><caffe><lstm><recurrent-neural-network>
2015-08-26T11:27:22.137
null
2,191,652
LSTM module for Caffe
<p>Does anyone know if there exists a nice LSTM module for Caffe? I found one from a github account by russel91, but apparently the webpage containing examples and explanations disappeared (formerly <a href="http://apollo.deepmatter.io/" rel="noreferrer">http://apollo.deepmatter.io/</a> --> it now redirects only to the <a href="https://github.com/russell91/apollocaffe" rel="noreferrer">github page</a> which has no examples or explanations anymore).</p>
[ { "AnswerId": "32440448", "CreationDate": "2015-09-07T13:58:32.040", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>I know <a href=\"http://jeffdonahue.com/\" rel=\"noreferrer\">Jeff Donahue</a> worked on LSTM models using Caffe. He also gave a nice <a href=\"http://tutorial.caffe.berkeleyvision.org/caffe-cvpr15-sequences.pdf\" rel=\"noreferrer\">tutorial</a> during CVPR 2015. He has a <a href=\"https://github.com/BVLC/caffe/pull/2033\" rel=\"noreferrer\">pull-request</a> with RNN and LSTM. </p>\n\n<p><strong>Update</strong>: there is a <a href=\"https://github.com/BVLC/caffe/pull/3948\" rel=\"noreferrer\">new PR</a> by Jeff Donahue including RNN and LSTM. This PR was merged on June 2016 to master.</p>\n" }, { "AnswerId": "35967589", "CreationDate": "2016-03-13T07:10:34.850", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>In fact, training recurrent nets is often done by unrolling the net. That is, replicating the net over the temporal steps (sharing weights across the temporal steps) and simply doing forward-backward passes on the unrolled model.</p>\n\n<p>To unroll LSTM (or any other unit) you don't have to use <a href=\"http://jeffdonahue.com/\" rel=\"noreferrer\">Jeff Donahue</a>'s recurrent branch, but rather use <code>NetSpec()</code> to explicitly unroll the model.</p>\n\n<p>Here's a simple example:</p>\n\n<pre><code>from caffe import layers as L, params as P, to_proto\nimport caffe\n\n# some utility functions\ndef add_layer_to_net_spec(ns, caffe_layer, name, *args, **kwargs):\n kwargs.update({'name':name})\n l = caffe_layer(*args, **kwargs)\n ns.__setattr__(name, l)\n return ns.__getattr__(name)\ndef add_layer_with_multiple_tops(ns, caffe_layer, lname, ntop, *args, **kwargs): \n kwargs.update({'name':lname,'ntop':ntop})\n num_in = len(args)-ntop # number of input blobs\n tops = caffe_layer(*args[:num_in], **kwargs)\n for i in xrange(ntop):\n ns.__setattr__(args[num_in+i],tops[i])\n return tops\n\n# implement single time step LSTM unit\ndef single_time_step_lstm( ns, h0, c0, x, prefix, num_output, weight_names=None):\n \"\"\"\n see arXiv:1511.04119v1\n \"\"\"\n if weight_names is None:\n weight_names = ['w_'+prefix+nm for nm in ['Mxw','Mxb','Mhw']]\n # full InnerProduct (incl. 
bias) for x input\n Mx = add_layer_to_net_spec(ns, L.InnerProduct, prefix+'lstm/Mx', x,\n inner_product_param={'num_output':4*num_output,'axis':2,\n 'weight_filler':{'type':'uniform','min':-0.05,'max':0.05},\n 'bias_filler':{'type':'constant','value':0}},\n param=[{'lr_mult':1,'decay_mult':1,'name':weight_names[0]},\n {'lr_mult':2,'decay_mult':0,'name':weight_names[1]}])\n Mh = add_layer_to_net_spec(ns, L.InnerProduct, prefix+'lstm/Mh', h0,\n inner_product_param={'num_output':4*num_output, 'axis':2, 'bias_term': False,\n 'weight_filler':{'type':'uniform','min':-0.05,'max':0.05},\n 'bias_filler':{'type':'constant','value':0}},\n param={'lr_mult':1,'decay_mult':1,'name':weight_names[2]})\n M = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/Mx+Mh', Mx, Mh,\n eltwise_param={'operation':P.Eltwise.SUM})\n raw_i1, raw_f1, raw_o1, raw_g1 = \\\n add_layer_with_multiple_tops(ns, L.Slice, prefix+'lstm/slice', 4, M,\n prefix+'lstm/raw_i', prefix+'lstm/raw_f', prefix+'lstm/raw_o', prefix+'lstm/raw_g',\n slice_param={'axis':2,'slice_point':[num_output,2*num_output,3*num_output]})\n i1 = add_layer_to_net_spec(ns, L.Sigmoid, prefix+'lstm/i', raw_i1, in_place=True)\n f1 = add_layer_to_net_spec(ns, L.Sigmoid, prefix+'lstm/f', raw_f1, in_place=True)\n o1 = add_layer_to_net_spec(ns, L.Sigmoid, prefix+'lstm/o', raw_o1, in_place=True)\n g1 = add_layer_to_net_spec(ns, L.TanH, prefix+'lstm/g', raw_g1, in_place=True)\n c1_f = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/c_f', f1, c0, eltwise_param={'operation':P.Eltwise.PROD})\n c1_i = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/c_i', i1, g1, eltwise_param={'operation':P.Eltwise.PROD})\n c1 = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/c', c1_f, c1_i, eltwise_param={'operation':P.Eltwise.SUM})\n act_c = add_layer_to_net_spec(ns, L.TanH, prefix+'lstm/act_c', c1, in_place=False) # cannot override c - it MUST be preserved for next time step!!!\n h1 = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/h', o1, act_c, eltwise_param={'operation':P.Eltwise.PROD})\n return c1, h1, weight_names\n</code></pre>\n\n<p>Once you have the single time step, you can unroll it as many times you want...</p>\n\n<pre><code>def exmaple_use_of_lstm():\n T = 3 # number of time steps\n B = 10 # batch size\n lstm_output = 500 # dimension of LSTM unit\n\n # use net spec\n ns = caffe.NetSpec()\n\n # we need initial values for h and c\n ns.h0 = L.DummyData(name='h0', dummy_data_param={'shape':{'dim':[1,B,lstm_output]},\n 'data_filler':{'type':'constant','value':0}})\n\n ns.c0 = L.DummyData(name='c0', dummy_data_param={'shape':{'dim':[1,B,lstm_output]},\n 'data_filler':{'type':'constant','value':0}})\n\n # simulate input X over T time steps and B sequences (batch size)\n ns.X = L.DummyData(name='X', dummy_data_param={'shape': {'dim':[T,B,128,10,10]}} )\n # slice X for T time steps\n xt = L.Slice(ns.X, name='slice_X',ntop=T,slice_param={'axis':0,'slice_point':range(1,T)})\n # unroling\n h = ns.h0\n c = ns.c0\n lstm_weights = None\n tops = []\n for t in xrange(T):\n c, h, lstm_weights = single_time_step_lstm( ns, h, c, xt[t], 't'+str(t)+'/', lstm_output, lstm_weights)\n tops.append(h)\n ns.__setattr__('c'+str(t),c)\n ns.__setattr__('h'+str(t),h)\n # concat all LSTM tops (h[t]) to a single layer\n ns.H = L.Concat( *tops, name='concat_h',concat_param={'axis':0} )\n return ns\n</code></pre>\n\n<p>Writing the prototxt:</p>\n\n<pre><code>ns = exmaple_use_of_lstm()\nwith open('lstm_demo.prototxt','w') as W:\n W.write('name: \"LSTM using NetSpec example\"\\n')\n W.write('%s\\n' % 
ns.to_proto())\n</code></pre>\n\n<p>The resulting unrolled net (for three time steps) looks like</p>\n\n<p><a href=\"https://i.stack.imgur.com/K11tK.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/K11tK.png\" alt=\"LSTM\"></a></p>\n" } ]
32,226,362
1
<python><numpy><theano>
2015-08-26T12:14:25.750
32,227,610
1,139,393
Advanced 2d indexing in Theano to extract multiple pixels from an image
<h2>Background</h2> <p>I have an image that I want to sample at a number (P) of x,y coordinates.</p> <p>In Numpy I can use advanced indexing to do this via:</p> <pre></pre> <p>This returns a vector of P pixels sampled from the image.</p> <h2>Question</h2> <p>How can I do this advanced indexing in Theano?</p> <h2>What I've tried</h2> <p>I've tried the corresponding code in theano:</p> <pre></pre> <p>this compiles and executes without any warning messages, but results in an output tensor of shape (2,8,100), so it looks like it is doing some variant of basic indexing returning lots of rows of the image, instead of extracting pixels.</p> <h2>Full code</h2> <pre class="lang-py prettyprint-override"></pre> <p>This prints (8,) for Numpy, and (2,8,100) for Theano.</p> <p>My Theano version is 0.7.0.dev-RELEASE</p>
[ { "AnswerId": "32227610", "CreationDate": "2015-08-26T13:11:27.493", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>This appears to be a subtle difference between Theano and numpy advanced indexing (there are other differences that don't apply here).</p>\n\n<p>Instead of</p>\n\n<pre><code>t_points = t_image[ [t_pos[:,1],t_pos[:,0]] ]\n</code></pre>\n\n<p>you need to use</p>\n\n<pre><code>t_points = t_image[ (t_pos[:,1],t_pos[:,0]) ]\n</code></pre>\n\n<p>Note the change from a list of lists to a tuple of lists. This variant also works in numpy so it's probably best to just use a tuple instead of a list there too.</p>\n" } ]
32,229,486
1
<python><theano>
2015-08-26T14:31:29.103
32,230,092
3,042,790
verify_grad function: 'TensorVariable' object is not callable
<p>I'd like to use the verify_grad function, but I'm getting errors of the form "'TensorVariable' object is not callable". </p> <pre></pre> <p>In the doc it says that fun is "a Python function that takes Theano variables as inputs, and returns a Theano variable. For instance, an Op instance with a single output."</p> <p>I have gone through the graph structures section in the docs and I thought I understood what an op node is, but apparently I don't. </p> <p>E.g. if I have two TensorVariables x and y and I'd like to take the product of them, then * is the op node, correct? But if I declare z=x*y, then z is again a TensorVariable, right?</p> <p>So is there any way to define an op for e.g. a negative log likelihood function in order to evaluate the correctness of the gradient for that function? Or is there any other way to get the numerical gradient in theano for a function that you constructed?</p>
[ { "AnswerId": "32230092", "CreationDate": "2015-08-26T14:58:51.553", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>Here's an example of <code>verify_grad</code> in use:</p>\n\n<pre><code>import numpy\nimport theano\n\ndef something_complicated(x, y):\n z = x * y\n return z\n\nx_value = numpy.array([[1., 2., 3.], [4., 5., 6.]], dtype=theano.config.floatX)\ny_value = numpy.array([[7., 8., 9.], [10., 11., 12.]], dtype=theano.config.floatX)\ntheano.gradient.verify_grad(something_complicated, (x_value, y_value), rng=numpy.random)\n</code></pre>\n\n<p>As required, <code>something_complicated</code> is \"a Python function that takes Theano variables as inputs [<code>x</code> and <code>y</code> in this case], and returns a Theano variable [<code>z</code> in this case].\"</p>\n\n<p>You can construct any symbolic expression inside <code>something_complicated</code>, such as the computation for a negative log likelihood.</p>\n\n<p>A Theano operation can be anything as long as</p>\n\n<ol>\n<li>It is <em>callable</em> (objects are callable if they implement the special <a href=\"https://docs.python.org/2/reference/datamodel.html#object.__call__\" rel=\"nofollow\"><code>__call__</code> function</a>)</li>\n<li>When it is called, it treats all inputs as Theano variables</li>\n<li>When it is called, it only returns Theano variables.</li>\n</ol>\n\n<p><code>something_complicated</code> clearly meets these requirements. It is callable by virtue of being a Python function, it assumes <code>x</code> and <code>y</code> are Theano variables, and its return value <code>z</code> is also a Theano variable.</p>\n" } ]
32,229,882
1
<python><gpu><gpgpu><theano>
2015-08-26T14:50:06.197
null
5,186,172
Are int operations possible on GPU in Theano?
<p>So I have read that theano cannot do gpu computations with ints, and that to store ints as shared variables on the gpu they have to be initialised as shared float32 data and then recast to ints (like in the "little hack" in the logistic regression <a href="http://deeplearning.net/tutorial/logreg.html" rel="nofollow">example</a>)...but after such a recast, can theano do gpu computations on <strong>ints</strong>? And is storage a precondition for computation? In other words, are the following two scenarios possible?</p> <p>Scenario 1. I want to do a dot product on two large vectors of ints. I therefore make them shared as float32 and recast them to int before the dot product; is this dot product then done on the gpu (regardless of int type)?</p> <p>Scenario 2. If scenario 1 is possible, would it still be possible to do the computation on the gpu without storing them as shared float32 first? (I understand that sharing variables might mitigate gpu-cpu communication, but would the dot product still be possible? Is storage a precondition for computation on gpu?)</p>
[ { "AnswerId": "32231011", "CreationDate": "2015-08-26T15:41:27.880", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>No, there is (currently) no way to do any operations on the GPU with any type other than <code>float32</code>.</p>\n\n<p>This can be seen with this little demo code:</p>\n\n<pre><code>import numpy\nimport theano\nimport theano.tensor as tt\n\nx = theano.shared(numpy.arange(9 * 10).reshape((9, 10)).astype(numpy.float32))\ny = theano.shared(numpy.arange(10 * 11).reshape((10, 11)).astype(numpy.float32))\nz = theano.dot(tt.cast(x, 'int32'), tt.cast(y, 'int32'))\nf = theano.function([], outputs=z)\ntheano.printing.debugprint(f)\n</code></pre>\n\n<p>When run on a GPU it will print the following computation graph:</p>\n\n<pre><code>dot [@A] '' 4\n |Elemwise{Cast{int32}} [@B] '' 3\n | |HostFromGpu [@C] '' 1\n | |&lt;CudaNdarrayType(float32, matrix)&gt; [@D]\n |Elemwise{Cast{int32}} [@E] '' 2\n |HostFromGpu [@F] '' 0\n |&lt;CudaNdarrayType(float32, matrix)&gt; [@G]\n</code></pre>\n\n<p>Here you can see that the two shared variables are indeed stored in GPU memory (the two <code>CudaNdarrayType</code>s) but they are moved to the host (i.e. CPU/main memory) from the GPU (the <code>HostFromGpu</code> operations) before being cast to ints and a regular <code>dot</code> operation being used.</p>\n\n<p>If the casts are omitted then you would see</p>\n\n<pre><code>HostFromGpu [@A] '' 1\n |GpuDot22 [@B] '' 0\n |&lt;CudaNdarrayType(float32, matrix)&gt; [@C]\n |&lt;CudaNdarrayType(float32, matrix)&gt; [@D]\n</code></pre>\n\n<p>Showing that the GPU is performing the dot product (the <code>GpuDot22</code> operation) but on floating point data, not integer data.</p>\n" } ]
32,233,262
1
<matlab><machine-learning><computer-vision><neural-network><caffe>
2015-08-26T17:43:51.457
32,240,964
5,269,749
How to extract trained caffe kernel filters of the first layer
<p>I am using caffe and I wonder if I can just use one of the filters separately. So basically I just need the trained kernels of the filters (used in the first layer).<br> I could not find the formula of the kernels in the paper.<br> So I would really appreciate it if someone could help me out.<br> If you can also tell me how to extract them in the matlab version I would be so grateful.</p> <p>Thanks</p>
[ { "AnswerId": "32240964", "CreationDate": "2015-08-27T04:52:50.310", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Suppose you have a trained net with its <code>'deploy.prototxt'</code> file defining the net, and the trained parameters at <code>'my_weights.caffemodel'</code> file.<br>\nSuppose the layer you are interested in is defined like this in <code>'deploy.prototxt</code>':</p>\n\n<pre><code>layer {\n name: \"conv1\"\n type: \"Convolution\"\n bottom: \"data\"\n top: \"conv1\"\n param {\n lr_mult: 1\n }\n param {\n lr_mult: 2\n }\n convolution_param {\n num_output: 32\n pad: 2\n kernel_size: 5\n stride: 1\n }\n}\n</code></pre>\n\n<p>As you can see the layer's name is <code>\"conv1\"</code> it has 32 filters of size 5-by-5.</p>\n\n<p>First, you need to load the net in Matlab</p>\n\n<pre><code> &gt;&gt; net = get_net( 'path/to/deploy.prototxt', 'path/to/my_weights.caffemodel', 'test' );\n</code></pre>\n\n<p>Once you load the net, you can access its parameters using the layer's name</p>\n\n<pre><code>&gt;&gt; w = net.params( 'conv1', 1 ).get(); \n</code></pre>\n" } ]
32,240,380
1
<python><machine-learning><neural-network><theano>
2015-08-27T03:49:43.540
32,311,521
4,013,571
Best way to re-initialise a compiled Theano function
<p>I want to refresh my compiled MLP model in Theano as I want to repeat a model with different hyper-parameters.</p> <p>I am aware that I can redefine all the functions; however, compile time for each function is significant.</p> <p>I want to define a function which will refresh the model's parameters.</p> <p>The following code is shown for demonstration.</p> <pre></pre> <p>My instinct suggests that I can simply re-define the class without any implications for the other compiled functions.</p> <p>Is this correct?</p> <p>I was thinking that if this is the case I could define a function which re-instantiates the parameters for each component of the class.</p>
[ { "AnswerId": "32311521", "CreationDate": "2015-08-31T12:39:52.483", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>It's unclear how the <code>MLP</code> class is working but you can re-use a previously compiled computation as long as the number of dimensions of the shared variables does not change.</p>\n\n<p>In the following example the <code>compile_model</code> function creates a simple neural network with randomly initialized parameters. After training with those parameters, the shared variables are re-initialized to new random values, but this time the network's hidden layer size is increased. Despite this change in size, the original training function is re-used.</p>\n\n<pre><code>import numpy\nimport theano\nimport theano.tensor as tt\n\n\ndef compile_model(input_size, hidden_size, output_size, learning_rate):\n w_h = theano.shared(numpy.random.randn(input_size, hidden_size).astype(theano.config.floatX))\n b_h = theano.shared(numpy.zeros(hidden_size, dtype=theano.config.floatX))\n w_y = theano.shared(numpy.random.randn(hidden_size, output_size).astype(theano.config.floatX))\n b_y = theano.shared(numpy.zeros(output_size, dtype=theano.config.floatX))\n parameters = (w_h, b_h, w_y, b_y)\n x = tt.matrix()\n z = tt.lvector()\n h = tt.tanh(theano.dot(x, w_h) + b_h)\n y = tt.nnet.softmax(theano.dot(h, w_y) + b_y)\n c = tt.nnet.categorical_crossentropy(y, z).mean()\n u = [(p, p - learning_rate * theano.grad(c, p)) for p in parameters]\n trainer = theano.function([x, z], outputs=[c], updates=u)\n tester = theano.function([x], outputs=[y])\n return trainer, tester, parameters\n\n\ndef refresh_model(parameters, input_size, hidden_size, output_size):\n w_h, b_h, w_y, b_y = parameters\n w_h.set_value(numpy.random.randn(input_size, hidden_size).astype(theano.config.floatX))\n b_h.set_value(numpy.zeros(hidden_size, dtype=theano.config.floatX))\n w_y.set_value(numpy.random.randn(hidden_size, output_size).astype(theano.config.floatX))\n b_y.set_value(numpy.zeros(output_size, dtype=theano.config.floatX))\n\n\ndef main():\n input_size = 30\n hidden_size = 10\n output_size = 20\n learning_rate = 0.01\n batch_size = 40\n epoch_count = 50\n\n trainer, tester, parameters = compile_model(input_size, hidden_size, output_size, learning_rate)\n x = numpy.random.randn(batch_size, input_size)\n z = numpy.random.randint(output_size, size=(batch_size,))\n print 'Training model with hidden size', hidden_size\n\n for _ in xrange(epoch_count):\n print trainer(x, z)\n\n hidden_size = 15\n refresh_model(parameters, input_size, hidden_size, output_size)\n print 'Training model with hidden size', hidden_size\n\n for _ in xrange(epoch_count):\n print trainer(x, z)\n\n\nmain()\n</code></pre>\n" } ]
32,241,193
1
<c++><machine-learning><neural-network><deep-learning><caffe>
2015-08-27T05:14:39.157
32,243,571
1,377,127
Error with Caffe C++ example with different deploy.prototxt file
<p>I <a href="https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_solver.prototxt" rel="nofollow noreferrer">trained</a> a model using the MNIST <a href="https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt" rel="nofollow noreferrer">example architecture</a> (but on my own set of 3 image classes) and have been trying to integrate it into the <a href="https://github.com/BVLC/caffe/blob/master/examples/cpp_classification/classification.cpp" rel="nofollow noreferrer">C++ example</a>. I modified the MNIST architecture file to make it similar to the deploy.prototxt file for the <a href="https://github.com/BVLC/caffe/blob/master/models/bvlc_reference_caffenet/deploy.prototxt" rel="nofollow noreferrer">C++ example</a> (<a href="https://stackoverflow.com/questions/32208193/how-to-modify-caffe-network-input-for-c-api/32211796?noredirect=1#comment52319229_32211796">replacing the train and test layers with the input layer</a>). </p> <p>Unfortunately, when I run the C++ program it gives me the following error:</p> <blockquote> <p>F0827 14:57:28.427697 25511 insert_splits.cpp:35] Unknown bottom blob 'label' (layer 'accuracy', bottom index 1)</p> </blockquote> <p>I tried to Google it and I think there's some difference between the layers in the files for the MNIST and C++ examples but can't work out what I can change to make this work.</p>
[ { "AnswerId": "32243571", "CreationDate": "2015-08-27T07:49:45.647", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>As pointed out by <a href=\"https://stackoverflow.com/questions/32241193/error-with-caffe-c-example-with-different-deploy-prototxt-file#comment52364300_32241193\">AbdulRahman AlHamali's comment</a> it seems like you left in your <code>deploy.prototxt</code> file the loss and accuracy layers that expects as inputs (\"bottom\"s) <code>\"label\"</code>.<br>\nRemoving these loss layers from <code>deploy.prototxt</code> should solve the poroblem.</p>\n\n<p>Note, that if you used <code>\"SoftmaxWithLoss\"</code> layer as a loss, you need to <em>replace</em> it with a <code>\"Softmax\"</code> layer to get class probabilities as the net outputs. <code>\"Softmax\"</code> layer takes only one <code>\"bottom\"</code> and does not require <code>bottom: \"label\"</code>.</p>\n" } ]
32,243,285
2
<autoencoder><keras>
2015-08-27T07:34:05.283
null
4,535,321
keras autoencoder not converging
<p>Could someone please explain to me why the autoencoder is not converging? To me the results of the two networks below should be the same. However, the autoencoder below is not converging, whereas the network beneath it is. </p> <pre class="lang-py prettyprint-override"></pre> <hr> <pre class="lang-py prettyprint-override"></pre>
[ { "AnswerId": "33327770", "CreationDate": "2015-10-25T08:44:35.913", "ParentId": null, "OwnerUserId": "2616754", "Title": null, "Body": "<p>I think Keras's Autoencoder implementation ties the weights of the encoder and decoder, whereas in your implementation, the encoder and decoder have separate weights. If your implementation is leading to much better performance on the test data, then it may indicate that un-tied weights may be needed for your problem. </p>\n" }, { "AnswerId": "34306306", "CreationDate": "2015-12-16T07:35:15.470", "ParentId": null, "OwnerUserId": "2031688", "Title": null, "Body": "<p>The new version (0.3.0) of Keras no longer has tied weights in AutoEncoder, and it still shows different convergence. This is because weights are initialized differently.</p>\n\n<p>In the non-AE example, Dense(32,16) weights are initialized first, followed by Dense(16,32). In the AE example, Dense(32,16) weights are initialized first, followed by Dense(16,32), and then when you create the AutoEncoder instance, Dense(32,16) weights are initialized again (self.encoder.set_previous(node) will call build() to initialize weights).</p>\n\n<p>Now the following two NNs converge exactly the same:</p>\n\n<pre><code>autoencoder = Sequential()\nencoder = containers.Sequential([Dense(32,16,activation='tanh')]) \ndecoder = containers.Sequential([Dense(16,32)])\nautoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder, \n output_reconstruction=True))\nrms = RMSprop()\nautoencoder.compile(loss='mean_squared_error', optimizer=rms)\nnp.random.seed(0)\nautoencoder.fit(trainData,trainData, nb_epoch=20, batch_size=64,\n validation_data=(testData, testData), show_accuracy=False)\n</code></pre>\n\n<hr>\n\n<pre><code># non-autoencoder\nmodel = Sequential()\nmodel.add(Dense(32,16,activation='tanh')) \nmodel.add(Dense(16,32))\nmodel.set_weights(autoencoder.get_weights())\nmodel.compile(loss='mean_squared_error', optimizer=rms)\nnp.random.seed(0)\nmodel.fit(trainData,trainData, nb_epoch=numEpochs, batch_size=batch_size,\n validation_data=(testData, testData), show_accuracy=False)\n</code></pre>\n" } ]
32,247,374
1
<machine-learning><computer-vision><neural-network><deep-learning><caffe>
2015-08-27T10:49:25.803
32,248,352
1,377,127
Caffe output layer number accuracy
<p>I've modified the Caffe <a href="https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt">MNIST example</a> to classify 3 classes of image. One thing I noticed was that if I specify the number of output layers as 3, then my test accuracy drops horribly - down to the low 40% range. However, if I add 1 and have 4 output layers, the result is in the 95% range.<br> I added an extra class of images to my dataset (so 4 classes) and noticed the same thing - if the number of output layers was the same as the number of classes, then the result was horrible; if it was that number plus 1, then it worked really well.</p> <pre></pre> <p>Does anyone know why this is? I've noticed that when I use the model I train with the <a href="https://github.com/BVLC/caffe/blob/master/examples/cpp_classification/classification.cpp">C++ example code</a> on an image from my test set, it will complain that I've told it that there are 4 classes present and I've only supplied labels for 3 in my labels file. If I invent a label and add it to the file, I can get the program to run, but then it just returns one of the classes with a probability of 1.0 no matter what image I give it.</p>
[ { "AnswerId": "32248352", "CreationDate": "2015-08-27T11:36:40.057", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>It is important to notice that when fine-tuning and/or changing the number of labels the input labels <strong>must</strong> always start from 0, as they are used as indices into the output probability vector when computing the loss.<br>\nThus, if you have</p>\n\n<pre><code> inner_product_params {\n num_output: 3\n }\n</code></pre>\n\n<p>You must have training labels 0,1 and 2 only.</p>\n\n<p>If you use <code>num_output: 3</code> with labels 1,2,3 caffe is unable to represent label 3 and in fact has a redundant line corresponding to label 0 that is left unused.<br>\nAs you observed, when changing to <code>num_output: 4</code> caffe is again able to represent label 3 and the results improved, but still you have an unused row in the parameters matrix.</p>\n" } ]
32,249,861
2
<machine-learning><computer-vision><neural-network><deep-learning><caffe>
2015-08-27T12:49:54.467
32,292,853
1,377,127
Caffe predicts same class regardless of image
<p>I <a href="https://github.com/jackbrucesimpson/caffe_tests/blob/master/lenet_train_test.prototxt" rel="nofollow">modified the MNIST example</a> and when I train it with my 3 image classes it returns an accuracy of 91%. However, when I modify the <a href="https://github.com/jackbrucesimpson/caffe_tests/blob/master/classification.cpp" rel="nofollow">C++ example</a> with a <a href="https://github.com/jackbrucesimpson/caffe_tests/blob/master/deploy_arch.prototxt" rel="nofollow">deploy prototxt</a> file and <a href="https://github.com/jackbrucesimpson/caffe_tests/blob/master/labels.txt" rel="nofollow">labels file</a>, and try to test it on some images it returns a prediction of the second class (1 circle) with a probability of 1.0 no matter what image I give it - even if it's images that were used in the training set. I've tried a dozen images and it consistently just predicts the one class.</p> <p>To clarify things, in the C++ example I modified I did scale the image to be predicted just like the images were scaled in the training stage:</p> <pre></pre> <p>If that was the right thing to do, then it makes me wonder if I've done something wrong with the output layers that calculate probability in my <a href="https://github.com/jackbrucesimpson/caffe_tests/blob/master/deploy_arch.prototxt" rel="nofollow">deploy_arch.prototxt</a> file.</p>
[ { "AnswerId": "32292853", "CreationDate": "2015-08-30T03:25:17.773", "ParentId": null, "OwnerUserId": "2376331", "Title": null, "Body": "<p>I think you have forgotten to scale the input image during classification time, as can be seen in line 11 of the train_test.prototxt file. You should probably multiply by that factor somewhere in your C++ code, or alternatively use a Caffe layer to scale the input (look into ELTWISE or POWER layers for this).</p>\n\n<p><strong>EDIT:</strong></p>\n\n<p>After a conversation in the comments, it turned out that the image mean was mistakenly being subtracted in the classification.cpp file whereas it was not being subtracted in the original training/testing pipeline.</p>\n" }, { "AnswerId": "32293302", "CreationDate": "2015-08-30T04:55:54.373", "ParentId": null, "OwnerUserId": "4329255", "Title": null, "Body": "<p>Are your train classes balanced?\nYou may get to a stacked network on a prediction of one major class.\nIn order to find the issue I suggest to output the train prediction during training compared to predictions with the forward example on same train images from a different class.</p>\n" } ]
32,268,923
1
<python><neural-network><theano><lasagne>
2015-08-28T10:30:05.277
null
2,537,088
In Lasagne/Theano, do I need a 4d numpy array for a 4d Theano tensor?
<p>I'm building a neural network with lasagne and am <a href="https://github.com/Lasagne/Lasagne" rel="nofollow">following the example from the github.</a> I'm curious how exactly to input data into the network. In the example they state that the input layer is 4 dimensions and indeed it is a theano tensor4. Does this mean I have to give the network a 4 dimensional numpy array? Is that even possible? How would you build one from a 4d vector of lists?</p>
[ { "AnswerId": "32270198", "CreationDate": "2015-08-28T11:37:48.033", "ParentId": null, "OwnerUserId": "2902280", "Title": null, "Body": "<p>In the MNIST example provided by Lasagne, you need to input a 4D tensor.</p>\n\n<p>Generally speaking, if your data is 2 dimensional (images for example), the shape of your input must be <code>(n_samples, n_channels, height, width)</code>. In the MNIST dataset <code>n_channel</code> is 1 (might be something else, e.g. 3 for RGB images), and <code>height</code> and <code>width</code> are both 28.</p>\n\n<p>If your data is only 1 dimensional, then you must input a 3D tensor, of shape <code>(n_samples, n_channel, n_features)</code>.</p>\n\n<p>Note that this might be problematic if you want to predict a label for a single image ((28, 28) ndarray as in <a href=\"https://stackoverflow.com/questions/31499761/get-output-from-lasagne-python-deep-neural-network-framework/32007540#32007540\">this question</a>, because you need to make the input 4 dimensional. In this case, you can add axis using <code>data = data[None, None, :, :]</code>.</p>\n" } ]
32,274,543
2
<python><machine-learning><theano>
2015-08-28T15:23:33.687
32,275,233
5,197,007
In Python, can I access a variable from the main function - use a global variable?
<p>In Python, can I access a variable from the main function? Should I use a global variable? Any help appreciated! </p> <pre></pre>
[ { "AnswerId": "32275188", "CreationDate": "2015-08-28T15:59:26.380", "ParentId": null, "OwnerUserId": "678611", "Title": null, "Body": "<p>Try with this.</p>\n\n<p><strong>main.py</strong></p>\n\n<pre><code>__dataset__ = main(dataset, n_h, n_y, batch_size, dev_split, 5000)\n</code></pre>\n\n<p><strong>sub.py</strong></p>\n\n<pre><code>import sys, main\n__dataset__ = sys.modules['__main__'].__dataset__\n</code></pre>\n\n<p><br>\n<strong>EDIT:</strong><br>\nAnother method is to use a class with static variables.</p>\n\n<p><strong>mclass.py</strong></p>\n\n<pre><code>class MClass:\n i = 0\n\nMClass.i = 1\n</code></pre>\n\n<p><strong>main.py</strong></p>\n\n<pre><code>import sub\nfrom mclass import MClass\n\n# In the main file\nprint(MClass.i) # Outputs 1\nMClass.i = 3\nprint(MClass.i) # Outputs 3\n\n# In a subfile\nsub.mPrint() # Outputs 3\nsub.set(10)\nsub.mPrint() # Outputs 10\n\n# And back in the main\nprint(MClass.i) # Outputs 10\n</code></pre>\n\n<p><strong>sub.py</strong></p>\n\n<pre><code>from mclass import MClass\n\ndef mPrint():\n print(MClass.i)\n\ndef set(n):\n MClass.i = n\n</code></pre>\n" }, { "AnswerId": "32275233", "CreationDate": "2015-08-28T16:01:43.670", "ParentId": null, "OwnerUserId": "5215951", "Title": null, "Body": "<p>You have to define 'input_to_state' and 'RNN' outside the 'main' function, and then modify them afterwards. Like this:</p>\n\n<pre><code>input_to_state = None\nRNN = None\ndef main(dataset, n_h, n_y, batch_size, dev_split, n_epochs):\n # Calling 'global' allows you to modify these variables\n global input_to_state\n global RNN\n input_to_state = Linear(name='input_to_state',\n input_dim=seq_u.shape[-1],\n output_dim=n_h)\n RNN = SimpleRecurrent(activation=Tanh(),\n dim=n_h, name=\"RNN\")\n\n\ndef predict(dev_X):\n dev_transform = input_to_state.apply(dev_X)\n dev_h = RNN.apply(dev_transform)\n\nif __name__ == \"__main__\": \n main(args) \n predict(dev_X)\n</code></pre>\n\n<p>Howerver, I would not recommend this, global variables should be used as little as possible. <a href=\"https://google-styleguide.googlecode.com/svn/trunk/pyguide.html#Global_variables\" rel=\"nofollow\">more detail here</a>.</p>\n\n<p>A better solution would be to return 'input_to_state' and 'RNN' at the end of the main function, like this:</p>\n\n<pre><code>def main(dataset, n_h, n_y, batch_size, dev_split, n_epochs):\n input_to_state = Linear(name='input_to_state',\n input_dim=seq_u.shape[-1],\n output_dim=n_h)\n RNN = SimpleRecurrent(activation=Tanh(),\n dim=n_h, name=\"RNN\")\n return input_to_state, RNN\n\ndef predict(dev_X, input_to_state, RNN):\n dev_transform = input_to_state.apply(dev_X)\n dev_h = RNN.apply(dev_transform)\n\nif __name__ == \"__main__\": \n input_to_state, RNN = main(args) \n predict(dev_X, input_to_state, RNN)\n</code></pre>\n" } ]
32,278,568
1
<neural-network><deep-learning><caffe><conv-neural-network>
2015-08-28T19:32:29.030
32,293,404
2,832,033
How to use convert_imageset in caffe for images which are not put in one folder?
<p>I'm trying to train a CNN on my own dataset using the Caffe framework, and it is highly recommended that the dataset be converted to the lmdb or leveldb format for speed efficiency. To do so, all images must be put into a <strong>single folder</strong> and the corresponding list file must be prepared accordingly. My own dataset is huge and spread over so many folders and subfolders that it would be laborious to copy all of them into a single folder. Therefore, I'm wondering whether there is any alternative way to generate the lmdb file without the need to copy all images into a single folder.</p>
[ { "AnswerId": "32293404", "CreationDate": "2015-08-30T05:14:53.450", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>There are (at least) two solutions to your problem.</p>\n\n<ol>\n<li><p>Don't copy the files to a single folder, just create <a href=\"http://www.cyberciti.biz/faq/creating-soft-link-or-symbolic-link/\" rel=\"nofollow\">symbolic links</a>.</p></li>\n<li><p>All the images do not have to be in the same folder. You can have full paths in <code>'list.txt'</code> file. For example:</p></li>\n</ol>\n\n<blockquote>\n <p>/path/to/image.jpg 0<br>\n /another/path/class01.jpg 1<br>\n /yet/another/path/class0.jpg 0 </p>\n</blockquote>\n\n<p>And so on...</p>\n" } ]
32,280,071
2
<python><arrays><numpy><matrix><theano>
2015-08-28T21:34:05.073
32,280,982
4,491,330
How to assign values elementwise to theano matrix ? Difference between Numpy and Theano?
<p>I'm new to theano. I would like to replace the numpy functions in my scripts with theano functions in order to speed up the calculation process. I'm not sure how to do it. </p> <p>My final goal is to apply affine transformations to a 3D rigid body, assign a score to the conformation after each transformation, and do some optimization on the parameters determining the scores. </p> <p>Here's an example of what I'm trying to do. </p> <pre></pre> <p>The above theano function didn't pass compilation. I got the following error message. </p> <pre></pre> <p>In general, my questions are </p> <p>(1) is there a way to assign or update or initialize a theano matrix with a specific shape elementwise, </p> <p>(2) as theano is closely related to numpy, what's the difference between theano and numpy in defining, optimizing, and evaluating mathematical expressions, </p> <p>and (3) can theano replace numpy in the sense that we can use theano functions solely in defining, optimizing, and evaluating mathematical expressions without calling numpy functions. </p>
[ { "AnswerId": "32282512", "CreationDate": "2015-08-29T03:47:41.500", "ParentId": null, "OwnerUserId": "4491330", "Title": null, "Body": "<p>Just for getting the theano function posted above work, my version is: </p>\n\n<pre><code>angle_var = T.dscalar()\n\ndef rotate_x_axis_expr(angle):\n a = T.deg2rad(angle)\n cosa = T.cos(a)\n sina = T.sin(a) \n\n R = theano.shared(np.identity(4))\n R = T.set_subtensor(R[1,1], cosa)\n R = T.set_subtensor(R[1,2], -sina)\n R = T.set_subtensor(R[2,1], sina)\n R = T.set_subtensor(R[2,2], cosa)\n\n return R\n\nrotate_x_axis = theano.function([angle_var],rotate_x_axis_expr(angle_var))\n</code></pre>\n" }, { "AnswerId": "32280982", "CreationDate": "2015-08-28T23:03:10.467", "ParentId": null, "OwnerUserId": "1730674", "Title": null, "Body": "<p>I can't answer your questions 1, 2, 3, since I haven't used theano before ten minutes ago. But, to define the function in theano, you don't seem to use the <code>def</code> construction; you want to do something more like this:</p>\n\n<pre><code>angle_var = T.dscalar('angle_var')\na = T.deg2rad(angle_var)\ncosa = T.cos(a)\nsina = T.sin(a) \n\nR = theano.shared(np.identity(4))\nR = T.set_subtensor(R[1,1], cosa)\nR = T.set_subtensor(R[1,2], -sina)\nR = T.set_subtensor(R[2,1], sina)\nR = T.set_subtensor(R[2,2], cosa)\n\nrotate_x_axis_theano = theano.function([angle_var], R)\n</code></pre>\n\n<p>Doesn't help much with speed though, for a scalar angle at least:</p>\n\n<pre><code>In [368]: timeit rotate_x_axis_theano(10)\n10000 loops, best of 3: 67.7 µs per loop\n\nIn [369]: timeit rotate_x_axis_numpy(10)\nThe slowest run took 4.23 times longer than the fastest. This could mean that an intermediate result is being cached\n10000 loops, best of 3: 22.7 µs per loop\n\nIn [370]: np.allclose(rotate_x_axis_theano(10), rotate_x_axis_numpy(10))\nOut[370]: True\n</code></pre>\n" } ]
32,283,364
2
<python><theano><lstm><lasagne>
2015-08-29T06:34:23.860
null
2,537,088
Can't figure out the issue with my Lasagne LSTM
<p>I'm curious if anyone has any insights. Even if you can't figure out the issue, how can I begin to debug it? I must say, I am not strong in theano. </p> <p>The input data is a numpy tensor of shape (10,15,10)</p> <p>Here it is. It ran when I just hooked up the input to the dense layer. </p> <pre></pre> <p>Here is the error. It is kind of a doozy. </p> <pre></pre>
[ { "AnswerId": "36096190", "CreationDate": "2016-03-19T00:00:31.357", "ParentId": null, "OwnerUserId": "3858735", "Title": null, "Body": "<p>There are a few mistakes in this: </p>\n\n<p>Your input is a vector. Your output has 10 units per entry in your batch:</p>\n\n<pre><code>target_var = T.ivector('targets')\nl_out = lasagne.layers.DenseLayer(l_shp, num_units=10)\n</code></pre>\n\n<p>Your output shape is (150, 10) which you want to compare to your label input (10,):</p>\n\n<pre><code>l_shp =lasagne.layers.ReshapeLayer(l_lstm, (10*15, 10))\nprint 'l_shp shape:', l_shp.output_shape\nl_out = lasagne.layers.DenseLayer(l_shp, num_units=10)\n</code></pre>\n\n<p>You probably want to use a (10, 10) matrix for your labels. This way you are using 10 classes for your batch size of 10</p>\n\n<pre><code>target_var = T.matrix('targets')\n</code></pre>\n\n<p>You also need to change your network so that it outputs a (10,10) shaped result. You can do this by reshaping your l_shp differently by reshaping directly to (10, something). There are lots of other options. </p>\n\n<p>Btw, you can look at layer shapes with their 'output_shape' property.</p>\n" }, { "AnswerId": "32401611", "CreationDate": "2015-09-04T15:26:38.513", "ParentId": null, "OwnerUserId": "1139393", "Title": null, "Body": "<p>Is it possible that ResultsBatch contains numbers from 1 to 10?</p>\n\n<p>If so, you could try testing with </p>\n\n<pre><code>train_fn(TextBatch,ResultsBatch - 1)\n</code></pre>\n\n<p>to convert the targets to 0,1,2,..,9 instead.</p>\n" } ]
32,288,399
0
<machine-learning><computer-vision><svm><libsvm><caffe>
2015-08-29T16:44:54.990
null
2,832,033
How can I train an SVM classifier with leveldb datatype, generated from caffe framework?
<p>I've fed a bunch of images to the caffe framework (AlexNet) and the last feature descriptors have been extracted and stored in LEVELDB. Now, I want to train a linear SVM classifier on these descriptors. Therefore, I'm wondering whether there is any way to train the SVM with LEVELDB data or not. Thanks</p> <p>EDIT: Working with <a href="https://github.com/jostosh/caffe-utils" rel="nofollow">this tool</a> solved the problem</p>
[]
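<p>For reference, one way to pull the stored descriptors out of LevelDB and feed them to a linear SVM in Python (a sketch, assuming the features were written as caffe Datum records and that the py-leveldb and scikit-learn packages are available; the database path is a placeholder):</p>
<pre><code>import leveldb
import numpy as np
import caffe
from sklearn.svm import LinearSVC

db = leveldb.LevelDB('path/to/features_leveldb')
X, y = [], []
for key, value in db.RangeIter():
    datum = caffe.proto.caffe_pb2.Datum()
    datum.ParseFromString(value)          # each entry is a serialized Datum
    X.append(caffe.io.datum_to_array(datum).flatten())
    y.append(datum.label)

clf = LinearSVC()
clf.fit(np.array(X), np.array(y))
</code></pre>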
32,302,599
1
<deep-learning><theano><lasagne>
2015-08-31T00:43:25.743
32,310,274
5,081,447
Modifying perform function in Theano.tensor.nnet.softmax
<p>I have just begun using lasagne and Theano to do some machine learning in Python.</p> <p>I am trying to modify the softmax class in Theano. I want to change how the activation function (softmax) is calculated. Instead of dividing e_x by e_x.sum(axis=1), I want to divide e_x by the sum of three consecutive numbers.</p> <p>For instance, the result will be as follows:</p> <pre></pre> <p>and so on...</p> <p>The problem is that I cannot quite grasp how theano carries out the computation.</p> <p>Here is my main question. Does it suffice to just change the perform() function in the softmax class? </p> <p>Here is the original perform() function:</p> <pre></pre> <p>Here is my modified perform():</p> <pre></pre> <p>With the current code, I am getting an 'unorderable types: int() > str()' error when I use the predict method in lasagne.</p>
[ { "AnswerId": "32310274", "CreationDate": "2015-08-31T11:34:19.807", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>For something like this you're probably better off constructing a custom softmax via symbolic expressions rather than creating (or modifying) an operation.</p>\n\n<p>Your custom softmax can be defined in terms of symbolic expressions. Doing it this way will give you gradients (and other Theano operation bits and pieces) \"for free\" but might run slightly slower than a custom operation could.</p>\n\n<p>Here's an example:</p>\n\n<pre><code>import numpy\nimport theano\nimport theano.tensor as tt\n\nx = tt.matrix()\n\n# Use the built in softmax operation\ny1 = tt.nnet.softmax(x)\n\n# A regular softmax operation defined via ordinary Theano symbolic expressions\ny2 = tt.exp(x)\ny2 = y2 / y2.sum(axis=1)[:, None]\n\n# Custom softmax operation\ndef custom_softmax(a):\n b = tt.exp(a)\n b1 = b[:, :3] / b[:, :3].sum(axis=1)[:, None]\n b2 = b[:, 3:] / b[:, 3:].sum(axis=1)[:, None]\n return tt.concatenate([b1, b2], axis=1)\ny3 = custom_softmax(x)\n\nf = theano.function([x], outputs=[y1, y2, y3])\n\nx_value = [[.1, .2, .3, .4, .5, .6], [.1, .3, .5, .2, .4, .6]]\ny1_value, y2_value, y3_value = f(x_value)\nassert numpy.allclose(y1_value, y2_value)\nassert y3_value.shape == y1_value.shape\na = numpy.exp(.1) + numpy.exp(.2) + numpy.exp(.3)\nb = numpy.exp(.4) + numpy.exp(.5) + numpy.exp(.6)\nc = numpy.exp(.1) + numpy.exp(.3) + numpy.exp(.5)\nd = numpy.exp(.2) + numpy.exp(.4) + numpy.exp(.6)\nassert numpy.allclose(y3_value, [\n [numpy.exp(.1) / a, numpy.exp(.2) / a, numpy.exp(.3) / a, numpy.exp(.4) / b, numpy.exp(.5) / b, numpy.exp(.6) / b],\n [numpy.exp(.1) / c, numpy.exp(.3) / c, numpy.exp(.5) / c, numpy.exp(.2) / d, numpy.exp(.4) / d, numpy.exp(.6) / d]\n]), y3_value\n</code></pre>\n" } ]
32,320,919
0
<gpu><nvidia><deep-learning><caffe><nvidia-digits>
2015-08-31T22:32:45.983
null
4,058,368
Running DIGITS in CPU mode only
<p><a href="https://i.stack.imgur.com/z2nR8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z2nR8.jpg" alt="enter image description here"></a></p> <p>I am trying to build a model in DIGITS. I am using CPU only to do the learning.. However, I get a CUDA driver version error although I am not using GPU. What could this problem be? I have attached my solver.prototxt below. <a href="https://i.stack.imgur.com/3dLOS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3dLOS.jpg" alt="enter image description here"></a></p>
[]
32,321,943
1
<lua><neural-network><convolution><torch>
2015-09-01T00:44:43.373
32,357,456
5,240,885
Custom Spatial Convolution In Torch
<p>I need to perform a custom spatial convolution in Torch. Rather than simply multiplying each input pixel by a weight for that pixel and adding them together with the filter's bias to form each output pixel, I need to apply a more complex mathematical function to the input pixels before adding them together.</p> <p>I know how to do this, but I do not know a GOOD way to do this. The best way I've come up with is to take the full input tensor, create a bunch of secondary tensors that are "views" of the original without allocating additional memory, put those into a Replicate layer (the output filter count being the replication count), and feed that into a ParallelTable layer containing a bunch of regular layers that have their parameters shared between filters.</p> <p>The trouble is, even though this is fine memory-wise with a very manageable overhead, we're talking <strong>inputwidth</strong> × <strong>inputheight</strong> × <strong>inputdepth</strong> × <strong>outputdepth</strong> mini-networks here. Maybe there's some way to create massive "long and tall" networks that work on the entire replicated input set at once, but how do I create layers that are partially-connected (like convolutions) instead of fully-connected?</p> <p>I would have liked to just use inheritance to create a special copy of the regular <strong>SpatialConvolution "class"</strong> and modify it, but I can't even try because it's implemented in an external C library. I can't just use regular layers before a regular SpatialConvolution layer because I need to do my math with different weights and biases for each filter (shared between applications of the same filter to different input coordinates).</p>
[ { "AnswerId": "32357456", "CreationDate": "2015-09-02T15:39:43.890", "ParentId": null, "OwnerUserId": "4850610", "Title": null, "Body": "<p>Good question. You made me give some serious thought.\nYour approach has a flaw: it does not allow to take advantage of vectorized computations since each mini-network works independently. </p>\n\n<p><strong>My idea is as follows:</strong></p>\n\n<p>Suppose network's <code>input</code> and <code>output</code> are 2D tensors. We can produce (efficiently, without memory copying) an auxiliary 4D tensor \n<code>rf_input (kernel_size x kernel_size x output_h x output_w)</code> \nsuch that <code>rf_input[:, :, k, l]</code> is a 2D tensor of size <code>kernel_size x kernel_size</code> containing a receptive field which <code>output[k, l]</code> will be gotten from. Then we iterate over positions inside the kernel <code>rf_input[i, j, :, :]</code> getting pixels at position <code>(i, j)</code> inside all receptive fields and computing their contribution to each <code>output[k, l]</code> at once using vectorization.</p>\n\n<p><strong>Example:</strong></p>\n\n<p>Let our \"convolving\" function be, for example, a product of tangents of sums:</p>\n\n<p><a href=\"https://i.stack.imgur.com/hRSLt.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hRSLt.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then its partial derivative w.r.t. the <code>input</code> pixel at position <code>(s,t)</code> in its receptive field is</p>\n\n<p><a href=\"https://i.stack.imgur.com/pP8U1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pP8U1.png\" alt=\"enter image description here\"></a></p>\n\n<p>Derivative w.r.t. <code>weight</code> is the same.</p>\n\n<p>At the end, of course, we must sum up gradients from different <code>output[k,l]</code> points. For example, each <code>input[m, n]</code> contributes to at most <code>kernel_size^2</code> outputs as a part of their receptive fields, and each <code>weight[i, j]</code> contributes to all <code>output_h x output_w</code> outputs.</p>\n\n<p><strong>Simple implementation may look like this:</strong></p>\n\n<pre><code>require 'nn'\nlocal CustomConv, parent = torch.class('nn.CustomConv', 'nn.Module')\n\n-- This module takes and produces a 2D map. 
\n-- To work with multiple input/output feature maps and batches, \n-- you have to iterate over them or further vectorize computations inside the loops.\n\nfunction CustomConv:__init(ker_size)\n parent.__init(self)\n\n self.ker_size = ker_size\n self.weight = torch.rand(self.ker_size, self.ker_size):add(-0.5)\n self.gradWeight = torch.Tensor(self.weight:size()):zero()\nend\n\nfunction CustomConv:_get_recfield_input(input)\n local rf_input = {}\n for i = 1, self.ker_size do\n rf_input[i] = {}\n for j = 1, self.ker_size do\n rf_input[i][j] = input[{{i, i - self.ker_size - 1}, {j, j - self.ker_size - 1}}]\n end\n end\n return rf_input\nend\n\nfunction CustomConv:updateOutput(_)\n local output = torch.Tensor(self.rf_input[1][1]:size())\n -- Kernel-specific: our kernel is multiplicative, so we start with ones\n output:fill(1) \n --\n for i = 1, self.ker_size do\n for j = 1, self.ker_size do\n local ker_pt = self.rf_input[i][j]:clone()\n local w = self.weight[i][j]\n -- Kernel-specific\n output:cmul(ker_pt:add(w):tan())\n --\n end\n end\n return output\nend\n\nfunction CustomConv:updateGradInput_and_accGradParameters(_, gradOutput)\n local gradInput = torch.Tensor(self.input:size()):zero()\n for i = 1, self.ker_size do\n for j = 1, self.ker_size do\n local ker_pt = self.rf_input[i][j]:clone()\n local w = self.weight[i][j]\n -- Kernel-specific\n local subGradInput = torch.cmul(gradOutput, torch.cdiv(self.output, ker_pt:add(w):tan():cmul(ker_pt:add(w):cos():pow(2))))\n local subGradWeight = subGradInput\n --\n gradInput[{{i, i - self.ker_size - 1}, {j, j - self.ker_size - 1}}]:add(subGradInput)\n self.gradWeight[{i, j}] = self.gradWeight[{i, j}] + torch.sum(subGradWeight)\n end\n end\n return gradInput\nend\n\nfunction CustomConv:forward(input)\n self.input = input\n self.rf_input = self:_get_recfield_input(input)\n self.output = self:updateOutput(_)\n return self.output\nend\n\nfunction CustomConv:backward(input, gradOutput)\n gradInput = self:updateGradInput_and_accGradParameters(_, gradOutput)\n return gradInput\nend\n</code></pre>\n\n<p>If you change this code a bit: </p>\n\n<pre><code>updateOutput: \n output:fill(0)\n [...]\n output:add(ker_pt:mul(w))\n\nupdateGradInput_and_accGradParameters:\n local subGradInput = torch.mul(gradOutput, w)\n local subGradWeight = torch.cmul(gradOutput, ker_pt)\n</code></pre>\n\n<p>then it will work exactly as <code>nn.SpatialConvolutionMM</code> with zero <code>bias</code> (I've tested it).</p>\n" } ]
32,324,961
1
<lua><torch>
2015-09-01T06:30:59.643
null
2,293,069
Torch nngraph building nodes with nn.Linear
<p>Hi, I am new to torch/lua and I am doing the <a href="https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/practicals/practical5.pdf" rel="nofollow">practical5</a> of the oxford machine learning course: </p> <p>What I am trying to implement is a simple layer: m = x1 + x2 cmul linear(x3), where cmul is the element-wise multiply and linear is just a linear layer.</p> <p>My code looks like:</p> <pre></pre> <p>However, I got the error message:</p> <pre></pre> <p>And even if I only want aa = nn.Linear(10,20)()</p> <p>I get the same error as above.</p> <p>Even if I follow the <a href="https://github.com/torch/nngraph/#two-hidden-layers-mlp" rel="nofollow">example</a> on the torch github: </p> <p>I get the same error.</p> <p><strong>UPDATE</strong> and SOLVED:</p> <p>I missed importing a package. Although <code>nn</code> and <code>nngraph</code> are both referred to as <code>nn</code> in the code, they are actually different packages.</p> <p>Should do</p> <pre>require 'nngraph'</pre> <p>And I only did</p> <pre>require 'nn'</pre>
[ { "AnswerId": "39314203", "CreationDate": "2016-09-04T06:36:28.430", "ParentId": null, "OwnerUserId": "6792483", "Title": null, "Body": "<p>Last argument in this line should be x3 instead of x1:</p>\n\n<pre><code>l3 = nn.Linear(params.x3_size1, params.x3_size2)(x3)\n</code></pre>\n" } ]
32,337,591
4
<lua><luajit><torch>
2015-09-01T17:27:44.140
null
3,113,501
How to catch ctrl-c in lua when ctrl-c is sent via the command line
<p>I would like to know when the user from a command line presses control-c so I can save some stuff. </p> <p>How do I do this? I've looked but haven't really seen anything.</p> <p>Note: I'm somewhat familiar with lua, but I'm no expert. I mostly use lua to use the library Torch (<a href="http://torch.ch/">http://torch.ch/</a>)</p>
[ { "AnswerId": "32409607", "CreationDate": "2015-09-05T04:50:48.193", "ParentId": null, "OwnerUserId": "5129715", "Title": null, "Body": "<p><a href=\"https://msdn.microsoft.com/en-us/library/windows/desktop/ms685049(v=vs.85).aspx\" rel=\"nofollow\">windows : SetConsoleCtrlHandler</a></p>\n\n<p><a href=\"http://www.gnu.org/software/libc/manual/html_node/Basic-Signal-Handling.html\" rel=\"nofollow\">linux : signal</a></p>\n\n<p>There are two behaviors of the signal which are undesirable, which will cause complexities in the code.</p>\n\n<ol>\n<li>Program termination</li>\n<li>Broken IO</li>\n</ol>\n\n<p>The first behavior can be caught and remembered in a C program by using SetConsoleCtrlHandler/signal. This will allow your function to be called, and you can remember that the system needs to shutdown. Then at some point in the lua code you see it has happened (call to check), and perform your tidy up and shutdown.</p>\n\n<p>The second behavior, is that a blocking operation (read/write) will be cancelled by the signal, and the operation will be unfinished. That would need to be checked at each IO event, and then re-started, or cancelled as appropriate.</p>\n" }, { "AnswerId": "32471856", "CreationDate": "2015-09-09T05:54:17.147", "ParentId": null, "OwnerUserId": "2328287", "Title": null, "Body": "<p>There exists io libraries that support this.\nI know zmq and libuv</p>\n\n<p>Libuv example with lluv binding - <a href=\"https://github.com/moteus/lua-lluv/blob/master/examples/sig.lua\" rel=\"nofollow\">https://github.com/moteus/lua-lluv/blob/master/examples/sig.lua</a></p>\n\n<p>ZeroMQ return EINTR from poll function when user press Ctrl-C</p>\n\n<p>But I do not handle thi byself</p>\n" }, { "AnswerId": "32598961", "CreationDate": "2015-09-16T02:51:33.307", "ParentId": null, "OwnerUserId": "4808555", "Title": null, "Body": "<pre><code>require('sys')\nsys.catch_ctrl_c()\n</code></pre>\n\n<p>I use this to catch the exit from cli.</p>\n" }, { "AnswerId": "34409274", "CreationDate": "2015-12-22T05:52:06.407", "ParentId": null, "OwnerUserId": "320911", "Title": null, "Body": "<p>Implementing a <a href=\"https://en.wikipedia.org/wiki/Unix_signal#SIGINT\" rel=\"noreferrer\"><code>SIGINT</code></a> handler is straightforward using the excellent <a href=\"https://luarocks.org/modules/gvvaughan/luaposix\" rel=\"noreferrer\">luaposix</a> library:</p>\n\n<pre><code>local signal = require(\"posix.signal\")\n\nsignal.signal(signal.SIGINT, function(signum)\n io.write(\"\\n\")\n -- put code to save some stuff here\n os.exit(128 + signum)\nend)\n</code></pre>\n\n<p>Refer to the <a href=\"https://luaposix.github.io/luaposix/modules/posix.signal.html\" rel=\"noreferrer\">posix.signal</a> module's API documentation for more information.</p>\n" } ]
32,339,193
0
<python><python-2.7><theano><keras>
2015-09-01T19:11:03.393
null
4,513,062
cannot import name Cop from keras.models python
<p>Greetings, Dear Community,</p> <p>I am trying to use the python keras package and I got this error. I am running this on an oracle-linux virtual box. Is it something to do with it expecting some kind of GPU box?</p> <p>Thanks for the help.</p> <pre></pre> <p>The line from dnn.py</p> <pre></pre>
[]
32,341,493
1
<python><machine-learning><theano><cox-regression>
2015-09-01T21:53:55.767
null
2,723,734
Negative log likelihood in theano (cox regression)
<p>I'm trying to implement cox regression in theano. </p> <p>I'm using the logistic regression tutorial (<a href="http://deeplearning.net/tutorial/logreg.html" rel="nofollow">http://deeplearning.net/tutorial/logreg.html</a>) as a framework and replacing the logistic log likelihood (LL) function by the cox regression LL function (<a href="https://en.wikipedia.org/wiki/Proportional_hazards_model#The_partial_likelihood" rel="nofollow">https://en.wikipedia.org/wiki/Proportional_hazards_model#The_partial_likelihood</a>). </p> <p>Here's what I have so far:</p> <pre></pre> <p>Basically, I need to sum over LL_i (where i is 0 to ytime.shape - 1). But I'm not sure how to do this. Should I use the scan function? </p>
[ { "AnswerId": "32343254", "CreationDate": "2015-09-02T01:20:56.750", "ParentId": null, "OwnerUserId": "2723734", "Title": null, "Body": "<p>Figured it out. The trick was not to use the scan function, but to convert the double summation to a pure matrix operation. </p>\n" } ]
32,342,672
1
<machine-learning><image-recognition><deep-learning><caffe><conv-neural-network>
2015-09-01T23:59:53.453
null
933,728
Convolutional Neural Network - Pretrained model with fast feature extraction
<p>What is the fastest pretrained network I can get for image recognition?<br> The biggest source is probably <a href="https://github.com/BVLC/caffe/wiki/Model-Zoo" rel="nofollow">Model Zoo</a>, but I cannot get precise information about feature extraction time.</p> <p>Right now I am using the <a href="https://gist.github.com/ksimonyan/fd8800eeb36e276cd6f9#file-readme-md" rel="nofollow">VGG_CNN_S</a> model, which gives me around 80ms for processing 1 image.</p> <p>As far as I know you can get down to 1 ms for one image.</p> <p>Which pretrained network available for download is the fastest in extracting features?</p>
[ { "AnswerId": "32344752", "CreationDate": "2015-09-02T04:43:28.227", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Good and efficient deep architectures are introduced quite frequently, so I suppose any answer you'll get here will have a short expiration date.</p>\n\n<p>Regarding \"feature extraction time\", I suppose you mean the duration of the forward pass only - you are not interested in training time. This time will also depend on the layer which you want to extract: the deeper the layer, the longer the time (for the same net) since it requires more computations to get deeper into any specific net. However, for different nets it often takes different times to reach the same \"depth\" since the computations at each depth are different.</p>\n\n<p>Nevertheless, roughly when Oxford VGG lab intorduced VGG_CNN_S, Google labs came up with <a href=\"http://vision.princeton.edu/pvt/GoogLeNet/\" rel=\"nofollow\">GoogLeNet</a>: it is a very deep architecture for recognition, but with an extra effort on keeping the computational burden within reason. It's worth giving it a try. </p>\n" } ]
32,343,282
1
<c++><machine-learning><deep-learning><caffe><glog>
2015-09-02T01:24:02.350
32,346,978
1,377,127
What does "InitGoogleLogging" do?
<p>I've been modifying an <a href="https://github.com/BVLC/caffe/blob/master/examples/cpp_classification/classification.cpp" rel="noreferrer">example C++ program</a> from the Caffe deep learning library and I noticed this code on <a href="https://github.com/BVLC/caffe/blob/master/examples/cpp_classification/classification.cpp#L234" rel="noreferrer">line 234</a> that doesn't appear to be referenced again.</p> <pre></pre> <p>The argument provided is a prototxt file which defines the parameters of the deep learning model I'm calling. What confuses me is where the results from this line go. I know they end up being used in the program because if I make a mistake in the prototxt file then the program will crash. However, I'm struggling to see how the data is passed to the class performing the classification tasks.</p>
[ { "AnswerId": "32346978", "CreationDate": "2015-09-02T07:25:28.993", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>First of all, <code>argv[0]</code> is <em>not</em> the first argument you pass to your executable, but rather the <a href=\"https://stackoverflow.com/a/2051031/1714410\">executable name</a>. So you are passing to <code>::google::InitGoogleLogging</code> the executable name and not the prototxt file.<br>\n<code>'glog'</code> module (google logging) is using this name to decorate the log entries it outputs.</p>\n\n<p>Second, caffe is using google logging (<a href=\"https://google-glog.googlecode.com/svn/trunk/\" rel=\"nofollow noreferrer\">aka <code>'glog'</code></a>) as its logging module, and hence this module must be initialized once when running caffe. This is why you have this </p>\n\n<pre><code>::google::InitGoogleLogging(argv[0]);\n</code></pre>\n\n<p>in your code.</p>\n" } ]
32,346,763
1
<python-2.7><cuda><gpu><theano><lasagne>
2015-09-02T07:14:06.923
32,347,299
3,006,092
Running Python lasagne with CUDNN in Ubuntu 14.04 Linux
<p>I am on a Linux 3.16.0-30-generic #40~14.04.1-Ubuntu x86_64 GNU/Linux machine with an NVIDIA Corporation GF108 [GeForce GT 430] [10de:0de1] (rev a1) graphics card. </p> <p>I am trying to run the lasagne package with GPU enabled; however, on running it, I get an ImportError:</p> <pre></pre> <p>Currently, I have installed CUDA and successfully compiled all the samples, as well as cuDNN.</p> <pre></pre> <p>I get this output:</p> <pre></pre> <p>I installed all these packages from files downloaded from their repositories.</p>
[ { "AnswerId": "32347299", "CreationDate": "2015-09-02T07:43:10.537", "ParentId": null, "OwnerUserId": "3006092", "Title": null, "Body": "<p>I finally figured out the problem! My system's GPU cannot/is not supported by <code>cudNN</code>, unfortunately. :(</p>\n" } ]
32,348,618
0
<pandas><machine-learning><neural-network><theano><deep-learning>
2015-09-02T08:49:55.473
null
5,197,007
Load data for RNN
<p>In RNN training examples, I noticed that input data and target data are all 3-dimensional arrays, and that one needs to define the time-step delay between input and output.</p> <pre></pre> <p>I wanted to load custom data for RNN training - input vector=1, output vector=1, time_steps=1 (see attached data1a.csv). Reshape doesn't work here. Could anyone please illustrate how to do it?</p> <pre></pre> <p>Thanks!</p> <p>data link: <a href="https://groups.google.com/forum/#!topic/theano-users/--X2zGSd8rw" rel="nofollow">links</a> <a href="https://groups.google.com/group/theano-users/attach/28565ada37494d/data1a.csv?part=0.1&amp;authuser=0" rel="nofollow">data1a.csv</a></p> <p>I'm just getting some ideas about it, but don't know how to continue:</p> <pre></pre> <p>What is the next move? </p> <pre></pre>
[]
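<p>Since this question went unanswered, here is a hedged sketch of the usual reshaping step. The column layout of <code>data1a.csv</code> is an assumption (one input column, one target column); with input vector = 1, output vector = 1 and time_steps = 1, each row becomes its own length-one sequence:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.read_csv('data1a.csv')              # column layout assumed below
x = df.iloc[:, 0].values.astype('float32')  # inputs
y = df.iloc[:, 1].values.astype('float32')  # targets

# (n_samples, time_steps, n_features) = (n_samples, 1, 1)
X = x.reshape(-1, 1, 1)
Y = y.reshape(-1, 1, 1)
print(X.shape, Y.shape)
</code></pre>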
32,352,820
1
<python><caffe>
2015-09-02T12:12:09.343
null
416,645
Caffe MemoryData Layer and Solver interface
<p>I am trying to train a network, but rather than using lmdb or leveldb, I am feeding data to my network on the fly. So I am following the procedure outlined below:</p> <ol> <li>My data is loaded in a Memory Data Layer. </li> <li>I create a mini batch using a python script. </li> <li>Set data and label as </li> <li>After that I call </li> </ol> <p>Here solver is of type SGDSolver. Now my question is: what is the difference between <code>solver.solve()</code> and <code>solver.step()</code>? </p> <p>Secondly, this approach doesn't let me have a memory data layer for the test network. Is there any workaround for that?</p> <p>My solver.prototxt looks like</p> <pre></pre> <p>With my approach, every 20th iteration the network displays some output (loss etc.). Somehow the loss stays constant over some number of iterations; what could be the reason for that?</p>
[ { "AnswerId": "39089447", "CreationDate": "2016-08-22T22:05:15.367", "ParentId": null, "OwnerUserId": "4785185", "Title": null, "Body": "<ol>\n<li>What is the difference between solver.solve and solver.step?</li>\n</ol>\n\n<p><strong>solve</strong> does the entire training run, to whatever limits you've set -- usually the iteration limit. <strong>step</strong> does only the specified number of iterations.</p>\n\n<ol start=\"2\">\n<li>How do I get a memory data layer for test network?</li>\n</ol>\n\n<p>If you're not reading from a supported data channel / format, I <em>think</em> you have to write a customer input routine (your <strong>data</strong> package).</p>\n\n<ol start=\"3\">\n<li>Loss stays constant over multiple iterations, what could be the reason?</li>\n</ol>\n\n<p>There are several possibilities, depending on the surrounding effects. If the loss shows only one value ever, then your back propagation is a likely culprit. Perhaps you're not properly connected to the data set, and you're not getting the expected classifications fed in.</p>\n\n<p>If the loss has a temporary stability but then converges decently, don't worry about it; this is likely an effect of training ordering.</p>\n\n<p>If the loss declines decently and then settles at a fixed value, then you're also doing well: the training converged before it ran out of iterations.</p>\n" } ]
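<p>A minimal sketch of the workflow in the question, under the assumption that the train net's first layer is a MemoryData layer; the solver path, batch size and shapes below are placeholders. <code>set_input_arrays</code> is the pycaffe call for feeding MemoryData layers:</p> <pre><code>import numpy as np
import caffe

solver = caffe.SGDSolver('solver.prototxt')  # placeholder path

# One mini-batch for a MemoryData layer: N x C x H x W float32 images
# plus N float32 labels.
data = np.random.rand(64, 3, 32, 32).astype(np.float32)
labels = np.random.randint(0, 10, size=64).astype(np.float32)

solver.net.set_input_arrays(data, labels)
solver.step(1)  # one forward/backward pass with a weight update;
                # solver.solve() would instead run to the iteration limit
</code></pre>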
32,353,509
3
<caffe><openblas>
2015-09-02T12:43:22.147
null
4,996,964
"/usr/bin/ld: cannot find -lopenblas" error in Caffe compilation
<p>When I was compiling Caffe, I got this error, even though OpenBLAS is installed:</p> <pre></pre> <p>Is there a solution for it?</p>
[ { "AnswerId": "36215963", "CreationDate": "2016-03-25T07:44:15.020", "ParentId": null, "OwnerUserId": "1511337", "Title": null, "Body": "<p>I saw the similar problem (I'm compiling caffe again for some reason).\nI found the library file the builder is looking for (-lcblas or -latlas means libcblas.so and libatlas.so) are under /usr/lib64/atlas. So just added symbolic links under /usr/lib64 like this.</p>\n\n<pre><code>sudo ln /usr/lib64/atlas/libcblas.so.3.0 /usr/lib64/libcblas.so\nsudo ln -s /usr/lib64/atlas/libatlas.so.3.0 /usr/lib64/libatlas.so\n</code></pre>\n\n<p>But I guess more proper method is to set Makefile.config (the CBLAS path). (I thought the default path will do away with it reading the comment saying so, but it did not.) Hope this helps anyone.</p>\n" }, { "AnswerId": "37448511", "CreationDate": "2016-05-25T22:11:36.693", "ParentId": null, "OwnerUserId": "5043076", "Title": null, "Body": "<p>Including the base packs even after cloning OpenBlas and making will link the appropriate libraries in 14.04 and 16.</p>\n\n<pre><code>apt install liblapack-dev liblapack3 libopenblas-base libopenblas-dev\n</code></pre>\n\n<p>apt install liblapack-dev liblapack3 libopenblas-base libopenblas-dev</p>\n" }, { "AnswerId": "34243307", "CreationDate": "2015-12-12T18:17:05.440", "ParentId": null, "OwnerUserId": "2807033", "Title": null, "Body": "<p>I faced the same problem. Even adding library directory \"/opt/OpenBLAS/lib/\" to ldconfig cache didn't help (as my libopenblas.so is at \"/opt/OpenBLAS/lib/libopenblas.so\").</p>\n\n<p>Using cmake helped me. Try this from caffe root directory:</p>\n\n<p><code>mkdir build\ncd build\ncmake -DBLAS=open ..\nmake all\nmake runtest</code></p>\n\n<p>If you need to use make, add the symlink of libopenblas.so to /usr/lib. I did the following:</p>\n\n<p><code>ln -s /opt/OpenBLAS/lib/libopenblas.so /usr/lib/libopenblas.so</code></p>\n" } ]
32,368,577
1
<neural-network><deep-learning><caffe>
2015-09-03T06:39:38.553
null
5,295,427
How do I use a pre-trained Caffe model?
<p>I have some questions about how to actually interact with a pre-trained Caffe model. In my case I'm using a model for <a href="http://places.csail.mit.edu" rel="nofollow">scene recognition</a>.</p> <p>In the caffe git repository, there are some code examples in Python and C++ on the implementations of Image Classifiers. However, those do not apply to my use case (since they only classify the input image as ONE class).</p> <p>My goal is an application that takes an input image (jpg) and outputs the highest predicted class label for each pixel in the input image (i.e., indices for sky, beach, road, car).</p> <p>Could anyone give me some pointers on how to proceed? </p> <p>There already seem to exist implementations for this. This demo (<a href="http://places.csail.mit.edu/demo.html" rel="nofollow">http://places.csail.mit.edu/demo.html</a>) is kind of what I want. </p> <p>Thank you!</p>
[ { "AnswerId": "32373937", "CreationDate": "2015-09-03T11:07:32.110", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>What you are looking for is not image classification, but rather <em>semantic segmentation</em>.</p>\n\n<p>A recent work, by Jonathan Long, Evan Shelhamer and Trevor Darrell is based on Caffe, and can be found <a href=\"https://github.com/BVLC/caffe/wiki/Model-Zoo#fcn\" rel=\"nofollow\">here</a>. It uses fully convolutional network, that is, a network with no <code>\"InnerProduct\"</code> layers only convolutional layers, thus capable of producing outputs with different sizes for different sizes of inputs.</p>\n" } ]
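<p>To sketch how per-pixel labels fall out of a fully convolutional net in pycaffe: the prototxt/caffemodel paths and the blob names <code>data</code> and <code>score</code> below are assumptions -- use the names from the actual model:</p> <pre><code>import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

image = np.random.rand(3, 500, 500).astype(np.float32)  # C x H x W, preprocessed
net.blobs['data'].reshape(1, *image.shape)
net.blobs['data'].data[0] = image
net.forward()

# Output blob: 1 x K x H x W per-class scores; argmax over the class
# axis gives one label index (sky, road, ...) per pixel.
labels = net.blobs['score'].data[0].argmax(axis=0)
print(labels.shape)  # (H, W)
</code></pre>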
32,379,841
2
<lua><torch><cmdline-args>
2015-09-03T15:30:23.690
32,380,948
49,985
Lua cmd-line string
<p>Say I call Lua with this cmd: </p> <pre class="lang-bsh prettyprint-override"></pre> <p>How can I access this same cmd-line string from Lua?</p> <p>I know about the <code>arg</code> table, but it removes all quotes from the original command string, making it difficult to reconstruct:</p> <pre></pre> <p>If I can save the exact string to a file from within Lua, I can easily call it again later.</p>
[ { "AnswerId": "32380948", "CreationDate": "2015-09-03T16:26:55.833", "ParentId": null, "OwnerUserId": "2726734", "Title": null, "Body": "<p>@peterpi is correct that the shell is interpreting the command and as a result stripping away the quotes. However, reconstructing the command exactly is not really necessary to have the shell interpret the command the same way as before.</p>\n\n<p>For simple cases concatenating the arguments to the script is often enough:</p>\n\n<pre><code>local command = table.concat(arg, ' ', -1, #arg)\n</code></pre>\n\n<p>This will fail if the quotes are actually necessary, most commonly when an argument contains a space or shell character, so quoting everything is easy and somewhat more robust, but not pretty.</p>\n\n<p>Here is an example with a Lua pattern to check for special (bash) shell characters and spaces to decide if and which quotes are necessary. It may not be complete but it handles filenames, most strings, and numbers as arguments.</p>\n\n<pre><code>local mod_arg = { }\nfor k, v in pairs(arg) do\n if v:find\"'\" then\n mod_arg[k] = '\"'..v..'\"'\n elseif v:find'[%s$`&gt;&lt;|#]' then\n mod_arg[k] = \"'\"..v..\"'\" \n else\n mod_arg[k] = v\n end\nend \nlocal command = table.concat(mod_arg, ' ', -1, #mod_arg)\nprint(command)\n</code></pre>\n" }, { "AnswerId": "32380157", "CreationDate": "2015-09-03T15:46:34.640", "ParentId": null, "OwnerUserId": "819046", "Title": null, "Body": "<p>No doubt somebody will prove me wrong, but generally I don't think this is possible. It's the <em>shell</em> rather than luajit that takes the quotes away and chops the line up into individual tokens.</p>\n" } ]
32,379,878
2
<ipython-notebook><caffe><pycaffe>
2015-09-03T15:32:43.747
32,509,003
2,191,652
Cheat sheet for caffe / pycaffe?
<p>Does anyone know whether there is a cheat sheet for all important pycaffe commands? So far I have been using caffe only via the Matlab interface and terminal + bash scripts.</p> <p>I want to shift towards using ipython and work through the ipython notebook examples. However, I find it hard to get an overview of all the functions inside the caffe module for python. (I'm also quite new to python.)</p>
[ { "AnswerId": "32509003", "CreationDate": "2015-09-10T18:16:05.333", "ParentId": null, "OwnerUserId": "5293046", "Title": null, "Body": "<p>The <a href=\"https://github.com/BVLC/caffe/tree/master/python/caffe/test\">pycaffe tests</a> and <a href=\"https://github.com/BVLC/caffe/blob/master/python/caffe/_caffe.cpp\">this file</a> are the main gateway to the python coding interface.</p>\n\n<p>First of all, you would like to choose whether to use Caffe with CPU or GPU. It is sufficient to call <code>caffe.set_mode_cpu()</code> or <code>caffe.set_mode_gpu()</code>, respectively.</p>\n\n<h2>Net</h2>\n\n<p>The main class that the pycaffe interface exposes is the <code>Net</code>. It has two constructors:</p>\n\n<pre><code>net = caffe.Net('/path/prototxt/descriptor/file', caffe.TRAIN)\n</code></pre>\n\n<p>which simply create a <code>Net</code> (in this case using the <em>Data Layer</em> specified for training), or</p>\n\n<pre><code>net = caffe.Net('/path/prototxt/descriptor/file', '/path/caffemodel/weights/file', caffe.TEST)\n</code></pre>\n\n<p>which creates a <code>Net</code> and automatically loads the weights as saved in the provided <em>caffemodel</em> file - in this case using the <em>Data Layer</em> specified for testing.</p>\n\n<p>A <code>Net</code> object has several attributes and methods. They can be found <a href=\"https://github.com/BVLC/caffe/blob/master/python/caffe/pycaffe.py\">here</a>. I will cite just the ones I use more often.</p>\n\n<p>You can access the network blobs by means of <code>Net.blobs</code>. E.g.</p>\n\n<pre><code>data = net.blobs['data'].data\nnet.blobs['data'].data[...] = my_image\nfc7_activations = net.blobs['fc7'].data\n</code></pre>\n\n<p>You can access the parameters (weights) too, in a similar way. E.g.</p>\n\n<pre><code>nice_edge_detectors = net.params['conv1'].data\nhigher_level_filter = net.params['fc7'].data\n</code></pre>\n\n<p>Ok, now it's time to actually feed the net with some data. So, you will use <code>backward()</code> and <code>forward()</code> methods. So, if you want to classify a single image</p>\n\n<pre><code>net.blobs['data'].data[...] = my_image\nnet.forward() # equivalent to net.forward_all()\nsoftmax_probabilities = net.blobs['prob'].data\n</code></pre>\n\n<p>The <code>backward()</code> method is equivalent, if one is interested in computing gradients.</p>\n\n<p>You can save the net weights to subsequently reuse them. It's just a matter of</p>\n\n<pre><code> net.save('/path/to/new/caffemodel/file')\n</code></pre>\n\n<h2>Solver</h2>\n\n<p>The other core component exposed by pycaffe is the <code>Solver</code>. There are several types of solver, but I'm going to use only <code>SGDSolver</code> for the sake of clarity. It is needed in order to train a caffe model.\nYou can instantiate the solver with</p>\n\n<pre><code>solver = caffe.SGDSolver('/path/to/solver/prototxt/file')\n</code></pre>\n\n<p>The <code>Solver</code> will encapsulate the network you are training and, if present, the network used for testing. Note that they are usually the same network, only with a different <em>Data Layer</em>. 
The networks are accessible with</p>\n\n<pre><code> training_net = solver.net\n test_net = solver.test_nets[0] # more than one test net is supported\n</code></pre>\n\n<p>Then, you can perform a solver iteration, that is, a forward/backward pass with weight update, typing just</p>\n\n<pre><code> solver.step(1)\n</code></pre>\n\n<p>or run the solver until the last iteration, with</p>\n\n<pre><code> solver.solve()\n</code></pre>\n\n<h2>Other features</h2>\n\n<p>Note that pycaffe allows you to do more stuff, such as <a href=\"https://github.com/BVLC/caffe/blob/master/python/caffe/test/test_net_spec.py\">specifying the network architecture through a Python class</a> or <a href=\"https://github.com/BVLC/caffe/blob/master/python/caffe/test/test_python_layer.py\">creating a new <em>Layer</em> type</a>.\nThese features are less often used, but they are pretty easy to understand by reading the test cases.</p>\n" }, { "AnswerId": "38427997", "CreationDate": "2016-07-18T02:59:08.387", "ParentId": null, "OwnerUserId": "4481686", "Title": null, "Body": "<p>Please note that the answer by Flavio Ferrara has a litte problem which may cause you waste a lot of time:</p>\n\n<pre><code>net.blobs['data'].data[...] = my_image\nnet.forward()\n</code></pre>\n\n<p>The code above is noneffective if your first layer is a Data type layer, because when <code>net.forward()</code> is called, it will begin from the first layer, and then your inserted data <code>my_image</code> will be covered. So it will show no error but give you totally irrelevant output. The correct way is to assign the start and end layer, for example:</p>\n\n<p><code>net.forward(start='conv1', end='fc')</code></p>\n\n<p>Here is a Github repository of Face Verification Experiment on LFW Dataset, using pycaffe and some matlab code. I guess it could help a lot, especially the <code>caffe_ftr.py</code> file.</p>\n\n<p><a href=\"https://github.com/AlfredXiangWu/face_verification_experiment\">https://github.com/AlfredXiangWu/face_verification_experiment</a></p>\n\n<p>Besides, here are some short example code of using pycaffe for image classification:</p>\n\n<p><a href=\"http://codrspace.com/Jaleyhd/caffe-python-tutorial/\">http://codrspace.com/Jaleyhd/caffe-python-tutorial/</a>\n<a href=\"http://prog3.com/sbdm/blog/u011762313/article/details/48342495\">http://prog3.com/sbdm/blog/u011762313/article/details/48342495</a></p>\n" } ]
32,389,905
2
<c++><linux><posix><caffe><sigaction>
2015-09-04T04:08:40.777
32,390,101
2,467,772
Sigaction and porting Linux code to Windows
<p>I am trying to port <a href="http://caffe.berkeleyvision.org/" rel="noreferrer">caffe</a> (developed for Linux) source code to Windows environment. The problem is at structure at and . The source codes are shown below. My query is which library or code replacement can be done to make this works in Windows.</p> <p>///Header file </p> <pre></pre> <p>///Source file </p> <pre></pre> <p>The errors are </p> <pre></pre>
[ { "AnswerId": "34414651", "CreationDate": "2015-12-22T11:17:10.290", "ParentId": null, "OwnerUserId": "213615", "Title": null, "Body": "<p>Based on @nneonneo:</p>\n\n<pre><code>void handle_signal(int signal) {\n switch (signal) {\n#ifdef _WIN32\n case SIGTERM:\n case SIGABRT:\n case SIGBREAK:\n#else\n case SIGHUP:\n#endif\n got_sighup = true;\n break;\n case SIGINT:\n got_sigint = true;\n break;\n }\n }\nvoid HookupHandler() {\n if (already_hooked_up) {\n LOG(FATAL) &lt;&lt; \"Tried to hookup signal handlers more than once.\";\n }\n already_hooked_up = true;\n#ifdef _WIN32\n signal(SIGINT, handle_signal);\n signal(SIGTERM, handle_signal);\n signal(SIGABRT, handle_signal);\n#else\n struct sigaction sa;\n // Setup the handler\n sa.sa_handler = &amp;handle_signal;\n // Restart the system call, if at all possible\n sa.sa_flags = SA_RESTART;\n // Block every signal during the handler\n sigfillset(&amp;sa.sa_mask);\n // Intercept SIGHUP and SIGINT\n if (sigaction(SIGHUP, &amp;sa, NULL) == -1) {\n LOG(FATAL) &lt;&lt; \"Cannot install SIGHUP handler.\";\n }\n if (sigaction(SIGINT, &amp;sa, NULL) == -1) {\n LOG(FATAL) &lt;&lt; \"Cannot install SIGINT handler.\";\n }\n#endif\n }\n void UnhookHandler() {\n if (already_hooked_up) {\n#ifdef _WIN32\n signal(SIGINT, SIG_DFL);\n signal(SIGTERM, SIG_DFL);\n signal(SIGABRT, SIG_DFL);\n#else\n struct sigaction sa;\n // Setup the sighub handler\n sa.sa_handler = SIG_DFL;\n // Restart the system call, if at all possible\n sa.sa_flags = SA_RESTART;\n // Block every signal during the handler\n sigfillset(&amp;sa.sa_mask);\n // Intercept SIGHUP and SIGINT\n if (sigaction(SIGHUP, &amp;sa, NULL) == -1) {\n LOG(FATAL) &lt;&lt; \"Cannot uninstall SIGHUP handler.\";\n }\n if (sigaction(SIGINT, &amp;sa, NULL) == -1) {\n LOG(FATAL) &lt;&lt; \"Cannot uninstall SIGINT handler.\";\n }\n#endif\n\n already_hooked_up = false;\n }\n }\n</code></pre>\n" }, { "AnswerId": "32390101", "CreationDate": "2015-09-04T04:30:06.827", "ParentId": null, "OwnerUserId": "1204143", "Title": null, "Body": "<p><code>sigaction</code> is part of the UNIX signals API. Windows provides only <code>signal</code>, which doesn't support <code>SIGHUP</code> or any flags (such as <code>SA_RESTART</code>). However, the very basic support is still there, so the code should still work reasonably correctly if you use just <code>signal</code> (and not <code>sigaction</code>).</p>\n" } ]
32,397,226
1
<python><theano><deep-learning>
2015-09-04T11:42:07.813
32,397,924
2,625,087
Theano not running on Windows
<p>I am trying to install theano on Windows using Python 3.4. I am following these instructions: <a href="http://deeplearning.net/software/theano/install_windows.html#install-windows" rel="nofollow noreferrer">Theano on Windows</a>.</p> <p>It creates one file called <b>Theano.egg-link</b> inside python <b>Lib/site-packages</b>,</p> <p>but I am getting this error when trying to call <i>import theano</i>: </p> <p><a href="https://i.stack.imgur.com/sTUnH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sTUnH.png" alt="enter image description here"></a></p> <p>I ran the check and it gave me this window, which indicates everything installed fine. Any help?</p> <p><a href="https://i.stack.imgur.com/5aEps.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5aEps.png" alt="enter image description here"></a></p>
[ { "AnswerId": "32397924", "CreationDate": "2015-09-04T12:19:59.753", "ParentId": null, "OwnerUserId": "1843331", "Title": null, "Body": "<p>This can most likely be fixed by <a href=\"https://github.com/Theano/Theano/archive/master.zip\" rel=\"nofollow\">redownloading the theano project</a>. </p>\n\n<p>As you can see <a href=\"https://github.com/Theano/Theano/search?utf8=%E2%9C%93&amp;q=%2C+e0\" rel=\"nofollow\">here</a>, the code that is giving you that error is not in the current codebase anymore. It now looks like this</p>\n\n<pre><code>def dot(l, r):\n \"\"\"Return a symbolic matrix/dot product between l and r \"\"\"\n rval = NotImplemented\n e0, e1 = None, None\n\n if rval == NotImplemented and hasattr(l, '__dot__'):\n try:\n rval = l.__dot__(r)\n except Exception as e0:\n rval = NotImplemented\n if rval == NotImplemented and hasattr(r, '__rdot__'):\n try:\n rval = r.__rdot__(l)\n except Exception as e1:\n rval = NotImplemented\n if rval == NotImplemented:\n raise NotImplementedError(\"Dot failed for the following reasons:\",\n (e0, e1))\n return rval\n</code></pre>\n" } ]
32,405,035
3
<python><opencv><ubuntu><anaconda><caffe>
2015-09-04T19:09:21.177
32,514,285
1,391,376
caffe installation : opencv libpng16.so.16 linkage issues
<p>I am trying to compile caffe with the python interface on an Ubuntu 14.04 machine.</p> <p>I have installed Anaconda and opencv with . I have also installed all the requirements stipulated by caffe and changed the commented-out blocks in <code>Makefile.config</code> so that PYTHON_LIB and PYTHON_INCLUDE point towards the Anaconda distributions. </p> <p>When I am calling , the following command is issued: </p> <pre></pre> <p>However, it is stopped by the following set of errors: </p> <pre></pre> <p>Following the advice from this question: <a href="https://stackoverflow.com/questions/31962975/caffe-install-on-ubuntu-for-anaconda-with-python-2-7-fails-with-libpng16-so-16-n">Caffe install on ubuntu for anaconda with python 2.7 fails with libpng16.so.16 not found</a>, I tried running the , and obtained the following output:</p> <pre></pre> <p>I then proceeded to add all the required directories into the and of the file (hence the additional and in the call above), including the explicit link to already present in the anaconda libraries. This has, however, not resolved the issue. I also tried adding the file to and , but without any effect.</p> <p>What might the problem be and how could I resolve it?</p>
[ { "AnswerId": "32436803", "CreationDate": "2015-09-07T10:47:13.690", "ParentId": null, "OwnerUserId": "5308701", "Title": null, "Body": "<p>I'm guessing you've added <code>/home/andrei/anaconda/bin</code> to the <code>PATH</code> environment variable, so that <code>libpng-config</code> resolves to <code>/home/andrei/anaconda/bin/libpng16-config</code>, which is what is causing <code>cmake</code> to try and link with libpng v1.6. </p>\n\n<p>Strip the <code>anaconda</code> dir from your <code>PATH</code> environment variable and <code>libpng-config</code> should default to libpng v1.2 in <code>/usr/lib</code> or similar.</p>\n" }, { "AnswerId": "32514285", "CreationDate": "2015-09-11T01:49:40.003", "ParentId": null, "OwnerUserId": "1319871", "Title": null, "Body": "<p>I came across the same problem. I found it similar to <a href=\"https://github.com/BVLC/caffe/issues/2007\">https://github.com/BVLC/caffe/issues/2007</a>, and I solved it by</p>\n\n<pre><code>cd /usr/lib/x86_64-linux-gnu\nsudo ln -s ~/anaconda/lib/libpng16.so.16 libpng16.so.16\nsudo ldconfig\n</code></pre>\n" }, { "AnswerId": "46981654", "CreationDate": "2017-10-27T18:50:23.593", "ParentId": null, "OwnerUserId": "871418", "Title": null, "Body": "<p>It works with me based on the solution here <a href=\"https://github.com/hashdist/hashstack/issues/670\" rel=\"nofollow noreferrer\">https://github.com/hashdist/hashstack/issues/670</a></p>\n\n<pre><code>export LD_LIBRARY_PATH=~/anaconda2/lib:$LD_LIBRARY_PATH\n</code></pre>\n\n<p>This shall be added in <code>~/.bashrc</code> or <code>~/.bash_profile</code></p>\n" } ]
32,409,231
1
<numpy><theano>
2015-09-05T03:29:55.863
32,410,831
588,495
Emulating boolean masks in Theano
<p>I'm porting a numpy expression to theano. The expression finds the number of true positive predictions for each class, given a one-hot matrix of ground truth classes and a one-hot matrix of predicted classes. The numpy code is:</p> <pre></pre> <p>The last expression yields . I've tried using theano's nonzero:</p> <pre></pre> <p>The eval results in , which is a 0-1 mask of the rows of where the prediction is correct. Any suggestions on how to make this work, or different ways of doing it? Thanks.</p>
[ { "AnswerId": "32410831", "CreationDate": "2015-09-05T07:59:54.380", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>Here are three variants demonstrating how to re-implement parts of your numpy code in Theano.</p>\n\n<p>Note that Theano's <code>Unique</code> operation does not support running on the GPU and does not appear to support gradients either. As a result version 3 many not be of much use. Version 2 provides a workaround: compute the unique values outside Theano and pass them in. Version 1 is a Theano implementation of the final line of your numpy code only.</p>\n\n<p>To address your specific issue: there is no need to use <code>nonzero</code>; in this case the indexing works in Theano just like it works in numpy. Maybe you were getting confused between <code>y</code> and <code>Y</code>? (common Python style is to stick with lower case for all variable and parameter names).</p>\n\n<pre><code>import numpy as np\nimport theano\nimport theano.tensor as tt\nimport theano.tensor.extra_ops\n\n\ndef numpy_ver(y, y_hat):\n Y = np.zeros(shape=(len(y), len(np.unique(y))), dtype=np.int64)\n Y_hat = np.zeros_like(Y, dtype=np.int64)\n rows = np.arange(len(y), dtype=np.int64)\n Y[rows, y] = 1\n Y_hat[rows, y_hat] = 1\n return ((Y_hat == Y) &amp; (Y == 1)).sum(axis=0), Y, Y_hat\n\n\ndef compile_theano_ver1():\n Y = tt.matrix(dtype='int64')\n Y_hat = tt.matrix(dtype='int64')\n z = (tt.eq(Y_hat, Y) &amp; tt.eq(Y, 1)).sum(axis=0)\n return theano.function([Y, Y_hat], outputs=z)\n\n\ndef compile_theano_ver2():\n y = tt.vector(dtype='int64')\n y_hat = tt.vector(dtype='int64')\n y_uniq = tt.vector(dtype='int64')\n Y = tt.zeros(shape=(y.shape[0], y_uniq.shape[0]), dtype='int64')\n Y_hat = tt.zeros_like(Y, dtype='int64')\n rows = tt.arange(y.shape[0], dtype='int64')\n Y = tt.set_subtensor(Y[rows, y], 1)\n Y_hat = tt.set_subtensor(Y_hat[rows, y_hat], 1)\n z = (tt.eq(Y_hat, Y) &amp; tt.eq(Y, 1)).sum(axis=0)\n return theano.function([y, y_hat, y_uniq], outputs=z)\n\n\ndef compile_theano_ver3():\n y = tt.vector(dtype='int64')\n y_hat = tt.vector(dtype='int64')\n y_uniq = tt.extra_ops.Unique()(y)\n Y = tt.zeros(shape=(y.shape[0], y_uniq.shape[0]), dtype='int64')\n Y_hat = tt.zeros_like(Y, dtype='int64')\n rows = tt.arange(y.shape[0], dtype='int64')\n Y = tt.set_subtensor(Y[rows, y], 1)\n Y_hat = tt.set_subtensor(Y_hat[rows, y_hat], 1)\n z = (tt.eq(Y_hat, Y) &amp; tt.eq(Y, 1)).sum(axis=0)\n return theano.function([y, y_hat], outputs=z)\n\n\ndef main():\n y = np.array([1, 0, 1, 2, 2], dtype=np.int64)\n y_hat = np.array([2, 0, 1, 1, 0], dtype=np.int64)\n y_uniq = np.unique(y)\n result, Y, Y_hat = numpy_ver(y, y_hat)\n print result\n theano_ver1 = compile_theano_ver1()\n print theano_ver1(Y, Y_hat)\n theano_ver2 = compile_theano_ver2()\n print theano_ver2(y, y_hat, y_uniq)\n theano_ver3 = compile_theano_ver3()\n print theano_ver3(y, y_hat)\n\n\nmain()\n</code></pre>\n" } ]
32,409,825
0
<c++><visual-studio-2012><caffe>
2015-09-05T05:30:30.210
null
2,467,772
LNK2005 error between cpp file and cu file
<p>I am building a DLL for the <a href="http://caffe.berkeleyvision.org/" rel="nofollow">caffe</a> library with Visual Studio 2013. I get linker error LNK2005, I guess from a conflict between <code>flatten_layer.cpp</code> and <code>flatten_layer.cu</code>. The errors are</p> <pre></pre> <p>This is not in both and . Why is this linker error there? I use Visual Studio 2013 to compile.</p> <p>My <code>flatten_layer.cpp</code> and <code>flatten_layer.cu</code> are as follows.</p> <p>flatten_layer.cpp</p> <pre></pre> <p>flatten_layer.cu</p> <pre></pre>
[]
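<p>Since this question received no answers here, a general note: LNK2005 means the same symbol is <em>defined</em> in more than one object file. A self-contained illustration (not Caffe's actual code) of the pattern and the usual fix:</p> <pre><code>// helper.h -- included by both a .cpp and a .cu translation unit.
// A plain definition here is compiled into both objects -&gt; LNK2005:
//   void Helper() { }
// Marking it inline (or declaring it here and defining it in exactly one
// .cpp/.cu file) leaves a single definition per program:
inline void Helper() { }
</code></pre> <p>In a Caffe-style split, the same reasoning applies to explicit template instantiations: if both <code>flatten_layer.cpp</code> and <code>flatten_layer.cu</code> instantiate the same class, the linker sees duplicate definitions.</p>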
32,416,226
2
<c++><visual-studio><caffe>
2015-09-05T18:12:48.247
null
2,467,772
Create layer error at layer_factory.hpp
<p>I am trying to extract features using . I implement it in Visual Studio. My caffe library is also built as a static library and linked to . When I run the code, I get this error: </p> <pre></pre> <p>The error happens at </p> <pre></pre> <p>Initially I thought it was a linker error. Now, looking more carefully, I realize it is not a linker issue. What could the error be?</p>
[ { "AnswerId": "38584507", "CreationDate": "2016-07-26T08:15:59.170", "ParentId": null, "OwnerUserId": "4265775", "Title": null, "Body": "<p>make sure your caffe is compiled with opencv</p>\n" }, { "AnswerId": "33351175", "CreationDate": "2015-10-26T16:48:07.363", "ParentId": null, "OwnerUserId": "5490165", "Title": null, "Body": "<p>I've met recently familiar problem to run my applicatin that has been linked with static library of Caffe (compiled in Visual Studio). There I've found 2 different solutions:</p>\n\n<ol>\n<li><p>Add Caffe project to your solution and set the next option in your main project:</p>\n\n<p>Project properties -> Common Properties -> Framework and References -> Caffe -> Use Library Dependency Inputs -> True</p></li>\n</ol>\n\n<p>This method is simple, but sometimes we want to use only caffe.lib without project and here comes the 2nd method.</p>\n\n<ol start=\"2\">\n<li>Create header files in your project and add there all layer classes declarations externally to oblige a linker to use their symbols. See an example below:</li>\n</ol>\n\n<h1>Example</h1>\n\n<pre><code>#include \"caffe/common.hpp\"\nnamespace caffe\n{\n extern INSTANTIATE_CLASS(ConvolutionLayer);\n extern INSTANTIATE_CLASS(PoolingLayer);\n extern INSTANTIATE_CLASS(ReLULayer);\n extern INSTANTIATE_CLASS(TanHLayer);\n}\n</code></pre>\n\n<p>Finally include the very header file in your application where you're using caffe.</p>\n\n<p>Also check the Layer you met mentioned in your problem, for instance, in your case it is \"ImageData\" (or to be more corrected ImageDataLayer), open \"image_data_layer.cpp\" file in VS and check there that \"REGISTER_LAYER_CLASS(ImageData);\" is available there. </p>\n\n<p>Hope it will help to solve the problem.</p>\n" } ]
32,419,510
8
<python><numpy><theano><keras>
2015-09-06T02:41:11.500
null
579,980
How to get reproducible results in keras
<p>I get different results (test accuracy) every time I run the example from the Keras framework (<a href="https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py">https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py</a>). The code contains a <code>np.random.seed</code> call at the top, before any keras imports. It should prevent it from generating different numbers on every run. What am I missing? </p> <p>UPDATE: How to repro: </p> <ol> <li>Install Keras (<a href="http://keras.io/">http://keras.io/</a>) </li> <li>Execute <a href="https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py">https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py</a> a few times. It will train the model and output test accuracy.<br> Expected result: Test accuracy is the same on every run.<br> Actual result: Test accuracy is different on every run.</li> </ol> <p>UPDATE2: I'm running it on Windows 8.1 with MinGW/msys, module versions:<br> theano 0.7.0<br> numpy 1.8.1<br> scipy 0.14.0c1</p> <p>UPDATE3: I narrowed the problem down a bit. If I run the example with the GPU (set theano flag device=gpu0) then I get a different test accuracy every time, but if I run it on the CPU then everything works as expected. (My graphics card: NVIDIA GeForce GT 635.)</p>
[ { "AnswerId": "59076062", "CreationDate": "2019-11-27T18:03:28.080", "ParentId": null, "OwnerUserId": "2543623", "Title": null, "Body": "<p>The problem is now solved in Tensorflow 2.0 ! I had the same issue with TF 1.x (see <a href=\"https://stackoverflow.com/questions/59075244/if-keras-results-are-not-reproducible-whats-the-best-practice-for-comparing-mo\">If Keras results are not reproducible, what&#39;s the best practice for comparing models and choosing hyper parameters?</a> ) but </p>\n\n<pre><code>import os\n####*IMPORANT*: Have to do this line *before* importing tensorflow\nos.environ['PYTHONHASHSEED']=str(1)\n\nimport tensorflow as tf\nimport tensorflow.keras as keras\nimport tensorflow.keras.layers \nimport random\nimport pandas as pd\nimport numpy as np\n\ndef reset_random_seeds():\n os.environ['PYTHONHASHSEED']=str(1)\n tf.random.set_seed(1)\n np.random.seed(1)\n random.seed(1)\n\n#make some random data\nreset_random_seeds()\nNUM_ROWS = 1000\nNUM_FEATURES = 10\nrandom_data = np.random.normal(size=(NUM_ROWS, NUM_FEATURES))\ndf = pd.DataFrame(data=random_data, columns=['x_' + str(ii) for ii in range(NUM_FEATURES)])\ny = df.sum(axis=1) + np.random.normal(size=(NUM_ROWS))\n\ndef run(x, y):\n reset_random_seeds()\n\n model = keras.Sequential([\n keras.layers.Dense(40, input_dim=df.shape[1], activation='relu'),\n keras.layers.Dense(20, activation='relu'),\n keras.layers.Dense(10, activation='relu'),\n keras.layers.Dense(1, activation='linear')\n ])\n NUM_EPOCHS = 500\n model.compile(optimizer='adam', loss='mean_squared_error')\n model.fit(x, y, epochs=NUM_EPOCHS, verbose=0)\n predictions = model.predict(x).flatten()\n loss = model.evaluate(x, y) #This prints out the loss by side-effect\n\n#With Tensorflow 2.0 this is now reproducible! \nrun(df, y)\nrun(df, y)\nrun(df, y)\n</code></pre>\n" }, { "AnswerId": "40151338", "CreationDate": "2016-10-20T10:05:08.103", "ParentId": null, "OwnerUserId": "36195", "Title": null, "Body": "<p>I finally got reproducible results with my code. It's a combination of answers I saw around the web. The first thing is doing what @alex says:</p>\n\n<ol>\n<li>Set <code>numpy.random.seed</code>;</li>\n<li>Use <code>PYTHONHASHSEED=0</code> for Python 3.</li>\n</ol>\n\n<p>Then you have to solve the issue noted by @user2805751 regarding cuDNN by calling your Keras code with the following additional <code>THEANO_FLAGS</code>:</p>\n\n<ol start=\"3\">\n<li><code>dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic</code></li>\n</ol>\n\n<p>And finally, you have to patch your Theano installation as per <a href=\"https://github.com/fchollet/keras/issues/1935#issuecomment-194359606\" rel=\"noreferrer\">this comment</a>, which basically consists in:</p>\n\n<ol start=\"4\">\n<li>replacing all calls to <code>*_dev20</code> operator by its regular version in <code>theano/sandbox/cuda/opt.py</code>.</li>\n</ol>\n\n<p>This should get you the same results for the same seed.</p>\n\n<p>Note that there might be a slowdown. I saw a running time increase of about 10%.</p>\n" }, { "AnswerId": "36888790", "CreationDate": "2016-04-27T11:44:02.840", "ParentId": null, "OwnerUserId": "3667840", "Title": null, "Body": "<p>I agree with the previous comment, but reproducible results sometimes needs the same environment(e.g. installed packages, machine characteristics and so on). So that, I recommend to copy your environment to other place in case to have reproducible results. 
Try to use one of the next technologies:</p>\n\n<ol>\n<li><a href=\"https://www.docker.com/\" rel=\"nofollow\">Docker</a>. If you have a Linux this very easy to move your environment to other place. Also you can try to use <a href=\"https://hub.docker.com/\" rel=\"nofollow\">DockerHub</a>. </li>\n<li><a href=\"http://mybinder.org/\" rel=\"nofollow\">Binder</a>. This is a cloud platform for reproducing scientific experiments.</li>\n<li><a href=\"http://everware.xyz/\" rel=\"nofollow\">Everware</a>. This is yet another cloud platform for \"reusable science\". See the <a href=\"https://github.com/everware\" rel=\"nofollow\">project repository</a> on Github.</li>\n</ol>\n" }, { "AnswerId": "56606207", "CreationDate": "2019-06-14T23:40:46.947", "ParentId": null, "OwnerUserId": "7350637", "Title": null, "Body": "<p>This works for me:</p>\n\n<pre><code>SEED = 123456\nimport os\nimport random as rn\nimport numpy as np\nfrom tensorflow import set_random_seed\n\nos.environ['PYTHONHASHSEED']=str(SEED)\nnp.random.seed(SEED)\nset_random_seed(SEED)\nrn.seed(SEED)\n</code></pre>\n" }, { "AnswerId": "38371558", "CreationDate": "2016-07-14T10:17:14.887", "ParentId": null, "OwnerUserId": "2534758", "Title": null, "Body": "<p>I have trained and tested <code>Sequential()</code> kind of neural networks using Keras. I performed non linear regression on noisy speech data. I used the following code to generate random seed : </p>\n\n<pre><code>import numpy as np\nseed = 7\nnp.random.seed(seed)\n</code></pre>\n\n<p>I get the exact same results of <code>val_loss</code> each time I train and test on the same data. </p>\n" }, { "AnswerId": "52897216", "CreationDate": "2018-10-19T17:23:30.617", "ParentId": null, "OwnerUserId": "9024698", "Title": null, "Body": "<p>You can find the answer at the Keras docs: <a href=\"https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development\" rel=\"noreferrer\">https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development</a>.</p>\n\n<p>In short, to be absolutely sure that you will get reproducible results with your python script <strong>on one computer's/laptop's CPU</strong> then you will have to do the following:</p>\n\n<ol>\n<li>Set the <code>PYTHONHASHSEED</code> environment variable at a fixed value</li>\n<li>Set the <code>python</code> built-in pseudo-random generator at a fixed value</li>\n<li>Set the <code>numpy</code> pseudo-random generator at a fixed value</li>\n<li>Set the <code>tensorflow</code> pseudo-random generator at a fixed value</li>\n<li>Configure a new global <code>tensorflow</code> session</li>\n</ol>\n\n<p>Following the <code>Keras</code> link at the top, the source code I am using is the following:</p>\n\n<pre><code># Seed value\n# Apparently you may use different seed values at each stage\nseed_value= 0\n\n# 1. Set the `PYTHONHASHSEED` environment variable at a fixed value\nimport os\nos.environ['PYTHONHASHSEED']=str(seed_value)\n\n# 2. Set the `python` built-in pseudo-random generator at a fixed value\nimport random\nrandom.seed(seed_value)\n\n# 3. Set the `numpy` pseudo-random generator at a fixed value\nimport numpy as np\nnp.random.seed(seed_value)\n\n# 4. Set the `tensorflow` pseudo-random generator at a fixed value\nimport tensorflow as tf\ntf.set_random_seed(seed_value)\n\n# 5. 
Configure a new global `tensorflow` session\nfrom keras import backend as K\nsession_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)\nsess = tf.Session(graph=tf.get_default_graph(), config=session_conf)\nK.set_session(sess)\n</code></pre>\n\n<p>It is needless to say that you do not have to to specify any <code>seed</code> or <code>random_state</code> at the <code>numpy</code>, <code>scikit-learn</code> or <code>tensorflow</code>/<code>keras</code> functions that you are using in your python script exactly because with the source code above we set globally their pseudo-random generators at a fixed value.</p>\n" }, { "AnswerId": "38950594", "CreationDate": "2016-08-15T06:57:53.947", "ParentId": null, "OwnerUserId": "2004627", "Title": null, "Body": "<p>I would like to add something to the previous answers. If you use <strong>python 3</strong> and you want to get reproducible results for every run, you have to</p>\n\n<ol>\n<li>set numpy.random.seed in the beginning of your code</li>\n<li>give PYTHONHASHSEED=0 as a parameter to the python interpreter</li>\n</ol>\n" }, { "AnswerId": "32687043", "CreationDate": "2015-09-21T03:45:27.923", "ParentId": null, "OwnerUserId": "3299394", "Title": null, "Body": "<p>Theano's <a href=\"http://deeplearning.net/software/theano/sandbox/randomnumbers.html\">documentation</a> talks about the difficulties of seeding random variables and why they seed each graph instance with its own random number generator. </p>\n\n<blockquote>\n <p>Sharing a random number generator between different {{{RandomOp}}}\n instances makes it difficult to producing the same stream regardless\n of other ops in graph, and to keep {{{RandomOps}}} isolated.\n Therefore, each {{{RandomOp}}} instance in a graph will have its very\n own random number generator. That random number generator is an input\n to the function. In typical usage, we will use the new features of\n function inputs ({{{value}}}, {{{update}}}) to pass and update the rng\n for each {{{RandomOp}}}. By passing RNGs as inputs, it is possible to\n use the normal methods of accessing function inputs to access each\n {{{RandomOp}}}’s rng. In this approach it there is no pre-existing\n mechanism to work with the combined random number state of an entire\n graph. So the proposal is to provide the missing functionality (the\n last three requirements) via auxiliary functions: {{{seed, getstate,\n setstate}}}.</p>\n</blockquote>\n\n<p>They also provide <a href=\"http://deeplearning.net/software/theano/tutorial/examples.html\">examples</a> on how to seed all the random number generators. </p>\n\n<blockquote>\n <p>You can also seed all of the random variables allocated by a\n RandomStreams object by that object’s seed method. This seed will be\n used to seed a temporary random number generator, that will in turn\n generate seeds for each of the random variables.</p>\n</blockquote>\n\n<pre><code>&gt;&gt;&gt; srng.seed(902340) # seeds rv_u and rv_n with different seeds each\n</code></pre>\n" } ]
32,426,221
3
<python><theano><dimensionality-reduction><autoencoder>
2015-09-06T17:19:17.993
null
1,081,942
Autoencoders for high dimensional data
<p>I'm working on a project where I need to reduce the dimensionality of my observations and still have a meaningful representation of them. The use of Autoencoders was strongly suggested for many reasons but I'm not quite sure it's the best approach.</p> <p>I have 1400 samples of dimension ~60,000, which is far too high; I am trying to reduce their dimensionality to 10% of the original. I'm using <strong>theano autoencoders</strong> [<a href="http://deeplearning.net/tutorial/dA.html" rel="nofollow noreferrer">Link</a>] and it seems like the cost keeps being around 30,000 (which is very high). I tried raising the number of epochs or lowering the learning rate with no success. I'm not a big expert on autoencoders so I'm not sure how to proceed from here or when to just stop trying.</p> <p>There are other tests I can run but, before going any further, I'd like to have your input. </p> <ul> <li><p>Do you think the dataset is too small (I can add another 600 samples for a total of ~2000) ? </p></li> <li><p>Do you think using stacked autoencoders could help ? </p></li> <li><p>Should I keep tweaking the parameters (epochs and learning rate) ?</p></li> </ul> <p>Since the dataset is an ensemble of pictures I tried to visualize the reconstructions from the autoencoders and all I got was the same output for every sample. This means that given the input the autoencoder tries to rebuild the input, but what I get instead is the same (almost exactly) image for any input (which kind of looks like an average of all the images in the dataset). This means that the inner representation is not good enough, since the autoencoder can't reconstruct the image from it.</p> <p><strong>The dataset:</strong> 1400 - 2000 images of scanned books (covers included) of around ~60,000 pixels each (which translates to a feature vector of 60,000 elements). Each feature vector has been normalized in [0,1] and originally had values in [0,255].</p> <p><strong>The problem</strong>: Reduce their dimensionality with Autoencoders (if possible)</p> <p>If you need any extra info or if I missed something that might be useful to better understand the problem, please add a comment and I will happily help you help me =).</p> <p>Note: I'm currently running a test with a higher number of epochs on the whole dataset and I will update my post according to the result; it might take a while though.</p>
[ { "AnswerId": "32443030", "CreationDate": "2015-09-07T16:42:12.120", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>There's no reason to necessarily consider a cost of 30,000 as \"high\" unless more is known about the situation than described in the question. The globally minimal cost might actually be around 30,000 if, for example, the size of the hidden layer is particularly small and there is little redundancy in the data.</p>\n\n<p>If the cost is 30,000 before training (i.e. with random encoder and decoder weights) and remains around that level even after some training then something probably is wrong.</p>\n\n<p>You should expect the cost to decrease after the first update (you'll have many updates per epoch if you're using minibatch stochastic gradient descent). You should also expect the convergence cost to decrease as the size of the hidden layer is increased.</p>\n\n<p>Other techniques that might help in a situation like this include the <a href=\"http://ace.cs.ohio.edu/~razvan/courses/dl6890/papers/vincent08.pdf\" rel=\"nofollow noreferrer\">denoising autoencoder</a> (which can be thought of as artificially increasing the size of your training dataset by repeated application of random noise) and the <a href=\"http://www.icml-2011.org/papers/455_icmlpaper.pdf\" rel=\"nofollow noreferrer\">contractive autoencoder</a> which focusses its regularization power on the encoder, the part you care about. Both can be implemented in Theano and the first is the <a href=\"http://deeplearning.net/tutorial/dA.html\" rel=\"nofollow noreferrer\">subject of this tutorial</a> (<a href=\"http://deeplearning.net/tutorial/code/dA.py\" rel=\"nofollow noreferrer\">with code</a>).</p>\n" }, { "AnswerId": "35137864", "CreationDate": "2016-02-01T18:39:08.010", "ParentId": null, "OwnerUserId": "1268117", "Title": null, "Body": "<p>Autoencoders are useful in part because they can learn nonlinear dimensionality reductions. There are other dimensionality reduction techniques, however, which are much faster than autoencoders. Diffusion maps is a popular one; locally-linear embedding is another. I've used diffusion maps on >2000 60k-dimensional data (also images) and it works in under a minute.</p>\n\n<p>Here's a straightforward Python implementation using numpy et al:</p>\n\n<pre><code>def diffusion_maps(data, d, eps=-1, t=1):\n \"\"\"\n data is organized such that columns are points. so it's 60k x 2k for you\n d is the target dimension\n eps is the kernel bandwidth, estimated automatically if == -1\n t is the diffusion time, 1 is usually fine\n \"\"\"\n\n from scipy.spatial import pdist, squareform\n from scipy import linalg as la\n import numpy as np\n\n distances = squareform(pdist(data.T))\n\n if eps == -1:\n # if a kernel bandwidth was not supplied,\n # just use the distance to the tenth-nearest neighbor\n k = 10\n nn = np.sort(distances)\n eps = np.mean(nn[:, k + 1])\n\n kernel = np.exp(-distances ** 2 / eps ** 2)\n one = np.ones(n_samples)\n p_a = np.dot(kernel, one)\n kernel_p = walk / np.outer(p_a, p_a)\n dd = np.dot(kernel_p, one) ** 0.5\n walk = kernel_p / np.outer(dd, dd)\n\n vecs, eigs, _ = la.svd(walk, full_matrices=False)\n vecs = vecs / vecs[:, 0][:, None]\n diffusion_coordinates = vecs[:, 1:d + 1].T * (eigs[1:d + 1][:, None] ** t)\n\n return diffusion_coordinates\n</code></pre>\n\n<p>The gist of diffusion maps is that you form a random walk on your data such that you're much much more likely to visit nearby points than far-away ones. 
Then you can define a distance between points (the diffusion distance), which is in essence an average probability of moving between two points over all possible paths. The trick is this is actually extremely easy to compute; all you need to do is diagonalize a matrix, and then embed your data in low-dimensional space using its eigenvectors. In this embedding the Euclidean distance is the diffusion distance, up to an approximation error.</p>\n" }, { "AnswerId": "48127709", "CreationDate": "2018-01-06T13:15:27.930", "ParentId": null, "OwnerUserId": "9175058", "Title": null, "Body": "<p><strong>Simple things first...</strong> Note that if you have only 1400 points in 60,000 dimensional space, then you can <strong>without loss</strong>, reduce dimensionality to size &lt;=1400. That is a simple mathematical fact: your data matrix is 1400x60,000, so its rank (dimensionality) is at most 1400. Thus, Principal Components Analysis (PCA) will produce 1400 points in 1400 dimensional space, without loss. I strongly suggest using PCA to reduce the dimensionality of your data before considering anything else. </p>\n" } ]
32,433,515
1
<python><python-2.7><python-3.x><subprocess><theano>
2015-09-07T07:45:22.720
null
4,232,441
Multiple class call with same output
<p>I have this defined in , and in the same file I have another class and a separate function, in both of which I call it.</p> <p>In I have two classes - and - and one separate function. In class MLP, I use this statement, which returns a 2-dimensional array (e.g. 5000 * 2100). Then, in another module, I call the function in which I want the same value of <strong>X</strong> to be passed as input to the function (iter_minibatches), or I want the same value to be assigned at the beginning of the function (iter_minibatches). So I tried calling the same class again, but since I use a random function for permutation I am getting a different output (e.g. 5000 * 2102), while my requirement is to get the same value in the function iter_minibatches as well.</p> <p>Now I want the same value of <strong>X</strong> to be returned from both calls. How can I do that?</p>
[ { "AnswerId": "32441573", "CreationDate": "2015-09-07T15:01:10.743", "ParentId": null, "OwnerUserId": "1405065", "Title": null, "Body": "<p>Your question seems mostly to be about how you can share a piece of data between several functions located in different modules. There are two good ways to go about it:</p>\n\n<p>First, you can store the data in a global variable and access it there any time you need it. For example:</p>\n\n<pre><code># top level code of module1.py\ninp_x = Input(traindata).X # I'm assuming traindata is also a global\n\n# later code in the same module\ndo_stuff(inp_x)\n\n# code in other modules can do:\nimport module1\ndo_other_stuff(module1.inp_x) # a module's global variables are attributes on the module\n</code></pre>\n\n<p>The second option is to create the data in some specific part of your program and store it locally, then pass it to each of the other places you need to use it. This lets you use a more general structure to your code, and you can pass different values at different times. Here's what that might look like:</p>\n\n<pre><code># functions that use an X value:\ndef func1(arg1, arg2, X):\n do_stuff(X)\n\ndef func2(X):\n do_other_stuff(X)\n\n# main module (or wherever you call func1 and func2 from)\nfrom module1 import func1\nfrom module2 import func2\n\ndef main():\n x = Input(traindata).X\n func1(\"foo\", \"bar\", x)\n func2(x)\n</code></pre>\n\n<p>In these examples, I'm only saving (or passing as an argument) the <code>X</code> value that's calculated in the <code>Input</code> class, which seems to be how you're using that class too.</p>\n\n<p>That is a bit silly. If you don't need to keep the instance of <code>Input</code>, you probably shouldn't make <code>Input</code> a class in the first place. Instead, make it a function that returns the <code>X</code> value at the end. You'll have a few more arguments to pass between the <code>realize</code> and <code>expand</code> functions, but the code will likely be cleaner overall.</p>\n\n<p>On the other hand, if the instances of <code>Input</code> have some other uses (which you simply haven't shown in your example), it might make sense to save the instance you're creating instead of its <code>X</code> attribute:</p>\n\n<pre><code>inp = Input(traindata) # save the Input instance, not only X\n\n# later code:\ndo_stuff(inp.X) # use the X attribute, as above\n\n# other code\ndo_other_stuff(inp) # pass the instance, so you can use other attributes or methods\n</code></pre>\n" } ]
32,444,016
2
<compilation><cuda><automated-tests><caffe>
2015-09-07T18:03:59.283
32,445,580
3,102,241
Caffe compiled fine with cudnn however runtest fails with error: CUDNN_STATUS_ARCH_MISMATCH
<p>When running make runtest with Caffe I get the following output. Everything compiles fine with cuDNN and no errors are reported. I have also included the output of build_release/tools/caffe device_query -gpu &lt;0,1> for both NVIDIA Tesla GPUs, running CUDA driver and runtime version 7.0. Could anyone help?</p> <pre></pre>
[ { "AnswerId": "32445580", "CreationDate": "2015-09-07T20:22:59.800", "ParentId": null, "OwnerUserId": "1695960", "Title": null, "Body": "<p>The cuDNN library <a href=\"https://developer.nvidia.com/cudnn\" rel=\"noreferrer\">requires a GPU of compute capability 3.0 or higher</a>:</p>\n\n<blockquote>\n <p>Supported on Windows, Linux and MacOS systems with Kepler, Maxwell or Tegra K1 GPUs.</p>\n</blockquote>\n\n<p>Your Fermi M2090 is a compute capability 2.0 GPU:</p>\n\n<pre><code>I0907 18:55:05.037195 729 common.cpp:169] Major revision number: 2\nI0907 18:55:05.037201 729 common.cpp:170] Minor revision number: 0\nI0907 18:55:05.037207 729 common.cpp:171] Name: Tesla M2090\n</code></pre>\n" }, { "AnswerId": "44433724", "CreationDate": "2017-06-08T10:52:24.880", "ParentId": null, "OwnerUserId": "2091453", "Title": null, "Body": "<p>Firstly check type this command <code>nvidia-smi</code> using the terminal.\nThen go to this <a href=\"https://en.wikipedia.org/wiki/CUDA\" rel=\"nofollow noreferrer\">link</a> and try to find your GPU in the table. You will probably find out that your gpu have lower compute capability than 3.0 which is not supported for cuDNN.</p>\n\n<p>Based on your output and TESLA M2090 you probably have one of the following GPUs:</p>\n\n<blockquote>\n <p>GeForce GTX 590, GeForce GTX 580, GeForce GTX 570, GeForce GTX 480,\n GeForce GTX 470, GeForce GTX 465, GeForce GTX 480M</p>\n</blockquote>\n\n<p>And the compute capability of above GPUs is 2.0. So my suggestion is try to install caffe without cuDNN , and do not use it at least in current machine.</p>\n" } ]
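<p>A quick way to confirm the compute-capability mismatch described in the answers is to query the CUDA runtime directly; this small standalone program uses only standard CUDA API calls:</p> <pre><code>#include &lt;cstdio&gt;
#include &lt;cuda_runtime.h&gt;

int main() {
  int n = 0;
  cudaGetDeviceCount(&amp;n);
  for (int i = 0; i &lt; n; ++i) {
    cudaDeviceProp p;
    cudaGetDeviceProperties(&amp;p, i);
    // cuDNN requires compute capability 3.0+, i.e. p.major &gt;= 3.
    std::printf("GPU %d: %s, compute capability %d.%d\n",
                i, p.name, p.major, p.minor);
  }
  return 0;
}
</code></pre>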
32,451,934
1
<image-processing><neural-network><deep-learning><caffe><labeling>
2015-09-08T07:39:54.857
32,471,602
5,295,427
Image per-pixel Scene labeling output issue (using FCN-32s Semantic Segmentation)
<p>I'm looking for a way to, given an input image and a neural network, output a labeled class for each pixel in the image (sky, grass, mountain, person, car, etc.).</p> <p>I've set up Caffe (the future-branch) and successfully run the <a href="https://gist.github.com/shelhamer/80667189b218ad570e82/" rel="nofollow noreferrer">FCN-32s Fully Convolutional Semantic Segmentation on PASCAL-Context</a> model. However, I'm unable to produce clear labeled images with it.</p> <p>Images that visualize my problem:<br> Input image<br> <img src="https://i.imgur.com/gYuCHMj.jpg" alt=""><br> ground truth<br> <img src="https://i.imgur.com/zalGt9v.png" alt=""><br> And my result:<br> <img src="https://i.imgur.com/TN9juhO.png" alt=""></p> <p>This might be a resolution issue. Any idea where I'm going wrong?</p>
[ { "AnswerId": "32471602", "CreationDate": "2015-09-09T05:35:01.157", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>It seems like the 32s model is making large strides and thus working at a coarse resolution. Can you try the <a href=\"https://gist.github.com/shelhamer/91eece041c19ff8968ee\" rel=\"noreferrer\">8s model</a> that seems to perform less resolution reduction.<br>\nLooking at <a href=\"http://www.cs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf\" rel=\"noreferrer\">J Long, E Shelhamer, T Darrell <em>Fully Convolutional Networks for Semantic Segmentation</em>, CVPR 2015</a> (especially at figure 4) it seems like the 32s model is not designed for capturing fine details of the segmentation.</p>\n" } ]
32,452,265
1
<computer-vision><neural-network><convolution><deep-learning><caffe>
2015-09-08T07:59:01.317
null
639,973
Multi-label Input to Single-label output
<p>Is there a research paper where the inputs are multi-labeled but the output (classifier) is single-labeled? Preferably in the computer vision field.</p>
[ { "AnswerId": "32456454", "CreationDate": "2015-09-08T11:20:39.090", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Your question is not very clear, but were you thinking of something like <a href=\"http://lear.imag.fr/pubs/2007/NJ07/NJ07.pdf\" rel=\"nofollow\">similarity learning</a> where you are trying to learn a binary label (\"same\"/\"not-same\") given that the inputs are labeled with class labels (0..<code>L</code>)?</p>\n\n<p>In computer vision you can find similar task for faces: given an input identity of faces you want a classifier to output (for a pair of faces) if they are of the same person or not. You can find more information regarding this research field on the web page of the <a href=\"http://vis-www.cs.umass.edu/lfw/\" rel=\"nofollow\">labeled faces in the wild</a> benchmark.</p>\n" } ]
32,457,236
3
<python><theano>
2015-09-08T11:57:37.417
null
4,476,617
Theano stack matrices programmatically?
<p>I have the following code, which stacks two matrices into a 3D tensor.</p> <pre></pre> <p>However, I do not know in advance how many times I need to stack the matrix. For example, the fourth line of code may be:</p> <pre></pre> <p>Is there a theano function to duplicate a matrix n times:</p> <pre></pre> <p>Then I could pass that 3 as an argument to the theano function f. Is this possible? I looked into broadcasting, but broadcasting does not explicitly change dimensionality/stack.</p>
[ { "AnswerId": "32457538", "CreationDate": "2015-09-08T12:12:34.727", "ParentId": null, "OwnerUserId": "3849224", "Title": null, "Body": "<p>I don't know about theano, but you could accomplish this using list comprehension and unpacking argument list:</p>\n\n<pre><code>n = 5\nB = theano.tensor.stack(*[A for dummy in range(n)])\n</code></pre>\n\n<p>which is equivalent to:</p>\n\n<pre><code>B = theano.tensor.stack(A, A, A, A, A)\n</code></pre>\n\n<p>What this does is, it first constructs a list with <code>n</code> copies of <code>A</code> and then unpacks this list into separate arguments (see <a href=\"https://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists\" rel=\"nofollow\">https://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists</a>).</p>\n" }, { "AnswerId": "32462494", "CreationDate": "2015-09-08T16:08:01.990", "ParentId": null, "OwnerUserId": "4476617", "Title": null, "Body": "<p>After digging long and hard through the theano documentation I have found the solution:</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nA = T.matrix(\"A\")\nB = [A]\nC = theano.tensor.extra_ops.repeat(B, 3, axis=0)\nf = theano.function(inputs=[A], outputs=C)\nprint f([range(10)]*2)\n</code></pre>\n\n<p>is equivalent to:</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nA = T.matrix(\"A\")\nB = theano.tensor.stack(A, A, A)\nf = theano.function(inputs=[A], outputs=B)\nprint f([range(10)]*2)\n</code></pre>\n\n<p>except we can now choose the number of repeats programatically as the second argument to: theano.tensor.extra_ops.repeat</p>\n" }, { "AnswerId": "32478228", "CreationDate": "2015-09-09T11:23:04.733", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>Here is an example using broadcasting</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nimport numpy as np\n\nA = T.fmatrix()\nn = T.iscalar()\n\nones = T.ones((n, 1, 1))\n\nstackedA = ones * A[np.newaxis, :, :]\n\nf = theano.function([A, n], stackedA)\n\na = np.arange(30).reshape(5, 6).astype('float32')\nnn = 3\n\nr = f(a, nn)\n\nprint r.shape # outputs (3, 4, 5)\nprint (r == a[np.newaxis]).all() # outputs True\n</code></pre>\n\n<p>This approach can help the compiler avoid tiling if it can optimize that away.</p>\n" } ]
32,462,036
2
<python><numpy><scipy><theano>
2015-09-08T15:45:17.510
32,462,902
3,888,963
python theano Optimization failure due to: local_dot_to_dot22
<p>I just pip installed theano and tried to run theano.test(). It produced a very long log of errors and I copied the first part. I also tried a couple of other examples - I have seen </p> <pre></pre> <p>and </p> <pre></pre> <p>several times. </p> <p>I'm using python 2.7 (canopy), scipy 0.15.1-2 and numpy 1.9.2-1. I am very new to theano. I'd appreciate it if you could point me in the right direction. Thanks!</p> <pre></pre>
[ { "AnswerId": "43199865", "CreationDate": "2017-04-04T06:30:28.040", "ParentId": null, "OwnerUserId": "2097240", "Title": null, "Body": "<p>In case you don't want to reinstall things, if they're heavy programs, for instance, affecting Window's registry and so, you can try <strong>symbolic links</strong>. </p>\n\n<p>A symbolic link will create something similar to a shortcut to a folder, but seen as an actual folder by other applications. </p>\n\n<p>So, you can do something like this:</p>\n\n<ul>\n<li>Run <code>cmd</code> as administrator </li>\n<li>User this command: <code>mklink /D \"C:\\LinkToProgramFiles\" \"C:\\Program Files\"</code></li>\n</ul>\n\n<p>And then, you start using \"C:\\LinkToProgramFiles\" in your ldflags var. </p>\n" }, { "AnswerId": "32462902", "CreationDate": "2015-09-08T16:33:39.487", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The problem here is problem caused by having spaces in your path, i.e. Canopy is installed in <code>C:\\Program Files\\Enthought\\Canopy</code> but the Theano scripts don't work well with the space between <code>Program</code> and <code>Files</code>. Try uninstalling Canopy and reinstall in a directory with no space in the path.</p>\n\n<p>You should also follow the other instructions for <a href=\"http://deeplearning.net/software/theano/install_windows.html\" rel=\"nofollow\">installing Theano on Windows</a>. Unfortunately it's not as simple as just <code>pip install theano</code>.</p>\n" } ]
32,471,395
0
<fft><convolution><theano>
2015-09-09T05:15:33.650
null
5,307,788
can you modify this theano code for fft-convolution available?
<p>I'm searching for a way to use fft-convolution in theano. I wrote simple convolution code with theano. But this code doesn't work if I set though simple convolution works with </p> <p>Please tell me what is wrong with this code?</p> <pre class="lang-py prettyprint-override"></pre> <p>The error message is like below: </p> <pre></pre>
[]
32,471,670
1
<memory><gpu><theano>
2015-09-09T05:40:42.667
40,531,639
1,401,278
Storing specific shared variables in CPU
<p>Is it possible in theano to selectively keep some shared variables on the CPU? I have a huge matrix in the output layer over the entire vocabulary (~2M) that wouldn't fit in GPU memory. I have experimented with reducing its size through sampling, but I want to see if I can use the entire matrix. One way I could do this is to use in the theano flags. But this seems to use the GPU only on a need basis. I checked the tutorial and it doesn't seem to have more details.</p> <p>I wonder if it is possible to specify one or a few shared variables to be stored on the CPU. One can do this when creating the shared variable, I guess. Having some of the variables on the GPU will be faster than having everything on the CPU, right? Or does theano somehow figure out which ones to implicitly keep/move automatically? I would appreciate some explanation.</p>
[ { "AnswerId": "40531639", "CreationDate": "2016-11-10T15:48:50.020", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>In newer Theano (I forgot Theano 0.8.2 or the dev version of Theano 0.9), there is a different interface. You can do theano.shared(data, target='cpu')</p>\n\n<p>Continue to initialize the GPU as you did before.</p>\n" } ]
32,472,524
1
<image-processing><machine-learning><neural-network><deep-learning><caffe>
2015-09-09T06:39:57.640
null
1,377,127
Caffe error: no field named "net"
<p>I had the Caffe C++ example program working on my computer, but after recently recompiling Caffe, I've encountered this error when I try to run the program:</p> <blockquote> <p>[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 2:4: Message type "caffe.NetParameter" has no field named "net".<br> upgrade_proto.cpp:928] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/jack/Desktop/beeshiny/deploy.prototxt</p> </blockquote> <p>Am I missing something or has the syntax of the prototxt files been changed? My deploy.prototxt file (that I pass to the C++ program) looks like this:</p> <pre></pre> <p>The contents of the deploy_arch.prototxt file referenced in the prototxt file above:</p> <pre></pre> <p>I don't understand why this has stopped working all of a sudden, unless an update has made my prototxt file obsolete?</p>
[ { "AnswerId": "37581277", "CreationDate": "2016-06-02T01:30:12.567", "ParentId": null, "OwnerUserId": "3835707", "Title": null, "Body": "<p>I solved my problem by adding <code>caffe/python</code> in <code>$PYTHONPATH</code>.</p>\n" } ]
32,487,164
1
<theano>
2015-09-09T18:51:57.533
null
5,285,204
Calling separate theano functions with the same inputs?
<p>I have something like this:</p> <pre></pre> <p>This is working fine. I would now like to see what the mean of the hidden units is. I tried adding this before the line where is declared:</p> <pre></pre> <p>I get the following error:</p> <p></p> <p>I really have two questions: 1) Why can't I do this? 2) What is the correct way to achieve what I want?</p> <p>Thank you</p>
[ { "AnswerId": "32496887", "CreationDate": "2015-09-10T08:32:58.287", "ParentId": null, "OwnerUserId": "4511818", "Title": null, "Body": "<p>As far as I can see, you are giving him the function as input. If you use your array/matrix of hidden units instead the code should work.</p>\n\n<pre><code>hidden_mean_func = th.function(inputs=[hidden], outputs=[hm], name=\"hidden_mean_function_printer\")\nprint hidden_mean_func(hidden)\n</code></pre>\n" } ]
32,493,904
5
<linux><ubuntu><cuda><nvidia><caffe>
2015-09-10T05:35:21.713
51,078,813
2,467,772
Could not insert 'nvidia_352': No such device
<p>I am trying to run <a href="http://caffe.berkeleyvision.org/">caffe</a> on . After installation, I run caffe in gpu mode and the error is </p> <pre></pre> <p>My NVIDIA driver is 352.41. I installed 352 and it is the latest version installed.</p> <pre></pre> <p>My Ubuntu has NVIDIA driver 352, so why do I get an error like</p> <pre></pre> <p>I checked whether I have a CUDA-capable device like</p> <pre></pre> <p>I have a CUDA-capable device, so why do I get the error?</p> <p>EDIT 1: Yes, my test with ./deviceQuery failed.</p> <pre></pre> <p>I checked in the /dev folder; I have nvidia0.</p> <pre></pre> <p>My nvcc -V check gave me</p> <pre></pre> <p>Then my version check</p> <pre></pre> <p>What could be wrong?</p>
[ { "AnswerId": "37337328", "CreationDate": "2016-05-20T03:08:38.730", "ParentId": null, "OwnerUserId": "2492724", "Title": null, "Body": "<p>I also had this problem. The above answers didn't work for me. When I installed latest driver(<code>nvidia-364</code>), it worked. Commands to run:</p>\n\n<pre><code>sudo add-apt-repository ppa:xorg-edgers/ppa \nsudo apt-get update \nsudo apt-get install nvidia-364\n</code></pre>\n\n<p>I think the problem occurs when we have different version of <code>gcc</code> used to compile driver modules and the Linux kernel. </p>\n" }, { "AnswerId": "56833148", "CreationDate": "2019-07-01T09:27:53.007", "ParentId": null, "OwnerUserId": "2467772", "Title": null, "Body": "<p>If you are showing video from non-nvidia device but have driver installed, you have to install it with “--no-opengl-files” flag, for Gnome to work.</p>\n\n<p>I suggest to download a separate driver and install it manually by logging to console:</p>\n\n<pre><code>1. Alt Ctrl F2/f3/f4/f5 to get to console.\n2. “init 3” to kill UI\n3. relogin if necessary to console\n4. wget http://us.download.nvidia.com/tesla/418.67/NVIDIA-Linux-\n</code></pre>\n\n<p>driver x86_64-418.67.run</p>\n\n<pre><code>5. sh NVIDIA-Linux-x86_64-418.67.run --no-opengl-files\n6. After installation - reboot\n</code></pre>\n" }, { "AnswerId": "32556866", "CreationDate": "2015-09-14T03:24:42.460", "ParentId": null, "OwnerUserId": "2467772", "Title": null, "Body": "<p>Now the problem is solved.\nI checked <code>sudo dpkg --list | grep nvidia</code>\nI found as my kernel has 352.41, but the client has 304.12.\nSo I did <code>sudo apt-get remove --purge nvidia-*</code>. It removed all packages.\nThen, install 352.41 as</p>\n\n<pre><code>$ sudo add-apt-repository ppa:xorg-edgers/ppa -y\n$ sudo apt-get update\n$ sudo apt-get install nvidia-352\n</code></pre>\n\n<p>After that </p>\n\n<pre><code>$ sudo dpkg --list | grep nvidia\nrc nvidia-304 304.128-0ubuntu0~gpu14.04.2 amd64 NVIDIA legacy binary driver - version 304.128\nrc nvidia-304-updates 304.125-0ubuntu0.0.2 amd64 NVIDIA legacy binary driver - version 304.125\nii nvidia-352 352.41-0ubuntu0~gpu14.04.1 amd64 NVIDIA binary driver - version 352.41\nrc nvidia-opencl-icd-304 304.128-0ubuntu0~gpu14.04.2 amd64 NVIDIA OpenCL ICD\nrc nvidia-opencl-icd-304-updates 304.125-0ubuntu0.0.2 amd64 NVIDIA OpenCL ICD\nii nvidia-opencl-icd-352 352.41-0ubuntu0~gpu14.04.1 amd64 NVIDIA OpenCL ICD\nii nvidia-prime 0.6.2 amd64 Tools to enable NVIDIA's Prime\nii nvidia-settings 355.11-0ubuntu0~gpu14.04.1 amd64 Tool for configuring the NVIDIA graphics driver\n</code></pre>\n\n<p>Now version matches.\nThen ./deviceQuery and all work as expected.\nThanks</p>\n" }, { "AnswerId": "32942278", "CreationDate": "2015-10-05T06:13:28.900", "ParentId": null, "OwnerUserId": "5408646", "Title": null, "Body": "<p>I have this problem too. And re-installing the nvidia drivers didn't solve the issue. 
</p>\n\n<p>Finally, I solved this problem by add two kernel parameters with grub.</p>\n\n<p>add in:</p>\n\n<pre><code>GRUB_CMDLINE_LINUX_DEFAULT\n</code></pre>\n\n<p>with:</p>\n\n<pre><code>pci=nocrs pci=realloc\n</code></pre>\n\n<p>I think this is a collision between <code>cuda7.5</code> and <code>kernel3.19</code>.</p>\n" }, { "AnswerId": "51078813", "CreationDate": "2018-06-28T09:04:04.137", "ParentId": null, "OwnerUserId": "2467772", "Title": null, "Body": "<p>Another way I can do is install using .run file.\nThat needs to kill X server first.\nX server is killed as follow.</p>\n\n<pre><code>Make sure you are logged out.\nHit CTRL+ALT+F1 and login using your credentials.\nkill your current X server session by typing sudo service lightdm stop or sudo stop lightdm\nEnter runlevel 3 (or 5) by typing sudo init 3 (or sudo init 5) and install your .run file.\nYou might be required to reboot when the installation finishes. If not, run sudo service start lightdm or sudo start lightdm to start your X server again.\n</code></pre>\n\n<p>Then <code>run .run file as sudo sh xxxxx.run</code></p>\n\n<p>You may get error as <code>The distribution-provided pre-install script failed! Are you sure you want to continue?</code>. Then abort the installation and </p>\n\n<pre><code>disable the \"Nouveau kernel driver\" as sudo update-initramfs -u\n</code></pre>\n\n<p>Then reboot the system and <code>redo stop X server, enter runlevel 3 and do sudo sh xxxx.run again.</code></p>\n\n<p>This time you can ignore the message and continue for that prescript fail message.\nThen you will be able to install Nvidia Driver from .run file.</p>\n" } ]
32,499,160
1
<permissions><scipy><permission-denied><theano>
2015-09-10T10:19:00.270
32,500,869
639,973
Permission denied due to scipy while installing Theano
<p>I'm installing Theano on a server where I'm not root.</p> <p>I ran</p> <pre></pre> <p>which returns the following error</p> <pre></pre> <p>so apparently, Theano wants to install scipy, but it's already installed, so it attempts to uninstall it first, which causes the permission issue.</p> <p>How can I get around this so that it does not uninstall scipy but uses the existing one?</p>
[ { "AnswerId": "32500869", "CreationDate": "2015-09-10T11:38:02.227", "ParentId": null, "OwnerUserId": "4511818", "Title": null, "Body": "<p>The problem is that the scipy-version you have installed is not recommended. Theano usually needs at least version 0.11 to work. It seems that your version is also working but has some known bugs. (<a href=\"http://deeplearning.net/software/theano/install.html\" rel=\"nofollow\">Installation Instructions</a>) If you wish to use your old version and risk the bugs, you should be able to use:</p>\n\n<pre><code>pip install Theano --user --no-dependencies\n</code></pre>\n\n<p>Note that the other two requirements numpy and six will also not be checked and updated</p>\n" } ]
32,504,394
1
<c++><neural-network><deep-learning><caffe>
2015-09-10T14:16:50.080
32,504,940
1,103,412
Batch processing mode in Caffe
<p>I'd like to use the Caffe library to extract image features but I'm having performance issues. I can only use the CPU mode. I was told Caffe supported a batch processing mode, in which the average time required to process one image is much lower.</p> <p>I'm calling the following method:</p> <pre></pre> <p>and I'm putting in a vector of size 1, containing a single blob of the following dimensions - (num: 10, channels: 3, width: 227, height: 227). It represents a single image oversampled in the same way as in the official python wrapper.</p> <p>This works and gives correct results. It is, however, too slow.</p> <p>Whenever I try to send in a vector containing more than one blob (of the same dimensions), I get the following error:</p> <blockquote> <p>F0910 16:10:14.848492 15615 blob.cpp:355] Trying to copy blobs of different sizes.<br> Check failure stack trace:</p> </blockquote> <p>How do I make Caffe process my images in a batch?</p>
[ { "AnswerId": "32504940", "CreationDate": "2015-09-10T14:39:40.497", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>If you want to feed larger batches you need the first (and only) blob in <code>bottom</code> to have <code>num&gt;10</code>. Feeding a blob with <code>num=20</code> is the same as feeding two inputs with <code>oversample=10</code>. You will, of course, have to perform the averaging manually according to the <code>oversampling</code> you are using.</p>\n\n<p>Furthermore, you might want to change the first input dimension in your <code>deploy.prototxt</code> file from 10 to some larger value (depending on your machine's memory capacity)</p>\n" } ]
32,504,728
1
<python><python-3.x><theano><ode>
2015-09-10T14:31:12.033
42,079,900
99,989
How do I use Theano to solve an ordinary differential equation?
<p>Here is my Python code:</p> <pre></pre> <p>Is it possible to use Theano to solve the ODE?</p>
[ { "AnswerId": "42079900", "CreationDate": "2017-02-07T00:51:51.727", "ParentId": null, "OwnerUserId": "3583290", "Title": null, "Body": "<p>Here is a VERY simple ode solver in theano:</p>\n\n<pre><code>import numpy\nimport theano\n\n# the right-hand side\ndef f(x, t):\n return x*(1-x)\n\nx = theano.tensor.matrix() # why not a matrix\ndt = theano.tensor.scalar()\nt = theano.tensor.scalar()\n\nx_next = x + f(x, t)*dt # implement your favourite RK method here!\n\n# matrix of random initial values\n# store it on the device\nx_shared = theano.shared(numpy.random.rand(10, 10))\n\nstep = theano.function([t, dt], [],\n givens=[(x, x_shared)],\n updates=[(x_shared, x_next)],\n on_unused_input='warn')\n\nt = 0.0\ndt = 0.01\n\nwhile t &lt; 10:\n step(t, dt)\n t += dt\n # test halt condition here\n\nprint(x_shared.get_value()) # read back the result\n</code></pre>\n\n<p>I hope it helps.\nBasically you have to implement your Runge-Kutta method.</p>\n\n<p>And mind that the strength of theano is in vectorization, so I won't bother implementing the time loop in theano. That's why I used a simple python while loop, although I could use <a href=\"http://deeplearning.net/software/theano/library/scan.html\" rel=\"nofollow noreferrer\">theano scan</a>. Anyway, depending on your goal, the optimization can be tricky. I'm not 100% convinced that theano is a good choice for an ODE solver. Numpy does the vectorizations on your force and position matrices anyway, at least on CPU. With a theano implementation you can utilize GPU, but that's not a guarantee for a speedup.</p>\n" } ]
32,505,458
3
<c++><neural-network><deep-learning><caffe>
2015-09-10T15:04:23.923
32,505,913
1,103,412
Reduce a Caffe network model
<p>I'd like to use Caffe to extract image features. However, it takes too long to process an image, so I'm looking for ways to optimize for speed.</p> <p>One thing I noticed is that the network definition I'm using has four extra layers on top of the one from which I'm reading a result (and there are no feedback signals, so they should be safe to delete).</p> <p>I tried to delete them from the definition file but it had no effect at all. I guess I might need to remove the corresponding part of the file that contains the pre-trained weights, too. That is, however, a binary file (a protobuffer) so editing it is not that easy.</p> <p>Do you think that removing the four layers might have a profound effect on the net performance?</p> <p>If so, then how do I get familiar with the file contents so that I can edit it, and how do I know which parts to remove?</p>
[ { "AnswerId": "32505913", "CreationDate": "2015-09-10T15:25:13.650", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>first, I don't think removing the binary weights will have any effect.<br>\nSecond, you can do it easily using the python interface: see <a href=\"http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/net_surgery.ipynb\" rel=\"nofollow noreferrer\">this tutorial</a>.<br>\nLast but not least, have you tried running <a href=\"http://caffe.berkeleyvision.org/tutorial/interfaces.html\" rel=\"nofollow noreferrer\"><code>caffe time</code></a> to measure the performance of your net? this may help you identify the bottlenecks of your computations.</p>\n\n<p>PS,\nYou might find <a href=\"https://stackoverflow.com/q/30822009/1714410\">this thread</a> relevant as well.</p>\n" }, { "AnswerId": "49224090", "CreationDate": "2018-03-11T19:21:34.137", "ParentId": null, "OwnerUserId": "1060382", "Title": null, "Body": "<p>I would retrain on a smaller input size, change strides, etc. However if you want to reduce file size, I'd suggest quantizing the weights <a href=\"https://github.com/yuanyuanli85/CaffeModelCompression\" rel=\"nofollow noreferrer\">https://github.com/yuanyuanli85/CaffeModelCompression</a> and then using something like lzma compression (xz for unix). We do this so we can deploy to mobile devices. 8 bit weights compress nicely.</p>\n" }, { "AnswerId": "37579452", "CreationDate": "2016-06-01T21:55:51.403", "ParentId": null, "OwnerUserId": "5326903", "Title": null, "Body": "<p>Caffemodel stores data as key-value pair. Caffe only copies weight for those layers (in train.prototxt) having exactly same name as caffemodel. Hence I don't think removing binary weights will work. If you want to change network structure, just modify train.prototxt and deploy.txt.</p>\n\n<p>If you insist to remove weights from binary file, follow this <a href=\"http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/net_surgery.ipynb\" rel=\"nofollow\">caffe example</a>.</p>\n\n<p>And to make sure you delete right part, this <a href=\"http://ethereon.github.io/netscope/#/editor\" rel=\"nofollow\">visualizing tool</a> should help.</p>\n" } ]
32,507,202
1
<machine-learning><neural-network><classification><theano><lasagne>
2015-09-10T16:30:58.843
null
5,321,075
neural network with lasagne accuracy
<p>I'm trying to construct a binary classifier with a neural network on some images using Lasagne. The training and validation loss fluctuate wildly (and do not settle) and the validation accuracy is always at . Furthermore, the network always predicts the target as for the test set.</p> <p>The network I am using is basically just a copy of Lasagne's example for the mnist dataset found <a href="https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py" rel="nofollow">here</a>, but adapted for my images which are quite a bit larger () with around images in the training set. I am wondering if this is a problem, and if the network may need to be deeper / have more neurons? </p> <p>Do I need a larger training set for this size of image? Or should I be seeing some, albeit inaccurate, set of predictions for my test set?</p>
[ { "AnswerId": "32515310", "CreationDate": "2015-09-11T04:11:06.493", "ParentId": null, "OwnerUserId": "5226958", "Title": null, "Body": "<p>I'd resize the images into smaller ones. Since your training examples are so limited you probably do not want to train a big model which easily overfits.</p>\n\n<p>The following tricks may also be useful for you:</p>\n\n<ul>\n<li><p>check whether your images are subtracted some mean value. If your input values are raw pixels between [0,255], that will be too big. </p></li>\n<li><p>try different learning rates. If your result fluctuates it is possible your learning rate is too high.</p></li>\n<li><p>use data augmentation. You may flip your images, or move it up/down/left/right some pixels. Then you can get more training examples.</p></li>\n<li><p>look at training set. See where your model makes mistakes. If your training error is bad, the there must be something wrong.</p></li>\n</ul>\n" } ]
32,519,272
1
<theano><keras>
2015-09-11T08:46:43.920
null
417,357
Add AUC as loss function for keras
<p>Has anyone had any luck with writing a custom AUC loss function for Keras using Theano?</p> <p>The documentation is here: <a href="http://keras.io/objectives/" rel="noreferrer">http://keras.io/objectives/</a></p> <p>Sample code is here: <a href="https://github.com/fchollet/keras/blob/master/keras/objectives.py" rel="noreferrer">https://github.com/fchollet/keras/blob/master/keras/objectives.py</a></p> <p>I saw there is an implementation in pylearn2 (which is really a wrapper around sklearn), but I was unable to port this for use in Keras</p> <p><a href="https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/train_extensions/roc_auc.py" rel="noreferrer">https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/train_extensions/roc_auc.py</a></p> <p>So I guess my question is: has anybody been able to write this function, and would you be willing to share?</p>
[ { "AnswerId": "36970788", "CreationDate": "2016-05-01T19:10:03.990", "ParentId": null, "OwnerUserId": "1397061", "Title": null, "Body": "<p>AUC is not differentiable, so you can't use it as a loss function without some modification. There's been <a href=\"http://www.icml-2011.org/papers/198_icmlpaper.pdf\">some work</a> on algorithms to maximize AUC, but I'd recommend just using the regular cross-entropy / log likelihood loss.</p>\n" } ]
32,520,049
1
<macos><python-3.x><theano>
2015-09-11T09:25:01.703
32,522,028
5,324,587
I can't install Theano on Mac OS X 10.10.5
<p>I'm trying to install Theano (and subsequently pylearn2) with Python 3.4 on my MacBookPro, with Mac OS X 10.10.5. I have Anaconda and I follow the instructions reported on Theano's documentation (<a href="http://deeplearning.net/software/theano/install.html" rel="nofollow noreferrer">http://deeplearning.net/software/theano/install.html</a>). I have to use the sudo -H pip command and Theano gets downloaded, a Theano folder is created in anaconda and all the required dependencies satisfying the versions requisites. However, when i write the</p> <blockquote> <p>import theano</p> </blockquote> <p>command in Python, I get (both in the terminal and in ipython-qtconsole) the following:</p> <blockquote> <p>Traceback (most recent call last):</p> <p>File "", line 1, in </p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/<strong>init</strong>.py", line 44, in from theano.gof import \</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/gof/<strong>init</strong>.py", line 38, in from theano.gof.cc import \</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/gof/cc.py", line 64, in from theano.gof import link</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/gof/link.py", line 12, in from theano.gof.type import Type</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/gof/type.py", line 14, in from theano.gof.op import CLinkerObject</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/gof/op.py", line 31, in from theano.gof.cmodule import GCC_compiler</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/gof/cmodule.py", line 37, in from theano.gof.compiledir import gcc_version_str, local_bitwidth</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/gof/compiledir.py", line 259, in in_c_key=False)</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/sitepackages/theano/configparser.py", line 237, in AddConfigVar configparam.<strong>get</strong>()</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/configparser.py", line 279, in <strong>get</strong> self.<strong>set</strong>(None, val_str)</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/configparser.py", line 290, in <strong>set</strong> self.val = self.filter(val)</p> <p>File "/Users/davidefloriello/anaconda/lib/python3.4/site-packages/theano/gof/compiledir.py", line 185, in filter_compiledir " or listing permissions." % path) ValueError: compiledir '/Users/davidefloriello/.theano/compiledir_Darwin-14.5.0-x86_64-i386-64bit-i386-3.4.3-64' exists but you don't have read, write or listing permissions.</p> </blockquote> <p>This last line is quite surprising since I'm the only admin. I gave a look at this other question <a href="https://stackoverflow.com/questions/24728255/how-to-install-theano-library-on-os-x">How to install theano library on OS X?</a> but I haven't found it very useful, as our Python versions are different and it seems to me that my problem is a bit less vague.</p> <p>Any help is very much appreciated!</p> <p>Thanks a lot,</p> <p>Davide </p>
[ { "AnswerId": "32522028", "CreationDate": "2015-09-11T11:09:40.003", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>A similar problem was found reported <a href=\"https://github.com/dnouri/kfkd-tutorial/issues/9\" rel=\"nofollow\">here</a>. The solution, updated to account for different user, is:</p>\n\n<blockquote>\n <p>Try and remove that '/Users/davidefloriello/.theano/' directory and\n run Python again without sudo.</p>\n</blockquote>\n" } ]
32,522,469
1
<c++><image-processing><deep-learning><caffe>
2015-09-11T11:32:43.670
32,531,407
1,103,412
Batch processing mode in Caffe - no performance gains
<p>Following on <a href="https://stackoverflow.com/q/32504394/1103412">this thread</a> I reimplemented my image processing code to send in 10 images at once (i.e. I now have the num property of the input blob set to 100 instead of 10).</p> <p>However, the time required to process this batch is 10 times bigger than originally. Which means that I did not get any performance increase.</p> <p>Is that reasonable or did I make something wrong?</p> <p>I am running Caffe in CPU mode. Unfortunately GPU mode is not an option for me.</p>
[ { "AnswerId": "32531407", "CreationDate": "2015-09-11T20:10:48.963", "ParentId": null, "OwnerUserId": "809993", "Title": null, "Body": "<p>Update: Caffe now natively supports parallel processing of multiple images when using multiple GPUs. Though it seems relatively simple to implement base on the current implementation of GPU parallelism, at the moment there's no similar support for parallel processing on multiple CPUs.</p>\n\n<p>Considering that the main problem with implementing parallelism is the syncing you need during training If you just want to process your images in parallel (as opposed to training the model), then you could load several copies of the same network to memory (whether through python with multiprocessing or c++ with multi-threading), and process each image on a different network. It would be simple and quite effective, especially if you load the networks once and then just process a large amount of images. Nevertheless, GPUs are much faster :)</p>\n\n<hr>\n\n<p>Caffe doesn't process multiple images in parallel, the only saving you get by batch processing several images is in the time it takes to transfer the image data back and forth between Caffe's framework, which could be significant when dealing with the GPU.</p>\n\n<p>IIRC there are several attempts to make Caffe process images in parallel, but most focus on the GPU implementation (CUDNN, CUDA Streams etc.), with few attempts to add parallelism to the CPU code (OpenBLAS's multithread mode, or simply running on multiple threads). Of those I believe only the CUDNN option is currently part of the stable version of Caffe, but obviously requires a GPU. You can try to look at one of the pull requests about this matter on Caffe's github page and see if it works for you, but note that it might cause compatibilities issue with your current version.</p>\n\n<p>This is one such version that in the past I've used, though it's no longer maintained: <a href=\"https://github.com/BVLC/caffe/pull/439\" rel=\"nofollow\">https://github.com/BVLC/caffe/pull/439</a></p>\n\n<p>I've also noticed in the last comment of the above issue that there's some speed up to the CPU code on this pull request as well, though I've never tried it myself: <a href=\"https://github.com/BVLC/caffe/pull/2610\" rel=\"nofollow\">https://github.com/BVLC/caffe/pull/2610</a></p>\n" } ]
32,527,603
0
<theano><lasagne>
2015-09-11T16:01:43.397
null
1,001,019
LSTMLayer produces NaN values even before training it
<p>I'm currently trying to construct a LSTM network with Lasagne to predict the next step of noisy sequences. I first trained a stack of 2 LSTM layers for a while, but had to use an abysmally small learning rate (1e-6) because of divergence issues (that ultimately produced NaN values). The results were kind of disappointing, as the network produced smooth, out-of-phase versions of the input.</p> <p>I then came to the conclusion I should use better parameter initialization than what is given by default. The goal was to start from a network that just mimics identity, since for strongly auto-correlated signal it should be a good first estimation of the next step (x(t) ~ x(t+1)), and to sprinkle a bit of noise on top of it.</p> <pre></pre> <p>I then use this lstm generation code to generate the following network:</p> <pre></pre> <p>Problem is, even without any training, this network produces garbage values and sometimes even a bunch of NaNs, right from the very first LSTM layer:</p> <pre></pre> <p>I don't get why, I checked each matrices and their values are fine, like I wanted them to be. I even tried to recreate each gate activations and the resulting hidden activations using the actual numpy arrays and they reproduce the input just fine. What did I do wrong there??</p>
[]
32,529,056
1
<theano>
2015-09-11T17:30:40.013
33,463,358
3,261,934
Why does theano scan work differently when the code is nearly the same?
<p>The code below:</p> <pre></pre> <p>has an error:</p> <pre></pre> <p>But when I change 1 to any other number, for example:</p> <pre></pre> <p>it works fine.</p> <p>I don't know what happened here. Could someone help me?</p>
[ { "AnswerId": "33463358", "CreationDate": "2015-11-01T14:41:15.117", "ParentId": null, "OwnerUserId": "2859669", "Title": null, "Body": "<p>Please follow this post: <a href=\"https://github.com/Theano/Theano/issues/2985\" rel=\"nofollow\">https://github.com/Theano/Theano/issues/2985</a></p>\n\n<p>Passing a tensor whose shape includes 1 as a part of outputs_info when calling theano.scan fails unless those axes of shape 1 are unbroadcasted manually using tensor.unbroadcast. This is due to the different broadcasting pattern between the actual return from the inner function of scan and the corresponding one passed via outputs_info.</p>\n\n<p>Try:</p>\n\n<pre><code>h1=T.unbroadcast(T.as_tensor_variable(np.zeros((1, 20), dtype=theano.config.floatX)), 0)\ns1=T.unbroadcast(T.as_tensor_variable(np.zeros((1, 20), dtype=theano.config.floatX)), 0)\n</code></pre>\n\n<p>to make the first dimension unbroadcastable. </p>\n" } ]
32,536,426
2
<computer-vision><neural-network><convolution><deep-learning><caffe>
2015-09-12T07:40:09.220
32,547,133
639,973
fine-tuning a CNN from a lower fc layer
<p>I've noticed that most fine-tuning of a CNN over a new dataset is done only on the "last" fully connected (fc) layer.</p> <p>I'm interested in fine-tuning from the "first" fully connected layer: that is, I want to use mid-level features from the convolution and pooling layers as they are (supposing the net is trained on ImageNet), but then fit all fc layers to my new dataset.</p> <p>Theoretically and in practice, what is the supposed effect of this? Is it likely to learn a more proper set of parameters for my new dataset?</p>
[ { "AnswerId": "32547133", "CreationDate": "2015-09-13T06:58:58.803", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Theoretically, the deeper you fine-tune, the better your model fits your data. So, if you could fine-tune the whole model - the better. </p>\n\n<p>So, what's the catch, you must be asking, why don't everyone fine-tune the whole model?<br>\nFirst, fine-tuning the whole model involves lots and lots of parameters, in order to train properly millions of parameters without the risk of overfitting, you must have a LOT of new training examples. In most cases, when fine-tuning you only have a very few annotated samples for the new task and therefore you are unable to afford fine-tuning of the whole model.<br>\nSecond, fine-tuning the whole model takes much longer than training just the top fc layer. Thus, if you have little time and budget you only fine-tune the top fc layer.</p>\n\n<p>In your case, if you have enough samples you may fine-tune the top two fc layers. From my experience it is better to fine tune the top layer first and then fine-tune the top two together after some iterations are done on the top layer alone. </p>\n" }, { "AnswerId": "42738900", "CreationDate": "2017-03-11T18:20:30.483", "ParentId": null, "OwnerUserId": "3718030", "Title": null, "Body": "<p>The purpose of FC layers in a ConvNet is only to perform classification for your problem. You could just use the final flattened output from your last Conv/Pooling layer as engineered features and put it in another machine learning model and it would have the same effect.</p>\n\n<p>This implies that the parameters learned by FC layers in most cases are very problem specific (depends on the data) and, in most cases not transferable. </p>\n\n<p>So whenever people fine tune on a pre-trained model they almost always dump the FC layers on the top.</p>\n\n<p>Now you can go 2 ways from here.</p>\n\n<ol>\n<li>Use the final flattened output from the last Conv/Pooling layer as extracted features for your problem and train a ML model on it. This method is commonly used if your data set is small or not similar to the pre-trained model.</li>\n<li>Get the extracted features using the above method and then use them to train a FC neural network. Once you achieved a decent accuracy stack it on top of last conv/pooling layer of the pre-trained model (dont forget to remove the orignal FC layers). Now freeze(parameters are made constant and dont change on training) most of the pre-trained model and only allow the last few conv layers to be trained. Now train the entire network to train with a very small learning rate.</li>\n</ol>\n\n<p>The whole point of freezing most of the model is that we assume that the model already knows basic stuff like edge detection and color from the earlier conv layers. Now we fine tune the last few layers for the our problem. We chose a small learning rate so that we don't end up messing up what the model has already learned. </p>\n\n<p>The reason we train the FC layers before we fit them on to the pre-trained model is just to save training time and more importantly to ensure that we don't make to much changes to the Conv layers and end up over fitting. </p>\n" } ]
32,538,758
2
<python><ipython><caffe><pycaffe>
2015-09-12T12:25:27.750
32,539,282
2,452,617
NameError: name 'get_ipython' is not defined
<p>I am working on Caffe framework and using PyCaffe interface. I am using a Python script obtained from converting the IPython Notebook <strong>00-classification.ipynb</strong> for testing the classification by a trained model for ImageNet. But any <strong>get_ipython()</strong> statement in the script is giving the following error:</p> <pre></pre> <p>In the script, I'm importing the following:</p> <pre></pre> <p>Can someone please help me to resolve this error?</p>
[ { "AnswerId": "32539282", "CreationDate": "2015-09-12T13:17:25.860", "ParentId": null, "OwnerUserId": "1048100", "Title": null, "Body": "<p>You have to run your script with ipython:</p>\n\n<pre><code>$ ipython python/my_test_imagenet.py\n</code></pre>\n\n<p>Then <code>get_ipython</code> will be already in global context.</p>\n\n<p>Note: Importing it via <code>from IPython import get_ipython</code> in ordinary shell <code>python</code> will not work as you really need <code>ipython</code> running.</p>\n" }, { "AnswerId": "48032629", "CreationDate": "2017-12-30T09:27:33.323", "ParentId": null, "OwnerUserId": "207661", "Title": null, "Body": "<p>If your intention is to run converted .py file notebook then you should just comment out <code>get_ipython()</code> statements. The matlibplot output can't be shown inside console so you would have some work to do . Ideally, iPython shouldn't have generated these statements. You can use following to show plots:</p>\n\n<pre><code>plt.show(block=True)\n</code></pre>\n" } ]
32,553,374
0
<python><machine-learning><neural-network><theano>
2015-09-13T18:55:14.593
null
3,301,357
How can I get not only a truncated-BPTT grad unrolled for k steps in theano 'scan', but also each (1st through k-th) component of that grad?
<p>If I use theano.scan with truncate_gradient=k (int >0), I can obtain a truncated-BPTT gradient for a recurrent neural network. That gradient will be calculated for an RNN unrolled k steps backwards in time (which becomes a k-layer feed-forward neural network, FFNN). Basically (according to theory), that truncated gradient for the RNN (calculated by unrolling the RNN k steps backwards) should consist of the sum of the gradients for all k layers of the unrolled FFNN.</p> <p>Basically, I'm looking for how to get each component of that sum in theano... It seems to me that you can easily get the resulting truncated grad of an RNN (which is represented by a scan op with truncate_gradient=k; this functionality works out of the box), but it's quite tricky to get each component of that unrolled sum of truncated gradients for the scan op/RNN.</p> <p><strong>What I already tried:</strong></p> <p>I looked through the theano scan op internals, especially the 'grad' method... and its code is quite complicated.</p> <p>Also, I tried to print graphs for the resulting gradients - I tried to print graphs for the same RNN with truncate_gradients=1 and truncate_gradients=10 (using theano.printing.pydotprint) - these graphs were the same :(</p> <p>Then I used theano.printing.pydotprint with scan_graphs=True, which should print the internals of scan. But I failed with this exception:</p> <p>AttributeError: 'Scan' object has no attribute 'fn'.</p> <p>Also, I googled for these questions - nothing. I asked in theano-users - haven't got any response yet.</p>
[]
32,563,651
1
<lua><neural-network><batch-processing><torch>
2015-09-14T11:27:08.540
32,570,711
350,605
Batch processing in Torch with ClassNLLCriterion
<p>I'm trying to implement a simple NN in Torch to learn more about it. I created a very simple dataset: binary numbers from 0 to 15 and my goal is to classify the numbers into two classes - class 1 are numbers 0-3 and 12-15, class 2 are the remaining ones. The following code is what I have now (I have removed the data loading routine only):</p> <pre></pre> <p>This is what the data and class Tensors look like:</p> <pre></pre> <p>Which is what I expect it to be. However, when running this code, I get the following error on line <strong>loss = criterion:forward( prediction, class )</strong>:</p> <blockquote> <p>torch/install/share/lua/5.1/nn/ClassNLLCriterion.lua:69: attempt to perform arithmetic on a nil value</p> </blockquote> <p>When I modify the training routine like this (processing a single data point at a time instead of all 16 in a batch) it works and the network successfully learns to recognize the two classes:</p> <pre></pre> <p>I'm not sure what might be wrong with the "batch processing" I'm trying to do. A brief look at the ClassNLLCriterion didn't help; it seems I'm giving it the expected input (see below), but it still fails. The input it receives (prediction and class Tensors) looks like this:</p> <pre></pre> <p>Can someone help me out here? Thanks.</p>
[ { "AnswerId": "32570711", "CreationDate": "2015-09-14T17:42:17.660", "ParentId": null, "OwnerUserId": "4850610", "Title": null, "Body": "<p>Experience has shown that <code>nn.ClassNLLCriterion</code> expects target to be a <strong>1D tensor</strong> of size <code>batch_size</code> or a <strong>scalar</strong>. Your <code>class</code> is a 2D one (<code>batch_size x 1</code>) but <code>class[i]</code> is 1D, that's why your non-batch version works. </p>\n\n<p>So, this will solve your problem:</p>\n\n<pre><code>class = class:view(-1)\n</code></pre>\n\n<p>Alternatively, you can replace</p>\n\n<pre><code>network:add( nn.LogSoftMax() )\ncriterion = nn.ClassNLLCriterion()\n</code></pre>\n\n<p>with the equivalent:</p>\n\n<pre><code>criterion = nn.CrossEntropyCriterion()\n</code></pre>\n\n<p>The interesting thing is that <code>nn.CrossEntropyCriterion</code> is also able to take a <strong>2D tensor</strong>. Why is <code>nn.ClassNLLCriterion</code> not?</p>\n" } ]
32,565,102
0
<python><theano><lasagne><nolearn>
2015-09-14T12:42:13.403
null
2,902,280
Nolearn/Lasagne neural network not starting training
<p>I am using , and to train a neural net on GPU (in an notebook). However, because of the first layer (), the following network does not start training, even after waiting a few hours:</p> <pre></pre> <p>If I comment out the first layer, the training starts in a few seconds. Training more complicated networks is not a problem either. Any idea what's causing the issue?</p> <p><strong>Edit</strong>: oddly enough, if I remove and , the training also starts within a reasonable time.</p> <p><strong>Edit2</strong>: what's even stranger is, if I change the size of the filters to 10 in the layer , then the training starts in a reasonable amount of time. If after that I stop the cell's execution, change this value to 1, and reexecute the cell, the training goes fine...</p> <p>Finally, I've started using another framework, but if someone's interested, <a href="https://groups.google.com/forum/#!searchin/lasagne-users/not$20starting/lasagne-users/XR-jmU1pMWc/Erym1kOhBQAJ" rel="nofollow">here's the link to the thread</a> I started on the lasagne user group.</p>
[]
32,574,251
0
<c++><protocol-buffers><deep-learning><caffe><gradient-descent>
2015-09-14T21:29:10.757
null
1,245,262
Adjusting proto file for Caffe
<p>I'm trying to modify in order to add 2 new fields to SolverParameter. The two lines I add, at the very end of the SolverParameter message, are:</p> <pre></pre> <p>However, when I rerun training nets that worked before, I get the following error:</p> <pre></pre> <p>The full protobuf message is as follows:</p> <pre></pre> <p>Also, surprisingly (to me), if the two new lines:</p> <pre></pre> <p>are inserted just after </p> <pre></pre> <p>then the error I get is connected with a misreading/parsing of the variable (key id = 30).</p> <p>Does this make sense to anyone? Is there something obviously wrong with the way I'm inserting my 2 new fields? Is there some other code section I need to modify?</p> <p>Thanks....</p>
[]
32,576,858
1
<backpropagation><torch>
2015-09-15T02:40:50.587
32,589,478
5,118,777
How does backpropagation work in torch 7?
<p>I tried to understand supervised learning from the torch tutorial.</p> <p><a href="http://code.madbits.com/wiki/doku.php?id=tutorial_supervised" rel="nofollow">http://code.madbits.com/wiki/doku.php?id=tutorial_supervised</a></p> <p>And backpropagation:</p> <p><a href="http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html" rel="nofollow">http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html</a></p> <p>As far as I know, the parameter update in this torch tutorial is in Step 4, Training Procedure,</p> <pre></pre> <p>For example, I got this</p> <pre></pre> <p>Is df_do this?</p> <pre></pre> <p>I know the target is 9 and the output is 4 in this example, so the result is wrong and gives the 9th element of df_do "-1".</p> <p>But why?</p> <p>According to <a href="http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html" rel="nofollow">http://home.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html</a>,</p> <p>df_do is [ target (desired output) - output ].</p>
[ { "AnswerId": "32589478", "CreationDate": "2015-09-15T15:05:36.557", "ParentId": null, "OwnerUserId": "4850610", "Title": null, "Body": "<p>In Torch backprop works exactly as it does in mathematics. <code>df_do</code> is a <strong>derivative of loss w.r.t. prediction</strong>, and therefore entirely defined by your <a href=\"http://code.madbits.com/wiki/doku.php?id=tutorial_supervised_3_loss\" rel=\"nofollow noreferrer\">loss function</a>, i.e. <code>nn.Criterion</code>. \nThe most famous one is Mean Square Error (<code>nn.MSECriterion</code>):\n<a href=\"https://i.stack.imgur.com/fsfI0.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fsfI0.gif\" alt=\"enter image description here\"></a></p>\n\n<p>Note that MSE criterion expects target to have the same size as prediction (a one-hot vector for classification). If you choose MSE, your derivative vector <code>df_do</code> will be computed as: </p>\n\n<p><a href=\"https://i.stack.imgur.com/Z2dr2.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Z2dr2.gif\" alt=\"enter image description here\"></a></p>\n\n<p>The MSE criterion, however, is typically not very good for classification. The more suitable one is Likelihood criterion, which takes a <a href=\"https://en.wikipedia.org/wiki/Probability_vector\" rel=\"nofollow noreferrer\">probability vector</a> as prediction and a scalar index of the true class as target. The aim is to simply maximize probability of the true class, that equals to minimization of its negative:</p>\n\n<p><a href=\"https://i.stack.imgur.com/qEK3O.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qEK3O.gif\" alt=\"enter image description here\"></a></p>\n\n<p>If we give it log-probability vector qua prediction (it is a monotone transformation and thus doesn't affect the optimization result but more computationally stable), we'll get the Negative Log Likelihood loss function (<code>nn.ClassNLLCriterion</code>):</p>\n\n<p><a href=\"https://i.stack.imgur.com/9ME9n.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9ME9n.gif\" alt=\"enter image description here\"></a></p>\n\n<p>In that case, <code>df_do</code> is as follows:</p>\n\n<p><a href=\"https://i.stack.imgur.com/glQ7R.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/glQ7R.gif\" alt=\"enter image description here\"></a></p>\n\n<p>In the torch tutorial <a href=\"https://github.com/torch/tutorials/blob/master/2_supervised/3_loss.lua\" rel=\"nofollow noreferrer\">NLL criterion is used by default</a>.</p>\n" } ]
32,578,951
2
<input><deep-learning><classification><theano>
2015-09-15T06:17:19.607
32,590,548
5,121,855
How to get Elemwise{tanh,no_inplace}.0 value
<p>I am using <a href="https://github.com/lisa-lab/DeepLearningTutorials" rel="nofollow">Deep learning Theano</a>. How can I see the content of a variable like this: . It is the input data of <a href="https://github.com/lisa-lab/DeepLearningTutorials/blob/master/code/logistic_sgd.py" rel="nofollow">logistic layer</a>. </p>
[ { "AnswerId": "32583060", "CreationDate": "2015-09-15T09:57:17.340", "ParentId": null, "OwnerUserId": "4511818", "Title": null, "Body": "<p>Right now, you don't seem to print values but operations. The output <code>Elemwise{tanh,no_inplace}.0</code> means, that you have an element wise operation of tanh, that is not done in place. You still need to create a function that takes input and executes your operation. Then you need to call that function and print the result. You can read more about that in the graph-structure part of their <a href=\"http://deeplearning.net/software/theano/tutorial/symbolic_graphs.html\" rel=\"nofollow\">tutorial</a>.</p>\n" }, { "AnswerId": "32590548", "CreationDate": "2015-09-15T15:56:25.320", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>Suppose your variable is called <code>t</code>. Then you can evaluate it by calling <code>t.eval()</code>. This may fail if input data are needed. In that case you need to supply them by providing a dictionary like this <code>t.eval({input_var1: value1, input_var2: value2})</code>. This is the ad-hoc way of evaluating a theano-expression.</p>\n\n<p>The way it works in real programs is to create a function taking the necessary input, for example: <code>f = theano.function([input_var1, input_var2], t)</code>, will yield a function that takes two input variables, calculates <code>t</code> from them and outputs the result.</p>\n" } ]
32,581,184
1
<python><theano>
2015-09-15T08:23:35.313
null
4,565,947
No module named nanguardmode
<p>I have a bug in my theano program leading to NaN values. The doc recommends using to track down the source of the problem.</p> <p>When I copy/paste this line from the doc webpage:</p> <pre></pre> <p>I get:</p> <pre></pre> <p>Can't find any sign of when I type:</p> <pre></pre> <p>Any idea why is absent? How can I fix this?</p> <p>EDIT:</p> <p>Thanks for your replies.</p> <p>Concerning my Theano version, I couldn't find how to check it. But I assume it is the latest: I installed it from the install webpage about a month ago. I'm on Windows 64bit.</p> <p>Concerning the detect_nan hack: things just get weirder!</p> <p>First: if I try to use:</p> <pre></pre> <p>I get: </p> <pre></pre> <p>Indeed, numpy was not imported in the monitormode module... Is that a known bug?</p> <p>Second: if I try to use a copy/paste of detect_nan, the NaNs magically go away. Everything else remaining the same, without detect_nan in my theano function (that trains a model iteratively), I get NaNs at iteration 5:</p> <pre></pre> <p>(the last figure is the cost value)</p> <p>When I do add</p> <pre></pre> <p>to the function, no NaNs appear up to at least iteration 100 (and probably more).</p> <pre></pre> <p>What's going on here??? </p>
[ { "AnswerId": "32595409", "CreationDate": "2015-09-15T20:52:40.590", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p><code>NanGuardMode</code> was moved to Theano's bleeding edge version (from PyLearn2) on May 1st. This was after the release of version 0.7 on March 26th so you'll need to <a href=\"http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\" rel=\"nofollow\">upgrade to the bleeding edge version</a> from GitHub to use NanGuardMode.</p>\n\n<p>Alternatively you could use the <code>detect_nan</code> sample found in the <a href=\"http://deeplearning.net/software/theano/tutorial/debug_faq.html#how-do-i-step-through-a-compiled-function\" rel=\"nofollow\">debug FAQ</a>:</p>\n\n<pre><code>import numpy\n\nimport theano\n\n# This is the current suggested detect_nan implementation to\n# show you how it work. That way, you can modify it for your\n# need. If you want exactly this method, you can use\n# ``theano.compile.monitormode.detect_nan`` that will always\n# contain the current suggested version.\n\ndef detect_nan(i, node, fn):\n for output in fn.outputs:\n if (not isinstance(output[0], numpy.random.RandomState) and\n numpy.isnan(output[0]).any()):\n print '*** NaN detected ***'\n theano.printing.debugprint(node)\n print 'Inputs : %s' % [input[0] for input in fn.inputs]\n print 'Outputs: %s' % [output[0] for output in fn.outputs]\n break\n\nx = theano.tensor.dscalar('x')\nf = theano.function([x], [theano.tensor.log(x) * x],\n mode=theano.compile.MonitorMode(\n post_func=detect_nan))\n</code></pre>\n" } ]
32,587,927
1
<python><deep-learning><caffe>
2015-09-15T13:52:37.517
32,600,424
1,103,412
Caffe - draw_net_to_file - 'Classifier' object has no attribute 'name'
<p>I found the method in <a href="https://github.com/BVLC/caffe/blob/master/python/caffe/draw.py" rel="nofollow"></a> and want to use it to understand the Caffe network I was given to work with better.</p> <p>The problem is that the following code</p> <pre></pre> <p>fails with the following error</p> <pre></pre> <p>Upon closer investigation, the object really does not expose many methods of the underlying object, such as . How do I instantiate a correctly working instance for this case?</p> <p>I'm using Caffe built from revision .</p>
[ { "AnswerId": "32600424", "CreationDate": "2015-09-16T05:29:18.090", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Take a look at the script <a href=\"https://github.com/BVLC/caffe/blob/master/python/draw_net.py\" rel=\"nofollow\"><code>draw_net.py</code></a> where you can see an example of how to use the functions of <code>draw.py</code>. The <code>net</code> argument is not exactly the same as the <code>caffe.Net</code> object but rather a parsed prototxt:</p>\n\n<pre><code>from google.protobuf import text_format\nimport caffe.draw\nfrom caffe.proto import caffe_pb2\n\nnet = caffe_pb2.NetParameter()\ntext_format.Merge(open(args.input_net_proto_file).read(), net)\n</code></pre>\n" } ]
32,589,349
1
<machine-learning><computer-vision><neural-network><deep-learning><caffe>
2015-09-15T14:59:39.107
32,600,303
1,377,127
How many images can you pass to Caffe at a time?
<p>I noticed how the Caffe MNIST <a href="https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet.prototxt" rel="nofollow">example prototxt file</a> allows for up to 64 images to be passed to the network at a time.<br> Is there a limit for how high I can set this number?<br> Could I (for example) set this number to 200 or even 500 so that I can accept up to 200/500 images at a time without it impacting the predictions negatively?</p>
[ { "AnswerId": "32600303", "CreationDate": "2015-09-16T05:18:31.433", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>The only limit is your machine's memory: When caffe loads the model it allocates memory for all the parameters <em>and</em> all the intermediate data blobs. The more images you process concurrently, the larger the memory you need to allocate in advance.<br>\nThe easiest (and crudest) way of determining this number is simply trail-and-error, try setting it to 200 and see if you get an \"out of memory\" error when loading the model.<br>\nNote that the number of images you can process at the same time depends also on whether you are using GPU or CPU: usually GPU memory is smaller than CPU memory and thus allows you to process fewer images.</p>\n" } ]
32,601,546
1
<ubuntu><g++><shared-libraries><ld><caffe>
2015-09-16T06:50:35.897
null
5,340,867
ld error while compiling caffe: libpng and libgfortran not found, shared libraries
<p>I'm trying to compile caffe on ubuntu 14.04 LTS with Anaconda, in CPU-only mode and with OpenBLAS. Unfortunately, I get an ld error.</p> <p>I followed the instructions, added the dependencies that didn't come with Anaconda and adjusted the Makefile.config accordingly, especially including the Anaconda path. When I do I get the error you see below (I also included the Makefile.config), even though the two files that were not found are in the anaconda/lib folder and in their respective pkgs folders as shared libraries.</p> <p>Thank you very much for your help!</p> <p>Terminal:</p> <pre></pre> <p>Makefile.config:</p> <pre></pre>
[ { "AnswerId": "32604843", "CreationDate": "2015-09-16T09:35:33.360", "ParentId": null, "OwnerUserId": "5340867", "Title": null, "Body": "<p>Okay it looks like i was able to solve this by myself. I don't know whether it's a good solution though, so I appreciate any comment: In the folder /etc/ld.so.conf.d I created a file fooLibrary.conf. This file contains only the full path to Anaconda's lib-folder. After <code>sudo ldconfig</code>, ld was able to find the relevant packages. Once I was done I deleted the file again, but I doubt the problem is solved conclusively. Hence I'll be glad about comments or better solutions.</p>\n" } ]
32,614,006
1
<theano><deep-learning>
2015-09-16T16:28:55.600
null
3,511,203
Is it normal that memory usage is fluctuating in Theano?
<p>I started to use the Theano library because I am sick of compiling &amp; debugging C++ with caffe (though it was a really great library :) )</p> <p>Anyway, I made a deep network (almost like a CNN) with lasagne, and I started training my network. However, nvidia-smi shows that the memory usage keeps fluctuating and I feel bad about it. This did not happen when I used caffe, and because of this, learning could be slow.</p> <p>I used the multiprocess module to fetch the dataset in advance, and my queue status seems right, so loading the dataset could not be the cause of my slow training.</p> <p>I used T.shared to allocate memory on the GPU in advance and made the function with the given variables.</p> <p>Any ideas?</p> <p>Thanks! Happy learning!</p>
[ { "AnswerId": "32615028", "CreationDate": "2015-09-16T17:28:04.913", "ParentId": null, "OwnerUserId": "3511203", "Title": null, "Body": "<p>I find out allow_gc option, and after I turn off the option with allow_gc = False, everything goes fine. </p>\n\n<p>My GPU-Utilization shows better, and learning is faster now. </p>\n" } ]