Each record below lists these fields in this order:

Column            Type           Min     Max
----------------  -------------  ------  ------
QuestionId        int64          388k    59.1M
AnswerCount       int64          0       47
Tags              stringlengths  7       102
CreationDate      stringlengths  23      23
AcceptedAnswerId  float64        388k    59.1M
OwnerUserId       float64        184     12.5M
Title             stringlengths  15      150
Body              stringlengths  12      29.3k
answers           listlengths    0       47
24,804,516
1
<python><numpy><theano><deep-learning>
2014-07-17T13:09:29.357
null
2,688,733
Error while calculating dot product in Theano
<p>I have the following simple code written in Theano, and I am getting an error while compiling the function f:</p> <pre></pre> <p>What is going wrong on my side?</p>
[ { "AnswerId": "24805730", "CreationDate": "2014-07-17T14:02:55.163", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>From the error message you get it seems like theano cannot find the mkl library file libmkl_intel_lp64.dylib.</p>\n\n<p>How do you configure your theano? especially, the <a href=\"http://deeplearning.net/software/theano/library/config.html#config.config.blas.ldflags\" rel=\"nofollow\">blas library</a>?</p>\n\n<p>Please check <a href=\"http://deeplearning.net/software/theano/install.html#troubleshooting-make-sure-you-have-a-blas-library\" rel=\"nofollow\">this section</a> of theano installation manual.</p>\n" } ]
24,892,171
1
<python><theano>
2014-07-22T15:55:33.640
null
1,099,534
Printing whole matrix in Theano
<p>I'm debugging my Theano code and printing the values of my tensors as advised <a href="http://deeplearning.net/software/theano/tutorial/debug_faq.html" rel="nofollow">here</a>:</p> <pre></pre> <p>The issue is that, when is a relatively large matrix, the value is truncated to the first couple of rows and the last couple of rows. However, I would like the whole matrix to be printed. Is this possible?</p>
[ { "AnswerId": "24893256", "CreationDate": "2014-07-22T16:52:10.763", "ParentId": null, "OwnerUserId": "3821154", "Title": null, "Body": "<p>I believe you can print the underlying numpy, accessed as <code>a.get_value()</code>. Within numpy you can modify printing by </p>\n\n<pre><code>numpy.set_printoptions(threshold=10000000)\n</code></pre>\n\n<p>where threshold should be bigger than the number of elements expected, and then the whole array will show. See the documentation for <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html\" rel=\"nofollow\">set_printoptions</a>. Note that if outputted to a console, this may freeze up because of the possibly very large amount of text.</p>\n" } ]
24,917,916
1
<theano>
2014-07-23T18:14:29.777
24,919,518
1,099,534
Thresholding in Theano
<p>Is there a way to threshold the values in a Theano tensor? For instance, if , I would like to create another tensor which contains the same values as , except that the ones that exceed a certain threshold are replaced by the threshold itself:</p> <pre></pre> <p>More generally, is there a standard framework for creating your own operations on tensors?</p>
[ { "AnswerId": "24919518", "CreationDate": "2014-07-23T19:42:06.163", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Here is code that do that. Use the clip function.</p>\n\n<pre><code>import theano\nv = theano.tensor.vector()\nf = theano.function([v], theano.tensor.clip(v, 0, 100))\nf([1, 2, 3, 100, 200, 300])\n# array([ 1., 2., 3., 100., 100., 100.])\n</code></pre>\n\n<p>If you you don't want a min you can use a switch:</p>\n\n<pre><code>import theano\nv = theano.tensor.vector()\nf = theano.function([v], theano.tensor.clip(v, 0, 100))\nf([1, 2, 3, 100, 200, 300])\n# array([ 1., 2., 3., 100., 100., 100.])\nf = theano.function([v], theano.tensor.switch(v&lt;100, v, 100))\nf([1, 2, 3, 100, 200, 300])\n# array([ 1., 2., 3., 100., 100., 100.])\n</code></pre>\n" } ]
25,003,733
2
<python><cuda><gpu><nvcc><theano>
2014-07-28T20:26:44.753
null
562,769
Why does Theano print "cc1plus: fatal error: cuda_runtime.h: No such file or directory"?
<p>I am trying to use the GPU with Theano. I've read <a href="http://deeplearning.net/software/theano/install.html#gpu-linux" rel="nofollow">this tutorial</a>.</p> <p>However, I can't get theano to use the GPU and I don't know how to continue.</p> <h2>Testing machine</h2> <pre></pre> <h2>Cuda sample</h2> <p>Compiling and executing worked as a super user (tested with ):</p> <pre></pre> <p>When I try this as normal user, I get:</p> <pre></pre> <p>How can I get cuda to work with non-super users?</p> <h2>Testing code</h2> <p>The following code is from "<a href="http://deeplearning.net/software/theano/tutorial/using_gpu.html#testing-theano-with-gpu" rel="nofollow">Testing Theano with GPU</a>"</p> <pre></pre> <h2>The error message</h2> <p>The complete error message is much too long to post it here. A longer version is on <a href="http://pastebin.com/eT9vbk7M" rel="nofollow">http://pastebin.com/eT9vbk7M</a>, but I think the relevant part is:</p> <pre></pre> <p>The standard stream gives:</p> <pre></pre> <h2>theano.rc</h2> <pre></pre>
[ { "AnswerId": "25011528", "CreationDate": "2014-07-29T08:55:00.370", "ParentId": null, "OwnerUserId": "571812", "Title": null, "Body": "<p>Try exporting C_INCLUDE_PATH to cuda toolkit include files on your system, something like:</p>\n\n<pre><code>export C_INCLUDE_PATH=${C_INCLUDE_PATH}:/usr/local/cuda/include\n</code></pre>\n" }, { "AnswerId": "25049424", "CreationDate": "2014-07-31T01:08:46.523", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>As some comments told, the problem is the permissio of /dev/nvidia*. As some told this mean during your startup, it don't get initialized correctly. Normally, this is done correctly when the GUI is started. My guess is that you didn't enable it or install it. So you probably have an headless server.</p>\n\n<p>To fix this, just run as root <code>nvidia-smi</code>. This will detect that it isn't started correctly and will fix it. root have the permission to fix things. Normal user don't have the permission to fix this. That is why it work with root (it get automatically fixed), but not as normal user.</p>\n\n<p>This fix need to be done each time the computer boot. To automatise this, you can create as root this file <code>/etc/init.d/nvidia-gpu-config</code> with this content:</p>\n\n<pre><code>#!/bin/sh\n#\n# nvidia-gpu-config Start the correct initialization of nvidia GPU driver.\n#\n# chkconfig: - 90 90\n# description: Init gpu to wanted states\n\n# sudo /sbin/chkconfig --add nvidia-smi\n#\n\ncase $1 in\n'start')\nnvidia-smi\n;;\nesac\n</code></pre>\n\n<p>Then as root run this command: <code>/sbin/chkconfig --add nvidia-gpu-config</code>.</p>\n\n<p>UPDATE: This work for OS that use the init system SysV. If your system use the init system systemd, I don't know if it work.</p>\n" } ]
25,045,897
1
<python><machine-learning><neural-network><theano><conv-neural-network>
2014-07-30T20:03:26.727
25,062,985
3,479,456
Adding additional features in Theano (CNN)
<p>I'm using Theano for classification (convolutional neural networks)</p> <p>Previously, I've been using the pixel values of the (flattened) image as the features of the NN. Now, I want to add additional features. <br>I've been told that I can concatenate that vector of additional features to the flattened image features and then use that as input to the fully-connected layer, but I'm having trouble with that.</p> <p>First of all, is that the right approach?</p> <p>Here's some code snippets and my errors:<br> Similar to the provided example from their site with some modifications<br></p> <p>(from the class that builds the model)</p> <pre></pre> <p>Below, variables and are defined previously. What's important is :</p> <pre></pre> <p>(from the class that trains)</p> <pre></pre> <p>However, I get an error when the train_model is called:</p> <pre></pre> <p>Do the input shapes represent the shapes of , and , respectively?</p> <p>If so, the third seems correct (batchsize=5, 2 extra features), but why is the first a scalar and the second a matrix?</p> <p>More details:</p> <pre></pre> <p>Do I have the right idea or is there a better way of accomplishing this? Any insights into why I'm getting an error?</p>
[ { "AnswerId": "25062985", "CreationDate": "2014-07-31T15:28:02.613", "ParentId": null, "OwnerUserId": "3479456", "Title": null, "Body": "<p>Issue was that I was concatenating on the wrong axis.</p>\n\n<pre><code>layer2_input = T.concatenate([layer2_input, self.f.flatten(2)])\n</code></pre>\n\n<p>should have been</p>\n\n<pre><code>layer2_input = T.concatenate([layer2_input, self.f.flatten(2)], axis=1)\n</code></pre>\n" } ]
25,046,108
1
<python><machine-learning><gpu><theano>
2014-07-30T20:16:18.570
25,084,895
1,460,123
Theano ValueError: Some matrix has no unit stride
<p>Recently have run into issues getting predictions for a trained pylearn2 model. The relevant bits of the traceback are provided below. I have ensured that matches the shape of the numpy array I'm passing in to the theano prediction function I've generated, but still receive the following error. </p> <pre></pre> <p>Interestingly enough, behavior seems to be machine-dependent. I have the prediction script working on my local machine, but execution on a Google Compute Engine instance produces the above error. </p> <p>Any ideas where I might start debugging? The input strides look pretty odd, but I'm unsure how to begin debugging that value.</p>
[ { "AnswerId": "25084895", "CreationDate": "2014-08-01T16:38:42.150", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The problem is that NumPy create an ndarray with wrong strides. This was fixed in more recent version of NumPy. So update NumPy and it should work.</p>\n\n<p>This is the line that show NumPy strides are bad:</p>\n\n<pre><code>Inputs strides: [(9223372036854775807, 4), (4, 9223372036854775807)]\n</code></pre>\n\n<p>Did you compile NumPy with some special flags to test strides for dimensions of size 1?</p>\n\n<p>Here I did a PR to be more tolerant to those invalid strides:</p>\n\n<pre><code>https://github.com/Theano/Theano/pull/2008\n</code></pre>\n" } ]
25,057,977
1
<python><numpy><scipy><theano>
2014-07-31T11:40:05.210
25,097,908
54,564
Defining a function with a loop in Theano
<p>I want to define the following function of two variables in Theano and compute its Jacobian:</p> <pre></pre> <p>How do I make a Theano function for the above expression - and eventually minimize it using its Jacobian?</p>
[ { "AnswerId": "25097908", "CreationDate": "2014-08-02T18:00:18.447", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>Since your function is scalar, the Jacobian reduces to the gradient. Assuming your two variables <code>x1, x2</code> are scalar (looks like it from the formula, easily generalizable to other objects), you can write</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\nx1 = T.fscalar('x1')\nx2 = T.fscalar('x2')\n\nk = T.arange(1, 10)\n\nexpr = ((2 + 2 * k - T.exp(x1 * k) - T.exp(x2 * k)) ** 2).sum()\n\nfunc = theano.function([x1, x2], expr)\n</code></pre>\n\n<p>You can call <code>func</code> on two scalars</p>\n\n<pre><code>In [1]: func(0.25,0.25)\nOut[1]: array(126.5205307006836, dtype=float32)\n</code></pre>\n\n<p>The gradient (Jacobian) is then</p>\n\n<pre><code>grad_expr = T.grad(cost=expr, wrt=[x1, x2])\n</code></pre>\n\n<p>And you can use <code>updates</code> in <code>theano.function</code> in the standard way (see theano tutorials) to make your gradient descent, setting <code>x1, x2</code> as shared variables in givens, by hand on the python level, or using <code>scan</code> as indicated by others.</p>\n" } ]
25,121,749
1
<numpy><theano>
2014-08-04T15:04:21.173
null
2,688,733
Non-unit value on shape on a broadcastable dimension
<p>I am a Theano newbie implementing a simple perceptron-based learning rule, and I am getting the following error. I fail to understand why.</p> <p>Here is my code:</p> <pre></pre> <p>ERROR:</p> <pre></pre>
[ { "AnswerId": "25127898", "CreationDate": "2014-08-04T21:32:23.217", "ParentId": null, "OwnerUserId": "2688733", "Title": null, "Body": "<p>So I found the problem in my code, </p>\n\n<p>The \"train\" function expects a row, but the value being passed is a (3, 1) column vector. </p>\n\n<p>So here is the corrected version:</p>\n\n<pre><code>train = function([z2],\n updates=[(w,w-T.transpose(z2))]\n )\ncount = 0\nwhile np.abs(np.sum(predictedOut(D[0])-D[1])) &gt; 0 and count &lt; 1000 :\n print 'on example ',count\n a1 = D[0][count%N,:].reshape(1,feats+1)\n b1 = D[1][count%N].reshape(1,1)\n a2 = prod(a1).reshape(1,1)\n a3 = w_up(a1,b1,a2)[0].reshape(1,feats+1)\n train(a3)\n count += 1\n</code></pre>\n" } ]
25,143,328
2
<python><numerical-methods><theano>
2014-08-05T15:59:42.247
null
1,508,226
How to group several output variables together in Theano?
<p>I am trying to implement a function in Theano that maps a vector to a vector, but each dimension of the output vector is specified by hand. If I create a Theano function like so:</p> <pre></pre> <p>then gives as the output, when I'd like it to return . What is the Theanic way of doing this?</p>
[ { "AnswerId": "25190520", "CreationDate": "2014-08-07T19:21:38.877", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The other answer by Kyle Kastner will work, but you can have Theano do that for you (I fixed the division by 0 from your example):</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nx = T.dvector('x')\ndx = 28.0 * (x[1] - x[0])\ndy = x[0] * (10.0 - x[1]) - x[2]\ndz = x[0] * x[1] - 8.0/3.0 * x[2]\no = T.as_tensor_variable([dx,dy,dz])\nf = theano.function([x],o)\nf([1,2,3])\n# output array([ 28., 5., -6.])\n</code></pre>\n" }, { "AnswerId": "25158728", "CreationDate": "2014-08-06T11:04:03.477", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>The output of the function is just a list of numpy arrays - you can do <code>np.array(f([1, 2, 3]))</code> to convert the output list into a numpy vector.</p>\n" } ]
25,159,498
1
<python><numpy><neural-network><convolution><theano>
2014-08-06T11:42:17.727
25,161,521
1,714,410
Theano conv2d and max_pool_2d
<p>When implementing a <a href="http://deeplearning.net/tutorial/lenet.html" rel="nofollow">convolutional neural network (CNN) in theano</a> one comes across two variants of operator:</p> <ul> <li><a href="http://deeplearning.net/software/theano/library/tensor/nnet/conv.html#theano.tensor.nnet.conv.conv2d" rel="nofollow"></a></li> <li><a href="http://deeplearning.net/software/theano/library/tensor/signal/conv.html#theano.tensor.signal.conv.conv2d" rel="nofollow"></a></li> </ul> <p>And an implementation of max-pooling: </p> <ul> <li><a href="http://deeplearning.net/software/theano/library/tensor/signal/downsample.html#theano.tensor.signal.downsample.max_pool_2d" rel="nofollow"></a></li> </ul> <p>My questions are: </p> <ol> <li>What is the difference between the two implementations of ? </li> <li><p>What is the difference between the use of argument of and the application of subsampling after ?<br> That is, what is the difference between:</p> <pre></pre> <p>and </p> <pre></pre></li> </ol>
[ { "AnswerId": "25161521", "CreationDate": "2014-08-06T13:19:09.840", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>In answer to your first question, <a href=\"http://deeplearning.net/software/theano/library/tensor/nnet/conv.html\" rel=\"nofollow\">here is the section of the Theano docs that addresses it</a>:</p>\n\n<blockquote>\n <p>Two similar implementation exists for conv2d:</p>\n\n<pre><code>signal.conv2d and nnet.conv2d.\n</code></pre>\n \n <p>The former implements a traditional 2D convolution, while the latter\n implements the convolutional layers present in convolutional neural\n networks (where filters are 3D and pool over several input channels).</p>\n</blockquote>\n\n<p>Under the hood they both call the same function, so the only difference is the user interface.</p>\n\n<p>Regarding your second question, the result is different. The equivalent call to:</p>\n\n<pre><code>conv2(..., subsample=(2,2))\n</code></pre>\n\n<p>would be:</p>\n\n<pre><code>conv2d(...,subsample=(1,1))[:,:,::2,::2]\n</code></pre>\n\n<p>In other words <code>conv2d</code> doesn't take the max over the whole pooling region, but rather the element at index <code>[0,0]</code> of the pooling region.</p>\n" } ]
25,166,657
1
<python><neural-network><theano>
2014-08-06T17:36:06.393
25,205,016
1,120,370
Index gymnastics inside a Theano function
<p>I am using <a href="http://deeplearning.net/software/theano/" rel="nofollow noreferrer">Theano</a> to implement a neural n-gram language model along the lines of <a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0CCIQFjAA&amp;url=http%3A%2F%2Fmachinelearning.wustl.edu%2Fmlpapers%2Fpaper_files%2FBengioDVJ03.pdf&amp;ei=lmDiU5HuLJSlyAStv4LoBA&amp;usg=AFQjCNGlSitd2Z9Z-MSvGtcoJOwpuu28Ug&amp;sig2=GHJW6buocK6-jft5XZ6nIg&amp;bvm=bv.72197243,d.aWw" rel="nofollow noreferrer">Bengio et al 2003</a>. This model uses a distributed representation for words, and I'm having trouble writing a symbolic expression that allows me to take the gradient with respect to the word representation vectors.</p> <p>Following the notation in the paper, I have a word representation matrix of size <em>V x m</em>, where <em>V</em> is the vocabulary size and <em>m</em> is the dimensionality of the word embedding. Each row of is a vector representation of a word.</p> <p>My training data consists of n-grams drawn from a corpus. Let's say I let <em>n = 3</em>. Then I am trying to estimate <em>P(w<sub>t</sub>|w<sub>t-1</sub>, w<sub>t-2</sub>)</em>. A neural network estimates this probability by using the concatenated embedding vectors for <em>w<sub>t-1</sub></em> and <em>w<sub>t-2</sub></em> to predict <em>w<sub>t</sub></em> via a non-linear function. (See the paper for details.) Each word is represented by an index into a vocabulary which also indexes its representation row in . If these indexes are <em>i<sub>1</sub></em>, <em>i<sub>2</sub></em>, and <em>i<sub>3</sub></em>, I am trying to write a Theano expression for.</p> <pre></pre> <p>where contains a hidden layer and a non-linear function, and is the concatenation of the arrays and . The first thing I have to do is write a symbolic Theano expression for . Also, this function needs to take not just a single training instance, but a mini-batch of multiple training instances.</p> <p>I know how to do this if I'm working directly with numpy matricies instead of abstract Theano expressions. For example, if is shared, and is a <em>N x n - 1</em> minibatch of <em>N</em> training vectors of word indexes, I can look up the concatenated vectors like so:</p> <pre></pre> <p>(A bit of index gymnastics I learned <a href="https://stackoverflow.com/questions/25146874/concatenate-over-dimension-in-numpy">elsewhere</a> on StackOverflow.)</p> <p>When I try to compile this expression into a Theano function, however, I run into errors.</p> <pre></pre> <p>The preceding gives me this error</p> <pre></pre> <p>I think this means that the index trick of putting -1 as the final reshape parameter is not supported by the Theano compiler.</p> <p>The equivalent command gives a different error.</p> <pre></pre> <p>I need to write the symbolic expression for so that I can take its gradient with respect to . Can anyone help me do this?</p> <p>Alternately, can someone point me to example Theano code that works with word embeddings. All the tutorial material I've found has been for writing neural nets over image data, but I haven't seen any examples of how to do distributed representations.</p>
[ { "AnswerId": "25205016", "CreationDate": "2014-08-08T13:46:59.560", "ParentId": null, "OwnerUserId": "1120370", "Title": null, "Body": "<p>Well, I'm an idiot. Kinda. I was missing an extra pair of parentheses around my <code>reshape</code> argument. The following works.</p>\n\n<pre><code>function([X_var], C[X_var].reshape((X_var.shape[0], -1)))\n</code></pre>\n\n<p>It's confusing though because the <code>reshape</code> method of an <code>array</code> will take either two arguments like I have above or a tuple like I have in the answer, but Theano will only compile the latter.</p>\n" } ]
25,237,039
4
<cuda><gpu><pickle><theano><deep-learning>
2014-08-11T06:28:42.483
null
1,505,986
Converting a theano model built on GPU to CPU?
<p>I have some pickle files of deep learning models built on a GPU. I'm trying to use them in production, but when I try to unpickle them on the server, I get the following error:</p> <blockquote> <p>Traceback (most recent call last):<br> File "score.py", line 30, in <br> model = (cPickle.load(file))<br> File "/usr/local/python2.7/lib/python2.7/site-packages/Theano-0.6.0-py2.7.egg/theano/sandbox/cuda/type.py", line 485, in CudaNdarray_unpickler<br> return cuda.CudaNdarray(npa)<br> AttributeError: ("'NoneType' object has no attribute 'CudaNdarray'", , (array([[ 0.011515 , 0.01171047, 0.10408644, ..., -0.0343636 ,<br> 0.04944979, -0.06583775],<br> [-0.03771918, 0.080524 , -0.10609912, ..., 0.11019105,<br> -0.0570752 , 0.02100536],<br> [-0.03628891, -0.07109226, -0.00932018, ..., 0.04316209,<br> 0.02817888, 0.05785328],<br> ...,<br> [ 0.0703947 , -0.00172865, -0.05942701, ..., -0.00999349,<br> 0.01624184, 0.09832744],<br> [-0.09029484, -0.11509365, -0.07193922, ..., 0.10658887,<br> 0.17730837, 0.01104965],<br> [ 0.06659461, -0.02492988, 0.02271739, ..., -0.0646857 ,<br> 0.03879852, 0.08779807]], dtype=float32),)) </p> </blockquote> <p>I checked for the CudaNdarray package on my local machine and it is not installed, but I am still able to unpickle the files there. On the server, I am not. How do I make them run on a server which doesn't have a GPU?</p>
[ { "AnswerId": "41057109", "CreationDate": "2016-12-09T09:29:28.790", "ParentId": null, "OwnerUserId": "4087317", "Title": null, "Body": "<p>I solved this problem by just saving the parameters W &amp; b, but not the whole model. You can save the parameters use this:<a href=\"http://deeplearning.net/software/theano/tutorial/loading_and_saving.html?highlight=saving%20load#robust-serialization\" rel=\"nofollow noreferrer\">http://deeplearning.net/software/theano/tutorial/loading_and_saving.html?highlight=saving%20load#robust-serialization</a>\nThis can save the CudaNdarray to numpy array. Then you need to read the params by numpy.load(), and finally convert the numpy array to tensorSharedVariable use theano.shared().</p>\n" }, { "AnswerId": "25243316", "CreationDate": "2014-08-11T12:28:34.000", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>There is a script in pylearn2 which may do what you need:</p>\n\n<p><code>pylearn2/scripts/gpu_pkl_to_cpu_pkl.py</code></p>\n" }, { "AnswerId": "31711656", "CreationDate": "2015-07-29T21:39:58.067", "ParentId": null, "OwnerUserId": "309653", "Title": null, "Body": "<p>This works for me. Note: this doesn't work unless the following environment variable is set: <code>export THEANO_FLAGS='device=cpu'</code></p>\n\n<pre><code>import os\nfrom pylearn2.utils import serial\nimport pylearn2.config.yaml_parse as yaml_parse\n\nif __name__==\"__main__\":\n\n_, in_path, out_path = sys.argv\nos.environ['THEANO_FLAGS']=\"device=cpu\"\n\nmodel = serial.load(in_path)\n\nmodel2 = yaml_parse.load(model.yaml_src)\nmodel2.set_param_values(model.get_param_values())\n\nserial.save(out_path, model2)\n</code></pre>\n" }, { "AnswerId": "28958979", "CreationDate": "2015-03-10T08:23:15.550", "ParentId": null, "OwnerUserId": "133374", "Title": null, "Body": "<p>The related Theano code is <a href=\"https://github.com/Theano/Theano/blob/master/theano/sandbox/cuda/type.py\" rel=\"nofollow\">here</a>.</p>\n\n<p>From there, it looks like there is an option <code>config.experimental.unpickle_gpu_on_cpu</code> which you could set and which would make <code>CudaNdarray_unpickler</code> return the underlying raw Numpy array.</p>\n" } ]
25,247,373
3
<ubuntu><numpy><nose><theano>
2014-08-11T15:53:16.853
39,937,822
3,930,228
Need nose >= 0.10.0 error while attempting to install Theano on Ubuntu
<p>Overall I've had a hell of a time getting Theano to work; I've gotten to the stage where I <em>think</em> everything is installed correctly. Running:</p> <pre></pre> <p>and the console tells me that I have the latest version of everything.</p> <p>Running:</p> <pre></pre> <p>and I'm told that all requirements are satisfied.</p> <p>Yet when I try to do the tests as the guide recommends</p> <pre></pre> <p>gives me </p> <pre></pre> <p>I've done a thorough search online and all the solutions I've seen seem to be based around installing nose, but I've definitely got nose installed and above 0.10.0.</p>
[ { "AnswerId": "25324947", "CreationDate": "2014-08-15T10:31:31.473", "ParentId": null, "OwnerUserId": "3930228", "Title": null, "Body": "<p>As suggested I installed <a href=\"https://store.continuum.io/cshop/anaconda/\" rel=\"nofollow\">Anaconda</a>, that did the trick.</p>\n" }, { "AnswerId": "39937822", "CreationDate": "2016-10-08T22:04:31.360", "ParentId": null, "OwnerUserId": "4863734", "Title": null, "Body": "<p>If you use <code>Anaconda</code> simply try this whet your <code>conda</code> environment is activated:</p>\n\n<pre><code>conda install nose\n</code></pre>\n\n<p>I had the same issue and <code>conda install</code> works without any <code>pip</code>!</p>\n" }, { "AnswerId": "37839321", "CreationDate": "2016-06-15T15:11:07.860", "ParentId": null, "OwnerUserId": "2534758", "Title": null, "Body": "<p>I faced similar issue in running theano.test() under Windows 7.\nI used Anaconda to create separate environment for python3.4.\nI installed theano using conda install for the above python3.4 environment.\nBut theano.test() gave me same error message as you have.\nSo I downloaded the nose-parameterized zip from github\nand installed it using the following :</p>\n\n<pre><code>((theanoBasic)) C:\\pythonZipsPy34\\manualZip\\nose-parameterized-master&gt;python setup.py install --record installNotes.txt\n</code></pre>\n" } ]
25,248,472
1
<lua><torch>
2014-08-11T16:55:25.607
25,249,911
163,173
How to convert torch Tensor/ Storage to a lua table?
<p>If I have a tensor:</p> <pre></pre> <p>Is there any way get this data as a Lua table?</p>
[ { "AnswerId": "25249911", "CreationDate": "2014-08-11T18:22:59.900", "ParentId": null, "OwnerUserId": "1688185", "Title": null, "Body": "<p>There is a dedicated constructor to <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#torchtensortable\">create a tensor from a table</a> but so far there is no method out-of-the box to convert the other way around.</p>\n\n<p>Of course you can do that <em>manually</em>:</p>\n\n<pre><code>-- This assumes `t1` is a 2-dimensional tensor!\nlocal t2 = {}\nfor i=1,t1:size(1) do\n t2[i] = {}\n for j=1,t1:size(2) do\n t2[i][j] = t1[i][j]\n end\nend\n</code></pre>\n\n<p>--</p>\n\n<p><strong>Update</strong>: as of <a href=\"https://github.com/torch/torch7/commit/10f3323\">commit 10f3323</a> there is now a dedicated <a href=\"https://github.com/torch/torch7/blob/ff11731/doc/utility.md#table-torchtotableobject\"><code>torch.totable(object)</code></a> converter.</p>\n" } ]
25,269,922
1
<python><ubuntu><numpy><theano>
2014-08-12T16:56:27.573
25,282,710
3,930,228
Can't get Theano to work on ubuntu 14.04
<p>I'm trying to use Theano on ubuntu 14.04, I've followed the guide for an easy install located here <a href="http://deeplearning.net/software/theano/install_ubuntu.html#install-ubuntu" rel="nofollow">http://deeplearning.net/software/theano/install_ubuntu.html#install-ubuntu</a></p> <p>Everything says it's installed fine, if I run:</p> <pre></pre> <p>Then I get in return </p> <pre></pre> <p>And when running</p> <pre></pre> <p>I get</p> <pre></pre> <p>But when I go to run the tests they just don't work.</p> <pre></pre> <p>gives me</p> <pre></pre> <p>and</p> <pre></pre> <p>gives me</p> <pre></pre> <p>The last test gives very similar results</p> <pre></pre> <p>I'm a complete linux newbie so I'm completely baffled by what could be the problem.</p>
[ { "AnswerId": "25282710", "CreationDate": "2014-08-13T09:28:32.990", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>If you can, I would simply use a \"scientific\" Python - either <a href=\"https://store.continuum.io/cshop/anaconda/\" rel=\"nofollow\">Anaconda</a> (my preference) or Enthought Python. In addition to avoiding systemwide installation of packages, it is easy to install things with pip and numpy and scipy come preinstalled. For updating numpy and scipy you can also use the built-in conda package manager - it does an excellent job of handling the nasty work behind installing a new numpy or scipy. </p>\n" } ]
25,284,195
1
<python><gpu><theano>
2014-08-13T10:49:35.220
null
2,447,425
How to stack vectors in Theano without using scan?
<p>I am using theano.scan to create a stacked vector of contexts like this:</p> <pre></pre> <p>It seems that scan is so slow that it slows down the whole processing. In signal processing this is a fairly standard operation, so I was thinking about creating a special op just for this. Unfortunately, I would also need a GPU implementation and grad for this op, and it looks like a long shot for me. Can you kick me in the right direction? I have already read the Extending Theano documentation, but it still doesn't help a lot.</p> <p>Example:</p> <p>in case of </p> <pre></pre> <p>matrix:</p> <pre></pre> <p>would be converted to </p> <pre></pre> <p>Thank you J</p>
[ { "AnswerId": "25293490", "CreationDate": "2014-08-13T18:26:44.580", "ParentId": null, "OwnerUserId": "2447425", "Title": null, "Body": "<p>So the problem is solvable in this way. Works now.</p>\n\n<pre><code>Y_= T.concatenate([Y_[c:Y_.shape[0]+c-left_ctx-right_ctx] for c in range(left_ctx+right_ctx+1)], axis=1)\n</code></pre>\n" } ]
25,326,462
1
<python><numpy><matrix><theano>
2014-08-15T12:33:06.230
25,334,094
1,461,210
Initializing a symmetric Theano dmatrix from its upper triangle
<p>I'm trying to fit a Theano model that is parametrized in part by a symmetric matrix . In order to enforce the symmetry of , I want to be able to construct by passing in just the values in the upper triangle.</p> <p>The equivalent numpy code might look something like this:</p> <pre></pre> <p>However, since symbolic tensor variables don't support item assignment, I'm struggling to find a way to do this in Theano.</p> <p>The closest thing I could find is , which allows me to construct a symbolic matrix from its diagonal:</p> <pre></pre> <p>Whilst there is also a function, this cannot be used to construct a matrix from the upper triangle, but rather returns a copy of an array with the lower triangular elements zeroed.</p> <p>Is there any way to construct a Theano symbolic matrix from its upper triangle?</p>
[ { "AnswerId": "25334094", "CreationDate": "2014-08-15T20:55:50.580", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>You could use the <code>theano.tensor.triu</code> and add the result to its transpose, then subtract the diagonal.</p>\n\n<p>Copy+Pasteable code:</p>\n\n<pre><code>import numpy as np\nimport theano\nimport theano.tensor as T\ntheano.config.floatX = 'float32'\n\nmat = T.fmatrix()\nsym1 = T.triu(mat) + T.triu(mat).T\ndiag = T.diag(T.diagonal(mat))\nsym2 = sym1 - diag\n\nf_sym1 = theano.function([mat], sym1)\nf_sym2 = theano.function([mat], sym2)\n\nm = np.arange(9).reshape(3, 3).astype(np.float32)\n\nprint m\n# [[ 0. 1. 2.]\n# [ 3. 4. 5.]\n# [ 6. 7. 8.]]\nprint f_sym1(m)\n# [[ 0. 1. 2.]\n# [ 1. 8. 5.]\n# [ 2. 5. 16.]]\nprint f_sym2(m)\n# [[ 0. 1. 2.]\n# [ 1. 4. 5.]\n# [ 2. 5. 8.]]\n</code></pre>\n\n<p>Does this help? This approach would require a full matrix to be passed, but would ignore everything below the diagonal and symmetrize using the upper triangle.</p>\n\n<p>We can also take a look at the derivative of this function. In order not to deal with a multidimensional output, we can e.g. look at the gradient of the sum of the matrix entries</p>\n\n<pre><code>sum_grad = T.grad(cost=sym2.sum(), wrt=mat)\nf_sum_grad = theano.function([mat], sum_grad)\n\nprint f_sum_grad(m)\n# [[ 1. 2. 2.]\n# [ 0. 1. 2.]\n# [ 0. 0. 1.]]\n</code></pre>\n\n<p>This reflects the fact that the upper triangular entries figure doubly in the sum.</p>\n\n<hr>\n\n<p>Update: You can do normal indexing:</p>\n\n<pre><code>n = 4\nnum_triu_entries = n * (n + 1) / 2\n\ntriu_index_matrix = np.zeros([n, n], dtype=int)\ntriu_index_matrix[np.triu_indices(n)] = np.arange(num_triu_entries)\ntriu_index_matrix[np.triu_indices(n)[::-1]] = np.arange(num_triu_entries)\n\ntriu_vec = T.fvector()\ntriu_mat = triu_vec[triu_index_matrix]\n\nf_triu_mat = theano.function([triu_vec], triu_mat)\n\nprint f_triu_mat(np.arange(1, num_triu_entries + 1).astype(np.float32))\n\n# [[ 1. 2. 3. 4.]\n# [ 2. 5. 6. 7.]\n# [ 3. 6. 8. 9.]\n# [ 4. 7. 9. 10.]]\n</code></pre>\n\n<hr>\n\n<p>Update: To do all of this dynamically, one way is to write a symbolic version of <code>triu_index_matrix</code>. This can be done with some shuffling of <code>arange</code>s. But probably I am overcomplicating.</p>\n\n<pre><code>n = T.iscalar()\nn_triu_entries = (n * (n + 1)) / 2\nr = T.arange(n)\n\ntmp_mat = r[np.newaxis, :] + (n_triu_entries - n - (r * (r + 1)) / 2)[::-1, np.newaxis]\ntriu_index_matrix = T.triu(tmp_mat) + T.triu(tmp_mat).T - T.diag(T.diagonal(tmp_mat))\n\ntriu_vec = T.fvector()\nsym_matrix = triu_vec[triu_index_matrix]\n\nf_triu_index_matrix = theano.function([n], triu_index_matrix)\nf_dynamic_sym_matrix = theano.function([triu_vec, n], sym_matrix)\n\nprint f_triu_index_matrix(5)\n# [[ 0 1 2 3 4]\n# [ 1 5 6 7 8]\n# [ 2 6 9 10 11]\n# [ 3 7 10 12 13]\n# [ 4 8 11 13 14]]\nprint f_dynamic_sym_matrix(np.arange(1., 16.).astype(np.float32), 5)\n# [[ 1. 2. 3. 4. 5.]\n# [ 2. 6. 7. 8. 9.]\n# [ 3. 7. 10. 11. 12.]\n# [ 4. 8. 11. 13. 14.]\n# [ 5. 9. 12. 14. 15.]]\n</code></pre>\n" } ]
25,330,635
2
<python><enthought><theano>
2014-08-15T16:49:54.653
null
2,688,733
theano NotImplementedError
<p>I am running some theano code making use of tensor.advanced_subtensor, and I am getting the following error:</p> <pre></pre> <p>I have the latest version of theano (0.6.0.dev-60b5ccc2bcabb1010714376764daf8a50722cee9) and numpy (1.8.0). Why am I still getting this error? How can I resolve it? How do I clear the theano cache?</p>
[ { "AnswerId": "44953776", "CreationDate": "2017-07-06T15:56:29.133", "ParentId": null, "OwnerUserId": "5468983", "Title": null, "Body": "<p>You need to clear Theano cache. Cache live in <strong>~/.theano/</strong> folder.\nFollow below steps to clear it manually.</p>\n\n<pre><code> import theano\n print (theano.config.compiledir)\n # and then delete directory returned from above.\n</code></pre>\n\n<p>If you don not want to delete manually then use below command.</p>\n\n<pre><code>theano-cache purge\n</code></pre>\n" }, { "AnswerId": "25334422", "CreationDate": "2014-08-15T21:22:35.227", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>The theano cache is usually in <code>~/.theano/</code> if you are using *ix.</p>\n" } ]
25,339,439
1
<python><machine-learning><theano>
2014-08-16T10:53:08.580
25,343,861
596,046
How to print values from inside a theano function?
<p>I've recently moved from Matlab/C++ to theano and have the following function</p> <pre></pre> <p>and I'd like to print the values between 2 layers of the net in every iteration (for debugging, better control of the function, etc.). I've tried editing the function that sets up the classifier so that it prints (either using print() or theano.printing.Print/theano.pp()), but all I get is a single print while the model is being set up.</p>
[ { "AnswerId": "25343861", "CreationDate": "2014-08-16T20:50:33.680", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>In your example <code>classifier.cost</code> is an expression, probably consisting of several other expressions building on the same input. You can turn any of those intermediate expressions into functions just as you are doing with <code>classifier.cost</code>, e.g.</p>\n\n<pre><code>f_first_layer = theano.function([x], first_layer)\n</code></pre>\n\n<p>You can then call and print the output of this function e.g. after every call to <code>train_model</code>. If you call it <em>before</em> <code>train_model</code> with the same params as you will call <code>train_model</code> with just after, then you will have the exact output of the layers as they will be evaluated by <code>train_model</code> (calling it after <code>train_model</code> will be different due to updating).</p>\n" } ]
25,365,440
1
<python><oop><numpy><pickle><theano>
2014-08-18T14:13:09.920
25,394,371
1,461,210
Why am I allowed pickle instancemethods that are Theano functions, but not normal instancemethods?
<p>In the process of using joblib to parallelize some model-fitting code involving Theano functions, I've stumbled across some behavior that seems odd to me.</p> <p>Consider this very simplified example:</p> <pre></pre> <p>I understand why the first case fails, since is clearly an instancemethod of . What I don't understand is why the second case <em>works</em>, since , and are only declared within the method of . can't be evaluated until the instance has been created. Surely, then, it should also be considered an instancemethod, and should therefore be unpickleable? In fact, even still works if I explicitly declare and to be attributes of the instance.</p> <p>Can anyone explain what's going on here?</p> <hr> <h2>Update</h2> <p>Just to illustrate why I think this behaviour is particularly weird, here are a few examples of some other callable member objects that don't take as the first argument:</p> <pre></pre> <p>None of them are pickleable with the exception of the !</p>
[ { "AnswerId": "25394371", "CreationDate": "2014-08-19T23:08:39.283", "ParentId": null, "OwnerUserId": "322806", "Title": null, "Body": "<p>Theano functions aren't python functions. Instead they are python objects that override <code>__call__</code>. This means that you can call them just like a function but internally they are really objects of some custom class. In consequence, you can pickle them. </p>\n" } ]
25,366,863
1
<python><numpy><theano><gradient-descent><deep-learning>
2014-08-18T15:27:52.347
25,367,188
3,670,532
Clarification in the Theano tutorial
<p>I am reading <a href="http://nbviewer.ipython.org/github/craffel/theano-tutorial/blob/master/Theano%20Tutorial.ipynb" rel="noreferrer">this tutorial</a> provided on the <a href="http://deeplearning.net/software/theano/index.html" rel="noreferrer">home page of Theano documentation</a></p> <p>I am not sure about the code given under the gradient descent section.</p> <p><img src="https://i.stack.imgur.com/Vu9t4.png" alt="enter image description here"></p> <p><strong>I have doubts about the for loop</strong>.</p> <p>If you initialize the '<strong>param_update</strong>' variable to zero.</p> <pre></pre> <p>and then you update its value in the remaining two lines.</p> <pre></pre> <p>Why do we need it?</p> <p>I guess I am getting something wrong here. Can you guys help me!</p>
[ { "AnswerId": "25367188", "CreationDate": "2014-08-18T15:46:25.873", "ParentId": null, "OwnerUserId": "3953341", "Title": null, "Body": "<p>The initialization of <code>param_update</code> using <code>theano.shared(.)</code> only tells Theano to reserve a variable that will be used by Theano functions. This initialization code is only called once, and will not be used later on to reset the value of <code>param_update</code> to 0.</p>\n\n<p>The actual value of <code>param_update</code> will be updated according to the last line </p>\n\n<pre><code>updates.append((param_update, momentum*param_update + (1. - momentum)*T.grad(cost, param)))\n</code></pre>\n\n<p>when <code>train</code> function that was constructed by having this update dictionary as an argument ([23] in the tutorial):</p>\n\n<pre><code>train = theano.function([mlp_input, mlp_target], cost,\n updates=gradient_updates_momentum(cost, mlp.params, learning_rate, momentum))\n</code></pre>\n\n<p>Each time <code>train</code> is called, Theano will compute the gradient of the <code>cost</code> w.r.t. <code>param</code> and update <code>param_update</code> to a new update direction according to momentum rule. Then, <code>param</code> will be updated by following the update direction saved in <code>param_update</code> with an appropriate <code>learning_rate</code>.</p>\n" } ]
25,390,978
1
<theano>
2014-08-19T18:56:47.417
25,428,302
3,670,532
What does 'no_inplace' mean in theano?
<p>Here is the code:</p> <pre></pre> <p>and the output is</p> <pre></pre> <p>The output shows that the apply node has add as its operation.</p> <p><strong>But what does no_inplace mean? And why do we have a ".0" at the end of the output?</strong></p>
[ { "AnswerId": "25428302", "CreationDate": "2014-08-21T13:58:59.820", "ParentId": null, "OwnerUserId": "3670532", "Title": null, "Body": "<p>Inplace computations are computations that destroy their inputs as a side-effect. For example, if you iterate over a matrix and double every element, this is an inplace operation because when you are done, the original input has been overwritten. Ops representing inplace computations are destructive, and by default these can only be inserted by optimizations, not user code.</p>\n\n<p>So no_inplace is just the opposite.</p>\n\n<p>From <a href=\"http://deeplearning.net/software/theano/glossary.html#glossary\" rel=\"nofollow\">http://deeplearning.net/software/theano/glossary.html#glossary</a></p>\n" } ]
25,422,826
2
<python><ipython><theano><pydot>
2014-08-21T09:28:59.143
25,426,249
3,670,532
Error using python pydot
<p>I get an error using the theano.printing.pydotprint() function.</p> <p>The following lines work fine without any error:</p> <pre></pre> <p>Also when I run</p> <pre></pre> <p>in the python interpreter I get output as</p> <pre></pre> <p>but the problem is that when I execute the script using the function, I get the following error</p> <pre></pre> <p>Any idea what the problem is?</p> <p>P.S: I am running the python tutorial given here: <a href="http://deeplearning.net/software/theano/tutorial/printing_drawing.html" rel="nofollow">http://deeplearning.net/software/theano/tutorial/printing_drawing.html</a> So the call to the function is surely correct.</p> <p>Here is the traceback of the error I am getting:</p> <pre></pre>
[ { "AnswerId": "25426249", "CreationDate": "2014-08-21T12:21:16.510", "ParentId": null, "OwnerUserId": "3670532", "Title": null, "Body": "<p>Tried reinstalling pydot as given by the <a href=\"https://stackoverflow.com/questions/15951748/pydot-and-graphviz-error-couldnt-import-dot-parser-loading-of-dot-files-will\">solution to this problem</a>, but this was not working.</p>\n\n<p>That is</p>\n\n<pre><code>pip uninstall pyparsing\npip install -Iv https://pypi.python.org/packages/source/p/pyparsing/pyparsing-1.5.7.tar.gz#md5=9be0fcdcc595199c646ab317c1d9a709\npip install pydot\n</code></pre>\n\n<p>there was some problem with this installation, even though every time, installed successfully message was given.</p>\n\n<p>But</p>\n\n<pre><code>sudo apt-get install python-pydot\n</code></pre>\n\n<p>this worked.</p>\n\n<p>\"Because the solution was not to install pydot from somewhere, but \"python-pydot\" from official ubuntu repositories.\" - <a href=\"https://stackoverflow.com/questions/15951748/pydot-and-graphviz-error-couldnt-import-dot-parser-loading-of-dot-files-will\">answer by sadik</a> worked</p>\n\n<p>We must note that on successful installation of pydot, it can be checked at two places.<br/>\n/usr/share/doc/python-pydot<br/>\nand<br/>\n/usr/share/python-support/python-pydot</p>\n\n<p><img src=\"https://i.stack.imgur.com/nuAw5.png\" alt=\"enter image description here\"></p>\n" }, { "AnswerId": "25424972", "CreationDate": "2014-08-21T11:16:27.173", "ParentId": null, "OwnerUserId": "1403430", "Title": null, "Body": "<p>Abhishek : Check if you could see the \"pydot\" folder under the lib folder. Looks like you are in <em>ix</em> machine. Ideally you would find it is installed or not within 'lib' folder / or within site-packages.</p>\n\n<p>Meanwhile, I would suggest you to try re-installing the package(pydot) and see if it helps. </p>\n" } ]
25,423,173
2
<python><attributeerror><theano>
2014-08-21T09:47:00.147
null
1,521,172
Name conflicting in Theano
<p>I am trying to import theano in a module, but I am getting a traceback:</p> <pre></pre> <p>It seems that there is a name conflict in some config. Can anybody please point me to it?</p>
[ { "AnswerId": "41095855", "CreationDate": "2016-12-12T07:18:19.797", "ParentId": null, "OwnerUserId": "3935797", "Title": null, "Body": "<p>I got similar error using when using the jupyter notebook. Restarting kernel solved the issue.</p>\n" }, { "AnswerId": "31148162", "CreationDate": "2015-06-30T20:44:06.220", "ParentId": null, "OwnerUserId": "1985353", "Title": null, "Body": "<p>This error happens because some module, probably <code>theano.gof</code>, is imported twice. Usually, this is because a first call to <code>import theano.gof</code> gets started, registering <code>'gcc.cxxflags'</code> in the configuration parser a first time, but then raises <code>ImportError</code>, which is catched and ignored.\nThen, <code>import theano.gof</code> gets called again, tries to register the option again, which raises the exception you get.</p>\n\n<p>Is there any traceback or error message before this one, or something that would give a hint of why the first import failed?</p>\n" } ]
25,449,271
1
<python><linux><machine-learning><theano><deep-learning>
2014-08-22T14:23:21.350
null
3,968,255
Why is Theano (much) slower on Windows than on Linux?
<p>I implemented a recursive autoencoder with Theano and tested it on both Linux and Windows. It took ~3 hours and 2.3G of memory on Linux, but ~9 hours and 0.5G of memory on Windows. config.allow_gc=True in both cases.</p> <p>It could be a Python issue, as discussed in the thread: <a href="https://stackoverflow.com/questions/10150881/why-is-python-so-much-slower-on-windows">Why is python so much slower on windows?</a></p> <p>Is there any specific setting in Theano that could slow things down on Windows as well?</p> <p>Thanks,</p> <p>Ya</p>
[ { "AnswerId": "25457043", "CreationDate": "2014-08-22T23:44:54.880", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>It could be that they use different BLAS librairies. From memory, autoencoder bottleneck is the matrix product, that call BLAS. Different BLAS implementation can have up to 10x speed difference.</p>\n\n<p>So check if you used the same BLAS. I would recommand to install python via EPD/Canopy or Anaconda python packages. There not free version link to a good blas and Theano reuse it. The now free version is free for academic.</p>\n" } ]
25,500,045
1
<python><cuda><gpu><theano>
2014-08-26T07:12:44.277
null
3,670,532
GPU with Theano giving poor result compared to CPU with theano
<p>I am doing this tutorial: <a href="http://deeplearning.net/software/theano/tutorial/using_gpu.html#exercise" rel="nofollow">http://deeplearning.net/software/theano/tutorial/using_gpu.html#exercise</a></p> <p>and the solution to the tutorial is given here: <a href="http://deeplearning.net/software/theano/_downloads/using_gpu_solution_1.py" rel="nofollow">http://deeplearning.net/software/theano/_downloads/using_gpu_solution_1.py</a></p> <p>But my issue is when I run the code</p> <p>with GPU:</p> <pre></pre> <p>I got the following output:</p> <pre></pre> <p>and with CPU:</p> <pre></pre> <p>I got the following output:</p> <pre></pre> <p>In <a href="http://deeplearning.net/software/theano/_downloads/using_gpu_solution_1.py" rel="nofollow">the solution</a> they mention a speedup of almost double with the GPU, but I am getting more elapsed time with the GPU than with the CPU.</p> <p>Is it that the code ran on multiple cores on the CPU and thereby got an improvement over the GPU?</p> <p>Can anyone tell me what I am getting wrong? The only thing I can see is an improvement in system time when using the GPU. Is this what they mean by speedup? The overall elapsed time is still higher with the GPU.</p>
[ { "AnswerId": "25518636", "CreationDate": "2014-08-27T03:48:28.580", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>There is one problems (1) and consideration about this(2-3):</p>\n\n<p>1) You don't time correctly. The way you time, you include the theano compilation time. Theano compilation is not made to be included in the profile. You should only time the time spent inside Theano function. For this, modify the script of use the Theano profiler as in the profile.</p>\n\n<p>2) This is a toy example. It do pure Stochastic Gradient Descent(SGD). To get good speed up from the GPU, we need to use minibatch with SGD. If we don't the GPU don't have enough data to parallellize the computation.</p>\n\n<p>3) As this is a toy example with only a small model, the speed up will vary highly depending of the CPU and GPU used. It is probably that you did your timming on a better CPU then the original, or that you did it with a parallel BLAS.</p>\n" } ]
25,619,458
2
<algorithm><convolution><theano>
2014-09-02T08:52:44.560
null
3,897,995
perform the exact same convolution as in theano's conv2d
<p>I have an existing classification model that was trained using theano's conv2d under theano.tensor.nnet. Now I have to use this model to do some sort of prediction in Java. </p> <p>I implemented a simple convolution in Python (in the end, I will code it in Java) as per some documentation (<a href="https://developer.apple.com/Library/ios/documentation/Performance/Conceptual/vImage/ConvolutionOperations/ConvolutionOperations.html" rel="nofollow">https://developer.apple.com/Library/ios/documentation/Performance/Conceptual/vImage/ConvolutionOperations/ConvolutionOperations.html</a>). For example, for a 2*2 kernel (k11,k12,k21,k22), one of the areas under the kernel is (a11,a12,a21,a22). The convolution is performed by a11*k11 + a12*k12 + a21*k21 + a22*k22.</p> <p>Unfortunately, when I test my convolution code and theano's conv code with some dummy matrices and kernels, they give different results; only in some rare cases do they give the same results.</p> <p>It seems to me that there are many variants of the convolution algorithm, and I have to implement exactly the same one used by theano's ConvOp. However, I can't find any material describing theano's Conv2d algorithm. </p> <p>Could you explain a little bit about theano's conv2d algorithm?</p> <p>The following is my python code for convolution:</p> <pre></pre>
[ { "AnswerId": "26234696", "CreationDate": "2014-10-07T11:12:37.737", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>Theano convolution does exactly the same thing as <code>scipy.signal.convolve2d</code>. This is exploited/tested e.g. <a href=\"https://github.com/sklearn-theano/sklearn-theano/blob/master/sklearn_theano/tests/test_base.py#L10\" rel=\"nofollow\">here</a>. For self-containedness, try copy+pasting:</p>\n\n<pre><code>import numpy as np\nfrom scipy.signal import convolve2d\nimport theano\nimport theano.tensor as T\n\nrng = np.random.RandomState(42)\nimage = rng.randn(500, 500).astype(np.float32)\nconv_filter = rng.randn(32, 32).astype(np.float32)\n\nimg = T.tensor4()\nfil = T.tensor4()\n\nfor border_mode in [\"full\", \"valid\"]:\n scipy_convolved = convolve2d(image, conv_filter, mode=border_mode)\n theano_convolve2d = theano.function([img, fil], T.nnet.conv2d(img, fil,\n border_mode=border_mode))\n theano_convolved = theano_convolve2d(image.reshape(1, 1, 500, 500),\n conv_filter.reshape(1, 1, 32, 32))\n l2_discrepancy = np.sqrt(((scipy_convolved - theano_convolved) ** 2).sum())\n print \"Discrepancy %1.5e for border mode %s\" % (l2_discrepancy, border_mode)\n print \"Norms of convolutions: %1.5e, %1.5e\" % (\n np.linalg.norm(scipy_convolved.ravel()),\n np.linalg.norm(theano_convolved.ravel()))\n</code></pre>\n\n<p>It outputs</p>\n\n<pre><code>Discrepancy 9.42469e-03 for border mode full\nNorms of convolutions: 1.65433e+04, 1.65440e+04\nDiscrepancy 9.03687e-03 for border mode valid\nNorms of convolutions: 1.55051e+04, 1.55054e+04\n</code></pre>\n\n<p>Since scipy implements a standard convolution, so does theano.</p>\n" }, { "AnswerId": "39976915", "CreationDate": "2016-10-11T12:08:48.733", "ParentId": null, "OwnerUserId": "3876563", "Title": null, "Body": "<p>Actually you and the (Theano, Scripy) are all right. the reason is that: your are using the different convolution2D. the Theano and Script using the convolution2D defined in Math that should rotate the kernel. but you did not (ref: <a href=\"http://www.songho.ca/dsp/convolution/convolution.html#convolution_2d\" rel=\"nofollow\">http://www.songho.ca/dsp/convolution/convolution.html#convolution_2d</a>).\nSO if your kernel is like this:</p>\n\n<pre><code>[1, 2, 3]\n[4, 5, 6]\n[7, 8, 9]\n</code></pre>\n\n<p>than change it to (center symmetry):</p>\n\n<pre><code>[9, 8, 7]\n[6, 5, 4]\n[3, 2, 1]\n</code></pre>\n\n<p>So using your method will get the same answer as Theano/Scripy</p>\n" } ]
25,685,104
1
<pymc><theano><dirichlet><pymc3>
2014-09-05T11:40:06.987
25,846,440
601,308
Dirichlet process in PyMC 3
<p>I would like to implement the Dirichlet process example referenced in <a href="http://stronginference.com/post/implementing-dirichlet-processes-for-bayesian-semi-parametric-models" rel="nofollow">Implementing Dirichlet processes for Bayesian semi-parametric models</a> (source: <a href="https://github.com/fonnesbeck/pymc_radon/blob/master/radon_dp.py" rel="nofollow">here</a>) in PyMC 3.</p> <p>In the example the stick-breaking probabilities are computed using the decorator:</p> <pre></pre> <p>How would you implement this in PyMC 3, which uses Theano for the gradient computation?</p> <p>edit: I tried the following solution using the method:</p> <pre></pre> <p>Sadly, this is really slow and does not recover the original parameters of the synthetic data.</p> <p>Is there a better solution, and is this even correct?</p>
[ { "AnswerId": "25846440", "CreationDate": "2014-09-15T10:54:31.927", "ParentId": null, "OwnerUserId": "2288595", "Title": null, "Body": "<p>Not sure I have a good answer but perhaps this could be sped up by instead using a theano blackbox op which allows you to write a distribution (or deterministic) in python code. E.g.: <a href=\"https://github.com/pymc-devs/pymc3/blob/master/pymc3/examples/disaster_model_arbitrary_deterministic.py\" rel=\"nofollow\">https://github.com/pymc-devs/pymc3/blob/master/pymc3/examples/disaster_model_arbitrary_deterministic.py</a></p>\n" } ]
25,729,969
7
<python><windows><cuda><mingw><theano>
2014-09-08T17:37:48.177
26,073,714
1,663,093
Installing theano on Windows 8 with GPU enabled
<p>I understand that the Theano support for Windows 8.1 is at experimental stage only but I wonder if anyone had any luck with resolving my issues. Depending on my config, I get three distinct types of errors. I assume that the resolution of any of my errors would solve my problem.</p> <p>I have installed Python using WinPython 32-bit system, using MinGW as described <a href="http://deeplearning.net/software/theano/install.html">here</a>. The contents of my file are as follows:</p> <pre></pre> <p>When I run the error is as follows:</p> <pre></pre> <p>I have also tested it using which is installed on my system with the following error:</p> <pre></pre> <p>In the latter error, several pop-up windows ask me how would I like to open (.res) file before error is thrown.</p> <p> is present in both folders (i.e. VS 2010 and VS 2013). </p> <p>Finally, if I set VS 2013 in the environment path and set contents as follows:</p> <pre></pre> <p>I get the following error:</p> <pre></pre> <p>If I run without the GPU option on, it runs without a problem. Also CUDA samples run without a problem. </p>
[ { "AnswerId": "34711198", "CreationDate": "2016-01-10T21:51:00.907", "ParentId": null, "OwnerUserId": "5771005", "Title": null, "Body": "<p>I used <a href=\"https://stackoverflow.com/a/26073714/1709587\">this guide</a>, and it was quite helpful.\nWhat many of Windows Theano guides only mention in passing (or not at all) is that you will need to compile theano from mingw shell, not from your IDE.</p>\n\n<p>I ran mingw-w64.bat, and from there \"python\" and \"import theano\". Only after that importing it from pycharm works.</p>\n\n<p>Additionally, official instructions on deeplearning.net are bad because they tell you to use CUDA 5.5, but it won't work with newer video cards.</p>\n\n<p>The comments are also quite helpful. If it complains about missing crtdefs.h or basetsd.h, do what Sunando's answer says. If AFTER THAT it still complains that identifier \"Iunknown\" is undefined in objbase.h, stick the following in \nC:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v7.1A\\Include\\objbase.h file, on line 236:</p>\n\n<pre><code>#include &lt;wtypes.h&gt;\n#include &lt;unknwn.h&gt;\n</code></pre>\n\n<p>I had to do this last part to make it work with bleeding edge install (required for parts of Keras).</p>\n\n<p>I also wrote a list of things that worked for me, here:\n<a href=\"http://acoupleofrobots.com/everything/?p=2238\" rel=\"nofollow noreferrer\">http://acoupleofrobots.com/everything/?p=2238</a>\nThis is for 64 bit version.</p>\n" }, { "AnswerId": "33128977", "CreationDate": "2015-10-14T15:04:49.807", "ParentId": null, "OwnerUserId": "3816062", "Title": null, "Body": "<p>I could compile the cu files by adding the required dependencies in the nvcc profile located in “C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v7.5\\bin\\nvcc.profile”</p>\n\n<p>I modified the include and the lib path and it started working.</p>\n\n<p>INCLUDES += “-I$(TOP)/include” $(<em>SPACE</em>) “-IC:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/include” $(<em>SPACE</em>) “-IC:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v7.1A\\Include” $(<em>SPACE</em>)\nLIBRARIES =+ $(<em>SPACE</em>) “/LIBPATH:$(TOP)/lib/$(_WIN_PLATFORM_)” $(<em>SPACE</em>) “/LIBPATH:C:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/lib/amd64” $(<em>SPACE</em>) “/LIBPATH:C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v7.1A\\Lib\\x64” $(<em>SPACE</em>)</p>\n\n<p>I have made a full documentation of the install, hope it helps <a href=\"https://planetanacreon.wordpress.com/2015/10/09/install-theano-on-windows-8-1-with-visual-studio-2013-cuda-7-5/\" rel=\"nofollow\">https://planetanacreon.wordpress.com/2015/10/09/install-theano-on-windows-8-1-with-visual-studio-2013-cuda-7-5/</a></p>\n" }, { "AnswerId": "29081593", "CreationDate": "2015-03-16T16:08:33.083", "ParentId": null, "OwnerUserId": "4677134", "Title": null, "Body": "<p>Following the tutorial by Matt, I ran into issues with nvcc.\nI needed to add the path to VS2010 executables in nvcc.profile (you can find it in the cuda bin folder): </p>\n\n<p><code>\"compiler-bindir = C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\bin\\amd64\"</code></p>\n" }, { "AnswerId": "33011235", "CreationDate": "2015-10-08T09:03:23.100", "ParentId": null, "OwnerUserId": "3079329", "Title": null, "Body": "<p>Here are my simple steps for installing theano on a\n64-bit windows 10 machine. 
It's tested on the code listed <a href=\"http://deeplearning.net/software/theano/tutorial/using_gpu.html#testing-theano-with-gpu\" rel=\"noreferrer\">here</a></p>\n\n<p>(All installations use the default installation path)</p>\n\n<ul>\n<li>install anaconda python 3.x distribution (it already includes numpy,\nscipy, matlibplot, etc.) </li>\n<li>run 'conda install mingw libpython' in command-line</li>\n<li>install theano by downloading it from the official website and do `python setup.py install'</li>\n<li>install the latest CUDA toolkit for 64-bit windows 10 (currently 7.5)</li>\n<li>install visual studio 2013 (free for windows 10)</li>\n<li>create .theanorc.txt file under %USERPROFILE% path and here is\nthe content of the .theanorc.txt file to run theano with GPU</li>\n</ul>\n\n<pre><code>[global]\nfloatX = float32\ndevice = gpu\n\n[nvcc]\nfastmath = True\ncompiler_bindir=C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin\\cl.exe\n\n[cuda]\nC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v7.5\n</code></pre>\n" }, { "AnswerId": "30539337", "CreationDate": "2015-05-29T21:26:55.810", "ParentId": null, "OwnerUserId": "4853662", "Title": null, "Body": "<p>In case you want to upgrade to MS Visual Studio 2012 and CUDA 7 on Windows 8.1 x64, check out this tutorial here:</p>\n\n<p><a href=\"http://machinelearning.berlin/?p=383\" rel=\"nofollow\">http://machinelearning.berlin/?p=383</a></p>\n\n<p>It should work as long as you stick to it exactly.\nAll the best</p>\n\n<p>Christian</p>\n" }, { "AnswerId": "26073714", "CreationDate": "2014-09-27T10:30:52.703", "ParentId": null, "OwnerUserId": "1663093", "Title": null, "Body": "<p>Theano is a great tool for machine learning applications, yet I found that its installation on Windows is not trivial especially for beginners (like myself) in programming. In my case, I see 5-6x speedups of my scripts when run on a GPU so it was definitely worth the hassle. </p>\n\n<p>I wrote this guide based on my installation procedure and it is meant to be verbose and hopefully complete even for people with no prior understanding of building programs under a Windows environment. Most of this guide is based on these <a href=\"https://github.com/Theano/Theano/wiki/Windowsinstallation\" rel=\"nofollow noreferrer\">instructions</a> but I had to change some of the steps in order for it to work on my system. If there is anything that I do that may not be optimal or that doesn't work on your machine, please, let me know and I will try to modify this guide accordingly. </p>\n\n<p>These are the steps (in order) I followed when installing Theano with GPU enabled on my Windows 8.1 machine:</p>\n\n<h3>CUDA Installation</h3>\n\n<p>CUDA can be downloaded from <a href=\"https://developer.nvidia.com/cuda-downloads\" rel=\"nofollow noreferrer\">here</a>. In my case, I chose the 64-bit Notebook version for my NVIDIA Optimus laptop with Geforce 750m. </p>\n\n<p>Verify that your installation was successful by launching <code>deviceQuery</code> from the command line. In my case this was located in the following folder: <code>C:\\ProgramData\\NVIDIA Corporation\\CUDA Samples\\v6.5\\bin\\win64\\Release</code> . If successful, you should see PASS at the end of the test. </p>\n\n<h3>Visual Studio 2010 Installation</h3>\n\n<p>I installed this via <a href=\"http://www.dreamspark.com\" rel=\"nofollow noreferrer\">dreamspark</a>. If you are a student you are entitled to a free version. 
If not, you can still install the <a href=\"http://www.visualstudio.com/en-us/products/visual-studio-express-vs.aspx\" rel=\"nofollow noreferrer\">Express version</a> which should work just as well. After install is complete you should be able to call Visual Studio Command Prompt 2010 from the start menu. </p>\n\n<h3>Python Installation</h3>\n\n<p>At the time of writing, Theano on GPU only allows working with 32-bit floats and is primarily built for 2.7 version of Python. Theano requires most of the basic scientific Python libraries such as <code>scipy</code> and <code>numpy</code>. I found that the easiest way to install these was via <a href=\"http://sourceforge.net/projects/winpython/files/latest/download?source=files\" rel=\"nofollow noreferrer\">WinPython</a>. It installs all the dependencies in a self-contained folder which allows easy reinstall if something goes wrong in the installation process and you get some useful IDE tools such as ipython notebook and Spyder installed for free as well. For ease of use you might want to add the path to your python.exe and path to your Scripts folder in the <a href=\"https://superuser.com/questions/284342/what-are-path-and-other-environment-variables-and-how-can-i-set-or-use-them\">environment variables</a>.</p>\n\n<h3>Git installation</h3>\n\n<p>Found <a href=\"http://msysgit.github.io/\" rel=\"nofollow noreferrer\">here</a>.</p>\n\n<h3>MinGW Installation</h3>\n\n<p>Setup file is <a href=\"http://www.mingw.org/wiki/Getting_Started\" rel=\"nofollow noreferrer\">here</a>. I checked all the base installation files during the installation process. This is required if you run into g++ error described below. </p>\n\n<h3>Cygwin installation</h3>\n\n<p>You can find it <a href=\"https://cygwin.com/install.html\" rel=\"nofollow noreferrer\">here</a>. I basically used this utility only to extract PyCUDA tar file which is already provided in the base install (so the install should be straightforward). </p>\n\n<h3>Python distutils fix</h3>\n\n<p>Open <code>msvc9compiler.py</code> located in your <code>/lib/distutils/</code> directory of your Python installation. Line 641 in my case reads: <code>ld_args.append ('/IMPLIB:' + implib_file)</code>. Add the following after this line (same indentation): </p>\n\n<pre><code>ld_args.append('/MANIFEST')\n</code></pre>\n\n<h3>PyCUDA installation</h3>\n\n<p>Source for PyCUDA is <a href=\"http://pypi.python.org/packages/source/p/pycuda/pycuda-2012.1.tar.gz#md5=b67c4fce6c258834339073f2537fa84f\" rel=\"nofollow noreferrer\">here</a>. </p>\n\n<p><strong>Steps:</strong></p>\n\n<p>Open cygwin and navigate to the PyCUDA folder (i.e. 
<code>/cygdrive/c/etc/etc</code>) and execute <code>tar -xzf pycuda-2012.1.tar.gz</code>.</p>\n\n<p>Open Visual Studio Command Prompt 2010 and navigate to the directory where tarball was extracted and execute <code>python configure.py</code></p>\n\n<p>Open the ./siteconf.py and change the values so that it reads (for CUDA 6.5 for instance):</p>\n\n<pre><code>BOOST_INC_DIR = []\nBOOST_LIB_DIR = []\nBOOST_COMPILER = 'gcc43'\nUSE_SHIPPED_BOOST = True\nBOOST_PYTHON_LIBNAME = ['boost_python']\nBOOST_THREAD_LIBNAME = ['boost_thread']\nCUDA_TRACE = False\nCUDA_ROOT = 'C:\\\\Program Files\\\\NVIDIA GPU Computing Toolkit\\\\CUDA\\\\v6.5'\nCUDA_ENABLE_GL = False\nCUDA_ENABLE_CURAND = True\nCUDADRV_LIB_DIR = ['${CUDA_ROOT}/lib/Win32']\nCUDADRV_LIBNAME = ['cuda']\nCUDART_LIB_DIR = ['${CUDA_ROOT}/lib/Win32']\nCUDART_LIBNAME = ['cudart']\nCURAND_LIB_DIR = ['${CUDA_ROOT}/lib/Win32']\nCURAND_LIBNAME = ['curand']\nCXXFLAGS = ['/EHsc']\nLDFLAGS = ['/FORCE']\n</code></pre>\n\n<p>Execute the following commands at the VS2010 command prompt: </p>\n\n<pre><code>set VS90COMNTOOLS=%VS100COMNTOOLS%\npython setup.py build\npython setup.py install\n</code></pre>\n\n<p>Create this python file and verify that you get a result:</p>\n\n<pre><code># from: http://documen.tician.de/pycuda/tutorial.html\nimport pycuda.gpuarray as gpuarray\nimport pycuda.driver as cuda\nimport pycuda.autoinit\nimport numpy\na_gpu = gpuarray.to_gpu(numpy.random.randn(4,4).astype(numpy.float32))\na_doubled = (2*a_gpu).get()\nprint a_doubled\nprint a_gpu\n</code></pre>\n\n<h3>Install Theano</h3>\n\n<p>Open git bash shell and choose a folder in which you want to place Theano installation files and execute:</p>\n\n<pre><code>git clone git://github.com/Theano/Theano.git\npython setup.py install\n</code></pre>\n\n<p>Try opening python in VS2010 command prompt and run <code>import theano</code></p>\n\n<p>If you get a g++ related error, open MinGW msys.bat in my case installed here: <code>C:\\MinGW\\msys\\1.0</code> and try importing theano in MinGW shell. Then retry importing theano from VS2010 Command Prompt and it should be working now. </p>\n\n<p>Create a file in WordPad (NOT Notepad!), name it <code>.theanorc.txt</code> and put it in <code>C:\\Users\\Your_Name\\</code> or wherever your users folder is located: </p>\n\n<pre><code>#!sh\n[global]\ndevice = gpu\nfloatX = float32\n\n[nvcc]\ncompiler_bindir=C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\bin\n# flags=-m32 # we have this hard coded for now\n\n[blas]\nldflags =\n# ldflags = -lopenblas # placeholder for openblas support\n</code></pre>\n\n<p>Create a test python script and run it:</p>\n\n<pre><code>from theano import function, config, shared, sandbox\nimport theano.tensor as T\nimport numpy\nimport time\n\nvlen = 10 * 30 * 768 # 10 x #cores x # threads per core\niters = 1000\n\nrng = numpy.random.RandomState(22)\nx = shared(numpy.asarray(rng.rand(vlen), config.floatX))\nf = function([], T.exp(x))\nprint f.maker.fgraph.toposort()\nt0 = time.time()\nfor i in xrange(iters):\n r = f()\nt1 = time.time()\nprint 'Looping %d times took' % iters, t1 - t0, 'seconds'\nprint 'Result is', r\nif numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):\n print 'Used the cpu'\nelse:\n print 'Used the gpu'\n</code></pre>\n\n<p>Verify you got <code>Used the gpu</code> at the end and you're done! 
</p>\n" }, { "AnswerId": "25918575", "CreationDate": "2014-09-18T17:10:22.787", "ParentId": null, "OwnerUserId": "1680562", "Title": null, "Body": "<p>Here's a guide to installing theano with CUDA on 64-bit Windows.</p>\n\n<p>It seems straightforward, but I have not actually tested it to ensure that it works.</p>\n\n<p><a href=\"http://pavel.surmenok.com/2014/05/31/installing-theano-with-gpu-on-windows-64-bit/\" rel=\"nofollow\">http://pavel.surmenok.com/2014/05/31/installing-theano-with-gpu-on-windows-64-bit/</a></p>\n" } ]
25,788,268
1
<python-2.7><numpy><enthought><theano>
2014-09-11T13:05:28.370
25,812,219
2,688,733
Theano get unique values in a tensor
<p>I have a tensor which I convert into a vector by flattening. Now I want to remove the duplicate values in this vector. How can I do this? What is the equivalent of numpy.unique() in Theano?</p> <pre></pre>
[ { "AnswerId": "25812219", "CreationDate": "2014-09-12T15:54:16.097", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>EDIT: this is now available in Theano: <a href=\"http://deeplearning.net/software/theano/library/tensor/extra_ops.html#theano.tensor.extra_ops.Unique\" rel=\"nofollow\">http://deeplearning.net/software/theano/library/tensor/extra_ops.html#theano.tensor.extra_ops.Unique</a></p>\n\n<p>This question was also asked on theano-user mailing list. The conclusion is that this is one of the function NumPy function that isn't wrapped in Theano. As he don't need the grad, it can be rapidly wrapped. Here is an example who expect the outputs to be the same as the input.</p>\n\n<pre><code>from theano.compile.ops import as_op\n\n\n@as_op(itypes=[theano.tensor.imatrix],\n otypes=[theano.tensor.imatrix])\ndef numpy_unique(a):\n return numpy.unique(a)\n</code></pre>\n\n<p>More doc about as_op is available here: <a href=\"http://deeplearning.net/software/theano/tutorial/extending_theano.html#as-op-example\" rel=\"nofollow\">http://deeplearning.net/software/theano/tutorial/extending_theano.html#as-op-example</a></p>\n" } ]
25,881,271
1
<python><theano>
2014-09-17T02:21:35.873
null
1,392,141
Swapping rows of a Theano symbolic matrix
<p>I am implementing parallel tempering Gibbs sampling using Theano. I am trying to create a Theano function that takes a matrix and swaps some of its rows. I have a symbolic binary vector that denotes which rows should be swapped (i.e., if an entry is set, the corresponding pair of adjacent rows should be swapped). The order of swapping is not important for me.</p> <p>I was trying to write a loop that goes through the vector and performs the swapping row by row. The problem is that Theano doesn't allow this kind of indexing and assignment with symbolic variables. Here is a simple code snippet of what I am trying to do.</p> <pre></pre> <p>Any ideas on how I can do this the right way?</p>
[ { "AnswerId": "25901769", "CreationDate": "2014-09-17T23:09:29.807", "ParentId": null, "OwnerUserId": "1392141", "Title": null, "Body": "<p>Okay, here is a really simple solution I found.</p>\n\n<pre><code>import numpy as np\n\nimport theano\nimport theano.tensor as T\n\ndef swap(swp, pos, X):\n return T.concatenate([X[:pos],X[[pos+swp]],X[[pos+1-swp]],X[pos+2:]])\n\nmax_length = 10\nswaps = T.ivector('swaps')\npos = T.iscalar('pos')\nX = T.vector('X')\n\nnew_X, _ = theano.scan(swap,\n sequences=[swaps, T.arange(max_length)],\n outputs_info=X)\n\ndo_swaps = theano.function([swaps, X], new_X[-1])\n\nX_swapped = do_swaps(np.array([1, 1, 0, 1], dtype='int32'), np.arange(5))\nprint X_swapped\n</code></pre>\n\n<p>However, I am not sure how it is optimal or not for executing on a GPU.</p>\n" } ]
25,886,374
2
<python><matlab><scipy><theano>
2014-09-17T09:04:13.730
26,065,509
1,714,410
pdist for theano tensor
<p>I have a theano symbolic matrix</p> <pre></pre> <p>It will later be populated with vectors (at train time).</p> <p>I would like to have the theano equivalent of pdist (<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html" rel="noreferrer">pdist of scipy</a> or <a href="http://www.mathworks.com/help/stats/pdist.html" rel="noreferrer">pdist of MATLAB</a>), something like</p> <pre></pre> <p>How can I achieve this? </p> <p>Calling pdist on it directly does not work, as at this stage it is only symbolic...</p> <p><strong>Update:</strong> I would very much like to be able to mimic pdist's "compact" behavior: that is, computing only about half of the entries of the distance matrix.</p>
[ { "AnswerId": "26065509", "CreationDate": "2014-09-26T17:53:15.947", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p><code>pdist</code> from scipy is a collection of different functions - there doesn't exist a Theano equivalent for all of them at once. However, each specific distance, being a closed form mathematical expression, can be written down in Theano as such and then compiled.</p>\n\n<p>Take as a example the minkowski <code>p</code> norm distance (copy+pasteable):</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nX = T.fmatrix('X')\nY = T.fmatrix('Y')\nP = T.scalar('P')\ntranslation_vectors = X.reshape((X.shape[0], 1, -1)) - Y.reshape((1, Y.shape[0], -1))\nminkowski_distances = (abs(translation_vectors) ** P).sum(2) ** (1. / P)\nf_minkowski = theano.function([X, Y, P], minkowski_distances)\n</code></pre>\n\n<p>Note that <code>abs</code> calls the built-in <code>__abs__</code>, so <code>abs</code> is also a theano function. We can now compare this to <code>pdist</code>:</p>\n\n<pre><code>import numpy as np\nfrom scipy.spatial.distance import pdist\n\nrng = np.random.RandomState(42)\nd = 20 # dimension\nnX = 10\nnY = 30\nx = rng.randn(nX, d).astype(np.float32)\ny = rng.randn(nY, d).astype(np.float32)\n\nps = [1., 3., 2.]\n\nfor p in ps:\n d_theano = f_minkowski(x, x, p)[np.triu_indices(nX, 1)]\n d_scipy = pdist(x, p=p, metric='minkowski')\n print \"Testing p=%1.2f, discrepancy %1.3e\" % (p, np.sqrt(((d_theano - d_scipy) ** 2).sum()))\n</code></pre>\n\n<p>This yields</p>\n\n<pre><code>Testing p=1.00, discrepancy 1.322e-06\nTesting p=3.00, discrepancy 4.277e-07\nTesting p=2.00, discrepancy 4.789e-07\n</code></pre>\n\n<p>As you can see, the correspondence is there, but the function <code>f_minkowski</code> is slightly more general, since it compares the lines of two possibly different arrays. If twice the same array is passed as input, <code>f_minkowski</code> returns a matrix, whereas <code>pdist</code> returns a list without redundancy. 
If this behaviour is desired, it can also be implemented fully dynamically, but I will stick to the general case here.</p>\n\n<p>One possibility of specialization should be noted though: In the case of <code>p=2</code>, the calculations become simpler through the binomial formula, and this can be used to save precious space in memory: Whereas the general Minkowski distance, as implemented above, creates a 3D array (due to avoidance of for-loops and summing cumulatively), which is prohibitive, depending on the dimension <code>d</code> (and <code>nX, nY</code>), for <code>p=2</code> we can write</p>\n\n<pre><code>squared_euclidean_distances = (X ** 2).sum(1).reshape((X.shape[0], 1)) + (Y ** 2).sum(1).reshape((1, Y.shape[0])) - 2 * X.dot(Y.T)\nf_euclidean = theano.function([X, Y], T.sqrt(squared_euclidean_distances))\n</code></pre>\n\n<p>which only uses <code>O(nX * nY)</code> space instead of <code>O(nX * nY * d)</code> We check for correspondence, this time on the general problem:</p>\n\n<pre><code>d_eucl = f_euclidean(x, y)\nd_minkowski2 = f_minkowski(x, y, 2.)\nprint \"Comparing f_minkowski, p=2 and f_euclidean: l2-discrepancy %1.3e\" % ((d_eucl - d_minkowski2) ** 2).sum()\n</code></pre>\n\n<p>yielding</p>\n\n<pre><code>Comparing f_minkowski, p=2 and f_euclidean: l2-discrepancy 1.464e-11\n</code></pre>\n" }, { "AnswerId": "25960161", "CreationDate": "2014-09-21T14:21:24.703", "ParentId": null, "OwnerUserId": "97160", "Title": null, "Body": "<p>I haven't worked with Theano before, but here is a solution based on pure Numpy functions (perhaps you convert it to the equivalent theano functions. Note that I'm using automatic <em>broadcasting</em> in the expression below, so you might have to rewrite that explicitly if Theano doesn't support it):</p>\n\n<pre class=\"lang-py prettyprint-override\"><code># X is an m-by-n matrix (rows are examples, columns are dimensions)\n# D is an m-by-m symmetric matrix of pairwise Euclidean distances\na = np.sum(X**2, axis=1)\nD = np.sqrt((a + a[np.newaxis].T) - 2*np.dot(X, X.T))\n</code></pre>\n\n<p>It is based on the fact that: <code>||u-v||^2 = ||u||^2 + ||v||^2 - 2*u.v</code>. 
(I showed this in <a href=\"https://stackoverflow.com/a/4171845/97160\">previous</a> <a href=\"https://stackoverflow.com/a/7774323/97160\">answers</a> of mine using MATLAB)</p>\n\n<p>Here is a comparison against Scipy existing functions:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>import numpy as np\nfrom scipy.spatial.distance import pdist, squareform\n\ndef my_pdist(X):\n a = np.sum(X**2, axis=1)\n D = np.sqrt((a + a[np.newaxis].T) - 2*np.dot(X, X.T))\n return D\n\ndef scipy_pdist(X):\n D = squareform(pdist(X, metric='euclidean'))\n return D \n\nX = np.random.rand(5, 3)\nD1 = my_pdist(X)\nD2 = scipy_pdist(X)\n</code></pre>\n\n<p>The difference should be negligible, close to machine epsilon (<code>np.spacing(1)</code>):</p>\n\n<pre><code>&gt;&gt;&gt; np.linalg.norm(D1-D2)\n8.5368137554718277e-16\n</code></pre>\n\n<p>HTH</p>\n\n<hr>\n\n<h1>EDIT:</h1>\n\n<p>Here is another implementation with a single loop:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>def my_pdist_compact(X):\n D = np.empty(shape=[0,0], dtype=X.dtype)\n for i in range(X.shape[0]-1):\n D = np.append(D, np.sqrt(np.sum((X[i,] - X[i+1:,])**2, axis=1)))\n return D\n</code></pre>\n\n<p>Somewhat equivalent MATLAB code:</p>\n\n<pre class=\"lang-matlab prettyprint-override\"><code>function D = my_pdist_compact(X)\n n = size(X,1);\n D = cell(n-1,1);\n for i=1:n-1\n D{i} = sqrt(sum(bsxfun(@minus, X(i,:), X(i+1:end,:)).^2, 2));\n end\n D = vertcat(D{:});\nend\n</code></pre>\n\n<p>This returns the pairwise-distances in compact form (upper triangular part of the symmetric matrix). This is the same output as <code>pdist</code>. Use <code>squareform</code> to convert it to full matrix.</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>&gt;&gt;&gt; d1 = my_pdist_compact(X)\n&gt;&gt;&gt; d2 = pdist(X) # from scipy.spatial.distance\n&gt;&gt;&gt; (d1 == d2).all()\nTrue\n</code></pre>\n\n<p>I will leave it to you to see if it's possible to write the equivalent <a href=\"http://deeplearning.net/software/theano/tutorial/loop.html\" rel=\"nofollow noreferrer\">loop</a> using Theano (see <a href=\"http://deeplearning.net/software/theano/library/scan.html\" rel=\"nofollow noreferrer\"><code>theano.scan</code></a>)!</p>\n" } ]
25,899,187
1
<pycuda><theano><deep-learning>
2014-09-17T19:51:47.827
25,903,319
1,754,197
pycuda vs theano vs pylearn2
<p>I am currently learning GPU programming to improve the performance of machine learning algorithms. Initially I tried to learn CUDA programming in pure C; then I found pycuda, which to me is a wrapper of the CUDA library; and then I found theano and pylearn2 and got a little confused:</p> <p>I understand them in this way:</p> <ol> <li>pycuda: python wrapper for the CUDA library</li> <li>theano: similar to numpy but runs transparently on GPU and CPU</li> <li>pylearn2: deep learning package which builds on theano and implements several machine learning/deep learning models</li> </ol> <p>Since I am new to GPU programming, should I start learning from the C/C++ implementation, or is starting from pycuda enough, or even starting from theano? E.g. I would like to implement a random forest model after learning GPU programming. Thanks.</p>
[ { "AnswerId": "25903319", "CreationDate": "2014-09-18T02:37:08.770", "ParentId": null, "OwnerUserId": "534969", "Title": null, "Body": "<p>Your understand is almost right. I would just add some remarks about Theano. It's much more than a Numpy which can run on the GPU. Theano is indeed is math expression compiler, which translates symbolic math expressions in highly optimized C/CUDA code, targeted for both CPU and GPU. The code it generates is often much more efficient than the one most programmers would write. Theano also can make symbolic differentiation (very useful for gradient based optimization) and has also a feature to achieve better numerical stability (which probably is something useful, though I don't know for real to what extent). It's very likely Theano will be enough to implement what you need. If you still decide to learn CUDA or PyCUDA, choose the one based no the language you will use, C++ or Python. </p>\n" } ]
25,962,554
2
<python><yaml><theano><unsupervised-learning><autoencoder>
2014-09-21T18:37:33.107
null
3,670,532
Getting the learned representation of the data from the unsupervised learning in pylearn2
<p>We can train an autoencoder in pylearn2 using the YAML file below (along with pylearn2/scripts/train.py)</p> <pre></pre> <p>What we get is the learned autoencoder model, saved as "dae_l1.pkl".</p> <p>If I want to use this model for supervised training, I can use "dae_l1.pkl" to initialize the layer of an MLP. I can then train this model. I can even predict the output of the model using the 'fprop' function.</p> <p>But what if I don't want to use this pretrained model for supervised learning, and I just want to save the new learned representation of my data produced by the autoencoder?</p> <p>How can I do this?</p> <p>An even more detailed question is posted <a href="https://groups.google.com/forum/#!topic/pylearn-users/8FsuAts9PoA" rel="nofollow">here</a></p>
[ { "AnswerId": "31551779", "CreationDate": "2015-07-22T00:14:25.513", "ParentId": null, "OwnerUserId": "674069", "Title": null, "Body": "<p>I think you can use the encode and decode functions of the autoencoder to get the hidden representation. E.g:</p>\n\n<pre><code>l1_path = 'dae_l1.pkl'\nl1 = serial.load(l1_path)\n\"\"\"encode\"\"\"\n#layer 1\nl1Input = l1.get_input_space().make_theano_batch()\nl1Encode = l1.encode(l1Input)\nl1Decode = l1.decode(l1Encode)\nl1EncodeFunction = theano.function([l1Input], l1Encode)\nl1DecodeFunction = theano.function([l1Encode], l1Decode)\n</code></pre>\n\n<p>Then, the representation will be:</p>\n\n<pre><code>l1encode = l1EncodeFunction(YourData)\n</code></pre>\n" }, { "AnswerId": "25997299", "CreationDate": "2014-09-23T14:09:31.240", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>The <code>reconstruct</code> method of the pickled model should do it - I believe usage is the same as <code>fprop</code>.</p>\n" } ]
25,986,855
1
<python><pymc><theano>
2014-09-23T04:22:04.747
null
1,559,693
PyMC + Theano on Debian Backports
<p>I'm attempting to run a model in pymc3 that takes advantage of theano when performing the dot product in a multilevel model. However, when I attempt to import theano I get:</p> <pre></pre> <p>I'm running .</p> <p>Conda:</p> <p>Current conda install:</p> <pre></pre> <p>I have also tried, after looking at various problems:</p> <pre></pre> <p>This seg faults.</p> <p>Also, it appears that I have the blas directories in .</p> <pre></pre> <p>Edit:</p> <p>The cause of this also breaks in pymc3.</p>
[ { "AnswerId": "26678427", "CreationDate": "2014-10-31T15:41:21.283", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>I suppose you have an old version of Theano, as now if we we can't find or detect how to link again blas, we have a fallback that do not cause crash, but cause only a warning.</p>\n\n<p>Update Theano to the development version.</p>\n\n<p>If you installed it via conda, remove it first, the update:</p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n\n<p>or </p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git --user\n</code></pre>\n\n<p>More info on how to update to the development version:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\" rel=\"nofollow\">http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions</a></p>\n" } ]
26,009,101
2
<python><classification><theano><deep-learning>
2014-09-24T05:18:58.297
26,049,622
956,730
How to read values of Theano's LogisticRegression class predictions within DeepLearningTutorials package?
<p>I am using <a href="http://deeplearning.net/tutorial/logreg.html" rel="nofollow">Theano's LogisticRegression sample code</a>; I have not modified the code in the given package at all and I'm using the same data.</p> <p>I need to read the values of the predictions, which are in the (self.y_pred) field of the LogisticRegression class, and also the values of the prediction probabilities, which are in the self.p_y_given_x field of the same class.</p> <p>They are TensorType TensorVariables and I don't know how to read/print them. I need them to do some postprocessing but I can't get access to the values. The values should be read after training, which should be around the star characters below.</p> <pre></pre>
[ { "AnswerId": "26070601", "CreationDate": "2014-09-27T02:08:27.450", "ParentId": null, "OwnerUserId": "956730", "Title": null, "Body": "<p>This is the code that worked for me as answered by Kyle. It returns the values of the prediction classes and i print a report out of it.</p>\n\n<pre><code>classifier = LogisticRegression(input=x, n_in=train_set_x.get_value(borrow=True).shape[1], n_out=25)\n\n# the cost we minimize during training is the negative log likelihood of\n# the model in symbolic format\ncost = classifier.negative_log_likelihood(y)\n\n# compiling a Theano function that computes the mistakes that are made by\n# the model on a minibatch\ntest_model = theano.function(inputs=[index],\n outputs=classifier.errors(y),\n givens={\n x: test_set_x[index * batch_size: (index + 1) * batch_size],\n y: test_set_y[index * batch_size: (index + 1) * batch_size]})\n\nvalidate_model = theano.function(inputs=[index],\n outputs=classifier.errors(y),\n givens={\n x: valid_set_x[index * batch_size:(index + 1) * batch_size],\n y: valid_set_y[index * batch_size:(index + 1) * batch_size]})\n\npredict = theano.function(inputs=[],\n outputs=classifier.y_pred,\n givens={\n x: test_set_x})\n# compute the gradient of cost with respect to theta = (W,b)\ng_W = T.grad(cost=cost, wrt=classifier.W)\ng_b = T.grad(cost=cost, wrt=classifier.b)\n\n# specify how to update the parameters of the model as a list of\n# (variable, update expression) pairs.\nupdates = [(classifier.W, classifier.W - learning_rate * g_W),\n (classifier.b, classifier.b - learning_rate * g_b)]\n\n# compiling a Theano function `train_model` that returns the cost, but in\n# the same time updates the parameter of the model based on the rules\n# defined in `updates`\ntrain_model = theano.function(inputs=[index],\n outputs=cost,\n updates=updates,\n givens={\n x: train_set_x[index * batch_size:(index + 1) * batch_size],\n y: train_set_y[index * batch_size:(index + 1) * batch_size]})\n\n###############\n# TRAIN MODEL #\n###############\nprint '... training the model'\n# early-stopping parameters\npatience = 50000 # look as this many examples regardless\npatience_increase = 2 # wait this much longer when a new best is\n # found\nimprovement_threshold = 0.995 # a relative improvement of this much is\n # considered significant\nvalidation_frequency = min(n_train_batches, patience / 2)\n # go through this many\n # minibatche before checking the network\n # on the validation set; in this case we\n # check every epoch\n\nbest_params = None\nbest_validation_loss = numpy.inf\ntest_score = 0.\nstart_time = time.clock()\n\ndone_looping = False\nepoch = 0\nwhile (epoch &lt; n_epochs) and (not done_looping):\n epoch = epoch + 1\n #********************here i call the function and report based on returned class predictions.\n report(predict())\n</code></pre>\n" }, { "AnswerId": "26049622", "CreationDate": "2014-09-25T23:14:04.597", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>You need to compile a function that gives back the prediction.</p>\n\n<p>This code may not work exactly, but this is the idea:</p>\n\n<pre><code>import numpy as np\nimport theano\nimport theano.tensor as T\n\n# Create some data with 100 samples, 10 features \nX = np.random.randn(100, 10)\nX_sym = T.fmatrix('X')\n# Create prediction function\npredict_function = theano.function(inputs=[X_sym], outputs=self.y_pred)\n\n# See the actual prediction\nprint(predict_function(X))\n</code></pre>\n" } ]
26,091,467
1
<github><theano>
2014-09-29T01:45:10.693
26,091,623
419,116
Getting a particular version of a branch
<p>Is there a way to download a particular version of a branch?</p> <p>In particular I'd like to do a git clone of <a href="https://github.com/Theano/Theano" rel="nofollow">https://github.com/Theano/Theano</a> now, and save a set of instructions on how to get the exact same version from github, regardless of future commits.</p>
[ { "AnswerId": "26091623", "CreationDate": "2014-09-29T02:10:25.777", "ParentId": null, "OwnerUserId": "1144203", "Title": null, "Body": "<p><strong>UPDATE</strong></p>\n\n<p>There is an easier way to do this on github if no further changes are expected. In github, you can navigate to the 'tree view' of a repository from your browser via the URL</p>\n\n<pre><code>https://github.com/&lt;repo_name&gt;/tree/&lt;commit_sha&gt;\n</code></pre>\n\n<p>Clicking on the 'Download ZIP' button on the right-hand navigation bar will download the codes of the repository up to that particular commit.</p>\n\n<p><strong>ORIGINAL ANSWER</strong></p>\n\n<p>I think one way to make your user's (or whoever will be reading your instructions) life easier is to clone the entire repository as-is. Then if the current <code>HEAD</code> commit is the version you want and you don't plan on making/merging changes to your repository, you can just refer your user to this new repository; probably tagging it will be a good idea.</p>\n\n<p>Otherwise, you can create a branch (and a tag) in your new repository with the specific commit by doing:</p>\n\n<pre><code>$ git checkout -b new_branch commit_sha // where commit_sha points to the version you wanted\n</code></pre>\n\n<p>Then you can refer your user to this new branch (or tag) in your repository, after they have cloned your repository. </p>\n\n<p>I don't think there is a simple way to clone the <a href=\"https://github.com/Theano/Theano\" rel=\"nofollow\">original repository</a> from an old commit without using <code>git reset</code>.</p>\n" } ]
26,107,927
3
<python><theano><pickle><mnist><dbn>
2014-09-29T20:04:09.283
31,861,263
4,092,486
How to put my dataset in a .pkl file in the exact format and data structure used in "mnist.pkl.gz"?
<p>I'm trying to use the Theano library in python to do some experiments with Deep Belief Networks. I use the code at this address: <a href="http://deeplearning.net/tutorial/code/DBN.py" rel="noreferrer">DBN full code</a>. This code uses the <a href="http://deeplearning.net/data/mnist/mnist.pkl.gz" rel="noreferrer">MNIST handwritten digit database</a>. This file is already in pickle format. It is unpickled into: </p> <ul> <li>train_set</li> <li>valid_set</li> <li>test_set</li> </ul> <p>which are further unpacked into:</p> <ul> <li>train_set_x, train_set_y = train_set </li> <li>valid_set_x, valid_set_y = valid_set</li> <li>test_set_x, test_set_y = test_set</li> </ul> <p>Can someone please give me the code that constructs this dataset, so that I can create my own? The DBN example I use needs the data in this format and I don't know how to produce it. If anyone has any ideas how to fix this, please tell me.</p> <p><strong>Here is my code:</strong></p> <pre></pre>
[ { "AnswerId": "26228673", "CreationDate": "2014-10-07T04:42:16.730", "ParentId": null, "OwnerUserId": "309653", "Title": null, "Body": "<p>The pickled file represents a tuple of 3 lists : the training set, the validation set and the testing set. (train, val, test)</p>\n\n<ul>\n<li>Each of the three lists is a pair formed from a list of images and a list of class labels for each of the images. </li>\n<li>An image is represented as numpy 1-dimensional array of 784 (28 x 28) float values between 0 and 1 (0 stands for black, 1 for white). </li>\n<li>The labels are numbers between 0 and 9 indicating which digit the image represents. </li>\n</ul>\n" }, { "AnswerId": "31861263", "CreationDate": "2015-08-06T16:39:06.013", "ParentId": null, "OwnerUserId": "3497273", "Title": null, "Body": "<p>A .pkl file is not necessary to adapt code from the Theano tutorial to your own data. You only need to mimic their data structure.</p>\n\n<h1>Quick fix</h1>\n\n<p>Look for the following lines. It's line 303 on <strong>DBN.py</strong>.</p>\n\n<pre><code>datasets = load_data(dataset)\ntrain_set_x, train_set_y = datasets[0]\n</code></pre>\n\n<p>Replace with your own <code>train_set_x</code> and <code>train_set_y</code>.</p>\n\n<pre><code>my_x = []\nmy_y = []\nwith open('path_to_file', 'r') as f:\n for line in f:\n my_list = line.split(' ') # replace with your own separator instead\n my_x.append(my_list[1:-1]) # omitting identifier in [0] and target in [-1]\n my_y.append(my_list[-1])\ntrain_set_x = theano.shared(numpy.array(my_x, dtype='float64'))\ntrain_set_y = theano.shared(numpy.array(my_y, dtype='float64'))\n</code></pre>\n\n<p>Adapt this to your input data and the code you're using.</p>\n\n<p>The same thing works for <strong>cA.py</strong>, <strong>dA.py</strong> and <strong>SdA.py</strong> but they only use <code>train_set_x</code>.</p>\n\n<p>Look for places such as <code>n_ins=28 * 28</code> where mnist image sizes are hardcoded. 
Replace <code>28 * 28</code> with your own number of columns.</p>\n\n<h1>Explanation</h1>\n\n<p>This is where you put your data in a format that Theano can work with.</p>\n\n<pre><code>train_set_x = theano.shared(numpy.array(my_x, dtype='float64'))\ntrain_set_y = theano.shared(numpy.array(my_y, dtype='float64'))\n</code></pre>\n\n<p><code>shared()</code> turns a numpy array into the Theano format designed for efficiency on GPUs.</p>\n\n<p><code>dtype='float64'</code> is expected in Theano arrays.</p>\n\n<p>More details on <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html\" rel=\"noreferrer\">basic tensor functionality</a>.</p>\n\n<h1>.pkl file</h1>\n\n<p>The .pkl file is a way to save your data structure.</p>\n\n<p>You can create your own.</p>\n\n<pre><code>import cPickle\nf = file('my_data.pkl', 'wb')\ncPickle.dump((train_set_x, train_set_y), f, protocol=cPickle.HIGHEST_PROTOCOL)\nf.close()\n</code></pre>\n\n<p>More details on <a href=\"http://deeplearning.net/software/theano/tutorial/loading_and_saving.html\" rel=\"noreferrer\">loading and saving</a>.</p>\n" }, { "AnswerId": "27614025", "CreationDate": "2014-12-23T04:19:04.490", "ParentId": null, "OwnerUserId": "2399329", "Title": null, "Body": "<p>This can help:</p>\n\n<pre><code>from PIL import Image\nfrom numpy import genfromtxt\nimport gzip, cPickle\nfrom glob import glob\nimport numpy as np\nimport pandas as pd\nData, y = dir_to_dataset(\"trainMNISTForm\\\\*.BMP\",\"trainLabels.csv\")\n# Data and labels are read\n\ntrain_set_x = Data[:2093]\nval_set_x = Data[2094:4187]\ntest_set_x = Data[4188:6281]\ntrain_set_y = y[:2093]\nval_set_y = y[2094:4187]\ntest_set_y = y[4188:6281]\n# Divided dataset into 3 parts. I had 6281 images.\n\ntrain_set = train_set_x, train_set_y\nval_set = val_set_x, val_set_y\ntest_set = test_set_x, test_set_y\n\ndataset = [train_set, val_set, test_set]\n\nf = gzip.open('file.pkl.gz','wb')\ncPickle.dump(dataset, f, protocol=2)\nf.close()\n</code></pre>\n\n<p>This is the function I used. It may change according to your file details.</p>\n\n<pre><code>def dir_to_dataset(glob_files, loc_train_labels=\"\"):\n print(\"Gonna process:\\n\\t %s\"%glob_files)\n dataset = []\n for file_count, file_name in enumerate( sorted(glob(glob_files),key=len) ):\n image = Image.open(file_name)\n img = Image.open(file_name).convert('LA') # to grayscale\n pixels = [f[0] for f in list(img.getdata())]\n dataset.append(pixels)\n if file_count % 1000 == 0:\n print(\"\\t %s files processed\"%file_count)\n # outfile = glob_files+\"out\"\n # np.save(outfile, dataset)\n if len(loc_train_labels) &gt; 0:\n df = pd.read_csv(loc_train_labels)\n return np.array(dataset), np.array(df[\"Class\"])\n else:\n return np.array(dataset)\n</code></pre>\n" } ]
26,265,552
1
<python-2.7><machine-learning><theano>
2014-10-08T20:10:39.467
null
1,430,829
Converting theano tensor types
<p>I have a computation graph built with Theano. It goes like this:</p> <pre></pre> <p>Now, this defines a mapping from a vector to a vector. However, the input is set as a matrix type so I can pass many vectors through the mapping simultaneously. I'm doing some machine learning and this makes the learning phase more efficient.</p> <p>The problem is that after the learning phase, I'd like to view the mapping as vector to vector so I can compute:</p> <pre></pre> <p>This complains that the input is not of the expected type. Is there a way I can change the input tensor type without rebuilding the whole computation graph?</p>
[ { "AnswerId": "42166488", "CreationDate": "2017-02-10T18:42:00.577", "ParentId": null, "OwnerUserId": "3583290", "Title": null, "Body": "<p>Technically, this a possible solution:</p>\n\n<pre><code>import theano\nfrom theano import tensor as T\nimport numpy as np\n\nW1 = theano.shared( np.random.rand(45,32).astype('float32'), 'W1')\nb1 = theano.shared( np.random.rand(32).astype('float32'), 'b1')\nW2 = theano.shared( np.random.rand(32,3).astype('float32'), 'W2')\nb2 = theano.shared( np.random.rand(3).astype('float32'), 'b2')\n\ninput = T.vector('input') # it will be reshaped!\nhidden = T.tanh(T.dot(input.reshape((-1, 45)), W1)+b1)\noutput = T.nnet.softmax(T.dot(hidden, W2)+b2)\n\n#Here comes the trick\njac = theano.gradient.jacobian(output.reshape((-1,)), wrt=input).reshape((-1, 45, 3))\n</code></pre>\n\n<p>In this way <code>jac.eval({input: np.random.rand(10*45)}).shape</code> will result <code>(100, 45, 3)</code>!</p>\n\n<p>The problem is that it calculates the derivative across the batch index. So in theory the first <code>1x45</code> number can effect all the <code>10x3</code> outputs (in a batch of length 10).</p>\n\n<p>For that, there are several solutions.\nYou could take the diagonal across the first two axes, but unfortunately <a href=\"https://github.com/Theano/Theano/blob/master/theano/tensor/nlinalg.py#L193\" rel=\"nofollow noreferrer\">Theano does not implement it</a>, <a href=\"https://docs.scipy.org/doc/numpy/reference/generated/numpy.diagonal.html\" rel=\"nofollow noreferrer\">numpy does</a>!</p>\n\n<p>I think it can be done with a <code>scan</code>, but this is an other matter.</p>\n" } ]
26,301,491
2
<theano>
2014-10-10T14:14:39.973
null
1,137,860
Which is better for using Theano: Linux or Windows?
<p>Which is better for using Theano: Linux or Windows? I want to try some deep learning methods with Theano. </p>
[ { "AnswerId": "30217062", "CreationDate": "2015-05-13T13:56:58.073", "ParentId": null, "OwnerUserId": "1373401", "Title": null, "Body": "<p>I use Theano on windows and once you use a package like enthought canopy or anaconda I find it easy to install and use. Although like a lot of this stuff on windows it is probably easier on Linux! :) </p>\n" }, { "AnswerId": "26306031", "CreationDate": "2014-10-10T18:45:09.220", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>It works fine on both, but I find better support for CUDA and much it is easier to install the scientific Python stack on Linux (usually Ubuntu).</p>\n" } ]
26,304,097
2
<python><python-2.7><numpy><theano>
2014-10-10T16:37:22.723
null
596,046
Using a shared variable to select from a matrix
<p>I'm trying to select multiple elements from a matrix using a changing vector, which is sent to my Theano function at each iteration. But when I try running the code I get the following error for the last line (selecting from W)</p> <pre></pre> <p>The declaration is:</p> <pre></pre> <p>and then I use my input (x) as follows:</p> <pre></pre> <p>The intention is that at run time x will be a simple vector of indices of the form</p> <pre></pre> <p>Thanks for the help</p>
[ { "AnswerId": "26304514", "CreationDate": "2014-10-10T17:03:43.333", "ParentId": null, "OwnerUserId": "736578", "Title": null, "Body": "<p>The matrix W is of numpy ndarray type, so it does not know how to deal with Theano tensors such as x. If you want to index this numpy array, use a numpy index array instead of a Theano tensor. If you want to index the Theano tensor self.W, you will have to wait for the next release of Theano, or update to the development version: the <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#indexing\" rel=\"nofollow\">theano manual</a> says that:</p>\n\n<blockquote>\n <p>Like NumPy, Theano distinguishes between basic and advanced indexing. Theano fully supports basic indexing (see NumPy’s indexing).</p>\n \n <p>Integer advanced indexing will be supported in 0.6rc4 (or the development version). We do not support boolean masks, as Theano does not have a boolean type (we use int8 for the output of logic operators).</p>\n</blockquote>\n" }, { "AnswerId": "26306013", "CreationDate": "2014-10-10T18:43:48.190", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>You may be able to use <code>inc_subtensor</code> or <code>set_subtensor</code> for this type of behavior, instead of slicing. See <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.set_subtensor\" rel=\"nofollow\">http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.set_subtensor</a> for more details.</p>\n" } ]
26,367,114
0
<theano><pymc3><dirichlet>
2014-10-14T17:50:42.463
null
2,321,303
PyMC3 Dirichlet distribution
<p>I am implementing a linear regression model in pymc3 where the unknown vector of weights is constrained to be a probability mass function, hence modelled as a Dirichlet distribution, as in the following code:</p> <pre></pre> <p>After sampling the posterior by running:</p> <pre></pre> <p>I analysed the trace of the Dirichlet variables, and found that their values do not add to one (below is an example):</p> <pre></pre> <p>I am not familiar with theano variables, and found it difficult to explore how a Dirichlet RV is expressed in pymc3... Am I doing anything wrong, or should I just normalise the values returned in the trace so that they sum to one?</p> <p><strong>Quick update</strong> It looks like the function employs a sort of gradient descent optimisation. This does not take into account the constraint resulting from the fact that a vector representing a draw from a Dirichlet distribution is a probability mass function (its values should be positive and their sum should be one). This constraint is apparently not enforced at the sampling stage of the algorithm either, and causes convergence problems as the precision of the likelihood distribution drifts towards zero.</p>
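<p>For reference, a hedged sketch of the normalisation contemplated above, done as numpy post-processing on the trace values (the sample values are illustrative; whether renormalising is statistically justified depends on why the sampler left the simplex):</p>

<pre><code>import numpy as np

# 'samples' stands in for the Dirichlet values read from the trace;
# shape (n_samples, k), each row intended to lie on the simplex.
samples = np.array([[0.24, 0.50, 0.20],
                    [0.30, 0.40, 0.35]])
samples = np.clip(samples, 1e-12, None)  # guard against non-positive draws
normalised = samples / samples.sum(axis=1, keepdims=True)
assert np.allclose(normalised.sum(axis=1), 1.0)
</code></pre>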
[]
26,372,698
0
<torch><cudnn><cuda-gdb>
2014-10-15T00:44:49.603
null
1,386,335
Script works only when run in cuda-memcheck
<p>I'm writing a convnet using torch and cudnn and having some memory issues. I tried debugging the script with cuda-memcheck, only to notice that it actually runs when fed through cuda-memcheck (albeit slower than it should be).</p> <p>It turns out that if cuda-memcheck is running in the background, a separate instantiation of the script also runs fine.</p> <p>Any idea what might be happening here?</p>
[]
26,387,625
2
<linux><git><theano>
2014-10-15T16:35:51.397
26,388,618
3,681,744
Installing Theano from Git
<p>When I want to install pylearn2 (and Theano) I use the following command on a Linux cluster:</p> <pre></pre> <p>However, I see the following error for the Theano installation:</p> <pre></pre> <p>I did not find anything on the net regarding this problem. I would appreciate it if someone could help me.</p>
[ { "AnswerId": "26388618", "CreationDate": "2014-10-15T17:35:35.183", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>You can install manually the development version of Theano. From this link:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\" rel=\"nofollow\">http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions</a></p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git --user\n</code></pre>\n\n<p>or</p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n\n<p>If for some reason that don't work, clone the repo and install it manually with <code>pip install .</code> in the repo.</p>\n" }, { "AnswerId": "32239995", "CreationDate": "2015-08-27T03:03:15.763", "ParentId": null, "OwnerUserId": "3966933", "Title": null, "Body": "<p>Theano has the following requirements make sure they are met: </p>\n\n<pre><code>1) Python &gt;= 2.6\n2) g++, python-dev\n3) NumPy &gt;= 1.6.2\n4) SciPy &gt;= 0.11\n5) A BLAS installation (with Level 3 functionality)\n</code></pre>\n\n<p>Then lets not complicate things and just try</p>\n\n<blockquote>\n <p>pip install Theano</p>\n</blockquote>\n" } ]
26,417,524
2
<lua><luajit><torch><zerobrane>
2014-10-17T03:43:14.287
27,289,841
936,332
How to set the environment variable of zerobrane studio
<p>I installed all the <a href="http://torch.ch/" rel="nofollow">torch</a> packages into my local folder torch-distro (following this <a href="https://github.com/soumith/torch-distro" rel="nofollow">tutorial</a>). I want to use ZeroBrane Studio to debug my code, but it can't find my local path of torch. How can I add my local path to the environment variables?<br> I tried to add path.lua = "${prefix}/torch-distro/install/bin/luajit" into user.lua, but it doesn't work.</p>
[ { "AnswerId": "26418570", "CreationDate": "2014-10-17T05:44:08.903", "ParentId": null, "OwnerUserId": "1442917", "Title": null, "Body": "<p>(These instructions are for the Windows version of Torch, but the steps should work for Linux/OSX versions assuming the paths are modified).</p>\n\n<p>Let's say the Torch is installed in <code>C:\\Program Files\\Torch</code>, then to get it running as the external interpreter from ZeroBrane Studio (ZBS), you need to add <code>path.lua=[[C:\\Program Files\\Torch\\bin\\torch-lua]]</code> to <code>&lt;ZBS&gt;\\cfg\\user.lua</code> configuration file.</p>\n\n<p>Now, when you execute a Lua script from ZBS (<code>Project | Run</code> or <code>F6</code>), it will run inside the Torch environment:</p>\n\n<pre><code>local torch = require 'torch'\nlocal data = torch.Tensor{\n {68, 24, 20},\n {74, 26, 21},\n {80, 32, 24}\n}\nprint(data)\n</code></pre>\n\n<p>However, there are few more steps required to get the debugging to work on Windows (these steps are likely not be needed on other systems, but I haven't tested debugging there). ZBS is using luasocket, which is compiled against <code>lua51.dll</code>, but Torch is using <code>libtorch-lua.dll</code>, so loading luasocket into your (Torch) process is likely to crash it. To make it work, you need to build a proxy DLL and put it into your <code>Torch/bin</code> folder.</p>\n\n<p>To build the proxy DLL, you will need Visual Studio C++ or mingw/gcc compiled and can follow these steps:</p>\n\n<ol>\n<li>Get <code>mkforwardlib.lua</code> (VS) or <code>mkforwardlib-gcc.lua</code> (mingw/gcc) script from <a href=\"http://lua-users.org/wiki/LuaProxyDllThree\" rel=\"nofollow\">Lua Proxy DLL3 page</a>.</li>\n<li>Run <code>lua mkforwardlib.lua libtorch-lua lua51 X86</code>; if everything goes well, this will produce <code>lua51.dll</code> file in the current folder.</li>\n<li>Copy <code>lua51.dll</code> file to <code>Torch\\bin</code> folder.</li>\n</ol>\n\n<p>Now you should be able to debug Torch scripts by using <code>Project | Start Debugging</code>.</p>\n" }, { "AnswerId": "27289841", "CreationDate": "2014-12-04T08:57:43.730", "ParentId": null, "OwnerUserId": "936332", "Title": null, "Body": "<p>Following method works on linux platform:</p>\n\n<ol>\n<li><p>Configuring the luajit interpreter by adding following code into the user.lua </p>\n\n<p><code>path.lua = \"your_path/luajit\"</code> </p></li>\n<li><p>Configuring the envrioment variable by adding following code into the /opt/zbsstudio/lualibs/mobdebug/mobdebug.lua </p>\n\n<p>package.path = package.path .. ';my_path/?/init.lua'\npackage.cpath = package.cpath .. ';my_path/?.so'</p></li>\n</ol>\n" } ]
26,464,341
1
<python><theano>
2014-10-20T11:15:42.283
26,483,381
498,892
Defining expression in theano
<p>I have a question about the theano library. Suppose I have two matrices, A and B, that respectively have shapes (n, m1) and (n, m2).</p> <p>I want to define an expression which gives a tensor of shape (n, m1, m2):</p> <pre></pre> <p>(pay attention that there is no sum, so this is not a dot product). Of course I can do this calculation in a loop, but I want theano to calculate the derivatives for me.</p> <p>Thanks in advance!</p>
[ { "AnswerId": "26483381", "CreationDate": "2014-10-21T09:31:54.503", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>Theano tensors broadcast exactly like numpy arrays. All you have to do is</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\nimport numpy as np\nrng = np.random.RandomState(42)\n\nn, m1, m2 = 10, 20, 30\nA = T.fmatrix()\nB = T.fmatrix()\n\nC = A.reshape((n, m1, 1)) * B.reshape((n, 1, m2))\n\nfunc_C = theano.function([A, B], C)\n\na = rng.randn(n, m1).astype(np.float32)\nb = rng.randn(n, m2).astype(np.float32)\n\nr1 = func_C(a, b)\nr2 = np.einsum(\"ij, ik -&gt; ijk\", a, b)\nassert np.sqrt(((r1 - r2) ** 2).sum()) &lt; 1e-5\n</code></pre>\n\n<p>Where the definition of <code>C</code> is the crucial part. This can also be done with dynamic shapes, but this answer stays within your framework.</p>\n" } ]
26,497,564
5
<python><machine-learning><neural-network><theano>
2014-10-21T22:38:18.433
26,498,509
3,681,744
Theano HiddenLayer Activation Function
<p>Is there any way to use a Rectified Linear Unit (ReLU) as the activation function of the hidden layer, instead of tanh or sigmoid, in Theano? The implementation of the hidden layer is as follows, and as far as I have searched on the internet, ReLU is not implemented inside Theano.</p> <pre></pre>
[ { "AnswerId": "26605167", "CreationDate": "2014-10-28T09:47:24.223", "ParentId": null, "OwnerUserId": "2166433", "Title": null, "Body": "<p>I wrote it like this:</p>\n\n<pre><code>lambda x: T.maximum(0,x)\n</code></pre>\n\n<p>or:</p>\n\n<pre><code>lambda x: x * (x &gt; 0)\n</code></pre>\n" }, { "AnswerId": "26535270", "CreationDate": "2014-10-23T18:52:43.120", "ParentId": null, "OwnerUserId": null, "Title": null, "Body": "<p>I think it is more precise to write it in this way:</p>\n\n<pre><code>x * (x &gt; 0.) + 0. * (x &lt; 0.)\n</code></pre>\n" }, { "AnswerId": "43770203", "CreationDate": "2017-05-03T21:20:14.160", "ParentId": null, "OwnerUserId": "5754215", "Title": null, "Body": "<p>The function is very simple in Python:</p>\n\n<pre><code>def relu(input):\n output = max(input, 0)\n return(output)\n</code></pre>\n" }, { "AnswerId": "28776557", "CreationDate": "2015-02-28T00:21:37.380", "ParentId": null, "OwnerUserId": "498892", "Title": null, "Body": "<p><strong>UPDATE:</strong> Latest version of theano has native support of ReLU:\n<a href=\"http://deeplearning.net/software/theano/library/tensor/nnet/nnet.html#theano.tensor.nnet.relu\" rel=\"noreferrer\">T.nnet.relu</a>, which should be preferred over custom solutions.</p>\n\n<p>I decided to compare the speed of solutions, since it is very important for NNs. Compared speed of function itself and it's gradient, in first case <code>switch</code> is preferred, the gradient is faster for x * (x>0). \nAll the computed gradients are correct.</p>\n\n<pre><code>def relu1(x):\n return T.switch(x&lt;0, 0, x)\n\ndef relu2(x):\n return T.maximum(x, 0)\n\ndef relu3(x):\n return x * (x &gt; 0)\n\n\nz = numpy.random.normal(size=[1000, 1000])\nfor f in [relu1, relu2, relu3]:\n x = theano.tensor.matrix()\n fun = theano.function([x], f(x))\n %timeit fun(z)\n assert numpy.all(fun(z) == numpy.where(z &gt; 0, z, 0))\n\nOutput: (time to compute ReLU function)\n&gt;100 loops, best of 3: 3.09 ms per loop\n&gt;100 loops, best of 3: 8.47 ms per loop\n&gt;100 loops, best of 3: 7.87 ms per loop\n\nfor f in [relu1, relu2, relu3]:\n x = theano.tensor.matrix()\n fun = theano.function([x], theano.grad(T.sum(f(x)), x))\n %timeit fun(z)\n assert numpy.all(fun(z) == (z &gt; 0)\n\nOutput: time to compute gradient \n&gt;100 loops, best of 3: 8.3 ms per loop\n&gt;100 loops, best of 3: 7.46 ms per loop\n&gt;100 loops, best of 3: 5.74 ms per loop\n</code></pre>\n\n<p>Finally, let's compare to how gradient should be computed (the fastest way)</p>\n\n<pre><code>x = theano.tensor.matrix()\nfun = theano.function([x], x &gt; 0)\n%timeit fun(z)\nOutput:\n&gt;100 loops, best of 3: 2.77 ms per loop\n</code></pre>\n\n<p>So theano generates inoptimal code for gradient. IMHO, switch version today should be preferred.</p>\n" }, { "AnswerId": "26498509", "CreationDate": "2014-10-22T00:19:43.973", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>relu is easy to do in Theano:</p>\n\n<pre><code>switch(x&lt;0, 0, x)\n</code></pre>\n\n<p>To use it in your case make a python function that will implement relu and pass it to activation:</p>\n\n<pre><code>def relu(x):\n return theano.tensor.switch(x&lt;0, 0, x)\nHiddenLayer(..., activation=relu)\n</code></pre>\n\n<p>Some people use this implementation: <code>x * (x &gt; 0)</code></p>\n\n<p>UPDATE: Newer Theano version have theano.tensor.nnet.relu(x) available.</p>\n" } ]
26,499,269
1
<python><theano>
2014-10-22T02:04:40.020
26,555,850
3,681,744
AttributeError in Theano
<p>I am trying to run the installation test of Theano with the following code:</p> <pre></pre> <p>However, I see the following error coming from blas.py:</p> <pre></pre> <p>I understand that AttributeError is a common error and there are questions asked about it, but for Theano the only solution I found on the internet was to add: </p> <pre></pre> <p>to blas.py. However, this does not solve the problem and I am still facing the AttributeError.</p>
[ { "AnswerId": "26555850", "CreationDate": "2014-10-24T20:33:47.003", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>This is fixed in Theano development version.</p>\n\n<p>Use one of those 2 command to update Theano depending if the installation is just for your user or for the OS:</p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git --user\npip install --upgrade --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n\n<p>Here is the doc for that update:</p>\n\n<pre><code>http://www.deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\n</code></pre>\n" } ]
26,513,839
0
<python><theano>
2014-10-22T17:49:41.763
null
2,436,416
Theano import error: cannot import name Shape
<p>I am trying to import theano as </p> <pre></pre> <p>But it's giving the following error:</p> <pre></pre> <p>I am currently on the bleeding edge version of theano, installed using the following command:</p> <pre></pre>
[]
26,574,293
1
<numpy><matrix><theano>
2014-10-26T14:53:45.757
26,588,143
419,338
Theano broadcasting different to numpy's
<p>Consider the following example of numpy broadcasting:</p> <pre></pre> <p>As expected, the vector is added to each row of the matrix and the output is:</p> <pre></pre> <p>Trying to replicate the same behaviour in the git version of theano:</p> <pre></pre> <p>I get the following error:</p> <pre></pre> <p>I understand that tensor objects have a <code>broadcastable</code> attribute, but I can't find a way to 1) set this correctly for the shared variable or 2) have it correctly inferred. How can I re-implement numpy's behaviour in theano?</p>
[ { "AnswerId": "26588143", "CreationDate": "2014-10-27T12:59:50.387", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Theano need all broadcastable dimensions to be declared in the graph before compilation. NumPy use the run time shape information.</p>\n\n<p>By default, all shared variable dimsions aren't broadcastable, as their shape could change.</p>\n\n<p>To create the shared variable with the broadcastable dimension that you need in your example:</p>\n\n<pre><code>b = theano.shared(bval, broadcastable=(True,False))\n</code></pre>\n\n<p>I'll add this information to the documentation.</p>\n" } ]
26,598,408
1
<numpy><matrix><euclidean-distance><theano>
2014-10-27T23:11:57.870
26,613,970
956,730
A Pure Pythonic Pairwise Euclidean distance of rows of a numpy ndarray
<p>I have a matrix of size (n_classes, n_features) and I want to compute the pairwise Euclidean distance of each pair of classes, so the output would be a (n_classes, n_classes) matrix where each cell has the value of euclidean_distance(class_i, class_j).</p> <p>I know that there are the scipy spatial distances (<a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/spatial.distance.html" rel="nofollow">http://docs.scipy.org/doc/scipy-0.14.0/reference/spatial.distance.html</a>) and sklearn.metrics.euclidean_distances (<a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.euclidean_distances.html" rel="nofollow">http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.euclidean_distances.html</a>), but I want to use this in Theano, so I need a pure mathematical formula rather than functions that compute the results.</p> <p>For example, I need a series of transformations like A = X * B, D = X.T - X, results = D.T; that is, something that contains just matrix mathematical operations, not functions.</p>
[ { "AnswerId": "26613970", "CreationDate": "2014-10-28T16:51:11.433", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>You can do this using numpy broadcasting as shown in <a href=\"https://gist.github.com/kastnerkyle/7a985f63c1ba92ac8bca\" rel=\"nofollow\">this gist</a>. It should be straightforward to convert this to Theano code, or just reference @eickenberg's comment above, since he's the one who showed me how to do this!</p>\n" } ]
26,615,835
1
<python><theano>
2014-10-28T18:29:38.693
26,640,860
596,046
Multiple networks in Theano
<p>I'd like to have 2 separate networks running in Theano at the same time, where the first network trains on the results of the second. I could embed both networks in the same structure, but that would be a real mess in the entire forward pass (and probably won't even work because of the shared variables etc.)</p> <p>The problem is that when I define a theano function I don't specify the model it's applied on, meaning that if I have a predict and a train function, they'll both work on the first model I define.</p> <p>Is there a way to overcome that issue?</p>
[ { "AnswerId": "26640860", "CreationDate": "2014-10-29T21:18:06.790", "ParentId": null, "OwnerUserId": "596046", "Title": null, "Body": "<p>In a rather simplified way I've managed to find a nice solution. The trick was to create one model, define its function and then create the other model and define the second function. Works like a charm</p>\n" } ]
26,621,341
1
<python><numpy><nan><divide-by-zero><theano>
2014-10-29T01:21:28.740
26,626,732
956,730
How to prevent division by zero or replace infinite values in Theano?
<p>I'm using a cost function in Theano which involves a regularizer term that requires me to compute this term:</p> <pre></pre> <p>As some values of self.squared_euclidean_distances might be zero, this produces NaN values. How can I work around this problem? I tried to use T.isinf but was not successful. One solution would be to replace the zeros in self.squared_euclidean_distances with a small number, or to replace the infinite numbers in T.sum(c / self.squared_euclidean_distances) with zero. I just don't know how to replace those values in Theano.</p>
[ { "AnswerId": "26626732", "CreationDate": "2014-10-29T09:30:06.837", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>Take a look at <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#condition\" rel=\"nofollow\"><code>T.switch</code></a>. You could do for example</p>\n\n<pre><code>T.switch(T.eq(self.squared_euclidean_distances, 0), 0, c / self.squared_euclidean_distances)\n</code></pre>\n\n<p>(Or, upstream, you make sure that you never compare a vector with itself using squared euclidean distance.)</p>\n" } ]
26,669,099
1
<python><theano>
2014-10-31T07:00:28.877
null
888,051
change a subset of elements' values in theano matrix
<p>I want to create a <strong>mask</strong> matrix dynamically, for example, in </p> <pre></pre> <p>Here is what I tried in ,</p> <pre></pre> <p>the above code failed with error message: . Any other ways?</p>
[ { "AnswerId": "26672284", "CreationDate": "2014-10-31T10:19:19.197", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>This works for me on <code>0.6.0</code>. I used your code and created a function from it to check the output. Try copying and pasting this:</p>\n\n<pre><code>import theano\nfrom theano import tensor\n\nmask = tensor.zeros((5,5))\nrow = tensor.ivector('row')\ncol = tensor.ivector('col')\nmask = tensor.set_subtensor(mask[row, col], 1)\n\nf = theano.function([row, col], mask)\n\nprint f(np.array([0, 1, 2]).astype(np.int32), np.array([1, 2, 3]).astype(np.int32))\n</code></pre>\n\n<p>This yields</p>\n\n<pre><code>array([[ 0., 1., 0., 0., 0.],\n [ 0., 0., 1., 0., 0.],\n [ 0., 0., 0., 1., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 0., 0., 0., 0.]])\n</code></pre>\n" } ]
26,674,433
2
<python><windows><cuda><canopy><theano>
2014-10-31T12:12:06.653
null
2,399,329
Error installing Theano for GPU 750m in Canopy on Windows 8.1
<p>I am using Windows 8.1 64 bit. I have installed Canopy with an academic license. My laptop has an Nvidia GeForce GT750m.</p> <p>I have installed Theano from Theano's git repository. I have Visual Studio 2013 and the 64-bit CUDA Toolkit installed. The CUDA samples are working fine.</p> <p>My .theanorc file:</p> <pre></pre> <p>I get the following error while doing </p> <pre></pre> <p>The Visual Studio command prompt gives the error:</p> <pre></pre> <p> in the last line</p> <p>I have followed <a href="https://stackoverflow.com/questions/25729969/installing-theano-on-windows-8-with-gpu-enabled">Installing theano on Windows 8 with GPU enabled</a></p> <p>But <strong>Python distutils fix</strong> cannot be applied as I have used Canopy. Please help.</p>
[ { "AnswerId": "26684445", "CreationDate": "2014-10-31T22:39:51.973", "ParentId": null, "OwnerUserId": "1988991", "Title": null, "Body": "<p>Windows Canopy, like all standard CPython 2.7 distributions for Windows, is compiled with Visual C++ 2008, and that is the compiler that you should use for compiling all non-trivial C extensions for it.</p>\n\n<p>The following article includes some links for obtaining this compiler:\n<a href=\"https://support.enthought.com/entries/26864394-Windows-Unable-to-find-vcvarsall-bat-cython-f2py-other-c-extensions-\" rel=\"nofollow\">https://support.enthought.com/entries/26864394-Windows-Unable-to-find-vcvarsall-bat-cython-f2py-other-c-extensions-</a></p>\n" }, { "AnswerId": "26679798", "CreationDate": "2014-10-31T16:58:01.570", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Theano do not support Visual Studio compiler. It is needed for cuda, but not for Theano CPU code. This need g++ compiler.</p>\n\n<p>Is the canopy package for the compiler correctly installed? What did you do to try to compile with microsoft compiler?</p>\n" } ]
26,703,136
1
<install><compiler-flags><theano>
2014-11-02T19:02:04.990
26,723,370
1,452,257
How to set Theano flags in the THEANORC file? And where?
<p>I'm trying to install Theano, but it is more complicated than I thought. I've used Enthought Canopy and the guide on <a href="http://deeplearning.net/software/theano/install.html#install" rel="nofollow">http://deeplearning.net/software/theano/install.html#install</a>. In order to complete the installation I want to set the flags mentioned in the installation guide:</p> <blockquote> <p>(Needed only for Theano 0.6rc3 or earlier) Set the Theano flags blas.ldflags=-LC:\Users\&lt;user&gt;\AppData\Local\Enthought\Canopy\App\appdata\canopy-1.0.0.1160.win-x86_64\Scripts -lmk2_core -lmk2_intel_thread -lmk2_rt.</p> </blockquote> <p>According to <a href="http://deeplearning.net/software/theano/library/config.html" rel="nofollow">http://deeplearning.net/software/theano/library/config.html</a> I should do that by creating a .theanorc file at $HOME/.theanorc (or $HOME/.theanorc.txt). However, I don't know how to translate $HOME into a normal Windows path - what is $HOME if I have a default installation using Enthought Canopy?</p>
[ { "AnswerId": "26723370", "CreationDate": "2014-11-03T21:17:46.967", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>To find your home on windows, you can click the \"start\" button, then select your username on the right column. This will open a window in the right folder.</p>\n\n<p>On Windows 7, this corredpond to this path: C:\\Users\\YOUR_USER_NAME</p>\n" } ]
26,706,686
1
<numpy><theano>
2014-11-03T01:51:39.993
26,716,174
4,208,837
Theano logistic SGD with per-dimension learning rates
<p>Here's a very simple, beginner Theano question.</p> <p>I'm trying to modify the Logistic SGD code provided with the <a href="http://www.deeplearning.net/tutorial/logreg.html#logreg" rel="nofollow">Deep Learning Tutorials</a> to switch from a single learning rate to a dimension-specific learning rate. For instance, if I have 3 input dimensions, I would like to use 3 different learning rates, one per dimension.</p> <p>The original relevant code is:</p> <pre></pre> <p>In numpy, it would simply be a matter of replacing the scalar learning rate with an array of learning rates and performing element-wise multiplication with the gradients g_W and g_b. In Theano, doing so yields an error:</p> <pre></pre> <p>Clearly there is something about Theano that I'm missing. Can anyone enlighten me?</p>
[ { "AnswerId": "26716174", "CreationDate": "2014-11-03T14:25:25.087", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>Indeed, you need to replace the learning rate scalar by an array. You can try e.g. the following:</p>\n\n<pre><code>learning_rate = theano.shared(np.array([0.1, 0.2, 0.05]))\n</code></pre>\n\n<p>It may need to be transposed depending on the shape of the gradient, but essentially you have stated the correct way to go and it should work using a shared variable.</p>\n" } ]
26,718,812
1
<python><theano>
2014-11-03T16:43:48.477
26,789,849
2,663,583
Python - Theano scan() function
<p>I cannot fully understand the behaviour of theano.scan().</p> <p>Here's an example:</p> <pre></pre> <p>The above snippet prints the following sequence, which is perfectly reasonable:</p> <pre></pre> <p>However, if I switch the tap index from -2 to -1, i.e.</p> <pre></pre> <p>the result becomes:</p> <pre></pre> <p>instead of what would seem reasonable to me (just take the last value of the vector and add 2):</p> <pre></pre> <p>Any help would be much appreciated.</p> <p>Thanks!</p>
[ { "AnswerId": "26789849", "CreationDate": "2014-11-06T21:37:04.907", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>When you use taps=[-1], scan suppose that the information in the output info is used as is. That mean the addf function will be called with a vector and the non_sequence as inputs. If you convert x0 to a scalar, it will work as you expect:</p>\n\n<pre><code>import numpy as np\nimport theano\nimport theano.tensor as T\n\n\ndef addf(a1,a2):\n print a1.type\n print a2.type\n return a1+a2\n\ni = T.iscalar('i')\nx0 = T.iscalar('x0') \nstep= T.iscalar('step')\n\nresults, updates = theano.scan(fn=addf,\n outputs_info=[{'initial':x0, 'taps':[-1]}],\n non_sequences=step,\n n_steps=i)\n\nf=theano.function([x0,i,step],results)\n\nprint f(1,10,2)\n</code></pre>\n\n<p>This give this output:</p>\n\n<pre><code>TensorType(int32, scalar)\nTensorType(int32, scalar)\n[ 3 5 7 9 11 13 15 17 19 21]\n</code></pre>\n\n<p>In your case as it do addf(vector,scalar), it broadcast the elemwise value.</p>\n\n<p>Explained in another way, if taps is [-1], x0 will be passed \"as is\" to the inner function. If taps contain anything else, what is passed to the inner function will have 1 dimension less then x0, as x0 must provide many initial steps value (-2 and -1).</p>\n" } ]
26,725,593
3
<python><oop><decorator><python-decorators><theano>
2014-11-04T00:10:50.797
26,785,871
851,699
Using Python decorators to add a method to a method
<p>I want to be able to call a method according to some standard format:</p> <pre></pre> <p>, where outputs is a tuple of arrays, and each input is an array.</p> <p>However, in most instances, I only return one array, and don't want to be forced to return a tuple of length 1 just for the sake of the standard format. (My actual formatting problem is more complicated but let's stick with this explanation for now.)</p> <p>I want to be able to define a class like:</p> <pre></pre> <p>And then be able to call it as follows:</p> <pre></pre> <p>The question is: how do I define the decorator to allow this behaviour?</p> <p>I tried:</p> <pre></pre> <p>, but it fails on the line with the second assert with:</p> <pre></pre> <p>Because the wrapped function requires "self" as an argument, and the object has not even been created yet at the time the decorator modifies the function.</p> <p>So Stack, how can I do this?</p> <hr> <p>Notes:</p> <p>1) I know I could do this with base-classes and inheritance instead, but that becomes a problem when you have more than one method in the class that you want to decorate this way.</p> <p>2) The actual problem comes from using theano - the standard format is , but most functions don't return any updates, so you want to be able to define those functions in a natural way, but still have the option of calling them according to this standard interface.</p>
[ { "AnswerId": "26729151", "CreationDate": "2014-11-04T06:41:03.870", "ParentId": null, "OwnerUserId": "296974", "Title": null, "Body": "<p>That's indeed a problem, because the way the \"bound\" method is retrieved from the function doesn't consider this way.</p>\n\n<p>I see two ways:</p>\n\n<ol>\n<li><p>You could just wrap the function:</p>\n\n<pre><code>def single_return_format(fcn):\n # TODO Do some functools.wraps here...\n return lambda *args, **kwargs: (fcn(*args, **kwargs), )\n</code></pre>\n\n<p>No fooling around with <code>.standard_format</code>, but a mere replacement of the function. So the function can define itself as returning one value, but can only be called as returning the tuple.</p></li>\n<li><p>If this is not what you want, you can define a class for decorating methods which overrides <code>__get__</code> and does the wrapping in a \"live fashion\". Of course, it can as well redefine <code>__call__</code> so that it is usable for (standalone, non-method) functions as well.</p></li>\n</ol>\n" }, { "AnswerId": "26744995", "CreationDate": "2014-11-04T21:04:27.653", "ParentId": null, "OwnerUserId": "529630", "Title": null, "Body": "<p>To get exactly what you want you'd have to write a non-data descriptor and a set of wrapper classes for your functions. The reason for this is that the process of getting functions from objects as methods is highly optimised and it's not possible to hijack this mechanism. Instead you have to write your own classes that simulate this mechanism -- which will slow down your code if you are making lots of small method calls. </p>\n\n<p>The very best way I can think to get the desired functionality is not to use any of the methods that you describe, but rather write a wrapper function that you use when needed to call a normal function in the standard format. eg.</p>\n\n<pre><code>def vectorise(method, *args, **kwargs):\n return tuple(method(arg, **kwargs) for arg in args)\n\nobj = _SomeClass()\n\nresult = vectorise(obj.add_one, 1, 2, 3)\n</code></pre>\n\n<p>Indeed, this is how <code>numpy</code> takes functions that operate on one argument and turns them into a function that works on arrays.</p>\n\n<pre><code>import numpy\n\ndef add_one(x):\n return x + 1\n\narr = numpy.vectorize(add_one)([1, 2, 3])\n</code></pre>\n\n<p>If you really, really want to use non-data descriptors then following will work. Be warned these method calls are considerably slower. On my computer a normal method call takes 188 nanoseconds versus 1.53 microseconds for a \"simple\" method call -- a ten-fold difference. And <code>vectorise</code> call takes half the time a <code>standard_form</code> call does. The vast majority of that time is the lookup of the methods. 
The actual method calls are quite fast.</p>\n\n<pre><code>class simple_form:\n \"\"\"Allows a simple function to be called in a standard way.\"\"\"\n\n def __init__(self, func):\n self.func = func \n\n def __get__(self, instance, owner):\n if instance is None:\n return self.func\n return SimpleFormMethod(self.func, instance)\n\n\nclass MethodBase:\n \"\"\"Provides support for getting the string representation of methods.\"\"\"\n\n def __init__(self, func, instance):\n self.func = func\n self.instance = instance\n\n def _format(self):\n return \"&lt;bound {method_class} {obj_class}.{func} of {obj}&gt;\".format(\n method_class=self.__class__.__name__,\n obj_class=self.instance.__class__.__name__,\n func=self.func.__name__,\n obj=self.instance)\n\n def __str__(self):\n return self._format()\n\n def __repr__(self):\n return self._format()\n\n\nclass SimpleFormMethod(MethodBase):\n\n def __call__(self, *args, **kwargs):\n return self.func(self.instance, *args, **kwargs)\n\n @property\n def standard_form(self):\n return StandardFormMethod(self.func, self.instance)\n\n\nclass StandardFormMethod(MethodBase):\n\n def __call__(self, *args, **kwargs):\n return tuple(self.func(self.instance, arg, **kwargs) for arg in args)\n\n\nclass Number(object):\n\n def __init__(self, value):\n self.value = value\n\n def add_to(self, *values):\n return tuple(val + self.value for val in values)\n\n @simple_form\n def divide_into(self, value):\n return value / self.value\n\n\nnum = Number(2)\nprint(\"normal method access:\", num.add_to, sep=\"\\n\")\nprint(\"simple form method access:\", num.divide_into, sep=\"\\n\")\nprint(\"standard form method access:\", num.divide_into.standard_form, sep=\"\\n\")\nprint(\"access to underlying function:\", Number.divide_into, sep=\"\\n\")\nprint(\"simple example usage:\", num.divide_into(3))\nprint(\"standard example usage:\", num.divide_into.standard_form(*range(3)))\n</code></pre>\n" }, { "AnswerId": "26785871", "CreationDate": "2014-11-06T17:39:38.720", "ParentId": null, "OwnerUserId": "851699", "Title": null, "Body": "<p>Dunes gave the correct answer. I've stripped it down to bare bones so that it solves the problem in the question. The stripped-down code is here:</p>\n\n<pre><code>class single_return_format(object):\n\n def __init__(self, func):\n self._func = func\n\n def __get__(self, instance, owner):\n return SimpleFormMethod(instance, self._func)\n\n\nclass SimpleFormMethod(object):\n\n def __init__(self, instance, func):\n self._instance = instance\n self._func = func\n\n def __call__(self, *args, **kwargs):\n return self._func(self._instance, *args, **kwargs)\n\n @property\n def standard_format(self):\n return lambda *args, **kwargs: (self._func(self._instance, *args, **kwargs), )\n\n\nclass _SomeClass(object):\n\n def __init__(self):\n self._amount_to_add = 1\n\n @single_return_format\n def add_one(self, x):\n return x+self._amount_to_add\n\n\nobj = _SomeClass()\nassert obj.add_one(3) == 4\nassert obj.add_one.standard_format(3) == (4, )\n</code></pre>\n" } ]
26,799,771
1
<python><machine-learning><theano>
2014-11-07T11:00:17.530
26,801,592
2,963,058
How to print a matrix or vector returned by a function to the screen in Theano?
<p>I'm new to Theano. When I tried to run this code, I got the response below:</p> <pre></pre> <p>Initial model: dot.0</p> <p>How can I actually see the value of the returned result? Thanks.</p>
[ { "AnswerId": "26801592", "CreationDate": "2014-11-07T12:48:12.007", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>You need to evaluate the expression, which is best done by turning it into a function. Remove line 18 from your code and then add</p>\n\n<pre><code>dot_product = T.dot(x, w)\nf = theano.function([x], dot_product)\n\nprint f(rng.randn(feats))\n</code></pre>\n" } ]
26,822,010
1
<macos><object><malformed><torch>
2014-11-08T20:57:37.043
null
4,231,095
install_name_tool malformed object (load command 23 cmdsize is zero) - Mac OS X Yosemite
<p>Installation of cunn for torch on Yosemite fails with malformed object error. </p> <pre></pre> <p>Searching online shows that this is related to library corruption or an update to install_name_tool. I replaced the install_name_tool from XCode(6.1) into /usr/bin but I still get the same error. Below are some diagnostics</p> <pre></pre> <p>I need this to work so that I can use CUDA with torch, I have already spent hours on it, please help.</p>
[ { "AnswerId": "26837009", "CreationDate": "2014-11-10T04:55:59.550", "ParentId": null, "OwnerUserId": "4234292", "Title": null, "Body": "<p>I had the same problem and I solved it by building <code>libcunn.so</code> locally.\nRun the following commands:</p>\n\n<pre><code>git clone https://github.com/torch/cunn.git\nls cunn\ncmake -E make_directory build\ncd build/\ncmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=\"/usr/local/bin/..\" -DCMAKE_INSTALL_PREFIX=\"/usr/local/lib/luarocks/rocks/cunn/scm-1\" &amp;&amp; make\n</code></pre>\n\n<p>Then you should have </p>\n\n<pre><code>Linking CXX shared module libcunn.so\n</code></pre>\n\n<p>Then just copy the library to the target folder:</p>\n\n<pre><code>cp libcunn.so /usr/local/lib/lua/5.1/libcunn.so\n</code></pre>\n" } ]
26,828,318
1
<python><theano><pylearn>
2014-11-09T12:45:36.867
null
648,896
How can I transform novel data with the trained Autoencoder in Pylearn2?
<p>I trained and saved a two-layer stacked CAE model with Pylearn2. I would like to load these models and transform a novel dataset. How should I do it?</p> <p>This is my model:</p> <pre></pre> <p>I also tried something like this, but it does not work.</p> <pre></pre> <p>This is what I tried lately, but I am not sure about its correctness:</p> <pre></pre>
[ { "AnswerId": "26949633", "CreationDate": "2014-11-15T19:10:43.200", "ParentId": null, "OwnerUserId": "2805751", "Title": null, "Body": "<p>Check the <code>pylearn2.scripts.autoencoder</code> directory for a configuration file that shows how to use stacked, pretrained autoencoders. You load the pretrained models as a transformer object on the dataset on it's way into the next stage of the model. </p>\n\n<p>If you don't want to use yaml files, you should be able to string the model functions together using the correct methods (untested as I'm writing off the top of my head):</p>\n\n<blockquote>\n <blockquote>\n <p>inputs = T.vector()</p>\n \n <p>enc = l1.encode(inputs) </p>\n \n <p>output = l2.encode(enc)</p>\n \n <p>f= theano.function([inputs], output)</p>\n \n <p>real_transformation = f(real_data)</p>\n </blockquote>\n</blockquote>\n\n<p>And to go back, you can do the same with the <code>ae_layer.decode()</code> method. </p>\n\n<p>If these are part of the layers of and <code>MLP</code>, you can call the <code>.fprop()</code> method on the <code>MLP</code> to do an upward pass through all the layers.</p>\n" } ]
26,879,157
2
<theano>
2014-11-12T04:13:59.933
26,900,149
1,245,262
Purpose of 'givens' variables in Theano.function
<p>I was reading the code for the logistic function given at <a href="http://deeplearning.net/tutorial/logreg.html" rel="nofollow noreferrer">http://deeplearning.net/tutorial/logreg.html</a>. I am confused about the difference between the <code>givens</code> &amp; <code>inputs</code> variables of a function. The functions that compute mistakes made by a model on a minibatch are:</p> <pre></pre> <p>Why couldn't/wouldn't one just make x &amp; y shared input variables and let them be defined when an actual model instance is created?</p>
[ { "AnswerId": "26900149", "CreationDate": "2014-11-13T02:04:57.000", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The <code>givens</code> parameter allows you to separate the description of the model and the exact definition of the inputs variable. This is a consequence of what the given parameter do: modify the graph to compile before compiling it. In other words, we substitute in the graph, the key in givens with the associated value.</p>\n\n<p>In the deep learning tutorial, we use a normal Theano variable to build the model. We use <code>givens</code> to speed up the GPU. Here, if we keep the dataset on the CPU, we will transfer a mini-batch to the GPU at each function call. As we do many iterations on the dataset, we end up transferring the dataset multiple time to the GPU. As the dataset is small enough to fit on the GPU, we put it in a shared variable to have it transferred to the GPU if one is available (or stay on the Central Processing Unit if the Graphics Processing Unit is disabled). Then when compiling the function, we swap the input with a slice corresponding to the mini-batch of the dataset to use. Then the input of the Theano function is just the index of that mini-batch we want to use.</p>\n" }, { "AnswerId": "26890315", "CreationDate": "2014-11-12T15:17:41.747", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>I don't think anything is stopping you from doing it that way (I didn't try the <code>updates=</code> dictionary using an input variable directly, but why not). Remark however that for pushing data to a GPU in a useful manner, you will need it to be in a shared variable (from which <code>x</code> and <code>y</code> are taken in this example).</p>\n" } ]
26,907,913
1
<python><matrix><sparse-matrix><theano><autoencoder>
2014-11-13T11:30:39.073
null
1,780,628
How can I speed up an autoencoder to use on text data written in python's theano package?
<p>I'm new to theano and I'm trying to adapt the autoencoder script <a href="http://deeplearning.net/tutorial/code/dA.py" rel="nofollow">here</a> to work on text data. This code uses the MNIST dataset as training data. This data is in the form of a numpy 2d array.</p> <p>My data is a csr sparse matrix of about 100,000 instances with about 50,000 features. The matrix is the result of using sklearn's tfidfvectorizer to fit and transform the text data. As I'm using sparse matrices I modify the code to use the theano.sparse package to represent my input. My training set is the symbolic variable: </p> <pre></pre> <p>However, theano.sparse matrices cannot perform all of the operations used in the original script (there is a list of sparse operations <a href="http://www.deeplearning.net/software/theano/library/sparse/#list-of-implemented-operations" rel="nofollow">here</a>). The code uses dot and sum from the tensor methods on the input. I have changed the dot to sparse.dot, but I can't find out what to replace the sum with, so I am converting the training batches to dense matrices and using the original tensor methods, as shown in this cost function:</p> <pre></pre> <p>The get_corrupted_input and get_reconstructed_input methods remain as they are in the link above. My question is: is there a faster way to do this? </p> <p>Converting the matrices to dense is making the training run very slow. Currently it takes 20.67m to do one training epoch with a batch size of 20 training instances.</p> <p>Any help or tips you could give would be greatly appreciated!</p>
[ { "AnswerId": "26949508", "CreationDate": "2014-11-15T18:57:44.553", "ParentId": null, "OwnerUserId": "2805751", "Title": null, "Body": "<p>In the most recent master branch of theano.sparse there is an sp_sum method listed.</p>\n\n<p><a href=\"http://github.com/Theano/Theano/blob/master/theano/sparse/basic.py\" rel=\"nofollow\">(see here)</a> </p>\n\n<p>If you're not using the bleeding edge version I'd install that and see if calling it will work and if doing so speeds things up: </p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n\n<p>(And if it does, noting it here would be nice, it's not always clear that the sparse functionality is much faster than using dense calculations all the way through, especially on the gpu.) </p>\n" } ]
26,980,666
0
<python><numpy><neural-network><enthought><theano>
2014-11-17T19:45:21.870
null
2,688,733
Theano: Grad error
<p>I have a complicated architecture and I am calculating gradients with respect to different parameters, but I am getting the following strange error. What is it indicative of? I have no clue why this error pops up.</p> <pre></pre> <p>The sequence of steps which leads to this error is: </p> <pre></pre>
[]
26,992,692
1
<neural-network><enthought><theano>
2014-11-18T11:01:18.853
27,046,695
2,688,733
Theano : Extract particular rows in matrix
<p>I have a matrix W and two vectors y1 and y2. I want to extract rows from W. The rows I am interested in are in the range [y1:y2]. What is the best way of doing this in Theano? <strong>Can this be done without using the theano.map or tensor.switch methods</strong>? The resulting matrix will be used somewhere in a gradient computation. For example:</p> <pre></pre>
[ { "AnswerId": "27046695", "CreationDate": "2014-11-20T18:18:04.917", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>So tell otherwise what you want is:</p>\n\n<pre><code>out=[]\nfor i,j in zip(y1,y2):\n out.append(W[i:j])\nnumpy.asarray(out)\n</code></pre>\n\n<p>Is the lenght of y1 and y2 constant? If so, you can unroll the loop like this:</p>\n\n<pre><code>out=[]\nfor i in range(LEN):\n out.append(W[y1[i]:y2[j]])\ntheano.stack(*out)\n</code></pre>\n\n<p>Theano support all the power of NumPy advanced indexing. If you can find how to do it with NumPy without the stack, you can do it the same way in Theano.</p>\n" } ]
27,015,150
3
<lua><luajit><torch>
2014-11-19T10:58:59.830
27,019,667
2,783,487
How to get past 1gb memory limit of 64 bit LuaJIT on Linux?
<p>The overview is I am prototyping code to understand my problem space, and I am running into 'PANIC: unprotected error in call to Lua API (not enough memory)' errors. I am looking for ways to get around this limit.</p> <p>The environment bottom line is Torch, a scientific computing framework that runs on LuaJIT, and LuaJIT runs on Lua. I need Torch because I eventually want to hammer on my problem with neural nets on a GPU, but to get there I need a good representation of the problem to feed to the nets. I am (stuck) on Centos Linux, and I suspect that trying to rebuild all the pieces from source in 32bit mode (this is reported to extend the LuaJIT memory limit to 4gb) will be a nightmare if it works at all for all of the libraries.</p> <p>The problem space itself is probably not particularly relevant, but in overview I have datafiles of points that I calculate distances between and then bin (i.e. make histograms of) these distances to try and work out the most useful ranges. Conveniently I can create complicated Lua tables with various sets of bins and torch.save() the mess of counts out, then pick it up later and inspect with different normalisations etc. -- so after one month of playing I am finding this to be really easy and powerful. </p> <p>I can make it work looking at up to 3 distances with 15 bins each (15x15x15 plus overhead), but this only by adding explicit garbagecollection() calls and using fork()/wait() for each datafile so that the outer loop will keep running if one datafile (of several thousand) still blows the memory limit and crashes the child. This gets extra painful as each successful child process now has to read, modify and write the current set of bin counts -- and my largest files for this are currently 36mb. I would like to go larger (more bins), and would really prefer to just hold the counts in the 15 gigs of RAM I can't seem to access.</p> <p>So, here are some paths I have thought of; please do comment if you can confirm/deny that any of them will/won't get me outside of the 1gb boundary, or will just improve my efficiency within it. Please do comment if you can suggest another approach that I have not thought of.</p> <ul> <li><p>am I missing a way to fire off a Lua process that I can read an arbitrary table back in from? No doubt I can break my problem into smaller pieces, but parsing a return table from stdio (as from a system call to another Lua script) seems error prone, and writing/reading small intermediate files will be a lot of disk i/o.</p></li> <li><p>am I missing a stash-and-access-table-in-high-memory module ? This seems like what I really want, but not found it yet</p></li> <li><p>can FFI C data structures be put outside the 1gb? Doesn't seem like that would be the case but certainly I lack a full understanding of what is causing the limit in the first place. I suspect that this will just get me an efficiency improvement over generic Lua tables for the few pieces that have moved beyond prototyping? (unless I do a bunch of coding for each change)</p></li> <li><p>Surely I can get out by writing an extension in C (Torch appears to support nets that should go outside of the limit), but my brief investigation there turns up references to 'lightuserdata' pointers -- does this mean that a more normal extension won't get outside 1gb either? 
This also seems like it has the heavy development cost for what should be a prototyping exercise.</p></li> </ul> <p>I know C well so going the FFI or extension route doesn't bother me - but I know from experience that encapsulating algorithms in this way can be both really elegant and really painful with two places to hide bugs. Working through data structures containing tables within tables on the stack doesn't seem great either. Before I make this effort I would like to be certain that the end result really will solve my problem.</p> <p>Thanks for reading the long post.</p>
[ { "AnswerId": "41748149", "CreationDate": "2017-01-19T17:33:35.457", "ParentId": null, "OwnerUserId": "4748557", "Title": null, "Body": "<p>You can use the <a href=\"https://github.com/torch/tds\" rel=\"nofollow noreferrer\" title=\"torch tds\">torch tds</a> module. From the README:</p>\n\n<blockquote>\n <p>Data structures which do not rely on Lua memory allocator, nor being limited by Lua garbage collector.</p>\n \n <p>Only C types can be stored: supported types are currently number, strings, the data structures themselves (see nesting: e.g. it is possible to have a Hash containing a Hash or a Vec), and torch tensors and storages. All data structures can store heterogeneous objects, and support torch serialization.</p>\n</blockquote>\n" }, { "AnswerId": "27019667", "CreationDate": "2014-11-19T14:44:31.987", "ParentId": null, "OwnerUserId": "646619", "Title": null, "Body": "<p>Only object allocated by LuaJIT itself are limited to the first 2GB of memory. This means that tables, strings, full userdata (i.e. not lightuserdata), and FFI objects allocated with <code>ffi.new</code> will count towards the limit, but objects allocated with <code>malloc</code>, <code>mmap</code>, etc. are not subjected to this limit (regardless if called by a C module or the FFI).</p>\n\n<p>An example for allocating a structure with <code>malloc</code>:</p>\n\n<pre><code>ffi.cdef[[\n typedef struct { int bar; } foo;\n void* malloc(size_t);\n void free(void*);\n]]\n\nlocal foo_t = ffi.typeof(\"foo\")\nlocal foo_p = ffi.typeof(\"foo*\")\n\nfunction alloc_foo()\n local obj = ffi.C.malloc(ffi.sizeof(foo_t))\n return ffi.cast(foo_p, obj)\nend\n\nfunction free_foo(obj)\n ffi.C.free(obj)\nend\n</code></pre>\n\n<p>The new GC to be implemented in LuaJIT 3.0 IIRC will not have this limit, but I haven't heard any news on it's development recently.</p>\n\n<p>Source: <a href=\"http://lua-users.org/lists/lua-l/2012-04/msg00729.html\">http://lua-users.org/lists/lua-l/2012-04/msg00729.html</a></p>\n" }, { "AnswerId": "27060812", "CreationDate": "2014-11-21T11:54:34.160", "ParentId": null, "OwnerUserId": "2783487", "Title": null, "Body": "<p>Here is some follow-up information for those who find this question later:</p>\n\n<p>The key information is as posted by Colonel Thirty Two, that C module extensions and FFI code can easily get outside of the limit. (and the referenced lua list post reminds that plain Lua tables that go outside the limit will be very slow to garbage collect)</p>\n\n<p>It took me some time to pull the pieces together to both access and save/load my objects, so here it is in one place:</p>\n\n<p>I used lds at <a href=\"https://github.com/neomantra/lds\" rel=\"noreferrer\">https://github.com/neomantra/lds</a> as a starting point, in particular the 1-D Array code.</p>\n\n<p>This broke using torch.save(), as it doesn't know how to write the new objects. 
For each object I added the code below (using Array as the example):</p>\n\n<pre><code>function Array:load(inp)\n for i=1,#inp do\n self._data[i-1] = tonumber(inp[i])\n end\n return self\nend\n\nfunction Array:serialize ()\n local siz = tonumber(self._size)\n io.write(' lds.ArrayT( ffi.typeof(\"double\"), lds.MallocAllocator )( ', siz , \"):load({\")\n for i=0,siz-1 do\n io.write(string.format(\"%a,\", self._data[i]))\n end\n io.write(\"})\")\nend\n</code></pre>\n\n<p>Note that my application specifically uses doubles and malloc(), so a better implementation would store and use these in self rather than hard coding above.</p>\n\n<p>Then as discussed in PiL and elsewhere, I needed a serializer that would handle the object:</p>\n\n<pre><code>function serialize (o)\n if type(o) == \"number\" then\n io.write(o)\n elseif type(o) == \"string\" then\n io.write(string.format(\"%q\", o))\n elseif type(o) == \"table\" then\n io.write(\"{\\n\")\n for k,v in pairs(o) do\n io.write(\" [\"); serialize(k); io.write(\"] = \")\n serialize(v)\n io.write(\",\\n\")\n end\n io.write(\"}\\n\")\n elseif o.serialize then\n o:serialize()\n else\n error(\"cannot serialize a \" .. type(o))\n end\nend\n</code></pre>\n\n<p>and this needs to be wrapped with:</p>\n\n<pre><code>io.write('do local _ = ')\nserialize( myWeirdTable )\nio.write('; return _; end')\n</code></pre>\n\n<p>and then the output from that can be loaded back in with</p>\n\n<pre><code>local myWeirdTableReloaded = dofile('myWeirdTableSaveFile')\n</code></pre>\n\n<p>See PiL (Programming in Lua book) for dofile()</p>\n\n<p>Hope that helps someone!</p>\n" } ]
27,047,857
1
<python><numpy><scipy><sparse-matrix><theano>
2014-11-20T19:26:02.573
27,107,458
919,431
Theano gradient of sparse matrix multiplication
<p>I'm trying to implement an autoencoder with sparse inputs in Theano.</p> <p>I got the sparse autoencoder to work with a squared error cost function. But if I want to apply a cross-entropy error, which contains matrix multiplications, I get the following error:</p> <pre></pre> <p>I uploaded an example notebook illustrating the problem at <a href="http://nbviewer.ipython.org/urls/gist.githubusercontent.com/peterroelants/4946cdbf189c5e75f2b7/raw/2ee7d3e533a4a6ac2707a2ffa310b81a86e70afd/gistfile1.json" rel="nofollow">http://nbviewer.ipython.org/urls/gist.githubusercontent.com/peterroelants/4946cdbf189c5e75f2b7/raw/2ee7d3e533a4a6ac2707a2ffa310b81a86e70afd/gistfile1.json</a>.</p> <p>I distilled the problem down to the matrix multiplication. This works in the dense case [see cell 2], but gives an error in the sparse case [see cell 3]. Note that changing this cost function in the sparse case [cell 3] to the squared error will give a working result.</p> <p>Can anyone point out what I'm doing wrong? And show me how to get a sparse-input autoencoder with cross-entropy error to work in Theano?</p>
[ { "AnswerId": "27107458", "CreationDate": "2014-11-24T14:45:01.640", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>You can not use T.* function on sparse variable. In this case, you can use:</p>\n\n<pre><code>theano.sparse.sp_sum((x * T.log(z))\n</code></pre>\n\n<p>\\edit This diff fix in Theano fix this crash:</p>\n\n<pre><code>diff --git a/theano/sparse/basic.py b/theano/sparse/basic.py\nindex 4620c5a..a352b9a 100644\n--- a/theano/sparse/basic.py\n+++ b/theano/sparse/basic.py\n@@ -2244,7 +2244,7 @@ class MulSD(gof.op.Op):\n def grad(self, (x, y), (gz,)):\n assert _is_sparse_variable(x) and _is_dense_variable(y)\n assert _is_sparse_variable(gz)\n- return y * gz, x * gz\n+ return y * gz, dense_from_sparse(x * gz)\n\n def infer_shape(self, node, shapes):\n return [shapes[0]]\n</code></pre>\n\n<p>I'll try to get the fix merged in Theano this week.</p>\n" } ]
27,064,617
1
<python><neural-network><theano>
2014-11-21T15:20:47.010
27,081,373
1,150,636
Theano multiple tensors as output
<p>I am using Theano to create a neural network, but when I try to <strong>return two lists of tensors at the same time in a list</strong> I get the error:</p> <pre></pre> <p>What kind of Theano structure should I use to return the two tensors together in an array so I can retrieve them like this:</p> <pre></pre> <p>I tried using some of the things I found in the <a href="http://deeplearning.net/software/theano/library/tensor/basic.html" rel="nofollow">basic Tensor functionality page</a> but none of those work. (For example I tried the stack or stacklists)</p> <p>Here is the error I get using theano.tensor.stack or stacklists:</p> <pre></pre> <p>A little extra context to the code:</p> <pre></pre>
[ { "AnswerId": "27081373", "CreationDate": "2014-11-22T19:12:42.537", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>This is not supported by Theano. When you call <code>theano.function(inputs, outputs)</code>, outputs can be only 2 things:</p>\n\n<p>1) a Theano variable\n2) a list of Theano variables</p>\n\n<p>(2) does not allow you to have a list in the top level list, so you should flatten the lists in the outputs. This will return more then 2 outputs.</p>\n\n<p>A compossible solution to your problem is to have the inner list copied into 1 variable.</p>\n\n<pre><code>tensor_nabla_w = theano.tensor.stack(*nabla_w).\n</code></pre>\n\n<p>This asks that all elements in nabla_w is are the same shape. This will add an extra copy in the computation graph (so it could be a little slower).</p>\n\n<p>Update 1: fix call to stack()</p>\n\n<p>Update 2:</p>\n\n<p>As of now, we have the added constraint that all the elements will have different shapes, so stack can not be used. If they all have the same number of dimensions and dtype, you can use <a href=\"http://deeplearning.net/software/theano/library/typed_list.html\" rel=\"nofollow\">typed_list</a>, otherwise you will need to modify Theano yourself or flatten the output's lists.</p>\n" } ]
27,077,788
2
<python><numpy><theano>
2014-11-22T13:14:11.653
null
1,002,394
Theano float64 matrix product value error
<p>I need to do matrix multiplication with float64 precision matrices. The following code works in float32 and matrix() instead of dmatrix(). However, when it comes to float64, it fails.</p> <pre></pre> <p>I have the following error:</p> <pre></pre> <p>How can I fix this? </p>
[ { "AnswerId": "27080228", "CreationDate": "2014-11-22T17:21:05.303", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>Check out <a href=\"http://deeplearning.net/software/theano/tutorial/using_gpu.html#tips-for-improving-performance-on-gpu\" rel=\"nofollow\">the deep learning tutorial here</a> to see that right now theano calculations only benefit from GPU when <code>float32</code> are passed. See also <a href=\"http://comments.gmane.org/gmane.comp.mathematics.theano.user/1087\" rel=\"nofollow\">this thread</a> on the mailing list.</p>\n" }, { "AnswerId": "27576976", "CreationDate": "2014-12-20T04:40:32.630", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The problem is not related to the GPU, but the value you used for the Theano flag blas.ldflags: 'asdfadf'</p>\n\n<p>This is passed to g++ when compiling again blas. You didn't put value g++ parameter.</p>\n" } ]
27,106,371
1
<neural-network><theano><deep-learning><lstm>
2014-11-24T13:47:27.667
null
1,367,788
LSTM implementation using theano
<p>I am using theano's scan function to implement an LSTM (long short-term memory), but I get an error like </p> <pre></pre> <p>I used scan like this: </p> <pre></pre> <p>and step_fprop is defined as follows:</p> <pre></pre> <p>Does anyone have any idea why I keep getting that kind of error?</p>
[ { "AnswerId": "27335619", "CreationDate": "2014-12-06T19:34:32.167", "ParentId": null, "OwnerUserId": "2805751", "Title": null, "Body": "<p><code>outputs_info</code> expects values for each of the variables passed in the <code>return</code> of the <code>step_fprop</code> function. your code seems to only return the hidden state <code>hh</code> but <code>ouputs_info</code> expects two values whose initial state is defined by <code>p_c</code> and <code>p_hidden_inputs</code></p>\n\n<p>It looks like you'll need to <code>return [_whatever_previous_lstm_state, hh]</code> in the step_fprop function </p>\n" } ]
27,120,646
2
<machine-learning><theano>
2014-11-25T07:07:18.567
null
3,121,136
Theano gradient doesn't work with .sum(), only .mean()?
<p>I'm trying to learn theano and decided to implement linear regression (using their Logistic Regression from the tutorial as a template). I'm getting a weird thing where T.grad doesn't work if my cost function uses .sum(), but does work if my cost function uses .mean(). Code snippet:</p> <p>(THIS DOESN'T WORK, RESULTS IN A W VECTOR FULL OF NANs):</p> <pre></pre> <p>(THIS DOES WORK, PERFECTLY):</p> <pre></pre> <p>The only difference is in the cost = single_error.sum() vs single_error.mean(). What I don't understand is that the gradient should be the exact same in both cases (one is just a scaled version of the other). So what gives?</p>
[ { "AnswerId": "27139964", "CreationDate": "2014-11-26T01:42:00.320", "ParentId": null, "OwnerUserId": "1209307", "Title": null, "Body": "<p>Try dividing your gradient descent step size by the number of training examples.</p>\n" }, { "AnswerId": "27576961", "CreationDate": "2014-12-20T04:37:00.160", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The learning rate (0.1) is way to big. Using mean make it divided by the batch size, so this help. But I'm pretty sure you should make it much smaller. Not just dividing by the batch size (which is equivalent to using mean).</p>\n\n<p>Try a learning rate of 0.001.</p>\n" } ]
27,191,631
1
<python><neural-network><theano>
2014-11-28T15:20:29.287
27,202,899
4,279,087
I'm trying to use CNN with data that aren't images
<p>The code being used is the CNN from <a href="http://deeplearning.net/tutorial/lenet.html#lenet" rel="nofollow">http://deeplearning.net/tutorial/lenet.html#lenet</a>, but I'm having problems understanding what I need to change in order for it to accept other types of data. The file that I used has the same format as MNIST but is much smaller; this is the data being used: <a href="https://archive.ics.uci.edu/ml/datasets/Iris" rel="nofollow">https://archive.ics.uci.edu/ml/datasets/Iris</a></p> <p>These are the two parts that I changed:</p> <p>Batch_size, nkers, n_epocchs</p> <pre></pre> <p>All parameters of the layers</p> <pre></pre> <p>When I execute this code I get this error:</p> <pre></pre> <p>Probably the answer is really easy, but I've been looking at this code for around a week and nothing comes to mind.</p>
[ { "AnswerId": "27202899", "CreationDate": "2014-11-29T13:35:55.523", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>The C in CNN stands for convolution. In order to perform a convolution you need variables that taken together form some sort of spatial/temporal/in any way continuous extent, on which a group structure holds, such as translation in space, translation in time, rotations, or something more exotic. This is not the case for the data you are working with, so using a CNN does not make much sense. (That doesn't stop you from trying to arrange the variables in a 2D space and to see what comes out, but it doesn't seem at all useful.) If you want to do NNs, stick to fully connected ones and start by evaluating logistic regression.</p>\n" } ]
27,276,822
2
<lua><mingw><luarocks><torch>
2014-12-03T16:30:46.630
null
2,698,948
Installing Torch7 with Luarocks on Windows with mingw build error
<p>I followed the instructions <a href="http://www.thijsschreijer.nl/blog/?p=863" rel="noreferrer">here</a> and set up Lua and Luarocks from scratch, with Mingw. Everything worked fine and I was able to install rocks, including ones which require compiling like LuaSocket.</p> <p>I followed the instructions on the <a href="http://torch.ch/" rel="noreferrer">Torch7</a> page to install Torch via luarocks. But it fails building. I do not understand why.</p> <p>Here is the console output. My best guess is that it has something to do with when I think I want it to use Mingw.</p> <pre></pre>
[ { "AnswerId": "31332589", "CreationDate": "2015-07-10T04:41:36.040", "ParentId": null, "OwnerUserId": "1442917", "Title": null, "Body": "<p>The command looks mostly correct, but I think the cmake command needs <code>-G \"MSYS Makefiles\"</code> option to use mingw instead of VS. You may also need to pull the most recent torch version as it includes <a href=\"https://github.com/torch/torch7/pull/287\" rel=\"nofollow\">several changes</a> that fix some compilation issues with mingw.</p>\n\n<p>Note that I haven't tested the changes with LuaRocks and not sure how to pass that additional option to it, but you should be able to run the same command manually to get the desired result (I compiled it from the command line).</p>\n" }, { "AnswerId": "34643788", "CreationDate": "2016-01-06T22:10:23.100", "ParentId": null, "OwnerUserId": "5498261", "Title": null, "Body": "<p><strong>cmake</strong> appears to use Visual Studio 9 2008, but it \"wrongly\" uses <strong>mingw32-make.exe</strong> instead of maybe... <strong>nmake.exe</strong>.\nYou could run this command:<code>\"c:\\Program Files (x86)\\Microsoft Visual Studio 9.0\\VC\\bin\\vcvars32.bat\"</code> (adapt to your visual studio path) in the same console, before you execute the <code>luarocks</code> command. Now, <strong>cmake</strong> should use <strong>nmake</strong>.</p>\n" } ]
27,288,032
1
<theano>
2014-12-04T06:58:44.270
27,323,988
1,926,931
How to create layer0 input for input images with 3 channels
<p>Hi, I am following the <a href="http://deeplearning.net/tutorial/code/convolutional_mlp.py" rel="nofollow">http://deeplearning.net/tutorial/code/convolutional_mlp.py</a> code to implement a conv neural net. I have input images where the channel is important and hence I want to have a 3-channel feature map as the layer 0 input.</p> <p>So I need something like this</p> <p></p> <p><em>instead of</em></p> <p></p> <p>which will be used here</p> <pre></pre> <p>where that x is provided to theano as</p> <pre></pre> <p>So - my question is - how should I create (shape) that train_set_x?</p> <p>With grayscale intensity (i.e. a single channel), train_set_x is created as</p> <pre></pre> <p>where data_x is a flattened numpy array of length 784 (for 28*28 pixels)</p> <p>Thanks a lot for the advice</p>
[ { "AnswerId": "27323988", "CreationDate": "2014-12-05T20:15:49.867", "ParentId": null, "OwnerUserId": "1926931", "Title": null, "Body": "<p>I was able to get it working. I am pasting some code here which might help some one. Not very elegant - but works. </p>\n\n<p><code>\ndef shuffle_in_unison(a, b):\n #courtsey http://stackoverflow.com/users/190280/josh-bleecher-snyder\n assert len(a) == len(b)\n shuffled_a = np.empty(a.shape, dtype=a.dtype)\n shuffled_b = np.empty(b.shape, dtype=b.dtype)\n permutation = np.random.permutation(len(a))\n for old_index, new_index in enumerate(permutation):\n shuffled_a[new_index] = a[old_index]\n shuffled_b[new_index] = b[old_index]\n return shuffled_a, shuffled_b\n</code></p>\n\n<pre><code>def createDataSet(imagefolder):\n\nos.chdir(imagefolder)\n\n# total number of files\nnumber_of_files = len([item for item in os.listdir('.') if os.path.isfile(os.path.join('.', item))])\n\n# get a shuffled list : I needed this because my image names were of the format n_x_&lt;some details&gt;.jpg\n# where n was my target and x was a number from 0 to m-1 where m was the number of samples\n# of the target value n. So I needed so shuffle and iterate while putting images in train\n# test and validate arrays\nimage_index_array = range(0,number_of_files)\nrandom.seed(12)\nrandom.shuffle(image_index_array)\n# split 80/10/10 - train/test/val\ntrainsize = int(number_of_files*.8)\ntestsize = int(number_of_files*.1)\nvalsize = number_of_files - trainsize - testsize\n\n# create the random value arrays of train/test/val by slicing the total image index array\ntrain_index_array = image_index_array[0:trainsize]\ntest_index_array = image_index_array[trainsize:trainsize+testsize]\nvalidate_index_array = image_index_array[trainsize+testsize:]\n\n# initialize the data structures\ndataset = {'train':[[],[]],'test':[[],[]],'validate':[[],[]]}\n\ni_counter = 0\ntrain_X = []\ntrain_y = []\n\ntest_X = []\ntest_y = []\n\nval_X = []\nval_y = []\n\nfor item in os.listdir('.'):\n if not os.path.isfile(os.path.join('.', item)):\n continue\n\n if item.endswith('.pkl'):\n continue\n\n print 'Processing item ' + item\n item_y = item.split('_')[0]\n item_x = cv2.imread(item)\n\n height, width = item_x.shape[:2]\n\n # this was my requirement - skip it if you do not need it\n if(height != 135 or width != 240):\n continue\n\n # get 3 channels\n b,g,r = cv2.split(item_x)\n\n item_x = [b,g,r]\n item_x = np.array(item_x)\n item_x = item_x.reshape(3,135*240)\n\n if i_counter in test_index_array:\n test_X.append(item_x)\n test_y.append(item_y)\n elif i_counter in validate_index_array:\n val_X.append(item_x)\n val_y.append(item_y)\n else:\n train_X.append(item_x)\n train_y.append(item_y)\n\n i_counter = i_counter + 1\n\n# fix the dimensions. 
Flatten out the channel and intensity dimensions \ntrain_X = np.array(train_X)\ntrain_X = train_X.reshape(train_X.shape[0],train_X.shape[1]*train_X.shape[2])\ntest_X = np.array(test_X)\ntest_X = test_X.reshape(test_X.shape[0],test_X.shape[1]*test_X.shape[2])\nval_X = np.array(val_X)\nval_X = val_X.reshape(val_X.shape[0],val_X.shape[1]*val_X.shape[2])\n\ntrain_y = np.array(train_y)\ntest_y = np.array(test_y)\nval_y = np.array(val_y)\n\n# shuffle the train and test arrays in unison\ntrain_X,train_y = shuffle_in_unison(train_X,train_y)\ntest_X,test_y = shuffle_in_unison(test_X,test_y)\n\n# pickle them\ndataset['train'] = [train_X,train_y]\ndataset['test'] = [test_X,test_y]\ndataset['validate'] = [val_X,val_y]\noutput = open('pcount.pkl', 'wb')\ncPickle.dump(dataset, output)\noutput.close`\n</code></pre>\n\n<p>Once you have this pickle file\nYou can use it in convolutional_mlp.py like this. </p>\n\n<pre><code> layer0_input = x.reshape((batch_size, 3, 135, 240))\n\n# Construct the first convolutional pooling layer:\n# filtering reduces the image size to (135-8+1 , 240-5+1) = (128, 236)\n# maxpooling reduces this further to (128/2, 236/2) = (64, 118)\n# 4D output tensor is thus of shape (batch_size, nkerns[0], 64, 118)\nlayer0 = LeNetConvPoolLayer(\n rng,\n input=layer0_input,\n image_shape=(batch_size, 3, 135, 240),\n filter_shape=(nkerns[0], 3, 8, 5),\n poolsize=(2, 2)\n)\n</code></pre>\n\n<p>The load_data function in logistic_sgd.py will need a small change as below</p>\n\n<pre><code> f = open(dataset, 'rb')\ndump = cPickle.load(f)\ntrain_set = dump['train']\nvalid_set = dump['validate']\ntest_set = dump['test']\nf.close()\n</code></pre>\n\n<p>Hope this helps</p>\n" } ]
27,288,990
3
<python><python-3.x><windows-7-x64><theano>
2014-12-04T08:04:24.370
null
1,253,251
Theano installation for Windows, Python 3, 64bit
<p>Is there a procedure to install Theano for Python 3.4 64bit on Windows 7, manually, without using any of the bundles?</p>
[ { "AnswerId": "29552088", "CreationDate": "2015-04-10T01:56:56.277", "ParentId": null, "OwnerUserId": "3403018", "Title": null, "Body": "<p>I wrote a step by step tutorial on this today, check it out at:<br>\n<a href=\"http://www.islandman93.com/2015/04/tutorial-python-34-theano-and-windows-7.html\" rel=\"nofollow\">http://www.islandman93.com/2015/04/tutorial-python-34-theano-and-windows-7.html</a></p>\n\n<p>Some specifics from my post:</p>\n\n<p>As user2805751 said, you'll need LibPython, Numpy, and Scipy.</p>\n\n<p>You'll also need: MinGW <a href=\"http://mingw-w64.yaxm.org/doku.php/download\" rel=\"nofollow\">http://mingw-w64.yaxm.org/doku.php/download</a></p>\n\n<p>Installing Theano with <code>python setup.py install</code> will automatically do the 2to3 conversion.</p>\n" }, { "AnswerId": "27327337", "CreationDate": "2014-12-06T01:36:35.733", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>There is no such instruction. There is a new PR with instructions that do not use any bundles, but it is for python2. You can try it, but change python2 to python3:</p>\n\n<p><a href=\"https://github.com/Theano/Theano/pull/2155\" rel=\"nofollow\">https://github.com/Theano/Theano/pull/2155</a></p>\n" }, { "AnswerId": "27328131", "CreationDate": "2014-12-06T03:56:00.297", "ParentId": null, "OwnerUserId": "2805751", "Title": null, "Body": "<p>If the above PR doesn't work, I was able to get the following working for windows 8.1 64 bit (python 2.7.8, but you might try 3.4) </p>\n\n<p><a href=\"http://pavel.surmenok.com/2014/05/31/installing-theano-with-gpu-on-windows-64-bit/\" rel=\"nofollow\">http://pavel.surmenok.com/2014/05/31/installing-theano-with-gpu-on-windows-64-bit/</a></p>\n\n<p>I needed to also install the LibPython package from <a href=\"http://www.lfd.uci.edu/~gohlke/pythonlibs/#libpython\" rel=\"nofollow\">http://www.lfd.uci.edu/~gohlke/pythonlibs/#libpython</a> in order to fix an ld linker problem.</p>\n" } ]
27,343,379
1
<python><neural-network><theano>
2014-12-07T13:57:35.423
27,383,389
2,278,493
RNN in generative mode with Theano (Scan op)
<p>I have a question regarding my RNN implementation.</p> <p>I have the following code</p> <pre></pre> <p>Now after training I could of course predict output as such:</p> <pre></pre> <p>However, this runs the network in a predictive mode (i.e. take X steps of input and predict an output). I would like to run it in a generative mode, meaning that I want to start from an initial state, have the RNN run for an arbitrary number of steps, and feed its output back as input.</p> <p>How could I do this?</p> <p>Thanks!</p>
[ { "AnswerId": "27383389", "CreationDate": "2014-12-09T16:02:15.810", "ParentId": null, "OwnerUserId": "2805751", "Title": null, "Body": "<p>You can run scan with an arbitrary number of steps using the <code>n_steps</code> parameter. To pass <code>y_t</code> to the next computation -- and assuming <code>x_t.shape == y_t.shape</code> -- you can use <code>outputs_info=[h0, x_t]</code> as an argument to scan and modify the step function to <code>one_step(h_tm1, y_tm1, W_ih, W_hh, b_h, W_ho, b_o)</code> </p>\n\n<p>More complex termination criteria could be created with the <code>theano.scan_module.until()</code> but those are a question of design not implementation. See <a href=\"http://deeplearning.net/software/theano/library/scan.html\" rel=\"nofollow\">here</a> for examples.</p>\n" } ]
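A minimal sketch of the feedback loop the answer describes, with made-up layer sizes; the weight names mirror the question's step function, but everything else here is an assumption:

<pre><code>import numpy as np
import theano
import theano.tensor as T

n_h = n_x = 8   # generative mode assumes y_t has the same shape as x_t
rng = np.random.RandomState(0)

def sh(*shape):
    return theano.shared(rng.randn(*shape).astype(theano.config.floatX))

W_ih, W_hh, b_h = sh(n_x, n_h), sh(n_h, n_h), sh(n_h)
W_ho, b_o = sh(n_h, n_x), sh(n_x)

def one_step(h_tm1, y_tm1):
    # The previous output is fed back in place of the input sequence.
    h_t = T.tanh(T.dot(y_tm1, W_ih) + T.dot(h_tm1, W_hh) + b_h)
    y_t = T.dot(h_t, W_ho) + b_o
    return h_t, y_t

h0 = T.vector('h0')
y0 = T.vector('y0')
n_steps = T.iscalar('n_steps')

[hs, ys], _ = theano.scan(one_step, outputs_info=[h0, y0], n_steps=n_steps)
generate = theano.function([h0, y0, n_steps], ys)
</code></pre>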
27,357,307
1
<neural-network><lmdb><caffe>
2014-12-08T11:47:04.623
27,375,961
213,615
Caffe convert_imageset with one class/ label
<p>I want to create an lmdb dataset from images, some of which contain the feature I want caffe to learn, and some of which don't.<br> My question is - in the text input file passed to convert_imageset - how should I label those images that don't contain the feature?<br> I know the format is</p> <pre></pre> <p>But which label should I assign to images <strong>without</strong> the feature?<br> For example, img1.jpg contains the feature, while img2.jpg and img3.jpg don't. So should the text file look like -</p> <pre></pre> <p>Thanks!</p>
[ { "AnswerId": "27375961", "CreationDate": "2014-12-09T09:54:19.587", "ParentId": null, "OwnerUserId": "213615", "Title": null, "Body": "<p>Got an answer from <a href=\"https://groups.google.com/forum/#!forum/caffe-users\" rel=\"nofollow\">Caffe-users Google Group</a> - yes, creating a dummy feature is the right way for this.<br>\nSo it is:</p>\n\n<pre><code>img1.jpg 0\nimg2.jpg 1\nimg3.jpg 1\n</code></pre>\n" } ]
27,382,474
1
<pymc><theano><pymc3>
2014-12-09T15:19:52.470
31,576,199
1,509,797
pymc3: parallel computing with njobs>1 vs. GPU
<p>I am trying to speed-up pymc3 sampling with parallelisation and I see only modest benefit.</p> <p>I was able to decrease total running time from 25 minutes (njobs=1) to 13 minutes (njobs=6) on i7 MacBook Pro. Due to the fact that it takes about 4 minutes before pymc actually starts sampling, the increase is relatively small.</p> <p>The question is - does anyone successfully using GPU with pymc3 and how much benefit can I get for models that take 6-8 minutes to sample? (My MacBook has nvidia GT 750M 2Gb)</p>
[ { "AnswerId": "31576199", "CreationDate": "2015-07-23T00:25:25.043", "ParentId": null, "OwnerUserId": "1690016", "Title": null, "Body": "<p>I'm running Linux on an Intel i7-4930.</p>\n\n<p>I ran a PyMC3 model that took 90 minutes on the CPU (utilizing all cores), but only took 18 minutes on my GeForce GTX 970.</p>\n\n<p>So a speed-up of almost 5x.</p>\n" } ]
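For context, a hedged sketch of what chain-parallel sampling looked like in the pymc3 of that era (the model here is made up; njobs was the argument controlling parallel jobs at the time, while GPU use is governed by Theano's device flag rather than by njobs):

<pre><code>import numpy as np
import pymc3 as pm

data = np.random.randn(100)

with pm.Model():
    mu = pm.Normal('mu', mu=0, sd=1)
    pm.Normal('obs', mu=mu, sd=1, observed=data)
    # Runs chains in separate processes; set THEANO_FLAGS=device=gpu
    # (before importing) if you want Theano to compile for the GPU.
    trace = pm.sample(2000, njobs=4)
</code></pre>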
27,396,664
2
<python><python-2.7><ubuntu><scikit-image><caffe>
2014-12-10T08:49:26.193
27,396,789
1,111,479
ImportError cannot import name BytesIO when import caffe on ubuntu
<p>I am trying to get <a href="http://caffe.berkeleyvision.org/">caffe</a> running on my machine equipped with Ubuntu 12.04LTS. After finishing all the steps on the <a href="http://caffe.berkeleyvision.org/installation.html">Installation page</a>, I trained the LeNet model successfully and tried to use it following the tutorial from <a href="http://radar.oreilly.com/2014/07/how-to-build-and-run-your-first-deep-learning-network.html">here</a>. Then I got the following error:</p> <pre></pre> <p>I set the <code>PYTHONPATH</code> in my shell configuration file before I did the above. What is the problem? Could anyone give some hint? I am really confused. After running the command <code>python -c 'import io; print io.__file__'</code> in the very directory:</p> <pre></pre> <p>So, the problem becomes: how to solve the <code>io</code> name clash? P.S.: I also opened an issue at <a href="https://github.com/BVLC/caffe/issues/1549">the caffe repository</a>. </p>
[ { "AnswerId": "34188511", "CreationDate": "2015-12-09T20:37:44.233", "ParentId": null, "OwnerUserId": "1695866", "Title": null, "Body": "<p>I ran into this problem as well, installing caffe on an AWS ubuntu 14.04 instance following the script as outlined on the BVLC github repo here: <a href=\"https://github.com/BVLC/caffe/wiki/Caffe-on-EC2-Ubuntu-14.04-Cuda-7\" rel=\"nofollow\" title=\"Caffe on EC2 Ubuntu 14.04\">\"Caffe on EC2 Ubuntu 14.04\"</a>.</p>\n\n<p>I have setup the python path as instructed. As diagnosed by @Martijn Pieters, the problem is that caffe is importing its own io library, which is then importing scikit-image's io library, which in turn is trying (but failing) to load the standard python io library (where BytesIO is located). Instead, due to the python path, when scikit-image tries to import BytesIO from the module io, it is circularly leading back to caffe's io module.</p>\n\n<p>I also found that even when not trying to import caffe, but due to having set my python path to include caffe, that this same problem hits me elsewhere.</p>\n\n<p>There are probably several ways to address this. But the essence is that the top-level import of caffe is at fault. To verify this, I altered the caffe code as follows:</p>\n\n<ol>\n<li><p>I renamed <code>.../caffe/io.py</code> module to <code>.../caffe/caffe_io.py</code> to be safe (although with correct namespace care, this shouldn't be necessary)</p></li>\n<li><p>I modified the import at the top of the <code>pycaffe.py</code> module from: <code>import caffe.io</code> to <code>import caffe.caffe_io</code></p></li>\n<li><p>I modified the import in <code>__init__.py</code> the same way (from <code>import caffe.io</code> to <code>import caffe.caffe_io</code>)</p></li>\n</ol>\n\n<p>Now, when you import io from python, it won't pick up the io library in caffe. When you import caffe, it will import its custom caffe_io library, and all should be well. You may want to do a more thorough scan through the python caffe modules to ensure I haven't overlooked other places where the import needs to change.</p>\n\n<p>I hope this helps. Perhaps when I have time, I'll issue a pull request with these (or similar) changes to the caffe github repo.</p>\n" }, { "AnswerId": "27396789", "CreationDate": "2014-12-10T08:56:52.597", "ParentId": null, "OwnerUserId": "100297", "Title": null, "Body": "<p>You appear to have a package or module named <code>io</code> in your Python path that is masking the standard library package. It is imported instead but doesn't have a <code>BytesIO</code> object to import.</p>\n\n<p>Try running:</p>\n\n<pre><code>python -c 'import io; print io.__file__'\n</code></pre>\n\n<p>in the same location you are running the tutorial and rename or move the file named by that import, presuming it is not the standard library version (ending in <code>lib/python2.7/io.pyc</code>).</p>\n\n<p>It could be you set your Python path to the wrong directory. You should include <code>path/to/caffe/python</code>, not <code>path/to/caffe/python/caffe</code>, nor should you try and run python with the latter as your current working directory. 
In both cases <code>import io</code> will find <a href=\"https://github.com/BVLC/caffe/blob/master/python/caffe/io.py\" rel=\"noreferrer\"><code>caffe/python/caffe/io.py</code></a> instead of the standard library version.</p>\n\n<p>The installation instructions are not at fault here; they clearly tell you to use:</p>\n\n\n\n<pre><code>export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH\n</code></pre>\n\n<p>Note the lack of <code>/caffe</code> at the end of that path.</p>\n" } ]
27,413,989
2
<python><theano>
2014-12-11T01:39:47.823
27,516,347
1,191,551
Passing a symbolic theano.tensor into a compiled theano.function
<p>I'm trying to refactor my code so that it is easier to change architectures. Currently, I'm constructing a recurrent neural network as follows.</p> <pre></pre> <p>This works just fine. But the problem with this implementation is that I have to hard-code the weights and make sure all the math is correct every time I change architectures. Inspired by the <a href="http://deeplearning.net/tutorial/mlp.html" rel="nofollow">Multilayer Perceptron tutorial</a>, I tried refactoring my code by introducing a Layer class.</p> <pre></pre> <p>This allows me to write the RNN code much more cleanly, makes it less error-prone, and makes it a lot easier to change the architectures.</p> <pre></pre> <p>However, when I run my program I get an error</p> <pre></pre> <p>The problem here is that the scan operation is passing a symbolic variable (a subtensor of Xs) to the compiled step function.</p> <p>The whole point of refactoring my code was so that I wouldn't have to define all of the computation within the step function. Now I am left with four symbolic variables that define a segment of the computational graph that I need to scan over with theano.scan. However, I'm not sure how to do this, because a compiled theano.function cannot take in a symbolic variable. </p> <p>Here is a simplified example of what I am trying to do using the <a href="http://deeplearning.net/software/theano/library/scan.html" rel="nofollow">exponentiation example</a>.</p> <pre></pre> <p>Any ideas how to get around this error?</p>
[ { "AnswerId": "27509494", "CreationDate": "2014-12-16T16:36:20.050", "ParentId": null, "OwnerUserId": "1675559", "Title": null, "Body": "<p>You basically just can't use a compiled Theano function as a scan op.</p>\n\n<p>The way around this is to get your Layer classes to have a function that returns a function, which builds your computation tree, which you can then in turn use to compile the scan op. </p>\n" }, { "AnswerId": "27516347", "CreationDate": "2014-12-17T00:17:45.250", "ParentId": null, "OwnerUserId": "1191551", "Title": null, "Body": "<p>So the solution is to use <code>theano.clone</code> with the <code>replaces</code> keyword argument. For example, in the exponentiation example, you can define a step function as follows:</p>\n\n<pre><code>def step(p, a):\n replaces = {prior_result: p, A: a}\n n = theano.clone(next_result, replace=replaces)\n return n\n</code></pre>\n" } ]
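For completeness, a minimal sketch of this pattern on the scan documentation's exponentiation example (the graph fragment is built once outside scan, then re-wired per step with theano.clone):

<pre><code>import theano
import theano.tensor as T

prior_result = T.vector('prior_result')
A = T.vector('A')
next_result = prior_result * A          # graph fragment built once

def step(p, a):
    # Swap scan's per-iteration variables into the prebuilt graph.
    return theano.clone(next_result, replace={prior_result: p, A: a})

k = T.iscalar('k')
x = T.vector('x')
result, updates = theano.scan(step, outputs_info=T.ones_like(x),
                              non_sequences=x, n_steps=k)
power = theano.function([x, k], result[-1], updates=updates)
print power([1., 2., 3.], 3)            # [  1.   8.  27.]
</code></pre>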
27,436,987
2
<image><machine-learning><neural-network><deep-learning><caffe>
2014-12-12T04:40:59.780
null
3,958,348
Caffe Multiple Input Images
<p>I'm looking at implementing a Caffe CNN which accepts two input images and a label (later perhaps other data) and was wondering if anyone was aware of the correct syntax in the prototxt file for doing this? Is it simply an IMAGE_DATA layer with additional tops? Or should I use separate IMAGE_DATA layers for each?</p> <p>Thanks, James</p>
[ { "AnswerId": "30555159", "CreationDate": "2015-05-31T08:01:14.683", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>You may also consider using HDF5_DATA layer with multiple \"top\"s</p>\n" }, { "AnswerId": "29162458", "CreationDate": "2015-03-20T08:39:14.340", "ParentId": null, "OwnerUserId": "2237635", "Title": null, "Body": "<p>Edit: I have been using the HDF5_DATA layer lately for this and it is definitely the way to go. </p>\n\n<p>HDF5 is a key value store, where each key is a string, and each value is a multi-dimensional array. Thus, to use the HDF5_DATA layer, just add a new key for each top you want to use, and set the value for that key to store the image you want to use. Writing these HDF5 files from python is easy:</p>\n\n<pre><code>import h5py\nimport numpy as np\n\nfilelist = []\nfor i in range(100):\n    image1 = get_some_image(i)\n    image2 = get_another_image(i)\n    filename = '/tmp/my_hdf5%d.h5' % i\n    with h5py.File(filename, 'w') as f:\n        f['data1'] = np.transpose(image1, (2, 0, 1))\n        f['data2'] = np.transpose(image2, (2, 0, 1))\n    filelist.append(filename)\nwith open('/tmp/filelist.txt', 'w') as f:\n    for filename in filelist:\n        f.write(filename + '\\n')\n</code></pre>\n\n<p>Then simply set the source of the HDF5_DATA param to be '/tmp/filelist.txt', and set the tops to be \"data1\" and \"data2\".</p>\n\n<p>I'm leaving the original response below:</p>\n\n<p>====================================================</p>\n\n<p>There are two good ways of doing this. The easiest is probably to use two separate IMAGE_DATA layers, one with the first image and label, and a second with the second image. Caffe retrieves images from LMDB or LEVELDB, which are key value stores, and assuming you create your two databases with corresponding images having the same integer id key, Caffe will in fact load the images correctly, and you can proceed to construct your net with the data/labels of both layers.</p>\n\n<p>The problem with this approach is that having two data layers is not really very satisfying, and it doesn't scale very well if you want to do more advanced things like having non-integer labels for things like bounding boxes, etc. If you're prepared to make a time investment in this, you can do a better job by modifying the tools/convert_imageset.cpp file to stack images or other data across channels. For example you could create a datum with 6 channels - the first 3 for your first image's RGB, and the second 3 for your second image's RGB. After reading this in using the IMAGE_DATA layer, you can split the stream into two images using a SLICE layer with a slice_point at index 3 along the slice_dim = 1 dimension. If further down the road, you decide that you want to load even more complex assortments of data, you'll understand the encoding scheme and can write your own decoding layer based off of src/caffe/layers/data_layer.cpp to gain full control of the pipeline.</p>\n" } ]
27,438,528
1
<integration><ros><caffe>
2014-12-12T07:03:56.887
null
3,958,348
Integrate Caffe model into ROS
<p>I'm looking at integrating a trained Caffe network into ROS and was wondering if anyone had any experience with this?</p> <p>Thanks, James</p>
[ { "AnswerId": "30250973", "CreationDate": "2015-05-15T03:18:31.250", "ParentId": null, "OwnerUserId": "3113676", "Title": null, "Body": "<p>Visit <a href=\"http://tzutalin.blogspot.tw/2015/06/integrate-ros-into-caffe.html\" rel=\"nofollow\">http://tzutalin.blogspot.tw/2015/06/integrate-ros-into-caffe.html</a>\nIntegrate Caffe into ROS indigo.</p>\n" } ]
27,484,484
1
<python><theano>
2014-12-15T12:48:12.363
null
2,278,493
Error when using multiple taps using Scan in Theano
<p>I have made a simple RNN with Theano to test some stuff out. For a small test I would like to add more taps to the hidden layer's output.</p> <p>However, this gives me the following error:</p> <pre></pre> <p>Below you can find the full code I am using to reproduce this error</p> <pre></pre> <p>This works fine without the second tap added (so with only taps=[-1] and only h_tm1 in the one_step() function).</p> <p>Am I doing something wrong here, or might this be a bug in Theano?</p>
[ { "AnswerId": "27509347", "CreationDate": "2014-12-16T16:29:05.073", "ParentId": null, "OwnerUserId": "1675559", "Title": null, "Body": "<p>I'm not sure if this is your problem, but writing your <code>outputs_info</code> in this way expects only one return value of type matrix. The output sequence will then be a 3d tensor of (sequence_length,2,input_width).</p>\n" } ]
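For illustration, a minimal multi-tap setup that compiles (toy recurrence, shapes assumed): with taps=[-2, -1] the initial value must stack two timesteps along its first axis, and the step function receives one argument per tap, oldest first.

<pre><code>import numpy as np
import theano
import theano.tensor as T

initial = T.matrix('initial')   # rows are h(-2) and h(-1), shape (2, width)

def one_step(h_tm2, h_tm1):
    return h_tm1 + 0.5 * h_tm2  # toy recurrence using both taps

result, updates = theano.scan(
    one_step,
    outputs_info=[dict(initial=initial, taps=[-2, -1])],
    n_steps=10)

f = theano.function([initial], result)
print f(np.ones((2, 3), dtype=theano.config.floatX)).shape   # (10, 3)
</code></pre>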
27,497,502
1
<python><python-2.7><theano>
2014-12-16T04:25:21.690
null
4,127,806
undefined symbol: ATL_chemv
<p>When I run this command</p> <pre><code>sudo python -c "import numpy; numpy.test()"</code></pre> <p>I get this error</p> <pre><code>ImportError: /usr/lib/liblapack.so.3gf: undefined symbol: ATL_chemv</code></pre> <p>How can I fix it? </p>
[ { "AnswerId": "32422573", "CreationDate": "2015-09-06T10:30:30.840", "ParentId": null, "OwnerUserId": "5305715", "Title": null, "Body": "<p>Actually, the answers in <a href=\"https://stackoverflow.com/questions/12249089\">this question</a> will help resolve things better, especially if you want to keep all the options. Most others will tell you to remove packages when you don't need to.</p>\n" } ]
27,541,733
1
<deep-learning><theano><caffe><logistic-regression><softmax>
2014-12-18T08:00:45.000
null
1,926,931
Loss function for ordinal target on SoftMax over Logistic Regression
<p>I am using Pylearn2 OR Caffe to build a deep network. My target is ordered nominal. I am trying to find a proper loss function, but cannot find any in Pylearn2 or Caffe. </p> <p>I read the paper "Loss Functions for Preference Levels: Regression with Discrete Ordered Labels". I get the general idea - but I am not sure I understand what the thresholds would be if my final layer is a SoftMax over Logistic Regression (outputting probabilities). </p> <p>Can someone help me by pointing to any implementation of such a loss function?</p> <p>Thanks, Regards</p>
[ { "AnswerId": "28131434", "CreationDate": "2015-01-24T23:25:10.993", "ParentId": null, "OwnerUserId": "1269942", "Title": null, "Body": "<p>For both pylearn2 and caffe, your labels will need to be 0-4 instead of 1-5...it's just the way they work. The output layer will be 5 units, each essentially a logistic unit...and the softmax can be thought of as an adaptor that normalizes the final outputs. But \"softmax\" is commonly used as an output type. When training, the value of any individual unit is rarely ever exactly 0.0 or 1.0...it's always a distribution across your units - which log-loss can be calculated on. This loss is used to compare against the \"perfect\" case and the error is back-propped to update your network weights. Note that a raw output from PL2 or Caffe is not a specific digit 0, 1, 2, 3, or 4...it's 5 numbers, each associated with the likelihood of each of the 5 classes. When classifying, one just takes the class with the highest value as the 'winner'. </p>\n\n<p>I'll try to give an example...\nsay I have a 3 class problem, I train a network with a 3 unit softmax.\nthe first unit represents the first class, second the second and third, third.</p>\n\n<p>Say I feed a test case through and get...</p>\n\n<p>0.25, 0.5, 0.25 ...0.5 is the highest, so a classifier would say \"2\" (the second class). This is the softmax output...it makes sure the sum of the output units is one.</p>\n" } ]
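To make the arithmetic concrete, a small numpy sketch of the softmax/winner view described above (values made up):

<pre><code>import numpy as np

logits = np.array([0.2, 1.5, 0.2])             # raw scores from the top layer
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: positive, sums to 1
winner = int(np.argmax(probs))                 # predicted class index (0-based)
true_class = 1
log_loss = -np.log(probs[true_class])          # per-example loss for backprop
print winner, log_loss
</code></pre>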
27,544,698
1
<python><neural-network><backpropagation><theano>
2014-12-18T10:45:20.757
null
4,373,828
pure-python RNN and theano RNN computing different gradients -- code and results provided
<p>I've been banging my head against this for a while and can't figure out what I've done wrong (if anything) in implementing these RNNs. To spare you guys the forward phase, I can tell you that the two implementations compute the same outputs, so the forward phase is correct. The problem is in the backwards phase.</p> <p>Here is my python backwards code. It follows the style of karpathy's neuraltalk quite closely but not exactly:</p> <pre></pre> <p>And here's the theano code (mainly copied from another implementation I found online. I initialize the weights to my pure-python rnn's randomized weights so that everything is the same.):</p> <pre></pre> <p>Now here's the crazy thing. If I run the following:</p> <pre></pre> <p>I get the following output:</p> <pre></pre> <p>So, I'm getting the derivatives of the weight matrix between the hidden layer and output right, but not the derivatives of the weight matrix hidden -> hidden or input -> hidden. But the insane thing is that I ALWAYS get the LAST ROW of the weight matrix input -> hidden correct. This is insanity to me. I have no idea what's happening here. Note that the last row of the weight matrix input -> hidden does NOT correspond to the last timestep or anything (this would be explained, for example, by me calculating the derivatives correctly for the last timestep but not propagating back through time correctly). dcdWxh is the sum over all time steps of dcdWxh -- so how can I get one row of this correct but none of the others???</p> <p>Can anyone help? I'm all out of ideas here.</p>
[ { "AnswerId": "33660600", "CreationDate": "2015-11-11T21:56:56.593", "ParentId": null, "OwnerUserId": "2886336", "Title": null, "Body": "<p>You should compute the sum of the <strong>pointwise absolute value</strong> of the difference of the two matrices. The plain sum could be close to zero due to the specific learning task (do you emulate the zero function? :), whichever that is.</p>\n\n<p>The last row presumably implements the weights from a constant-on neuron, i.e. the bias, so you - seem to - always get the bias right (however, check the sum of absolute values).</p>\n\n<p>It also looks like row-major and column-major notation of matrices are confused, like in</p>\n\n<pre><code>gWhx - bc['dcdWxh']\n</code></pre>\n\n<p>which reads like the weight from \"hidden to x\" as opposed to \"x to hidden\".</p>\n\n<p>I'd rather post this as a comment, but I lack the reputation to do so. Sorry!</p>\n" } ]
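A small numpy sketch of the suggested check (the helper name is mine):

<pre><code>import numpy as np

def grad_mismatch(g_a, g_b):
    # A plain sum of (g_a - g_b) can cancel to ~0 even when entries
    # disagree, so compare pointwise absolute differences instead.
    diff = np.abs(g_a - g_b)
    return diff.max(), diff.sum()
</code></pre>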
27,554,484
2
<python-2.7><cuda><nvidia><theano><pylearn>
2014-12-18T19:45:41.290
null
2,423,116
import theano results in ImportError
<p>I'm trying to use theano but I get an error when I import it. I've installed cuda_6.5.14_linux_64.run, and passed all the recommended tests in Chapter 6 of <a href="http://developer.download.nvidia.com/compute/cuda/6_5/rel/docs/CUDA_Getting_Started_Linux.pdf" rel="nofollow">this</a> NVIDIA PDF. Ultimately I want to be able to install pylearn2, but I get the exact same error as below when I try to compile it.</p> <p>EDIT1: My .theanorc looks like:</p> <pre></pre> <p>If I replace gpu with cpu, the command <code>import theano</code> succeeds.</p> <pre></pre>
[ { "AnswerId": "29040199", "CreationDate": "2015-03-13T19:13:09.187", "ParentId": null, "OwnerUserId": null, "Title": null, "Body": "<p>We also saw this error. We found that putting /usr/local/cuda-6.5/bin in $PATH seemed to fix it (even with the root = ... line in .theanorc).</p>\n" }, { "AnswerId": "27833622", "CreationDate": "2015-01-08T05:40:29.923", "ParentId": null, "OwnerUserId": "2341715", "Title": null, "Body": "<p>I encountered exactly the same question.</p>\n\n<p>My solution is to replace cuda-6.5 with cuda-5.5, and everything works fine.</p>\n" } ]
27,600,893
1
<python><matrix><theano>
2014-12-22T10:32:18.887
null
2,398,548
theano: summation by class label
<p>I have a matrix which represents the distances to the k nearest neighbours of a set of points, and there is a matrix of class labels of those nearest neighbours (both are N-by-k matrices).</p> <p>What is the best way in theano to build an (N-by-#classes) matrix whose (i,j) element is the sum of distances from the i-th point to those of its k-NN points that have class label 'j'?</p> <p>Example: </p> <pre></pre> <p>How can I do this task in theano?</p> <pre class="lang-py prettyprint-override"></pre>
[ { "AnswerId": "27603816", "CreationDate": "2014-12-22T13:37:59.863", "ParentId": null, "OwnerUserId": "764322", "Title": null, "Body": "<p>You might be interested in having a look at this repo: \n<a href=\"https://github.com/erogol/KLP_KMEANS/blob/master/klp_kmeans.py\" rel=\"nofollow\">https://github.com/erogol/KLP_KMEANS/blob/master/klp_kmeans.py</a></p>\n\n<p>It is a K-Means implementation using theano (func <code>klp_kmeans</code>). I believe what you want is the matrix <code>W</code> used in the function <code>find_bmu</code>. </p>\n\n<p>Hope you find it useful.</p>\n" } ]
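For reference, the per-class summation asked about can also be written directly with broadcasting, without any k-means machinery; a sketch (variable names are mine):

<pre><code>import numpy as np
import theano
import theano.tensor as T

D = T.matrix('D')        # (N, k) distances to the k nearest neighbours
C = T.imatrix('C')       # (N, k) class labels of those neighbours
n_classes = T.iscalar('n_classes')

# One-hot encode the labels, then sum distances per class:
# out[i, j] = sum_k D[i, k] * (C[i, k] == j)
onehot = T.eq(C.dimshuffle(0, 1, 'x'),
              T.arange(n_classes).dimshuffle('x', 'x', 0))
out = (D.dimshuffle(0, 1, 'x') * onehot).sum(axis=1)   # (N, n_classes)

f = theano.function([D, C, n_classes], out)
d = np.array([[1., 2.], [3., 4.]], dtype=theano.config.floatX)
c = np.array([[0, 1], [1, 1]], dtype='int32')
print f(d, c, 2)   # [[ 1.  2.], [ 0.  7.]]
</code></pre>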
27,609,843
1
<python><theano><pylearn>
2014-12-22T20:20:35.800
null
534,080
pylearn2 CSVDataset TypeError
<p>I'm having an issue loading a custom dataset into pylearn2. I'm trying to get a simple MLP to train using a tiny XOR dataset. I have a CSV dataset file in the same directory as my yaml file, which is not in the same directory as pylearn2's train script.</p> <p>Here's the entire contents of the CSV file:</p> <pre></pre> <p>Here is the entire contents of my YAML file:</p> <pre></pre> <p>When I run pylearn2's train script, it fails before training (presumably while compiling the theano functions). Here's the entirety of the output:</p> <pre></pre> <p>What does this mean, exactly? I looked into the code for CSVDataset, and it loads the data in using numpy, which should bring the values in as floats. Nothing changes if I edit the CSV values to look like floats.</p>
[ { "AnswerId": "27621601", "CreationDate": "2014-12-23T13:43:04.897", "ParentId": null, "OwnerUserId": "4388766", "Title": null, "Body": "<p>This is because the type of the y attribute of CSVDataset is set to float64.<br>\nI've fixed __init__() of csv_dataset.py as follows, and it works.<br>\nI don't know whether this is pylearn2's problem or not.</p>\n\n<pre><code>if self.task == 'regression':\n    super(CSVDataset, self).__init__(X=X, y=y)\nelse:\n    super(CSVDataset, self).__init__(X=X, y=y.astype(int),\n                                     y_labels=np.max(y) + 1)\n</code></pre>\n\n<p>BTW, you should fix your yaml:</p>\n\n<ul>\n<li>n_classes of the Softmax layer should be 2</li>\n<li>\"channel_name: 'valid_y_misclass'\" causes an error because you don't set the \"valid\" attribute of monitoring_dataset.<br>\nTry to set \"valid\" of the monitoring dataset, or use 'train_y_misclass' instead.</li>\n</ul>\n" } ]
27,624,812
1
<python><theano>
2014-12-23T17:01:42.577
null
2,491,687
Tiling Theano tensor
<p>I have two tensors A and B, where the first one has size (500,10) and the second has size (500). I want to find A / B. I am using the regular / operator, and the Theano compiler says that they should be the same size. Then I tried using tensor.tile on B to make it the same size as A. It has three parameters (x, reps, ndim). I tried different values, and I am limited to these constraints: x.ndim = len(reps) and ndim = len(reps). Given these constraints, how can I tile the vector into a matrix?! Is this a bug in Theano?</p>
[ { "AnswerId": "27658318", "CreationDate": "2014-12-26T14:23:27.687", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>You can just broadcast it, and there are several ways to do it. Take the following example</p>\n\n<pre><code>import numpy as np\nA = np.arange(1., 5001., 1.).reshape(500, 10)\nB = np.arange(1., 501., 1.)\n\nimport theano\nAs = theano.shared(A)\nBs = theano.shared(B)\n</code></pre>\n\n<p>The failsafe way of doing this is adding an appropriate axis</p>\n\n<pre><code>AoverB = A / B[:, np.newaxis]\nAoverBalso = A / B.reshape((-1, 1))\nAsoverBs = As / Bs.reshape((-1, 1))\n</code></pre>\n\n<p>Another way is to exploit the fact that there is implicit broadcasting to pad the first axes if they are missing</p>\n\n<pre><code>AoverBT = A.T / B.T # no axis was added here\nAsoverBsT = As.T / Bs.T\n</code></pre>\n\n<p>In order to show that all these calculate the same thing, we use <code>numpy.testing</code></p>\n\n<pre><code>from numpy.testing import assert_array_equal\nassert_array_equal(AoverB, AoverBalso)\nassert_array_equal(AoverB, AsoverBs.eval())\nassert_array_equal(AoverB, AoverBT.T)\nassert_array_equal(AoverB, AsoverBsT.T.eval())\n</code></pre>\n" } ]
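An equivalent, arguably more idiomatic Theano spelling is to add a broadcastable axis with dimshuffle instead of tiling:

<pre><code>import theano
import theano.tensor as T

A = T.matrix('A')             # shape (500, 10)
B = T.vector('B')             # shape (500,)
C = A / B.dimshuffle(0, 'x')  # B becomes (500, 1) and broadcasts over columns
f = theano.function([A, B], C)
</code></pre>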
27,629,347
3
<python><python-3.x><anaconda><theano><conda>
2014-12-23T23:10:04.367
null
4,386,560
How do I install theano in Anaconda ver. 2.1 Windows 64 bit for Python 3.4?
<p>I've installed <a href="https://en.wikipedia.org/wiki/Anaconda_%28Python_distribution%29" rel="nofollow noreferrer">Anaconda</a>. Now I want to install the Theano library in Anaconda. I have tried:</p> <ol> <li><p>The Theano installer for Anaconda from <a href="http://deeplearning.net/software/theano/install.html#windows-installer-for-anacondace" rel="nofollow noreferrer">http://deeplearning.net/software/theano/install.html#windows-installer-for-anacondace</a>, but it raises error "The installer could not find a version of Anaconda installed. Please download and install Anaconda CE". I have added ~/anaconda3, ~/anaconda3/scripts to the environment variable path.</p></li> <li><p>I have tried to install it by building the package as mentioned on Stack&nbsp;Overflow, <em><a href="https://stackoverflow.com/questions/18640305">How do I keep track of pip-installed packages in an Anaconda (Conda) environment?</a></em>, but this also fails during the testing of package. The error screenshot is below:</p></li> </ol> <p><img src="https://i.stack.imgur.com/80vxe.jpg" alt="Theano error message"></p> <p>I have even installed Python ver. 3.4, installed Theano using pip install, and when I tried to import Theano it gave an error similar to the error in the screen shot. I tried the changes mentioned in this <a href="https://en.wikipedia.org/wiki/Google_Groups" rel="nofollow noreferrer">Google Groups</a> discussion, <strong><a href="http://groups.google.com/d/msg/theano-users/RcANq0NQgPE/IQMHD3mggi0J" rel="nofollow noreferrer">Re: [theano-users] Install Theano on Windows for Python 3</a></strong>, but no luck.</p>
[ { "AnswerId": "42214273", "CreationDate": "2017-02-13T21:56:08.553", "ParentId": null, "OwnerUserId": "1887281", "Title": null, "Body": "<p>Note to moderator: This is NOT a duplicate post. All of my other posts were deleted, so I'm leaving this one here and will flag the other questions as duplicate. </p>\n\n<p>I could never get a working installation of Theano using Anaconda with Python 3.4, and I also could never get the manual installation working with MinGW, but I was able to get it working flawlessly using WinPython 3.4. </p>\n\n<p><strong>Theano Installation and Configuration on Windows 10 with GPU Acceleration and Python 3.4</strong></p>\n\n<p>If you're using Windows, Theano can be tricky to install and configure. I was able to get it working by following a combination of these tutorials:</p>\n\n<ul>\n<li><a href=\"http://ankivil.com/installing-keras-theano-and-dependencies-on-windows-10/\" rel=\"nofollow noreferrer\" title=\"Installing Keras and Theano with GPU-acceleration on Windows 10\">Installing Keras and Theano with GPU-acceleration on Windows 10</a></li>\n<li><a href=\"http://ankivil.com/making-theano-faster-with-cudnn-and-cnmem-on-windows-10/\" rel=\"nofollow noreferrer\" title=\"Making Theano faster on Windows 10 with CuDNN and CNMeM\">Making Theano faster on Windows 10 with CuDNN and CNMeM</a></li>\n<li><a href=\"http://deeplearning.net/software/theano_versions/dev/install_windows.html\" rel=\"nofollow noreferrer\" title=\"Official Theano installation instructions for Windows\">Official Theano installation instructions for Windows</a></li>\n</ul>\n\n<p><strong>Easier configuration of Theano with Python 3.4 using WinPython instead of Anaconda Python</strong></p>\n\n<p>It was much easier to get Theano working on Python 3.4 when using <a href=\"http://winpython.sourceforge.net/\" rel=\"nofollow noreferrer\" title=\"WinPython download\">WinPython</a> instead of <a href=\"https://www.continuum.io/downloads\" rel=\"nofollow noreferrer\" title=\"Anaconda Python download\">Anaconda Python</a>, but WinPython stores environment settings in its settings directory (e.g. <code>C:\\SciSoft\\WinPython-64bit-3.4.4.2\\settings\\.keras\\</code>) rather than looking in your <code>%USERPROFILE%</code> for the keras.json file when you're wanting it to pick up your environment settings (as explained in the setup guides). Also, if you are still having trouble, you might just need to set the <code>THEANO_FLAGS</code> system environment variable to something like this: <code>floatX=float32,device=gpu,nvcc.fastmath=True,lib.cnmem=0.8,blas.ldflags=-LC:\\src\\OpenBLAS -lopenblas</code>. (Note that this environment variable overrides the settings in any .theanorc setup file as detailed <a href=\"http://deeplearning.net/software/theano/library/config.html\" rel=\"nofollow noreferrer\" title=\"Official Theano configuration instructions\">here in the Theano configuration documentation</a> except if using WinPython, the .theanorc file would go into <code>C:\\SciSoft\\WinPython-64bit-3.4.4.2\\settings\\.theanorc</code> rather than <code>%USERPROFILE\\.theanorc</code>.)</p>\n\n<p>When installing Theano with WinPython, installation is much easier if you use the suggested Theano installation location (<code>C:\\SciSoft\\</code>). 
In that case, your installation directory should look like this:</p>\n\n<p><a href=\"http://ankivil.com/installing-keras-theano-and-dependencies-on-windows-10/\" rel=\"nofollow noreferrer\" title=\"Installing Keras and Theano with GPU-acceleration on Windows 10\"><img src=\"https://i.stack.imgur.com/gnwme.png\" alt=\"Picture of SciSoft installation directory\" title=\"Image of installation directory for Theano\"></a></p>\n\n<p><strong>Fixing bugs in Theano environment batch file when using WinPython</strong></p>\n\n<p>The other issue I ran into with the Theano installation guides is that the batch script had some bugs in it that were causing the dependency paths to be incorrect. Here was my final version of the <code>env.bat</code> file:</p>\n\n<pre><code>REM configuration of paths\nset VSFORPYTHON=\"C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\"\nset SCISOFT=%~dp0\n\nREM add tdm gcc stuff\nset PATH=%SCISOFT%TDM-GCC-64\\bin;%SCISOFT%TDM-GCC-64\\x86_64-w64-mingw32\\bin;%PATH%\n\nREM add winpython stuff\nCALL %SCISOFT%WinPython-64bit-3.4.4.2\\scripts\\env.bat\n\nREM configure path for msvc compilers\nREM for a 32 bit installation change this line to\nREM CALL %VSFORPYTHON%\\vcvarsall.bat\nCALL %VSFORPYTHON%\\vcvarsall.bat amd64\n\nREM return a shell\ncmd.exe /k\n</code></pre>\n\n<p>If using Theano, your .keras file will need to be setup like:</p>\n\n<pre><code>{\n \"floatx\": \"float32\",\n \"epsilon\": 1e-07,\n \"image_dim_ordering\": \"th\",\n \"backend\": \"theano\" \n}\n</code></pre>\n\n<p><strong>Issue with installing CuDNN</strong></p>\n\n<p>Another key thing was that the CuDNN DLLs need to be copied into their corresponding folders in the CUDA installation directory in order for them to be detected. Instructions are detailed here: <a href=\"https://stackoverflow.com/questions/36248056/how-to-setup-cudnn-with-theano-on-windows-7-64-bit\" title=\"How to install CuDNN into CUDA on Windows\">Instructions for installing CuDNN into CUDA on Windows</a></p>\n\n<p><strong>If still having issues with Theano installation on Windows with Python 3.4:</strong></p>\n\n<p>Then please review the information here: <a href=\"https://github.com/Theano/Theano/issues/3376\" rel=\"nofollow noreferrer\" title=\"How to get Theano working with Python 3.5\">Full installation guide for Theano on Windows with Python 3.4, including all required environment variables and PATH directories</a></p>\n\n<p><strong>Another key issue with installing the C++ dependencies for Theano</strong></p>\n\n<p>Another thing I was tripped up by is that in the <a href=\"http://deeplearning.net/software/theano_versions/dev/install_windows.html\" rel=\"nofollow noreferrer\" title=\"Official Theano Windows installation instructions\">official Theano documentation</a>, it provides very specific instructions on installing the <a href=\"http://www.microsoft.com/en-us/download/details.aspx?id=44266\" rel=\"nofollow noreferrer\" title=\"Official Microsoft Visual C++ libraries required for Python\">Microsoft Visual C++ Compiler for Python 2.7</a>. It seems to be that this compiler is also required to be installed <em>in exactly the way that the Theano documentation specifies to perform the installation on the command line</em> to get Python 3.4 to work. 
I will quote the official Theano documentation, which states:</p>\n\n<blockquote>\n <ol>\n <li>open an administrator’s console (got to <code>start</code>, then type <code>cmd</code>,\n right-click on the command prompt icon and select <code>run as\n administrator</code>) </li>\n <li><code>cd</code> to your downloads directory and execute <code>msiexec /i\n VCForPython27.msi ALLUSERS=1</code></li>\n </ol>\n</blockquote>\n\n<p><strong>General advice about GPU-acceleration</strong></p>\n\n<p>And FYI, if you haven't tried configuring a neural network library, I highly recommend that you use GPU-acceleration. </p>\n" }, { "AnswerId": "36744339", "CreationDate": "2016-04-20T12:49:13.953", "ParentId": null, "OwnerUserId": "3667840", "Title": null, "Body": "<p>As we can see, you have tried to use Theano under Windows. Please, ensure that you have a <a href=\"http://en.wikipedia.org/wiki/MinGW\" rel=\"nofollow noreferrer\">MinGW</a> compiler. Further, ensure that you have MinGW and libpython packages.</p>\n\n<p>Generally, I recommend to use the answer <em><a href=\"https://stackoverflow.com/questions/34097988/how-to-install-keras-and-theano-in-anaconda-python-2-7-in-windows/34975902#34975902\">How do I install Keras and Theano in Anaconda Python 2.7 on Windows?</a></em>, but without the last step.</p>\n" }, { "AnswerId": "27752192", "CreationDate": "2015-01-03T05:57:36.073", "ParentId": null, "OwnerUserId": "2900367", "Title": null, "Body": "<p>Running Theano on Python 3.4 is complicated. So far I would recommend that you run Theano in Python 2.7. The libraries written for Theano are Python 2.6+ based. So in order to get Theano running in Python 3.4, you would be needing the 2to3 automated python 2 to 3 code translation tool. I haven't tested Theano using 2to3, so I can't comment on whether it would work or not. But, I am using Python 2.7 and Theano works smoothly. Also, you might want to use AnacondaCE with Python 2.7 installer which pretty much gives you everything you need to start developing.</p>\n\n<p>You would also need to reinstall Theano using</p>\n\n<pre><code>pip install Theano\n</code></pre>\n" } ]
27,632,440
3
<numpy><neural-network><protocol-buffers><deep-learning><caffe>
2014-12-24T06:20:31.667
27,645,934
1,714,410
InfogainLoss layer
<p>I wish to use a loss layer of type <a href="http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1InfogainLossLayer.html#details" rel="noreferrer"><code>InfogainLoss</code></a> in my model. But I am having difficulties defining it properly.</p> <ol> <li><p>Is there any tutorial/example on the usage of the <code>InfogainLoss</code> layer?</p></li> <li><p>Should the input to this layer, the class probabilities, be the output of a <code>Softmax</code> layer, or is it enough to input the "top" of a fully connected layer?</p></li> </ol> <p><code>InfogainLoss</code> requires three inputs: class probabilities, labels and the matrix <code>H</code>. The matrix <code>H</code> can be provided either as a layer parameter <code>infogain_loss_param { source: fname }</code>.<br> Suppose I have a python script that computes <code>H</code> as a <code>numpy.array</code> of shape <code>(L,L)</code> with <code>dtype='f4'</code> (where <code>L</code> is the number of labels in my model).</p> <ol start="3"> <li><p>How can I convert my <code>numpy.array</code> into a <code>binproto</code> file that can be provided as an <code>infogain_loss_param</code> to the model?</p></li> <li><p>Suppose I want <code>H</code> to be provided as the third input (bottom) to the loss layer (rather than as a model parameter). How can I do this?<br> Do I define a new data layer whose "top" is <code>H</code>? If so, wouldn't the data of this layer be incremented every training iteration like the training data is incremented? How can I define multiple unrelated input "data" layers, and how does caffe know to read from the training/testing "data" layer batch after batch, while from the <code>H</code> "data" layer it knows to read only once for all the training process?</p></li> </ol>
[ { "AnswerId": "27645934", "CreationDate": "2014-12-25T09:27:10.477", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p><em>1. Is there any tutorial/example on the usage of <strong>InfogainLoss</strong> layer?</em>:<br>\nA nice example can be found <a href=\"https://stackoverflow.com/questions/30486033/tackling-class-imbalance-scaling-contribution-to-loss-and-sgd?lq=1\">here</a>: using <strong>InfogainLoss</strong> to tackle class imbalance.</p>\n\n<hr>\n\n<p><em>2. Should the input to this layer, the class probabilities, be the output of a <strong>Softmax</strong> layer?</em><br>\nHistorically, the answer used to be <em>YES</em> according to <a href=\"https://stackoverflow.com/a/30037614/1714410\">Yair's answer</a>. The old implementation of <code>\"InfogainLoss\"</code> needed to be the output of <code>\"Softmax\"</code> layer or any other layer that makes sure the input values are in range [0..1]. </p>\n\n<p>The <a href=\"https://stackoverflow.com/users/1714410/shai\">OP</a> noticed that using <code>\"InfogainLoss\"</code> on top of <code>\"Softmax\"</code> layer can lead to numerical instability. <a href=\"https://github.com/BVLC/caffe/pull/3855\" rel=\"nofollow noreferrer\">His pull request</a>, combining these two layers into a single one (much like <code>\"SoftmaxWithLoss\"</code> layer), was accepted and merged into the official Caffe repositories on 14/04/2017. The mathematics of this combined layer are given <a href=\"https://stackoverflow.com/a/34917052/1714410\">here</a>.</p>\n\n<p>The upgraded layer \"look and feel\" is exactly like the old one, apart from the fact that <strong>one no longer needs to explicitly pass the input through a <code>\"Softmax\"</code> layer.</strong></p>\n\n<hr>\n\n<p><em>3. How can I convert a numpy.array into a binproto file</em>: </p>\n\n<p>In python</p>\n\n<pre><code>H = np.eye( L, dtype = 'f4' )\nimport caffe\nblob = caffe.io.array_to_blobproto( H.reshape( (1,1,L,L) ) )\nwith open( 'infogainH.binaryproto', 'wb' ) as f :\n    f.write( blob.SerializeToString() )\n</code></pre>\n\n<p>Now you can add to the model prototxt the <code>INFOGAIN_LOSS</code> layer with <code>H</code> as a parameter:</p>\n\n<pre><code>layer {\n  bottom: \"topOfPrevLayer\"\n  bottom: \"label\"\n  top: \"infoGainLoss\"\n  name: \"infoGainLoss\"\n  type: \"InfogainLoss\"\n  infogain_loss_param {\n    source: \"infogainH.binaryproto\"\n  }\n}\n</code></pre>\n\n<hr>\n\n<p><em>4. How to load <code>H</code> as part of a DATA layer</em> </p>\n\n<p>Quoting <a href=\"https://github.com/BVLC/caffe/issues/1640\" rel=\"nofollow noreferrer\">Evan Shelhamer's post</a>:</p>\n\n<blockquote>\n <p>There's no way at present to make data layers load input at different rates. Every forward pass all data layers will advance. However, the constant H input could be done by making an input lmdb / leveldb / hdf5 file that is only H since the data layer will loop and keep loading the same H. This obviously wastes disk IO. </p>\n</blockquote>\n" }, { "AnswerId": "30037614", "CreationDate": "2015-05-04T18:44:06.337", "ParentId": null, "OwnerUserId": "101433", "Title": null, "Body": "<p>The layer is summing up </p>\n\n<pre><code>-log(p_i)\n</code></pre>\n\n<p>and so the p_i's need to be in (0, 1] to make sense as a loss function (otherwise higher confidence scores will produce a higher loss). 
See the curve below for the values of log(p).</p>\n\n<p><img src=\"https://i.stack.imgur.com/qxmZO.png\" alt=\"enter image description here\"></p>\n\n<p>I don't think they have to sum up to 1, but passing them through a Softmax layer will achieve both properties.</p>\n" }, { "AnswerId": "51641525", "CreationDate": "2018-08-01T20:07:50.010", "ParentId": null, "OwnerUserId": "844293", "Title": null, "Body": "<p>Since I had to search through many websites to piece together the complete\ncode, I thought I'd share my implementation:</p>\n\n<p>Python layer for computing the H-matrix with weights for each class:</p>\n\n<pre><code>import numpy as np\nimport caffe\n\n\nclass ComputeH(caffe.Layer):\n    def __init__(self, p_object, *args, **kwargs):\n        super(ComputeH, self).__init__(p_object, *args, **kwargs)\n        self.n_classes = -1\n\n    def setup(self, bottom, top):\n        if len(bottom) != 1:\n            raise Exception(\"Need (only) one input to compute H matrix.\")\n\n        params = eval(self.param_str)\n        if 'n_classes' in params:\n            self.n_classes = int(params['n_classes'])\n        else:\n            raise Exception('The number of classes (n_classes) must be specified.')\n\n    def reshape(self, bottom, top):\n        top[0].reshape(1, 1, self.n_classes, self.n_classes)\n\n    def forward(self, bottom, top):\n        classes, cls_num = np.unique(bottom[0].data, return_counts=True)\n\n        if np.size(classes) != self.n_classes or self.n_classes == -1:\n            raise Exception(\"Invalid number of classes\")\n\n        cls_num = cls_num.astype(float)\n\n        cls_num = cls_num.max() / cls_num\n        weights = cls_num / np.sum(cls_num)\n\n        top[0].data[...] = np.diag(weights)\n\n    def backward(self, top, propagate_down, bottom):\n        pass\n</code></pre>\n\n<p>and the relevant part from the train_val.prototxt:</p>\n\n<pre><code>layer {\n  name: \"computeH\"\n  bottom: \"label\"\n  top: \"H\"\n  type: \"Python\"\n  python_param {\n    module: \"digits_python_layers\"\n    layer: \"ComputeH\"\n    param_str: '{\"n_classes\": 7}'\n  }\n  exclude { stage: \"deploy\" }\n}\nlayer {\n  name: \"loss\"\n  type: \"InfogainLoss\"\n  bottom: \"score\"\n  bottom: \"label\"\n  bottom: \"H\"\n  top: \"loss\"\n  infogain_loss_param {\n    axis: 1 # compute loss and probability along axis\n  }\n  loss_param {\n    normalization: 0\n  }\n  exclude {\n    stage: \"deploy\"\n  }\n}\n</code></pre>\n" } ]
27,661,975
1
<python><numpy><theano>
2014-12-26T21:05:39.587
27,728,366
4,396,662
ValueError : Length Not Known while mapping a list in Theano
<p>I am a beginner in Theano, but despite a lot of research I still don't understand why the following code </p> <pre></pre> <p>returns this error message</p> <pre></pre> <p>When I type T.sqrt(A) in the console it returns "Elemwise{sqrt,no_inplace}.0", and I can't get the value of T.sqrt(A). I wonder if I can use matplotlib with theano tensors and, if not, how I can get a numpy array mapped by a theano function back into ndarray form.</p>
[ { "AnswerId": "27728366", "CreationDate": "2015-01-01T04:55:36.990", "ParentId": null, "OwnerUserId": "2580227", "Title": null, "Body": "<p>The reason you get \"Elemwise{sqrt,no_inplace}.0\" is because Theano is symbolic, so tensors do not have a value until evaluated. To fix this:</p>\n\n<pre><code>from matplotlib.pyplot import plot\nfrom numpy import linspace\nimport theano.tensor as T\nA = linspace(0,1,100)\nplot(A, T.sqrt(A).eval())\n</code></pre>\n\n<p>to evaluate the tensor and get its numeric value.</p>\n\n<p>Also, you should import theano.tensor, not theano.tensors.</p>\n" } ]
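If the expression is to be evaluated repeatedly, compiling it once into a function is the usual pattern (a sketch):

<pre><code>import numpy as np
import theano
import theano.tensor as T

v = T.dvector('v')
sqrt_fn = theano.function([v], T.sqrt(v))  # compile once, reuse many times
A = np.linspace(0, 1, 100)
print sqrt_fn(A)                           # a plain numpy ndarray comes back
</code></pre>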
27,690,125
2
<c++><caffe>
2014-12-29T13:48:02.070
27,690,893
4,402,334
caffe.cpp RegisterBrewFunction
<p>I was reading the caffe source code. In caffe.cpp, which is the source of tools/caffe, I encountered the following code, which puzzles me:</p> <pre></pre> <p>Based on my knowledge, this macro replaces RegisterBrewFunction(func) with an anonymous class, and the only thing it does is add &lt;#func, &amp;func> to g_brew_map. So why not just do it like this?</p> <pre></pre> <p>I hope someone can help me with this.</p>
[ { "AnswerId": "27690893", "CreationDate": "2014-12-29T14:40:09.110", "ParentId": null, "OwnerUserId": "266487", "Title": null, "Body": "<p>Your <code>#define</code> defines a macro that can be used inside a function. When the code reaches this function then the macro argument will be registered. You have to call the function for brewing the coffee.</p>\n\n<p>The original <code>#define</code> defines a macro that shall be used at file scope. It creates a more or less anonymous object. The constructor of that object's class will run before <code>main</code> is called and registers the function. There is no other function necessary.</p>\n" }, { "AnswerId": "27693465", "CreationDate": "2014-12-29T17:37:47.777", "ParentId": null, "OwnerUserId": "560648", "Title": null, "Body": "<p>Because an assignment may not live at global scope, only inside a function.</p>\n\n<p>This macro can be used in many more places, and indeed will normally live at the top of some source file so that you don't have to call a function to make it work.</p>\n" } ]
27,698,386
1
<python><testing><theano>
2014-12-30T01:17:04.543
27,822,231
2,644,074
Why do the Theano tests fail with many "KnownFailureTest"s?
<p>Theano is failing its tests when I do:</p> <pre></pre> <p>If these are known failures, shouldn't it still pass? I.e., when I test other libraries, KnownFailures sometimes trigger, but the overall test still passes with "OK" (but will still note the KnownFails and Skipped tests).</p> <p>My guess is this is ok, and the test really is "passing", but since I'm doing a fresh install following the deeplearning.net tutorials, and I'm getting this error, I assume others might have this question as well, and a search on Google and SO isn't really helpful.</p> <p>Forgive the error-code-dump, I am sure no one will need to read all through this, but it's here for reference if someone else has this question. Here are the errors at the end of the tests:</p> <pre></pre> <p>Thanks!</p>
[ { "AnswerId": "27822231", "CreationDate": "2015-01-07T15:01:32.267", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>KnownFailureTest is a valid return value for nosetests. When Theano started, we were creating tests for features still to be implemented and raised KnownFailureTest in them until we implemented them. We do not do that anymore, as we ended up with too many questions from people about this, so it caused too much distraction. But we didn't change the old tests that did that.</p>\n\n<p>I just created an issue to change that: <a href=\"https://github.com/Theano/Theano/issues/2375\" rel=\"nofollow\">https://github.com/Theano/Theano/issues/2375</a></p>\n\n<p>I do not know when it will be changed.</p>\n" } ]
27,720,437
1
<c++><opencv><caffe>
2014-12-31T11:24:36.997
null
648,896
How can I read opencv images into caffe format for real-time prediction?
<p>I am trying to create a real-time prediction program from video frames. All the code is built on opencv. However, I trained a caffe model and I am trying to use it through this <a href="https://github.com/niuzhiheng/caffe" rel="nofollow">windows port</a> of caffe. It does not support in-memory conversion of opencv images to caffe's input format. How can I do the conversion externally? In the original caffe there is a solution that was <a href="https://github.com/BVLC/caffe/tree/80c9a2c17f5469371b434fde4072bc13b49ee1eb" rel="nofollow">recently merged</a>, but I cannot apply it to this windows version.</p>
[ { "AnswerId": "30557410", "CreationDate": "2015-05-31T12:27:11.957", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>You should be able to define the input to your network using a <strong><a href=\"https://github.com/niuzhiheng/caffe/blob/windows/src/caffe/layers/memory_data_layer.cpp\" rel=\"nofollow\">MEMORY_DATA</a></strong> layer, then, using the <code>Reset</code> method you should be able to set the input according to the data stored in your opencv representation.</p>\n" } ]
27,732,543
1
<python><numpy><random><cuda><theano>
2015-01-01T16:32:07.490
null
1,245,262
Why does creation of a Theano shared variable on GPU affect numpy's random streams?
<p>I'm just starting to play with Theano, and am wondering why the first creation of a shared variable on the gpu seems to affect numpy's random number generator. At times this initial creation seems to advance the random number generator.</p> <p>I've explored the following test cases in this code:</p> <pre></pre> <ol> <li>rand_test0 - A sanity check to show I can reset a random stream using numpy.random.seed</li> <li>rand_test1 - Show creation of shared variable on cpu does nothing unexpected</li> <li>rand_test2 - Show creation of shared variable on gpu does have an unexpected effect</li> <li>rand_test3 - Show that it is only the initial creation of the shared variable on the gpu that has an unexpected effect</li> <li>rand_test4 - A verification of rand_test3</li> </ol> <p>The results I got were as follows:</p> <pre></pre> <p>Does this make sense to anyone? Is it a Theano artifact? Is it a CUDA artifact, caused by my initial access to the GPU (i.e. the fact that I was playing with shared variables is only incidental to what I'm seeing)? Or, am I misunderstanding something else?</p>
[ { "AnswerId": "32687028", "CreationDate": "2015-09-21T03:43:40.663", "ParentId": null, "OwnerUserId": "3299394", "Title": null, "Body": "<p>Theano's <a href=\"http://deeplearning.net/software/theano/sandbox/randomnumbers.html\" rel=\"nofollow\">documentation</a> talks about the difficulties of seeding random variables and why they seed each graph instance with its own random number generator. </p>\n\n<blockquote>\n <p>Sharing a random number generator between different {{{RandomOp}}}\n instances makes it difficult to producing the same stream regardless\n of other ops in graph, and to keep {{{RandomOps}}} isolated.\n Therefore, each {{{RandomOp}}} instance in a graph will have its very\n own random number generator. That random number generator is an input\n to the function. In typical usage, we will use the new features of\n function inputs ({{{value}}}, {{{update}}}) to pass and update the rng\n for each {{{RandomOp}}}. By passing RNGs as inputs, it is possible to\n use the normal methods of accessing function inputs to access each\n {{{RandomOp}}}’s rng. In this approach it there is no pre-existing\n mechanism to work with the combined random number state of an entire\n graph. So the proposal is to provide the missing functionality (the\n last three requirements) via auxiliary functions: {{{seed, getstate,\n setstate}}}.</p>\n</blockquote>\n\n<p>They also provide <a href=\"http://deeplearning.net/software/theano/tutorial/examples.html\" rel=\"nofollow\">examples</a> on how to seed all the random number generators. </p>\n\n<blockquote>\n <p>You can also seed all of the random variables allocated by a\n RandomStreams object by that object’s seed method. This seed will be\n used to seed a temporary random number generator, that will in turn\n generate seeds for each of the random variables.</p>\n</blockquote>\n\n<pre><code>&gt;&gt;&gt; srng.seed(902340) # seeds rv_u and rv_n with different seeds each\n</code></pre>\n\n<p>Try seeding the random variable's using Theano's seed functionality instead of numpy's. </p>\n" } ]
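Following the quoted docs, a minimal sketch of Theano-side seeding (so the per-variable generators are controlled explicitly rather than through numpy's global state):

<pre><code>from theano.tensor.shared_randomstreams import RandomStreams

srng = RandomStreams(seed=234)   # seed the stream object itself
rv_u = srng.uniform((2, 2))      # a random variable drawn from srng
srng.seed(902340)                # reseeds every variable drawn from srng
</code></pre>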
27,756,908
1
<python><theano>
2015-01-03T16:27:08.187
null
2,278,493
Random numbers error in scan, theano.gof.fg.MissingInputError
<p>I have a small problem using random numbers together with scan.</p> <p>Please see this small example in which I tried to isolate my problem.</p> <pre></pre> <p>What this code should do is the following:</p> <ul> <li>begin with an array, in this case [1,2,3,4,5]</li> <li>generate random numbers sampled from the normal distribution, with the average being the average of the previous output (or of the initial observations). In this case the average for the first step would be 3.</li> <li>Let's say the sampled numbers are [2,3,3.5,4,5]; the new average is now 3.5</li> <li>Repeat the above for 10 timesteps</li> </ul> <p>Instead I get the following error output:</p> <pre></pre> <p>I am probably missing something simple and obvious again here.</p> <p>Help is much appreciated, thanks!</p>
[ { "AnswerId": "28864024", "CreationDate": "2015-03-04T20:15:16.470", "ParentId": null, "OwnerUserId": "554606", "Title": null, "Body": "<p>You need to pass the updates dictionary that you get form scan to your function, so:</p>\n\n<pre><code>f=th.function([initials], result, updates=updates)\n</code></pre>\n\n<p>Also, you can't have shared variables as input to your function.</p>\n\n<p>You can achieve what you are trying to do for example like this:</p>\n\n<pre><code>import theano as th\nimport numpy as np\nfrom theano import tensor as T\n\nstream=th.tensor.shared_randomstreams.RandomStreams()\n\navg=T.vector()\ninitials=T.fvector()\n\ndef get_output(prev_rand):\n return stream.normal(size=prev_rand.shape, avg=prev_rand.mean())\n\nresult, updates = th.scan(get_output, outputs_info=[initials], n_steps=10)\n\nf = th.function([initials], result, updates=updates)\n\ninitial_values = np.array([1,2,3,4,5], dtype=th.config.floatX)\n\nprint f(initial_values)\n</code></pre>\n" } ]
27,770,975
1
<python><linear-algebra><logistic-regression><theano>
2015-01-04T22:16:31.123
null
3,998,526
Theano inner product 3d matrix
<p>Thanks for reading this.</p> <p>I'm trying to implement a multi-label logistic regression using theano:</p> <pre></pre> <p>but the -T.dot(x, w) product fails with this error:</p> <pre><code>TypeError: ('Bad input argument to theano function with name "train" at index 0(0-based)', 'Wrong number of dimensions: expected 2, got 3 with shape (5, 10, 2).')</code></pre> <p>x has shape (5, 2, 10) and W has shape (1, 2, 10). I would expect the dot product to have shape (5,2).</p> <p>My questions are: Is there any way to do this inner product? Do you think there is a better way to achieve a multi-label logistic regression?</p> <p>Thanks!</p> <p>---- EDIT -----</p> <p>So here is an implementation of what I would like to do using numpy.</p> <pre></pre> <p>output:</p> <pre></pre> <p>But I don't know how to do this symbolically using Theano.</p>
[ { "AnswerId": "27772429", "CreationDate": "2015-01-05T01:52:51.083", "ParentId": null, "OwnerUserId": "3998526", "Title": null, "Body": "<p>After some hours of fighting this seems to produce the right results:</p>\n\n<p>I had an error which was having the input as rng.randn(examples,features,labels) instead of rng.randn(examples,features). This means, that besides having more labels, the inputs should be the same size.</p>\n\n<p>And the way of computing the dot product the right way was using theano.scan method like:\nresults, updates = theano.scan(lambda label: T.dot(x, w[label,:]) - b[label], sequences=T.arange(labels))</p>\n\n<p>thanks everybody for their help!</p>\n\n<pre><code>import numpy as np\nimport theano\nimport theano.tensor as T\nrng = np.random\n\nexamples = 5\nfeatures = 10\nlabels = 2\nD = (rng.randn(examples,features), rng.randint(size=(labels, examples), low=0, high=2))\ntraining_steps = 10000\n\n# Declare Theano symbolic variables\nx = T.matrix(\"x\")\ny = T.matrix(\"y\")\nw = theano.shared(rng.randn(labels ,features), name=\"w\")\nb = theano.shared(np.zeros(labels), name=\"b\")\nprint \"Initial model:\"\nprint w.get_value(), b.get_value()\n\nresults, updates = theano.scan(lambda label: T.dot(x, w[label,:]) - b[label], sequences=T.arange(labels))\n\n# Construct Theano expression graph\np_1 = 1 / (1 + T.exp(- results)) # Probability that target = 1\nprediction = p_1 &gt; .5 # The prediction thresholded\nxent = -y * T.log(p_1) - (1-y) * T.log(1-p_1) # Cross-entropy loss function\ncost = xent.mean() + 0.01 * (w ** 2).sum()# The cost to minimize\ngw, gb = T.grad(cost, [w, b]) # Compute the gradient of the cost\n # (we shall return to this in a\n # following section of this tutorial)\n\n# Compile\ntrain = theano.function(\n inputs=[x,y],\n outputs=[prediction, xent],\n updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)),\n name='train')\npredict = theano.function(inputs=[x], outputs=prediction , name='predict')\n\n# Train\nfor i in range(training_steps):\n pred, err = train(D[0], D[1])\n\nprint \"Final model:\"\nprint w.get_value(), b.get_value()\nprint \"target values for D:\", D[1]\nprint \"prediction on D:\", predict(D[0])\n</code></pre>\n" } ]
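As an aside, the scan in this answer can also be collapsed into a single matrix product, which avoids the loop entirely; a sketch with the shapes from the question:

<pre><code>import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')   # (examples, features)
w = T.matrix('w')   # (labels, features)
b = T.vector('b')   # (labels,)

# results[label, example] = dot(x[example], w[label]) - b[label]
results = T.dot(w, x.T) - b.dimshuffle(0, 'x')

f = theano.function([x, w, b], results)
xv = np.random.randn(5, 10).astype(theano.config.floatX)
wv = np.random.randn(2, 10).astype(theano.config.floatX)
bv = np.zeros(2, dtype=theano.config.floatX)
print f(xv, wv, bv).shape   # (2, 5)
</code></pre>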