Dataset columns (one row per Stack Overflow question):
  QuestionId         int64     values 388k .. 59.1M
  AnswerCount        int64     values 0 .. 47
  Tags               string    lengths 7 .. 102
  CreationDate       string    lengths 23 .. 23
  AcceptedAnswerId   float64   values 388k .. 59.1M
  OwnerUserId        float64   values 184 .. 12.5M
  Title              string    lengths 15 .. 150
  Body               string    lengths 12 .. 29.3k
  answers            list      lengths 0 .. 47
388,172
6
<python><lua><scipy><scientific-computing><torch>
2008-12-23T04:25:07.537
388,184
41,718
Scientific libraries for Lua?
<p>Are there any scientific packages for Lua comparable to Scipy?</p>
[ { "AnswerId": "9254474", "CreationDate": "2012-02-13T01:20:28.603", "ParentId": null, "OwnerUserId": "221509", "Title": null, "Body": "<p>I'm not sure if it is comparable to Scipy, but there is <a href=\"http://www.nongnu.org/gsl-shell/\" rel=\"nofollow\">GSL Shell</a> which is based on LuaJIT and GNU Scientific Library, which offers many numerical algorithms and vector/matrix linear algebra operations.</p>\n" }, { "AnswerId": "9253469", "CreationDate": "2012-02-12T22:52:02.930", "ParentId": null, "OwnerUserId": "1205758", "Title": null, "Body": "<p>There's a Numpy-like extension for Lua which runs without dependencies at</p>\n\n<p><a href=\"https://github.com/jzrake/lunum\" rel=\"nofollow\">https://github.com/jzrake/lunum</a></p>\n\n<p>In the future it will provide FFT's and linear algebra like Numpy+Scipy. Presently it supports numeric array manipulation like in Numpy.</p>\n" }, { "AnswerId": "18776865", "CreationDate": "2013-09-13T01:33:04.577", "ParentId": null, "OwnerUserId": "18403", "Title": null, "Body": "<p>You have some options:</p>\n\n<ul>\n<li><a href=\"http://numlua.luaforge.net/\" rel=\"noreferrer\">Numeric Lua</a> - C module for Lua 5.1/5.2, provides matrices, FFT, complex numbers and others</li>\n<li><a href=\"http://www.nongnu.org/gsl-shell/\" rel=\"noreferrer\">GSL Shell</a> - Modification of Lua (supports Lua libraries) with a nice syntax. Provides almost everything that Numeric Lua does, plus ODE solvers, plotting capabilities, and other nice things. Has a great documentation.</li>\n<li><a href=\"http://www.scilua.org/\" rel=\"noreferrer\">SciLua</a> - Pure LuaJIT module. Aims to be a complete framework for scientific computing in Lua. Provides vectors and matrices, random numbers / distributions, optimization, others. Still in early development.</li>\n<li><a href=\"https://bitbucket.org/lucashnegri/lna\" rel=\"noreferrer\">Lua Numerical Algorithms</a> - Pure LuaJIT module (uses blas/lapack via LuaJIT FFI). Provides matrices / linear algebra, FFT, complex numbers, optimization algorithms, ODE solver, basic statistics (+ PCA, LDA), and others. Still in early development, but has a somewhat complete documentation and test suits.</li>\n</ul>\n" }, { "AnswerId": "10631596", "CreationDate": "2012-05-17T07:30:57.633", "ParentId": null, "OwnerUserId": "1400386", "Title": null, "Body": "<p>You should try <strong><a href=\"http://www.torch.ch\" rel=\"noreferrer\">Torch7</a></strong> (<a href=\"https://github.com/andresy/torch\" rel=\"noreferrer\">github</a>).</p>\n\n<p>Torch7 has a very nice and efficient vector/matrix/tensor numerical library\nwith a Lua front-end. It also has a bunch of functions for computer vision\nand machine learning. </p>\n\n<p>It's pretty recent but getting better quickly.</p>\n" }, { "AnswerId": "388184", "CreationDate": "2008-12-23T04:36:07.220", "ParentId": null, "OwnerUserId": "33252", "Title": null, "Body": "<p>There is the basis for one in <a href=\"http://numlua.luaforge.net/\" rel=\"nofollow noreferrer\">Numeric Lua</a>.</p>\n" }, { "AnswerId": "388587", "CreationDate": "2008-12-23T10:33:39.823", "ParentId": null, "OwnerUserId": "17160", "Title": null, "Body": "<p>One can always use <a href=\"http://labix.org/lunatic-python\" rel=\"noreferrer\">Lunatic Python</a> and access scipy inside lua.</p>\n\n<pre><code>&gt; require(\"python\")\n&gt; numpy = python.import(\"numpy\")\n&gt; numpy.array ... etc ..\n</code></pre>\n" } ]
11,987,325
4
<numpy><scipy><gfortran><theano>
2012-08-16T12:39:30.310
null
380,038
Theano fails due to NumPy Fortran mixup under Ubuntu
<p>I installed <a href="http://deeplearning.net/software/theano/" rel="noreferrer">Theano</a> on my machine, but the nosetests break with a NumPy/Fortran-related error message. To me it looks like NumPy was compiled with a different Fortran version than Theano. I already reinstalled Theano ( + ) and Numpy / Scipy (), but this did not help.</p> <p>What steps would you recommend?</p> <h3>Complete error message:</h3> <pre></pre> <h3>My research:</h3> <p>The <a href="http://www.scipy.org/Installing_SciPy/BuildingGeneral" rel="noreferrer">Installing SciPy / BuildingGeneral</a> page says about the error:</p> <p>If you see an error message</p> <p></p> <p>when building SciPy, it means that NumPy picked up the wrong Fortran compiler during build (e.g. ifort). </p> <p>Recompile NumPy using:</p> <p></p> <p>or whichever is appropriate (see ).</p> <p>But:</p> <pre></pre> <h3>Used software versions:</h3> <ul> <li>scipy 0.10.1 (scipy.test() works)</li> <li>NumPy 1.6.2 (numpy.test() works)</li> <li>theano 0.5.0 (several tests fail with )</li> <li>python 2.6.6</li> <li>Ubuntu 10.10</li> </ul> <h2>[UPDATE]</h2> <p>So I removed numpy and scipy from my system with and using of what was left.</p> <p>Then I installed numpy and scipy from the github sources with .</p> <p>Afterwards I entered again and . </p> <p>Error persists :/</p> <p>I also tried the ... + approach, but for my old Ubuntu (10.10) it installs versions of numpy and scipy that are too old for theano: </p>
[ { "AnswerId": "18241213", "CreationDate": "2013-08-14T20:03:51.183", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>A better fix is to remove atlas and install openblas. openblas is faster then atlas. Also, openblas don't request gfortran and is the one numpy was linked with. So it will work out of the box.</p>\n" }, { "AnswerId": "18238732", "CreationDate": "2013-08-14T17:48:02.127", "ParentId": null, "OwnerUserId": "430281", "Title": null, "Body": "<p>I had the same problem, and after reviewing the source code, user212658's answer seemed like it would work (I have not tried it). I then looked for a way to deploy user212658's hack without modifying the source code.</p>\n\n<p>Put these lines in your <a href=\"http://deeplearning.net/software/theano/library/config.html#envvar-THEANORC\">theanorc</a> file:</p>\n\n<pre><code>[blas]\nldflags = -lblas -lgfortran\n</code></pre>\n\n<p>This worked for me.</p>\n" }, { "AnswerId": "12142579", "CreationDate": "2012-08-27T13:06:16.947", "ParentId": null, "OwnerUserId": "1491200", "Title": null, "Body": "<p>Have you tried to recompile NumPy from the sources? </p>\n\n<p>I'm not familiar with the Ubuntu package system, so I can't check what's in your <code>dist-packages/numpy</code>. With a clean archive of the <code>NumPy</code> sources, you should have a <code>setup.py</code> at the same level as the directories <code>numpy</code>, <code>tools</code> and <code>benchmarks</code> (among others). I'm pretty sure that's the one you want to use for a <code>python setup.py build</code>.</p>\n\n<p><strong>[EDIT]</strong></p>\n\n<p>Now that you have recompiled <code>numpy</code> with the proper <code>--fcompiler</code> option, perhaps could you try to do the same with <code>Theano</code>, that is, compiling directly from sources <em>without</em> relying on a <code>apt-get</code> or even <code>pip</code>. You should have a better control on the build process that way, which will make debugging/trying to find a solution easier. </p>\n" }, { "AnswerId": "13579399", "CreationDate": "2012-11-27T07:34:10.480", "ParentId": null, "OwnerUserId": "212658", "Title": null, "Body": "<p>I had the same problem. The solution I found is to add a hack in theano/gof/cmodule.py to link against gfortran whenever 'blas' is in the libs. That fixed it.</p>\n\n<pre><code>class GCC_compiler(object):\n ...\n @staticmethod\n def compile_str(module_name, src_code, location=None,\n include_dirs=None, lib_dirs=None, libs=None,\n preargs=None):\n ...\n cmd.extend(['-l%s' % l for l in libs])\n if 'blas' in libs:\n cmd.append('-lgfortran')\n</code></pre>\n" } ]
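<p>Before recompiling anything for an error like this, it can help to check which BLAS/LAPACK and Fortran setup NumPy was actually built with. A minimal diagnostic sketch (not taken from the post; the <code>--fcompiler</code> flag comes from the numpy.distutils build options referenced by the SciPy page above):</p>
<pre><code>import numpy

# Prints the BLAS/LAPACK configuration recorded when NumPy was built.
numpy.__config__.show()

# The SciPy build page quoted in the question suggests rebuilding NumPy with an
# explicit Fortran compiler; from a NumPy source checkout that is typically:
#   python setup.py build --fcompiler=gnu95
# (adjust to whichever compiler is appropriate for your system)
</code></pre>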
13,757,826
2
<numpy><pypy><theano>
2012-12-07T06:07:58.497
13,762,698
901,263
NumPyPy vs Theano?
<p>I am wondering: do these two projects basically have the same goal -- to speed up numerical work in Python?</p> <p>What are the similarities and differences?</p> <p>I know that Theano does not aim to re-implement all of NumPy like NumPyPy does, but from what I've read, Theano can already lead to some really impressive speedup results. So why do we need NumPyPy if we can just write code for Theano that runs fast?</p>
[ { "AnswerId": "20026491", "CreationDate": "2013-11-17T02:34:11.297", "ParentId": null, "OwnerUserId": "179081", "Title": null, "Body": "<p>Theano, seeks to improve NumPy,\nNumPy is a prerequisite for Theano.</p>\n\n<p>A large feature of Theano, is it's transparent use of CUDA GPU's, when possible.</p>\n" }, { "AnswerId": "13762698", "CreationDate": "2012-12-07T12:01:37.330", "ParentId": null, "OwnerUserId": "1098041", "Title": null, "Body": "<p>Well for one thing : millions of lines of code use numpy, so porting Numpy to pypy would be a great step forward for the porting of many other (scientific and other) librairies to Pypy.</p>\n\n<p>Re-implementing all Numpy in pypy may sound like a chore, and it is, but the alternative in just insane : re-implementing hundreds or librairies to use XXX instead.</p>\n\n<p>And by the way I don't know theano really well, but I know it isn't a substitute for Numpy.\nthey are different projects, with different features.</p>\n" } ]
14,178,339
1
<python><machine-learning><theano>
2013-01-06T01:24:08.887
null
199,785
theano define function which repeatedly calls another function?
<p>My training function:</p> <pre class="lang-python prettyprint-override"></pre> <p>Then from elsewhere:</p> <pre class="lang-python prettyprint-override"></pre> <p>So what I'd like this to look like is</p> <pre class="lang-python prettyprint-override"></pre> <p>I'm really not even sure how to start on this, as theano is still pretty mind-bending for me. I was able to get this far but loops are seriously challenging.</p> <p>I have the vague notion that if I can turn the theano.function into a theano.scan, and then put an outer theano.function around it - that might work. However, theano.scan is still magical to me (despite my best efforts). </p> <p>How can I make it so that the looping over minibatches is incorporated into a single function call? </p> <p>Update:</p> <p>I thought I had it! I got this:</p> <pre></pre> <p>But unfortunately it seems like since I use index to calculate the batches in the givens, I can't also update on it:</p> <pre></pre> <hr> <p>Update 2:</p> <pre></pre> <p>This actually runs, but it's output is weird:</p> <pre></pre> <p>Everytime I run it I get the same output, even though X &amp; y are initialized to random values each run.</p>
[ { "AnswerId": "16868431", "CreationDate": "2013-06-01T02:15:08.277", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Everybody I know do the loop over minibatch in python. This can be done with scan, but all your tries here did not used scan. So it is normal that they didn't worked. You need to call the scan function somewhere to use it (or its higher level interface like map). In fact, in your case, I think you can use <code>theano.scan(fn, theano.tensor.arange(N))</code>.</p>\n\n<p>I can't answer all your questions in this post as the snippet of code is incomplete, but here is some information:</p>\n\n<pre><code>return theano.function(\n inputs=[], outputs=[self.cost] * n_batches,\n</code></pre>\n\n<p>Here: <code>[self.cost] * n_batches</code> is pure python code. This create a list of <code>n_batches</code> element, where each elements are <code>self.cos</code>t. So if <code>n_batches</code> is 3, you will have <code>outputs=[self.cost, self.cost, self.cost]</code>. That is why you got the same value outputed multiple time.</p>\n\n<p>I can't tell you why you always add the same answer as I need information that wasn't provided.</p>\n" } ]
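<p>To make the advice above concrete, here is a minimal, hypothetical sketch (names, shapes and the cost function are invented, not the poster's code) of the usual pattern: compile one Theano function that trains on a single minibatch selected by an index, then do the loop over minibatches in plain Python.</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

batch_size, n_feats, n_batches = 10, 5, 3
X = theano.shared(np.random.rand(batch_size * n_batches, n_feats), name='X')
w = theano.shared(np.zeros(n_feats), name='w')

index = T.lscalar('index')                      # which minibatch to train on
x_batch = X[index * batch_size:(index + 1) * batch_size]
cost = ((T.dot(x_batch, w) - 1) ** 2).mean()
updates = [(w, w - 0.1 * T.grad(cost, w))]

train_batch = theano.function([index], cost, updates=updates)

for epoch in range(5):
    batch_costs = [train_batch(i) for i in range(n_batches)]  # the Python-level loop
    print(epoch, np.mean(batch_costs))
</code></pre>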
14,311,688
1
<python><warnings><theano>
2013-01-14T02:40:27.977
null
1,030,101
how to set the Theano flag warn.sum_div_dimshuffle_bug to False
<p>I am using the theano package to find the derivative of a sigmoid function, using the cross entropy as the cost. This is my code:</p> <pre></pre> <p>When I run my code, I get the following error:</p> <pre></pre> <p>But I don't know how to do that. I tried this:</p> <pre></pre> <p>but it gives me an error on warn, saying it is not recognized as a variable.</p>
[ { "AnswerId": "14631722", "CreationDate": "2013-01-31T17:33:43.570", "ParentId": null, "OwnerUserId": "359944", "Title": null, "Body": "<p>Try </p>\n\n<pre><code>from theano import config\nconfig.warn.sum_div_dimshuffle_bug = False\n</code></pre>\n\n<p>This worked for me</p>\n" } ]
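<p>As an alternative to setting the attribute on <code>theano.config</code> (shown in the answer), the same option can be passed through the <code>THEANO_FLAGS</code> environment variable, as long as it is set before Theano is imported. The flag name below is taken from the question; a minimal sketch:</p>
<pre><code>import os
# Must be set before the first `import theano`.
os.environ['THEANO_FLAGS'] = 'warn.sum_div_dimshuffle_bug=False'

import theano
print(theano.config.warn.sum_div_dimshuffle_bug)  # prints False
</code></pre>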
15,542,043
2
<python><g++><theano>
2013-03-21T07:51:40.970
15,542,158
2,193,800
Theano install warning: g++ not detected
<p>After I installed Theano I tried to run it but got the following error message:</p> <pre></pre> <p>Why?</p>
[ { "AnswerId": "37846374", "CreationDate": "2016-06-15T21:48:49.483", "ParentId": null, "OwnerUserId": "2534758", "Title": null, "Body": "<p>You can try installing mingw (<a href=\"http://www.mingw.org/\" rel=\"nofollow noreferrer\">http://www.mingw.org/</a>) as mentioned in the answer posted here - <a href=\"https://stackoverflow.com/questions/36722975/theano-g-not-detected/37846308#37846308\">theano g++ not detected</a></p>\n" }, { "AnswerId": "15542158", "CreationDate": "2013-03-21T08:00:17.403", "ParentId": null, "OwnerUserId": "1060350", "Title": null, "Body": "<p>Try installing the <code>g++</code> compiler.</p>\n\n<p>Python is rather slow, so for performance-critical parts you need a compiled language such as C++.</p>\n" } ]
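<p>A quick way to verify the fix suggested in the answers (attribute name from the Theano config system, to the best of my knowledge): after installing g++, Theano's <code>cxx</code> config value should point at the compiler rather than being empty.</p>
<pre><code>import theano

# An empty string means Theano found no C++ compiler and will fall back to the
# much slower Python implementations of its ops.
print(repr(theano.config.cxx))
</code></pre>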
15,917,849
2
<python><numpy><theano>
2013-04-10T05:42:09.473
null
2,168,406
How can I assign/update subset of tensor shared variable in Theano?
<p>When compiling a function in , a shared variable (say X) can be updated by specifying . Now I am trying to update only a subset of a shared variable:</p> <pre></pre> <p>The code raises the error "update target must be a SharedVariable", which I guess means that update targets can't be non-shared variables. So is there any way to compile a function that just updates a subset of a shared variable?</p>
[ { "AnswerId": "19216429", "CreationDate": "2013-10-07T02:56:48.000", "ParentId": null, "OwnerUserId": "1319683", "Title": null, "Body": "<p>Use <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.set_subtensor\" rel=\"noreferrer\">set_subtensor</a> or <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.inc_subtensor\" rel=\"noreferrer\">inc_subtensor</a>:</p>\n\n<pre><code>from theano import tensor as T\nfrom theano import function, shared\nimport numpy\n\nX = shared(numpy.array([0,1,2,3,4]))\nY = T.vector()\nX_update = (X, T.set_subtensor(X[2:4], Y))\nf = function([Y], updates=[X_update])\nf([100,10])\nprint X.get_value() # [0 1 100 10 4]\n</code></pre>\n\n<p>There's now a page about this in the Theano FAQ: <a href=\"http://deeplearning.net/software/theano/tutorial/faq_tutorial.html\" rel=\"noreferrer\">http://deeplearning.net/software/theano/tutorial/faq_tutorial.html</a></p>\n" }, { "AnswerId": "16739636", "CreationDate": "2013-05-24T16:28:42.940", "ParentId": null, "OwnerUserId": "380038", "Title": null, "Body": "<p>This code should solve your problem:</p>\n\n<pre><code>from theano import tensor as T\nfrom theano import function, shared\nimport numpy\n\nX = shared(numpy.array([0,1,2,3,4], dtype='int'))\nY = T.lvector()\nX_update = (X, X[2:4]+Y)\nf = function(inputs=[Y], updates=[X_update])\nf([100,10])\nprint X.get_value()\n# output: [102 13]\n</code></pre>\n\n<p>And here is the <a href=\"http://deeplearning.net/software/theano/tutorial/examples.html#using-shared-variables\" rel=\"nofollow\">introduction about shared variables in the official tutorial</a>.</p>\n\n<p>Please ask, if you have further questions!</p>\n" } ]
16,426,641
1
<python><theano>
2013-05-07T19:04:11.087
16,446,170
413,345
Disconnected Input in Gradient of Theano Scan Op
<p>I have a number of items in groups of varying size. For each of these groups, one (known) item is the "correct" one. There is a function which will assign a score to each of item. This results in a flat vector of item scores, as well as vectors telling the index where each group begins and how big it is. I wish to do a "softmax" operation over the scores in each group to assign the items probabilities, and then take the sum of the logs of the probabilities of the correct answers. Here is a simpler version, where we simply return the score of the correct answer without the softmax and the logarithm.</p> <pre></pre> <p>This correctly calculates the output, but when I attempt to take the gradient with respect to the parameter , I get (paths abbreviated):</p> <pre></pre> <p>Now, the are constant, so there's no reason to need to take any gradients with respect to it. Ordinarily you could deal with this by either suppressing s or telling Theano to treat as a constant in your call (see the last lines of the sample script). But there doesn't seem to be any way to pass such things down to the internal calls in the gradient computation for the .</p> <p>Am I missing something? Is these a way to get the gradient computation to work through the ScanOp here?</p>
[ { "AnswerId": "16446170", "CreationDate": "2013-05-08T16:56:35.097", "ParentId": null, "OwnerUserId": "413345", "Title": null, "Body": "<p>This turns out to be a Theano bug as of mid-Feb. 2013 (0.6.0rc-2). It is fixed in the development version on github as of the date of this post.</p>\n" } ]
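<p>For readers hitting the same message on a fixed Theano, a toy sketch (not the poster's scan-based model) of the two knobs alluded to in the question, with parameter names as they appear in <code>theano.tensor.grad</code>: <code>consider_constant</code> and <code>disconnected_inputs</code>.</p>
<pre><code>import theano.tensor as T

x = T.dvector('x')
c = T.dvector('c')          # plays the role of the constant index data
cost = x.sum()              # c does not appear in this cost at all

# With the default disconnected_inputs='raise' the call below would raise an
# error for c; 'ignore' (or 'warn') lets it go through.
g_x, g_c = T.grad(cost, wrt=[x, c], disconnected_inputs='ignore')

# When the variable *is* part of the graph but should be treated as fixed data:
cost2 = (x * c).sum()
g_x2 = T.grad(cost2, wrt=x, consider_constant=[c])
</code></pre>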
16,512,803
1
<python><mingw><neural-network><theano>
2013-05-12T22:44:11.393
16,868,378
420,774
Compilation error when importing Theano on Windows 7
<p>I'm trying to use Theano on Windows 7. I was able to install Theano and import Theano, but after seeing the warning about not having a C compiler installed I also installed mingw. Now when I try "import theano" I get a compilation error. The message is rather long, but the relevant parts (from what I could tell) look like this:</p> <pre></pre> <p>and later in the error message this:</p> <pre></pre> <p>Any idea what I'm doing wrong? The files referenced in the compile statement don't exist, so that may be part of the problem, but it doesn't explain why Theano thinks they should be there.</p>
[ { "AnswerId": "16868378", "CreationDate": "2013-06-01T02:06:05.347", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Installation on Windows isn't trivial. The simplest way is to use EPD(not the free version, it miss some depenency). Are you an academic? If so, it is free. It install all Theano dependency, so you only need to install Theano after that and we link to MKL for BLAS.</p>\n\n<p><a href=\"https://www.enthought.com/products/epd\" rel=\"nofollow\">https://www.enthought.com/products/epd</a></p>\n\n<p>We had an Theano installer that worked with Anaconda 1.4, but the newer released Anaconda 1.5 broke it. We haven't repaired it yet.</p>\n" } ]
16,528,804
1
<enthought><epd-python><theano>
2013-05-13T18:35:02.220
null
2,378,916
linking to python library using EPD Canopy
<p>Summary: I'm trying to install the theano python package, and the theano install can't find "-lpython2.7" in my EPD Canopy installation.</p> <p>More details: Recently I installed the Enthought EPD Canopy python distribution (64-bit academic) in OS X 10.6.8. Next I installed pip via "easy_install pip".</p> <p>Next I installed Theano via "sudo pip install theano". The install looks OK, but then python -c "import theano" fails. The full output is at <a href="https://gist.github.com/anonymous/5548936" rel="nofollow">https://gist.github.com/anonymous/5548936</a>, but it seems like the main point is:</p> <p>Problem occurred during compilation with the command line below:</p> <pre></pre> <h1>===============================</h1> <p>ld: library not found for -lpython2.7 collect2: ld returned 1 exit status</p> <p>I've had some discussions with the theano google group, and the main message I get is to look for "libpython2.7.so", which I can't find. I checked /Users/rkeisler/Library/Enthought/Canopy_64bit/User/lib/. Inside is python2.7/os.py and python2.7/site-packages, but no "libpython*" files.</p> <p>I also did a more thorough check for libpython* files. The only things I could find were:</p> <pre></pre> <p>Finally, on the EPD Canopy package list, I see "libpython" listed. However, when I try to install libpython using the Canopy package manager, "libpython" doesn't appear. It's not an available package. I'm not sure where to go from here.</p>
[ { "AnswerId": "16926207", "CreationDate": "2013-06-04T19:45:34.950", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Following @RobertKern@ information, it is now fixed in the development version of Theano.</p>\n\n<p>To update to the development version, do:</p>\n\n<pre><code>pip -U --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n" } ]
16,641,014
2
<python><theano>
2013-05-20T00:37:34.880
16,641,107
2,276,326
Theano import error: cannot import name stacklists
<pre></pre> <p>This is my program. I am getting the following error. Can anybody help?</p> <pre></pre>
[ { "AnswerId": "16641107", "CreationDate": "2013-05-20T00:51:03.093", "ParentId": null, "OwnerUserId": "1330293", "Title": null, "Body": "<p>You probably have an old version of Theano; <code>stacklist</code> was <a href=\"https://github.com/mrocklin/Theano/commit/8eefe5a0094e84d470865e42cc16edcbdeda58e3\" rel=\"nofollow\">recently introduced/renamed</a> (a month ago). Sou you should update to the latest/dev version. If you want to stay in your version try importing <code>tensor_of_scalars</code> instead of <code>stacklist</code>.</p>\n\n<p>To update follow the instructions <a href=\"http://deeplearning.net/software/theano/install.html#install\" rel=\"nofollow\">here</a>.</p>\n" }, { "AnswerId": "16641059", "CreationDate": "2013-05-20T00:43:30.710", "ParentId": null, "OwnerUserId": "1103045", "Title": null, "Body": "<p>This error can be caused by one of two things.</p>\n\n<p>The first one is pretty obvious: does <code>theano.tensor</code> define a name <code>stacklists</code>? Should it be, for example, <code>stacklist</code>?</p>\n\n<p>Secondly, it can happen if something else you're importing has already imported the name in a way where doing so again would cause a circular reference. The second would have to be fixed by looking at your source files.</p>\n" } ]
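<p>For readers on a Theano version recent enough to ship it, a tiny illustration of what <code>stacklists</code> does (example invented for this note, not from the question):</p>
<pre><code>import theano
import theano.tensor as T

a, b, c, d = T.dscalars('a', 'b', 'c', 'd')
M = T.stacklists([[a, b], [c, d]])      # build a 2x2 matrix from scalars
f = theano.function([a, b, c, d], M)
print(f(1, 2, 3, 4))                    # [[ 1.  2.]  [ 3.  4.]]
</code></pre>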
16,683,390
3
<python><theano>
2013-05-22T04:18:56.527
16,683,692
2,276,326
Python RuntimeError: Failed to import pydot
<p>I am learning the concepts of logistic regression. When I implement it in Python, it shows me the error mentioned below. I am a beginner in Python. Could anybody help me rectify this error?</p> <p>RuntimeError Traceback (most recent call last) in ()</p> <pre></pre> <p>C:\Anaconda\lib\site-packages\theano\printing.pyc in pydotprint(fct, outfile, compact, format, with_ids, high_contrast, cond_highlight, colorCodes, max_label_size, scan_graphs, var_with_name_simple, print_output_file, assert_nb_all_strings)</p> <pre></pre> <p>RuntimeError: Failed to import pydot. You must install pydot for to work.</p>
[ { "AnswerId": "39842355", "CreationDate": "2016-10-04T00:15:59.297", "ParentId": null, "OwnerUserId": "6901690", "Title": null, "Body": "<p>I got the same error and I did the following sequence to make it work, in a Python 3:</p>\n\n<pre><code>source activate anaconda\npip install pydot\npip install pydotplus\npip install pydot-ng\n</code></pre>\n\n<p>Then you download and install Graphviz from here according to your OS type:\n<a href=\"http://www.graphviz.org/Download..php\" rel=\"nofollow noreferrer\">http://www.graphviz.org/Download..php</a></p>\n\n<p>If you are running Python on Anaconda, open Spyder from terminal, not from Anaconda. Go to terminal and type:</p>\n\n<pre><code>spyder\n</code></pre>\n\n<p>Then:</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n.\n.\n.\nimport pydot\nimport graphviz\nimport pydot_ng as pydot\n</code></pre>\n\n<p>Develop your model and:</p>\n\n<pre><code>theano.printing.pydotprint(prediction, outfile=\"/Volumes/Python/prediction.png\", var_with_name_simple=True)\n</code></pre>\n\n<p>You will have a picture like this:\n<a href=\"https://i.stack.imgur.com/CO8GR.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/CO8GR.png\" alt=\"enter image description here\"></a></p>\n" }, { "AnswerId": "21513225", "CreationDate": "2014-02-02T16:46:32.297", "ParentId": null, "OwnerUserId": "3262258", "Title": null, "Body": "<p>I also have the same issue. I would suggest you post this in the Github Theano Issues forum:</p>\n\n<p><a href=\"https://github.com/Theano/Theano/issues?direction=desc&amp;sort=updated&amp;state=open\" rel=\"nofollow\">https://github.com/Theano/Theano/issues?direction=desc&amp;sort=updated&amp;state=open</a></p>\n\n<p>It seems to me, that since this instance of the pydotprint() function is actually part of the printing module within the Theano library that this should not be an issue (but it is) and hence it should be brought to the attention of the developers in order to fix it.</p>\n\n<p>Please correct me if this is not the case.</p>\n" }, { "AnswerId": "16683692", "CreationDate": "2013-05-22T04:51:24.647", "ParentId": null, "OwnerUserId": "2318122", "Title": null, "Body": "<p>It mainly depends on where you put the pydot files. If you are running it straight from the Python Shell then you should have them installed in the modules folder which is most commonly the \"Lib\" folder inside the main python folder.</p>\n" } ]
16,738,937
1
<python><theano>
2013-05-24T15:49:47.633
16,782,594
380,038
Using Theano.scan with shared variables
<p>I want to calculate the <a href="http://office.microsoft.com/en-001/excel-help/sumproduct-HP005209293.aspx" rel="nofollow"></a> of two arrays in Theano. Both arrays are declared as shared variables and are the result of prior computations. Reading the <a href="http://deeplearning.net/software/theano/library/scan.html" rel="nofollow">tutorial</a>, I found out how to use scan to compute what I want using 'normal' tensor arrays, but when I tried to adapt the code to shared arrays I got the error message . (See minimal running code example below)</p> <p>Where is the mistake in my code? Where is my misconception? I am also open to a different approach for solving my problem.</p> <p>Generally I would prefer a version which takes the shared variables directly, because in my understanding, converting the arrays back to NumPy arrays first and then passing them to Theano again would be wasteful. </p> <p>Error message producing code using <strong>shared</strong> variables:</p> <pre></pre> <p>Error message: </p> <pre></pre> <p>Working code using <strong>non-shared</strong> variables:</p> <pre></pre>
[ { "AnswerId": "16782594", "CreationDate": "2013-05-28T01:33:56.643", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>You need to change the compilation line to this one:</p>\n\n<pre><code>Tsumprod = theano.function([], outputs=Tsumprod_result)\n</code></pre>\n\n<p>theano.function() always need a list of inputs. If the function take 0 input, like in this case, you need to give an empty list for the inputs.</p>\n" } ]
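<p>A self-contained version of the accepted fix (toy data and invented variable names): because everything lives in shared variables, the compiled function takes an empty input list.</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

a1 = theano.shared(np.array([1., 2., 3.]), name='a1')
a2 = theano.shared(np.array([4., 5., 6.]), name='a2')

products, _ = theano.scan(fn=lambda x, y: x * y, sequences=[a1, a2])
Tsumprod_result = products.sum()

# Empty input list: all the data the graph needs lives in shared variables.
Tsumprod = theano.function([], outputs=Tsumprod_result)
print(Tsumprod())   # 1*4 + 2*5 + 3*6 = 32.0
</code></pre>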
16,787,358
1
<python><theano>
2013-05-28T08:29:27.513
16,787,359
380,038
Theano TypeError: function() takes at least 1 argument (1 given)
<p>One of my Theano functions does not take any inputs and only uses shared variables to calculate the output. But this function throws a . </p> <p>Here a minimal example: </p> <pre></pre>
[ { "AnswerId": "16787359", "CreationDate": "2013-05-28T08:29:27.513", "ParentId": null, "OwnerUserId": "380038", "Title": null, "Body": "<p>From: <a href=\"https://stackoverflow.com/a/16782594/380038\">https://stackoverflow.com/a/16782594/380038</a></p>\n\n<p>\"<code>theano.function()</code> always needs a list of inputs. If the function takes 0 input, like in this case, you need to give an empty list for the inputs.\"</p>\n\n<p><code>f2 = th.function(outputs=z)</code> has to be <code>f2 = th.function([], outputs=z)</code></p>\n" } ]
16,810,371
1
<theano>
2013-05-29T09:36:14.473
16,815,974
380,038
Using Theano.scan with multidimensional arrays
<p>To speed up my code I am converting a multidimensional sumproduct function from Python to Theano. My Theano code reaches the same result, but only calculates the result for one dimension at a time, so that I have to use a Python for-loop to get the end result. I assume that would make the code slow, because Theano cannot optimize memory usage and transfer (for the gpu) between multiple function calls. Or is this a wrong assumption?</p> <p>So how can I change the Theano code, so that the sumprod is calculated in one function call?</p> <p>The original Python function:</p> <pre></pre> <p>For the following input </p> <pre></pre> <p>the output would be: that is 1*1 + 5*5, 2*2 + 6*6 and 4*4 + 7*7</p> <p>The Theano version of the code:</p> <pre></pre>
[ { "AnswerId": "16815974", "CreationDate": "2013-05-29T14:00:42.667", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>First, there is more people that will answer your questions on theano mailing list then on stackoverflow. But I'm here:)</p>\n\n<p>First, your function isn't a good fit for GPU. Even if everything was well optimized, the transfer of the input to the gpu just to add and sum the result will take more time to run then the python version.</p>\n\n<p>Your python code is slow, here is a version that should be faster:</p>\n\n<pre><code>def sumprod(a1, a2):\n \"\"\"Sum the element-wise products of the `a1` and `a2`.\"\"\"\n a1 = numpy.asarray(a1)\n a2 = numpy.asarray(a2)\n result (a1 * a2).sum(axis=0)\n return result\n</code></pre>\n\n<p>For the theano code, here is the equivalent of this faster python version(no need of scan)</p>\n\n<pre><code>m1 = theano.tensor.matrix()\nm2 = theano.tensor.matrix()\nf = theano.function([m1, m2], (m1 * m2).sum(axis=0))\n</code></pre>\n\n<p>The think to remember from this is that you need to \"vectorize\" your code. The \"vectorize\" is used in the NumPy context and it mean to use numpy.ndarray and use function that work on the full tensor at a time. This is always faster then doing it with loop (python loop or theano scan). Also, Theano optimize some of thoses cases by moving the computation outside the scan, but it don't always do it.</p>\n" } ]
16,848,650
1
<python><numpy><theano>
2013-05-31T01:54:35.713
16,860,115
1,658,908
Theano: Why does indexing fail in this case?
<p>I'm trying to get the max of a vector given a boolean value.</p> <p>With Numpy:</p> <pre></pre> <p>But with Theano:</p> <pre></pre> <p>Why does this happen? Is this a subtle nuance that I'm missing?</p>
[ { "AnswerId": "16860115", "CreationDate": "2013-05-31T14:51:21.860", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>You are using a version of Theano that is too old. In fact, tensor_var.nonzero() isn't in any released version. You need to update to the development version.</p>\n\n<p>With the development version I have this:</p>\n\n<pre><code>&gt;&gt;&gt; that[~(that&gt;=5).nonzero()].max().eval()\nTraceback (most recent call last):\n File \"&lt;stdin&gt;\", line 1, in &lt;module&gt;\nTypeError: bad operand type for unary ~: 'tuple'\n</code></pre>\n\n<p>This is because you are missing parenthesis in your line. Here is the good line:</p>\n\n<pre><code>&gt;&gt;&gt; that[(~(that&gt;=5)).nonzero()].max().eval()\narray(9, dtype=int32)\n</code></pre>\n\n<p>But we still have unexpected result! The problem is that Theano do not support bool. Doing ~ on int8, is doing the bitwise invert on 8 bits, not 1 bit. It give this result:</p>\n\n<pre><code>&gt;&gt;&gt; (that&gt;=5).eval()\narray([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], dtype=int8)\n&gt;&gt;&gt; (~(that&gt;=5)).eval()\narray([-1, -1, -1, -1, -1, -2, -2, -2, -2, -2], dtype=int8)\n</code></pre>\n\n<p>You can remove the ~ with this:</p>\n\n<pre><code>&gt;&gt;&gt; that[(that&lt;5).nonzero()].max().eval()\narray(4, dtype=int32)\n</code></pre>\n" } ]
17,026,496
1
<python><numpy><theano>
2013-06-10T14:39:54.977
17,026,497
380,038
Theano shared variable update causes `ValueError: length not known`
<p>Minimal example code: </p> <pre></pre> <p>Error message:</p> <pre></pre> <p>What is the cause of this error?</p>
[ { "AnswerId": "17026497", "CreationDate": "2013-06-10T14:39:54.977", "ParentId": null, "OwnerUserId": "380038", "Title": null, "Body": "<p><code>Updates</code> must contain <code>a list of pairs</code>. See official tutorial on <a href=\"http://deeplearning.net/software/theano/tutorial/examples.html#using-shared-variables\" rel=\"nofollow\">using shared variables</a>.</p>\n\n<p>Correct code:</p>\n\n<pre><code>import theano as th\nimport theano.tensor as T\nimport numpy as np\n\nx = T.dscalars('x')\nz = th.shared(np.zeros(2))\nupdates = [(z, z+x)]\n\nf1 = th.function(inputs=[x], updates=updates) \nf1(3)\nprint z.get_value()\n</code></pre>\n" } ]
17,039,908
2
<theano>
2013-06-11T08:45:59.827
null
380,038
How to reuse Theano function with different shared variables without rebuilding graph?
<p>I have a Theano function that is called several times, each time with different shared variables. The way it is implemented now, the Theano function gets redefined every time it is run. I assume that this makes the whole program slow, because every time the Theano function gets defined, the graph is rebuilt.</p> <pre class="lang-py prettyprint-override"></pre> <p>For non-shared (normal) variables I can define the function once and then call it with different variables without redefining. </p> <pre class="lang-py prettyprint-override"></pre> <p>Is this also possible for shared variables?</p>
[ { "AnswerId": "17296524", "CreationDate": "2013-06-25T11:49:33.330", "ParentId": null, "OwnerUserId": "857617", "Title": null, "Body": "<p>You can use the givens keyword in theano.function for that. Basically, you do the following.</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>m1 = theano.shared(name='m1', value = np.zeros((3,2)) )\nm2 = theano.shared(name='m2', value = np.zeros((3,2)) )\n\nx1 = theano.tensor.dmatrix('x1')\nx2 = theano.tensor.dmatrix('x2')\n\ny = (x1*x2).sum(axis=0)\nf = theano.function([],y,givens=[(x1,m1),(x2,m2)],on_unused_input='ignore')\n</code></pre>\n\n<p>then to loop through values you just set the value of the shared variables to the value you'd like. You have to set the on_unused_input to 'ignore' to use functions with no arguments in theano, by the way. Like this:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>array1 = array([[1,2,3],[4,5,6]])\narray2 = array([[2,4,6],[8,10,12]])\n\nfor i in range(10):\n m1.set_value(i*array1)\n m2.set_value(i*array2)\n print f()\n</code></pre>\n\n<p>It should work, at least that's how I've been working around it.</p>\n" }, { "AnswerId": "17045919", "CreationDate": "2013-06-11T13:54:41.687", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Currently it is not easily possible to reuse a Theano function with different shared variable.</p>\n\n<p>But you have alternative:</p>\n\n<ol>\n<li>Is it really a bottleneck? In the example, it is, but I suppose it is a simplified case. The only way to know is to profile it.</li>\n<li>You compile 1 Theano function with the first shared variable. Then you can call the get_value/set_value on those shared variables before calling the Theano function. This way, you won't need to recompile the Theano function.</li>\n</ol>\n" } ]
17,125,845
4
<python-2.7><enthought><win64><theano>
2013-06-15T16:48:56.567
null
1,961,354
Installing Theano on EPD (Windows x64), g++ not detected
<p>I am trying to run theano on the Enthought Python Distribution (academic license) under Windows 7 64-bit. Following the topic <a href="https://stackoverflow.com/questions/10270871/installing-theano-on-epd-windows-x64">Installing Theano on EPD (Windows x64)</a> I installed the bleeding-edge version of theano since I got the same error. But now I have this problem:</p> <pre></pre> <p>EPD installs its own version of mingw, so I do not understand why the problem occurs. I tried to find g++ (assuming EPD installed it) through Windows search to put it in PATH, but there is nothing. </p> <p>I've separately installed mingw64, but when I type in the command prompt</p> <pre></pre> <p>it hangs.</p> <p>Thanks in advance.</p>
[ { "AnswerId": "17134799", "CreationDate": "2013-06-16T15:24:24.933", "ParentId": null, "OwnerUserId": "1961354", "Title": null, "Body": "<p>I solved this problem by adding Visual C++ Compilers feature to my current VS2010 installtion.\nNow I can import theano and console shows that I use gpu</p>\n\n<pre><code>&gt;&gt;import theano\nForcing DISTUTILS_USE_SDK=1\nUsing gpu device 0: GeForce GT 630M\n</code></pre>\n\n<p>But when I am trying to run this code:</p>\n\n<pre><code>from theano import function, config, shared, sandbox\nimport theano.tensor as T\nimport numpy\nimport time\n\nvlen = 10 * 30 * 768 # 10 x #cores x # threads per core\niters = 1000\n\nrng = numpy.random.RandomState(22)\nx = shared(numpy.asarray(rng.rand(vlen), config.floatX))\nf = function([], T.exp(x),mode='DebugMode')\n</code></pre>\n\n<p>I get \n<code>NVCC: nvcc : fatal error : Could not set up the environment for Microsoft Visual Studio using 'c:/Program Files (x86)/Microsoft Visual Studio 10.0/VC/bin/../../VC/bin/amd64/vcvars64.bat</code></p>\n" }, { "AnswerId": "37846439", "CreationDate": "2016-06-15T21:53:14.907", "ParentId": null, "OwnerUserId": "2534758", "Title": null, "Body": "<p>Instead of using git command to install Theano try downloading the zip from the theano repository on GitHub. To install theano, use <code>python setup.py install</code> command. \nAlso try using Anaconda distribution to install Python3.4 or older versions. Then use <code>conda install</code> command to install mingw for g++ support.</p>\n" }, { "AnswerId": "17132910", "CreationDate": "2013-06-16T11:38:28.800", "ParentId": null, "OwnerUserId": "1961354", "Title": null, "Body": "<p>The problem was that I installed Enthought Canopy and it does not contain mingw. The issue can be solved by installing Enthought Python Distribution. Following <a href=\"https://stackoverflow.com/questions/2970493/cuda-linking-error-visual-express-2008-nvcc-fatal-due-to-null-configuratio\">CUDA linking error - Visual Express 2008</a> I created vcvars64.bat in \nc:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\bin\\amd64\\ to avoid nvcc fatal : Visual Studio configuration file '(null)' error.\n But now I get this exception:</p>\n\n<pre><code>c:\\program files (x86)\\microsoft visual studio 10.0\\vc\\include\\codeanalysis\\sourceannotations.h(29): error: invalid redeclaration of type name \"size_t\"\n</code></pre>\n" }, { "AnswerId": "17149578", "CreationDate": "2013-06-17T14:06:54.983", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The problem isn't with Theano, but with nvcc. For Theano to use the GPU, it need a working nvcc installation. But it is not the case currently.</p>\n\n<p>To help you fix this problem, try to compile nvcc example. They will also fail. When you fix this problem, Theano will work. For this, check nvcc installation/tests documentations.</p>\n\n<p>I suspecct that you didn't used the right microsoft compiler version. nvcc don't accept version of msvc.</p>\n" } ]
17,323,040
3
<python><theano>
2013-06-26T14:33:11.717
17,358,567
1,323,010
What is the purpose/meaning of passing "input" to a function in Theano?
<p>An example will make that clearer, I hope (this is a Logistic Regression object; the Theano tensor library is imported as T):</p> <pre></pre> <p>This is called further down in main...</p> <pre></pre> <p>If these snippets aren't enough to get an understanding, the code is on this page under "Putting it All Together": <a href="http://deeplearning.net/tutorial/logreg.html#logreg" rel="nofollow">http://deeplearning.net/tutorial/logreg.html#logreg</a></p>
[ { "AnswerId": "17323199", "CreationDate": "2013-06-26T14:39:56.270", "ParentId": null, "OwnerUserId": "64250", "Title": null, "Body": "<p>This is a Python feature called <a href=\"http://www.diveintopython.net/power_of_introspection/optional_arguments.html\" rel=\"nofollow\">named parameters</a>. For functions with optional parameters or many parameters it is helpful to pass the parameters by name, instead of just relying on the order on which they were passed to the function. In your specific case you can see the meaning of the <code>input</code> parameter <a href=\"http://deeplearning.net/tutorial/logreg.html#creating-a-logisticregression-class\" rel=\"nofollow\">here</a>. </p>\n" }, { "AnswerId": "17323872", "CreationDate": "2013-06-26T15:09:31.163", "ParentId": null, "OwnerUserId": "1634191", "Title": null, "Body": "<p>Named parameters, or default keyword arguments, like <code>input</code>, <code>n_in</code>, and <code>n_out</code> are useful for several reasons.</p>\n\n<ul>\n<li>If a function/method have many parameters, it becomes easier to pass them by name instead of having to remember the functional order of the parameters.</li>\n<li>Many functions/methods have default use cases which are used often, and specialty use cases which are used rarely. If the specialty use case requires passing additional arguments to the function, those will most likely take the form of named parameters with default values. This way, when the function is used in the default use case, the user does not have to specify any additional parameters. Only when someone wants to use the specialty case will they have to specify something extra. This keeps function and method calls readable and simple when they aren't used in complex or specialty ways.</li>\n</ul>\n" }, { "AnswerId": "17358567", "CreationDate": "2013-06-28T06:29:44.473", "ParentId": null, "OwnerUserId": "857617", "Title": null, "Body": "<p>so... Theano builds graphs for the expressions it computes before evaluating them. By passing a theano variable such as 'x' in the example to the initialization of the logistic regression object, you will create a number of expressions such as p_y_given_x in your object which are theano expressions dependent on x. This is later used for symbolic gradient calculation.</p>\n\n<p>To get a better feel for it you can do the following:</p>\n\n<p></p>\n\n<pre><code>import theano.pp #pp is for pretty print\nx = T.dmatrix('x') #naming your variables is a good idea, and important i think\nlr = LogisticRegression(x,n_in = 28*28, n_out= 10)\nprint pp(lr.p_y_given_x)\n</code></pre>\n\n<p>This should given you an output such as</p>\n\n<pre><code>softmax( W \\dot x + b)\n</code></pre>\n\n<p>And while you're at it go ahead and try out</p>\n\n<p></p>\n\n<pre><code>print pp(T.grad(lr._y_given_x,x)) #might need syntax checkng\n</code></pre>\n\n<p>which is how theano internally stores the expression. Then you can use these expressions to create functions in theano, such as</p>\n\n<p></p>\n\n<pre><code>values = theano.shared( value = mydata, name = 'values')\nf = theano.function([],lr.p_y_given_x , \n givens ={x:values},on_unused_input='ignore')\nprint f()\n</code></pre>\n\n<p>then calling f should give you the predicted class probabilities for the values defined in mydata. The way to do this in theano (and the way it's done in the DL tutorials) is by passing a \"dummy\" theano variable and then using the \"givens\" keyword to set it to a shared variable containing your data. 
That's important because storing your variables in a shared variable allows theano to use your GPU for matrix operations.</p>\n" } ]
17,426,742
2
<python><numpy><scipy><sparse-matrix><theano>
2013-07-02T13:20:51.677
null
2,542,699
How to calculate (1 - SparseMatrix) of a huge sparse matrix?
<p>I researched a lot on this but couldn't find a practical solution to this problem. I am using scipy to create a CSR sparse matrix and want to subtract this matrix from an equivalent matrix of all ones. In scipy and numpy notation, if the matrix is not sparse, we can do so by simply writing 1 - MatrixVariable. However, this operation is not implemented if the matrix is sparse. The only obvious solution I could think of is:</p> <p>Iterate through the entire sparse matrix, set all zero elements to 1 and all non-zero elements to 0.</p> <p>But this would create a matrix where most elements are 1 and only a few are 0, which is no longer sparse and, due to its huge size, could not be converted to dense. </p> <p>What could be an alternative and efficient way of doing this?</p> <p>Thanks.</p>
[ { "AnswerId": "17426874", "CreationDate": "2013-07-02T13:26:35.310", "ParentId": null, "OwnerUserId": "832621", "Title": null, "Body": "<p>You can access the data from your sparse matrix as a <code>1D array</code> so that:</p>\n\n<pre><code>ss.data *= -1\nss.data += 1\n</code></pre>\n\n<p>will work like <code>1 - ss</code>, for all non-zero elements in your sparse matrix.</p>\n" }, { "AnswerId": "17427551", "CreationDate": "2013-07-02T13:57:01.747", "ParentId": null, "OwnerUserId": "110026", "Title": null, "Body": "<p>Your new matrix will not be sparse, because it will have <code>1</code>s everywhere, so you will need a dense array to hold it:</p>\n\n<pre><code>new_mat = np.ones(sps_mat.shape, sps_mat.dtype) - sps_mat.todense()\n</code></pre>\n\n<p>This requires that your matrix fits in memory. It actually requires that it fits in memory 3 times. If that is an issue, you can get it to be more efficient doing something like:</p>\n\n<pre><code>new_mat = sps_mat.todense()\nnew_mat *= -1\nnew_mat += 1\n</code></pre>\n" } ]
17,445,280
4
<python><debugging><theano>
2013-07-03T10:13:41.920
17,453,598
869,402
theano - print value of TensorVariable
<p><strong>How can I print the numerical value of a theano TensorVariable?</strong> I'm new to theano, so please be patient :)</p> <p>I have a function where I get as a parameter. Now I want to debug-print the shape of this to the console. Using</p> <pre></pre> <p>results in the console output (i was expecting numbers, i.e. ):</p> <pre></pre> <p>Or how can I print the numerical result of for example the following code (this counts how many values in are bigger than half the maximum):</p> <pre></pre> <p> should be a single number because sums up all the values. But using</p> <pre></pre> <p>gives me (expected something like ):</p> <pre></pre>
[ { "AnswerId": "38538089", "CreationDate": "2016-07-23T04:12:09.713", "ParentId": null, "OwnerUserId": "615130", "Title": null, "Body": "<p>print Value of a Tensor Variable.</p>\n\n<p>Do the following:</p>\n\n<p><code>print tensor[dimension].eval()</code> # this will print the content/value at that position in the Tensor</p>\n\n<p>Example, for a 1 d tensor:</p>\n\n<pre><code>print tensor[0].eval()\n</code></pre>\n" }, { "AnswerId": "29963238", "CreationDate": "2015-04-30T08:54:07.183", "ParentId": null, "OwnerUserId": "828368", "Title": null, "Body": "<p>For future readers: the previous answer is quite good. \nBut, I found the 'tag.test_value' mechanism more beneficial for debugging purposes (see <a href=\"http://deeplearning.net/software/theano/tutorial/debug_faq.html#using-test-values\" rel=\"noreferrer\">theano-debug-faq</a>):</p>\n\n<pre><code>from theano import config\nfrom theano import tensor as T\nconfig.compute_test_value = 'raise'\nimport numpy as np \n#define a variable, and use the 'tag.test_value' option:\nx = T.matrix('x')\nx.tag.test_value = np.random.randint(100,size=(5,5))\n\n#define how y is dependent on x:\ny = x*x\n\n#define how some other value (here 'errorCount') depends on y:\nerrorCount = T.sum(y)\n\n#print the tag.test_value result for debug purposes!\nerrorCount.tag.test_value\n</code></pre>\n\n<p>For me, this is much more helpful; e.g., checking correct dimensions etc.</p>\n" }, { "AnswerId": "17453598", "CreationDate": "2013-07-03T16:40:18.020", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>If y is a theano variable, y.shape will be a theano variable. so it is normal that </p>\n\n<pre><code>print y.shape\n</code></pre>\n\n<p>return:</p>\n\n<pre><code>Shape.0\n</code></pre>\n\n<p>If you want to evaluate the expression y.shape, you can do:</p>\n\n<pre><code>y.shape.eval()\n</code></pre>\n\n<p>if <code>y.shape</code> do not input to compute itself(it depend only on shared variable and constant). Otherwise, if <code>y</code> depend on the <code>x</code> Theano variable you can pass the inputs value like this:</p>\n\n<pre><code>y.shape.eval(x=numpy.random.rand(...))\n</code></pre>\n\n<p>this is the same thing for the <code>sum</code>. Theano graph are symbolic variable that do not do computation until you compile it with <code>theano.function</code> or call <code>eval()</code> on them.</p>\n\n<p><strong>EDIT:</strong> Per the <a href=\"http://deeplearning.net/software/theano/library/gof/graph.html#theano.gof.graph.Variable.eval\" rel=\"noreferrer\">docs</a>, the syntax in newer versions of theano is</p>\n\n<pre><code>y.shape.eval({x: numpy.random.rand(...)})\n</code></pre>\n" }, { "AnswerId": "51192683", "CreationDate": "2018-07-05T13:32:34.520", "ParentId": null, "OwnerUserId": "1712419", "Title": null, "Body": "<p>Use <code>theano.printing.Print</code> to add print operator to your computational graph.</p>\n\n<p>Example:</p>\n\n<pre><code>import numpy\nimport theano\n\nx = theano.tensor.dvector('x')\n\nx_printed = theano.printing.Print('this is a very important value')(x)\n\nf = theano.function([x], x * 5)\nf_with_print = theano.function([x], x_printed * 5)\n\n#this runs the graph without any printing\nassert numpy.all( f([1, 2, 3]) == [5, 10, 15])\n\n#this runs the graph with the message, and value printed\nassert numpy.all( f_with_print([1, 2, 3]) == [5, 10, 15])\n</code></pre>\n\n<p>Output:</p>\n\n<p><code>this is a very important value __str__ = [ 1. 2. 
3.]</code></p>\n\n<p>Source: <a href=\"http://deeplearning.net/software/theano/tutorial/debug_faq.html#how-do-i-print-an-intermediate-value-in-a-function\" rel=\"nofollow noreferrer\">Theano 1.0 docs: “How do I Print an Intermediate Value in a Function?”</a></p>\n" } ]
17,641,566
1
<python><eclipse><pydev><theano><mnist>
2013-07-14T17:00:30.507
17,641,624
1,908,423
PyDev can't find machine learning data
<p>I have a problem I hoped someone would be able to help me with regarding the tutorial at <a href="http://deeplearning.net/tutorial/gettingstarted.html#gettingstarted" rel="nofollow">http://deeplearning.net/tutorial/gettingstarted.html#gettingstarted</a></p> <p>I keep receiving an error when I try to run the code to load the data set, that is this code here: </p> <pre></pre> <p>I am using Eclipse with PyDev and have numpy, Scipy and Theano working. I ran the command to clone the git repository and have downloaded the data set as per the instructions, however running the code above still returns </p> <pre></pre> <p>I am new to python in general and this really has me stumped as I am not even sure what the cause of the problem could be, nor how what to search for in order to resolve it myself.</p> <p>Thanks in advance.</p>
[ { "AnswerId": "17641624", "CreationDate": "2013-07-14T17:07:12.220", "ParentId": null, "OwnerUserId": "506544", "Title": null, "Body": "<p>The file mnist.pkl.gz is probably not in the same directory as the script that you are trying to run.</p>\n\n<p>You would be better off receiving the actual location of the file as a command line parameter to the script and then load the file using that path</p>\n" } ]
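<p>Following the answer's suggestion, a minimal hypothetical sketch of loading the dataset from an explicit path passed on the command line (the pickle layout of <code>mnist.pkl.gz</code> is the one described in the tutorial; the rest is an assumption):</p>
<pre><code>import gzip
import sys
import cPickle  # use `pickle` on Python 3

# e.g.  python load_mnist.py /home/me/data/mnist.pkl.gz
dataset_path = sys.argv[1]

with gzip.open(dataset_path, 'rb') as f:
    train_set, valid_set, test_set = cPickle.load(f)

print(len(train_set[0]), 'training examples loaded')
</code></pre>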
17,678,905
1
<c++><tensorflow><ros><ofstream>
2013-07-16T14:12:00.800
17,679,464
1,569,693
writing a tf::transform object to a file
<p>I'm attempting to do something resembling the following block of code:</p> <pre></pre> <p>Assuming that I've properly set up that and when I'm trying to write to , how would I do so? I tried a number of variants of this, and all of them result in a huge wall of text to the effect that doesn't know how to handle a tf::transform object, which isn't too surprising.</p> <p>Is there some way to make take arbitrary objects? Is there some format that I could readily convert it to that's more conducive to streaming? Ideally, if I convert it, I'd like to have a way to reversibly convert it to some matrix that I can pipe straight into and out of a file.</p>
[ { "AnswerId": "17679464", "CreationDate": "2013-07-16T14:35:20.347", "ParentId": null, "OwnerUserId": "1648011", "Title": null, "Body": "<p>Implement the operator<br>\nI'm not sure of the contents of the transform struct in this case, but assuming it is:</p>\n\n<pre><code>struct transform { float mat[16]; }\n</code></pre>\n\n<p>Then the implementation can be something like:</p>\n\n<pre><code>std::ostream&amp; operator&lt;&lt; (std::ostream&amp; os, const tf::transform&amp; t)\n{\n os &lt;&lt; t.mat[0];\n for(int i=1;i&lt;16;++i) os &lt;&lt; ',' &lt;&lt; t.mat[i];\n return os;\n}\n</code></pre>\n" } ]
17,857,737
1
<python><numpy><atlas><theano>
2013-07-25T12:14:23.193
null
2,491,687
Running Python code for theano: /usr/bin/ld: cannot find -latlas
<p>I am trying to run theano on ubuntu which requires .</p> <p>I have already installed libatlas but I can find it in </p> <p>I have also copied all of the files to a new folder called :</p> <pre></pre> <p>But still, when I run the python code I see:</p> <pre></pre> <p>I also tried adding to environment variables but didn't work:</p> <pre></pre> <p>Also I tried adding the path path to ld file:</p> <pre></pre> <p>or</p> <pre></pre> <p>None of them worked and I still see the error running the Python code.</p>
[ { "AnswerId": "17859308", "CreationDate": "2013-07-25T13:23:44.903", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>To change how Theano link to BLAS, you need to use Theano flags[1]. They can be set with an environment variable THEANO_FLAGS or with a configuration file.</p>\n\n<p>How did you told Theano to use atlas? If you just installed the atlas packages, it won't work. You need to install the libatlas-dev pacakge as per this Theano installation instruction for Ubuntu[2]</p>\n\n<p>A last point, we don't recommand atlas, especially for Ubuntu. OpenBLAS is packaged for Unbuntu and is faster. See [2] for detail on how to installed them. You will need to remove atlas before installing openblas, otherwise, there will be conflict.</p>\n\n<p>[1]<a href=\"http://www.deeplearning.net/software/theano/library/config.html#envvar-THEANO_FLAGS\" rel=\"nofollow\">http://www.deeplearning.net/software/theano/library/config.html#envvar-THEANO_FLAGS</a>\n[2]<a href=\"http://www.deeplearning.net/software/theano/install_ubuntu.html#install-ubuntu\" rel=\"nofollow\">http://www.deeplearning.net/software/theano/install_ubuntu.html#install-ubuntu</a></p>\n" } ]
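<p>A sketch of the flag mechanism the answer points to (the <code>blas.ldflags</code> option is documented in the Theano config page it links; the library and path below are assumptions): tell Theano explicitly which BLAS to link instead of letting it fall back to <code>-latlas</code>.</p>
<pre><code>import os
# Must run before the first `import theano`; adjust to the BLAS you actually have.
os.environ['THEANO_FLAGS'] = 'blas.ldflags=-L/usr/lib -lopenblas'

import theano
print(theano.config.blas.ldflags)
</code></pre>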
18,006,270
1
<python><function><theano>
2013-08-01T23:22:13.783
null
2,491,687
Function declaration in Python
<p>I am using Theano in Python. I have the following code:</p> <pre></pre> <p>I cannot find any declaration of the function, while I can only find a piece of code before the previous one as:</p> <pre></pre> <p>I can find the declaration of but the two functions ( and ) do not have the same signature (input parameters).</p> <p>What does this mean, and is the second code segment the declaration of ?</p>
[ { "AnswerId": "18006380", "CreationDate": "2013-08-01T23:32:20.517", "ParentId": null, "OwnerUserId": "1053992", "Title": null, "Body": "<p>Mostly when name starts with uppercase letter then it is class name (unless it is completely upper cased, then it mostly means constant). There is no <code>new</code> operator in Python so this naming pattern is pretty important to distinguish functions from classes (anyway it is general pattern). I don't know Theano, but I suppose that <code>TrainFn1Member</code> is a class that implements <code>__call__</code> method, so you can call it's instance like a function. Search for <code>__call__</code> in <code>TrainFn1Member</code> class definition.</p>\n\n<hr>\n\n<p>UPDATE:</p>\n\n<p>According to your comment, <code>TrainFn1Member</code> is a function (what is pretty strange according to what I said above and what is not my idea ;)). In this case it has to return some <code>callable</code> what means that it returns one of 3 things (I hope I have not missed anything): </p>\n\n<ol>\n<li>function (<code>def</code> or <code>lambda</code>)</li>\n<li>instance of class that implements <code>__call__</code></li>\n<li>method of some object (function bound to some class instance)</li>\n</ol>\n\n<p>As I don't know <code>Theano</code> at all, I can only suggest to search deeper being aware of those above.. (and welcome to Hogwarts ;))</p>\n" } ]
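<p>A tiny, self-contained illustration of the point made in the answer (hypothetical class, unrelated to Theano's internals): an instance of a class that defines <code>__call__</code> can be used exactly like a function, which is why a name produced by a class or factory can later appear in a call expression.</p>
<pre><code>class Scaler(object):
    """Instances of this class behave like functions."""
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):
        return self.factor * x

double = Scaler(2)
print(double(21))   # 42 -- `double` is an object, but it is callable
</code></pre>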
18,165,131
2
<eclipse><gpu><pydev><theano>
2013-08-10T18:32:54.210
null
1,908,423
Getting Theano to use the GPU
<p>I am having quite a bit of trouble setting up Theano to work with my graphics card - I hope you guys can give me a hand.</p> <p>I have used CUDA before and it is properly installed as would be necessary to run Nvidia Nsight. However, I now want to use it with PyDev and am having several problems following the 'Using the GPU' part of the tutorial at <a href="http://deeplearning.net/software/theano/install.html#gpu-linux" rel="noreferrer">http://deeplearning.net/software/theano/install.html#gpu-linux</a></p> <p>The first is quite basic, and that is how to set up the environment variables. It says I should '<strong>Define a $CUDA_ROOT environment variable</strong>'. Several sources have said to create a new '.pam_environment' file in my home directory. I have done this and written the following: </p> <pre></pre> <p>I am not sure if this is exactly the way it has to be written - apologies if this is a basic question. If I could get confirmation that this is indeed the correct place to have written it, too, that would be helpful.</p> <p>The second problem is in the following part of the tutorial. It says to '<strong>change the device option to name the GPU device in your computer</strong>'. Apparently this has something to do with THEANO_FLAGS and .theanorc, but nowhere am I able to find out what these are: are they files? If so where do I find them? The tutorial seems to be assuming some knowledge that I don't have!</p> <p>Thanks for taking the time to read this: any and all answers are greatly appreciated - I am very much completely stuck at the moment!</p>
[ { "AnswerId": "18189699", "CreationDate": "2013-08-12T14:32:25.263", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p><code>THEANO_FLAGS</code> is an environment variable and .theanorc is a configuration file. You can use both mechanism to configure Theano. This is described <a href=\"http://deeplearning.net/software/theano/library/config.html\" rel=\"nofollow noreferrer\">here</a>.</p>\n\n<p>I never heard of the .pam_environment file. Also, you shouldn't just override the value of <code>LD_LIBRARY_PATH</code>, but append/prepend to it like this:</p>\n\n<pre><code>LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-5.5/lib64/lib\n</code></pre>\n\n<p>For Theano, if you define <code>CUDA_ROOT</code>, you don't need to modify <code>LD_LIBRARY_PATH</code>, so I would just remove the last line.</p>\n\n<p>Normally, if your shell is bash, people define the env variable <code>CUDA_ROOT</code> in the .bashrc file like this:</p>\n\n<pre><code>export CUDA_ROOT=/usr/local/cuda-5.5/bin\n</code></pre>\n\n<p>The change to .bashrc will only be used if you log out and log it again.</p>\n" }, { "AnswerId": "21170740", "CreationDate": "2014-01-16T19:11:04.887", "ParentId": null, "OwnerUserId": "851699", "Title": null, "Body": "<p>On Linux/OSX:</p>\n\n<p>Edit or create the file <code>~/.theanorc</code>. The file should contain:</p>\n\n<pre><code>[global]\nfloatX = float32\ndevice = gpu0\n\n[nvcc]\nfastmath = True\n\n[cuda]\nroot=/usr/local/cuda-5.5/ \n# On a mac, this will probably be /Developer/NVIDIA/CUDA-5.5/\n</code></pre>\n\n<p>You need to add cuda to the $LD_LIBRARY_PATH variable. If you're running eclipse, you can go to Project properties > Interpreters > Configure and interpreter ... > Environment, and then add an LD_LIBRARY_PATH variable that points to your cuda lib folder (probably /Developer/NVIDIA/CUDA-5.5/lib64)</p>\n\n<p>Now when you import theano it should print a message about finding the gpu. You can run the test code at <a href=\"http://deeplearning.net/software/theano/tutorial/using_gpu.html\">http://deeplearning.net/software/theano/tutorial/using_gpu.html</a> to see if it's using the gpu.</p>\n" } ]
18,174,876
1
<python><scipy><theano>
2013-08-11T17:49:07.123
null
2,491,687
how to find elements and dimensions of theano csr_matrix?
<ul> <li>What is the difference between theano.sparse and scipy.sparse?</li> <li>How can I find the dimensions and elements of a scipy.sparse.csr_matrix()?</li> </ul>
[ { "AnswerId": "18189847", "CreationDate": "2013-08-12T14:38:05.080", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>theano.sparse.csr_matrix is a symbolic variable. It don't contain any data. So if you access its .shape and .data, you also obtain a symbolic variable.</p>\n\n<p>Symbolic variable are used to create a Theano graph that you can compile to a function. So this mean Theano is a compiler. As all compiler, it work in 2 steps:</p>\n\n<ol>\n<li>Create a Theano graph and compile it</li>\n<li>Use the compiled function.</li>\n</ol>\n\n<p>There is a shortcut, you can do this to hide the compilation phase:</p>\n\n<pre><code>a_theano_symbolique_variable.eval().\n</code></pre>\n\n<p>If this need input to evalute the symbolic variable, you can pass it as parameters to eval() like this:</p>\n\n<pre><code>a_theano_symbolique_variable.eval(another_theano_var=the_value_of_the_theano_var)\n</code></pre>\n" } ]
19,489,259
2
<python-2.7><machine-learning><computer-vision><theano><mnist>
2013-10-21T08:13:19.543
null
667,127
Accessing Data of a Theano Shared Variable
<p>I've successfully loaded the MNIST dataset into Theano shared variables as follows</p> <pre></pre> <p>My question is: how do I access the data in both train_set_x and train_set_y? Each image in the data set is 28 * 28 pixels, that is, a vector of length 784 with all elements as floats representing values between 0.0 and 1.0 inclusive. The labels are cast to int because each represents the label associated with a vector image and is a value between 0 and 9. I want to be able to loop over the train_set_x matrix images and train_set_y labels to view the data of each image and its label separately and eventually plot the images on screen.</p>
[ { "AnswerId": "19515782", "CreationDate": "2013-10-22T10:50:24.620", "ParentId": null, "OwnerUserId": "667127", "Title": null, "Body": "<p>@Nouiz has pointed out the right way to show the values of both train_set_x and train_set_y. The problem was related to the environment variable \"DYLD_FALLBACK_LIBRARY_PATH\" which was not setup.\nI have a couple of python installations on my mac machine. An installation that was there as part of XCode. Another one which I have installed from python.org and a third installation which I installed from anaconda. Internally only the anaconda' python was able to run native c code for theano. My problem was due to me using the other python installations.\nThe way I fixed this was by editing the some lines in .bash_profile in my home directory. I pointed the default version to be the one I installed with anaconda and also set the environment variable DYLD_FALLBACK_LIBRARY_PATH=\"/Users/Me/anaconda/lib\". This solved the problem and everything works like a charm.</p>\n" }, { "AnswerId": "19495252", "CreationDate": "2013-10-21T12:59:18.563", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>First, <code>train_set_x</code> and <code>train_set_y</code> <em>(before the cast)</em> and <code>train_set</code> are separate copy of the same train set. So I suppose you simplified your example too much, as you say that <code>train_set_x</code> is the input and <code>train_set_y</code> is the corresponding label and this don't make sense with the code.</p>\n\n<p>The answer of you question depend of the contain of <code>mnist.pkl.gz</code>. Where did you get it? From the Deep Learning Tutorial? For my answer, I'll suppose <code>train_set</code> is a 2d numpy <code>ndarray</code>. So that you use a different <code>mnist.pkl.gz</code> file then the one from DLT.</p>\n\n<p>With that supposition, you can call <code>train_set_x.get_value()</code> and this will return a copy of the <code>ndarray</code> in shared variable. If you don't want a copy, you can do <code>train_set_x.get_value(borrow=True)</code> and this will work. If the shared variable is on the GPU, this will copy the data from the GPU to the CPU, but it won't copy the data if it is already on the CPU.</p>\n\n<p><code>train_set_y</code> is a <strong>Theano graph</strong>, not a <strong>Theano shared variable</strong>. So you can't call <code>get_value()</code> on it. You need to compile and run the graph that give <code>train_set_y</code>. If you want to evaluate it only once, you can call <code>train_set_y.eval()</code> as a shortcut to compile and run it as it do not take any input except shared variable.</p>\n\n<p>So you can do this:</p>\n\n<pre><code>for x,y in zip(train_set_x.get_value(), train_set_y.eval()):\n print x, y\n</code></pre>\n" } ]
19,581,711
1
<theano><pymc>
2013-10-25T05:23:32.727
19,644,730
2,918,385
Assembling univariate priors into a matrix for use in MvNormal
<p>When using pymc 3 , is it possible to assemble univariate random variables into a matrix which is then used as a prior for a multivariate distribution? If so, how can I best go about this?</p> <p>Here is a specific example. I would like to take three R.V.'s and create a triangle matrix, A, with them:</p> <pre></pre> <p>After some manipulation I would then use this matrix as the prior for the precision parameter in the multivariate normal distribution.</p> <p>I assume this probably has more to do with operations with tensor variables in theano, so I will add the theano tag as well.</p> <p>Thank you for your time!</p> <p>Edit 1: Here is a minimal example of what I am trying to do:</p> <pre></pre> <p>Edit 2: Here is a test to show that seems to do the job outside of pymc</p> <pre></pre>
[ { "AnswerId": "19644730", "CreationDate": "2013-10-28T20:43:04.490", "ParentId": null, "OwnerUserId": "359944", "Title": null, "Body": "<p>Looks like <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#tensor.stacklists\" rel=\"nofollow\"><code>stacklist</code></a> might do what you want. </p>\n" } ]
19,720,332
2
<pymc><theano>
2013-11-01T03:30:17.457
null
1,502,840
How to build a model requiring external package in PyMC3?
<p>I'm not sure if this is a PyMC3 question or a Theano question. I've used PyMC2 for a long time to fit a cosmology to supernova data. This requires some messy integrals (see i.e. <a href="http://arxiv.org/abs/astroph/9905116" rel="nofollow noreferrer">http://arxiv.org/abs/astroph/9905116</a> )</p> <p>So I use a package in python called Cosmolopy to do the integration and for some other convenience functions. Whereas this used to work fine with PyMC2, with the reliance on theano in PyMC3, I can't figure out if there is even a way to use Cosmolopy.</p> <p>Here is some example code of my current understanding of how to build a model in PyMC3</p> <pre class="lang-py prettyprint-override"></pre> <p>This code crashes because Cosmolopy expects a float for omega_matter but receives a theano.TensorVariable instead.</p> <p>So the question is two-fold:</p> <ol> <li><p>Am I just missing something syntactically with PyMC3 that would allow me to do this (possibly because I am still stuck somehow on PyMC2 model-building)?</p></li> <li><p>If not 1, then do I need to find a way to do the integrals in theano?</p></li> </ol>
[ { "AnswerId": "24258828", "CreationDate": "2014-06-17T08:13:07.617", "ParentId": null, "OwnerUserId": "3609835", "Title": null, "Body": "<p>I think one possible solution would be to write a custom Theano Op following the instructions at <a href=\"http://deeplearning.net/software/theano/extending/\" rel=\"nofollow\">http://deeplearning.net/software/theano/extending/</a> </p>\n\n<p>I would write a pure Python op without support for gradient computation, in which you would only have to implement the make_node() and perform() methods.</p>\n" }, { "AnswerId": "19734538", "CreationDate": "2013-11-01T20:05:25.737", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>I don't know well PyMC3, but I know well Theano. Theano use symbolic compiler and TensorVariable are such symbolic variable. You need to compile and execute the function to get a value out of it. I don't know where to do this in PyMC3. A fast thing to try that will work if the variable depend only on constant and shared variable is to do this call::</p>\n\n<pre><code>the_tensor_variable.eval()\n</code></pre>\n\n<p>This will compile the function and suppose it don't take any variable input and if it compile, it will run it and return the value.</p>\n" } ]
19,798,958
1
<python><pymc><theano>
2013-11-05T21:06:17.270
19,868,858
2,918,385
Supplying test values in pymc 3
<p>I am exploring the use of bounded distributions in pymc. I am trying to bound a Gamma prior distribution between two values. The model specification seems to fail due to the absence of test values. How may I pass a testval argument such that I am able to specify these sorts of models?</p> <p>For completeness I have included the error, as well as a minimal example below. Thank you!</p> <p></p> <pre></pre> <p>edit: for reference purposes, here is a simple working model utilizing a bounded gamma prior distribution:</p> <pre></pre>
[ { "AnswerId": "19868858", "CreationDate": "2013-11-08T21:31:53.353", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Use that line:</p>\n\n<pre><code>xbound = BoundedGamma('xbound', alpha=1, beta=2, testval=1)\n</code></pre>\n" } ]
20,135,786
1
<theano>
2013-11-22T02:18:51.443
null
550,879
Can I avoid using `Theano.scan`?
<p>I have a 3-dimensional tensor ( -- an array of matrices), and I'd like to compute the determinant () of each matrix. Is there a way to compute each determinant without using ? When I try calling directly on the tensor I get the error</p> <pre></pre> <p>But I read that is slow and doesn't parallelize well, and that one should use only tensor operations if possible. Is that so? Can I avoid using scan in this case?</p>
[ { "AnswerId": "20149947", "CreationDate": "2013-11-22T16:38:10.997", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>I see 3 possibilities:</p>\n\n<ul>\n<li>If you know before compiling the Theano function the number of matrix in the tensor3 variable, you could use the split() op or just call det() on all matrix in the tensor3.</li>\n<li>If you don't know the shape, you can make your own op, that will loop over the input and call the numpy fct. See for <a href=\"http://www.deeplearning.net/software/theano/tutorial/extending_theano.html\" rel=\"nofollow\">an example</a> on how to make an op.</li>\n<li>Use scan. It is easy to use it for this case. See <a href=\"https://github.com/nicholas-leonard/Theano/commit/74a6265f15ef332d98e3e55aef0bc3c979682269\" rel=\"nofollow\">this example</a>, just change the call from tensordot to det().</li>\n</ul>\n" } ]
20,235,070
2
<python><numpy><theano>
2013-11-27T06:04:31.567
20,235,613
179,081
Theano element wise maximum
<p>I'm trying to find the value of element-wise on a matrix in theano. I don't have much experience with theano.</p> <p>So far I have</p> <pre></pre> <p>This works, but isn't pretty and I thought I should be able to use for this.</p> <p>In matlab, to do what I want to do, I would just write </p>
[ { "AnswerId": "20235613", "CreationDate": "2013-11-27T06:41:09.193", "ParentId": null, "OwnerUserId": "2015778", "Title": null, "Body": "<p>This works for me:</p>\n\n<pre><code>import theano.tensor as T\nfrom theano import function\n\nx = T.dmatrix('x')\nlinmax = function([x], T.maximum(x,0))\n</code></pre>\n\n<p>Testing:</p>\n\n<pre><code>linmax([[-1,-2],[3,4]])\n</code></pre>\n\n<p>Outputs:</p>\n\n<pre><code>array([[0.,0.],[3.,4.]])\n</code></pre>\n" }, { "AnswerId": "24887725", "CreationDate": "2014-07-22T12:41:56.670", "ParentId": null, "OwnerUserId": "3760518", "Title": null, "Body": "<p>I have seen this implemented as </p>\n\n<pre><code>s = x*(x&gt;0)\n</code></pre>\n\n<p>several times. Dont know if that's faster than T.maximum()</p>\n" } ]
20,237,812
1
<python><function><operation><theano>
2013-11-27T08:59:28.013
20,244,704
179,081
What is the difference between an Op and a Function
<p>Theano has Ops and functions.<br> <strong>What is the difference?</strong></p> <p>Functions seem nice and easy to define, e.g.: </p> <pre></pre> <p>Ops seem complex to define. All abstract classes and such, but things like and are defined as Ops. I'm not too sure on the difference.</p> <p>How would I write the above function as an Op?</p>
[ { "AnswerId": "20244704", "CreationDate": "2013-11-27T14:11:09.900", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>theano.function() return a python object that is callable. So you can use it do the the computation you described when it was called.</p>\n\n<p>Theano Ops are part of the symbolic graph that describe the computation that you want. Do not forget that Theano have two step as many other language as C and others. You first need to describe the computation that you want, then compile. In C, you define that computation in text file. In Theano, you describe it with a Theano symbolic graph and that graph include Ops.</p>\n\n<p>Then you compile, with possible gcc for C and with theano.function() in Theano.</p>\n\n<p>So Op is the element op the symbolic graph. It describe the computation done at one point in the graph. This page in Theano tutorial describe the graph in more detail:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/tutorial/symbolic_graphs.html#theano-graphs\" rel=\"nofollow\">http://deeplearning.net/software/theano/tutorial/symbolic_graphs.html#theano-graphs</a></p>\n\n<p>This page describe how to make an Op in Theano:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/tutorial/extending_theano.html\" rel=\"nofollow\">http://deeplearning.net/software/theano/tutorial/extending_theano.html</a></p>\n\n<p>You can skip the section for optional part. So you can skip most of that page if you don't plan to make one and just want to understand the usage.</p>\n" } ]
20,284,663
2
<python><theano>
2013-11-29T11:25:19.870
20,285,275
522,000
Can somebody help explain a line of code in an example of Theano tutorial?
<p>In the <a href="http://deeplearning.net/tutorial/logreg.html" rel="nofollow">logistic regression example</a> provided in the Theano tutorial, there is one line of code in the function as below:</p> <pre></pre> <p>Can someone help explain what exactly the use of square bracket in the last line of the above code? How is gonna be interpreted? </p> <p>Thanks!</p>
[ { "AnswerId": "20285275", "CreationDate": "2013-11-29T11:56:41.720", "ParentId": null, "OwnerUserId": "1398797", "Title": null, "Body": "<p>You have most of the information you need in the comments of the function.</p>\n\n<p><code>T.log(self.p_y_give_x)</code> returns a numpy matrix.</p>\n\n<p>So the [T.arange(y.shape[0]), y] is a slice of the matrix. Here we are using numpy advanced slicing. See: <a href=\"http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html\" rel=\"nofollow\">http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html</a></p>\n" }, { "AnswerId": "35822682", "CreationDate": "2016-03-06T03:13:59.817", "ParentId": null, "OwnerUserId": "5424661", "Title": null, "Body": "<p>I'm also confused about the matrix slicing here. T.arange(y.shape[0]) is a 1d list. y.shape[0] depends on the size of the mini batch you set. y is a list of labels who has the same dimension as T.arange(y.shape[0]). Therefore, this slicing, according to @William Denman's reference, means: for every row in the matrix of T.log(self.p_y_give_x), we select a column index y(in which y denotes the golden label, who is also been used as index here).</p>\n" } ]
20,376,435
1
<python><tensorflow><neural-network><keras><autoencoder>
2013-12-04T13:17:32.023
null
2,873,565
Auto-encoder to reduce input data size
<p>Currently, I want to use an autoencoder to reduce the input data size in order to use the reduced data for another neural network. My task is to take a video and then give the images of the video to the autoencoder. When I use only a few images as input, the autoencoder works well, but when I want to have a sequence of images, it does not. </p> <p>Imagine taking a video of a moving ball. We have, for example, 200 images. If I use the autoencoder for 200 images the error is big, but if I use it for only 5 images, the reconstruction error is small and acceptable. It seems that the autoencoder does not learn the sequence or temporal movement of the ball. I also tried a denoising stacked autoencoder, but the results are not good. </p> <p>Does anyone know what the problem is, or whether it is possible to use the autoencoder for this task? </p>
[ { "AnswerId": "53204813", "CreationDate": "2018-11-08T09:27:55.523", "ParentId": null, "OwnerUserId": "5108062", "Title": null, "Body": "<p>Autoencoders/Variational Autoencoders does not learn about sequences, it learns to \"map\" the input data to a latent space which has fewer dimensions. For example if the image is <code>64x64x3</code> you could map that to a <code>32 dim</code> tensor/array.</p>\n\n<p>For learning a sequence of images, you would need to connect the output of the autoencoder encoder part to a RNN (LSTM/GRU) which could learn about the sequence of the encoded frames (consecutive frames in latent space). After that, the output of the RNN could connect to the decoder part of the autoencoder so you could see the reconstructed frames.</p>\n\n<p><a href=\"https://github.com/gaborvecsei/Neural-Network-Dreams\" rel=\"nofollow noreferrer\">Here you can find a GitHub project which tries to encode the video frames and then predict sequences</a></p>\n" } ]
20,415,867
1
<generics><printing><types><probability><theano>
2013-12-06T04:16:58.417
null
2,514,749
Theano printing probabilities for test-set samples
<p>in DL tutorials I'm trying to print the probability of the test samples according to <a href="https://stackoverflow.com/questions/17323040/what-is-the-prupose-meaning-of-passing-input-to-a-function-in-theano">What is the prupose/meaning of passing &quot;input&quot; to a function in Theano?</a> but I get the following Error. Do I need to add some theano_flags? </p> <p>how to solve the problem?</p> <blockquote> <p>TypeError: Cannot convert Type Generic (of Variable ) into Type TensorType(float64, matrix). You can try to manually convert into a TensorType(float64, matrix).</p> <p>(number of features of my data=120,classes=2,test_set batch size=1)</p> </blockquote> <p>part of the code is:</p> <p>from theano import pp</p> <pre></pre>
[ { "AnswerId": "20481585", "CreationDate": "2013-12-09T21:52:39.857", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>There is an error in your code when you create values. test_set_x.get_value is a python function. So it should be test_set_x.get_value() as you want the value, not the callable that return it. As theano.shared() receive as input a callable, it create a generic Theano varible that isn't a tensor. So when you try to replace x that is a tensor variable with a generic variable, it raise an error as it is not an allowed replacement.</p>\n\n<p>But even better, you don't need to create a new shared variable, just compile the function like this:</p>\n\n<pre><code> f=theano.function([],classifier.p_y_given_x, \n givens={x:test_set_x},on_unused_input='ignore')\n</code></pre>\n" } ]
20,530,874
1
<python><theano>
2013-12-11T22:03:44.663
null
648,896
Finding closest point with Theano
<p>Among a set of weight vectors, how can I find the one closest to a given instance vector in an optimized Theano way? Or do I need to use numpy?</p>
[ { "AnswerId": "20570978", "CreationDate": "2013-12-13T16:13:28.727", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>This question was answered on Theano mailing list:</p>\n\n<p><a href=\"https://groups.google.com/forum/#!topic/theano-users/J-l9UmpSG2Y\" rel=\"nofollow\">https://groups.google.com/forum/#!topic/theano-users/J-l9UmpSG2Y</a></p>\n" } ]
20,590,909
1
<python><theano>
2013-12-15T03:32:05.730
20,602,231
429,726
Returning the index of a value in Theano vector
<p>What is the procedure in Theano for returning the index of a particular value in a Vector? In NumPy, this would be . Theano's is a switch statement.</p>
[ { "AnswerId": "20602231", "CreationDate": "2013-12-16T01:46:20.903", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>There is 2 behavior of numpy.where(condition, [x ,y]). Theano always support you provide 3 parameter to where(). As said in NumPy doc[1], numpy.where(cond) is equivalent to nonzero().</p>\n\n<p>You can do it like this in Theano:</p>\n\n<pre><code>import theano\nimport numpy as np\nv = np.arange(10)\nvar = theano.tensor.vector()\nout = theano.tensor.eq(var, 2).nonzero()[0]\nprint out.eval({var: v})\n</code></pre>\n\n<p>Check line 5. NumPy nonzero() return a tuple. Theano do the same. There is one vector in that tuple per dimensions in the input of nonzero().</p>\n\n<p>[1] <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html\" rel=\"noreferrer\">http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html</a></p>\n" } ]
21,112,016
1
<c++><python-2.7><theano>
2014-01-14T11:07:55.593
21,121,992
2,546,106
Reusing compiled Theano functions
<p>Suppose I have implemented the following function in Theano:</p> <pre></pre> <p>When I try to run it a graph of computations is constructed, the function gets optimized and compiled. </p> <p>How can I reuse this compiled chunk of code from within a Python script and/or a C++ application?</p> <p><strong>EDIT:</strong> The goal is to construct a deep learning network and reuse it in a final C++ app.</p>
[ { "AnswerId": "21121992", "CreationDate": "2014-01-14T19:14:33.227", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Currently this isn't possible. There is user that modified Theano to allow pickling the Theano function, but during unpickling we already re optimize the graph.</p>\n\n<p>There is a Pull Request that allow Theano to generate a C++ library. The user can then compile it himself and use it as a normal C++ library. The lib links against the python lib and requires numpy to be installed. But this isn't ready for broad usage.</p>\n\n<p>What is your goal? To save on the compilation time? If so Theano already caches the c++ module that it compiles, so the next time it is reused, the compilation will be faster. But for a big graph, the optimization phase is always redone as told above, and this can take a significant time.</p>\n\n<p>So what is your goal?</p>\n\n<p>This is something that we are working on. Make sure to use the latest Theano release (0.6) as it compiles faster. The development version is also a little faster.</p>\n" } ]
21,290,373
2
<python><numpy><theano>
2014-01-22T17:55:41.377
21,323,254
899,275
Theano integration with already existing python code
<p>I have a simple question, for which I expected to already have found an easy answer online, but I did not.</p> <p>I have a python project that works a lot with numpy, due to matrix operations. I wanted to speed the code up, so I did some profiling to find out the right tool for the job. Turns out the overhead is not python, but the matrix operations. Hence I thought I should use Theano (especially given the case that I am implementing machine learning algorithms, and this is what is was made for).</p> <p>Most of the overhead of my project is in one function, and I was wondering it is somehow possible to just rewrite that function with theano and then get the numpy arrays out of it and continue the computation as usual.</p> <p>This is again just to test how much speed up I will obtain without committing myself to changing a lot of code.</p> <p>Thank you!</p> <p>Edit: function in case is this one</p> <pre></pre>
[ { "AnswerId": "21320362", "CreationDate": "2014-01-23T22:08:51.633", "ParentId": null, "OwnerUserId": "425797", "Title": null, "Body": "<p>If you have code running in Numpy, translating it to Theano is not going to make it magically faster, and it's going to take a significant coding effort.</p>\n\n<p>Theano is really nice, but choosing it is more of a design-time decision - you wanna have all the niceties of symbolic differentiation so you don't have to calculate your gradients by hand for backpropr, so you use that framework. </p>\n\n<p>If you already have the Numpy code ready, just optimize the bottleneck, which in your case must be that dot product. I would try using the dot function instead of einsum, i.e. instead of:</p>\n\n<pre><code>dw = np.einsum('ij,ik-&gt;jk', layerValues[layer - 1], deDz)\n</code></pre>\n\n<p>Try:</p>\n\n<pre><code>dw = layerValues[layer-1].T.dot(deDz)\n</code></pre>\n\n<p>Sometimes einsum is dumb so that might be faster. If not, consider either </p>\n\n<ol>\n<li>using the GPU for the matrix-vector multiplication, via gnumpy <em>or</em></li>\n<li>using a better algorithm that'll converge faster than plain old gradient descent (which I assume is what you're using): adagrad or a second-order optimization algorithm - lbfgs, say - </li>\n</ol>\n" }, { "AnswerId": "21323254", "CreationDate": "2014-01-24T02:24:03.667", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>You don't need to change all your script to use Theano. You can re implement just part of your code with Theano and it will work. Theano take numpy ndarray as input and return numpy ndarray as output by default. So the integration is easy with numpy.</p>\n\n<p>Theano don't implement einsum. So I would recommend you to start by replacing that with call to dot as Patrick said. I have see and hear many times that einsum is slower then call to dot in some cases. If that speed up isn't good enough, Theano can help you. If you just move the dot call to Theano, Theano won't be faster if Theano and NumPy are linked to the same BLAS library. But Theano should make you code faster if you move more computation to it. But it isn't perferct and some case don't have speed up and rare cases have slowdown compared to NumPy(mostly when the input shapes aren't big enough like with scalar) </p>\n\n<p>About Patrick answer, you don't need to use the symbolic gradient of Theano to benefit from Theano. But if you want to use the symbolic gradient, Theano can only compute the symbolic gradient inside computation graph done in Theano. So you will need to convert that part completely to Theano. But as your code already work, it mean you have manually implement the grad. This is fine and don't cause any problem. You can move that manual implementation of the grad to Theano without using the symbolic gradient.</p>\n\n<p>About Patrick comment on GPU. Don't forget that the transfer CPU/GPU of data is the most costly operation on the GPU. It can completely cancel the GPU speed up in many cases. So it isn't sure that doing only the dot on the GPU will help. In Theano, we put the weight on the GPU and without doing that I don't think you can get speed up from the GPU (which ever gnumpy/Theano/something else). The cases when doing only the DOT on the GPU will still give speed up is with gigantic matrices.</p>\n" } ]
21,342,931
3
<python><theano>
2014-01-24T21:36:38.557
21,345,831
217,802
Error importing Theano
<p>After installing python, numpy, scipy and theano to ~/.local, I tried to import theano but it threw an error:</p> <pre></pre> <p>I'm installing on a Red Hat box:</p> <pre></pre> <p>What should I do...?</p>
[ { "AnswerId": "54492982", "CreationDate": "2019-02-02T12:13:35.980", "ParentId": null, "OwnerUserId": "1744914", "Title": null, "Body": "<p>In case you are using <a href=\"https://github.com/pyenv/pyenv\" rel=\"nofollow noreferrer\">pyenv</a> you can do that with</p>\n\n<pre><code>CONFIGURE_OPTS=--enable-shared pyenv install 3.6.5\n</code></pre>\n" }, { "AnswerId": "21345831", "CreationDate": "2014-01-25T02:25:28.387", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>You didn't build correctly python. It wasn't compiled with the -fPIC parameter. Look at how to compile python with a shared library.</p>\n\n<p>EDIT:\nYou need to compile python like this:</p>\n\n<pre><code>./configure --enable-shared\nmake\nmake install\n</code></pre>\n" }, { "AnswerId": "44847761", "CreationDate": "2017-06-30T13:32:29.110", "ParentId": null, "OwnerUserId": "8047453", "Title": null, "Body": "<p>I used a virtualenv on Ubuntu 16.04, and before installing python with <code>pyenv install 3.6.1</code> I ran the following command to ensure python is built with the fPIC flag:</p>\n\n<pre><code>export CONFIGURE_OPTS=\"OPT=\\\"-fPIC\\\"\"\n</code></pre>\n\n<p>Then I was able to install and run theano properly.</p>\n" } ]
21,474,782
1
<machine-learning><neural-network><theano>
2014-01-31T07:37:33.620
21,483,126
3,256,363
How can we see the transformed value z, in autoencoder link: http://deeplearning.net/tutorial/dA.html
<p>How can we see the z value, which is the reconstruction of x (the dataset)?</p> <p>Please see the link: <a href="http://deeplearning.net/tutorial/dA.html" rel="nofollow">http://deeplearning.net/tutorial/dA.html</a></p>
[ { "AnswerId": "21483126", "CreationDate": "2014-01-31T15:01:28.823", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The function \"get_reconstructed_input\" will return a Theano variable that represent z from the hidden representation. Check the function \"get_cost_updates\" It use it to train the model.</p>\n\n<pre><code> tilde_x = self.get_corrupted_input(self.x, corruption_level)\n y = self.get_hidden_values( tilde_x)\n z = self.get_reconstructed_input(y)\n</code></pre>\n\n<p>If you don't want to train the model, you can do this:</p>\n\n<pre><code> y = self.get_hidden_values(self.x)\n z = self.get_reconstructed_input(y)\n</code></pre>\n\n<p>To make an executable function that compute this:</p>\n\n<pre><code>f = theano.function([x], z)\n</code></pre>\n" } ]
21,608,025
5
<theano>
2014-02-06T16:07:51.887
21,703,319
1,724,926
How to set up theano config
<p>I'm new to Theano. Trying to set up a config file.</p> <p>First of all, I notice that I have no .theanorc file:</p> <ol> <li> - returns nothing</li> <li> - returns nothing</li> <li> - passes ok</li> </ol> <p>I'm guessing some default configuration was created when I installed theano. Where is it?</p>
[ { "AnswerId": "21703319", "CreationDate": "2014-02-11T13:32:15.160", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Theano does not create any configuration file by itself, but has default values for all its configuration flags. You only need such a file if you want to modify the default values.</p>\n\n<p>This can be done by creating a .theanorc file in your home directory. For example, if you want floatX to be always float32, you can do this:</p>\n\n<pre><code>echo -e \"\\n[global]\\nfloatX=float32\\n\" &gt;&gt; ~/.theanorc\n</code></pre>\n\n<p>under Linux and Mac. Under windows, this can also be done. See this page for more details:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/library/config.html\" rel=\"noreferrer\">http://deeplearning.net/software/theano/library/config.html</a></p>\n" }, { "AnswerId": "39484318", "CreationDate": "2016-09-14T07:02:45.287", "ParentId": null, "OwnerUserId": "6713172", "Title": null, "Body": "<p>I had a similar question and this is what helped me:</p>\n\n<pre><code>import theano\n//...\ntheano.config.floatX = 'float32' //or 'float64' whatever you want\n</code></pre>\n" }, { "AnswerId": "47619039", "CreationDate": "2017-12-03T13:47:55.660", "ParentId": null, "OwnerUserId": "4494896", "Title": null, "Body": "<p>I have been having similar problems. I have NVIDIA 1070 GPU on a desktop machine with Asus Z270E motherboard and was able to import theano after setting up the .theanorc file as below. (And rebooting afterwards)</p>\n\n<pre><code>[global]\nfloatX = float32\ndevice = gpu\n\n[cuda]\nroot = /usr/local/cuda\n[lib]\ncnmem = 1 \n</code></pre>\n" }, { "AnswerId": "44446622", "CreationDate": "2017-06-08T22:36:05.993", "ParentId": null, "OwnerUserId": "1673784", "Title": null, "Body": "<p>This worked for me:</p>\n\n<pre><code>nano ~/.theanorc\n</code></pre>\n\n<p>Then I entered:</p>\n\n<pre><code>[global]\nfloatX = float32\ndevice = cuda\n</code></pre>\n\n<p>Code to check if Theano is using the GPU is on the <a href=\"http://deeplearning.net/software/theano/tutorial/using_gpu.html\" rel=\"nofollow noreferrer\">Theano doc page</a>.</p>\n\n<p>(I am using Ubuntu 14.04, Theano 0.9.0 (conda), NVIDIA 1080 Ti GPU).</p>\n" }, { "AnswerId": "40015751", "CreationDate": "2016-10-13T08:19:01.857", "ParentId": null, "OwnerUserId": "4755098", "Title": null, "Body": "<p>In Linux in terminal Home directory write:</p>\n\n<pre><code>nano .theanorc\n</code></pre>\n\n<p>In the file copy the following lines</p>\n\n<pre><code>[global]\nfloatX = float32\ndevice = gpu0\n\n[lib]\ncnmem = 1 \n</code></pre>\n\n<p>Save it.</p>\n\n<p>When I import theano in python I was having cnmem memory problems. Seems that is because the monitor is connected to the gpu. To resolve it change cnmem to 0.8. This number below 1 is the percentage of gpu reserved for theano</p>\n" } ]
21,644,214
1
<python><theano>
2014-02-08T09:19:04.863
null
1,953,384
IPython module import error: /usr/bin/ld: cannot find -lpython2.7. collect2: ld returned 1 exit status
<p>I'm using the Python module <a href="https://github.com/Theano/" rel="nofollow">Theano</a> on a server. It is not pre-installed on there so I installed it in my home folder along with some other modules that weren't on the server. I get the following error when I "import theano" in IPython.</p> <pre></pre> <p>How can I fix the above error?</p> <p>Another thing is that whenever I run a Python job on the server, I first do</p> <pre></pre> <p>before executing my Python script and the server has libpython2.6.so in its /usr/lib64 folder. I think this is related to the problem.</p>
[ { "AnswerId": "21703090", "CreationDate": "2014-02-11T13:23:03.837", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Your python installation is incomplete. When you do:</p>\n\n<pre><code>module load libs/python/2.7.3\n</code></pre>\n\n<p>information is added to your environment variables to make you use python 2.7.3. But this version doesn't include the development header of python (Theano needs this). Or it doesn't put the right path in your environment.</p>\n\n<p>To debug this, run this before and after running \"module load libs/python/2.7.3\" to compare what module load does:</p>\n\n<pre><code>env &amp;&gt; BEFORE.txt\nmodule load libs/python/2.7.3\nenv &amp;&gt; AFTER.txt\ndiff BEFORE.txt AFTER.txt\n</code></pre>\n\n<p>Then check the paths added to your env variables and look in those directories. It should have modified the LD_LIBRARY_PATH variable, but it should have done the same modification to LIBRARY_PATH. If it doesn't, do that modification yourself and tell your system admin about this.</p>\n\n<p>This will solve your problem.</p>\n\n<p>Otherwise, use the python 2.6 from the OS, maybe in include the development version.</p>\n\n<p>Otherwise, check if other python versions are available via module.</p>\n\n<p>Lastly, contact your system admin to add the development and shared version of python or install it yourself:</p>\n\n<pre><code>wget -c http://www.python.org/ftp/python/2.7.6/Python-2.7.6.tgz\ntar -jxf Python-2.7.6.tar.bz2 \ncd Python-2.7.6\n./configure --prefix=~/python2.7.6 --enable-shared\nmake\nmake install\n</code></pre>\n" } ]
21,713,148
1
<python><macos><pymc><theano><psycopg>
2014-02-11T21:13:55.817
21,714,067
283,296
psycopg2, pymc, theano and DYLD_FALLBACK_LIBRARY_PATH
<p>I am unable to use along with . The following simple snippet from the tutorial:</p> <pre></pre> <p>results in the following error:</p> <blockquote> <p>Exception: The environment variable 'DYLD_FALLBACK_LIBRARY_PATH' does not contain the '/Users/josh/anaconda/envs/py27/lib' path in its value. This will make Theano unable to compile c code. Update 'DYLD_FALLBACK_LIBRARY_PATH' to contain the said value, this will fix this error.</p> </blockquote> <p>I was able to fix this problem by adding:</p> <pre></pre> <p>to my shell init file . <strong>However</strong>, and this is the part I don't understand, that line breaks :</p> <pre></pre> <p>How can I have and (here ) live happily together?</p> <p>This is on OS X with a Python 2.7.6 installation with an Python environment created with <a href="http://docs.continuum.io/conda/index.html" rel="nofollow">conda</a>.</p>
[ { "AnswerId": "21714067", "CreationDate": "2014-02-11T22:04:06.240", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The development version of Theano don't need changes to DYLD_FALLBACK_LIBRARY_PATH. So undo the change to it and update your Theano version. From:</p>\n\n<p><a href=\"http://www.deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\">http://www.deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions</a></p>\n\n<p>Run one of those 2 command depending of your need:</p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git --install-option='--prefix=~/.local'\npip install --upgrade --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n\n<p>EDIT: I removed the link to an answer elsewhere and copied the answer here. Thanks</p>\n" } ]
21,945,079
0
<python><machine-learning><theano>
2014-02-21T21:01:12.113
null
742,616
Theano maximum likelihood for geometric brownian motion
<p>I'm trying to do maximum likelihood estimation (MLE) for a geometric brownian motion with Theano. I know, Theano is primarily a ML library, but it <em>should</em> work... (of course I'm just trying out Theano and my eventual goal is to do MLE for a slightly more complicated model....)</p> <p>Writing math at SO is a bitch so look up GBM e.g. <a href="http://en.wikipedia.org/wiki/Geometric_Brownian_motion" rel="nofollow">here</a>.</p> <p>Here's the Theano code, which is slightly modified from the logistic regression example in the Theano tutorial:</p> <pre></pre> <ul> <li>loghood is the log likelihood, which should be minimized w.r.t mu and sigma2</li> <li>Xdata is simulated from a GBM, dXdata[n] = Xdata[n+1] - Xdata[n]</li> <li>autodifferentiation should work OK, I also tried putting the gradients by hand</li> <li>costlist, mulist, siglist were added for debugging purposes</li> </ul> <p>So what I get is that the train function evaluates OK for the initial mu, sigma2, but right at step 2 where the mu, sigma2 are updated as per the gradients, the train function seems to backfire by giving either crazy numbers or nan:s...</p> <p>I compared the result with exact MLE and also minimization by scipy.optimize.minimize, which both work just fine. I think the problem is somewhere in the "loghood" above, but I just can't figure it out...</p> <p><strong>So can anyone please figure out where the code goes wrong?</strong></p>
[]
22,036,710
3
<theano><word2vec><deep-learning>
2014-02-26T09:15:35.600
34,802,772
956,730
How to compute a language model with word2vec tool?
<p>I'm trying to build a neural network language model and it seems that word2vec tool by Mikolov et al is a good tool for this purpose. I tried that but it just produces word representations. Does anybody know how i can produce a language model by that tool or any other reasonable deep learning framework?</p>
[ { "AnswerId": "26458679", "CreationDate": "2014-10-20T04:42:00.403", "ParentId": null, "OwnerUserId": "2440487", "Title": null, "Body": "<p>Microsoft Research has released a toolkit for language modelling with word2vec-style vectors. You can find it <a href=\"http://research.microsoft.com/en-us/projects/rnn/\" rel=\"noreferrer\">here</a>.</p>\n" }, { "AnswerId": "22037003", "CreationDate": "2014-02-26T09:27:12.083", "ParentId": null, "OwnerUserId": "3153644", "Title": null, "Body": "<p><code>word2vec</code> is a tool to represent a single word (o a group of words) as a numerical vector. So it is not directly related to a language model.</p>\n\n<p>To generate a Language model you can use the <a href=\"https://code.google.com/p/mitlm/\" rel=\"nofollow\">MITLM</a> to do it. For example you can create a N-gram model using the corpus <code>Lectures.txt</code> with this command:</p>\n\n<pre><code>estimate-ngram -text Lectures.txt -write-lm Lectures.lm\n</code></pre>\n\n<p>A great tutorial can be found <a href=\"http://projects.csail.mit.edu/cgi-bin/wiki/view/SLS/MITLMTutorial\" rel=\"nofollow\">here</a>.</p>\n" }, { "AnswerId": "34802772", "CreationDate": "2016-01-15T01:12:05.107", "ParentId": null, "OwnerUserId": "956730", "Title": null, "Body": "<p>Doc2Vec implemented in Gensim does the job. The trick is that they use the document ID as a context word, which is present in all window sizes of all the words in the document.</p>\n\n<p>Code is <a href=\"https://radimrehurek.com/gensim/models/doc2vec.html\" rel=\"nofollow\">here in Python/Gensim</a></p>\n" } ]
22,350,482
1
<python><python-2.7><theano>
2014-03-12T11:44:59.643
22,352,914
888,849
Theano installation in Windows 32bit
<p>I am trying to install Theano library on a windows 32-bit machine. I've already installed python 2.7, numpy, scipy, mingw. The next dependency is blas. How can I install it on Windows? I've also installed canopy in order to install pip. The next steps are to install Theano with:</p> <pre></pre> <p>Am I missing any step except in blas installation? When I try to use the canopy platform to perform the installation, I've noticed that I couldn't install from the package manager the needed dependencies “mingw 4.5.2” and “libpython 1.2”.</p> <p>I also tried to follow AnacondaCE instructions. I've downloaded it and configured it using Windows installer for Theano on AnacondaCE for Windows configuration. </p>
[ { "AnswerId": "22352914", "CreationDate": "2014-03-12T13:23:40.827", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>How did you install numpy/scipy/mingw? I think they are installed by default with canopy. As you tried many things, I would suggest that you remove all other python installation and only keep canopy. Canopy provide blas, so you don't need to install one. It is better to install the academic/full version of canopy as it provide a faster BLAS. Their should be newer version of mingw and libpython in canopy since the instruction was done. Install them.</p>\n\n<p>Then this is done, you only need to install theano with this command:</p>\n\n<pre><code>pip install --upgrade --no-deps Theano\n</code></pre>\n\n<p>All the command you provides are different way to install Theano. You only need one! If you want the development version, you can use this instead:</p>\n\n<pre><code>git clone git://github.com/Theano/Theano.git\ncd Theano\npython setup.py develop\n</code></pre>\n" } ]
22,527,072
1
<python><neural-network><backpropagation><theano><deep-learning>
2014-03-20T08:15:29.947
24,844,120
2,489,122
Multilayer perceptron with target variable as array instead of a single value
<p>I am new to deep learning and I have been trying to use the theano library to train my data. <a href="http://deeplearning.net/tutorial/mlp.html" rel="nofollow">MLP tutorial</a> here has a scalar output value while my use case has an array with a 1 corresponding to the value depicted in the output.</p> <p>For example (assume the possible scalar values are 0,1,2,3,4,5),</p> <pre></pre> <p>I have only modified the code to read my input and output (output now is a 2 dimensional array or matrix in the parlance of theano). Other parts of the code is as is from the MLP tutorial pasted above. <br></p> <p>The error I am getting is in the following function</p> <pre></pre> <p>Error stack:</p> <pre></pre> <p>I would like to know how to change this theano.function to accommodate the y value as a matrix. </p>
[ { "AnswerId": "24844120", "CreationDate": "2014-07-19T19:23:07.013", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>You need to define <code>y</code> as <code>T.imatrix()</code> instead of <code>T.lvector()</code>.</p>\n" } ]
22,579,246
1
<python><theano>
2014-03-22T15:00:41.380
22,612,374
826,983
Theano: Get matrix dimension and value of matrix (SharedVariable)
<p>I would like to know how to retrieve the dimensions of a SharedVariable in theano.</p> <p>This, for example, does not work:</p> <pre></pre> <p>and only returns</p> <pre></pre> <p>I am also interested in printing/retrieving the values of a matrix or vector.</p>
[ { "AnswerId": "22612374", "CreationDate": "2014-03-24T14:33:38.670", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>You can get the value of a shared variable like this:</p>\n\n<pre><code>w.get_value()\n</code></pre>\n\n<p>Then this would work:</p>\n\n<pre><code>w.get_value().shape\n</code></pre>\n\n<p>But this will copy the shared variable content. To remove the copy you can use the borrow parameter like this:</p>\n\n<pre><code>w.get_value(borrow=True).shape\n</code></pre>\n\n<p>But if the shared variable is on the GPU, this will still copy the data from the GPU to the CPU. To don't do this:</p>\n\n<pre><code>w.get_value(borrow=True, return_internal_type=True).shape\n</code></pre>\n\n<p>Their is a simpler way to do this, compile a Theano function that return the shape:</p>\n\n<pre><code>w.shape.eval()\n</code></pre>\n\n<p><code>w.shape</code> return a symbolic variable. .eval() will compile a Theano function and return the value of shape.</p>\n\n<p>If you want to know more about how Theano handle the memory, check this web page: <a href=\"http://www.deeplearning.net/software/theano/tutorial/aliasing.html\">http://www.deeplearning.net/software/theano/tutorial/aliasing.html</a></p>\n" } ]
22,591,812
1
<python><performance><optimization><parallel-processing><theano>
2014-03-23T13:55:33.583
22,787,775
826,983
Parallelize iterative calculation using Theano
<p>This is a piece of python code that basically just calculates the activations of a neural network and then updates the new state for the next input value with respect to an arbitrary leaking rate .</p> <pre></pre> <p>It is not that important to understand exactly what this is doing. My question is: Can I parallelize this using the GPU with ? You can see that the new depends on the previous value of , so what I would want to do is to parallelize the calculations for those vectors and matrices. If those arrays become considerably large this will result in much better performance.</p> <p>Could anybody tell me how to do this?</p>
[ { "AnswerId": "22787775", "CreationDate": "2014-04-01T13:43:32.170", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>This ticket was closed as off-topick, so I answered it in the comment. So here my answer as a post.</p>\n\n<p>My answer is generic and apply to all system, not just Theano. As each iteration of your loop depend on the previous one, you can't paralelize your iterations completly. You could parallelize the <code>u=data[t]</code> as it don't depend on the previous x. You could parallelize <code>dot( Win, vstack((1,u)) )</code> for the same reason. But you can't parallelize <code>dot(W,x)</code> and what depend on it like tanh and the lines afters.</p>\n\n<p>If you want to optimize this, you can move outside the loop all computation that don't depend on x. This will allow to work with more data at the same time and so could be faster. So the <code>dot(win, ...)</code> could be speed up. But this will raise the memory usage.</p>\n" } ]
22,613,364
2
<sentiment-analysis><text-classification><theano><deep-learning>
2014-03-24T15:15:03.183
22,682,341
956,730
Theano Classification Task always gives 50% validation error and test error?
<p>I am doing a text classification experiment with Theano's DBN (Deep Belief Network) and SDA (Stacked Denoising Autoencoder) examples. I have produced a feature/label dataset just as Theano's MINST dataset is produced and changed the feature length and output values of those examples to adopt to my dataset (2 outputs instead of 10 outputs, and the number of features is adopted to my dataset). Every time i run the experiments (both DBN and SDA) i get an exact 50% validation error and test error. Do you have any ideas what i'm doing wrong? because i have just produced a dataset out of Movie Review Dataset as MINST dataset format and pickled it.</p> <p>my code is the same code you can find in <a href="http://www.deeplearning.net/tutorial/DBN.html" rel="nofollow">http://www.deeplearning.net/tutorial/DBN.html</a> and my SDA code is the same code you can find in <a href="http://www.deeplearning.net/tutorial/SdA.html" rel="nofollow">http://www.deeplearning.net/tutorial/SdA.html</a></p> <p>The only difference is that i have made my own dataset instead of MINST digit recognition dataset. My dataset is Bag of Words features from Movie Review Dataset which of course has different number of features and output classes so i just have made tiny modifications in function parameters number of inputs and output classes. The code runs beautifully but the results are always 50%. This is a sample output:</p> <pre></pre> <p>The pretraining code for file DBN_MovieReview.py ran for 430.33m</p> <pre></pre> <p>The fine tuning code for file DBN_MovieReview.py ran for 5.48m</p> <p>I ran both SDA and DBN with two different feature sets. So i got this exact 50% accuracy on all these 4 experiments.</p>
[ { "AnswerId": "33166614", "CreationDate": "2015-10-16T09:13:11.607", "ParentId": null, "OwnerUserId": "5453108", "Title": null, "Body": "<p>I had same problem. I think this problem is because of Overshooting.\nSo I decreased learning rate 0.1 to 0.013 and increase epoch.\nThen it works. \nBut I'm not sure your problem is same.</p>\n" }, { "AnswerId": "22682341", "CreationDate": "2014-03-27T08:46:24.453", "ParentId": null, "OwnerUserId": "956730", "Title": null, "Body": "<p>I asked the same question in Theano's user groups and they answered that feature values should be between 0 and 1.</p>\n\n<p>So i used a normalizer to normalize feature values and it solved the problem.</p>\n" } ]
22,675,274
2
<python><eclipse><macos><machine-learning><theano>
2014-03-26T23:31:32.547
22,793,080
1,568,741
Python won't find variable in module
<p>I just started playing around with Theano but have a strange problem in Eclipse. I am trying to import the config module to run some example code. The import works fine and I can see what's in the module.</p> <p>Here is the simple code I am trying:</p> <pre></pre> <p>This works fine and I get an output like:</p> <pre></pre> <p>and some more lines like that. Unfortunately if I use the following code, I get an "undefined variable from import"-error for the floatX:</p> <pre></pre> <p>This is only happening in Eclipse. In the console I get "float32", which is the correct output. Any idea why this is happening and how I can get to give me the value behind that variable? Thank you!</p> <p>System: OSX 10.9.2 / Python: 2.7.6 (Macports installation) / Theano: 0.6.0 (Macports installation) / Eclipse: Kepler Service Release 2</p>
[ { "AnswerId": "22701190", "CreationDate": "2014-03-27T23:16:05.883", "ParentId": null, "OwnerUserId": "2317533", "Title": null, "Body": "<p>Is Eclipse using the same version of python as what you are running in the shell (console)? Does Eclipse know where to find theano- does it have a PYTHONPATH setting for it?</p>\n\n<p>What OS are you using?</p>\n" }, { "AnswerId": "22793080", "CreationDate": "2014-04-01T17:43:38.147", "ParentId": null, "OwnerUserId": "1568741", "Title": null, "Body": "<p>ok, I found the answer finally. I never really had an error. I did not find that out because I never tried to actually run the script because the editor indicated there was an error... The maker of PyDev answered the following question himself and provides a workaround:</p>\n\n<p><a href=\"https://stackoverflow.com/questions/2112715/how-do-i-fix-pydev-undefined-variable-from-import-errors\">How do I fix PyDev &quot;Undefined variable from import&quot; errors?</a></p>\n\n<p>For code in your project, the only way is adding a comment saying that you expected that (the static code-analysis only sees what you see, not runtime info -- if you opened that module yourself, you'd have no indication that main was expected).</p>\n\n<p>You can use ctrl+1 (Cmd+1 for Mac) in a line with an error and pydev will present you an option to add a comment to ignore that error.</p>\n" } ]
22,735,082
1
<python><serialization><memory-management><theano>
2014-03-29T18:28:40.250
22,774,654
1,323,010
Python Memory Error with Python pkl files
<p>I'm using a python library for deep learning and neural networks. The computer I'm running on has 16 GB of RAM @ 1866 MHz. At first my input data file was too large, so I broke it into smaller pieces:</p> <p></p> <p>caused:</p> <p></p> <p>Since the file was just a numpy array of numpy arrays, I could break it into separate files, and would recreate the larger file dynamically in the program by loading numerous pickle files.</p> <p></p> <p>And this solution worked fine. <strong>My problem is</strong> that now I have an enormous file that is causing a that I don't know how to break up further. It is a theano tensor variable representing a 30,000x30,000 matrix of floating point numbers. My questions:</p> <ol> <li>Is there a method to save something across multiple pkl files even if you are unsure of how to divide the underlying data structure?</li> <li>Will running this on our lab's server (48 GB) work better? Or is this memory error independent of the architecture?</li> <li>Is the huge pkl file I have now, which is too large to use, worthless? I hope not; it was around 8 hours of neural network training.</li> <li>Are there any other solutions besides using a database that anyone can think of? If at all possible, I would strongly prefer to <strong>not</strong> use databases because I've already had to transfer the software to numerous servers, many of which I do not have root access to, and it is a pain to get other things installed.</li> </ol>
[ { "AnswerId": "22774654", "CreationDate": "2014-04-01T00:27:01.520", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>First, pkl aren't great to save binary data and aren't memory friendly. It must copy all the data in ram before writing to disk. So this double the memory usage! You can use numpy.save and numpy.load to stores ndarray without that memory doubling.</p>\n\n<p>For the Theano variable, I guess you are using a Theano shared variable. By default, when you get it via <code>get_value()</code>, it copy the data. You can use <code>get_value(borrow=True)</code> to don't copy this.</p>\n\n<p>Both of those change together could lower the memory usage by 3x. If this isn't enough or if you are sick of handling multiple files yourself, I would suggest that you use pytables: <a href=\"http://www.pytables.org/\" rel=\"nofollow\">http://www.pytables.org/</a> It allow to have one big ndarray stored in a file bigger then the ram avaiable, but it give an object similar to ndarray that you can manipulate very similarly to an ndarray.</p>\n" } ]
22,799,362
1
<pymc><theano><pymc3>
2014-04-02T00:22:38.377
22,803,664
3,297,752
Deterministic variables and a Fortran Scipy function in PyMC 3
<p>I am trying to build a simple PyMC 3 model in which I estimate two cut points and a correlation parameter in a latent bivariate Gaussian density, producing four predicted probabilities for a vector of (multinomial) counts. (This will, I hope, eventually be part of a larger model in which these and other parameters are estimated for a number of latent multivariate Gaussian densities.)</p> <p>So, I want to model the cut points cx and cy as normal random variables and the correlation parameter rho as a scaled Beta random variable (as a side note, I'd love to hear a better way to deal with rho - does PyMC 3 have truncated normal random variables, for example?). And I want to use the function mvnun to calculate the predicted probabilities for given values of cx, cy, and rho. The function mvnun is part of scipy.stats.mvn, which is a bit of compiled Fortran code that has two functions for calculating very accurate multivariate normal CDF values.</p> <p>If I try to stick rho in a correlation matrix S or if I put cx or cy into arrays indicating the integration limits, I get this:</p> <pre></pre> <p>If I use fixed numerical values for cx, cy, and/or rho, mvnun works just fine (in or out of the 'with model:' block). I've been poking around, trying to figure out why the PyMC RVs cause this error, but I'm stumped. I gather that cx, cy, and rho are theano TensorVariables, but I can't figure out what, if anything, about theano TensorVariables would cause these problems.</p> <p>Is there a fundamental problem with trying to call a Fortran function with PyMC RVs as arguments? Or is my code flawed in some way? Both? Something else entirely?</p> <p>I'm new to PyMC, and I installed PyMC 3 thinking it was the current version (I swear the note about it being the alpha version wasn't there when I installed it a few weeks ago). Perhaps I should install 2.3 and figure out how to put this together with that?</p> <p>In any case, any advice about how to fix things would be much appreciated.</p> <p>Here's my code:</p> <pre></pre>
[ { "AnswerId": "22803664", "CreationDate": "2014-04-02T06:54:31.207", "ParentId": null, "OwnerUserId": "359944", "Title": null, "Body": "<p>This will be somewhat easier in PyMC 2.3, so installing 2.3 is a pretty good option.</p>\n\n<p>In PyMC 3 that doesn't work because cx, cy, and rho are theano TensorVariables and mvncdf expects numpy arrays. Theano variables are not a kind of numpy array. Instead they <em>represent</em> a computation leading to a numpy array. </p>\n\n<p>You'll need to make mvncdf into a Theano operator (theano analog of a numpy function). There unfortunately isn't a super simple way to do that right now, though <a href=\"https://github.com/Theano/Theano/issues/1784\" rel=\"nofollow\">it is coming</a>. You can also take a look at the <a href=\"http://deeplearning.net/software/theano/tutorial/extending_theano.html\" rel=\"nofollow\">guide to making a Theano Op</a>.</p>\n" } ]
22,860,717
1
<python><entropy><theano>
2014-04-04T10:50:43.743
31,435,097
862,629
Entropy and Probability in Theano
<p>I have written a simple piece of Python code to calculate the entropy of a set, and I am trying to write the same thing in Theano.</p> <pre></pre> <p>I am trying to write the equivalent code in Theano, but I am not sure how to do it:</p> <pre></pre> <p>However, the scan line is not correct and I get tons of errors. I am not sure if there is a simple way to do it (to compute the entropy of a vector), or if I have to put more effort into the scan function. Any ideas?</p>
[ { "AnswerId": "31435097", "CreationDate": "2015-07-15T15:46:09.383", "ParentId": null, "OwnerUserId": "5116849", "Title": null, "Body": "<p>Other than the point raised by nouiz, P should not be declared as a T.vector because it will be the result of computation on your vector of values.</p>\n\n<p>Also, to compute something like entropy, you do not need to use Scan (Scan introduces a computation overhead so it should only be used because there's no other way of computing what you want or to reduce memory usage); you can take a approach like this :</p>\n\n<pre><code>values = T.vector('values')\nnb_values = values.shape[0]\n\n# For every element in 'values', obtain the total number of times\n# its value occurs in 'values'.\n# NOTE : I've done the broadcasting a bit more explicitly than\n# needed, for clarity.\nfreqs = T.eq(values[:,None], values[None, :]).sum(0).astype(\"float32\")\n\n# Compute a vector containing, for every value in 'values', the\n# probability of that value in the vector 'values'.\n# NOTE : these probabilities do *not* sum to 1 because they do not\n# correspond to the probability of every element in the vector 'values\n# but to the probability of every value in 'values'. For instance, if\n# 'values' is [1, 1, 0] then 'probs' will be [2/3, 2/3, 1/3] because the\n# value 1 has probability 2/3 and the value 0 has probability 1/3 in\n# values'.\nprobs = freqs / nb_values\n\nentropy = -T.sum(T.log2(probs) / nb_values)\nfct = theano.function([values], entropy)\n\n# Will output 0.918296...\nprint fct([0, 1, 1])\n</code></pre>\n" } ]
22,883,149
1
<python><numpy><theano>
2014-04-05T15:54:55.013
22,883,792
826,983
Can I configure Theano's x divided by 0 behavior
<p>I have a little problem using Theano. It seems that a results in not as using e.g. NumPy, where this results in 0 (at least the inverse function does behave like that). Take a look:</p> <pre></pre> <p>Results:</p> <pre></pre> <p>This is a problem for some of the calculations I want to perform on my GPU, and of course I don't want to transfer the data back from it just to replace values and continue my calculations, since this would slow down the process considerably.</p>
[ { "AnswerId": "22883792", "CreationDate": "2014-04-05T16:48:54.983", "ParentId": null, "OwnerUserId": "102441", "Title": null, "Body": "<p><code>theano.tensor</code> calculates the <em>elementwise</em> inverse</p>\n\n<p><code>np.linalg.inv</code> calculates the inverse <em>matrix</em></p>\n\n<p>These are not the same thing mathematically</p>\n\n<hr>\n\n<p>You're probably looking for the <strong>experimental</strong> <a href=\"http://deeplearning.net/software/theano/library/sandbox/linalg.html#theano.sandbox.linalg.ops.MatrixInverse\" rel=\"nofollow\"><code>theano.sandbox.linalg.ops.MatrixInverse</code></a></p>\n" } ]
22,914,786
3
<python><numpy><neural-network><theano>
2014-04-07T14:06:03.850
22,930,872
643,014
Error because of Theano and NumPy variable types
<p>I am writing this code using numpy 1.9 and the latest version of Theano, but I get an error which I can't fix. I doubt it could be the way I declare variable types, but I can't work around it. I appreciate your suggestions. I want to take the product of a matrix with a vector and sum it with a bias. </p> <pre></pre> <p>I get the following error:</p> <pre></pre> <p>Thanks for your time!</p>
[ { "AnswerId": "27993778", "CreationDate": "2015-01-16T22:25:59.767", "ParentId": null, "OwnerUserId": "1639345", "Title": null, "Body": "<p>I had a similar error and was able to resolve it by adding a <code>.theanorc</code> file containing the following two lines:</p>\n\n<pre><code>[global]\n\nfloatX = float32\n</code></pre>\n\n<p>That seemed to fix everything. However, your problem shows a slightly different error. But I think it's worth trying.</p>\n" }, { "AnswerId": "22914819", "CreationDate": "2014-04-07T14:07:50.993", "ParentId": null, "OwnerUserId": "1590306", "Title": null, "Body": "<p>Error looks quite self-explanatory; have you tried:</p>\n\n<pre><code>dt = np.dtype(np.float32) \n</code></pre>\n\n<p>??</p>\n" }, { "AnswerId": "22930872", "CreationDate": "2014-04-08T07:53:35.570", "ParentId": null, "OwnerUserId": "643014", "Title": null, "Body": "<p>This answer comes from <a href=\"https://groups.google.com/forum/#!topic/theano-users/ofTRZJz7oZQ\" rel=\"nofollow noreferrer\">Theano-users google group</a>.</p>\n\n<p>You define your <code>x</code> variable as:</p>\n\n<pre><code>x=T.vector(dtype=theano.config.floatX)\n</code></pre>\n\n<p>This is it is a vector(i.e. it only have 1 dimensions).</p>\n\n<pre><code>x_inp = np.matrix('2;1',dtype=dt)\n</code></pre>\n\n<p>create a matrix, not a vector.</p>\n\n<p>Theano graph are strongly typed, you must defined the good number of\ndimensions. Just use:</p>\n\n<pre><code>x_inp = np.asarray([2,1]) \n</code></pre>\n\n<hr>\n\n<p>I actually ended up defining <code>x</code> and <code>b</code> as matrices.</p>\n" } ]
23,314,351
1
<python><scipy><theano>
2014-04-26T17:30:52.767
23,314,906
283,296
What exactly is a tensor in theano?
<p>What exactly is a <strong>Tensor</strong> in <a href="http://www.deeplearning.net/software/theano/index.html#">Theano</a>, and what is the precise connection with <a href="http://en.wikipedia.org/wiki/Tensor#Definition">Tensors</a> as they are typically understood in Physics or Math?</p> <p>I went through the <a href="http://www.deeplearning.net/software/theano/introduction.html#introduction">Theano at Glance</a> and the <a href="http://deeplearning.net/software/theano/library/tensor/basic.html">Basic Tensor functionality</a>, but I could not find a clear connection.</p>
[ { "AnswerId": "23314906", "CreationDate": "2014-04-26T18:26:33.160", "ParentId": null, "OwnerUserId": "773960", "Title": null, "Body": "<p>There is a good breakdown of different physics/math ways to think of tensor in Jim Belk's <a href=\"https://math.stackexchange.com/a/192441\">answer</a> to a question on math.stackexchange. After looking over the <a href=\"http://deeplearning.net/software/theano/library/tensor/index.html\" rel=\"nofollow noreferrer\">documentation</a> on tensor and the various operations Theano provides I'd say Theano's notion of tensor corresponds to the first way of thinking of tensor. In Jim's words:</p>\n\n<blockquote>\n <p>Tensors are sometimes defined as multidimensional arrays, in the same way that a matrix is a two-dimensional array. From this point of view, a matrix is certainly a special case of a tensor.</p>\n</blockquote>\n\n<p>In any case, I don't see anything myself in the docs indicating that Theano's tensor implementation knows about global properties of manifolds or tensor products in linear algebra beyond defining dot-products and the like. This would indicate Theano is taking a local point of view in it's implementation as opposed to the global.</p>\n" } ]
23,634,582
1
<python><numpy><theano>
2014-05-13T14:48:14.470
null
3,627,108
What's the difference between the shapes of variable in Theano
<p>Could you tell me what's the difference between the shapes 0, (?,), (1,?), (?,?) in Theano? Why is the array I defined as</p> <pre></pre> <p>an array of shape (3,)? How could I define an array of shape (3,1)?</p> <p>Besides, I wrote the code below:</p> <pre></pre> <p>Why does it report the error:</p> <pre></pre> <p>Thank you very much!</p>
[ { "AnswerId": "23635590", "CreationDate": "2014-05-13T15:33:08.417", "ParentId": null, "OwnerUserId": "1461210", "Title": null, "Body": "<p>You are trying to pass <code>a.z</code> to <code>a.ac()</code>, whereas <code>a.z</code> is actually the <em>result</em> of <code>a.ac(x)</code>!</p>\n\n<p>Instead, you probably want do this:</p>\n\n<pre><code>a.ac(a.inputs)\n# array([ 8.61379147, -13.0183053 , -4.41056323])\n</code></pre>\n\n<p>The value of the symbolic variable <code>a.z</code> is undetermined until <code>a.x</code>, <code>a.W</code> and <code>a.b</code> can all be evaluated. The syntax for <a href=\"http://deeplearning.net/software/theano/library/compile/function.html\" rel=\"nofollow\"><code>theano.function</code></a> works like this:</p>\n\n<pre><code>find_x = theano.function([&lt;inputs needed to compute x&gt;], &lt;output x&gt;)\n</code></pre>\n\n<p>When you actually want to call <code>find_x()</code>, you only need to give it the stuff in the square brackets, and the second argument to <code>theano.function</code> will be the return value of <code>find_x()</code>.</p>\n" } ]
23,643,850
2
<python><numpy><theano>
2014-05-14T00:41:46.220
null
3,633,343
How to switch theano.tensor to numpy.array?
<p>I have simple code as shown below:</p> <pre></pre> <p>However, I get the following error information:</p> <pre></pre> <p>Furthermore, when I use a function from theano.tensor, it seems that what it returns is called a "tensor", and I can't simply switch it to the type numpy.array, even though the result should be shaped like a matrix.</p> <p>So that's my question: how can I switch outxx to type numpy.array?</p>
[ { "AnswerId": "23660792", "CreationDate": "2014-05-14T16:58:56.603", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Theano \"tensor\" variable are symbolic variable. What you build with them are like a programme that you write. You need to compile a Theano function to execute what this program do. There is 2 ways to compile a Theano function:</p>\n\n<pre><code>f = theano.function([testxx.input], [outxx])\nf_a1 = f(a)\n\n# Or the combined computation/execution\nf_a2 = outxx.eval({testxx.input: a})\n</code></pre>\n\n<p>When you compile a Theano function, your must tell what the input are and what the output are. That is why there is 2 parameter in the call to theano.function(). eval() is a interface that will compile and execute a Theano function on a given symbolic inputs with corresponding values.</p>\n" }, { "AnswerId": "31883372", "CreationDate": "2015-08-07T17:22:07.730", "ParentId": null, "OwnerUserId": "3497273", "Title": null, "Body": "<p>Since <code>testxx</code> uses <code>sum()</code> from <code>theano.tensor</code> and not from <code>numpy</code>, it probably expects a <code>TensorVariable</code> as input, and not a numpy array.</p>\n\n<p>=> Replace <code>a = np.array(...)</code> with <code>a = T.matrix(dtype=theano.config.floatX)</code>.</p>\n\n<p>Before your last line, <code>outxx</code> will then be a <code>TensorVariable</code> that depends on <code>a</code>. So you can evaluate it by giving the value of <code>a</code>.</p>\n\n<p>=> Replace your last line <code>outxx = np.asarray(...)</code> with the following two lines.</p>\n\n<pre><code>f = theano.function([a], outxx)\noutxx = f(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype = np.float32))\n</code></pre>\n\n<p>The following code runs without errors.</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nimport numpy as np\n\nclass testxx(object):\n def __init__(self, input):\n self.input = input\n self.output = T.sum(input)\na = T.matrix(dtype=theano.config.floatX)\nclassfier = testxx(a)\noutxx = classfier.output\nf = theano.function([a], outxx)\noutxx = f(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype = np.float32))\n</code></pre>\n\n<p>Theano documentation on <a href=\"http://deeplearning.net/software/theano/tutorial/adding.html\" rel=\"nofollow\">adding scalars</a> gives other similar examples.</p>\n" } ]
23,656,777
1
<python><theano>
2014-05-14T13:57:07.707
null
3,627,108
Using Theano to compute forward_propagation
<p>I tried to code the forward propagation with Theano. I defined a class named hiddenLayer as below:</p> <p>import theano.tensor as T from theano import shared import numpy as np from theano import function</p> <pre></pre> <p>I want to set up a list of hiddenLayer objects where the current hiddenLayer's output is the input of the next hiddenLayer. Finally I defined a function named forward, but it raises an error; the code is as follows:</p> <pre></pre> <p>The error message is:</p> <pre></pre> <p>Could you tell me what the problem is and how to solve it? Thanks</p>
[ { "AnswerId": "23661302", "CreationDate": "2014-05-14T17:29:04.027", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The problem is that you override the l.x in the second loop. You can't do that. Once self.x is used in the <strong>init</strong>, the result based on it are based on the current instance of self.x. So when you override it, it don't recreate the other stuff on the new x.</p>\n\n<p>You should pass x as in input to <strong>init</strong>. If None, create one. This is for the first layer. For the other, it should be the previous layer output.</p>\n\n<pre><code>def __init__(self, n_in, n_out, x=None):\n if x is not None:\n self.x = x\n else:\n x = T.dvector('x')\n</code></pre>\n" } ]
23,709,983
1
<python><theano>
2014-05-17T10:01:50.540
null
3,627,108
Theano function raises ValueError with 'givens' attribute
<p>I use a Theano function and want to use it to iterate over all the input samples. The code is below:</p> <pre></pre> <p>It eventually raises an error:</p> <pre></pre> <p>Could you tell me why, and how to solve the problem?</p>
[ { "AnswerId": "23996593", "CreationDate": "2014-06-02T14:10:47.983", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The problem is this: <code>train_set[index]</code></p>\n\n<p>Here train_set is a numpy ndarray and index a Theano variable. NumPy don't know how to work with Theano variables. You must convert train_set to a Theano variable like a shared variable:</p>\n\n<pre><code>train_set = theano.shared(train_set)\n</code></pre>\n\n<p>You also need to change your declaration of index as Theano don't support real value for index:</p>\n\n<pre><code>index = T.iscalar()\n</code></pre>\n" } ]
23,729,919
1
<python><theano><summarization><deep-learning>
2014-05-19T04:53:35.457
null
3,025,272
Text summarization using deep learning techniques
<p><strong>I am trying to summarize text documents that belong to the legal domain.</strong> </p> <p>I am referring to the site deeplearning.net on how to implement the deep learning architectures. I have read quite a few research papers on document summarization (both single-document and multi-document), but I am unable to figure out how exactly the summary is generated for each document. </p> <p>Once the training is done, the network stabilizes during the testing phase. So even if I know the set of features (which I have figured out) that are learnt during the training phase, it would be difficult to find out the importance of each feature (because the weight vector of the network is stabilized) during the testing phase, where I will be trying to generate a summary for each document. </p> <p>I tried to figure this out for a long time, but in vain. </p> <p>If anybody has worked on this or has any idea regarding it, please give me some pointers. I really appreciate your help. Thank you.</p>
[ { "AnswerId": "23765727", "CreationDate": "2014-05-20T16:53:55.277", "ParentId": null, "OwnerUserId": "1724013", "Title": null, "Body": "<p>I think you need to be a little more specific. When you say \"I am unable to figure to how exactly the summary is generated for each document\", do you mean that you don't know how to <em>interpret</em> the learned features, or don't you understand the algorithm? Also, \"deep learning techniques\" covers a very broad range of models - which one are you actually trying to use?</p>\n\n<p>In the general case, deep learning models do not learn features that are humanly intepretable (albeit, you can of course try to look for correlations between the given inputs and the corresponding activations in the model). So, if that's what you're asking, there really is no good answer. If you're having difficulties understanding the model you're using, I can probably help you :-) Let me know. </p>\n" } ]
23,744,826
1
<python><networking><theano><nnet>
2014-05-19T18:34:16.810
23,819,788
1,735,729
Binary output Neural Network in Python Theano
<p>As part of a personal project I'm trying to modify the example code given in the Theano documentation (<a href="http://deeplearning.net/tutorial/mlp.html" rel="nofollow">Multilayer Perceptron</a>) to work with my own data.</p> <p>So far I have managed to bring my own (text) data into the required format, and I want to build a binary classifier. The thing is that when I set the number of outputs to 1, i.e.</p> <pre></pre> <p>I get the following error:</p> <pre></pre> <p>The output of my training data (before casting to theano shared type) is like this:</p> <pre></pre> <p>The strange thing is that if I use ANYTHING above 1 as the number of output neurons (e.g. n_out=2), the code runs without any errors, but of course now there are many output neurons that have no practical meaning.</p> <p>Could someone please explain why the code with binary output seems to give me an error? How can I get this working?</p> <p>Thank you! </p>
[ { "AnswerId": "23819788", "CreationDate": "2014-05-23T01:47:43.927", "ParentId": null, "OwnerUserId": "534969", "Title": null, "Body": "<p>The logistic regression class used as output layer in the MLP tutorial is not the \"standard\" logistic regression, which gives as output a single value and discriminates between just two classes, but rather a Multinomial Logistic Regression (a.k.a Softmax Regression), which gives as output one value for each class, telling the probability of the input belonging to them. So, if you have 10 classes, you'll also need 10 units and obviously the sum of all output units equals 1, since it's a probability distribution.</p>\n\n<p>Despite of the class name used (\"LogistRegression\"), its doctring in the <a href=\"http://deeplearning.net/tutorial/code/logistic_sgd.py\" rel=\"nofollow\">linked source code</a> leaves no doubts of its real intent (<code>'''Multi-class Logistic Regression Class [...]'''</code>).</p>\n\n<p>Whereas in your problem you have two classes, you'll also need 2 output units and the value for your <code>n_out</code> must be 2 instead of 1. Of course, with two classes the value for one output will be always 1 minus the value of the other.</p>\n\n<p>Also, check if you really need int64 instead of int32. Theano has much better support for the second.</p>\n" } ]
23,783,391
1
<theano><deep-learning>
2014-05-21T12:37:08.187
null
2,166,433
ImportError: No module named 'theano.floatX'
<p>I am following a tutorial to create a <a href="http://deeplearning.net/tutorial/lenet.html#lenet" rel="nofollow">convolutional neural network with Theano</a>. However, I got a problem in a piece of code:</p> <pre></pre> <p>I loaded floatX with: </p> <pre></pre> <p>and checked with:</p> <pre></pre> <p>But I still cannot load the module , which should be in , judging from the <a href="http://deeplearning.net/software/theano/library/floatX.html" rel="nofollow">documentation</a>. Does somebody know where I can find it?</p> <p>Thank you in advance! </p>
[ { "AnswerId": "23789646", "CreationDate": "2014-05-21T17:05:01.477", "ParentId": null, "OwnerUserId": "534969", "Title": null, "Body": "<p>This sections of the convnet tutorial has a bug or is very outdated. Symbolic variables in Theano are located in theano.tensor package. This package theano.floatX even doesn't exist!</p>\n\n<p>The current version in the tutorial github repository works fine. They allocate the symbolic variable the right way:</p>\n\n<pre><code># allocate symbolic variables for the data\n index = T.lscalar() # index to a [mini]batch\n x = T.matrix('x') # the data is presented as rasterized images\n y = T.ivector('y') # the labels are presented as 1D vector of\n # [int] labels\n</code></pre>\n\n<p>Browsing the tutorial repository I found the <a href=\"https://github.com/lisa-lab/DeepLearningTutorials/commit/21065c8271913be6667357cbd3df5d37bba144d1\" rel=\"nofollow\">revision where this bug was corrected</a>. \nThey seem to have forgotten to update the tutorial text with this fix.</p>\n" } ]
23,788,179
2
<numpy><pycuda><theano><deep-learning>
2014-05-21T15:53:16.577
23,811,048
2,462,245
Is there a GPU accelerated numpy.max(X, axis=0) implementation in Theano?
<p>Do we have a GPU accelerated version of numpy.max(X, axis=0) in Theano? I looked into the documentation and found one, but it is 4-5 times slower than the numpy implementation.</p> <p>I can assure you, it is not slow because of some bad choice of matrix size. The same matrix under theano.tensor.exp is 40 times faster than its numpy counterpart.</p> <p>Any suggestions?</p>
[ { "AnswerId": "23797179", "CreationDate": "2014-05-22T03:05:50.957", "ParentId": null, "OwnerUserId": "2014584", "Title": null, "Body": "<p>The <code>max</code> and <code>exp</code> operations are fundamentally different; <code>exp</code> (and other operations like addition, <code>sin</code>, etc.) is an elementwise operation that is embarrassingly parallelizable, while <code>max</code> requires a parallel-processing scan algorithm that basically builds up a tree of pairwise comparisons over an array. It's not impossible to speed up <code>max</code>, but it's not as easy as <code>exp</code>.</p>\n\n<p>Anyway, the <code>theano</code> implementation of <code>max</code> basically consists of the following lines (in theano/tensor/basic.py):</p>\n\n<pre><code>try:\n out = max_and_argmax(x, axis)[0]\nexcept Exception:\n out = CAReduce(scal.maximum, axis)(x)\n</code></pre>\n\n<p>where <code>max_and_argmax</code> is a bunch of custom code that, to my eye, implements a max+argmax operation using <code>numpy</code>, and <code>CAReduce</code> is a generic GPU-accelerated scan operation used as a fallback (which, according to the comments, doesn't support <code>grad</code> etc.). You could try using the fallback directly and see whether that is faster, maybe something like this:</p>\n\n<pre><code>from theano.tensor.elemwise import CAReduce\nfrom theano.scalar import maximum\n\ndef mymax(X, axis=None):\n CAReduce(maximum, axis)(X)\n</code></pre>\n" }, { "AnswerId": "23811048", "CreationDate": "2014-05-22T15:24:39.330", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The previous answer is partial. The suggestion should not work, as the work around is the one used in the final compiled code. There is optimization that will do this transformation automatically.</p>\n\n<p>The title of the question isn't the same as the content. They differ by the axis argument. I'll answer both questions.</p>\n\n<p>If the axis is 0 or None we support this on the GPU for that operation for matrix. If the axis is None, we have a basic implementation that isn't well optimized as it is harder to parallelize. If the axis is 0, we have a basic implementation, but it is faster as it is easier to parallelize.</p>\n\n<p>Also, how did you do your timing? If you just make one function with only that operation and test it via the device=gpu flags to do your comparison, this will include the transfer time between CPU and GPU. This is a memory bound operation, so if you include the transfer in your timming, personnaly I don't expect any speed op for that case. To see only the GPU operation, use Theano profiler: run with the Theano flag profile=True.</p>\n" } ]
23,978,598
1
<python><numpy><scipy><theano>
2014-06-01T09:50:55.863
23,978,780
1,828,289
Why does theano conv2d add empty dimension?
<p>I am playing around with some simple Theano code, and I ran into the following:</p> <pre></pre> <p>Result: (1, 91, 100)</p> <p>The result of a 2d convolution of 2d inputs is expected to be 2d, but it is actually 3d. Why?</p>
[ { "AnswerId": "23978780", "CreationDate": "2014-06-01T10:14:47.000", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>The docstring of <code>conv2d</code> says <em>signal.conv.conv2d performs a basic 2D convolution of the input with the\ngiven <strong>filters</strong>.</em> (note the plural)</p>\n\n<p>You could pass it <strong>several</strong> filters and it will return the convolutions with all of those. Try e.g.</p>\n\n<pre><code>c = conv2d(m,np.array([w, w, w]))\nf = theano.function([m], c)\nprint f(numpy.ones([100,100], dtype=numpy.float32)).shape # outputs (3, 91, 100)\n</code></pre>\n\n<p>So it seems that by default it will add a degenerate axis if you only pass 1 filter (probably because it adds this axis internally to your filter if you didn't pass it in that way yourself. In other words, it doesn't keep track of the input shape in order to return something that corresponds. Looks like a design choice more than anything else.)</p>\n" } ]
24,172,613
1
<python-2.7><lambda><theano>
2014-06-11T21:08:54.840
24,187,030
3,113,501
Theano scan function
<p>Example taken from: <a href="http://deeplearning.net/software/theano/library/scan.html" rel="nofollow">http://deeplearning.net/software/theano/library/scan.html</a></p> <pre></pre> <p>What is prior_result? More accurately, where is prior_result defined?</p> <p>I have this same question for a lot of the examples given on: <a href="http://deeplearning.net/software/theano/library/scan.html" rel="nofollow">http://deeplearning.net/software/theano/library/scan.html</a></p> <p>For example, </p> <pre></pre> <p>Where are power and free_variables defined?</p>
[ { "AnswerId": "24187030", "CreationDate": "2014-06-12T14:31:15.513", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>This is using a Python feature call \"lambda\". lambda are unnamed python function of 1 line. They have this forme:</p>\n\n<pre><code>lambda [param...]: code\n</code></pre>\n\n<p>In your example it is:</p>\n\n<pre><code>lambda prior_result, A: prior_result * A\n</code></pre>\n\n<p>This is a function that take prior_result and A as input. This function, is passed to the scan() function as the fn parameter. scan() will call it with 2 variables. The first one will be the correspondance of what was provided in the output_info parameter. The other is what is provided in the non_sequence parameter.</p>\n" } ]
24,183,108
1
<theano><matrix-indexing>
2014-06-12T11:22:30.603
24,186,817
1,099,534
Indexing in Theano
<p>How can I index a matrix in Theano by a vector of indices? <br> More precisely, say that:</p> <ul> <li><em>v</em> has type theano.tensor.vector (e.g. [0,2]) </li> <li><em>A</em> has type theano.tensor.matrix (e.g. [[1,0,0], [0,1,0], [0,0,1]])</li> </ul> <p>The desired result is [[1,0,0], [0,0,1]]. <br> I mention that my goal is to convert a list of indices into a matrix of one-hot row vectors, where the indices indicate the hot position. My initial attempt was to let A = theano.tensor.eye and index it using the vector of indices.</p>
[ { "AnswerId": "24186817", "CreationDate": "2014-06-12T14:22:19.933", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>You can do:</p>\n\n<pre><code>A[v]\n</code></pre>\n\n<p>It will do what you want.</p>\n" } ]
24,203,451
2
<numpy><matrix><theano>
2014-06-13T10:43:07.220
24,203,540
1,099,534
Matrices with different row lengths in numpy
<p>Is there a way of defining a matrix (say <em>m</em>) in numpy with rows of different lengths, but such that <em>m</em> stays 2-dimensional (i.e. m.ndim = 2)?</p> <p>For example, if you define <em>m = numpy.array([[1,2,3], [4,5]])</em>, then <em>m.ndim</em> = 1. I understand why this happens, but I'm interested if there is any way to trick numpy into viewing <em>m</em> as 2D. One idea would be padding with a dummy value so that rows become equally sized, but I have lots of such matrices and it would take up too much space. The reason why I really need <em>m</em> to be 2D is that I am working with Theano, and the tensor which will be given the value of <em>m</em> expects a 2D value.</p>
[ { "AnswerId": "24203540", "CreationDate": "2014-06-13T10:47:55.023", "ParentId": null, "OwnerUserId": "166749", "Title": null, "Body": "<p>No, this is not possible. NumPy arrays need to be rectangular in every pair of dimensions. This is due to the way they map onto memory buffers, as a pointer, itemsize, stride triple.</p>\n\n<p>As for this taking up space: <code>np.array([[1,2,3], [4,5]])</code> actually takes up more space than a 2×3 array, because it's an array of two pointers to Python lists (and even if the elements were converted to arrays, the memory layout would still be inefficient).</p>\n" }, { "AnswerId": "24206476", "CreationDate": "2014-06-13T13:26:46.603", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>I'll give here very new information about Theano. We have a new TypedList() type, that allow to have python list with all elements with the same type: like 1d ndarray. All is done, except the documentation.</p>\n\n<p>There is limited functionality you can do with them. But we did it to allow looping over the typed list with scan. It is not yet integrated with scan, but you can use it now like this:</p>\n\n<pre><code>import theano\nimport theano.typed_list\n\na = theano.typed_list.TypedListType(theano.tensor.fvector)()\ns, _ = theano.scan(fn=lambda i, tl: tl[i].sum(),\n non_sequences=[a],\n sequences=[theano.tensor.arange(2, dtype='int64')])\n\nf = theano.function([a], s)\nf([[1, 2, 3], [4, 5]])\n</code></pre>\n\n<p>One limitation is that the output of scan must be an ndarray, not a typed list.</p>\n" } ]
24,205,187
1
<theano>
2014-06-13T12:22:09.910
null
1,099,534
Variable-length tensors in Theano
<p>This question refers to best practices in Theano. Here is what I am trying to do:</p> <p>I am building a neural network for an SMT system. In this context, I conceptually represent sentences as variable-length lists of words, and words as fixed-length lists of integers. Ideally, I would like to represent my corpus as a 3D tensor (first dimension = sentences in corpus, second dimension = words in sentence, third dimension = integer features in words). The difficulty is that sentences have variable length and, to my knowledge, tensors in Theano have the strict requirement that all lengths in one dimension must be the same.</p> <p>Solutions I have thought of include:</p> <ol> <li> Use padding with dummy words so that sentences become equally sized. But this means that whenever I iterate over a sentence, I need to include special code to discard the padding. <li> Represent the corpus as a vector of matrices. However, this makes it hard to work with certain functions. For instance, if I want to add up the representations of all the words in a sentence, I can't simply use *corpus.sum(axis=1)*. I would have to loop over sentences, do *sentence.sum(axis=0)*, and then gather the results into another tensor. </ol> <p>My question is: which of these alternatives are preferred, or is there a better one?</p>
[ { "AnswerId": "30507375", "CreationDate": "2015-05-28T13:02:29.613", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The first option is probably the best option in most cases. It's what I do though it does mean passing around a separate vector of sentence lengths and masking certain results to eliminate the padding region when needed.</p>\n\n<p>In general, if you want to perform a consistent operation to all sentences then you'll usually get much better speed applying that operation to a single 3D tensor than sequentially to a series of matrices. This is especially true for operations running on a GPU.</p>\n\n<p>If you're using scan operations the speed differences will become even more magnified. You'll be better off scanning over a 3D tensor and operating on a per-word matrix in your step function that covers all (or a minibatch of) sentences. If needed, you may need to know which rows of that matrix are real data and which are padding. As an aside, I find that setting the first dimension of a 3D tensor to be the temporal/sequence position dimension helps when using scan, which always scans over the first dimension.</p>\n\n<p>Often, using the value zero as your padding value will result in the padding have no impact on your operations.</p>\n\n<p>The other option, looping over the sentences, would mean mixing Theano and Python code which can make some computations difficult or impossible. For example, getting the gradient of a cost function with respect to some parameters over a all (or batch) of your sentences may not be possible if the data is stored in lots of separate matrices.</p>\n" } ]
24,225,948
1
<math><numpy><theano>
2014-06-15T02:04:34.090
null
1,641,608
Revisit forcing python math functions to operate on float32
<p>I am aware of how to force python operations to work on float32: <a href="https://stackoverflow.com/questions/20768590/how-to-force-python-float-operation-on-float32-rather-than-float64">How to force python float operation on float32 rather than float64</a></p> <p>But there is no Q or A about forcing the built-in functions to work on float32. I wanted to ask about forcing the built-in math or numpy functions, such as math.sqrt or numpy.sqrt, to work on float32. FYI, I could not comment on the question yet.</p> <p>In Theano, we can easily configure a function, e.g. sqrt, to work on float32 or float64 as follows:</p> <pre></pre> <p>and the result is:</p> <pre></pre> <p>Now I tried to force math.sqrt and numpy.sqrt to do the same as follows:</p> <pre></pre> <p>But the result still seems to be float64 (I confirmed that the result would be the same, i.e. 3.87298334621, if I set theano.config.floatX='float64'):</p> <pre></pre> <p>I am curious to know how to force math.sqrt and numpy.sqrt to work on float32.</p>
[ { "AnswerId": "24225977", "CreationDate": "2014-06-15T02:10:54.737", "ParentId": null, "OwnerUserId": "2357112", "Title": null, "Body": "<pre><code>&gt;&gt;&gt; type(numpy.sqrt(numpy.float32(2)))\n&lt;type 'numpy.float32'&gt;\n</code></pre>\n\n<p><code>numpy.sqrt</code> already does what you want. On the other hand, the <code>math</code> functions always cast their input to <code>float</code> and return <code>float</code>, with no option to change that. Stick with NumPy operations instead of the <code>math</code> module for NumPy data types, and you should be fine.</p>\n" } ]
24,229,361
1
<python><gpgpu><theano>
2014-06-15T11:58:23.743
24,329,482
826,983
Theano: Indexing inside a compiled function (GPU)
<p>Since Theano allows one to <em>update</em> the memory on the graphics card's DRAM simply by defining which memory area has to be updated and how this should be done, I was wondering if the following is somehow possible (imho it should be).</p> <p>I have a 2x5 randomly initialized matrix whose first column will be initialized with start values. I would like to write a function that depends on the preceding column and updates the next one based on arbitrary calculations. </p> <p>I think this code explains it very well:</p> <p><strong>Note:</strong> This code is <em>not</em> working, it's just an illustration:</p> <pre></pre> <p>My desired output would be:</p> <pre></pre> <p>but instead I get </p> <pre></pre> <p>Can anybody help me here?</p> <p><strong>Wow:</strong> This question is, 9 minutes after asking, already in the Top 4 of google results for "<em>Theano indexing gpu</em>" for me. O_o</p>
[ { "AnswerId": "24329482", "CreationDate": "2014-06-20T14:22:41.983", "ParentId": null, "OwnerUserId": "3760518", "Title": null, "Body": "<p>Have a look at: <a href=\"https://stackoverflow.com/questions/15917849/how-can-i-assign-update-subset-of-tensor-shared-variable-in-theano\">How can I assign/update subset of tensor shared variable in Theano?</a></p>\n\n<p>For your code this translates to:</p>\n\n<pre><code>from theano import function, sandbox, shared\nimport theano.tensor as T\nimport numpy as np\n\nreservoirSize = 2\nsamples = 5\n\n# To initialize _mat first column\n_vec = np.asarray([1 for i in range(reservoirSize)], np.float32)\n\n# Random matrix, first column will be initialized correctly (_vec)\n_mat = np.asarray(np.random.rand(reservoirSize, samples), np.float32)\n_mat[:,0] = _vec\n\nprint \"Init:\\n\",_mat\n\n_mat = shared(_mat)\n\nidx = T.iscalar()\ntest = function([idx], updates= {\n # -&gt; instead of\n #_mat[:,idx]:sandbox.cuda.basic_ops.gpu_from_host(_mat[:,idx-1] * 2)\n # -&gt; do this:\n _mat:T.set_subtensor(_mat[:,idx], _mat[:,idx-1]*2) \n })\n\nfor i in range(1, samples):\n test(i)\n\nprint \"Done:\\n\",_mat.get_value() # use get_value() here to retrieve the data\n</code></pre>\n" } ]
24,230,227
1
<theano><machine-translation>
2014-06-15T13:47:18.537
24,244,862
1,099,534
Typed Lists in Theano
<p>Consider the following machine translation problem. Let be a source sentence and be a target sentence. Both sentences are conceptually represented as lists of indices, where the indices correspond to the position of the words in the associated dictionaries. Example:</p> <pre></pre> <p>Note that and don't necessarily have the same length. Now let and be sets of such instances. In other words, they are a parallel corpus. Example:</p> <pre></pre> <p>Note that not all 's in have the same length. That is, sentences have variable numbers of words. <p> I am implementing a machine translation system in Theano, and the first design decision is what kind of data structures to use for and . From one of the answers posted on <a href="https://stackoverflow.com/questions/24203451/matrices-with-different-row-lengths-in-numpy">Matrices with different row lengths in numpy</a> , I learnt that typed lists are a good solution for storing variable length tensors. <p> However, I realise that they complicate my code a lot. Let me give you one example. Say that we have two typed lists and and aim to calculate the negative loss likelihood. If they were regular tensors, a simple statement like this would suffice:</p> <pre></pre> <p>But can only be applied to tensors, so in case of typed lists I have to iterate over them and apply the function separately to each element:</p> <pre></pre> <p>On top of making my code more and more messy, these problems propagate. For instance, if I want to calculate the gradient of the loss, the following doesn't work anymore:</p> <pre></pre> <p>I don't know exactly why it doesn't work. I'm sure it has to do with the type of , but I am not interested in investigating any further how I could make it work. The mess is growing exponentially, and what I would like is to know whether I am using typed lists in the wrong way, or if it is time to give up on them because they are not well enough supported yet.</p>
[ { "AnswerId": "24244862", "CreationDate": "2014-06-16T13:25:57.407", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Typed list isn't used by anybody yet. But the idea for having them is that you iterate on them with scan for each sentence. Then you do everything you need in 1 scan. You don't do 1 scan for each operation.</p>\n\n<p>So the scan is only used to do the iteration on each example in the minibatch, and the inside of scan is all what is done on one example.</p>\n\n<p>We haven't tested typed list with grad yet. It is possible that it is missing some implementations.</p>\n" } ]
24,233,605
1
<python><theano>
2014-06-15T20:01:43.980
24,234,094
826,983
Concatenation/creation of symbolic expressions/statements
<p>I want to create something like this programmatically:</p> <pre></pre> <p>We got so a is that whole statement. This is not doing anything yet. Then there is which is depending on and is another statement itself.This goes on until . I know it looks ugly but it just updates the column of a vector like for returning a vector looking like this:</p> <pre></pre> <p>Now Imagine I would want to do this a thousand times.. therefore I am wondering if I could generate such statements since they follow an easy pattern. </p> <p>These "<em>concatenated</em>" statements are not evaluated until </p> <pre></pre> <p>which is when they are getting compiled to and</p> <pre></pre> <p>does execute everything.</p>
[ { "AnswerId": "24234094", "CreationDate": "2014-06-15T21:02:48.470", "ParentId": null, "OwnerUserId": "3741258", "Title": null, "Body": "<p>You can use function nesting:</p>\n\n<pre><code>def nest(f, g):\n def h(x):\n return f(g(x), x)\n return h\n\nexpr = lambda (a,b) : a\nexpr = nest((lambda x, (a, b): x + (a - b)), expr)\nexpr = nest((lambda x, (a, b): x + (a - b)), expr)\n\nprint expr((1,2)) # prints -1\n</code></pre>\n\n<p>Regarding the example code, you could do something like (modifying <code>nest</code> to use no arguments):</p>\n\n<pre><code>def nest(f, g):\n def h():\n return f(g())\n return h\n\nexpr = lambda: (_vec, _init)[1]\nexpr = nest(lambda x: T.set_subtensor(x[1], x[0] * 2)[1], expr)\nexpr = nest(lambda x: T.set_subtensor(x[2], x[1] * 2)[1], expr)\nexpr = nest(lambda x: T.set_subtensor(x[3], x[2] * 2)[1], expr)\nexpr = nest(lambda x: T.set_subtensor(x[4], x[3] * 2)[1], expr)\n\ntest_vector = function([], outputs=expr)\nsubt = test_vector()\n</code></pre>\n" } ]
24,263,390
3
<python><matrix><theano>
2014-06-17T12:07:19.943
24,268,694
1,099,534
From list of indices to one-hot matrix
<p>What is the best (elegant and efficient) way in Theano to convert a vector of indices to a matrix of zeros and ones, in which every row is the one-of-N representation of an index?</p> <pre></pre> <p>Example:</p> <pre></pre>
[ { "AnswerId": "40622953", "CreationDate": "2016-11-16T02:11:18.507", "ParentId": null, "OwnerUserId": "3041068", "Title": null, "Body": "<p>There's now a built in function for this <a href=\"http://deeplearning.net/software/theano/library/tensor/extra_ops.html#theano.tensor.extra_ops.to_one_hot\" rel=\"nofollow noreferrer\"><code>theano.tensor.extra_ops.to_one_hot</code></a>.</p>\n\n<pre><code>y = tensor.as_tensor([3,2,1])\nfn = theano.function([], tensor.extra_ops.to_one_hot(y, 4))\nprint fn()\n# [[ 0. 0. 0. 1.]\n# [ 0. 0. 1. 0.]\n# [ 0. 1. 0. 0.]]\n</code></pre>\n" }, { "AnswerId": "24268694", "CreationDate": "2014-06-17T16:14:20.890", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>I didn't compare the different option, but you can also do it like this. It don't request extra memory.</p>\n\n<pre><code>import numpy as np\nimport theano\n\nn_val = 4\nv_val = np.asarray([1,0,3])\nidx = theano.tensor.lvector()\nz = theano.tensor.zeros((idx.shape[0], n_val))\none_hot = theano.tensor.set_subtensor(z[theano.tensor.arange(idx.shape[0]), idx], 1)\nf = theano.function([idx], one_hot)\nprint f(v_val)[[ 0. 1. 0. 0.]\n [ 1. 0. 0. 0.]\n [ 0. 0. 0. 1.]]\n</code></pre>\n" }, { "AnswerId": "24264952", "CreationDate": "2014-06-17T13:19:09.633", "ParentId": null, "OwnerUserId": "1099534", "Title": null, "Body": "<p>It's as simple as:</p>\n\n<pre><code>convert = t.eye(n,n)[v]\n</code></pre>\n\n<p>There still might be a more efficient solution that doesn't require building the whole identity matrix. This might be problematic for large n and short v's.</p>\n" } ]
24,286,238
1
<python><theano>
2014-06-18T12:58:30.263
24,314,997
1,099,534
Obscure Looping Error in Theano
<p>I am trying to simulate a repeat-until loop in Theano:</p> <pre></pre> <p>I encountered the following strange behaviour. Let be a constant. Whenever is a constant, everything works fine. However, when is a scalar, I get an error related to optimisation:</p> <pre></pre> <p>I'd appreciate if someone could help me understand the error. I'm assuming the doesn't refer to or , because I can print their and see that they do have one. Other than that, I can't make any sense out of it.</p> <p>[Edit] This is not a fatal error. The code runs normally and the process finishes with exit code 0. It looks like Theano is trying to optimise the graph and fails to do so, which doesn't really impact the program.</p>
[ { "AnswerId": "24314997", "CreationDate": "2014-06-19T19:41:47.303", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The traceback indicate that in the function equal_compuations(), we didn't cover all case, when doing some comparison.</p>\n\n<p>I have a PR with a fix for it here:</p>\n\n<pre><code>https://github.com/Theano/Theano/pull/1928\n</code></pre>\n\n<p>thanks for the report.</p>\n\n<p>Your [edit] section, indicate me that you cut some of the errors message. If this happear during optimization with a warning, it mean an optimization was just skipped. It is possible that the optimization just don't apply, but it could be possible that with the fix, now the optimization apply. If that is the case, there could be some speed up with the fix.</p>\n" } ]
24,331,893
1
<theano>
2014-06-20T16:35:15.843
null
1,784,599
Theano SDE example
<p>I'm trying to work through an example on solving SDEs on GPU using Theano found <a href="http://www.nehalemlabs.net/prototype/blog/2013/10/17/solving-stochastic-differential-equations-with-theano/" rel="nofollow">here</a></p> <p>I'm stuck with a GpuDimShuffle error, but I'm not seeing how any of the dims don't match...</p> <pre></pre> <p>Yielding:</p> <p>TypeError: ('The following error happened while compiling the node', Rebroadcast{0(GpuDimShuffle{x,0}.0), '\n', 'super(type, obj): obj must be an instance or subtype of type')</p> <p>I'm on Theano 0.6.0</p>
[ { "AnswerId": "24410185", "CreationDate": "2014-06-25T13:33:15.273", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>If you update to the development version, it work for me:</p>\n\n<p><a href=\"http://www.deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\" rel=\"nofollow\">http://www.deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions</a></p>\n\n<p>We try to keep the development version very stable and I recomment everybody to use it, since it contain many fixes since the last release.</p>\n\n<p>If that don't fix it, the error is specific to your OS or python version. We will need more information about it. Also always provide the full error message with the traceback. This provide much more debugging information.</p>\n" } ]
24,347,565
1
<theano><deep-learning><automatic-differentiation><autoencoder>
2014-06-22T02:15:41.530
null
2,782,619
Does Theano support variable split?
<p>In my Theano program, I want to split the tensor matrix into two parts, with each of them making different contributions to the error function. Can anyone tell me whether automatic differentiation supports this?</p> <p>For example, for a tensor matrix variable M, I want to split it into M1=M[:300,] and M2=M[300:,], and then the cost function is defined as 0.5*M1*w + 0.8*M2*w. Is it still possible to get the gradient with T.grad(cost,w)? </p> <p>Or more specifically, I want to construct an Autoencoder in which different features have different weights in their contribution to the total cost. </p> <p>Thanks to anyone who answers my question.</p>
[ { "AnswerId": "24410773", "CreationDate": "2014-06-25T13:59:26.653", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Theano support this out of the box. You have nothing particular to do. If Theano don't support something in the crash, it should raise an error. But you won't have it for this, if there isn't problem in the way you call it. But the current pseudo-code should work.</p>\n" } ]
24,402,213
1
<python><ubuntu><theano><theano-cuda>
2014-06-25T07:03:05.193
null
3,568,055
theano.test() : optimization failure due to constant_folding (on ubuntu)
<p>When running theano.test() on an Ubuntu operating system, an error message about an optimization failure is produced as follows:</p> <pre></pre> <p>Does anybody know a way to fix this problem, or what exactly is going on?</p>
[ { "AnswerId": "24409647", "CreationDate": "2014-06-25T13:10:35.310", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>This can be caused by many things. The error is related to the GPU. So first, make sure you can compile nvidia example and that they run fine. To be sure this isn't the problem.</p>\n\n<p>The problem is that Theano isn't able to import a GPU module it compiled, because he didn't found the symbol it need. This missing symbol \"_Z25CudaNdarray_CopyFromArrayP11CudaNdarrayP23tagPyArrayObject_fields\" is in a shared library that Theano already pre-compiled.</p>\n\n<p>What is your OS? Make sure to update to the latest development version of Theano. There was a fix recently (Monday if my memory is exact) that could solve this on some OS.</p>\n" } ]
24,431,621
1
<python><gradient><backpropagation><theano>
2014-06-26T13:17:00.947
24,437,169
1,099,534
Does Theano do automatic unfolding for BPTT?
<p>I am implementing an RNN in Theano and I have difficulties training it. It doesn't even come near to memorising the training corpus. My mistake is most likely caused by me not understanding exactly how Theano copes with backpropagation through time. Right now, my code is as simple as it gets:</p> <pre></pre> <p>My question is: given that my network is recurrent, does this automatically do the unfolding of the architecture into a feed-forward one? On one hand, <a href="https://github.com/gwtaylor/theano-rnn/blob/master/rnn_minibatch.py" rel="noreferrer">this</a> example does exactly what I am doing. On the other hand, <a href="https://groups.google.com/forum/#!topic/theano-users/RAfkjvI2CEU" rel="noreferrer">this</a> thread makes me think I'm wrong.</p> <p>In case it does do the unfolding for me, how can I truncate it? I can see that there is a way, from the <a href="http://deeplearning.net/software/theano/library/scan.html" rel="noreferrer">documentation</a> of , but I can't come up with the code to do it.</p>
[ { "AnswerId": "24437169", "CreationDate": "2014-06-26T17:57:21.027", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>I wouldn't say it does automatic \"unfolding\" - rather, Theano has a notion of what variables are connected, and can pass updates along that chain. If this is what you mean by unfolding, then maybe we are talking about the same thing. </p>\n\n<p>I am stepping through this as well, but using <a href=\"https://11350770138305416713.googlegroups.com/attach/2628c6c56beb34bf/rnn.py?part=0.1&amp;view=1&amp;vt=ANaJVrFlEcr9KydEXVwhub2Lhwv0bs8OZIc678NDDHXGzyPIfr39rH0EiiXXbi4Q9GHbNmAhFQzp7l3hapna0fV3lyR1ICmPe_rij_aL292mq8ioP2lccQg\" rel=\"nofollow noreferrer\">Rasvan Pascanu's rnn.py</a> code (from <a href=\"https://groups.google.com/forum/#!msg/theano-users/OcdVuTZ19Dc/3m1Xlb1xxmoJ\" rel=\"nofollow noreferrer\">this thread</a>) for reference. It seems much more straightforward for a learning example. </p>\n\n<p>You might gain some value from visualizing/drawing graphs from the <a href=\"http://deeplearning.net/software/theano/tutorial/printing_drawing.html\" rel=\"nofollow noreferrer\">tutorial</a>. There is also set of slides online with a <a href=\"http://www.cs.bham.ac.uk/~jxb/INC/l12.pdf\" rel=\"nofollow noreferrer\">simple drawing</a> that shows the diagram from a 1 layer \"unfolding\" of an RNN, which you discuss in your post. </p>\n\n<p>Specifically, look at the <code>step</code> function:</p>\n\n<pre><code>def step(u_t, h_tm1, W, W_in, W_out):\n h_t = TT.tanh(TT.dot(u_t, W_in) + TT.dot(h_tm1, W))\n y_t = TT.dot(h_t, W_out)\n return h_t, y_t\n</code></pre>\n\n<p>This function represents the \"simple recurrent net\" shown in <a href=\"http://www.cs.bham.ac.uk/~jxb/INC/l12.pdf\" rel=\"nofollow noreferrer\">these slides, pg 10</a>. When you do updates, you simply pass the gradient w.r.t. W, W_in, and W_out, respectively (remember that y is connected to those three via the <code>step</code> function! This is how the gradient magic works).</p>\n\n<p>If you had multiple W layers (or indexes into one big W, as I believe gwtaylor is doing), then that would create multiple layers of \"unfolding\". From what I understand, this network only looks 1 step backward in time. If it helps, <a href=\"https://github.com/lmjohns3/theano-nets\" rel=\"nofollow noreferrer\">theanonets</a> also has an RNN implementation in Theano.</p>\n\n<p>As an additional note, training RNNs with BPTT is <em>hard</em>. <a href=\"http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf\" rel=\"nofollow noreferrer\">Ilya Sutskever's dissertation</a> discusses this at great length - if you can, try to tie into a <a href=\"https://github.com/boulanni/theano-hf\" rel=\"nofollow noreferrer\">Hessian Free optimizer, there is also a reference RNN implementation here</a>. Theanets also does this, and may be a good reference.</p>\n" } ]
24,459,062
1
<python><macos><matplotlib><theano>
2014-06-27T18:57:28.570
null
3,784,367
Compilation error ld: library not found for -lgcc_ext in MacOSX
<ol> <li><p>I'm trying to replicate a tutorial example in the <a href="http://deeplearning.net/software/pylearn2/" rel="nofollow">pyLearn2</a> documentation. When I run the script from the example, I get this error:</p> <pre></pre> <p>Referenced from: /System/Library/Frameworks/ImageIO.framework/Versions/A/ImageIO</p> <p>Expected in: </p> <p>I can import and open an image in Python from the command line. Could someone help me understand what it is complaining about and how to fix the error?</p></li> <li><p>Another problem (which may or may not be related to the problem above) is a linking error.</p> <p>Problem occurred during compilation with the command line below:</p> <pre></pre> <p>I am running Mac OS X Mavericks. I am not sure how to fix the error: the library seems to be present on my system in several places:</p> <pre></pre> <p>I am not sure which one to link against or how to do so.</p> <p>I have changed (added) the library paths, which didn't fix the problem.</p> <p>Any help would be greatly appreciated.</p></li> </ol>
[ { "AnswerId": "29567965", "CreationDate": "2015-04-10T18:03:24.547", "ParentId": null, "OwnerUserId": "1318153", "Title": null, "Body": "<p>It looks like <a href=\"https://lists.macosforge.org/pipermail/macports-users/2010-November/022662.html\" rel=\"nofollow\">one possible solution</a> is to manually create a link to the .dylib file:</p>\n\n<blockquote>\n <p>sudo ln -sf\n /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib\n /opt/local/lib/libGL.dylib</p>\n \n <p>sudo ln -sf\n /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libPng.dylib\n /opt/local/lib/libpng.dylib</p>\n \n <p>sudo ln -sf\n /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib\n /opt/local/lib/libtiff.dylib</p>\n \n <p>sudo ln -sf\n /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libJPEG.dylib\n /opt/local/lib/libjpeg.dylib</p>\n \n <p>Why does this work? Because for whatever reason, when Apple requests\n libJPEG.dyld, whatever system searches for it finds\n /opt/local/lib/libjpeg.dyld because the search is case insensitive.\n For a Unix operating system. Go figure. The lines above will force any\n program looking for libjpeg.dyld to be redirected to Apple's\n libJPEG.dyld.</p>\n</blockquote>\n\n<p>Another user suggested <a href=\"http://comments.gmane.org/gmane.os.apple.macports.user/22154\" rel=\"nofollow\">setting DYLD_FALLBACK_LIBRARY_PATH</a>:</p>\n\n<blockquote>\n <p>You could also use DYLD_FALLBACK_LIBRARY_PATH. I had to use that to\n avoid Apple's libJPEG getting in the way of apps that expected the\n libjpeg port. See the man page for dlopen() for details.</p>\n</blockquote>\n" } ]
24,468,482
1
<python><machine-learning><theano>
2014-06-28T15:49:52.687
24,496,769
3,054,726
Defining a gradient with respect to a subtensor in Theano
<p>I have what is conceptually a simple question about Theano but I haven't been able to find the answer (I'll confess upfront to not really understanding how shared variables work in Theano, despite many hours with the tutorials).</p> <p>I'm trying to implement a "deconvolutional network"; specifically I have a 3-tensor of inputs (each input is a 2D image) and a 4-tensor of codes; for the ith input codes[i] represents a set of codewords which together code for input i.</p> <p>I've been having a lot of trouble figuring out how to do gradient descent on the codewords. Here are the relevant parts of my code:</p> <pre></pre> <p>(here codes and dicts are shared tensor variables). Theano is unhappy with this, specifically with defining</p> <pre></pre> <p>The error message I'm getting is: <em>theano.gradient.DisconnectedInputError: grad method was asked to compute the gradient with respect to a variable that is not part of the computational graph of the cost, or is used only by a non-differentiable operator: Subtensor{int64}.0</em></p> <p>I'm guessing that it wants a symbolic variable instead of codes[idx]; but then I'm not sure how to get everything connected to get the intended effect. I'm guessing I'll need to change the final line to something like</p> <pre></pre> <p>Can someone give me some pointers as to how to define this function properly? I think I'm probably missing something basic about working with Theano but I'm not sure what. </p> <p>Thanks in advance!</p> <p>-Justin</p> <p>Update: Kyle's suggestion worked very nicely. Here's the specific code I used</p> <pre></pre>
[ { "AnswerId": "24496769", "CreationDate": "2014-06-30T18:40:12.860", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>To summarize the findings:</p>\n\n<p>Assigning <code>grad_var = codes[idx]</code>, then making a new variable such as: \n<code>subgrad = T.set_subtensor(codes[input_index], codes[input_index] - learning_rate*del_codes[input_index])</code></p>\n\n<p>Then calling \n<code>train_codes = function([input_index], loss, updates = [[codes, subgrad]])</code></p>\n\n<p>seemed to do the trick. In general, I try to make variables for as many things as possible. Sometimes tricky problems can arise from trying to do too much in a single statement, plus it is hard to debug and understand later! Also, in this case I think theano needs a shared variable, but has issues if the shared variable is <em>created</em> inside the function that requires it. </p>\n\n<p>Glad this worked for you!</p>\n" } ]
24,505,682
1
<python><python-2.7><spyder><theano>
2014-07-01T08:28:16.847
24,738,726
3,592,346
spyder: python: theano: How to disable warnings in spyder?
<p>I am running machine learning algorithms with Theano and have been getting a lot of DeprecationWarning messages coming from the numpy package. I want to disable these warnings; please suggest an option. The warning looks like: fromnumeric.py:932: DeprecationWarning: converting an array with ndim > 0 to an index will result in an error in the future</p> <p>I tried adding the command line option -W ignore or -W ignore::DeprecationWarning in the Run configuration, but neither works.</p> <p>Alternatively, fixing the warning itself is fine for me. It looks like it is fixed in Theano (<a href="https://groups.google.com/forum/#!topic/theano-users/Hf7soRrnh8w" rel="nofollow">https://groups.google.com/forum/#!topic/theano-users/Hf7soRrnh8w</a>), but I don't know where to find this updated version of Theano.</p> <p>I am using the Anaconda distribution 2.0.1 on Windows 8.1, 64-bit.</p> <p>Thanks.</p>
[ { "AnswerId": "24738726", "CreationDate": "2014-07-14T14:20:27.470", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>See this link to update Theano to the development version. I should be fixed:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\" rel=\"nofollow\">http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions</a></p>\n\n<p>In resume, run one of those 2 command:</p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n\n<p>or (if you want to install it for the current user only):</p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git --user\n</code></pre>\n" } ]
24,524,621
1
<python><pip><theano>
2014-07-02T06:51:19.163
null
1,358,855
use package from pip local instead pip global
<p>I have the Theano library installed in</p> <pre></pre> <p>but the installed Theano is an old version, and I am using a library that can't import some of its packages.</p> <p>So I tried to install the new one using</p> <pre></pre> <p>but every time I import theano, the version is the old one, which comes from</p> <pre></pre> <p>So I need to know how to make <code>import theano</code> load my local Theano, not the global one.</p> <p>Thank you :)</p>
[ { "AnswerId": "24709025", "CreationDate": "2014-07-12T02:26:05.297", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>The problem is that the old version wasn't installed with pip, but probably with easy_install. This cause many type of problems.</p>\n\n<p>You can fix it by changing the import order after starting python. To do so, in your python script before importing theano do something like this:</p>\n\n<pre><code>import sys\nsys.path[0:0] = [\"THE_PYTHON_PATH_YOU_WANT_TO_ADD\"]\n</code></pre>\n\n<p>THE_PYTHON_PATH_YOU_WANT_TO_ADD is something like <code>~/.local/lib/python2.7/site-packages/</code></p>\n" } ]
24,540,358
1
<python><numpy><scipy><theano>
2014-07-02T20:21:45.677
null
1,460,123
Pylearn2 Tutorial Import Error
<p>While running python make_dataset in the quick start example for Pylearn2, I've run across an import error in a Theano file. The heart of the issue seems to be this: . I'm running development versions of numpy, scipy, Theano, and Pylearn2. Any ideas?</p> <pre></pre>
[ { "AnswerId": "24540875", "CreationDate": "2014-07-02T20:55:42.133", "ParentId": null, "OwnerUserId": "1460123", "Title": null, "Body": "<p>The problem was resolved after removing ATLAS, installing OpenBLAS, reinstalling numpy + scipy + theano, and then installing Pylearn2 from source. </p>\n\n<p>See <a href=\"https://github.com/Theano/Theano/blob/master/doc/install_ubuntu.txt#L70\" rel=\"nofollow\">https://github.com/Theano/Theano/blob/master/doc/install_ubuntu.txt#L70</a> of the Theano install guide for a more detailed explanation.</p>\n" } ]
24,549,256
1
<python><theano>
2014-07-03T08:56:45.340
24,626,805
3,279,225
How to access theano.tensor.var.TensorVariable?
<p>Let's say I have a matrix w of size (1152, 10) like this:</p> <pre></pre> <p>and I have an input of size (1152, 1) like this:</p> <pre></pre> <p>Now I want to calculate their dot product like this:</p> <pre></pre> <p>and it gives me:</p> <pre></pre> <p>Does theano.tensor.dot return a symbolic expression instead of a value?</p>
[ { "AnswerId": "24626805", "CreationDate": "2014-07-08T08:15:09.180", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>In one word: yes.</p>\n\n<p>To see the result of the operation, use <code>result.eval()</code></p>\n\n<p>Minimal working example:</p>\n\n<pre><code>import numpy as np\nimport theano\nfrom theano import tensor as T\nw_values = np.random.randn(1152, 10).astype(theano.config.floatX)\ninput_values = np.random.randn(1152, 1).astype(theano.config.floatX)\nw = theano.shared(w_values, 'w')\ninput = theano.shared(input_values, 'input')\nresult = T.dot(input.T, w)\nprint(result.eval())\n</code></pre>\n" } ]
24,555,984
1
<python><theano>
2014-07-03T14:08:37.663
24,609,262
1,099,534
Invert Theano tensor element-wise
<p>My goal is to invert the values in a Theano tensor element-wise. For instance, I want to turn into . If there is a zero element, I want to keep it unchanged (e.g. should be turned into ). What is the most elegant way of doing this?</p> <p>My solution was to add a very small value to all the elements:</p> <pre></pre> <p>I am aware that I could iterate over the elements and check individually whether they can be inverted, but I want my code to look as mathsy as possible.</p>
[ { "AnswerId": "24609262", "CreationDate": "2014-07-07T11:12:53.310", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>You can get this using <code>T.sgn()</code> since the sign of 0 is 0, though it will probably still fail for very, very small values <em>near</em> 0</p>\n\n<p>Minimal example</p>\n\n<pre><code>import numpy as np\nimport theano\nfrom theano import tensor as T\n\nX_values = np.arange(10).astype(theano.config.floatX)\nX = T.shared(X_values, 'X')\nY = (X ** (-1 * abs(T.sgn(X)))) * abs(T.sgn(X))\nprint(Y.eval())\n</code></pre>\n\n<p>I think what user189 was saying is that mathematically this is not really correct - the result <em>should</em> go to infinity as the denominator approaches 0. What is the application where you need this? You can always test for T.isinf and T.isnan to try and catch numerical problems without resorting to tricks, and raising an exception is probably a cleaner approach than altering the math.</p>\n" } ]
24,577,630
2
<theano>
2014-07-04T15:42:32.153
24,605,866
1,099,534
How can a tensor be flipped in Theano?
<p>Given a tensor , how can I flip it? For instance, flipped is .</p>
[ { "AnswerId": "24605866", "CreationDate": "2014-07-07T08:13:57.537", "ParentId": null, "OwnerUserId": "2855342", "Title": null, "Body": "<p>You can simply do <code>v[::-1].eval()</code>, or just <code>v[::-1]</code> in the middle of your computational graph.</p>\n\n<p>Minimal example:</p>\n\n<pre><code>import numpy as np\nimport theano\nfrom theano import tensor as T\n\nX_values = np.arange(10).astype(theano.config.floatX)\nX = T.shared(X_values, 'X')\nprint(X.eval())\nprint(X[::-1].eval())\n</code></pre>\n\n<p>See the section on indexing <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html\" rel=\"nofollow\">here</a> for more details.</p>\n" }, { "AnswerId": "26808859", "CreationDate": "2014-11-07T19:41:57.103", "ParentId": null, "OwnerUserId": "1245262", "Title": null, "Body": "<p>OK, I know I'm late to the party here, but I've just started playing with Theano, and thought I'd throw in this variation, since I don't think shared values were necessary:</p>\n\n<pre><code> from theano import tensor as T\n from theano import function as Tfunc\n\n z = T.vector()\n f = Tfunc([z],z[::-1])\n</code></pre>\n\n<p>This gives:</p>\n\n<pre><code> &gt;&gt;&gt; f([1,3,5,7,9])\n array([ 9., 7., 5., 3., 1.])\n</code></pre>\n" } ]
24,606,346
1
<python><machine-learning><neural-network><theano><autoencoder>
2014-07-07T08:40:34.263
null
3,568,055
Theano implementation of Stacked DenoisingAutoencoders - Why same input to dA layers?
<p>In the Stacked Denoising Autoencoders tutorial at <a href="http://deeplearning.net/tutorial/SdA.html#sda" rel="nofollow">http://deeplearning.net/tutorial/SdA.html#sda</a>, pretraining_functions returns a list of functions, each representing the training function of one dA layer. But I don't understand why it gives all the dA layers the same input. Actually, the input of each dA layer should be the output of the layer below it, except for the first dA layer. Can anybody tell me why this code is correct?</p> <pre></pre>
[ { "AnswerId": "24810827", "CreationDate": "2014-07-17T18:21:17.970", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Since the inputs of each hidden layer are configured as the outputs of the previous layer:</p>\n\n<pre><code># the input to this layer is either the activation of the hidden\n# layer below or the input of the SdA if you are on the first\n# layer\nif i == 0:\n layer_input = self.x\nelse:\n layer_input = self.sigmoid_layers[-1].output\n</code></pre>\n\n<p>When setting <code>self.x</code> to <code>train_set_x[batch_begin:batch_end]</code> in <code>givens</code> section of the pretraining function it actually makes theano propagate the inputs from one layer to the other, so when you pre-train the second layer the inputs will first propagate through the first layer and then be processed by the second. </p>\n\n<p>If you look closely at the end of the <a href=\"http://deeplearning.net/tutorial/SdA.html#tips-and-tricks\" rel=\"nofollow\">tutorial</a> there is a tip how to reduce the training run-time by precomputing the explicit inputs per layer.</p>\n" } ]
24,654,389
1
<neural-network><pooling><theano>
2014-07-09T12:58:26.130
null
2,166,433
Average pooling with Theano
<p>I am trying to implement another pooling function for a neural network in Theano, besides the already existing max pooling, for example average pooling.</p> <p>Following <a href="https://github.com/quaizarv/deeplearning-benchmark/blob/master/common/theanoWrappers.py" rel="nofollow">this source</a>, where average pooling is already implemented, my code looks like this.</p> <p>Random initialization just to test:</p> <pre></pre> <p>Definition of Theano scalars and functions:</p> <pre></pre> <p>TSN is <a href="http://deeplearning.net/software/theano/library/sandbox/neighbours.html" rel="nofollow">theano.sandbox.neighbours</a></p> <p>And the call of the function:</p> <pre></pre> <p>And I am getting an error:</p> <pre></pre> <p>I don't really understand this error. I would be glad for any suggestions on how to correct it, or for examples of other pooling techniques implemented in Theano.</p> <p>Thanks!</p> <p>Edit: with the border ignored, it works perfectly:</p> <pre></pre>
[ { "AnswerId": "24865492", "CreationDate": "2014-07-21T12:47:39.920", "ParentId": null, "OwnerUserId": "216880", "Title": null, "Body": "<p><code>invals</code> has shape <code>(5, 5)</code> in the last two dimensions, however you want to pool over <code>(2, 2)</code> subsets. This only works if you ignore the border (i.e. the last column and the last row of <code>invals</code>).</p>\n" } ]
24,665,056
5
<python><cuda><gpu><nvidia><theano>
2014-07-09T23:06:20.643
null
3,822,367
Using Theano with GPU on Ubuntu 14.04 on AWS g2
<p>I'm having trouble getting Theano to use the GPU on my machine.</p> <p>When I run <code>THEANO_FLAGS=floatX=float32,device=gpu python check_blas.py</code> from /usr/local/lib/python2.7/dist-packages/theano/misc, I get:</p> <p><code>WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available (error: Unable to get the number of gpus available: no CUDA-capable device is detected)</code></p> <p>I've also checked that the NVIDIA driver is installed with <code>lspci -vnn | grep -i VGA -A 12</code>, which reports <code>Kernel driver in use: nvidia</code>.</p> <p>However, when I run <code>nvidia-smi</code> the result is: <code>NVIDIA: could not open the device file /dev/nvidiactl (No such file or directory). NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.</code></p> <p>Also, /dev/nvidiactl doesn't exist. What's going on?</p> <p>UPDATE: nvidia-smi now works, with result:</p> <pre></pre> <p>and after compiling the NVIDIA_CUDA-6.0_Samples and running deviceQuery I get: <code>cudaGetDeviceCount returned 35 -> CUDA driver version is insufficient for CUDA runtime version, Result = FAIL</code></p>
[ { "AnswerId": "29877206", "CreationDate": "2015-04-26T11:47:34.713", "ParentId": null, "OwnerUserId": "2930037", "Title": null, "Body": "<p>Had the same problem and reinstalled Cuda and at the end it says i have to update PATH to include /usr/local/cuda7.0/bin and LD_LIBRARY_PATH to include /usr/local/cuda7.0/lib64. The PATH (add LD_LIBRARY_PATH in same file) can be found in /etc/environment. Then theano found gpu. Basic error on my part...</p>\n" }, { "AnswerId": "49819761", "CreationDate": "2018-04-13T14:52:26.577", "ParentId": null, "OwnerUserId": "1052899", "Title": null, "Body": "<p>I got </p>\n\n<pre><code>-&gt; CUDA driver version is insufficient for CUDA runtime version\n</code></pre>\n\n<p>and my problem is related with the selected GPU mode.\nIn other words, the problem may be related to the selected GPU mode (Performance/Power Saving Mode), when you select (with nvidia-settings utility, in the \"PRIME Profiles\" configurations) the integrated Intel GPU and you execute the <code>deviceQuery</code> script... you get this error:</p>\n\n<p>But this error is misleading,\nby <strong>selecting</strong> back the <strong>NVIDIA(Performance mode)</strong> with <strong>nvidia-settings</strong> utility the problem disappears.</p>\n\n<p><strong>This is not a version problem</strong>. </p>\n\n<p>Regards</p>\n\n<p>P.s: The selection is available when Prime-related-stuff is installed. Further details: <a href=\"https://askubuntu.com/questions/858030/nvidia-prime-in-nvidia-x-server-settings-in-16-04-1\">https://askubuntu.com/questions/858030/nvidia-prime-in-nvidia-x-server-settings-in-16-04-1</a> </p>\n" }, { "AnswerId": "38781071", "CreationDate": "2016-08-05T04:22:02.097", "ParentId": null, "OwnerUserId": "1787555", "Title": null, "Body": "<p>If you are using CUDA 7.5, make sure follow official instruction:\nCUDA 7.5 doesn't support the default g++ version. 
Install an supported version and make it the default.</p>\n\n<pre><code>sudo apt-get install g++-4.9\n\nsudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 20\nsudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 10\n\nsudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.9 20\nsudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-5 10\n\nsudo update-alternatives --install /usr/bin/cc cc /usr/bin/gcc 30\nsudo update-alternatives --set cc /usr/bin/gcc\n\nsudo update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++ 30\nsudo update-alternatives --set c++ /usr/bin/g++\n</code></pre>\n\n<p>If theano GPU test code has error:</p>\n\n<blockquote>\n <p>ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu:\n libcublas.so.7.5: cannot open shared object file: No such file or\n directory WARNING (theano.sandbox.cuda): CUDA is installed, but\n device gpu is not available (error: cuda unavilable)</p>\n</blockquote>\n\n<p>Just using <code>ldconfig</code> command to link the shared object of cuda 7.5:</p>\n\n<pre><code>sudo ldconfig /usr/local/cuda-7.5/lib64\n</code></pre>\n" }, { "AnswerId": "24665354", "CreationDate": "2014-07-09T23:38:24.583", "ParentId": null, "OwnerUserId": "1695960", "Title": null, "Body": "<p>CUDA GPUs in a linux system are not usable until certain \"device files\" have been properly established.</p>\n\n<p>There is a note to this effect in <a href=\"http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-linux/index.html#runfile-verifications\" rel=\"nofollow\">the documentation</a>.</p>\n\n<p>In general there are several ways these device files can be established:</p>\n\n<ol>\n<li>If an X-server is running.</li>\n<li>If a GPU activity is initiated as root user (such as running nvidia-smi, or any CUDA app.)</li>\n<li>Via startup scripts (refer to the documentation linked above for an example).</li>\n</ol>\n\n<p>If none of these steps are taken, the GPUs will not be functional for non-root users. Note that the files do not persist through re-boots, and must be re-established on each boot cycle, through one of the 3 above methods. If you use method 2, and reboot, the GPUs will not be available until you use method 2 again.</p>\n\n<p>I suggest reading the linux getting started guide entirely (linked above), if you are having trouble setting up a linux system for CUDA GPU usage.</p>\n" }, { "AnswerId": "24913032", "CreationDate": "2014-07-23T14:19:43.733", "ParentId": null, "OwnerUserId": "2304604", "Title": null, "Body": "<p>I've wasted a lot of hours trying to get AWS G2 to work on ubuntu but failed by getting exact error like you did. Currently I'm running Theano with gpu smoothly with <a href=\"https://aws.amazon.com/marketplace/pp/B00FYCDDTE\" rel=\"nofollow\">this</a> redhat AMI. To install Theano on Redhat follow the process of <em>Installing Theano in CentOS</em> in Theano documentation.</p>\n" } ]
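<p>Once the driver and device files are sorted out, a small hedged check from Python confirms whether Theano actually ends up on the GPU (this mirrors the test from the Theano documentation of that era):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

print(theano.config.device)  # 'gpu' only if the CUDA backend initialised

x = theano.shared(np.random.rand(1000).astype('float32'))
f = theano.function([], T.exp(x))
print(f.maker.fgraph.toposort())  # GpuElemwise nodes appear here when the GPU is used
</code></pre>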
24,697,064
1
<python><theano><deep-learning>
2014-07-11T11:52:35.777
null
499,439
How to get predictions vector from Theano stacked autoencoder
<p>I'm trying to modify the stacked autoencoder for classification from the Theano <a href="http://deeplearning.net/tutorial/deeplearning.pdf" rel="nofollow">deep learning tutorial</a>, chapter 8. The autoencoder code I'm dealing with is available <a href="https://github.com/lisa-lab/DeepLearningTutorials/blob/master/code/SdA.py" rel="nofollow">here</a>.</p> <p>My dataset consists of 4 arrays: test_set_x, test_set_y, valid_set_x, valid_set_y. The names are self-explanatory.</p> <p>This is how the trained autoencoder is checked on the validation set:</p> <pre></pre> <p>This code prints out "0.87" on my dataset, so it does work.</p> <p>Expressed more verbosely,</p> <pre></pre> <p>it still gives the correct answer, 87%.</p> <p>But whenever I try to get the actual class prediction vector directly, I get a very wrong result: all elements of the result vector are equal to 4 (one of my classes).</p> <p>My attempt looks like this:</p> <pre></pre> <p>This prints out "[4, 4, 4, ....., 4, 4]". Comparing this result with the valid_set_y vector gives about 12% accuracy, nowhere near 87%.</p> <p>I don't understand what I'm doing wrong.</p> <p>Please help me if you've ever dealt with Theano autoencoders and/or the mentioned tutorial.</p> <p>Thank you.</p>
[ { "AnswerId": "24722883", "CreationDate": "2014-07-13T12:50:41.893", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>The <code>valid_score</code> output is the <strong>error</strong> rate on the validation set. A validation score of <code>87%</code> means that you managed to classify correctly only ~12% of your validation examples. This result seems to be consistet with an \"all 4\" prediction rule.</p>\n" } ]
24,728,255
3
<python><macos><theano><deep-learning>
2014-07-14T00:51:42.317
null
3,835,247
How to install theano library on OS X?
<p>In my machine learning course we are going to start using <a href="http://deeplearning.net/software/theano/install.html#install" rel="nofollow">Theano</a>, a very well known library for deep learning architectures. I already installed it with the following command:</p> <pre></pre> <p>However, when I try to test whether it installed correctly, the Python interpreter can't find the module. I don't know if I'm installing it right. Reading the documentation I found <a href="http://continuum.io/downloads" rel="nofollow">Anaconda</a>; should I first install Anaconda and then try installing Theano again? Is this the right way to install this library on Mac OS X? How can I install it correctly in order to use Theano successfully?</p>
[ { "AnswerId": "32152584", "CreationDate": "2015-08-22T05:15:37.183", "ParentId": null, "OwnerUserId": "4127806", "Title": null, "Body": "<p>Consider installing Theano in a virtual enviroment as oppose to installing it globally via sudo. The significance of this step is nicely described <a href=\"http://www.dabapps.com/blog/introduction-to-pip-and-virtualenv-python/\" rel=\"nofollow\">here</a>. \nIn your terminal window, do the following:</p>\n\n<pre><code>virtualenv --system-site-packages -p python2.7 theano-env\nsource theano-env/bin/activate\npip install -r https://raw.githubusercontent.com/Lasagne/Lasagne/v0.1/requirements.txt\n</code></pre>\n" }, { "AnswerId": "44283294", "CreationDate": "2017-05-31T11:14:29.553", "ParentId": null, "OwnerUserId": "3232179", "Title": null, "Body": "<p>Anaconda is indeed highly recommended for python libraries. In fact, Anaconda is not only for python in contrast to pip. You can read more here: <a href=\"https://stackoverflow.com/questions/20994716/what-is-the-difference-between-pip-and-conda\">What is the difference between pip and conda?</a></p>\n\n<p>For installing Theano, I had already installed Anaconda. I just simply did:</p>\n\n<p><code>conda install theano</code></p>\n\n<p>Then, in Ipython, I successfully imported theano:</p>\n\n<p><code>import theano</code></p>\n" }, { "AnswerId": "28209171", "CreationDate": "2015-01-29T07:30:07.693", "ParentId": null, "OwnerUserId": "666043", "Title": null, "Body": "<p>Installing Python with Homebrew and installing Theano with pip after that worked fine for me. I just needed to install nose after that to be able to run the tests.</p>\n\n<pre><code>brew install python\npip install Theano\npip install nose\n</code></pre>\n\n<p>I cannot help much installing the framework with Anaconda though.</p>\n" } ]
24,744,701
1
<python><machine-learning><neural-network><gpu><theano>
2014-07-14T19:49:53.977
24,765,554
1,460,123
Convert CudaNdarraySharedVariable to TensorVariable
<p>I'm trying to convert a GPU model to a CPU-compatible version for prediction on a remote server -- how can I convert <code>CudaNdarraySharedVariable</code>s to <code>TensorVariable</code>s to avoid an error from calling CUDA code on a GPU-less machine? The experimental Theano flag seems to have left a few <code>CudaNdarraySharedVariable</code>s hanging around.</p>
[ { "AnswerId": "24765554", "CreationDate": "2014-07-15T18:29:32.457", "ParentId": null, "OwnerUserId": "2616754", "Title": null, "Body": "<p>For a plain CudaNdarray variable, something like this should work:</p>\n\n<p>'''x = CudaNdarray... x_new=theano.tensor.TensorVariable(CudaNdarrayType([False] * tensor_dim))<br>\nf = theano.function([x_new], x_new)</p>\n\n<p>converted_x = f(x)\n'''</p>\n" } ]
24,752,655
1
<python><neural-network><theano><deep-learning><unsupervised-learning>
2014-07-15T07:47:24.470
26,047,542
1,714,410
Unsupervised pre-training for convolutional neural network in theano
<p>I would like to design a deep net with one (or more) convolutional layers (CNN) and one or more fully connected hidden layers on top.<br> For deep networks with fully connected layers there are methods in Theano for unsupervised pre-training, e.g., using <a href="http://www.deeplearning.net/tutorial/SdA.html">denoising auto-encoders</a> or <a href="http://www.deeplearning.net/tutorial/rbm.html">RBMs</a>.</p> <p>My question is: how can I implement (in Theano) an unsupervised pre-training stage for convolutional layers?</p> <p>I do not expect a full implementation as an answer, but I would appreciate a link to a good tutorial or a reliable reference.</p>
[ { "AnswerId": "26047542", "CreationDate": "2014-09-25T20:29:26.867", "ParentId": null, "OwnerUserId": "1936499", "Title": null, "Body": "<p><a href=\"http://people.idsia.ch/~ciresan/data/icann2011.pdf\" rel=\"noreferrer\">This paper</a> describes an approach for building a stacked convolutional autoencoder. Based on that paper and some Google searches I was able to implement the described network. Basically, everything you need is described in the Theano convolutional network and denoising autoencoder tutorials with one crucial exception: how to reverse the max-pooling step in the convolutional network. I was able to work that out using a method from <a href=\"https://groups.google.com/forum/#!msg/theano-users/7t7_hxbAMdA/V1tp7YZ50PYJ\" rel=\"noreferrer\">this discussion</a> - the trickiest part is figuring out the right dimensions for W_prime as these will depend on the feed forward filter sizes and the pooling ratio. Here is my inverting function:</p>\n\n<pre><code> def get_reconstructed_input(self, hidden):\n \"\"\" Computes the reconstructed input given the values of the hidden layer \"\"\"\n repeated_conv = conv.conv2d(input = hidden, filters = self.W_prime, border_mode='full')\n\n multiple_conv_out = [repeated_conv.flatten()] * np.prod(self.poolsize)\n\n stacked_conv_neibs = T.stack(*multiple_conv_out).T\n\n stretch_unpooling_out = neibs2images(stacked_conv_neibs, self.pl, self.x.shape)\n\n rectified_linear_activation = lambda x: T.maximum(0.0, x)\n return rectified_linear_activation(stretch_unpooling_out + self.b_prime.dimshuffle('x', 0, 'x', 'x'))\n</code></pre>\n" } ]
24,763,371
1
<python><numpy><scipy><gpu><theano>
2014-07-15T16:24:03.990
null
1,460,123
Theano OSError on function declaration
<p>On declaration of a Theano symbolic function, I get an OSError and traceback. Interestingly enough, the same code functions on a different machine. One machine is configured to use the GPU, while the other (with the error) is CPU-only. Has anyone else experienced this sort of behavior and have a clue how to proceed? </p> <pre></pre>
[ { "AnswerId": "24765088", "CreationDate": "2014-07-15T18:02:24.630", "ParentId": null, "OwnerUserId": "1460123", "Title": null, "Body": "<p>I'm not sure why it was triggered on declaration of Theano's symbolic function, but it was a simple memory issue. Cutting down on memory usage solved the problem for me.</p>\n" } ]