Columns: QuestionId (int64, 388k-59.1M), AnswerCount (int64, 0-47), Tags (string, length 7-102), CreationDate (string, length 23), AcceptedAnswerId (float64, 388k-59.1M), OwnerUserId (float64, 184-12.5M), Title (string, length 15-150), Body (string, length 12-29.3k), answers (list, length 0-47)
31,615,990
1
<neural-network><deep-learning><caffe><nvidia-digits>
2015-07-24T17:00:36.960
null
2,284,821
Caffe web demo error when running a model trained on Digits
<p>I trained a neural network model on DIGITS and it seemed to run fine there.<br> Then I exported the trained model files and copied them into a different system running the standard Caffe web demo. I hoped to just be able to plug those files in and have them run in Caffe, but I am getting an error.</p> <p>Specifically, I copied my model into bvlc_reference_caffenet.caffemodel, the deploy.prototxt into deploy.prototxt, and the mean.binaryproto into the ilsvrc_2012_mean.npy file. However, when I try to run it, it appears not to like the format of the mean.binaryproto file, as indicated by the error message:</p> <pre></pre> <p>What am I doing wrong here? Do I need to process the mean.binaryproto file from DIGITS somehow before I use it with Caffe?</p>
[ { "AnswerId": "33775703", "CreationDate": "2015-11-18T09:05:26.887", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>You need to convert the <code>.binaryproto</code> file to a numpy file.\nThere is a nice example <a href=\"https://github.com/BVLC/caffe/issues/290\" rel=\"nofollow\">here</a> using <code>caffe.io</code> and <code>caffe.proto</code>.</p>\n" } ]
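The linked example does the real conversion with caffe.io (blobproto_to_array) and caffe.proto; as a Caffe-free sketch of just the .npy side of that round trip, with the mean array below a hypothetical stand-in for the decoded blob:

```python
import os
import tempfile
import numpy as np

# Hypothetical stand-in for the mean image decoded from mean.binaryproto;
# the real decode uses caffe.proto's BlobProto plus caffe.io.blobproto_to_array.
mean_image = np.zeros((3, 256, 256), dtype=np.float32)  # C x H x W, Caffe's layout

# Save it in the .npy format that the web demo loads as ilsvrc_2012_mean.npy.
out_path = os.path.join(tempfile.mkdtemp(), "mean.npy")
np.save(out_path, mean_image)

# Round-trip check: the demo will np.load() this file at startup.
loaded = np.load(out_path)
print(loaded.shape)  # (3, 256, 256)
```

Copying the raw .binaryproto bytes into the .npy file, as in the question, fails precisely because these are two different serialization formats.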
31,617,486
1
<python><caffe><kaggle>
2015-07-24T18:34:47.860
31,634,295
864,128
Use Caffe to train LeNet with CSV data
<p>I have a question on using Caffe with HDF5 data. I tried to run an example on the Kaggle MNIST CSV data with the following steps:</p> <ol> <li><p>Convert it to HDF5 data (I used caffe-example.py to convert).</p></li> <li><p>Then modify lenet_train_test.prototxt and train it. I am quite at a loss at this step.</p></li> </ol> <p>The only change I made here is:</p> <pre></pre> <p>How do I change lenet_train_test.prototxt to suit the data? Or are there other files I need to change as well? The error log is:</p> <pre></pre>
[ { "AnswerId": "31634295", "CreationDate": "2015-07-26T06:22:04.260", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>I assume you have one hdf5 data file <code>'data/mnist_train_h5.hd5'</code>. </p>\n\n<ol>\n<li><p>As you can see from the error message you got, <code>\"HDF5Data\"</code> layer does not support data transformation. Specifically, you cannot scale the data by the layer.<br>\nThus, any transformations you wish to have, you must apply them <em>yourself</em> during the creation of <code>'data/mnist_train_h5.hd5'</code>.</p></li>\n<li><p><code>\"HDF5Data\"</code> layer does not accept <code>data_param</code>, but rather <code>hdf5_data_param</code> with a parameter <code>source</code> specifying a <em>list</em> of hd5 binary files. In your case you should prepare an extra <em>text</em> file <code>'data/mnist_train_h5.txt'</code> with a single line:</p></li>\n</ol>\n\n<blockquote>\n <p>data/mnist_train_h5.hd5</p>\n</blockquote>\n\n<p>This text file will tell caffe to read <code>'data/mnist_train_h5.hd5'</code>.</p>\n\n<p>The resulting layer should look like:</p>\n\n<pre><code>layer {\n name: \"mnist\"\n type: \"HDF5Data\"\n top: \"data\"\n top: \"label\"\n hdf5_data_param {\n source: \"data/mnist_train_h5.txt\"\n batch_size: 64\n }\n include {\n phase: TRAIN\n }\n}\n</code></pre>\n" } ]
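The first point in the answer, applying transformations yourself while creating the .hd5 file, can be sketched in plain numpy; the raw array and the 1/256 scale are illustrative stand-ins, and actually writing the result out would use h5py, which is not shown here:

```python
import numpy as np

# Hypothetical raw MNIST-style pixels in [0, 255], shaped N x C x H x W.
raw = (np.arange(28 * 28, dtype=np.float32) % 256).reshape(1, 1, 28, 28)

# "HDF5Data" cannot apply transform_param, so scale before writing the .hd5;
# 0.00390625 (= 1/256) is the scale the LeNet LMDB example uses.
scaled = raw * 0.00390625

print(scaled.max() <= 1.0)  # True
```

The scaled array would then be written to data/mnist_train_h5.hd5, and that filename listed in data/mnist_train_h5.txt, as the answer describes.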
31,623,272
0
<python><arrays><numpy><python-imaging-library><caffe>
2015-07-25T05:26:46.563
null
3,785,114
How to save a jpegtran JPEG in a numpy same as with PIL?
<p>I have started using a new library in my Python program to process images, for speed. I was using PIL, but now I am using jpegtran. With my previous code I would save the image to a numpy array, and that worked perfectly fine; but now, since the datatype is different, I am having problems creating the same numpy array.</p> <p>PIL code:</p> <pre></pre> <p>jpegtran code:</p> <pre></pre> <p>I need the original shape (2136, 3216, 3) for it to work with the rest of my code.</p>
[]
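The question is unanswered, but if the new library can hand back raw decoded RGB bytes, the PIL-style array can be rebuilt with numpy. Whether jpegtran exposes pixels this way is an assumption, and the tiny dimensions below stand in for the real (2136, 3216):

```python
import numpy as np

# Tiny stand-in dimensions for the real (2136, 3216) image.
h, w = 4, 6
raw_bytes = bytes(range(h * w * 3))  # pretend these are decoded RGB bytes

# If the library can yield raw RGB bytes, frombuffer + reshape rebuilds the
# (height, width, 3) uint8 array that PIL's np.asarray(img) would have produced.
arr = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(h, w, 3)
print(arr.shape)  # (4, 6, 3)
```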
31,627,380
1
<python><keras><kaggle>
2015-07-25T14:17:47.200
null
5,155,533
Trying Kaggle Titanic with Keras: getting loss and valid_loss of -0.0000
<p>Hi, I am getting weird results for the following code for the problem posted here (<a href="https://www.kaggle.com/c/titanic" rel="nofollow">https://www.kaggle.com/c/titanic</a>):</p> <pre></pre> <p>I am getting the following results:</p> <pre></pre> <p>I am trying to create a simple 3-layer network with totally basic code. I have tried this kind of classification problem before using Keras on Kaggle, but this time I am getting this error.</p> <p>Is it overfitting due to too little data? What am I missing? Can someone help?</p>
[ { "AnswerId": "40352913", "CreationDate": "2016-11-01T01:25:22.950", "ParentId": null, "OwnerUserId": "4120005", "Title": null, "Body": "<p>Old post, but answering anyway in case someone else attempts Titanic with Keras.</p>\n\n<p>Your network may have too many parameters and too little regularization (e.g. dropout).</p>\n\n<p>Call model.summary() right before the model.compile and it will show you how many parameters your network has. Just between your two Dense layers you should have 512 X 512 = 262,144 parameters. That's a lot for 762 examples.</p>\n\n<p>Also, you may want to use a sigmoid activation on the last layer and binary_crossentropy loss, as you only have two output classes.</p>\n" } ]
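The parameter arithmetic in the answer can be checked directly; note that the quoted 262,144 counts only the weight matrix between the two 512-unit layers, while Keras's model.summary() would also count the 512 biases:

```python
# Parameter count of a fully connected (Dense) layer:
# a weight matrix of n_in * n_out entries plus n_out biases.
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

weights_only = 512 * 512          # the 262,144 the answer quotes
with_biases = dense_params(512, 512)
print(weights_only, with_biases)  # 262144 262656
```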
31,639,331
2
<python><unix><machine-learning><neural-network><caffe>
2015-07-26T16:31:52.323
null
4,969,461
Caffe: Almost done but stuck at the last step
<p>I'm currently setting up Caffe with Python on my MacBook. I swear all the prerequisites are OK, but it returns an error when I try to build Caffe. What's wrong? Here is the console:</p> <pre></pre> <hr> <p>OK, now the aforementioned problem has been solved, but there is another one. Could someone help me?</p> <pre></pre>
[ { "AnswerId": "31639789", "CreationDate": "2015-07-26T17:17:56.663", "ParentId": null, "OwnerUserId": "4592059", "Title": null, "Body": "<p>I guess you have to install cuda dev. kit.</p>\n" }, { "AnswerId": "46592024", "CreationDate": "2017-10-05T17:57:27.453", "ParentId": null, "OwnerUserId": "880837", "Title": null, "Body": "<p>For anyone else that finds this. The CUDA dev kit is only needed if you have an Nvidia card on the machines you'll use this on. For most Mac users this likely isn't the case (check your hardware specs). If you don't have that kind of graphics card, you can't use CUDA. Just disable it before compiling Caffe:</p>\n\n<ul>\n<li>In your caffe dir, edit the Makefile.config</li>\n<li>Uncomment this line: <code>CPU_ONLY := 1</code> to indicate CUDA won't be needed.</li>\n</ul>\n" } ]
31,643,409
0
<theano><autoencoder>
2015-07-27T00:29:34.913
null
864,128
How can the numerical values of Theano hidden layers be accessed efficiently, i.e. not eval()?
<p>Is there any efficient way of extracting the numerical values of a hidden layer in Theano? I can do this with .eval(), which is super slow.</p> <p>The code is (based on a modification of dA.py):</p> <pre></pre> <p>It turns out .eval() is super slow; is there any way I can improve it? I tried to write it with theano.function, but I failed. Here is my code:</p> <pre></pre> <p>It doesn't work; the error log is:</p> <pre></pre> <p>So I have two questions here:</p> <ol> <li>How do I write the function code here?</li> <li>Is there any efficient way to output the numerical values of hidden layers in Theano?</li> </ol>
[]
31,649,216
3
<python><caffe><lmdb>
2015-07-27T09:16:19.880
null
1,300,575
Writing data to LMDB with Python very slow
<p>Creating datasets for training with <a href="http://caffe.berkeleyvision.org/" rel="noreferrer">Caffe</a>, I tried both HDF5 and LMDB. However, creating an LMDB is very slow, even slower than HDF5. I am trying to write ~20,000 images.</p> <p>Am I doing something terribly wrong? Is there something I am not aware of?</p> <p>This is my code for LMDB creation:</p> <pre></pre> <p>As you can see, I am creating a transaction for every 1,000 images, because I thought creating a transaction for each image would create overhead, but it seems this doesn't influence performance much.</p>
[ { "AnswerId": "32576493", "CreationDate": "2015-09-15T01:56:16.323", "ParentId": null, "OwnerUserId": "4323148", "Title": null, "Body": "<p>Try this:</p>\n\n<pre><code>DB_KEY_FORMAT = \"{:0&gt;10d}\"\ndb = lmdb.open(path, map_size=int(1e12))\n curr_idx = 0\n commit_size = 1000\n with in_db_data.begin(write=True) as in_txn:\n for curr_commit_idx in range(0, num_data, commit_size):\n for i in range(curr_commit_idx, min(curr_commit_idx + commit_size, num_data)):\n d, l = data[i], labels[i]\n im_dat = caffe.io.array_to_datum(d.astype(float), label=int(l))\n key = DB_KEY_FORMAT.format(curr_idx)\n in_txn.put(key, im_dat.SerializeToString())\n curr_idx += 1\n db.close()\n</code></pre>\n\n<p>the code</p>\n\n<pre><code>with in_db_data.begin(write=True) as in_txn:\n</code></pre>\n\n<p>takes much time.</p>\n" }, { "AnswerId": "37149186", "CreationDate": "2016-05-10T21:21:23.713", "ParentId": null, "OwnerUserId": "2617351", "Title": null, "Body": "<p>In my experience, I've had <strong>50-100 ms writes to LMDB from Python</strong> writing Caffe data on ext4 hard disk on Ubuntu. <strong>That's why I use tmpfs</strong> (<strong>RAM disk</strong> functionality built into Linux) and get these writes done in around <strong>0.07 ms</strong>. You can make smaller databases on your ramdisk and copy them to a hard disk and later train on all of them. I'm making around 20-40GB ones as I have 64 GB of RAM.</p>\n\n<p>Some pieces of code to help you guys dynamically create, fill and move LMDBs to storage. Feel free to edit it to fit your case. 
It should save you some time getting your head around how LMDB and file manipulation works in Python.</p>\n\n<pre><code>import shutil\nimport lmdb\nimport random\n\n\ndef move_db():\n global image_db\n image_db.close();\n rnd = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5))\n shutil.move( fold + 'ram/train_images', '/storage/lmdb/'+rnd)\n open_db()\n\n\ndef open_db():\n global image_db\n image_db = lmdb.open(os.path.join(fold, 'ram/train_images'),\n map_async=True,\n max_dbs=0)\n\ndef write_to_lmdb(db, key, value):\n \"\"\"\n Write (key,value) to db\n \"\"\"\n success = False\n while not success:\n txn = db.begin(write=True)\n try:\n txn.put(key, value)\n txn.commit()\n success = True\n except lmdb.MapFullError:\n txn.abort()\n # double the map_size\n curr_limit = db.info()['map_size']\n new_limit = curr_limit*2\n print '&gt;&gt;&gt; Doubling LMDB map size to %sMB ...' % (new_limit&gt;&gt;20,)\n db.set_mapsize(new_limit) # double it\n\n...\n\nimage_datum = caffe.io.array_to_datum( transformed_image, label )\nwrite_to_lmdb(image_db, str(itr), image_datum.SerializeToString())\n</code></pre>\n" }, { "AnswerId": "44697800", "CreationDate": "2017-06-22T11:15:52.363", "ParentId": null, "OwnerUserId": "590335", "Title": null, "Body": "<p>LMDB writes are very sensitive to order - If you can sort the data before insertion speed will improve significantly</p>\n" } ]
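The commit-batching idea from the question and the first answer, committing once per block of records rather than once per record, can be illustrated with the stdlib sqlite3 module standing in for lmdb (this is not the LMDB API, just the same transaction pattern):

```python
import sqlite3

# Group many puts into one transaction instead of paying a commit per record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v BLOB)")

records = [("{:0>10d}".format(i), bytes([i % 256])) for i in range(5000)]
commit_size = 1000

for start in range(0, len(records), commit_size):
    with conn:  # one transaction per 1,000 records
        conn.executemany("INSERT INTO kv VALUES (?, ?)",
                         records[start:start + commit_size])

count = conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
print(count)  # 5000
```

With LMDB the analogous move is one `begin(write=True)` transaction per batch; the zero-padded keys also line up with the last answer's point that sorted insertion order speeds up LMDB writes.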
31,658,771
1
<python><theano>
2015-07-27T16:46:05.100
null
5,130,560
Issue with Theano scan function - TypeError: Cannot convert Type TensorType(float64, 3D)
<p>I am having some trouble with the Theano scan function and the following code:</p> <pre></pre> <p>As you can see, the variable start is an integer that gets a new value after each call of step_, and I want to get the sequence of its values after an arbitrary number of steps n_steps. If I run the code with n_steps = 1, everything works. However, for n_steps > 1, I get this error:</p> <blockquote> <p>TypeError: Cannot convert Type TensorType(float64, 3D) (of Variable IncSubtensor{Set;:int64:}.0) into Type TensorType(float64, (False, True, False)). You can try to manually convert IncSubtensor{Set;:int64:}.0 into a TensorType(float64, (False, True, False)).</p> </blockquote> <p>I don't get where it comes from, as none of my variables is a 3D tensor (I have checked with theano.printing.debugprint; h and c are rows as expected, and sample is a scalar).</p> <p>Do you have any clue?</p> <p>Thanks</p>
[ { "AnswerId": "31673948", "CreationDate": "2015-07-28T10:40:23.447", "ParentId": null, "OwnerUserId": "5130560", "Title": null, "Body": "<p>Actually I have found the solution to my problem. I changed this</p>\n\n<pre><code>def _slice(_x, n, dim):\nif _x.ndim == 3:\n return _x[:, :, n * dim:(n + 1) * dim]\nreturn _x[:, n * dim:(n + 1) * dim]\n</code></pre>\n\n<p>by this</p>\n\n<pre><code> def _slice(_x, n, dim):\n if _x.ndim == 3:\n return _x[:, :, n * dim:(n + 1) * dim]\n if _x.ndim == 2:\n return _x[:, n * dim:(n + 1) * dim]\n return _x[n * dim:(n + 1) * dim]\n</code></pre>\n\n<p>and this</p>\n\n<pre><code>x_ = tensor.dot(emb[None,:], tparams[_p(prefix, 'W')]) + tparams[_p(prefix, 'b')]\n</code></pre>\n\n<p>by this</p>\n\n<pre><code> x_ = tensor.dot(emb, tparams[_p(prefix, 'W')]) + tparams[_p(prefix, 'b')]\n</code></pre>\n\n<p>This makes <code>x_, h_ and c_</code> theano vector instead of rows as previously and removes the error (though I am not exactly sure why).</p>\n\n<p>Of course I've also update the call to <code>scan</code></p>\n\n<pre><code> rval, updates = theano.scan(_step,\n outputs_info=[start, tensor.alloc(numpy_floatX(0.),\n dim_proj),\n tensor.alloc(numpy_floatX(0.),\n dim_proj)],\n name=_p(prefix, '_layers'),\n n_steps=2)\n</code></pre>\n" } ]
31,659,899
1
<matlab><neural-network><deep-learning><caffe><matcaffe>
2015-07-27T17:50:10.927
31,668,306
5,161,771
Trying to use matcaffe for the interface between MATLAB and Caffe, but cannot find caffe_.cpp
<p>With caffe_ being a private function, when I call functions that depend on it, there is always an error telling me it cannot find caffe_.cpp.</p> <p>So how do I use that in MATLAB?</p>
[ { "AnswerId": "31668306", "CreationDate": "2015-07-28T05:51:25.880", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>You can find <code>'caffe_.cpp'</code> under <a href=\"https://github.com/BVLC/caffe/tree/master/matlab/%2Bcaffe/private\" rel=\"nofollow\"><code>matlab/+caffe/private/</code></a>.<br>\nMake sure you cloned caffe git properly and that you built the matlab interface:</p>\n\n<pre><code>~$ make matcaffe\n</code></pre>\n" } ]
31,664,988
1
<python-2.7><ssh><gnu-screen><theano><keras>
2015-07-27T23:23:41.007
31,673,239
5,106,953
Using screen session with Theano - race conditions
<p>When training a neural net implemented in Keras in a screen session, I appear to be running into race conditions with Theano.</p> <p>I proceed as follows. I ssh into the compute cluster I am using (which I am <em>not</em> a root user of).</p> <p>Then I run:</p> <pre></pre> <p>Then, once I'm in this screen session, I run the Python script which trains my model. I detach the screen (Ctrl+A+D), and when I do screen -r, everything is fine. However, if I exit my ssh session before I run screen -r, and run screen -r upon logging back in, then I get the following error:</p> <pre></pre> <p>Does anyone know why this happens? It's interesting that it only happens when I logout and try to run screen -r after logging in.</p>
[ { "AnswerId": "31673239", "CreationDate": "2015-07-28T10:08:17.247", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>My guess is that your home directory is on a networked filesystem of some kind (e.g. AFS). If so, as soon as you end the session the filesystem security credentials are invalidated and the process, though it continues to run inside the screen, no longer has permission to work with files in the Theano cache directory <code>~/.theano</code>. If this guess is correct then the problem is not a race condition.</p>\n\n<p>If the problem relates to AFS credential expiry then a solution is to use a credential cache with the <code>kinit</code> command (see the <code>-c</code> option in <a href=\"http://web.mit.edu/kerberos/krb5-1.12/doc/user/user_commands/kinit.html\" rel=\"nofollow\">http://web.mit.edu/kerberos/krb5-1.12/doc/user/user_commands/kinit.html</a>).</p>\n" } ]
31,673,350
1
<python><theano><pymc3><softmax>
2015-07-28T10:13:04.867
null
5,160,585
Softmax choice probabilities with Categorical in PyMC3
<p>I am trying to perform a parameter estimation for a single parameter of a softmax choice function in the following scenario:</p> <p>In each trial, three option values are given (e.g., [1,2,3]), and a subject makes a choice between the options (0, 1 or 2). The softmax function transforms option values into choice probabilities (vector of 3 probabilities, summing to 1), depending on a temperature parameter (here bound between 0 and 10).</p> <p>The choice in each trial is supposed to be modelled as a Categorical distribution with trial choice probabilities calculated from the softmax. Note that the choice probabilities of the Categorical depend on the option values and are therefore different in each trial.</p> <p>Here's what I came up with:</p> <pre class="lang-python prettyprint-override"></pre> <p>This code fails for nTrials bigger than something like 50 with an extremely long warning / error message:</p> <p>Warning:</p> <pre></pre> <p>Error:</p> <pre></pre> <p>I am rather new to PyMC (and Theano) and I feel my implementation is really clunky and suboptimal. Any help and advice is strongly appreciated!</p> <p>Felix</p> <p>Edit: I've uploaded the code as a notebook, showing the warnings and error messages in full: <a href="http://nbviewer.ipython.org/github/moltaire/softmaxPyMC/blob/master/softmax_stackoverflow.ipynb" rel="nofollow">http://nbviewer.ipython.org/github/moltaire/softmaxPyMC/blob/master/softmax_stackoverflow.ipynb</a></p>
[ { "AnswerId": "33740496", "CreationDate": "2015-11-16T16:50:10.457", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>I found again this case. Just as a follow up, now not able able to reproduce it now. So I think it got fixed. We fixed related problem that could hve caused in some case this error.</p>\n\n<p>It work with g++ 4.5.1. If you have this problem update Theano to the development version. If that don't fix it, try to use a g++ more recent, this could be related to older g++ version.</p>\n" } ]
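The softmax choice rule described in the question can be sketched in plain numpy, independent of PyMC3 and Theano; the max subtraction is a standard numerical-stability trick, not part of the original model:

```python
import numpy as np

# Option values -> choice probabilities, controlled by a temperature.
def softmax(values, temperature):
    v = np.asarray(values, dtype=float) / temperature
    v -= v.max()          # numerical-stability shift; doesn't change the result
    e = np.exp(v)
    return e / e.sum()

p_sharp = softmax([1, 2, 3], temperature=1.0)
p_flat = softmax([1, 2, 3], temperature=10.0)
print(np.isclose(p_sharp.sum(), 1.0))   # True
print(p_flat[2] < p_sharp[2])           # True: higher temperature flattens choice
```

The three resulting probabilities are what the trial's Categorical distribution consumes, one probability vector per trial.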
31,673,958
4
<cuda><artificial-intelligence><neural-network><nvcc><caffe>
2015-07-28T10:40:49.407
31,674,451
4,969,461
Caffe Installation: How to use libc++ instead of clang to power CUDA? (Mac)
<p>I know it is dim-witted to do so, but I have to, because currently CUDA 7.0 doesn't support Clang 7.0, while I'm using the Xcode 7 beta, and it would be virtually impossible for me to roll back to Xcode 6.0.</p> <pre></pre>
[ { "AnswerId": "32642537", "CreationDate": "2015-09-18T01:38:53.070", "ParentId": null, "OwnerUserId": "4696622", "Title": null, "Body": "<p>I had the same issue and I'm guessing lot's more will since Xcode 7.0 is available on App Store now, was going to comment on Jay's but I dont have enough reputation.<br>\nWhat you can do instead of downloading the whole 5gb Xcode 6.x is check Xcode under </p>\n\n<p>settings > locations > Command Line Tools </p>\n\n<p>if there is an option to use 6.4 just switch to that. If not, install <em>JUST</em> the command line tools 6.4 <a href=\"https://developer.apple.com/downloads/?name=Xcode\" rel=\"nofollow\">here</a> and then the option should be available.<br>\nThe way he posted work's as well but then you need two versions of Xcode on your machine.</p>\n" }, { "AnswerId": "32599934", "CreationDate": "2015-09-16T04:41:09.890", "ParentId": null, "OwnerUserId": "5158230", "Title": null, "Body": "<p>I had the same issue. Instead of rolling back to Xcode 6 you can just install it alongside Xcode 7 by downloading an older release from Apple's <a href=\"https://developer.apple.com/downloads/?name=Xcode\" rel=\"nofollow\">Developer Download Center</a>. Give it a name different from Xcode 7's name (e.g. 
call it \"Xcode 6.4\") and they can live side by side without issue.</p>\n\n<p>Just make sure you set the command line tools in your Xcode preferences to the older version when you want to compile with CUDA.</p>\n" }, { "AnswerId": "31674451", "CreationDate": "2015-07-28T11:04:59.873", "ParentId": null, "OwnerUserId": "678093", "Title": null, "Body": "<p>The <a href=\"http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-mac-os-x/\" rel=\"nofollow\">documentation</a> for CUDA 7.0 states that for Mac OS X <strong>Clang 6.0 (Xcode 6)</strong> is the most recent supported version.</p>\n" }, { "AnswerId": "32980764", "CreationDate": "2015-10-06T22:19:52.547", "ParentId": null, "OwnerUserId": "5228839", "Title": null, "Body": "<h1><code>nvcc</code> actual situation</h1>\n\n<p><code>nvcc</code> is <strong>parsed from a current clang binary version.</strong><br>\ne.g,</p>\n\n<pre><code>&gt; export\nPATH=/usr/local/bin:/usr/bin:/usr/sbin:/usr/local/var/rbenv/shims:/sbin\n\n&gt; clang -v\nclang version 3.8.0 (git@github.com:llvm-mirror/clang.git\n1082a41a5196e0fdddf1af1aa388af197cfc4514) (git@github.com:llvm\nmirror/llvm.git 60fe48f86639ab4472d186bc97e7676f269cae18)\nTarget: x86_64-apple-darwin15.0.0\nThread model: posix\nInstalledDir: /usr/local/bin\n\n&gt; make\n/Developer/NVIDIA/CUDA-7.5/bin/nvcc -ccbin clang++ -I../../common/inc -m64 -Xcompiler -arch -Xcompiler x86_64 -gencode arch=compute_20,code=sm_20 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_52,code=compute_52 -o asyncAPI.o -c asyncAPI.cu\nnvcc fatal : The version ('30800') of the host compiler ('clang') is not supported\nMakefile:229: recipe for target 'asyncAPI.o' failed\nmake: *** [asyncAPI.o] Error 1\n</code></pre>\n\n<blockquote>\n <p>nvcc fatal : The version ('30800') of the host compiler ('clang') is not supported</p>\n</blockquote>\n\n<p>That 
means...?</p>\n\n<pre><code>&gt; export PATH=/Applications/Xcode-6.4.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin:/usr/local/bin:/usr/bin:/usr/sbin:/usr/local/var/rbenv/shims:/sbin\n\n&gt; CC=/usr/bin/clang CXX=/usr/bin/clang++ make\n/Developer/NVIDIA/CUDA-7.5/bin/nvcc -ccbin clang++ -I../../common/inc -m64 -Xcompiler -arch -Xcompiler x86_64 -gencode arch=compute_20,code=sm_20 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_52,code=compute_52 -o asyncAPI.o -c asyncAPI.cu\n/Developer/NVIDIA/CUDA-7.5/bin/nvcc -ccbin clang++ -m64 -Xcompiler -arch -Xcompiler x86_64 -Xlinker -rpath -Xlinker /Developer/NVIDIA/CUDA-7.5/lib -gencode arch=compute_20,code=sm_20 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_52,code=compute_52 -o asyncAPI asyncAPI.o\nmkdir -p ../../bin/x86_64/darwin/release\ncp asyncAPI ../../bin/x86_64/darwin/release\n</code></pre>\n\n<p>Success.<br>\n...However, unknown actually what nvcc are using the clang of CXX.</p>\n\n<h1>Solution</h1>\n\n<p>So, Let's edit <code>clang</code> binary.</p>\n\n<pre><code>&gt; cp /Applications/Xcode-beta.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang /tmp/clang\n\n&gt; cd /tmp\n\n&gt; vim ./clang\n# or nvim, emacs, etc...\n\n# Searching `700.1.75` and replace `602.0.53`. 
\n# In case of vim,\n:%s/700.1.75/602.0.53/g\n:wq\n\n&gt; export PATH=/tmp:/usr/local/bin:/usr/bin:/usr/sbin:/usr/local/var/rbenv/shims:/sbin\n\n&gt; wihch clang\n/tmp/clang\n\n&gt; clang -v\nApple LLVM version 7.0.0 (clang-602.0.53)\nTarget: x86_64-apple-darwin15.0.0\nThread model: posix\n# Faking clang version 602.0.53 (Xcode 6.4's clang)\n# but, Actual implementation clang 7.0.0\n\n&gt; make\n/Developer/NVIDIA/CUDA-7.5/bin/nvcc -ccbin clang++ -I../../common/inc -m64 -Xcompiler -arch -Xcompiler x86_64 -gencode arch=compute_20,code=sm_20 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_52,code=compute_52 -o asyncAPI.o -c asyncAPI.cu\n/Developer/NVIDIA/CUDA-7.5/bin/nvcc -ccbin clang++ -m64 -Xcompiler -arch -Xcompiler x86_64 -Xlinker -rpath -Xlinker /Developer/NVIDIA/CUDA-7.5/lib -gencode arch=compute_20,code=sm_20 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_52,code=compute_52 -o asyncAPI asyncAPI.o\nmkdir -p ../../bin/x86_64/darwin/release\ncp asyncAPI ../../bin/x86_64/darwin/release\n</code></pre>\n\n<p>Yeeeeees!<br>\n<code>nvcc</code> compiled using <code>clang version 7.0.0</code>. </p>\n\n<p>In this way,</p>\n\n<pre><code>sudo xcode-select -s /Applications/Xcode-6.4.app/Contents/Developer\n</code></pre>\n\n<p>is <strong>not</strong> required. </p>\n\n<h1>Be careful.</h1>\n\n<p>Now, Your <code>clang</code> is remains became <code>/tmp/clang</code>.</p>\n\n<h1>FYI</h1>\n\n<p>My gist article,<br>\n<a href=\"https://gist.github.com/zchee/7833bf67013e83523181\" rel=\"nofollow\">gist - How to compiling use CUDA nvcc with Xcode7.0 clang 7.0.0</a></p>\n" } ]
31,683,098
1
<python><numpy><theano><lasagne>
2015-07-28T17:19:15.630
31,774,478
5,165,960
numpy array from csv file for lasagne
<p>I started learning how to use Theano with Lasagne, and started with the MNIST example. Now I want to try my own example: I have a train.csv file in which every row starts with 0 or 1, which represents the correct answer, followed by 773 0s and 1s, which represent the input. I don't understand how I can turn this file into the wanted numpy arrays in the load_database() function. This is the part from the original function for the MNIST database:</p> <pre></pre> <p>And I need to get X_train (the input) and y_train (the beginning of every row) from my csv file.</p> <p>Thanks!</p>
[ { "AnswerId": "31774478", "CreationDate": "2015-08-02T17:17:43.177", "ParentId": null, "OwnerUserId": "4592059", "Title": null, "Body": "<p>You can use <code>numpy.genfromtxt()</code> or <code>numpy.loadtxt()</code> as follows:</p>\n\n<pre><code>from sklearn.cross_validation import KFold\n\nXy = numpy.genfromtxt('yourfile.csv', delimiter=\",\")\n\n# the next section provides the required\n# training-validation set splitting but \n# you can do it manually too, if you want\n\nskf = KFold(len(Xy))\n\nfor train_index, valid_index in skf:\n ind_train, ind_valid = train_index, valid_index\n break\n\nXy_train, Xy_valid = Xy[ind_train], Xy[ind_valid]\n\nX_train = Xy_train[:, 1:]\ny_train = Xy_train[:, 0]\n\nX_valid = Xy_valid[:, 1:]\ny_valid = Xy_valid[:, 0]\n\n\n...\n\n# you can simply ignore the test sets in your case\nreturn X_train, y_train, X_val, y_val #, X_test, y_test\n</code></pre>\n\n<p>In the code snippet we ignored passing the <strong><code>test</code></strong> set.</p>\n\n<p>Now you can import your dataset to the main modul or script or whatever, but be aware to remove all the test part from that too. </p>\n\n<p>Or alternatively you can simply pass the valid sets as <strong><code>test</code></strong> set:</p>\n\n<pre><code># you can simply pass the valid sets as `test` set\nreturn X_train, y_train, X_val, y_val, X_val, y_val\n</code></pre>\n\n<p>In the latter case we don't have to care about the main moduls sections refer to the excepted <strong><code>test</code></strong> set, but as scores (if have) you will get the the <code>validation scores</code> twice i.e. as <code>test scores</code>.</p>\n\n<p><strong>Note:</strong> I don't know, which mnist example is that one, but probably, after you prepared your data as above, you have to make further modifications in your trainer module too to suit to your data. For example: input shape of data, output shape i.e. the number of classes e.g. 
in your case the former is <code>773</code>, the latter is <code>2</code>.</p>\n" } ]
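The label-first row layout from the question (column 0 is the answer, the remaining 773 columns are the input) reduces to two numpy slices once genfromtxt has produced the array; the tiny matrix below stands in for the real 774-column data:

```python
import numpy as np

# Tiny stand-in for the 774-column array np.genfromtxt would return:
# column 0 is the 0/1 answer, the rest is the 0/1 input vector.
Xy = np.array([[0, 1, 0, 1],
               [1, 0, 0, 1]], dtype=np.float32)

y_train = Xy[:, 0]
X_train = Xy[:, 1:]
print(X_train.shape, y_train.tolist())  # (2, 3) [0.0, 1.0]
```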
31,688,759
1
<python><macos><cuda><wrapper><caffe>
2015-07-28T23:20:32.807
31,757,650
3,543,300
Mac Caffe CUDA driver issue
<p>I'm trying to build caffe with the python wrapper on Mac OSX 10.0, but keep getting the following error when I execute the command: make runtest (make all -j8 and make test work fine).</p> <blockquote> <p>Check failed: error == cudaSuccess (35 vs. 0) CUDA driver version is insufficient for CUDA runtime version</p> </blockquote> <p>I have updated the CUDA driver to the latest version online. I also tried uninstalling and reinstalling CUDA and the driver, but the error still persists. How can I solve this?</p>
[ { "AnswerId": "31757650", "CreationDate": "2015-08-01T02:39:50.170", "ParentId": null, "OwnerUserId": "1695960", "Title": null, "Body": "<p>As was teased out in the comments, the basic problem here is an attempt to use CUDA on a GPU that does not support it (AMD Radeon ...).</p>\n\n<p><a href=\"https://stackoverflow.com/tags/cuda/info\">CUDA</a> is a GPU programming technology that only runs on NVIDIA GPUs (ignoring emulators and the like.)</p>\n\n<p>To make forward progress, some possibilities might be:</p>\n\n<ol>\n<li>Switch to another machine that has an NVIDIA GPU.</li>\n<li>Modify the configuration of Caffe so that it does not use or depend on CUDA.</li>\n</ol>\n" } ]
31,700,179
1
<python><machine-learning><nlp><theano><deep-learning>
2015-07-29T12:17:26.610
31,706,487
4,763,311
Unable to make sense of how Theano works in RNN NLP for classification
<pre></pre> <p>Explanation of code:</p> <p>I know this may not be a good thing to do on Stack Overflow, but I have been struggling for more than a week to decode this code, which is used to train a recurrent neural network. I am a newbie to Theano, first of all.</p> <p>word_batch = array([[ -1, -1, -1, 194, 358, 463, 208]], dtype=int32), label_last_word = 126</p> <p>The word_batch is an index for a sentence like the following:</p> <p>'I am going to USA from England'</p> <p>Here the word_batch is a context window associated with one particular word, say USA. So, if the context window is seven, the middle entry (194) in the word batch represents the index of that word in the dataset. I want to know, when I pass this as an argument to rnn.sentence_train, how the training happens inside the RNNSLU class. I am confused by the usage of variables like idx and x inside that class. I know how this happens in theory, but I am unable to decode the Theano part explicitly. If my question doesn't make sense, please let me know.</p> <p>Thanks.</p>
[ { "AnswerId": "31706487", "CreationDate": "2015-07-29T16:48:10.333", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p><code>rnn.sentence_train</code> is Theano function that has <code>updates=sentence_updates</code>. This means that on each call to <code>rnn.sentence_train</code> all of the shared variables in the <code>sentence_updates</code> dictionary's keys will be updated according to the symbolic update expressions in the corresponding <code>sentence_updates</code> dictionary values. Those expressions are all classical gradient descent (current parameter value - learning rate * gradient of cost with respect to parameter).</p>\n\n<p><code>idxs</code> is the symbolic placeholder for the input to the training function. In your example, <code>word_batch</code> fills in that placeholder when the training function is called.</p>\n" } ]
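The classical gradient-descent update the answer describes, which is what each entry of sentence_updates encodes symbolically, can be written out in plain numpy, detached from Theano's machinery:

```python
import numpy as np

# The rule behind each update pair:
# new value = current value - learning_rate * gradient of the cost.
def sgd_step(params, grads, lr):
    return [p - lr * g for p, g in zip(params, grads)]

w = [np.array([1.0, 2.0])]      # stand-in for one shared parameter
g = [np.array([0.5, -0.5])]     # stand-in for its cost gradient
new_w = sgd_step(w, g, lr=0.1)
print(new_w[0])  # approximately [0.95, 2.05]
```

In Theano the same arithmetic is built as symbolic expressions, and calling the compiled training function applies every update in the dictionary at once.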
31,710,586
1
<python><lazy-evaluation><symbolic-math><theano>
2015-07-29T20:35:53.133
31,711,156
1,815,451
Lazy evaluation of .dot or other theano function
<p>I am very new to Python and Theano, so this question may be silly.</p> <p>I've read in the documentation that .dot produces a symbolic tensor. I am debugging some program right now, and all I can see is a TensorVariable without any container or anything, and I don't know how to get the values from it.</p>
[ { "AnswerId": "31711156", "CreationDate": "2015-07-29T21:09:11.410", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>It isn't really possible to get a value from a symbolic variable because they don't have a value; instead they stand in for a value that is provided later.</p>\n\n<p>Consider the following example:</p>\n\n<pre><code>x = theano.tensor.matrix()\ny = theano.tensor.matrix()\nz = theano.dot(x, y)\nf = theano.function(inputs=[x, y], outputs=z)\na = numpy.array([[1,2,3],[4,5,6]])\nb = numpy.array([[1,2],[3,4],[5,6]])\nc = f(a, b)\n</code></pre>\n\n<p>Here, <code>x</code> and <code>y</code> are symbolic matrices. They don't have a value but they stand-in for some value that will be provided after the computation is compiled and executed. <code>a</code>, <code>b</code>, and <code>c</code> are concrete matrices with values. The <code>f = theano.function(...)</code> line compiles the computation and the <code>c = f(...)</code> executes that function, providing the value called <code>a</code> for <code>x</code> and providing the value called <code>b</code> for <code>y</code>; the return value, <code>c</code>, takes on the value computed by the symbolic expression <code>z</code>.</p>\n" } ]
31,722,238
2
<python><ubuntu><anaconda><caffe>
2015-07-30T11:13:00.243
31,943,616
3,633,250
Installing caffe on ubuntu 15.04 with anaconda 3 for python 3.4 - no module caffe found
<p>I am trying to install caffe on my Ubuntu 15.04 with Anaconda 3 (for Python 3.4). I managed to install all requirements and I followed the instructions from the official website. So I downloaded caffe-master and did:</p> <pre></pre> <p>It completes fine, no errors (finally). But after that, if I go into Anaconda and do</p> <pre></pre> <p>I get "no module caffe found". What am I doing wrong? Any ideas?</p>
[ { "AnswerId": "55751464", "CreationDate": "2019-04-18T17:58:06.533", "ParentId": null, "OwnerUserId": "10685311", "Title": null, "Body": "<p>you can try these steps:</p>\n\n<p>To use caffe within python, export its path as</p>\n\n<p>export PYTHONPATH=~/Home/<em>username</em>/caffe/python:$PYTHONPATH</p>\n\n<p>Replace username with the your username in the system.</p>\n\n<p>Once you've done this, run the python terminal and import caffe</p>\n\n<blockquote>\n <blockquote>\n <blockquote>\n <p>import caffe</p>\n </blockquote>\n </blockquote>\n</blockquote>\n\n<p>If it throws a 'module not found' error, check if it has been appended in pythonpath properly by typing</p>\n\n<blockquote>\n <blockquote>\n <blockquote>\n <p>import sys\n sys.path\n ['', '/home/nikita/caffe/python', '/home/nikita', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib\n /python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/home/nikita/.local/lib/python2.7/site-\n packages']</p>\n </blockquote>\n </blockquote>\n</blockquote>\n\n<p>If you see that the /home/username /caffe/python path, isn't there, then do</p>\n\n<blockquote>\n <blockquote>\n <blockquote>\n <p>sys.path.append('/Home/username/caffe/python')</p>\n </blockquote>\n </blockquote>\n</blockquote>\n" }, { "AnswerId": "31943616", "CreationDate": "2015-08-11T13:48:57.023", "ParentId": null, "OwnerUserId": "3633250", "Title": null, "Body": "<p>Finally solved. Honestly the issue was in incorrect makefile.config. I needed to be extremely careful in adjusting it to specify all path to anaconda folders - I have incorrectly specified the path to python3.4 libraries.</p>\n\n<p>The point is - when set up caffe with anaconda and facing issues you need to go over makefile.config one more time - you should have misconfigured something</p>\n" } ]
31,730,125
1
<theano>
2015-07-30T17:14:18.183
31,738,101
5,174,743
theano vector matrix product along third dimension
<p>I've got w and X.</p> <p>Assume w is 1xd and X is 3 x 10 x d.</p> <p>I want as a result a matrix of size 3 x 10 where the i-th row is w.dot(X[i].T).</p> <p>Is there a way to do this in theano?</p>
[ { "AnswerId": "31738101", "CreationDate": "2015-07-31T04:29:37.793", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>What you are looking for amounts to this in numpy (where I extend the degenerate dimension (1, d) to (6, 5) in order to be general for matrices. If <code>w</code> is a vector, then the function may write slightly more simply with 1D arrays)</p>\n\n<pre><code>import numpy as np\n\nw = np.arange(6 * 5).reshape(6, 5)\nX = np.arange(3 * 10 * 5).reshape(3, 10, 5)\n\noutput = np.einsum('ij, klj', w, X)\n</code></pre>\n\n<p>Let's check the zeroth output</p>\n\n<pre><code>print w.dot(X[0].T)\nprint output[:, 0] # same output as above\n</code></pre>\n\n<p>We can do the same by reshaping the matrices, which will lead us to a valid Theano expression immediately</p>\n\n<pre><code>output2 = w.dot(X.reshape(-1, 5).T).reshape((w.shape[0],) + X.shape[:2])\nassert (output2 == output).all()\n</code></pre>\n\n<p>Now the Theano expression</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\nww = T.fmatrix()\nXX = T.tensor3()\n\noutput_expr = ww.dot(XX.reshape((-1, XX.shape[-1])).T).reshape((ww.shape[0], XX.shape[0], XX.shape[1]), ndim=3)\n\nf = theano.function([ww, XX], output_expr)\n\nprint f(w.astype('float32'), X.astype('float32'))[:, 0]\n</code></pre>\n" } ]
31,730,892
1
<c><lua><stanford-nlp><torch><lstm>
2015-07-30T17:59:56.707
null
2,827,214
Is this a practical way to resolve 'Not enough memory' from LuaJit with Torch
<p><a href="https://github.com/stanfordnlp/treelstm">StanfordNLP's TreeLSTM</a>, when used with a dataset with > 30K instances, causes LuaJIT to error with "Not Enough Memory." I am resolving this by using <a href="https://github.com/neomantra/lds">LuaJit Data Structures</a>. In order to get the dataset outside of Lua's heap, the trees need to be placed in an LDS.Vector.</p> <p>Since the LDS.Vector holds cdata, the first step was to make the Tree type into a cdata object:</p> <pre></pre> <p>There are also small changes that need to be made in read_data.lua to handle the new cdata CTree type. Using LDS seemed like a reasonable approach to solve the memory limit so far; however, the CTree requires a field named 'composer'.</p> <p>Composer is of the type nn.gModule. To continue with this solution would involve creating a typedef of the nn.gModule as cdata, including creating a typedef for its members. Before continuing, does this seem like the correct direction to follow? Does anyone have experience with this problem?</p>
[ { "AnswerId": "31780091", "CreationDate": "2015-08-03T05:37:59.833", "ParentId": null, "OwnerUserId": "1316615", "Title": null, "Body": "<p>As you've discovered, representing structured data in a LuaJIT heap-friendly manner is a bit of a pain at the moment.</p>\n\n<p>In the Tree-LSTM implementation, the tree tables each hold a pointer to a composer instance mainly for expediency in implementation.</p>\n\n<p>One workaround to avoid typedef-ing <code>nn.gModule</code> would be to use the existing <code>idx</code> field to index into a table of composer instances. In this approach, the pair (<code>sentence_idx</code>, <code>node_idx</code>) can be used uniquely identify a composer in a global two-level table of composer instances. To avoid memory issues, the current cleanup code can be replaced with a line that sets the corresponding index in the table to <code>nil</code>.</p>\n" } ]
31,731,711
0
<theano>
2015-07-30T18:45:46.310
null
5,174,743
theano: gradient where cost is imag(x)
<p>If I have a cost that is the imaginary part of a complex number, trying to obtain the gradient with Theano I get the following error:</p> <p></p> <p>Is it not possible to use the imaginary part as cost despite it being a real-valued cost?</p> <hr> <p>Edit. Minimal working example:</p> <pre></pre> <p>I would expect this to work, as T.imag(a) is a real scalar cost.</p>
[]
31,733,166
2
<python><numpy><theano>
2015-07-30T20:13:13.477
31,747,183
1,150,636
Overlapping iteration over theano tensor
<p>I am trying to implement a scan loop in Theano which, given a tensor, will use a "moving slice" of the input. It doesn't have to actually be a moving slice; it can be a preprocessed tensor to another tensor that represents the moving slice.</p> <p>Essentially:</p> <pre></pre> <p>where each moving slice is the input for each iteration.</p> <p>I am trying to figure out the most efficient way to do it, maybe using some form of referencing or manipulating strides, but I haven't managed to get something to work even for pure numpy.</p> <p>One possible solution I found can be found <a href="https://stackoverflow.com/questions/2485669/consecutive-overlapping-subsets-of-array-numpy-python/2487551#2487551">here</a>, but I can't figure out how to use strides and I don't see a way to use that with theano.</p>
[ { "AnswerId": "31747183", "CreationDate": "2015-07-31T13:19:09.617", "ParentId": null, "OwnerUserId": "5116849", "Title": null, "Body": "<p>You can build a vector containing the starting index for the slice at each timestep and call Scan with that vector as a sequence and your original vector as a non-sequence. Then, inside Scan, you can obtain the slice you want at every iteration.</p>\n\n<p>I included an example in which I also made the size of the slices a symbolic input, in case you want to change it from one call of your Theano function to the next:</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\n# Input variables\nx = T.vector(\"x\")\nslice_size = T.iscalar(\"slice_size\")\n\n\ndef step(idx, vect, length):\n\n # From the idx of the start of the slice, the vector and the length of\n # the slice, obtain the desired slice.\n my_slice = vect[idx:idx + length]\n\n # Do something with the slice here. I don't know what you want to do\n # to I'll just return the slice itself.\n output = my_slice\n\n return output\n\n# Make a vector containing the start idx of every slice\nslice_start_indices = T.arange(x.shape[0] - slice_size + 1)\n\nout, updates = theano.scan(fn=step,\n sequences=[slice_start_indices],\n non_sequences=[x, slice_size])\n\nfct = theano.function([x, slice_size], out)\n</code></pre>\n\n<p>Running the function with your parameters produces the output :</p>\n\n<pre><code>print fct(range(17), 5)\n\n[[ 0. 1. 2. 3. 4.]\n [ 1. 2. 3. 4. 5.]\n [ 2. 3. 4. 5. 6.]\n [ 3. 4. 5. 6. 7.]\n [ 4. 5. 6. 7. 8.]\n [ 5. 6. 7. 8. 9.]\n [ 6. 7. 8. 9. 10.]\n [ 7. 8. 9. 10. 11.]\n [ 8. 9. 10. 11. 12.]\n [ 9. 10. 11. 12. 13.]\n [ 10. 11. 12. 13. 14.]\n [ 11. 12. 13. 14. 15.]\n [ 12. 13. 14. 15. 
16.]]\n</code></pre>\n" }, { "AnswerId": "31733984", "CreationDate": "2015-07-30T21:02:23.427", "ParentId": null, "OwnerUserId": "190597", "Title": null, "Body": "<p>You could use <a href=\"https://stackoverflow.com/a/4947453/190597\">this rolling_window recipe</a>:</p>\n\n<pre><code>import numpy as np\n\ndef rolling_window_lastaxis(arr, winshape):\n \"\"\"\n Directly taken from Erik Rigtorp's post to numpy-discussion.\n http://www.mail-archive.com/numpy-discussion@scipy.org/msg29450.html\n (Erik Rigtorp, 2010-12-31)\n\n See also:\n http://mentat.za.net/numpy/numpy_advanced_slides/ (Stéfan van der Walt, 2008-08)\n https://stackoverflow.com/a/21059308/190597 (Warren Weckesser, 2011-01-11)\n https://stackoverflow.com/a/4924433/190597 (Joe Kington, 2011-02-07)\n https://stackoverflow.com/a/4947453/190597 (Joe Kington, 2011-02-09)\n \"\"\"\n if winshape &lt; 1:\n raise ValueError(\"winshape must be at least 1.\")\n if winshape &gt; arr.shape[-1]:\n print(winshape, arr.shape)\n raise ValueError(\"winshape is too long.\")\n shape = arr.shape[:-1] + (arr.shape[-1] - winshape + 1, winshape)\n strides = arr.strides + (arr.strides[-1], )\n return np.lib.stride_tricks.as_strided(arr, shape=shape, strides=strides)\n\nx = np.arange(17)\nprint(rolling_window_lastaxis(x, 5))\n</code></pre>\n\n<p>which prints</p>\n\n<pre><code>[[ 0 1 2 3 4]\n [ 1 2 3 4 5]\n [ 2 3 4 5 6]\n [ 3 4 5 6 7]\n [ 4 5 6 7 8]\n [ 5 6 7 8 9]\n [ 6 7 8 9 10]\n [ 7 8 9 10 11]\n [ 8 9 10 11 12]\n [ 9 10 11 12 13]\n [10 11 12 13 14]\n [11 12 13 14 15]\n [12 13 14 15 16]]\n</code></pre>\n\n<p>Note that there are even fancier extensions of this, such as <a href=\"https://stackoverflow.com/a/4947453/190597\">Joe Kington's rolling_window</a> which can roll over multi-dimensional windows, and <a href=\"https://gist.github.com/seberg/3866040\" rel=\"nofollow noreferrer\">Sebastian Berg's implementation</a> which, in addition, can jump by steps.</p>\n" } ]
31,736,583
1
<python><numpy><theano>
2015-07-31T01:09:51.937
null
1,215,364
Theano Function For Transforming Matrix Into Matrix With Different Dimensions
<p>I have matrices where the diagonal is the negative of the sum of all other elements in that row. Here is an example</p> <pre></pre> <p>I'd like to write a Theano function that takes in these matrices and returns a matrix with the same number of rows, one less column, and the diagonal removed. So for Q this would be</p> <pre></pre> <p>I'd also like to do the reverse, i.e., given Q_raw, I'd like to write a Theano function that spits out Q. How can I write these functions in Theano? So far I haven't come up with a solution that takes only the matrix itself as input.</p>
[ { "AnswerId": "31742026", "CreationDate": "2015-07-31T08:54:09.500", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>Here's a couple of methods doing what I think you're asking for. There may be more efficient approaches.</p>\n\n<pre><code>import numpy\nimport theano\nimport theano.tensor as tt\n\n\ndef symbolic_remove_diagonal(x):\n flat_x = x.flatten()\n indexes = tt.arange(flat_x.shape[0], dtype='int64')\n diagonal_modulo = indexes % (x.shape[0] + 1)\n off_diagonal_flat_x = flat_x[tt.neq(diagonal_modulo, 0).nonzero()]\n return off_diagonal_flat_x.reshape((x.shape[0], x.shape[1] - 1))\n\n\ndef symbolic_add_diagonal(x):\n diagonal_values = -x.sum(axis=1)\n flat_x = x.flatten()\n result_length = flat_x.shape[0] + x.shape[0]\n indexes = tt.arange(result_length, dtype='int64')\n diagonal_modulo = indexes % (x.shape[0] + 1)\n result = tt.zeros((result_length,), dtype=x.dtype)\n result = tt.set_subtensor(result[tt.eq(diagonal_modulo, 0).nonzero()], diagonal_values)\n result = tt.set_subtensor(result[tt.neq(diagonal_modulo, 0).nonzero()], flat_x)\n return result.reshape((x.shape[0], x.shape[1] + 1))\n\n\ndef main():\n theano.config.compute_test_value = 'raise'\n x1 = tt.matrix()\n x1.tag.test_value = numpy.array(\n [[-6, 2, 2, 1, 1],\n [1, -4, 0, 1, 2],\n [1, 0, -4, 2, 1],\n [2, 1, 0, -3, 0],\n [1, 1, 1, 1, -4]])\n x2 = tt.matrix()\n x2.tag.test_value = numpy.array(\n [[2, 2, 1, 1],\n [1, 0, 1, 2],\n [1, 0, 2, 1],\n [2, 1, 0, 0],\n [1, 1, 1, 1]])\n remove_diagonal = theano.function(inputs=[x1], outputs=symbolic_remove_diagonal(x1))\n add_diagonal = theano.function(inputs=[x2], outputs=symbolic_add_diagonal(x2))\n x2_prime = remove_diagonal(x1.tag.test_value)\n x1_prime = add_diagonal(x2.tag.test_value)\n print 'Diagonal removed:\\n', x2_prime\n print 'Diagonal added:\\n', x1_prime\n assert numpy.all(x2_prime == x2.tag.test_value)\n assert numpy.all(x1_prime == x1.tag.test_value)\n\n\nmain()\n</code></pre>\n" } ]
31,750,869
1
<theano>
2015-07-31T16:25:52.323
null
2,789,788
Theano ImportError and process Warning when compiling function
<p>I am running Theano with Anaconda on Windows. I have pretty much followed the steps in the comments <a href="https://www.kaggle.com/c/otto-group-product-classification-challenge/forums/t/13973/a-few-tips-to-install-theano-on-windows-64-bits" rel="nofollow">here</a>. I can import theano with no problems:</p> <pre></pre> <p>This works fine. But when I do</p> <pre></pre> <p>I get warnings:</p> <pre></pre> <p>and a long error, which ends with:</p> <pre></pre> <p>Any ideas on how to fix this?</p>
[ { "AnswerId": "31934867", "CreationDate": "2015-08-11T06:41:13.360", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Theano on Windows need Theano development version and not last Theano version:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\" rel=\"nofollow\">http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions</a></p>\n\n<p>So just update Theano and it should work.</p>\n" } ]
31,757,952
1
<python><if-statement><theano>
2015-08-01T03:39:33.883
31,759,485
5,179,831
something wrong about T.gt()
<p>In order to understand the usage of T.gt(), I wrote some toy code.</p> <pre></pre> <p>I expect the returned value to be -4 when a=-4 and -4 when a=4, but it always satisfies the condition and runs the return -data branch. I don't know why. Can you help me?</p>
[ { "AnswerId": "31759485", "CreationDate": "2015-08-01T07:36:52.927", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p><code>T.gt</code> is a <em>symbolic</em> function; it doesn't return a Boolean value, instead it returns an object representing a symbolic expression that, when later compiled and executed, will evaluate to a Boolean value.</p>\n\n<p>So, in Python, <code>T.gt(...)</code> will always be evaluated as <code>True</code> because the result is always non-<code>None</code>.</p>\n\n<p>If you want a conditional expression in Theano then you need to use a symbolic conditional operation. There are two: <code>T.switch</code> and <code>theano.ifelse.ifelse</code>. The difference is that <code>T.switch</code> is an element-wise operation, accepting a tensor condition, while <code>ifelse</code> requires a scalar condition.</p>\n\n<p>There is another problem with your example. Even if the code was good, it would always return a negative value. In essence your example says, <em>if input is positive return its negative else return the input as-is (which is already negative)</em>. I would also recommend using <code>theano.function</code> over the <code>eval</code> function.</p>\n\n<p>Your example could be altered to illustrate the workings of <code>ifelse</code> like this:</p>\n\n<pre><code>import theano\nimport theano.ifelse\nimport theano.tensor as T\n\n\ndef symbolic_f(x):\n return theano.ifelse.ifelse(T.gt(x, 0), -x - 1, x + 1)\n\n\ndef main():\n x = T.scalar()\n f = theano.function(inputs=[x], outputs=symbolic_f(x))\n\n print f(-4)\n print f(4)\n\n\nmain()\n</code></pre>\n" } ]
31,763,838
1
<python><numpy><theano><tf-idf>
2015-08-01T16:37:35.623
31,763,987
1,196,752
Theano GPU calculation slower than numpy
<p>I'm learning to use Theano. I want to populate a term-document matrix (a numpy sparse matrix) by calculating binary TF-IDF for each element inside it:</p> <pre></pre> <p>But the numpy version is much faster than the Theano implementation:</p> <pre></pre> <p>I've read that this can be due to overhead, which for small operations might kill the performance.</p> <p>Is my code bad, or should I avoid using the GPU because of the overhead?</p>
[ { "AnswerId": "31763987", "CreationDate": "2015-08-01T16:52:50.510", "ParentId": null, "OwnerUserId": "1428739", "Title": null, "Body": "<p>The thing is that you are compiling your Theano function every time. The compilation takes time. Try passing the compiled function like this:</p>\n\n<pre><code>def tfidf_gpu(appearance_in_documents,num_documents,document_words,TFIDF):\n start = perf_counter()\n ret = TFIDF(num_documents,appearance_in_documents,document_words)\n end = perf_counter()\n print(\"\\nTFIDF_GPU \",end-start,\" secs.\")\n return ret\n\nAPP = T.scalar('APP',dtype='int32')\nN = T.scalar('N',dtype='int32')\nSF = T.scalar('S',dtype='int32')\nF = (T.log(N)-T.log(APP)) / SF\nTFIDF = theano.function([N,APP,SF],F)\n\ntfidf_gpu(appearance_in_documents,num_documents,document_words,TFIDF)\n</code></pre>\n\n<p>Also your TFIDF task is a bandwidth intensive task. Theano, and GPU in general, is best for computation intensive tasks. </p>\n\n<p>The current task will considerable overhead taking the data to the GPU and back because in the end you will need to read each element O(1) times. But if you want to do more computation it makes sense to use the GPU. </p>\n" } ]
31,768,116
1
<python><theano><pycuda>
2015-08-02T02:47:25.347
31,771,043
1,103,966
Using a shared variable in a function
<p>Hi, I'm following a neural net tutorial where the author seems to be using shared variables everywhere. From my understanding, a shared variable in Theano is simply a space in memory that can be shared by the GPU and CPU heap. Anyway, I have two matrices which I declare as shared variables, and I want to perform some operation on them using a function. (Question 1) I'd love it if someone could explain why theano.function is useful vs a regular def function. Anyway, I'm setting up my definition like such:</p> <pre></pre> <p>The problem is that I have no idea how to use the shared variables in a regular function-output call. I understand that I can do updates via function([],..updates=(shared_var_1, update_function)). But how do I access them in my regular function?</p>
[ { "AnswerId": "31771043", "CreationDate": "2015-08-02T10:43:21.923", "ParentId": null, "OwnerUserId": "1196752", "Title": null, "Body": "<p>Theano beginner here, so I'm not sure that my answer will cover all the technical aspects.</p>\n\n<p>Answering your <strong>first question</strong>: you need to declare theano function instead of def function, because theano is like a \"language\" inside python and invoking <code>theano.function</code> you're compiling some ad-hoc C code performing your task under the hood. This is what makes Theano fast. \nFrom the <a href=\"http://deeplearning.net/software/theano/introduction.html#sneak-peek\" rel=\"nofollow\">documentation</a>:</p>\n\n<blockquote>\n <p>It is good to think of <code>theano.function</code> as the interface to a compiler which builds a callable object from a purely symbolic graph. One of Theano’s most important features is that theano.function can optimize a graph and even compile some or all of it into native machine instructions.</p>\n</blockquote>\n\n<p>About your <strong>second quetion</strong>, in order to access what's stored in your shared variable you should use </p>\n\n<pre><code>shared_var.get_value()\n</code></pre>\n\n<p>Check <a href=\"http://deeplearning.net/software/theano/tutorial/examples.html#using-shared-variables\" rel=\"nofollow\">these</a> examples:</p>\n\n<blockquote>\n <p>The value can be accessed and modified by the <code>.get_value()</code> and\n <code>.set_value()</code> methods.</p>\n</blockquote>\n\n<p>This code:</p>\n\n<pre><code>a = np.array([[1,2],[3,4]], dtype=theano.config.floatX)\nx = theano.shared(a)\nprint(x)\n</code></pre>\n\n<p>Will output</p>\n\n<pre><code>&lt;CudaNdarrayType(float32, matrix)&gt;\n</code></pre>\n\n<p>But using <code>get_value()</code>:</p>\n\n<pre><code>print(x.get_value())\n</code></pre>\n\n<p>It outputs</p>\n\n<pre><code>[[ 1. 2.]\n [ 3. 
4.]]\n</code></pre>\n\n<p><strong>Edit:</strong> to use shared variables in functions </p>\n\n<pre><code>import theano\nimport numpy\na = numpy.int64(2)\ny = theano.tensor.scalar('y',dtype='int64')\nz = theano.tensor.scalar('z',dtype='int64')\nx = theano.shared(a)\nplus = y + z\ntheano_sum = theano.function([y,z],plus)\n# Using shared variable in a function\nprint(theano_sum(x.get_value(),3))\n# Changing shared variable value using a function\nx.set_value(theano_sum(2,2))\nprint(x.get_value())\n# Update shared variable value\nx.set_value(x.get_value(borrow=True)+1)\nprint(x.get_value())\n</code></pre>\n\n<p>Will output: </p>\n\n<pre><code>5\n4\n5\n</code></pre>\n" } ]
31,771,835
1
<theano>
2015-08-02T12:20:47.030
31,782,914
5,182,504
Plotting Hidden Weights
<p>I've had an interest in neural networks for a while now and have just started following the deep learning tutorials. I have what I hope is a relatively straightforward question that I am hoping someone may answer.</p> <p>In the multilayer perceptron tutorial, I am interested in seeing the state of the network at different layers (something similar to what is seen in this paper: <a href="http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/247" rel="nofollow">http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/247</a> ). For instance, I am able to write out the weights of the hidden layer using:</p> <pre></pre> <p>When I plot this using the utils.py tile plotting, I get the following pretty plot [edit: pretty plot removed as I don't have enough rep].</p> <p>If I wanted to plot the weights at the logRegressionLayer, such that</p> <pre></pre> <p>what would I actually have to do? The above doesn't seem to work - it returns a 2d array of shape (500, 10). I understand that the 500 relates to the number of hidden units. The paragraph on the Miscellaneous page:</p> <blockquote> <p>Plotting the weights is a bit more tricky. We have n_hidden hidden units, each of them corresponding to a column of the weight matrix. A column has the same shape as the visible, where the weight corresponding to the connection with visible unit j is at position j. Therefore, if we reshape every such column, using numpy.reshape, we get a filter image that tells us how this hidden unit is influenced by the input image.</p> </blockquote> <p>confuses me a little. I am unsure exactly how I would string it together.</p> <p>Thanks to all - sorry if the question is confusing!</p>
[ { "AnswerId": "31782914", "CreationDate": "2015-08-03T08:40:36.443", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>You could plot them just the like the weights in the first layer but they will not necessarily make much sense.</p>\n\n<p>Consider the weights in the first layer of a neural network. If the inputs have size 784 (e.g. MNIST images) and there are 2000 hidden units in the first layer then the first layer weights are a matrix of size 784x2000 (or maybe the transpose depending on how it's implemented). Those weights can be plotted as either 784 patches of size 2000 or, more usually, 2000 patches of size 784. In this latter case each patch can be plotted as a 28x28 image which directly ties back to the original inputs and thus is interpretable.</p>\n\n<p>For you higher level regression layer, you could plot 10 tiles, each of size 500 (e.g. patches of size 22x23 with some padding to make it rectangular), or 500 patches of size 10. Either might illustrate some patterns that are being found but it may be difficult to tie those patterns back to the original inputs.</p>\n" } ]
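The reshaping described in this answer can be sketched in plain NumPy. This is a hedged illustration (shapes follow the answer's 784x2000 example; it is not the tutorial's tile_raster_images utility): each column of a weight matrix is one unit's filter, and reshaping it to the input image shape makes it plottable:

```python
import numpy as np

# hypothetical first-layer weights: 784 visible units x 2000 hidden units
rng = np.random.RandomState(0)
W = rng.randn(784, 2000)

# each column is one hidden unit's filter over the 28x28 input image;
# transposing gives 2000 patches that can be tiled and plotted
filters = W.T.reshape(2000, 28, 28)
print(filters.shape)  # (2000, 28, 28)

# for the 500x10 logistic-regression weights from the question, the
# natural view is 10 patches of length 500 (one per output class),
# which no longer map back to the input image directly
W_logreg = rng.randn(500, 10)
class_patterns = W_logreg.T  # shape (10, 500)
print(class_patterns.shape)
```

As the answer notes, the higher-layer patches can be plotted the same way (e.g. padded to a 22x23 rectangle), but they are harder to interpret than first-layer filters.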
31,772,938
1
<python><optimization><machine-learning><theano>
2015-08-02T14:35:54.803
null
1,936,768
Theano cost function
<p>I am trying to learn how to use Theano. I work very frequently with survival analysis and therefore wanted to try to implement a standard survival model using Theano's automatic differentiation and gradient descent. The model that I am trying to implement is called the Cox model, and here is the Wikipedia article: <a href="https://en.wikipedia.org/wiki/Proportional_hazards_model" rel="nofollow">https://en.wikipedia.org/wiki/Proportional_hazards_model</a></p> <p>Very helpfully, they have written out the partial likelihood function, which is what is maximized when estimating the parameters of a Cox model. I am quite new to Theano and as a result am having difficulties implementing this cost function, so I am looking for some guidance.</p> <p>Here is the code I have written so far. My dataset has 137 records, hence the reason I hard-coded that value. T refers to the tensor module, W refers to what the Wikipedia article calls beta, and status is what Wikipedia calls C. The remaining variables are identical to Wikipedia's notation.</p> <pre></pre> <p>Unfortunately, when I run this code, I am unhappily met with an infinite recursion error, which I hoped would not happen. This makes me think that I have not implemented this cost function in the way that Theano would like, and so I am hoping to get some guidance on how to improve this code so that it works.</p>
[ { "AnswerId": "31782430", "CreationDate": "2015-08-03T08:13:41.483", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>You are mixing symbolic and non-symbolic operations but this doesn't work.</p>\n\n<p>For example, <code>T.eq</code> returns a non-executable <em>symbolic</em> expression representing the idea of comparing two things for equality but it doesn't actually do the comparison there and then. <code>T.eq</code> actually returns a Python object that represents the equality comparison and since a non-<code>None</code> object reference is considered the same as <code>True</code> in Python, the execution will always continue inside the if statement.</p>\n\n<p>If you need to construct a Theano computation involving conditionals then you need to use one of its two symbolic conditional operations: <code>T.switch</code> or <code>theano.ifelse.ifelse</code>. <a href=\"http://deeplearning.net/software/theano/tutorial/conditions.html\" rel=\"nofollow\">See the documentation for examples and details</a>.</p>\n\n<p>You are also using Python loops which is probably not what you need. To construct a Theano computation that explicitly loops you need to use the <a href=\"http://deeplearning.net/software/theano/library/scan.html\" rel=\"nofollow\"><code>theano.scan</code> module</a>. However, if you can express your computation in terms of matrix operations (dot products, reductions, etc.) then it will run much, much, faster than something using scans.</p>\n\n<p>I suggest you work through some more <a href=\"http://deeplearning.net/software/theano/tutorial/\" rel=\"nofollow\">Theano tutorials</a> before trying to implement something complex from scratch.</p>\n" } ]
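Since the answer recommends matrix operations over Python loops and if-statements, here is a hedged NumPy sketch of the Cox partial log-likelihood from the linked Wikipedia page, vectorized and ignoring tied event times (function and variable names are my own; a Theano version would swap np for T and use symbolic ops such as T.switch throughout):

```python
import numpy as np

def cox_partial_log_lik(X, times, status, beta):
    """Cox partial log-likelihood, ignoring tied event times.

    X      : (n, p) covariate matrix
    times  : (n,) observed times
    status : (n,) event indicator (1 = event, 0 = censored; Wikipedia's C)
    beta   : (p,) coefficient vector (Wikipedia's beta)
    """
    eta = X.dot(beta)  # linear predictor X @ beta
    # at_risk[i, j] = 1 if subject j is still at risk at times[i]
    at_risk = (times[None, :] >= times[:, None]).astype(float)
    # log of the risk-set sums: log sum_{j: T_j >= T_i} exp(eta_j)
    log_risk = np.log(at_risk.dot(np.exp(eta)))
    # only uncensored subjects (events) contribute a term
    return float(np.sum(status * (eta - log_risk)))

# with beta = 0 every exp(eta_i) = 1, so each event contributes
# -log(size of its risk set)
X = np.zeros((3, 1))
ll = cox_partial_log_lik(X, times=np.array([1., 2., 3.]),
                         status=np.array([1., 1., 0.]),
                         beta=np.zeros(1))
print(ll)  # -log(3) - log(2) = -log(6), about -1.7918
```

The same expression graph, built with Theano tensors instead of NumPy arrays, gives a cost whose gradient with respect to beta Theano can derive automatically, with no 137-record loop needed.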
31,773,204
1
<python><neural-network><theano><lasagne>
2015-08-02T15:05:37.077
31,774,280
1,020,302
Add bias to Lasagne neural network layers
<p>I am wondering if there is a way to add a bias node to each layer in the Lasagne neural network toolkit? I have been trying to find related information in the documentation.</p> <p>This is the network I built, but I don't know how to add a bias node to each layer.</p> <pre></pre>
[ { "AnswerId": "31774280", "CreationDate": "2015-08-02T16:57:33.143", "ParentId": null, "OwnerUserId": "4592059", "Title": null, "Body": "<p>Actually you don't have to explicitly create biases, because <code>DenseLayer()</code>, and convolution base layers too, has a default keyword argument: </p>\n\n<p><code>b=lasagne.init.Constant(0.)</code>.</p>\n\n<p>Thus you can avoid creating <code>bias</code>, if you don't want to have with explicitly pass <code>bias=None</code>, but this is not that case.</p>\n\n<p>Thus in brief you do have bias parameters while you don't pass <code>None</code> to <code>bias</code> parameter e.g.:</p>\n\n<pre><code>hidden = Denselayer(...bias=None)\n</code></pre>\n" } ]
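What the implicitly created bias contributes can be shown with a tiny NumPy sketch of a generic affine layer (an illustration, not Lasagne's internals): a dense layer computes nonlinearity(x.dot(W) + b), and passing b=None drops the bias just as in Lasagne:

```python
import numpy as np

def dense_forward(x, W, b=None):
    """Forward pass of a dense layer: x.dot(W) + b.

    The bias is optional, mirroring Lasagne's b=None switch; when
    present it shifts every output unit by a learned constant.
    """
    z = x.dot(W)
    if b is not None:
        z = z + b
    return z

x = np.ones((2, 3))
W = np.full((3, 4), 0.5)
b = np.zeros(4)  # Lasagne's default init is Constant(0.)

# with a zero-initialized bias the two calls agree; training then
# updates b away from zero along with W
print(np.allclose(dense_forward(x, W, b), dense_forward(x, W)))  # True
```

So no explicit bias node is needed in the network definition above: each DenseLayer already carries its own b parameter unless it is switched off.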
31,774,953
4
<neural-network><regression><deep-learning><caffe>
2015-08-02T18:05:20.857
31,808,324
3,698,878
Test labels for regression caffe, float not allowed?
<p>I am doing regression using caffe, and my train.txt and test.txt files are like this:</p> <pre></pre> <p>My problem is that caffe does not seem to allow float labels like 2.0; when I use float labels while reading, for example, the train.txt file, caffe only recognizes</p> <blockquote> <p>a total of 1 images</p> </blockquote> <p>which is wrong.</p> <p>But when I, for example, change the 2.0 to 2 in the file (and the following lines likewise), caffe now gives</p> <blockquote> <p>a total of 2 images</p> </blockquote> <p>implying that the float labels are responsible for the problem.</p> <p>Can anyone help me solve this problem? I definitely need to use float labels for regression, so does anyone know about a workaround or solution for this? Thanks in advance.</p> <p><b>EDIT</b> For anyone facing a similar issue <a href="https://stackoverflow.com/questions/31617486/use-caffe-to-train-lenet-with-csv-data">use caffe to train Lenet with CSV data</a> might be of help. Thanks to @Shai.</p>
[ { "AnswerId": "37291510", "CreationDate": "2016-05-18T06:06:02.337", "ParentId": null, "OwnerUserId": "6281477", "Title": null, "Body": "<p>Besides <a href=\"https://stackoverflow.com/a/31808324/6281477\">@Shai's answer</a> above, I wrote a <strong><a href=\"https://github.com/DaleSong89/caffe-batch-normalization\" rel=\"nofollow noreferrer\">MultiTaskData</a></strong> layer supporting <code>float</code> typed labels. </p>\n\n<p>Its main idea is to store the labels in <code>float_data</code> field of <code>Datum</code>, and the <code>MultiTaskDataLayer</code> will parse them as labels for any number of tasks according to the value of <code>task_num</code> and <code>label_dimension</code> set in <code>net.prototxt</code>. The related files include: <code>caffe.proto</code>, <code>multitask_data_layer.hpp/cpp</code>, <code>io.hpp/cpp</code>.</p>\n\n<p>You can easily add this layer to your own caffe and use it like this (this is an example for face expression label distribution learning task in which the \"exp_label\" can be float typed vectors such as [0.1, 0.1, 0.5, 0.2, 0.1] representing face expressions(5 class)'s probability distribution.):</p>\n\n<pre><code> name: \"xxxNet\"\n layer {\n name: \"xxx\"\n type: \"MultiTaskData\"\n top: \"data\"\n top: \"exp_label\"\n data_param { \n source: \"expression_ld_train_leveldb\" \n batch_size: 60 \n task_num: 1\n label_dimension: 8\n }\n transform_param {\n scale: 0.00390625\n crop_size: 60\n mirror: true\n }\n include:{ phase: TRAIN }\n }\n layer { \n name: \"exp_prob\" \n type: \"InnerProduct\"\n bottom: \"data\" \n top: \"exp_prob\" \n param {\n lr_mult: 1\n decay_mult: 1\n }\n param {\n lr_mult: 2\n decay_mult: 0\n }\n inner_product_param {\n num_output: 8\n weight_filler {\n type: \"xavier\"\n } \n bias_filler { \n type: \"constant\"\n } \n }\n }\n layer { \n name: \"exp_loss\" \n type: \"EuclideanLoss\" \n bottom: \"exp_prob\" \n bottom: \"exp_label\"\n top: \"exp_loss\"\n include:{ phase: TRAIN }\n 
}\n</code></pre>\n" }, { "AnswerId": "36975172", "CreationDate": "2016-05-02T04:48:57.803", "ParentId": null, "OwnerUserId": "134077", "Title": null, "Body": "<p>I ended up transposing, switching the channel order, and using unsigned ints rather than floats to get results. I suggest reading an image back from your HDF5 file to make sure it displays correctly.</p>\n\n<p>First read the image as unsigned ints:</p>\n\n<p><code>img = np.array(Image.open('images/' + image_name))</code></p>\n\n<p>Then change the channel order from RGB to BGR:</p>\n\n<p><code>img = img[:, :, ::-1]</code></p>\n\n<p>Finally, switch from Height x Width x Channels to Channels x Height x Width:</p>\n\n<p><code>img = img.transpose((2, 0, 1))</code></p>\n\n<p>Merely changing the shape will scramble your image and ruin your data!</p>\n\n<p>To read back the image:</p>\n\n<pre><code>with h5py.File(h5_filename, 'r') as hf:\n images_test = hf.get('images')\n targets_test = hf.get('targets')\n for i, img in enumerate(images_test):\n print(targets_test[i])\n from skimage.viewer import ImageViewer\n viewer = ImageViewer(img.reshape(SIZE, SIZE, 3))\n viewer.show()\n</code></pre>\n\n<p>Here's a script I wrote which deals with two labels (steer and speed) for a self-driving car task: <a href=\"https://gist.github.com/crizCraig/aa46105d34349543582b177ae79f32f0\" rel=\"nofollow\">https://gist.github.com/crizCraig/aa46105d34349543582b177ae79f32f0</a></p>\n" }, { "AnswerId": "31808324", "CreationDate": "2015-08-04T11:39:53.370", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>When using the image dataset input layer (with either <code>lmdb</code> or <code>leveldb</code> backend) caffe only supports one <strong>integer</strong> label per input image.</p>\n\n<p>If you want to do regression, and use floating point labels, you should try and use the HDF5 data layer. 
See for example <a href=\"https://stackoverflow.com/q/31617486/1714410\">this question</a>.</p>\n\n<p>In python you can use <code>h5py</code> package to create hdf5 files.</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>import h5py, os\nimport caffe\nimport numpy as np\n\nSIZE = 224 # fixed size to all images\nwith open( 'train.txt', 'r' ) as T :\n lines = T.readlines()\n# If you do not have enough memory split data into\n# multiple batches and generate multiple separate h5 files\nX = np.zeros( (len(lines), 3, SIZE, SIZE), dtype='f4' ) \ny = np.zeros( (len(lines),1), dtype='f4' )\nfor i,l in enumerate(lines):\n sp = l.split(' ')\n img = caffe.io.load_image( sp[0] )\n img = caffe.io.resize( img, (SIZE, SIZE, 3) ) # resize to fixed size\n # you may apply other input transformations here...\n # Note that the transformation should take img from size-by-size-by-3 and transpose it to 3-by-size-by-size\n # for example:\n transposed_img = img.transpose((2,0,1))[::-1,:,:] # RGB-&gt;BGR\n X[i] = transposed_img\n y[i] = float(sp[1])\nwith h5py.File('train.h5','w') as H:\n H.create_dataset( 'X', data=X ) # note the name X given to the dataset!\n H.create_dataset( 'y', data=y ) # note the name y given to the dataset!\nwith open('train_h5_list.txt','w') as L:\n L.write( 'train.h5' ) # list all h5 files you are going to use\n</code></pre>\n\n<p>Once you have all <code>h5</code> files and the corresponding test files listing them you can add an HDF5 input layer to your <code>train_val.prototxt</code>:</p>\n\n<pre><code> layer {\n type: \"HDF5Data\"\n top: \"X\" # same name as given in create_dataset!\n top: \"y\"\n hdf5_data_param {\n source: \"train_h5_list.txt\" # do not give the h5 files directly, but the list.\n batch_size: 32\n }\n include { phase:TRAIN }\n }\n</code></pre>\n\n<hr>\n\n<p><strong>Clarification</strong>:<br>\nWhen I say \"caffe only supports one integer label per input image\" I do not mean that the leveldb/lmdb containers are limited, I meant the 
tools of caffe, specifically the <a href=\"https://stackoverflow.com/a/31431716/1714410\"><code>convert_imageset</code></a> tool.<br>\nAt closer inspection, it seems like caffe stores data of type <code>Datum</code> in leveldb/lmdb and the \"label\" property of this type is defined as integer (see <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto#L30\" rel=\"nofollow noreferrer\">caffe.proto</a>), thus when using the caffe interface to leveldb/lmdb you are restricted to a single int32 label per image.</p>\n" }, { "AnswerId": "32698722", "CreationDate": "2015-09-21T15:12:13.793", "ParentId": null, "OwnerUserId": "2466336", "Title": null, "Body": "<p><a href=\"https://stackoverflow.com/a/31808324/2466336\">Shai's answer</a> already covers saving float labels to HDF5 format. In case LMDB is required/preferred, here's a snippet on how to create an LMDB from float data (adapted from <a href=\"https://github.com/BVLC/caffe/issues/1698#issuecomment-70211045\" rel=\"nofollow noreferrer\">this</a> github comment):</p>\n\n<pre><code>import lmdb\nimport caffe\nimport numpy as np\ndef scalars_to_lmdb(scalars, path_dst):\n\n db = lmdb.open(path_dst, map_size=int(1e12))\n\n with db.begin(write=True) as in_txn: \n for idx, x in enumerate(scalars): \n content_field = np.array([x])\n # get shape (1,1,1)\n content_field = np.expand_dims(content_field, axis=0)\n content_field = np.expand_dims(content_field, axis=0)\n content_field = content_field.astype(float)\n\n dat = caffe.io.array_to_datum(content_field)\n in_txn.put('{:0&gt;10d}'.format(idx), dat.SerializeToString())\n db.close()\n</code></pre>\n" } ]
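The two answers above hinge on the dtype of the label array: HDF5 carries float32 labels untouched, while the `Datum` "label" field is int32. A numpy-only sketch of that difference (no caffe or h5py calls, so the file-writing parts are omitted; the label values are made up):

```python
import numpy as np

# Float regression labels, as in the question (e.g. 2.0, 2.5, ...).
labels = [2.0, 2.5, 3.7]

# The HDF5 route stores labels as float32 ('f4'), so fractional values survive.
y = np.zeros((len(labels), 1), dtype='f4')
for i, v in enumerate(labels):
    y[i] = v

# An int32 label field (like the Datum "label" property) truncates them,
# which is why the plain convert_imageset route cannot carry float labels.
y_int = y.astype(np.int32)
```

Writing `y` out with h5py, as in Shai's answer, would then preserve the fractional targets.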
31,776,951
1
<python><theano>
2015-08-02T21:45:46.287
null
1,103,966
How do I set only a single element in a tensor in theano?
<p>Currently I have this code which sets the first element of my tensor output.</p> <pre></pre> <p>Unfortunately, doing:</p> <pre></pre> <p>only outputs the first row, which is not what I want. Is it possible to create a statement that combines the first 2 lines to give me the matrix that I want?</p>
[ { "AnswerId": "31778583", "CreationDate": "2015-08-03T02:23:18.497", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>You most probably want to replace</p>\n\n<pre><code>output = T.set_subtensor(output[0][0], rotation[0][0])\n</code></pre>\n\n<p>with</p>\n\n<pre><code>output = T.set_subtensor(output[0, 0], rotation[0, 0])\n</code></pre>\n\n<p>if possible.</p>\n" } ]
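The difference between `output[0][0]` and `output[0, 0]` only matters symbolically in Theano, but the intended effect of the tuple-indexed `set_subtensor` — change exactly one element and keep the rest of the matrix — can be sketched with plain numpy (the values here are made up for illustration):

```python
import numpy as np

output = np.zeros((3, 3))
rotation = np.arange(1.0, 10.0).reshape(3, 3)

# Numpy analogue of T.set_subtensor(output[0, 0], rotation[0, 0]):
# copy the matrix, then assign a single scalar element via a tuple index.
result = output.copy()
result[0, 0] = rotation[0, 0]
```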
31,777,135
1
<python><performance><optimization><parallel-processing><theano>
2015-08-02T22:11:55.293
null
1,103,966
How do I set many elements in parallel in theano
<p>Let's say I create a theano function; how do I run operations in parallel element-wise on theano tensors, like on matrices?</p> <pre></pre> <p>The question should be rephrased: how do I do parallel operations in a Theano function? I've looked at <a href="http://deeplearning.net/software/theano/tutorial/multi_cores.html#parallel-element-wise-ops-with-openmp" rel="nofollow">http://deeplearning.net/software/theano/tutorial/multi_cores.html#parallel-element-wise-ops-with-openmp</a>, which only talks about adding a setting, but does not explain how an operation is parallelized for element-wise operations.</p>
[ { "AnswerId": "31782642", "CreationDate": "2015-08-03T08:25:58.837", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>To an extent, Theano expects you to focus more on <em>what</em> you want computed rather than on <em>how</em> you want it computed. The idea is that the Theano optimizing compiler will automatically parallelize as much as possible (either on GPU or on CPU using OpenMP).</p>\n\n<p>The following is an example based on the original post's example. The difference is that the computation is declared symbolically and, crucially, without any loops. Here one is telling Theano that the results should be a stack of tensors where the first tensor is the values in a range modulo the range size and the second tensor is the elements of the same range divided by the range size. We don't say that a loop should occur but clearly at least one will be required. Theano compiles this down to executable code and will parallelize it if it makes sense.</p>\n\n<pre><code>import theano\nimport theano.tensor as tt\n\n\ndef symbolic_range_div_mod(size):\n r = tt.arange(size)\n return tt.stack(r % size, r / size)\n\n\ndef main():\n size = tt.dscalar()\n range_div_mod = theano.function(inputs=[size], outputs=symbolic_range_div_mod(size))\n print range_div_mod(20)\n\n\nmain()\n</code></pre>\n\n<p>You need to be able to specify your computation in terms of Theano operations. If those operations can be parallelized on the GPU, they should be parallelized automatically.</p>\n" } ]
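For reference, the declared computation in the answer's `symbolic_range_div_mod` can be reproduced with plain numpy — no automatic Theano parallelization, but it makes the expected output concrete:

```python
import numpy as np

def range_div_mod(size):
    # Numpy analogue of the symbolic graph in the answer: stack the range
    # modulo size on top of the range integer-divided by size.
    r = np.arange(size)
    return np.stack([r % size, r // size])
```

`range_div_mod(20)` returns a 2x20 array whose first row is 0..19 and whose second row is all zeros, matching what the compiled Theano function would print.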
31,779,920
1
<python><numpy><theano>
2015-08-03T05:23:25.430
null
1,103,966
How should I allocate a numpy array inside theano function?
<p>Let's say I have a theano function:</p> <pre></pre> <ol> <li>How should I allocate tensors/arrays or memory within the actual theano function so that it is optimized when compiled by Theano?</li> <li>How should I convert that allocated tensor/array to a theano-like variable to be returned? I've tried converting it to a shared variable in the function, but that didn't work.</li> </ol>
[ { "AnswerId": "31781266", "CreationDate": "2015-08-03T07:04:55.390", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>I'm sorry but I don't understand your specific questions but can comment on the code sample you provided.</p>\n\n<p>Firstly, your comment above <code>return z</code> is incorrect. If <code>x</code> and <code>y</code> are Theano variables then <code>z</code> will also be a Theano variable after <code>z = x + y</code>.</p>\n\n<p>Secondly, there is no need to pre-allocate memory, using numpy, for return variables. So your <code>my_fun</code> can change to simply</p>\n\n<pre><code>def my_fun(x, y):\n z = x + y\n return z\n</code></pre>\n\n<p>Thirdly, the output(s) of Theano functions need to be Theano variables, not Python functions. And the output needs to be a function of the inputs. So your <code>theano.function</code> call needs to be changed to</p>\n\n<pre><code>f = function(\n inputs=[x, y],\n outputs=[my_fun(x, y)]\n)\n</code></pre>\n\n<p>The most important point to grasp about Theano, which can be a little difficult to get one's head around when starting out, is the difference between the symbolic world and the executable world. Tied in to that is the difference between Python expressions and Theano expressions.</p>\n\n<p>The modified <code>my_fun</code> above could be used like a symbolic function or like a normal executable Python function but it behaves differently for each. If you pass in normal Python inputs then the addition operation occurs immediately and the return value is the result of the computation. So <code>my_fun(1,2)</code> returns <code>3</code>. If instead you pass in symbolic Theano variables then the addition operation does not take place immediately. Instead the function returns a symbolic expression that after later being compiled and executed will return the result of adding two inputs. 
So the result of <code>my_fun(theano.tensor.scalar(), theano.tensor.scalar())</code> is a Python object that represents a symbolic Theano computation graph. When that result is passed as the output to a <code>theano.function</code> it is compiled into something that is executable. Then when the compiled function is executed, and given some concrete values for the inputs, you actually get the result you were looking for.</p>\n" } ]
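The symbolic-versus-executable distinction described in the answer above can be illustrated without Theano at all. The `Var` and `Expr` classes below are invented stand-ins, not Theano API: with plain numbers `my_fun` computes immediately, while with placeholder inputs it builds an expression object that is only evaluated later, loosely mirroring what compiling with `theano.function` does:

```python
class Var:
    """A hypothetical stand-in for a Theano symbolic scalar (not real Theano API)."""
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return Expr(self, other)

class Expr:
    """A deferred addition, evaluated only when concrete values are supplied."""
    def __init__(self, left, right):
        self.left, self.right = left, right
    def evaluate(self, bindings):
        def val(node):
            if isinstance(node, Expr):
                return node.evaluate(bindings)
            if isinstance(node, Var):
                return bindings[node.name]
            return node  # a plain number
        return val(self.left) + val(self.right)

def my_fun(x, y):
    # Immediate arithmetic for numbers; builds an expression for Var inputs.
    return x + y
```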
31,792,416
0
<theano>
2015-08-03T16:33:14.663
null
5,186,172
does theano have something similar to the python dictionary that is iterable with symbolic keys?
<p> Basically, is the following toy problem possible?:</p> <pre class="lang-py prettyprint-override"></pre> <p>Here the problem is that I have inserted a dictionary for the scan updates, which then fails since it doesn't like theano.tensors as keys.</p> <p>The reason I'm curious is that sometimes, based on prior information, I know a dataset contains no knowledge about certain classes, so in these cases I don't want to "try" to learn (when I'm doomed to learn nothing).</p> <h3>Edit: not really an answer, but a workaround. Also see comments by DrBwts below</h3> <pre class="lang-py prettyprint-override"></pre>
[]
31,796,449
1
<python><deep-learning><caffe><conv-neural-network>
2015-08-03T20:44:44.830
null
1,204,685
Same fc6 response for different images
<p>I followed the instructions in the <a href="http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/filter_visualization.ipynb" rel="nofollow">filter visualization</a> and <a href="http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb" rel="nofollow">classification</a> examples to get the (fully connected layer 6) response to multiple different images in a folder from a pretrained model (the BVLC reference model), but for all of the images I get the same vector. Here is the code I used:</p> <pre></pre> <p>PS: Is there any simple way to store this data in a file (like txt or csv) that can be used later and can be read and opened without using Python?</p>
[ { "AnswerId": "31801048", "CreationDate": "2015-08-04T05:04:00.107", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>You are accessing only a <em>single</em> element of the <code>fc6</code> response (the fourth one). It might be the case that this element in the output is degenerate for the kind of inputs you tested it on. Try looking at the entire <code>fc6</code> response.</p>\n\n<p>Moreover, I'm not sure what model you are using, but are you certain this specific model expects its <code>mean</code> argument to be per-channel mean and not per-pixel?</p>\n\n<p>BTW, you are using <code>oversample</code> for your input (the default option in <a href=\"https://github.com/BVLC/caffe/blob/master/python/caffe/classifier.py#L47\" rel=\"nofollow\"><code>caffe.Classifier.predict</code></a>) this means the output you are getting is actually an <em>average</em> of 10 responses to slightly different input image (different cropping+mirroring). You might want to disable this option using</p>\n\n<pre><code>scores = net.predict([input_image], oversample=False)\n</code></pre>\n" } ]
31,796,909
1
<theano><deep-learning><confusion-matrix>
2015-08-03T21:17:31.887
31,798,877
2,073,392
How to add a confusion matrix to Theano examples?
<p>I want to make use of Theano's logistic regression classifier, but I would like to make an apples-to-apples comparison with previous studies I've done to see how deep learning stacks up. I recognize this is probably a fairly simple task if I were more proficient in Theano, but this is what I have so far. From the tutorials on the website, I have the following code:</p> <pre></pre> <p>I'm pretty sure this is where I need to add the functionality, but I'm not certain how to go about it. What I need is either access to y_pred and y for each and every run (to update my confusion matrix in python) or to have the C++ code handle the confusion matrix and return it at some point along the way. I don't think I can do the former, and I'm unsure how to do the latter. I've done some messing around with an update function along the lines of:</p> <pre></pre> <p>But I'm not completely clear on how to get this to interface with the function in question and give me a numpy array I can work with. I'm quite new to Theano, so hopefully this is an easy fix for one of you. I'd like to use this classifier as my output layer in a number of configurations, so I could use the confusion matrix with other architectures.</p>
[ { "AnswerId": "31798877", "CreationDate": "2015-08-04T00:30:09.050", "ParentId": null, "OwnerUserId": "3353215", "Title": null, "Body": "<p>I suggest using a brute force sort of a way. You need an output for a prediction first. Create a function for it.</p>\n\n<pre><code> prediction = theano.function(\n inputs = [index],\n outputs = MLPlayers.predicts,\n givens={\n x: test_set_x[index * batch_size: (index + 1) * batch_size]})\n</code></pre>\n\n<p>In your test loop, gather the predictions... </p>\n\n<pre><code>labels = labels + test_set_y.eval().tolist() \nfor mini_batch in xrange(n_test_batches):\n wrong = wrong + int(test_model(mini_batch)) \n predictions = predictions + prediction(mini_batch).tolist()\n</code></pre>\n\n<p>Now create confusion matrix this way:</p>\n\n<pre><code> correct = 0\n confusion = numpy.zeros((outs,outs), dtype = int)\n for index in xrange(len(predictions)):\n if labels[index] is predictions[index]:\n correct = correct + 1\n confusion[int(predictions[index]),int(labels[index])] = confusion[int(predictions[index]),int(labels[index])] + 1\n</code></pre>\n\n<p>You can find this kind of an implementation <a href=\"http://ragavvenkatesan.github.io/Convolutional-Neural-Networks/\" rel=\"nofollow\">in this repository</a>.</p>\n" } ]
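Once the labels and predictions have been gathered as plain lists, as in the test loop above, building the confusion matrix itself is ordinary numpy and needs no Theano. A compact sketch of the same counting loop:

```python
import numpy as np

def confusion_matrix(labels, predictions, n_classes):
    # Rows are predicted classes, columns are true labels,
    # matching the indexing used in the answer above.
    confusion = np.zeros((n_classes, n_classes), dtype=int)
    for pred, true in zip(predictions, labels):
        confusion[int(pred), int(true)] += 1
    return confusion
```

The diagonal then counts correct predictions, so accuracy is `np.trace(cm) / cm.sum()`.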
31,798,815
1
<reshape><theano>
2015-08-04T00:23:23.953
31,804,637
3,353,215
Theano Reshaping
<p>I am unable to clearly comprehend 's . I have an image matrix of shape: </p> <pre></pre> <p>, where there are stacks of images, each having of channels. I now want to convert them into the following shape:</p> <pre></pre> <p>such that all the stacks will be combined together into one stack of all channels. I am not sure if reshape will do this for me. I see that reshape seems to not lexicographically order the pixels if they are mixed in dimensions in the middle. I have been trying to achieve this with a combination of , and , but to no avail. I would appreciate some help.</p> <p>Thanks. </p>
[ { "AnswerId": "31804637", "CreationDate": "2015-08-04T08:46:16.863", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p><a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.reshape\" rel=\"noreferrer\">Theano reshape</a> works just like <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html\" rel=\"noreferrer\">numpy reshape</a> with its default <code>order</code>, i.e. <code>'C'</code>:</p>\n\n<blockquote>\n <p>‘C’ means to read / write the elements using C-like index order, with\n the last axis index changing fastest, back to the first axis index\n changing slowest.</p>\n</blockquote>\n\n<p>Here's an example showing that the image pixels remain in the same order after a reshape via either numpy or Theano.</p>\n\n<pre><code>import numpy\nimport theano\nimport theano.tensor\n\n\ndef main():\n batch_size = 2\n stack1_size = 3\n stack2_size = 4\n height = 5\n width = 6\n data = numpy.arange(batch_size * stack1_size * stack2_size * height * width).reshape(\n (batch_size, stack1_size, stack2_size, height, width))\n reshaped_data = data.reshape([batch_size, stack1_size * stack2_size, 1, height, width])\n print data[0, 0, 0]\n print reshaped_data[0, 0, 0]\n\n x = theano.tensor.TensorType('int64', (False,) * 5)()\n reshaped_x = x.reshape((x.shape[0], x.shape[1] * x.shape[2], 1, x.shape[3], x.shape[4]))\n f = theano.function(inputs=[x], outputs=reshaped_x)\n print f(data)[0, 0, 0]\n\n\nmain()\n</code></pre>\n" } ]
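The same check can be run with numpy alone, since Theano's reshape follows numpy's C order: merging the two stack axes leaves each channel's pixels untouched, and element `(i1, i2)` of the old stack axes lands at index `i1 * stack2_size + i2` of the merged axis:

```python
import numpy as np

batch, s1, s2, h, w = 2, 3, 4, 5, 6
data = np.arange(batch * s1 * s2 * h * w).reshape((batch, s1, s2, h, w))

# C-order reshape merges the two stack axes without reordering pixels.
reshaped = data.reshape((batch, s1 * s2, 1, h, w))
```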
31,804,340
2
<theano>
2015-08-04T08:30:01.340
null
5,179,831
A weird error with updates in theano
<p>I designed a variable net, but ran into some problems with Theano. The general idea is that different inputs will get different nets with the same parameters, something like a recursive neural network with an auto-encoder. There are two cases in my code: one case is run if , the other case is run .</p> <p>It is weird that the code can run without bugs if I comment out updates=updates, which is not what I expected (the train_test theano function in the code). However, if I uncomment updates=updates, an error occurs (the train_test_bug theano function in the code). The latter is the one I'd like to implement.</p> <p>I have already spent some days on this bug. Can anyone help me? I would appreciate it.</p> <pre></pre> <p><strong>EDIT (by @danielrenshaw)</strong></p> <p>I've cut the code down to a simpler demonstration of the problem.</p> <p>The cause is in the gradient computation of a double-nested scan expression. The problem disappears when a modified inner-most recursive expression is used (see comments in the first function below).</p> <pre></pre>
[ { "AnswerId": "31822498", "CreationDate": "2015-08-05T02:37:08.400", "ParentId": null, "OwnerUserId": "5179831", "Title": null, "Body": "<p>I solved the problem in the version edited by danielrenshaw. When I add h0 as outputs_info, it works. Before that I used the first element of the sequence as outputs_info, which I think caused the error. But I still cannot solve my original problem.</p>\n\n<pre><code>import numpy\nimport theano\nimport theano.tensor as tt\nimport theano.ifelse\n\n\ndef inner_scan_step(x_t_t, h_tm1, w):\n # Fails when using this recursive expression\n h_t = tt.dot(h_tm1, w) + x_t_t\n\n # No failure when using this recursive expression\n # h_t = h_tm1 + tt.dot(x_t_t, w)\n\n return h_t\n\n\ndef outer_scan_step(x_t, w, h0):\n h, _ = theano.scan(inner_scan_step,\n sequences=[x_t],\n outputs_info=[h0],\n non_sequences=[w],\n strict=True)\n return h[-1]\n\n\ndef get_outputs(x, w, h0):\n features, _ = theano.scan(outer_scan_step,\n sequences=[x],\n non_sequences=[w, h0],\n strict=True)\n return tt.grad(features.sum(), w)\n\n\ndef main():\n theano.config.compute_test_value = 'raise'\n\n x_value = numpy.arange(12, dtype=theano.config.floatX).reshape((2, 2, 3))\n\n x = tt.tensor3()\n x.tag.test_value = x_value\n\n w = theano.shared(value=numpy.ones((3, 3), dtype=theano.config.floatX), borrow=True)\n h0 = theano.shared(value=numpy.zeros(3, dtype=theano.config.floatX), borrow=True)\n\n f = theano.function(inputs=[x], outputs=get_outputs(x, w, h0))\n\n print f(x_value)\n\n\nif __name__ == \"__main__\":\n main()\n</code></pre>\n" }, { "AnswerId": "36077074", "CreationDate": "2016-03-18T05:50:01.230", "ParentId": null, "OwnerUserId": "6080844", "Title": null, "Body": "<p>I've encountered the same issue and I fixed it by setting optimizer=fast_compile in THEANO_FLAGS. I guess that is a bug in Theano.</p>\n" } ]
31,807,155
0
<python><numpy><pycharm><theano>
2015-08-04T10:42:31.137
null
5,173,166
Errors importing Theano in Pycharm
<p>I have a project set up in PyCharm that uses imports from Theano. I believe I have completed all the steps in the Theano installation guide and have included the necessary libraries in the Project Interpreter (Theano 0.7.0, numpy 1.9.2, and scipu 0.14.0, among others).</p> <p>Yet, when compiling I get the following errors:</p> <pre></pre> <p>Would appreciate it if someone can point out what else I'm missing.</p>
[]
31,812,660
1
<caffe>
2015-08-04T14:56:04.513
null
3,673,925
Error message during installing caffe command 'make all'
<p>I ran</p> <pre></pre> <p>as suggested on the website to complete the installation. I use Ubuntu 14.04 with CUDA and OpenBlas.</p> <p>The error messages showed as follows</p> <blockquote> <p>CXX/LD -o .build_release/tools/upgrade_net_proto_text.bin .build_release/lib/libcaffe.so: undefined reference to caffe::curandGetErrorString(curandStatus)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::weight_gpu_gemm(double const*, double const*, double*)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::forward_gpu_bias(double*, double const*)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::forward_gpu_bias(float*, float const*)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::cudnn::dataType::zero<br> .build_release/lib/libcaffe.so: undefined reference to caffe::cudnn::dataType::one<br> .build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::backward_gpu_gemm(float const*, float const*, float*)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::cublasGetErrorString(cublasStatus_t)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::forward_gpu_gemm(double const*, double const*, double*, bool)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::backward_gpu_gemm(double const*, double const*, double*)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::backward_gpu_bias(double*, double const*)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::forward_gpu_gemm(float const*, float const*, float*, bool)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::cudnn::dataType::zero<br> .build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::weight_gpu_gemm(float const*, float const*, float*)<br> 
.build_release/lib/libcaffe.so: undefined reference to caffe::BaseConvolutionLayer::backward_gpu_bias(float*, float const*)<br> .build_release/lib/libcaffe.so: undefined reference to caffe::cudnn::dataType::one<br> collect2: error: ld returned 1 exit status<br> make: *** [.build_release/tools/upgrade_net_proto_text.bin] Error 1</p> </blockquote> <p>I only modified Makefile.config. The modified Makefile.config shown as follows</p> <pre></pre>
[ { "AnswerId": "31825926", "CreationDate": "2015-08-05T07:22:54.087", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>You need to change the <code>BLAS</code> settings in <code>Makefile.config</code> to</p>\n\n<pre><code>BLAS := open\n</code></pre>\n\n<p>Rather than <code>'OpenBlas'</code>.</p>\n" } ]
31,817,007
0
<python><theano>
2015-08-04T18:51:10.763
null
4,562,171
Why does theano.tensor.stack() make function compilation time so much faster?
<p>I'm writing a function which takes in an array of enzyme and substrate concentrations, and returns an array of maximum reaction rates.</p> <pre></pre> <p>enzyme_vars_array and substrate_vars_array are lists of theano tensors, built using T.dscalar('name_of_thing').</p> <p>rateExpressionsArray is a list of python expressions which gives the kinetic rate of a reaction in terms of the theano tensors in enzyme_vars_array and substrate_vars_array. Because some enzyme and substrate concentrations may not be used, I've included the on_unused_input='ignore' flag.</p> <p>The sizes of these inputs are:</p> <p>len(enzyme_vars_array) = 132</p> <p>len(substrate_vars_array) = 17</p> <p>len(rateExpressionsArray) = 2402</p> <p>Running this line of code as written causes an extremely long compilation, at least 5 minutes (I gave up waiting eventually).</p> <p>However, this single change reduced the compilation time to only a few seconds:</p> <pre></pre> <p>What is theano.tensor.stack() doing which causes this massive change in compilation time? And what about the original formulation made it so slow?</p> <p><strong>Edit:</strong></p> <p>While writing up example code which can be run, this problem got stranger. This code generates examples of enzyme_vars_array, substrate_vars_array, and rateExpressionArray:</p> <pre></pre> <p>This is the code I expected to run slowly (as it runs without T.stack). However, in this example it now runs at a perfectly reasonable pace. When I replace the last line with this however:</p> <pre></pre> <p>It runs extremely slowly again, the opposite of my observed behavior in my actual code!</p> <p>I considered possible differences between this example and my actual code and realized that my real rateExpressionsArray is very sparse: it mostly consists of a placeholder which just returns a default value, with only a few real expressions. 
So rewriting my example code to look more like this, I have:</p> <pre></pre> <p>This produces the original behavior - when only a few indexes in rateExpressionArray are nontrivial expressions, the code runs fast with T.stack() and slowly without T.stack(). To replicate the slow version, replace the last line with:</p> <pre></pre> <p>So it seems that T.stack produces different behavior on sparse versus dense inputs? I'm even more confused as to the function and mechanism of stack() now. Any insights into what's happening here would be very much appreciated!</p>
[]
31,820,027
1
<neural-network><deep-learning><caffe>
2015-08-04T21:52:45.130
null
395,857
How can I modify the batch size on the fly, i.e. after loading an ANN in pycaffe?
<p>I trained a neural network using Caffe. I then use it to predict outputs given some new inputs. To do so, I load the trained neural network in pycaffe along with a deploy.prototxt, which specifies the inputs:</p> <pre></pre> <p>I load the neural network using:</p> <pre></pre> <p>Since I don't know in advance how many inputs I will have, I would like to be able to change the batch size after I loaded the neural network (i.e. after calling ). How to do so?</p>
[ { "AnswerId": "31820028", "CreationDate": "2015-08-04T21:52:45.130", "ParentId": null, "OwnerUserId": "395857", "Title": null, "Body": "<p>You can use <a href=\"https://github.com/BVLC/caffe/blob/master/docs/tutorial/interfaces.md#reshape\" rel=\"nofollow\"><code>reshape</code></a>:</p>\n\n<pre><code>net.blobs['data'].reshape(data_batch_size, 1, 1, data_of_features)\nnet.blobs['adbeoption'].reshape(adbeoption_batch_size, 1, 1, 1)\n</code></pre>\n\n<p>Then you can call <code>net.forward()</code>.</p>\n\n<p>It will modify the batch size on the fly, without having to reload the ANN.</p>\n" } ]
31,823,898
1
<neural-network><deep-learning><caffe>
2015-08-05T05:11:01.307
null
395,857
Changing the solver parameters in Caffe through pycaffe
<p>How can I change the solver parameter in Caffe through pycaffe?</p> <p>E.g. right after calling I would like to change the solver's parameters (learning rate, stepsize, gamma, momentum, base_lr, power, etc.), without having to change .</p>
[ { "AnswerId": "32361005", "CreationDate": "2015-09-02T18:54:55.657", "ParentId": null, "OwnerUserId": "5293046", "Title": null, "Body": "<p>Maybe you can create a temporary file.</p>\n\n<p>First of all, load your solver parameters with</p>\n\n<pre><code>from caffe.proto import caffe_pb2\nfrom google.protobuf import text_format\nsolver_config = caffe_pb2.SolverParameter()\nwith open('/your/solver/path') as f:\n text_format.Merge(str(f.read()), solver_config)\n</code></pre>\n\n<p>You can modify any solver parameter just setting the desired value in <code>solver_config</code> (e.g. <code>solver_config.test_interval = 15</code>). Then, it's just creating a temp file and load your solver from it:</p>\n\n<pre><code>new_solver_config = text_format.MessageToString(solver_config)\nwith open('temp.prototxt', 'w') as f:\n f.write(new_solver_config) \nsolver = caffe.get_solver('temp.prototxt')\nsolver.step(1)\n</code></pre>\n" } ]
31,830,189
1
<python><macos><caffe><pycaffe>
2015-08-05T10:40:40.153
null
342,323
Prebuilt Python Caffe for OSX
<p>Is there any pre-built PyCaffe out there for OSX? I do see instructions on how to build it, but I'm sure I'll have a lot of difficulties trying to build all of its dependencies. So, I'd appreciate it if anyone knows where I can get the prebuilt PyCaffe module. Or is it necessary that it gets fully built on the machine?</p> <p>Thanks</p>
[ { "AnswerId": "32004200", "CreationDate": "2015-08-14T07:06:06.917", "ParentId": null, "OwnerUserId": "627806", "Title": null, "Body": "<p>In case you're familiar with <a href=\"https://www.docker.com/\" rel=\"nofollow\">docker</a>, you could try: <a href=\"https://github.com/ryankennedyio/deep-dream-generator\" rel=\"nofollow\">https://github.com/ryankennedyio/deep-dream-generator</a></p>\n\n<p>On OS X, you'll need to have <a href=\"http://boot2docker.io/\" rel=\"nofollow\">boot2docker</a> or some such.</p>\n\n<p>Anyway, I was able to get up and running in under 5 minutes or so with this approach, after spending an hour or so (and giving up) downloading/building/installing dependencies directly.</p>\n" } ]
31,832,157
1
<numpy><scikit-learn><theano>
2015-08-05T12:11:13.643
31,832,445
1,584,115
Can't fit scikit-neuralnetwork classifier because of tuple index out of range
<p>I am trying to get this <a href="https://scikit-neuralnetwork.readthedocs.org/en/latest/guide_beginners.html#classification" rel="nofollow">classifier</a> working. It is an extension for scikit-learn with dependencies on Theano.</p> <p>My goal was to fit a neural network with a list of years and teach it to know if a year is a leap year or not (later I would increase the range). But I run into an error when I test this example.</p> <p>My code looks like this:</p> <p><strong>leapyear.py</strong></p> <pre></pre> <p>requirements.txt</p> <pre></pre> <p>my output with the error:</p> <pre></pre> <p>I looked into the sources of mlp.py, but I don't know how to fix it. What has to be changed so that I can fit my network?</p> <p>Update (not question related): I just wanted to add that I need to convert the year to a binary representation; after this, the neural network will work.</p>
[ { "AnswerId": "31832445", "CreationDate": "2015-08-05T12:24:32.490", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The problem is that the classifier requires the data to be presented as a 2 dimensional numpy array, with the first axis being the samples and the second axis being the features.</p>\n\n<p>In your case you have only one \"feature\" (the year) so you need to turn the years data into a Nx1 2D numpy array. This can be achieved by adding the following line just before the data split statement:</p>\n\n<pre><code>years = np.array([[year] for year in years])\n</code></pre>\n" } ]
31,840,488
1
<python><neural-network><euclidean-distance><caffe><pycaffe>
2015-08-05T18:48:26.940
null
3,543,300
Caffe Iteration loss versus Train Net loss
<p>I'm using caffe to train a CNN with a Euclidean loss layer at the bottom, and my solver.prototxt file configured to display every 100 iterations. I see something like this,</p> <pre></pre> <p>I'm confused as to what the difference between the Iteration loss and Train net loss is. Usually the iteration loss is very small (around 0) and the Train net output loss is a bit larger. Can somebody please clarify?</p>
[ { "AnswerId": "34014534", "CreationDate": "2015-12-01T07:07:23.637", "ParentId": null, "OwnerUserId": "1495615", "Title": null, "Body": "<p>Evan Shelhamer already gave his answer on <a href=\"https://groups.google.com/forum/#!topic/caffe-users/WEhQ92s9Vus\" rel=\"nofollow\">https://groups.google.com/forum/#!topic/caffe-users/WEhQ92s9Vus</a>. </p>\n\n<p>As he pointe out, The <code>net output #k</code> result is the output of the net for that particular iteration / batch while the <code>Iteration T, loss = X</code> output is smoothed across iterations according to the <code>average_loss</code> field.</p>\n" } ]
31,849,778
1
<python><theano>
2015-08-06T07:46:44.880
31,852,207
5,197,007
TypeError "Bad input argument to theano function"
<p>The error:</p> <blockquote> <p>TypeError: ('Bad input argument to theano function with name "c2.py:77" at index 1(0-based)', 'Wrong number of dimensions: expected 2, got 1 with shape (128L,).')</p> </blockquote> <p>Please advise how to fix? </p> <p>The code and data can be downloaded on this link: <a href="http://u.163.com/axfWJ81e" rel="noreferrer">http://u.163.com/axfWJ81e</a> and enter this code: QU90WxTZ</p> <p>And here is my code:</p> <pre></pre>
[ { "AnswerId": "31852207", "CreationDate": "2015-08-06T09:39:17.887", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The problem is that you tell Theano Y is a matrix of floating point values but the value you provide for Y is a vector of integers.</p>\n\n<p>It's not entirely clear which is correct, but I suspect you intend Y to be a vector of integers and to use the 1-hot variant of cross entropy. If so, the problem might be fixed by changing the Theano definition of Y to</p>\n\n<pre><code>Y = T.lvector()\n</code></pre>\n" } ]
31,864,585
2
<macos><cuda><gpu><theano>
2015-08-06T19:51:53.350
null
865,662
Can't get Theano to link against CUDNN on OSX
<p>After a big battle I was finally able to get Theano to use the GPU in OSX. But now I've tried everything I can remember and Theano still can't use CuDNN.</p> <p>I installed CUDA version 7 and CUDNN version 3.</p> <p>I tried copying the libraries to and also to , the include file was copied to </p> <p>My .theanorc is</p> <pre></pre> <p>And my .profile has the relevant parts:</p> <pre></pre> <p>But still, when I try to get Theano to use CUDNN, the furthest I get (with the files in lib64) is this error:</p> <pre></pre> <p>It seems like clang is not finding the cudnn library even though I specifically told it to check that path.</p>
[ { "AnswerId": "32122669", "CreationDate": "2015-08-20T15:44:23.810", "ParentId": null, "OwnerUserId": "5248131", "Title": null, "Body": "<p>In your profile add: </p>\n\n<pre><code>export LIBRARY_PATH=$CUDA_ROOT/lib:$CUDA_ROOT/lib64:$LIBRARY_PATH\n</code></pre>\n\n<p>Including the lib on LIBRARY_PATH worked for me. </p>\n" }, { "AnswerId": "33785706", "CreationDate": "2015-11-18T16:47:13.113", "ParentId": null, "OwnerUserId": "636626", "Title": null, "Body": "<p>I had the same problem and could fix it by setting</p>\n\n<pre><code>export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-7.5/lib:$DYLD_LIBRARY_PATH\n</code></pre>\n\n<p>in my <code>.bashrc</code> and running</p>\n\n<pre><code>sudo update_dyld_shared_cache\n</code></pre>\n\n<p>afterwards.</p>\n" } ]
31,868,441
1
<python><theano>
2015-08-07T01:43:34.557
null
4,029,252
"BUG IN FGRAPH.REPLACE OR A LISTENER" error reported when compile theano function
<p>I encountered an stranger error as below, when compile my theano function. I am using the version 0.7 of theano. I hope a quick work around is available. The function dump is <a href="http://pan.baidu.com/s/1eQGsjIE" rel="nofollow">here</a>.</p> <pre></pre>
[ { "AnswerId": "34083971", "CreationDate": "2015-12-04T08:35:27.977", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>This error message appears when a bug in a Theano optimization causes an invalid graph modification.</p>\n\n<p>If you ever see \"Optimization failure due to: &lt;something&gt;\", try the following:</p>\n\n<ol>\n<li><p>Search the internet, and the <a href=\"https://groups.google.com/forum/#!forum/theano-users\" rel=\"nofollow\">theano-users mailing list</a> in particular, for the message including the specific &lt;something&gt; (in this case &lt;something&gt; is \"local_shape_to_shape_i\"). You may find a message indicating that the bug has already been identified. If it's been reported to the Theano developers then it may have already been fixed though you may need to update to the bleeding edge version of Theano direct from GitHub (i.e. pip install --upgrade may not be sufficient).</p></li>\n<li><p>Even if you can't find any mention online, try updating to the bleeding edge version if that's possible for you. It may have already been fixed.</p></li>\n<li><p>If the latest bleeding edge version still exhibits the bug then report it on the <a href=\"https://groups.google.com/forum/#!forum/theano-users\" rel=\"nofollow\">theano-users mailing list</a>.</p></li>\n<li><p>Ignore it. Optimization failures do not cause invalid computations. The only side effect (at least in theory) is that the computation may not be as efficient as it might otherwise be.</p></li>\n</ol>\n" } ]
31,871,083
1
<cuda><deep-learning><caffe>
2015-08-07T06:29:56.980
31,893,413
2,467,772
Errors in building caffe on Jetson-TK1 board
<p>I am building caffe on my Jetson-TK1 board. The board runs 32-bit Ubuntu Linux. My Makefile.config is as follows</p> <pre></pre> <p>I can do with success. The errors come when I run . The errors are</p> <pre></pre> <p>I use CUDA-6.5.</p> <p>What could be wrong with this build?</p> <p><strong>EDIT 1:</strong> The <a href="http://petewarden.com/2014/10/25/how-to-run-the-caffe-deep-learning-vision-library-on-nvidias-jetson-mobile-gpu-board/" rel="nofollow">link</a> that @Klaus Prinoth mentioned is useful. Now I can build. I can also test for both CPU and GPU. But when I do , I get the message as . I am not sure what is wrong. The message is </p> <pre></pre> <p>What does the message mean?</p>
[ { "AnswerId": "31893413", "CreationDate": "2015-08-08T13:04:11.983", "ParentId": null, "OwnerUserId": "2467772", "Title": null, "Body": "<p>I solved the problems following the steps below. These are the steps mentioned in this <a href=\"http://planspace.org/20150614-the_nvidia_jetson_tk1_with_caffe_on_mnist/\" rel=\"nofollow\">link</a>. </p>\n\n<p>(1)Need to make sure all the dependencies are installed. They are</p>\n\n<pre><code> sudo apt-get install \\\n libprotobuf-dev protobuf-compiler gfortran \\\n libboost-dev cmake libleveldb-dev libsnappy-dev \\\n libboost-thread-dev libboost-system-dev \\\n libatlas-base-dev libhdf5-serial-dev libgflags-dev \\\n libgoogle-glog-dev liblmdb-dev gcc-4.7 g++-4.7\n</code></pre>\n\n<p>Since I don't use Python, I skip steps necessary for Python interfaces.</p>\n\n<p>(2)Get the caffe sources</p>\n\n<pre><code>sudo apt-get install git\ngit clone https://github.com/BVLC/caffe.git\ncd caffe\ncp Makefile.config.example Makefile.config\n</code></pre>\n\n<p>(3) Need to change 1099511627776 to 536870912 in src/caffe/util/db.cpp before <code>make -j 8 runtest</code>, without that it will lead to <code>MDB_MAP_FULL error in runtest</code>.\nMy Makefile.config is shown in the original post.\nThen you are ready for </p>\n\n<pre><code>make -j 8 all\nmake -j 8 test\nmake -j 8 runtest\n</code></pre>\n\n<p>Performance differences on CPU and GPU processing can be tested with </p>\n\n<p>For GPU: \"<code>run build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt --gpu=0</code>\"</p>\n\n<p>For CPU: \"<code>run build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt</code>\"\nThanks to @Klaus Prinoth, for giving me the link.</p>\n" } ]
31,880,475
1
<python><debugging><reshape><theano>
2015-08-07T14:41:21.063
31,891,337
1,799,871
Theano reshape – index out ouf bounds
<p>I can't seem to get Theano to reshape my tensors as I want it to. The reshaping in the code below is supposed to keep dimensions and flatten all remaining ones into a single array.</p> <p>The code fails with on the line if I run it with a test value. Otherwise, the function seems to compile, but fails upon the first real input with .</p> <p>When I tried using just numpy for an equivalent computation, it worked normally. Is there anything I am doing wrong? Or is there any easy way to see the resulting dimensions that are used for the reshaping ( does not help since everything is a Theano variable)?</p> <pre></pre>
[ { "AnswerId": "31891337", "CreationDate": "2015-08-08T08:45:11.993", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The problem relates to how you're combining the two components of the new shape. The <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.reshape\" rel=\"nofollow\"><code>reshape</code> command requires an <code>lvector</code> for the new shape</a>.</p>\n\n<p>Since you're using the test values mechanism you can debug this problem by simply printing test value bits and pieces. For example, I used</p>\n\n<pre><code>print inputs.shape.tag.test_value\nprint inputs.shape[0:keep_dims].tag.test_value\nprint inputs.shape[keep_dims:].tag.test_value\nprint T.prod(inputs.shape[keep_dims:]).tag.test_value\nprint (inputs.shape[0:keep_dims] + (T.prod(inputs.shape[keep_dims:]),)).tag.test_value\nprint T.concatenate([inputs.shape[0:keep_dims], [T.prod(inputs.shape[keep_dims:])]]).tag.test_value\n</code></pre>\n\n<p>This shows a fix to the problem: using <code>T.concatenate</code> to combine the <code>keep_dim</code>s and the product of the remaining dims.</p>\n" } ]
31,880,720
1
<python><machine-learning><keras>
2015-08-07T14:52:07.220
31,884,034
3,828,416
How to prepare a dataset for Keras?
<h2>Motivation</h2> <p>To run a set of labeled vectors through a <a href="http://keras.io/">Keras</a> neural network.</p> <h2>Example</h2> <p>Looking at the Keras dataset example mnist:</p> <pre></pre> <p>It seems to be a 3-dimensional numpy array:</p> <pre></pre> <ul> <li>1st dimension is for the samples</li> <li>2nd and 3rd are for each sample's features</li> </ul> <h2>Attempt</h2> <p>Building the labeled vectors:</p> <pre></pre> <h2>The training code</h2> <pre></pre> <h2>Result</h2> <pre></pre> <p>Why do I get such a bad result for such a simple dataset? Is my dataset malformed?</p> <p>Thanks!</p>
[ { "AnswerId": "31884034", "CreationDate": "2015-08-07T18:03:12.217", "ParentId": null, "OwnerUserId": "3760780", "Title": null, "Body": "<p>A softmax over just one output node doesn't make much sense. If you change <code>model.add(Activation('softmax'))</code> to <code>model.add(Activation('sigmoid'))</code>, your network performs well.</p>\n\n<p>Alternatively you can also use two output nodes, where <code>1, 0</code> represents the case of <code>True</code> and <code>0, 1</code> represents the case of <code>False</code>. Then you can use a softmax layer. You just have to change your <code>Y_train</code> and <code>Y_test</code> accordingly.</p>\n" } ]
31,889,087
0
<python><amazon-web-services><amazon-ec2><apache-spark><theano>
2015-08-08T02:40:28.633
null
4,480,756
Apache-Spark with Python Theano
<p>I have written a binary classifier using the Python Theano library. Based on different data files, I would like to parallelize the classifier on Amazon Web Services (AWS) EC2 with one master node and several slave nodes using Apache-Spark. I've tested my code in "local" mode on the AWS-EC2 master node, and the classifier works very well except for the long running time. However, when I run the same code on an AWS-EC2 cluster with one master node and two slave nodes (all three nodes are type), I get the following error:</p> <pre></pre> <p>I am sure that I correctly set up my cluster on AWS-EC2 as well as the master node URL in my code, and I'm also sure that my code has no bugs. But I have no idea about the problem. I would really appreciate it if anyone could help me solve the problem.</p>
[]
31,892,519
2
<python-2.7><cuda><gpu><theano><nvcc>
2015-08-08T11:16:18.523
31,892,615
4,899,439
Link error with CUDA 7.5 in Windows 10 (from Theano project): MSVCRT.lib error LNK2019: unresolved external symbol
<p>I am trying to properly setup CUDA in order to take advantage of the GPU in Theano.</p> <p>After fixing many compilation problems by tuning my and files, I am struggling to fix this linking error:</p> <pre></pre> <p>Here's my file:</p> <pre></pre> <p>And here is my file:</p> <pre></pre> <p>It seems that this is not an uncommon error, but generally fixes involve <a href="https://stackoverflow.com/questions/2098627/how-to-integrate-cuda-cu-code-with-c-app">changing some setting in the Visual Studio project</a>. However, here I don't have a Visual Studio project. The code is dynamically generated by Theano and compiled at runtime.</p> <p>Relevant system settings:</p> <ul> <li>Windows 10 (yes...)</li> <li>Python 2.7.10 64bits (Anaconda distrib)</li> <li>CUDA 7.5 / NVIDIA driver 353.54 / GeForce GTX 760</li> <li>Visual Studio Community 2013</li> </ul>
[ { "AnswerId": "36092366", "CreationDate": "2016-03-18T19:00:22.990", "ParentId": null, "OwnerUserId": "5473273", "Title": null, "Body": "<p>I would also like to thank you. I have been trying to get this working for hours, and this was the post that put me over the edge. My config was slightly different, so my actual links were different. I am posting them in case it helps anyone else</p>\n\n<p>.theanorc</p>\n\n<pre><code>[global]\ndevice = gpu\nfloatX = float32\n\n[nvcc]\nflags = --use-local-env --cl-version=2008\n</code></pre>\n\n<p>nvcc.profile </p>\n\n<pre><code>TOP = $(_HERE_)/..\n\nNVVMIR_LIBRARY_DIR = $(TOP)/nvvm/libdevice\n\nPATH += $(TOP)/open64/bin;$(TOP)/nvvm/bin;$(_HERE_);$(TOP)/lib;\n\nINCLUDES += \"-I$(TOP)/include\" \"-I$(TOP)/include/cudart\" \"-IC:/Program Files (x86)/Common Files/Microsoft/Visual C++ for Python/9.0/VC/include\" \"-IC:\\Program Files\\Microsoft SDKs\\Windows\\v7.1\\Include\"$(_SPACE_)\n\nLIBRARIES =+ $(_SPACE_) \"/LIBPATH:$(TOP)/lib/$(_WIN_PLATFORM_)\" \"/LIBPATH:C:/Program Files (x86)/Common Files/Microsoft/Visual C++ for Python/9.0/VC/lib/amd64\" \"/LIBPATH:C:\\Program Files\\Microsoft SDKs\\Windows\\v7.1\\Lib\\x64\"\n\nCUDAFE_FLAGS +=\nOPENCC_FLAGS +=\nPTXAS_FLAGS +=\n</code></pre>\n\n<p>using:\nwindows 7, 64bit\ncuda 5.5\npython 2.7\nwindows SDK 7.1\nMicrosoft Visual C++ Compiler for Python 2.7</p>\n" }, { "AnswerId": "31892615", "CreationDate": "2015-08-08T11:29:23.583", "ParentId": null, "OwnerUserId": "4899439", "Title": null, "Body": "<p>Damn it! I figured it out just after posting the question. The solution: slightly different include and library folders:</p>\n\n<pre><code>TOP = $(_HERE_)/..\n\nNVVMIR_LIBRARY_DIR = $(TOP)/nvvm/libdevice\n\nPATH += $(TOP)/open64/bin;$(TOP)/nvvm/bin;$(_HERE_);$(TOP)/lib;\n\nINCLUDES += \"-I$(TOP)/include\" \"-IC:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\include\" \"-IC:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v7.1A\\Include\" $(_SPACE_)\n\nLIBRARIES =+ $(_SPACE_) \"/LIBPATH:$(TOP)/lib/$(_WIN_PLATFORM_)\" \"/LIBPATH:C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\lib\\amd64\" \"/LIBPATH:C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v7.1A\\Lib\\x64\"\n\nCUDAFE_FLAGS +=\nPTXAS_FLAGS +=\n</code></pre>\n\n<p>In particular I switched from:</p>\n\n<pre><code>C:\\Program Files\\Microsoft SDKs\\Windows\\v6.0A\\\n</code></pre>\n\n<p>to:</p>\n\n<pre><code>C:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v7.1A\n</code></pre>\n\n<p>(I thought that I had to use the <code>Program Files</code> ones because of my 64bits project, but in fact the 64bits files are also included in <code>Program Files (x86)</code>)</p>\n" } ]
31,896,518
1
<lua><torch>
2015-08-08T17:12:43.923
35,181,529
3,106,889
Calculating Recall and precision for the Confusion matrix using torch7
<p>I am using the tutorial <a href="https://github.com/torch/tutorials/tree/master/2_supervised" rel="nofollow">supervised</a> and I would like to calculate the Recall and Precision. Is there way to calculate them in the tutorial?</p>
[ { "AnswerId": "35181529", "CreationDate": "2016-02-03T15:49:19.160", "ParentId": null, "OwnerUserId": "3106889", "Title": null, "Body": "<p>I saved my data to disk, then I used Python.</p>\n" } ]
31,900,850
1
<python><memory><amazon-web-services><apache-spark><theano>
2015-08-09T04:15:36.280
null
4,480,756
PySpark OSError: [Errno 12] Cannot allocate memory
<p>I've written a binary classifier using the Python Theano library. Based on different data files, I would like to parallelize the classifier on Amazon Web Services (AWS) EC2 with one master node and several slave nodes using Apache-Spark. When I tested my code in "local" mode on the AWS-EC2 master node, which is of t2.micro type (1 GB RAM), I got the following memory-related error:</p> <pre></pre> <p>I'm sure that my code has no bugs, and it seems that the problem is running out of memory or some other memory-related issue. But t2.micro on AWS has 1 GB of memory, and the data the code reads for training the classifier is only about 1.7 KB, so I think the memory is big enough. I really have no idea about the error or how to fix it, and I would really appreciate it if anyone could help me.</p>
[ { "AnswerId": "33621839", "CreationDate": "2015-11-10T02:26:10.783", "ParentId": null, "OwnerUserId": "5461201", "Title": null, "Body": "<p>Maybe you can use <code>subprocess.Popen(\"cmd\", stdout=subprocess.PIPE, shell=True, close_fds=True)</code></p>\n" } ]
31,908,698
1
<installation><theano>
2015-08-09T20:48:47.500
null
4,859,843
gendef returning invalid syntax error
<p>I am trying to install Theano for machine learning on my Windows 7 computer.</p> <p>One of the last steps in installing the dependencies is to 'create a link library for GCC' by 'Opening up the Python shell and cd to C:\SciSoft'. Then execute:</p> <pre></pre> <p>I've tried doing this but I get an invalid syntax error highlighted on 'WinPython'. I tried changing directory to go deeper and running gendef again, and it also returned the same error. This is a copy-and-paste job from <a href="http://deeplearning.net/software/theano/install_windows.html#install-windows" rel="nofollow">http://deeplearning.net/software/theano/install_windows.html#install-windows</a></p>
[ { "AnswerId": "35839056", "CreationDate": "2016-03-07T08:18:12.260", "ParentId": null, "OwnerUserId": "1762932", "Title": null, "Body": "<p>I also followed the tutorial at the link to install Theano. </p>\n\n<p>The line \"<em>Finally we need to create a link library for GCC. Open up the Python shell and cd to c:\\SciSoft</em>\" is probably an error; \"the Python shell\" should be modified to \"cmd.exe\".</p>\n\n<p>The two-line scripts are not python scripts, and can be successfully run on cmd.exe after changing directory to c:\\SciSoft.</p>\n" } ]
31,919,818
2
<python><theano>
2015-08-10T12:39:20.993
31,923,358
1,196,752
Theano sqrt returning NaN values
<p>In my code I'm using theano to calculate a Euclidean distance matrix (code from <a href="https://stackoverflow.com/questions/25886374/pdist-for-theano-tensor">here</a>):</p> <pre></pre> <p>But the following code causes some values of the matrix to be . I've read that this happens when calculating and <a href="https://groups.google.com/forum/#!topic/theano-users/IXfxAU35MiE" rel="nofollow noreferrer">here</a> it's suggested to</p> <blockquote> <p>Add an eps inside the sqrt (or max(x,EPs))</p> </blockquote> <p>So I've added an eps to my code:</p> <pre></pre> <p>And I'm adding it before performing . I'm getting fewer s, but I'm still getting them. What is the proper solution to the problem? I've also noticed that if  is  there are no </p>
[ { "AnswerId": "31924111", "CreationDate": "2015-08-10T15:58:41.733", "ParentId": null, "OwnerUserId": "3497273", "Title": null, "Body": "<h2>Just checking</h2>\n\n<p>In <code>squared_euclidian_distances</code> you're adding a column, a row, and a matrix. Are you sure this is what you want?</p>\n\n<p>More precisely, if <code>MAT</code> is of shape (n, p), you're adding matrices of shapes (n, 1), (1, n) and (n, n).</p>\n\n<p>Theano seems to silently repeat the rows (resp. the columns) of each one-dimensional member to match the number of rows and columns of the dot product.</p>\n\n<h2>If this is what you want</h2>\n\n<p>In reshape, you should probably specify <code>ndim=2</code> according to <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor._tensor_py_operators.reshape\" rel=\"nofollow\">basic tensor functionality : reshape</a>.</p>\n\n<blockquote>\n <p>If the shape is a Variable argument, then you might need to use the optional ndim parameter to declare how many elements the shape has, and therefore how many dimensions the reshaped Variable will have.</p>\n</blockquote>\n\n<p>Also, it seems that <code>squared_euclidean_distances</code> should always be positive, unless imprecision errors in the difference change zero values into small negative values. If this is true, and if negative values are responsible for the NaNs you're seeing, you could indeed get rid of them without corrupting your result by surrounding <code>squared_euclidean_distances</code> with <code>abs(...)</code>.</p>\n" }, { "AnswerId": "31923358", "CreationDate": "2015-08-10T15:20:03.783", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>There are two likely sources of NaNs when computing Euclidean distances.</p>\n\n<ol>\n<li><p>Floating point representation approximation issues causing negative distances when it's really just zero. 
The square root of a negative number is undefined (assuming you're not interested in the complex solution).</p>\n\n<p>Imagine <code>MAT</code> has the value</p>\n\n<pre><code>[[ 1.62434536 -0.61175641 -0.52817175 -1.07296862 0.86540763]\n [-2.3015387 1.74481176 -0.7612069 0.3190391 -0.24937038]\n [ 1.46210794 -2.06014071 -0.3224172 -0.38405435 1.13376944]\n [-1.09989127 -0.17242821 -0.87785842 0.04221375 0.58281521]]\n</code></pre>\n\n<p>Now, if we break down the computation we see that <code>(MAT ** 2).sum(1).reshape((MAT.shape[0], 1)) + (MAT ** 2).sum(1).reshape((1, MAT.shape[0]))</code> has value</p>\n\n<pre><code>[[ 10.3838024 -9.92394296 10.39763039 -1.51676099]\n [ -9.92394296 18.16971188 -14.23897281 5.53390084]\n [ 10.39763039 -14.23897281 15.83764622 -0.65066204]\n [ -1.51676099 5.53390084 -0.65066204 4.70316652]]\n</code></pre>\n\n<p>and <code>2 * MAT.dot(MAT.T)</code> has value</p>\n\n<pre><code>[[ 10.3838024 14.27675714 13.11072431 7.54348446]\n [ 14.27675714 18.16971188 17.00367905 11.4364392 ]\n [ 13.11072431 17.00367905 15.83764622 10.27040637]\n [ 7.54348446 11.4364392 10.27040637 4.70316652]]\n</code></pre>\n\n<p>The diagonal of these two values should be equal (the distance between a vector and itself is zero) and from this textual representation it looks like that is true, but in fact they are slightly different -- the differences are too small to show up when we print the floating point values like this</p>\n\n<p>This becomes apparent when we print the value of the full expression (the second of the matrices above subtracted from the first)</p>\n\n<pre><code>[[ 0.00000000e+00 2.42007001e+01 2.71309392e+00 9.06024545e+00]\n [ 2.42007001e+01 -7.10542736e-15 3.12426519e+01 5.90253836e+00]\n [ 2.71309392e+00 3.12426519e+01 0.00000000e+00 1.09210684e+01]\n [ 9.06024545e+00 5.90253836e+00 1.09210684e+01 0.00000000e+00]]\n</code></pre>\n\n<p>The diagonal is almost composed of zeros but the item in the second row, second column is now a very small 
negative value. When you then compute the square root of all these values you get <code>NaN</code> in that position because the square root of a negative number is undefined (for real numbers).</p>\n\n<pre><code>[[ 0. 4.91942071 1.64714721 3.01002416]\n [ 4.91942071 nan 5.58951267 2.42951402]\n [ 1.64714721 5.58951267 0. 3.30470398]\n [ 3.01002416 2.42951402 3.30470398 0. ]]\n</code></pre></li>\n<li><p>Computing the gradient of a Euclidean distance expression with respect to a variable inside the input to the function. This can happen not only if a negative number is generated due to floating point approximations, as above, but also if any of the inputs are zero length.</p>\n\n<p>If <code>y = sqrt(x)</code> then <code>dy/dx = 1/(2 * sqrt(x))</code>. So if <code>x=0</code> or, for your purposes, if <code>squared_euclidean_distances=0</code> then the gradient will be <code>NaN</code> because <code>2 * sqrt(0) = 0</code> and dividing by zero is undefined.</p></li>\n</ol>\n\n<p>The solution to the first problem can be achieved by ensuring squared distances are never negative by forcing them to be no less than zero:</p>\n\n<pre><code>T.sqrt(T.maximum(squared_euclidean_distances, 0.))\n</code></pre>\n\n<p>To solve both problems (if you need gradients) then you need to make sure the squared distances are never negative or zero, so bound with a small positive epsilon:</p>\n\n<pre><code>T.sqrt(T.maximum(squared_euclidean_distances, eps))\n</code></pre>\n\n<p>The first solution makes sense since the problem only arises from approximate representations. The second is a bit more questionable because the true distance is zero so, in a sense, the gradient should be undefined. Your specific use case may yield some alternative solution that maintains the semantics without an artificial bound (e.g. by ensuring that gradients are never computed/used for zero-length vectors). But <code>NaN</code> values can be pernicious: they can spread like weeds.</p>\n" } ]
31,921,084
2
<python><serialization><save><loading><theano>
2015-08-10T13:38:47.710
31,922,453
3,497,273
How to save / serialize a trained model in theano?
<p>I saved the model as documented on <a href="http://deeplearning.net/software/theano/tutorial/loading_and_saving.html" rel="noreferrer">loading and saving</a>.</p> <pre></pre> <p> is a trained auto-encoder. It's an instance of the class <a href="http://deeplearning.net/tutorial/code/cA.py" rel="noreferrer"></a>. From the script in which I build and save the model I can call and without any problem.</p> <p>In a different script I try to load the trained model.</p> <pre></pre> <p>I receive the following error.</p> <blockquote> <pre></pre> <p>AttributeError: 'module' object has no attribute 'cA'</p> </blockquote>
[ { "AnswerId": "31922453", "CreationDate": "2015-08-10T14:38:26.183", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>All the class definitions of the pickled objects need to be known by the script that does the unpickling. There is more on this in other StackOverflow questions (e.g. <a href=\"https://stackoverflow.com/questions/9127714/attributeerror-module-object-has-no-attribute-newperson\">AttributeError: &#39;module&#39; object has no attribute &#39;newperson&#39;</a>).</p>\n\n<p>Your code is correct as long as you properly import <code>cA</code>. Given the error you're getting, that may not be the case. Make sure you're using <code>from cA import cA</code> and not just <code>import cA</code>.</p>\n\n<p>Alternatively, your model is defined by its parameters, so you could instead just pickle the parameter values. This could be done in two ways depending on your point of view.</p>\n\n<ol>\n<li><p>Save the Theano shared variables. Here we assume that <code>ca.params</code> is a regular Python list of Theano shared variable instances.</p>\n\n<pre><code>cPickle.dump(ca.params, f, protocol=cPickle.HIGHEST_PROTOCOL)\n</code></pre></li>\n<li><p>Save the numpy arrays stored inside the Theano shared variables.</p>\n\n<pre><code>cPickle.dump([param.get_value() for param in ca.params], f, protocol=cPickle.HIGHEST_PROTOCOL)\n</code></pre></li>\n</ol>\n\n<p>When you want to load the model you'll need to reinitialize the parameters. For example, create a new instance of the <code>cA</code> class, then either</p>\n\n<pre><code>ca.params = cPickle.load(f)\nca.W, ca.b, ca.b_prime = ca.params\n</code></pre>\n\n<p>or</p>\n\n<pre><code>ca.params = [theano.shared(param) for param in cPickle.load(f)]\nca.W, ca.b, ca.b_prime = ca.params\n</code></pre>\n\n<p>Note that you need to set both the <code>params</code> field and the separate parameter fields.</p>\n" }, { "AnswerId": "40000726", "CreationDate": "2016-10-12T14:04:51.743", "ParentId": null, "OwnerUserId": "5637569", "Title": null, "Body": "<p>One alternative way to save a model is to save its weights and architecture and then load them back, the way we do for pre-trained CNNs:</p>\n\n<pre><code>def save_model(model):\n model_json = model.to_json()\n open('cifar10_architecture.json', 'w').write(model_json)\n model.save_weights('cifar10_weights.h5', overwrite=True)\n</code></pre>\n\n<p>source/ref: <a href=\"https://blog.rescale.com/neural-networks-using-keras-on-\" rel=\"nofollow\">https://blog.rescale.com/neural-networks-using-keras-on-</a> rescale/</p>\n" } ]
31,923,625
1
<neural-network><keras>
2015-08-10T15:33:54.917
36,524,052
3,828,416
Recurrent neural layers in Keras
<p>I'm learning neural networks through Keras and would like to explore my sequential dataset on a recurrent neural network. I was <a href="http://keras.io/layers/recurrent/" rel="noreferrer">reading the docs</a> and trying to make sense of the <a href="http://keras.io/examples/" rel="noreferrer">LSTM example</a>.</p> <p>My questions are:</p> <ol> <li>What are the that are required for both layers?</li> <li>How do I prepare a sequential dataset that works with as an input for those recurrent layers?</li> <li>What does the layer do?</li> </ol>
[ { "AnswerId": "36524052", "CreationDate": "2016-04-09T23:38:55.190", "ParentId": null, "OwnerUserId": "5974433", "Title": null, "Body": "<ol>\n<li><p>Timesteps are a pretty bothering thing about Keras. Due to the fact that data you provide as an input to your LSTM must be a numpy array it is needed (at least for Keras version &lt;= 0.3.3) to have a specified shape of data - even with a \"time\" dimension. You can only put a sequences which have a specified length as an input - and in case your inputs vary in a length - you should use either an artificial data to \"fill\" your sequences or use a \"stateful\" mode (please read carefully Keras documentation to understand what this approach means). Both solutions might be unpleasent - but it's a cost you pay that Keras is so simple :) I hope that in version 1.0.0 they will do something with that.</p></li>\n<li><p>There are two ways to apply norecurrent layers after LSTM ones: </p>\n\n<ul>\n<li>you could set an argument return_sequences to False - then only the last activations from every sequence will be passed to a \"static\" layer.</li>\n<li>you could use one of \"time distributed\" layers - to get more flexibility with what you want to do with your data.</li>\n</ul></li>\n<li><p><a href=\"https://stats.stackexchange.com/questions/182775/what-is-an-embedding-layer-in-a-neural-network\">https://stats.stackexchange.com/questions/182775/what-is-an-embedding-layer-in-a-neural-network</a> :)</p></li>\n</ol>\n" } ]
31,927,565
0
<lua><torch>
2015-08-10T19:27:56.533
null
2,293,069
Torch.gesv B should be 2 dimensional
<p>I just started taking the Oxford machine learning course and I am new to Lua and Torch.</p> <p>I am trying to solve a simple linear equation problem with Torch. The problem is of the form AX = B.</p> <p>However, I am not able to do this because B is just a 1-D tensor (a vector). I think the case of B being a vector should be a common one, and duplicating B into a 2-D tensor is wasteful. </p> <pre></pre> <p>And I will get:</p> <pre></pre> <p>Any suggestions?</p>
[]
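For readers more familiar with NumPy, the usual workaround when a solver insists on a 2-D right-hand side is a cheap reshape. The sketch below illustrates the idea with made-up matrices (NumPy's own solver happens to accept a 1-D b directly, unlike the Torch7 `gesv` in the question):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([2.0, 8.0])          # 1-D right-hand side

# NumPy's solver accepts a 1-D b directly...
x = np.linalg.solve(A, b)         # x == [1., 2.]

# ...but an API that requires a 2-D B (as torch.gesv does here) can be
# satisfied with a reshape to an n-by-1 column -- a view, not a copy.
B = b.reshape(-1, 1)              # shape (2, 1)
X = np.linalg.solve(A, B)

assert np.allclose(x, X.ravel())
```

In Torch7 terms, the analogous move would be viewing the vector as an n×1 tensor before calling the solver, which avoids duplicating the data.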
31,935,294
1
<theano>
2015-08-11T07:06:23.050
31,940,737
5,197,007
How to get the prediction result in csv format from shared variables
<p>The data is stored in shared variables. I want to get the prediction result in CSV format. Below is the code; it throws an error. How can I fix it? Thank you for your help!</p> <pre></pre> <p>And more information about "index":</p> <pre></pre> <p>When I enter:</p> <pre></pre> <p>The console shows: </p> <pre></pre> <p>When I enter:</p> <pre></pre> <p>It throws an error:</p> <pre></pre>
[ { "AnswerId": "31940737", "CreationDate": "2015-08-11T11:38:20.510", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>Your <code>test_model</code> function has a single input value,</p>\n\n<pre><code>inputs=[index],\n</code></pre>\n\n<p>Your pasted code doesn't show the creation of the variable <code>index</code> but my guess is that it's a Theano symbolic scalar with an integer type. If so, you need to call the compiled function with a single integer input, for example</p>\n\n<pre><code>test_model(1)\n</code></pre>\n\n<p>You are trying to call <code>test_model(test_set_x)</code> which doesn't work because <code>test_set_x</code> is (again probably) a shared variable, not the integer index the function is expecting.</p>\n\n<p>Note that the <a href=\"http://deeplearning.net/tutorial/code/logistic_sgd.py\" rel=\"nofollow\">tutorial code</a> does this:</p>\n\n<pre><code>test_losses = [test_model(i) for i in xrange(n_test_batches)]\n</code></pre>\n" } ]
31,936,080
2
<python><ipython><caffe>
2015-08-11T07:50:15.417
32,722,501
148,346
Construct caffe.Net object using NetParameter
<p>From the <a href="http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1Net.html" rel="noreferrer">documentation</a> I thought there was a constructor taking a NetParameter argument,</p> <blockquote> <p>explicit Net(const NetParameter&amp; param);</p> </blockquote> <p>but when I try to use it like this:</p> <pre></pre> <p>The below error occurs in ipython</p> <pre></pre> <p>So what am I doing wrong here? How do I use this constructor?</p> <p>Note: I know how use the "read file from disk constructor" already, I want to use the NetParameter one / or understand why it doesn't work.</p> <p>Edit after Shai's comment:</p> <p>I acquired caffe using this command on Jul 26, 2015: git clone <a href="https://github.com/BVLC/caffe.git" rel="noreferrer">https://github.com/BVLC/caffe.git</a></p> <p>Here's the file on my disk:</p> <pre></pre> <p>The -version switch appears to do nothing. I grepped through the source and was unable to find a version number.</p>
[ { "AnswerId": "32722501", "CreationDate": "2015-09-22T16:51:49.453", "ParentId": null, "OwnerUserId": "2466336", "Title": null, "Body": "<p>Nothing wrong with your code. Indeed there's an overloaded constructor of the Net class in C++, but it's currently not exposed by the python interface. The python interface is limited to the constructor with the file param.</p>\n\n<p>I'm not sure if simply exposing it in python/caffe/_caffe.cpp is the only thing keeping us from constructing a python Net object with NetParameter or if more elaborate changes are needed.</p>\n" }, { "AnswerId": "34172374", "CreationDate": "2015-12-09T06:47:09.053", "ParentId": null, "OwnerUserId": "2932001", "Title": null, "Body": "<p>I came across the same problem and I get a solution <a href=\"https://groups.google.com/forum/#!topic/caffe-users/OGLXn8qQzGI\" rel=\"nofollow\">on google user group</a>, which explains that your c++ boost lib is too old, you may need to update it.</p>\n" } ]
31,962,975
3
<python><opencv><ubuntu><anaconda><caffe>
2015-08-12T10:46:26.110
32,010,074
3,633,250
Caffe install on ubuntu for anaconda with python 2.7 fails with libpng16.so.16 not found
<p>So I have installed anaconda with python 2.7 and installed all of the requirements for the Caffe library. I ensured that opencv is installed by</p> <pre></pre> <p>and by checking that I can run a couple of examples from the docs.</p> <p>Now I download caffe, configure makefile.config properly and run make all. I get a very odd error:</p> <pre></pre> <p>What's wrong with that guy? Notice that I originally had anaconda3 and compiled caffe for it successfully, but I faced tons of issues with caffe under python3, so I had to remove it and try to set it up for anaconda with python 2.7.</p> <p>And of course I have ensured that libpng16.so.16 is in anaconda:</p> <pre></pre> <p>I googled the error, but haven't found anything in relation to caffe.</p>
[ { "AnswerId": "32010074", "CreationDate": "2015-08-14T12:25:49.790", "ParentId": null, "OwnerUserId": "3633250", "Title": null, "Body": "<p>Per @cel suggestion - </p>\n\n<pre><code>ldd libopencv_highgui.so \n</code></pre>\n\n<p>shows the files on which this lib depends. Couple of them (not the libpng!) were located in folder which I haven't included into the makefile.config. After including their folder into MakeFile build succeeded. Notice: after building the caffe you may won't to go in Spyder into the PythonPath manager and add the caffe's folder into it (or just include it into pythonpath if you are not using anaconda\\spyder).</p>\n" }, { "AnswerId": "32438450", "CreationDate": "2015-09-07T12:11:31.137", "ParentId": null, "OwnerUserId": "116067", "Title": null, "Body": "<p>I ran into the same problem and I fixed it by adding an <code>-rpath</code> in my Makefile.config :</p>\n\n<p><code>LINKFLAGS := -Wl,-rpath,$(HOME)/anaconda/lib</code></p>\n\n<p>I think this is the correct fix because it (-rpath) tells GCC where it can find libraries (libjpeg, libpng) that other libraries (in this case opencv) depend on.</p>\n" }, { "AnswerId": "33126417", "CreationDate": "2015-10-14T13:17:19.357", "ParentId": null, "OwnerUserId": "2686319", "Title": null, "Body": "<p>Adding </p>\n\n<pre><code>LINKFLAGS := -Wl,-rpath,$(HOME)/anaconda/lib\n</code></pre>\n\n<p>in to Makefile.config worked.</p>\n" } ]
31,964,764
1
<theano>
2015-08-12T12:07:18.367
31,966,758
3,054,356
Theano function analyse givens
<p>How can we monitor/analyse the of a theano function? </p> <p>As an example consider the following function:</p> <pre></pre> <p>What would be a way to monitor/analyse the shared variables x and y?</p>
[ { "AnswerId": "31966758", "CreationDate": "2015-08-12T13:33:38.803", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>If you're following/using the code from the Theano tutorials, as appears to be the case, then <code>x_train</code> and <code>y_train</code> are shared variables containing your training data (<code>x_train</code> is the input and <code>y_train</code> is the true/actual output that you want your model to predict when correct).</p>\n\n<p>The contents of these shared variables never (or, at least, shouldn't) change because your training data is normally static while a model is training.</p>\n\n<p>So, looking at the contents of shared variables <code>train_x</code> and <code>train_y</code> is just the same as looking at your training data. You can presumably just go look at the data wherever you load it from (e.g. maybe CSV data files, or numpy saved arrays, etc.)</p>\n\n<p>If you really want to look at the contents of a shared variable then you can do this using the <code>get_value()</code> method which returns the underlying numpy array:</p>\n\n<pre><code>x_data = X_train.get_value()\nprint x_data.shape\n# etc.\n</code></pre>\n\n<p>Theano is not involved at all here. Nothing is symbolic, it's just concrete numpy arrays.</p>\n" } ]
31,966,126
3
<python><theano>
2015-08-12T13:04:37.440
31,966,191
2,711,403
what is the difference between x.type and type(x) in Python?
<p>Consider the following lines:</p> <pre></pre> <p>And then,</p> <pre></pre> <p>while,</p> <pre></pre> <p>Why do type(x) and x.type give two different pieces of information? What information is conveyed by each of them?</p> <p>I also see the following, referring to the <a href="http://deeplearning.net/software/theano/tutorial/adding.html" rel="nofollow">Theano tutorial</a>:</p> <pre></pre> <p>Why is the type(x) output different in my case? Is this caused by version-specific implementation differences, and what is signified by this difference?</p>
[ { "AnswerId": "31966191", "CreationDate": "2015-08-12T13:07:32.897", "ParentId": null, "OwnerUserId": "2296458", "Title": null, "Body": "<p><code>theano.tensor</code> has an attribute <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor._tensor_py_operators.type\" rel=\"nofollow\"><code>type</code></a> which you are looking at when you say</p>\n\n<pre><code>x.type\n</code></pre>\n\n<p>This is analagous to numpy objects <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html\" rel=\"nofollow\"><code>dtype</code></a> attribute that many of their objects carry (if you are familiar with that library).</p>\n\n<p>On the other hand <a href=\"https://docs.python.org/3/library/functions.html#type\" rel=\"nofollow\"><code>type</code></a> is a Python function that looks at the actual type of the object you pass in, which for <code>type(x)</code> is indeed a </p>\n\n<pre><code>theano.tensor.var.TensorVariable\n</code></pre>\n\n<p>So long story short, you are comparing an attribute to the actual object type.</p>\n" }, { "AnswerId": "31966200", "CreationDate": "2015-08-12T13:07:51.483", "ParentId": null, "OwnerUserId": "3398583", "Title": null, "Body": "<p><code>type(x)</code> is a builtin.</p>\n\n<p><code>x.type</code> is an attribute that's defined in your object.</p>\n\n<p>They are completely seperate, <code>type(x)</code> returns what type of object <code>x</code> is and <code>x.type</code> does whatever the object wants it to. In this case, it returns some information on the type of object it is</p>\n" }, { "AnswerId": "32140234", "CreationDate": "2015-08-21T12:21:57.730", "ParentId": null, "OwnerUserId": "650654", "Title": null, "Body": "<p>As others have mentioned, <code>type(x)</code> is Python's <a href=\"https://docs.python.org/2/library/functions.html#type\" rel=\"nofollow\">builtin function</a> that returns the type of the object. It has nothing to do with Theano per se. 
This builtin function can be applied to any Python object (and everything in Python is an object). For example,</p>\n\n<ul>\n<li><code>type(1)</code> is <code>int</code>,</li>\n<li><code>type(True)</code> is <code>bool</code>,</li>\n<li><code>type(lambda x: x * x)</code> is <code>function</code>, etc.</li>\n</ul>\n\n<p>Interestingly, you can call <code>type</code> on <code>type</code> itself (everything, including <code>type</code>, is an object) - <code>type(type)</code> is <code>type</code>.</p>\n\n<p>Incidentally, <code>type(T.dscalar)</code> is <code>TensorType</code> (<code>theano.tensor.type.TensorType</code> to be precise).</p>\n\n<p><code>x.type</code>, as others have mentioned, is an attribute of the object <code>x</code>. It points back to <code>type(T.dscalar)</code>. <code>x.type</code> returns <code>TensorType(float64, scalar)</code> - this not only shows you the type of <code>T.dscalar</code>, it also tells you that <code>x</code> is scalar and it is 64-bit float.</p>\n\n<p>Other examples of the type attribute:</p>\n\n<pre><code>&gt;&gt;&gt; iv = T.ivector()\n&gt;&gt;&gt; iv.type\nTensorType(int32, vector) # iv is a vector of 32-bit ints\n&gt;&gt;&gt; fm = T.fmatrix()\n&gt;&gt;&gt; fm.type\nTensorType(float32, matrix) # fm is a matrix of 32-bit floats\n&gt;&gt;&gt; lt3 = T.ltensor3()\n&gt;&gt;&gt; lt3.type\nTensorType(int64, 3D) # lt3 is a 3D array of 64-bit ints\n</code></pre>\n" } ]
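The distinction the answers draw — Python's builtin `type()` versus an ordinary attribute that happens to be named `type` — can be reproduced with a plain Python class. This is a toy stand-in, not Theano's actual classes:

```python
class Tensor:
    """Toy stand-in for a Theano variable that carries a `type` attribute."""
    def __init__(self, dtype, ndim):
        # An ordinary attribute, deliberately named `type`,
        # describing the value the object represents.
        self.type = (dtype, ndim)

x = Tensor("float64", 0)

# type(x) asks Python for the class of the object itself...
assert type(x) is Tensor

# ...while x.type is just an attribute lookup on the instance.
assert x.type == ("float64", 0)
```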
31,971,462
1
<theano>
2015-08-12T17:10:13.843
null
3,697,238
Bernoulli in theano
<p>I have a question about a Bernoulli mask. </p> <p>As far as I understand, the mask should depend on the rate (probability) p (e.g. if p = 0.5 and the mask is an array A with size = 2, then the mask should be something like [0,1] or [1,0]). Most Theano code uses a Bernoulli mask like this:</p> <p>rs = np.random.RandomState(1234)</p> <p>rng = theano.tensor.shared_randomstreams.RandomStreams(rs.randint(999999))</p> <p>mask = rng.binomial(n=1, p=(0.5), size=A.shape)</p> <p>But when I test this, I find that the mask could also be [0,0] or [1,1], which does not seem logical to me, because I want to randomly set exactly half of the array to zeros. Is there maybe a bug, or is there an alternative that Theano provides for this purpose? Thank you in advance!</p>
[ { "AnswerId": "31971699", "CreationDate": "2015-08-12T17:25:05.797", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p><code>rng.binomial(n=1, p=0.5, shape=A.shape)</code> will sample from a Bernoulli distribution independently for each element of the result tensor with shape <code>A.shape</code>. Because each sample is independent each element of the result tensor will be zero with probability 0.5 and 1 with probability 0.5. Consequently, if the result tensor should have shape <code>(2, )</code> (i.e. a vector of length 2), there are four possible outcomes and each will be obtained with probability 0.25:</p>\n\n<pre><code>[0, 0]\n[0, 1]\n[1, 0]\n[1, 1]\n</code></pre>\n\n<p>It's not clear what your use-case is but if it is for a denoising autoencoder for example, this is the usual approach; sometimes you drop more features than at other times. Dropping all features is rather extreme but this is an unlikely outcome when the size of the result tensor is much larger than 2.</p>\n\n<p>If you really need to mask exactly half the elements then you might be able to use <a href=\"http://deeplearning.net/software/theano/library/tensor/raw_random.html\" rel=\"nofollow\"><code>theano.tensor.raw_random.shuffle_row_elements</code></a>. I've not tried this but the idea would be to symbolically shuffle a list of indexes using <code>shuffle_row_elements</code>, select the first half of the resulting list, then use <code>set_subtensor</code> to mask just those elements in the original tensors at the selected indexes.</p>\n" } ]
31,972,448
1
<c++><deep-learning><caffe><conv-neural-network>
2015-08-12T18:09:57.523
31,984,228
2,467,772
How to understand the Cifar10 prediction output?
<p>I have trained a (<a href="http://caffe.berkeleyvision.org/" rel="nofollow">caffe</a>) model for two-class classification: pedestrian and non-pedestrian. Training looks fine, and I have the updated weights in a file. I used two labels, 1 for pedestrians and 2 for non-pedestrians, together with pedestrian images (64 x 160) and background images (64 x 160). After training, I test with a positive image (pedestrian image) and a negative image (background image). My testing file is as shown below.</p> <pre></pre> <p>For testing, I used and made some modifications, especially for paths and image size.</p> <p>I can't figure out the test output. When I test with a positive image, I get the output</p> <pre></pre> <p>When I test with a negative image, I get the output</p> <pre></pre> <p>How do I interpret the testing output?</p> <p>Is there any more efficient testing algorithm for testing the model on a video feed (frame by frame from a video clip)?</p>
[ { "AnswerId": "31984228", "CreationDate": "2015-08-13T09:20:07.437", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Why do you have <code>num_output: 10</code> for the last layer <code>ip2</code>? you only need 2-way classifier? Why are you using labels 1 and 2 instead of 0 and 1?</p>\n\n<p><strong>What you got</strong>: You have 11 outputs: one is the <code>\"label\"</code> output from the data layer, and the other 10 outputs are the 10-vector output of the softmax layer. It is unclear what the values of the 10-vector are since you only trained using two labels, thus 8 out of 10 entries were not supervised at all. Moreover, judging by the first output it seems both tests were samples with label <code>1</code> and not <code>2</code>.</p>\n\n<p><strong>What you should do:</strong><br>\n1. Change the topmost fully connected layer to have only two outputs (I also changed the format to match the new version protobuff)</p>\n\n<pre><code>layer {\n name: \"ip2/pedestrains\"\n type: \"InnerProduct\"\n bottom: \"ip1\"\n top: \"ip2\"\n param {\n lr_mult: 1\n decay_mult: 1\n }\n param {\n lr_mult: 2\n decay_mult: 0\n }\n inner_product_param {\n num_output: 2 # This is what you need changing\n }\n}\n</code></pre>\n\n<p>2. Change the binary labels in your training data to 0/1 rather that 1/2. </p>\n\n<p>Now you can train again and see what you get.</p>\n" } ]
31,978,186
3
<c++><classification><deep-learning><caffe><conv-neural-network>
2015-08-13T01:43:08.970
32,006,840
2,281,121
Monitor training/validation process in Caffe
<p>I'm training the Caffe Reference Model for classifying images. My work requires me to monitor the training process by drawing a graph of the model's accuracy after every 1000 iterations on the entire training set and validation set, which have 100K and 50K images respectively. Right now, I'm taking the naive approach: make snapshots after every 1000 iterations, then run the C++ classification code, which reads raw JPEG images, forwards them to the net, and outputs the predicted labels. However, this takes too much time on my machine (with a Geforce GTX 560 Ti).</p> <p>Is there any faster way to get the graph of accuracy of the snapshot models on both the training and validation sets?</p> <p>I was thinking about using the LMDB format instead of raw images. However, I cannot find documentation/code about doing classification in C++ using the LMDB format.</p>
[ { "AnswerId": "38478071", "CreationDate": "2016-07-20T09:58:07.317", "ParentId": null, "OwnerUserId": "2736559", "Title": null, "Body": "<p>Caffe creates logs each time you try to train something, and its located in the tmp folder (both linux and windows).<br>\nI also wrote a plotting script in python which you can easily use to visualize your loss/accuracy.<br>\nJust place your training logs with <code>.log</code> extension next to the script and double click on it.\nYou can use command prompts as well, but for ease of use, when executed it loads all logs (*.log) it can find in the current directory. \nit also shows the top 4 accuracies and at-which accuracy they were achieved. </p>\n\n<p>you can find it here : <a href=\"https://gist.github.com/Coderx7/03f46cb24dcf4127d6fa66d08126fa3b\" rel=\"nofollow\">https://gist.github.com/Coderx7/03f46cb24dcf4127d6fa66d08126fa3b</a> </p>\n" }, { "AnswerId": "44223960", "CreationDate": "2017-05-28T04:59:48.543", "ParentId": null, "OwnerUserId": "8076417", "Title": null, "Body": "<pre><code>python /pathtocaffe/tools/extra/parse_log.py lenet_train.log\n</code></pre>\n\n<p>command produces the following error:</p>\n\n<pre><code>usage: parse_log.py [-h] [--verbose] [--delimiter DELIMITER]\n logfile_path output_dir\nparse_log.py: error: too few arguments\n</code></pre>\n\n<p>Solution:</p>\n\n<p>For successful execution of \"parse_log.py\" command, we should pass the two arguments:</p>\n\n<ol>\n<li>log file</li>\n<li>path of output directory</li>\n</ol>\n\n<p>So the correct command is as follows:</p>\n\n<pre><code>python /pathtocaffe/tools/extra/parse_log.py lenet_train.log output_dir\n</code></pre>\n" }, { "AnswerId": "32006840", "CreationDate": "2015-08-14T09:32:54.823", "ParentId": null, "OwnerUserId": "2347543", "Title": null, "Body": "<p>1) You can use the <a href=\"https://github.com/NVIDIA/DIGITS\" rel=\"noreferrer\">NVIDIA-DIGITS</a> app to monitor your networks. 
They provide a GUI including dataset preparation, model selection, and learning curve visualization. More, they use a caffe distribution allowing <a href=\"https://developer.nvidia.com/digits\" rel=\"noreferrer\">multi-GPU training</a>.</p>\n\n<p>2) Or, you can simply use the log-parser inside caffe.</p>\n\n<pre><code>/pathtocaffe/build/tools/caffe train --solver=solver.prototxt 2&gt;&amp;1 | tee lenet_train.log\n</code></pre>\n\n<p>This allows you to save train log into \"lenet_train.log\". Then by using:</p>\n\n<pre><code>python /pathtocaffe/tools/extra/parse_log.py lenet_train.log .\n</code></pre>\n\n<p>you parse your train log into two csv files, containing train and test loss. You can then plot them using the following python script</p>\n\n<pre><code>import pandas as pd\nfrom matplotlib import *\nfrom matplotlib.pyplot import *\n\ntrain_log = pd.read_csv(\"./lenet_train.log.train\")\ntest_log = pd.read_csv(\"./lenet_train.log.test\")\n_, ax1 = subplots(figsize=(15, 10))\nax2 = ax1.twinx()\nax1.plot(train_log[\"NumIters\"], train_log[\"loss\"], alpha=0.4)\nax1.plot(test_log[\"NumIters\"], test_log[\"loss\"], 'g')\nax2.plot(test_log[\"NumIters\"], test_log[\"acc\"], 'r')\nax1.set_xlabel('iteration')\nax1.set_ylabel('train loss')\nax2.set_ylabel('test accuracy')\nsavefig(\"./train_test_image.png\") #save image as png\n</code></pre>\n" } ]
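The extraction step that Caffe's bundled `parse_log.py` performs can be sketched in pure Python with the standard library. Note the log-line wording below is an assumption for illustration — the exact format of Caffe's log lines varies between versions, so the regexes would need adjusting against a real log:

```python
import re

# Assumed log excerpt -- illustrative only, not a verbatim Caffe log.
log = """\
Iteration 1000, loss = 0.25
Iteration 1000, Testing net (#0)
Test net output #0: accuracy = 0.91
Iteration 2000, loss = 0.18
Test net output #0: accuracy = 0.94
"""

iters, losses, accs = [], [], []
for line in log.splitlines():
    m = re.search(r"Iteration (\d+), loss = ([\d.]+)", line)
    if m:
        iters.append(int(m.group(1)))
        losses.append(float(m.group(2)))
    m = re.search(r"accuracy = ([\d.]+)", line)
    if m:
        accs.append(float(m.group(1)))

# Pair them up for plotting or CSV export.
print(list(zip(iters, losses, accs)))  # [(1000, 0.25, 0.91), (2000, 0.18, 0.94)]
```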
31,983,794
2
<python><gpu><theano><lasagne>
2015-08-13T09:00:17.930
31,983,939
2,902,280
Select GPU during execution in Theano
<p>I am training neural nets with theano and lasagne on a 4 GPU machine. My contains the following lines:</p> <pre></pre> <p>So when in python I execute , I get </p> <p>What if, after importing theano, I chose to use say gpu1? I'd like to do this dynamically, that is, without editing is it possible? Or even to choose it at runtime?</p>
[ { "AnswerId": "31983939", "CreationDate": "2015-08-13T09:07:18.683", "ParentId": null, "OwnerUserId": "1692028", "Title": null, "Body": "<p>I'm afraid it's not possible to change the execution device after Theano has been imported. From the <a href=\"http://deeplearning.net/software/theano/library/config.html\">documentation</a>:</p>\n\n<blockquote>\n <p>config.device</p>\n \n <p>String value: either 'cpu', 'gpu', 'gpu0', 'gpu1',\n 'gpu2', or 'gpu3'</p>\n \n <p>[...]</p>\n \n <p><em>This flag’s value cannot be modified during\n the program execution.</em></p>\n</blockquote>\n\n<p><strong>Bonus</strong>: however, let's say you wanted to have two Python processes each running on a separate GPU (is that what you want?), then you could do something like:</p>\n\n<pre><code>import os\nos.system(\"THEANO_FLAGS='device=gpu0' python myscript.py\")\nos.system(\"THEANO_FLAGS='device=gpu1' python myscript.py\")\n</code></pre>\n\n<p>or hack into/extend Python's <a href=\"https://docs.python.org/3.5/library/multiprocessing.html\">multiprocessing</a> module (which works by spawning subprocesses) to ensure the flag is set before a child process is spawned.</p>\n" }, { "AnswerId": "36829078", "CreationDate": "2016-04-24T21:27:28.603", "ParentId": null, "OwnerUserId": "1667840", "Title": null, "Body": "<p>EDIT:\nTheano is now based on the GPU array backend and the following API is no longer available. </p>\n\n<p>As @EelkeSpaak mentioned, you can't change the GPU device after theano was imported. But if you want to choose it programmatically before that's possible without changing environment variables. </p>\n\n<ol>\n<li><p>Make sure you're not choosing a device in your .theanorc file. 
\nSo nothing like: </p>\n\n<p><code>device=gpu</code></p></li>\n<li><p>before calling <code>import theano</code> choose the GPU device as follows:</p>\n\n<pre><code>import theano.sandbox.cuda\ntheano.sandbox.cuda.use('gpu1')\n\n#Results in Using gpu device 1: Tesla K80\n</code></pre></li>\n</ol>\n" } ]
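The one-process-per-GPU approach from the accepted answer can also be written with the standard `subprocess` module, which makes passing a distinct `THEANO_FLAGS` to each child explicit instead of relying on shell string interpolation. This is a sketch of the environment-variable pattern only, not Theano-specific code; the child here just echoes the flag it received:

```python
import os
import subprocess
import sys

# Each child process gets its own THEANO_FLAGS before Python (and hence
# Theano) starts -- the only point at which the device can be chosen.
child_code = "import os; print(os.environ['THEANO_FLAGS'])"

for device in ("gpu0", "gpu1"):
    env = dict(os.environ, THEANO_FLAGS="device=" + device)
    out = subprocess.run(
        [sys.executable, "-c", child_code],   # a real script would go here
        env=env, capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # prints device=gpu0, then device=gpu1
```

(`capture_output` requires Python 3.7+; replace `child_code` with the path to your actual training script.)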
31,988,629
1
<python><theano><conv-neural-network>
2015-08-13T12:44:01.263
31,989,059
4,899,439
Theano - Shared variable as input of function for large dataset
<p>I am new to Theano... My apologies if this is obvious.</p> <p>I am trying to train a CNN, based on the <a href="http://deeplearning.net/tutorial/lenet.html" rel="nofollow">LeNet tutorial</a>. A major difference with the tutorial is that my dataset is too large to fit in memory, so I have to load each batch during training.</p> <p>The original model has this:</p> <pre></pre> <p>...Which does not work for me, as it assumes that is entirely loaded in memory.</p> <p>So I switched to this:</p> <pre></pre> <p>And tried to call it with:</p> <pre></pre> <p>But got:</p> <blockquote> <p>TypeError: ('Bad input argument to theano function with name ":103" at index 0(0-based)', 'Expected an array-like object, but found a Variable: maybe you are trying to call a function on a (possibly shared) variable instead of a numeric array?')</p> </blockquote> <p>So I guess I can't use a shared variable as an input to a Theano function. But then, how should I proceed...?</p>
[ { "AnswerId": "31989059", "CreationDate": "2015-08-13T13:03:36.290", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>All inputs to compiled Theano functions (i.e the output of a call to <code>theano.function(...)</code>) should always be concrete values, typically scalars or numpy arrays. Shared variables are a way to wrap a numpy array and treat it like a symbolic variable but this is not necessary when the data is being passed as an input.</p>\n\n<p>So you should be able to just omit wrapping your data and target values as shared variables and do the following instead:</p>\n\n<pre><code>cost_ij = train_model(data, target)\n</code></pre>\n\n<p>Note that, if you're using a GPU, this means your data will reside in the computer's main memory, and each part you pass as input will need to be copied to the GPU memory separately, increasing overhead and slowing it down. Also note that you will have to divide your data up and only pass part of it; this change of approach won't allow you to do GPU computations over your whole dataset at once if the whole dataset won't fit in GPU memory.</p>\n" } ]
31,992,971
0
<deep-learning><caffe><conv-neural-network>
2015-08-13T15:49:43.417
null
2,467,772
Detecting pedestrian using CaffeNet model at moderate framerate
<p>I trained the CaffeNet model (<a href="https://stackoverflow.com/questions/31972448/how-to-understand-the-cifar10-prediction-output">more precisely, the Cifar10 model for two-class classification</a>). Now the model is ready for detection. For testing the model with a single image, I use . I haven't tested how fast the code can run on a 640 x 480 image. My target of 5~10 frames/sec is just fine for offline detection. I understand that I need to implement multi-size detection (i.e. something like we do in face detection, where the original image is re-sized to different smaller sizes) so that I don't miss the pedestrian in each frame. </p> <p>According to this <a href="http://arxiv.org/pdf/1501.05790.pdf" rel="nofollow noreferrer">paper</a>, they use a 64 x 128 image size in training, and detection takes 3 ms/window; for 100 windows/image, that is 300 msec/frame. I am not sure they implement a multi-size detection approach. If multi-size detection is implemented, it will take much longer. </p> <p>At this moment, I only know of implementing a method for multi-size detection, and I know it will be very slow. Is there any more efficient way of doing detection with a CaffeNet model? A 5~10 frame/sec rate would meet my target. Thanks</p>
[]
31,997,124
1
<lua><torch>
2015-08-13T19:46:25.900
31,999,155
2,293,069
Grid search for hyper-paramerters in torch / lua
<p>I am new to torch/lua and am trying to evaluate some different optimization algorithms and different parameters for each of them.</p> <p>Algorithms: optim.sgd, optim.lbfgs</p> <p>Parameters:</p> <ul> <li>learning_rate: {1e-1, 1e-2, 1e-3}</li> <li>weight_decay: {1e-1, 1e-2}</li> </ul> <p>So what I am trying to achieve is to try every combination of the hyper-parameters and get the optimal parameter set for each of the algorithms.</p> <p>So is there something like:</p> <pre></pre> <p>as in <a href="http://scikit-learn.org/stable/modules/grid_search.html" rel="nofollow">http://scikit-learn.org/stable/modules/grid_search.html</a> available in torch to deal with this?</p> <p>Any suggestions would be nice!</p>
[ { "AnswerId": "31999155", "CreationDate": "2015-08-13T22:01:39.327", "ParentId": null, "OwnerUserId": "117844", "Title": null, "Body": "<p>Try this hyper-optimization library that is being worked on:\n<a href=\"https://github.com/nicholas-leonard/hypero\" rel=\"nofollow\">https://github.com/nicholas-leonard/hypero</a></p>\n" } ]
31,997,366
1
<python><keras>
2015-08-13T20:00:30.607
32,938,020
3,054,356
Python: keras shape mismatch error
<p>I am trying to build a very simple multilayer perceptron (MLP) in :</p> <pre></pre> <p>My training data shape: gives </p> <p>The labels belong to a binary class with shape: gives </p> <p>So my code should produce the network with the following connections: </p> <p>which produces the shape mismatch error:</p> <pre></pre> <p>At at line . Am I overlooking something obvious in Keras?</p> <p><strong>EDIT:</strong> I have gone through the question <a href="https://stackoverflow.com/questions/30384908/python-keras-neural-network-theano-package-returns-an-error-about-data-dimensi/30386018#30386018">here</a> but it does not solve my problem.</p>
[ { "AnswerId": "32938020", "CreationDate": "2015-10-04T20:26:27.393", "ParentId": null, "OwnerUserId": "5407700", "Title": null, "Body": "<p>I had the same problem and then found this thread; </p>\n\n<p><a href=\"https://github.com/fchollet/keras/issues/68\">https://github.com/fchollet/keras/issues/68</a></p>\n\n<p>It appears for you to state a final output layer of 2 or for any number of categories the labels need to be of a categorical type where essentially this is a binary vector for each observation e.g a 3 class output vector [0,2,1,0,1,0] becomes [[1,0,0],[0,0,1],[0,1,0],[1,0,0],[0,1,0],[1,0,0]].</p>\n\n<p>The np_utils.to_categorical function solved this for me;</p>\n\n<pre class=\"lang-php prettyprint-override\"><code>from keras.utils import np_utils, generic_utils\n\ny_train, y_test = [np_utils.to_categorical(x) for x in (y_train, y_test)]\n</code></pre>\n\n" } ]
31,998,824
1
<neural-network><caffe>
2015-08-13T21:34:15.877
32,013,403
4,080,543
Parser in Caffe
<p>I am trying to find the parser in Caffe. By parser, I mean the part of the code that reads the network configuration from a file and parses it. I was wondering if anyone knows where in the Caffe codebase I should look for this specific piece of code. </p>
[ { "AnswerId": "32013403", "CreationDate": "2015-08-14T15:18:07.097", "ParentId": null, "OwnerUserId": "2014584", "Title": null, "Body": "<p>Caffe's text file format for specifying models uses the <a href=\"https://developers.google.com/protocol-buffers/?hl=en\" rel=\"nofollow\">Google Protocol Buffer</a> format.</p>\n\n<p>You can see the code that reads a model in <a href=\"https://github.com/BVLC/caffe/blob/fc77ef3d23423e57d83e16c32844bbbe8d4d9f0c/src/caffe/util/io.cpp#L32\" rel=\"nofollow\">src/caffe/util/io.cpp</a>:</p>\n\n<pre><code>bool ReadProtoFromTextFile(const char* filename, Message* proto) {\n int fd = open(filename, O_RDONLY);\n CHECK_NE(fd, -1) &lt;&lt; \"File not found: \" &lt;&lt; filename;\n FileInputStream* input = new FileInputStream(fd);\n bool success = google::protobuf::TextFormat::Parse(input, proto);\n delete input;\n close(fd);\n return success;\n}\n</code></pre>\n\n<p>Try using GitHub's search to see places in the code that call this function.</p>\n" } ]
31,999,536
1
<theano>
2015-08-13T22:36:49.513
32,004,645
1,157,605
Theano scan: how do I have a tuple output as input into the next step?
<p>I want to do this sort of loop in Theano:</p> <pre></pre> <p>However, when I do</p> <pre></pre> <p>I get . If I change it so that the function takes a single tuple instead, </p> <p>In particular, I eventually want to differentiate the entire result with respect to k.</p>
[ { "AnswerId": "32004645", "CreationDate": "2015-08-14T07:32:46.320", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The first error is because your <code>add_multiply</code> function takes 3 arguments but, by having only one element in the <code>outputs_info</code> list, you're only providing a single argument. It's not clear if you intended the <code>x0</code> vector to be the initial value for just <code>a</code> or were expecting it to be spread over <code>a</code>, <code>b</code>, and <code>k</code>. The latter isn't supported by Theano and, in general, tuples are not supported by Theano. In Theano, everything needs to be a tensor (e.g. scalars are just special types of tensors with zero dimensions).</p>\n\n<p>You can achieve a replica of the Python implementation in Theano as follows.</p>\n\n<pre><code>import theano\nimport theano.tensor as tt\n\n\ndef add_multiply(a, b, k):\n return a + b + k, a * b * k\n\n\ndef python_main():\n x = 1\n y = 2\n k = 1\n tuples = []\n for i in range(5):\n x, y = add_multiply(x, y, k)\n tuples.append((x, y, k))\n return tuples\n\n\ndef theano_main():\n x = tt.constant(1, dtype='uint32')\n y = tt.constant(2, dtype='uint32')\n k = tt.scalar(dtype='uint32')\n outputs, _ = theano.scan(add_multiply, outputs_info=[x, y], non_sequences=[k], n_steps=5)\n g = theano.grad(tt.sum(outputs), k)\n f = theano.function(inputs=[k], outputs=outputs + [g])\n tuples = []\n xvs, yvs, _ = f(1)\n for xv, yv in zip(xvs, yvs):\n tuples.append((xv, yv, 1))\n return tuples\n\n\nprint 'Python:', python_main()\nprint 'Theano:', theano_main()\n</code></pre>\n\n<p>Note that in the Theano version, all the tuple handling happens outside Theano; Python has to convert from the three tensors returned by the Theano function into a list of tuples.</p>\n\n<p><strong>Update:</strong></p>\n\n<p>It's unclear what \"the entire result\" should refer to but the code has been updated to show how you might differentiate with respect to 
<code>k</code>. Note that in Theano the symbolic differentiation only works with scalar expressions, but can differentiate with respect to multi-dimensional tensors.</p>\n\n<p>In this update the <code>add_multiply</code> method no longer returns <code>k</code> since that is constant. For similar reasons, the Theano version now accepts <code>k</code> as a <code>non_sequence</code>.</p>\n" } ]
32,001,363
3
<python><theano>
2015-08-14T02:27:03.590
32,004,904
2,789,788
Trying to understand variables... can I print them?
<p>If we use one of the tutorial examples:</p> <pre></pre> <p>Does now hold the array ? </p> <p>How can I print out after my call yo ? </p> <p>I'm having trouble understanding this functional style of usage. Can anyone explain?</p>
[ { "AnswerId": "32007867", "CreationDate": "2015-08-14T10:25:57.913", "ParentId": null, "OwnerUserId": "5226958", "Title": null, "Body": "<p>We can not print a symbolic var since it remembers no real value.</p>\n\n<p>However, if you claim diff to be a shared var, u can do the following:\nprint diff.get_value()</p>\n" }, { "AnswerId": "32004904", "CreationDate": "2015-08-14T07:48:41.207", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>I think you're hitting the most difficult conceptual challenge of learning Theano.</p>\n\n<p>To answer your specific questions first,</p>\n\n<ol>\n<li><p>No, <code>diff</code> does not hold the array <code>[[ 1., 0.],[-1., -2.]]</code>. As far as Python is concerned <code>diff</code> is a reference to a Theano object. That object is a symbolic variable and never, itself, has or holds a value.</p></li>\n<li><p>When you ask how to print out <code>diff</code>, I take that to mean \"how can I print out the result of the computation that <code>diff</code> is a reference to?\" Well, in one sense, you're already doing it: <code>diff</code> is the output of the Theano function <code>f</code> so after <code>f</code> is compiled and then executed via <code>f([[1, 1], [1, 1]], [[0, 1], [2, 3]])</code> you see the result is <code>[array([[ 1., 0.], [-1., -2.]])</code>.</p></li>\n</ol>\n\n<p>What's difficult is separating the symbolic world from the concrete/executable world. <code>diff</code> exists in the symbolic world and so doesn't really ever have a value. Everything that happens after compiling <code>f</code> via <code>theano.function(...)</code> is in the concrete/executable world. With some caveats, nothing in the symbolic world is affected by what happens in the concrete/executable world.</p>\n\n<p>It's also important to note that <code>diff</code> cannot \"have a value\" (in any sense) until the symbolic variables on which it depends, <code>a</code> and <code>b</code> have been given a value. 
If I posed a mathematics problem, telling you that <code>y = x * z</code> and asked you what value <code>y</code> has, you couldn't answer unless I also told you what value <code>x</code> and <code>z</code> have. The same is true here, the symbolic world operates like symbolic mathematics: it is an abstract definition of some computation that doesn't and can't have a value until compiled and input values are provided.</p>\n\n<p>When developing Theano code it can be helpful to separate the symbolic code from the concrete/executable code into two separate areas. Without doing this you will often find, for example, wanting to call a symbolic variable <code>x</code> but then also wanting to give a concrete value that is to be provided for <code>x</code> the same name.</p>\n" }, { "AnswerId": "32001571", "CreationDate": "2015-08-14T02:54:16.767", "ParentId": null, "OwnerUserId": "2957601", "Title": null, "Body": "<p>In the console, just type the var name</p>\n\n<pre><code>&gt;&gt;&gt; diff\n</code></pre>\n\n<p>In Python 2, use <code>print diff</code>, and in Python 3, use <code>print(diff)</code></p>\n" } ]
32,013,927
2
<indexing><torch><one-hot-encoding>
2015-08-14T15:46:02.190
32,015,143
1,120,370
In Torch how do I create a 1-hot tensor from a list of integer labels?
<p>I have a byte tensor of integer class labels, e.g. from the MNIST data set.</p> <pre></pre> <p>How do use it to create a tensor of 1-hot vectors?</p> <pre></pre> <p>I know I could do this with a loop, but I'm wondering if there's any clever Torch indexing that will get it for me in a single line.</p>
[ { "AnswerId": "32015143", "CreationDate": "2015-08-14T16:55:55.833", "ParentId": null, "OwnerUserId": "117844", "Title": null, "Body": "<pre><code>indices = torch.LongTensor{1,7,5}:view(-1,1)\none_hot = torch.zeros(3, 10)\none_hot:scatter(2, indices, 1)\n</code></pre>\n\n<p>You can find the documentation for <code>scatter</code> in the <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor-scatterdim-index-srcval\" rel=\"nofollow noreferrer\">torch/torch7 github readme</a> (in the master branch).</p>\n" }, { "AnswerId": "40506828", "CreationDate": "2016-11-09T12:16:33.423", "ParentId": null, "OwnerUserId": "4412981", "Title": null, "Body": "<p>An alternate method is to shuffle rows from an identity matrix:</p>\n\n<pre><code>indicies = torch.LongTensor{1,7,5}\none_hot = torch.eye(10):index(1, indicies)\n</code></pre>\n\n<p>This was not my idea, I found it in <a href=\"https://github.com/karpathy/char-rnn/blob/master/util/OneHot.lua\" rel=\"nofollow noreferrer\">karpathy/char-rnn</a>.</p>\n" } ]
32,014,110
0
<theano>
2015-08-14T15:55:58.447
null
2,789,788
mingw seems to crash python (Anaconda on windows)
<p>I'm trying to install theano on a windows computer. When I do python crashes. I think this is a mingw issue because when I uninstall mingw libpython and try to import theano, I get a theano warning, but python does not crash.</p> <p>I have tried Anaconda 2.10 and 2.30. I follow the installation up with , and then install the latest theano from github. still crashes. Any ideas?</p> <p>Here are the details of the "python.exe has stopped working" message:</p> <pre></pre>
[]
32,018,923
1
<python><windows-7><64-bit><python-import><theano>
2015-08-14T21:29:11.670
54,008,530
5,228,890
Theano import Error-windows 7
<p>I have a problem in importing theano in python. When I import Theano in the python 27 32 bit, in Windows 7 64 bit, I get the following errors and warning: I also should add that currently I have installed GCC 4.8.1. What I have to do in order to fix it. </p> <p>Thanks, Afshin</p> <pre></pre>
[ { "AnswerId": "54008530", "CreationDate": "2019-01-02T14:58:46.637", "ParentId": null, "OwnerUserId": "4152633", "Title": null, "Body": "<p>You should check your windows path and make sure they are all right.</p>\n" } ]
32,020,815
1
<windows><import><theano>
2015-08-15T02:07:54.157
32,035,083
5,228,890
Theano Import Error-Windows 7-64b
<p>I just installed the Theano. I have a problem in importing theano in python. When I import Theano in the python 27 32 bit, in Windows 7 64 bit, I get the following errors and warning: I also should add that currently I have installed GCC 4.8.1. What I have to do in order to fix it.</p> <p>Thanks, Afshin</p> <pre></pre> <p>...... (I just removed some codes, because Stachoverflow does not allow more than 30000 character...... ......</p> <pre></pre>
[ { "AnswerId": "32035083", "CreationDate": "2015-08-16T12:28:52.350", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>This was also discussed on the <a href=\"https://groups.google.com/forum/#!topic/theano-users/jSG_xK2IPRM\" rel=\"nofollow\">Theano mailing list</a>. For the benefit of StackOverflow users, the answer was to install and use <a href=\"http://www.mingw.org/\" rel=\"nofollow\">MinGW</a> in preference to Cygwin.</p>\n\n<p>If both are installed, MinGW should be before Cygwin in the <code>PATH</code>.</p>\n" } ]
32,021,975
1
<python><caffe>
2015-08-15T05:48:01.197
null
1,263,702
Python Caffe cpu & gpu mode simultaneously
<p>Is it possible to run Caffe in both CPU and GPU mode? I have several Caffe models, but my GPU resources are limited, so that I can't put all models into GPU memory. I want to use e.g. 3 models with GPU mode and 2 models with CPU mode, but and commands just switch the mode for the whole library.</p>
[ { "AnswerId": "35031858", "CreationDate": "2016-01-27T08:10:24.187", "ParentId": null, "OwnerUserId": "3130081", "Title": null, "Body": "<p>In my opinion, you can write multiple python scripts, one for each task.\nIn each script you can choose whether to use the CPU or GPU (and the GPU device).\nThen you can run these multiple scripts at the same time.\nBut running multiple tasks on one GPU card will slow down the speed severely in my experience. Good luck!</p>\n" } ]
32,034,231
1
<python><theano><lasagne>
2015-08-16T10:50:42.140
32,035,049
127,400
Lasagne/Theano wrong number of dimensions
<p>Headed into Lasagne and Theano with a modified mnist.py (the primary example of Lasagne) to train a very simple XOR.</p> <pre></pre> <p>Defined a training set at (1), modified the input to the new dimension at (2) and get an exception at (3):</p> <pre></pre> <p>And I have no clue what I did wrong. When I print the dimension (or the output of the program until the exception) I get this</p> <pre></pre> <p>Which seem to be perfect. What I'm doing wrong and how must the array be formed to work?</p>
[ { "AnswerId": "32035049", "CreationDate": "2015-08-16T12:25:21.423", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The problem is with the second input, <code>targets</code>. Note that the error message indicated this by saying \"...at index 1(0-based)...\", i.e. the second parameter.</p>\n\n<p><code>target_var</code> is an <code>ivector</code> but you're providing a 4-dimensional tensor for <code>targets</code>. The solution is to alter your <code>y_train</code> dataset so that it is 1-dimensional:</p>\n\n<pre><code>y_train = [0, 1, 1, 0]\n</code></pre>\n\n<p>This will cause another error because you currently assert that the first dimension of the inputs and targets should match, but changing</p>\n\n<pre><code>assert len(inputs) == len(targets)\n</code></pre>\n\n<p>to</p>\n\n<pre><code>assert inputs.shape[2] == len(targets)\n</code></pre>\n\n<p>will solve the second problem and allow the script to run successfully.</p>\n" } ]
32,036,386
1
<deep-learning><caffe><conv-neural-network>
2015-08-16T14:51:21.567
32,055,436
2,467,772
Error in testing Caffe's Alexnet caffe model
<p>I trained caffe's Alexnet model for testing with more efficient model. Since my training is for pedestrians my image size is 64 x 80 images. I made changes to prototxt files to match to my trained image size. According to this <a href="http://cs231n.github.io/convolutional-networks/" rel="nofollow">tutorial</a>, it will be better to set the convolution filter size to match the input image size. So my filter sizes have slight changes from the original Alexnet's provided prototxt files (I trained and tested with Alexnet's original prototxt files and get the same error at the same line mentioned below).</p> <p>According to my calculation, image sizes after passing each layer will be</p> <p>80x64x3 -> Conv1 -> 38x30x96<br> 38x30x96 -> Pools -> 18x14x96<br> 18x14x96 -> Conv2 -> 19x15x256<br> 19x15x256 -> Pool2 -> 9x7x256<br> 9x7x256 -> Conv3 -> 9x7x384<br> 9x7x384 -> Conv4 -> 9x7x384<br> 9x7x384 -> Conv5 -> 9x7x256<br> 9x7x256 -> Pool5 -> 4x3x256</p> <p>The error is at fc6 layer and line number 714 of . I use file to test the model.</p> <pre></pre> <p>The error is </p> <pre></pre> <p>I don't understand why it is like that.</p> <p>My two prototxt files are shown below.</p> <pre></pre> <p>This is the testing file for the model.</p> <pre></pre> <p>What is wrong with this error?</p>
[ { "AnswerId": "32055436", "CreationDate": "2015-08-17T16:33:56.437", "ParentId": null, "OwnerUserId": "2467772", "Title": null, "Body": "<p>Those who have the same problem as i faced, please look at the prototxt files shown below. There are some modifications made compared to the original prototxt files provided in the downloaded folders. I used 80x64 image sizes for the input in training and testing.</p>\n\n<pre><code>Train_val.prototxt\nname: \"AlexNet\"\nlayers {\n name: \"data\"\n type: DATA\n top: \"data\"\n top: \"label\"\n data_param {\n source: \"../../examples/Alexnet_2/Alexnet_train_leveldb\"\n batch_size: 100\n }\n transform_param {\n mean_file: \"../../examples/Alexnet_2/mean.binaryproto\"\n\n }\n include: { phase: TRAIN }\n}\nlayers {\n name: \"data\"\n type: DATA\n top: \"data\"\n top: \"label\"\n data_param {\n source: \"../../examples/Alexnet_2/Alexnet_test_leveldb\"\n batch_size: 100\n }\n transform_param {\n mean_file: \"../../examples/Alexnet_2/mean.binaryproto\"\n }\n include: { phase: TEST }\n}\nlayers {\n name: \"conv1\"\n type: CONVOLUTION\n bottom: \"data\"\n top: \"conv1\"\n blobs_lr: 1\n blobs_lr: 2\n weight_decay: 1\n weight_decay: 0\n convolution_param {\n num_output: 96\n kernel_size: 11\n stride: 2\n weight_filler {\n type: \"gaussian\"\n std: 0.01\n }\n bias_filler {\n type: \"constant\"\n value: 0\n }\n }\n}\nlayers {\n name: \"relu1\"\n type: RELU\n bottom: \"conv1\"\n top: \"conv1\"\n}\nlayers {\n name: \"pool1\"\n type: POOLING\n bottom: \"conv1\"\n top: \"pool1\"\n pooling_param {\n pool: MAX\n kernel_size: 3\n stride: 2\n }\n}\nlayers {\n name: \"norm1\"\n type: LRN\n bottom: \"pool1\"\n top: \"norm1\"\n lrn_param {\n local_size: 5\n alpha: 0.0001\n beta: 0.75\n }\n}\nlayers {\n name: \"conv2\"\n type: CONVOLUTION\n bottom: \"norm1\"\n top: \"conv2\"\n blobs_lr: 1\n blobs_lr: 2\n weight_decay: 1\n weight_decay: 0\n convolution_param {\n num_output: 256\n pad: 2\n kernel_size: 5\n group: 2\n weight_filler {\n type: 
\"gaussian\"\n std: 0.01\n }\n bias_filler {\n type: \"constant\"\n value: 1\n }\n }\n}\nlayers {\n name: \"relu2\"\n type: RELU\n bottom: \"conv2\"\n top: \"conv2\"\n}\nlayers {\n name: \"pool2\"\n type: POOLING\n bottom: \"conv2\"\n top: \"pool2\"\n pooling_param {\n pool: MAX\n kernel_size: 3\n stride: 2\n }\n}\nlayers {\n name: \"norm2\"\n type: LRN\n bottom: \"pool2\"\n top: \"norm2\"\n lrn_param {\n local_size: 5\n alpha: 0.0001\n beta: 0.75\n }\n}\nlayers {\n name: \"conv3\"\n type: CONVOLUTION\n bottom: \"norm2\"\n top: \"conv3\"\n blobs_lr: 1\n blobs_lr: 2\n weight_decay: 1\n weight_decay: 0\n convolution_param {\n num_output: 384\n pad: 1\n kernel_size: 3\n weight_filler {\n type: \"gaussian\"\n std: 0.01\n }\n bias_filler {\n type: \"constant\"\n value: 0\n }\n }\n}\nlayers {\n name: \"relu3\"\n type: RELU\n bottom: \"conv3\"\n top: \"conv3\"\n}\nlayers {\n name: \"conv4\"\n type: CONVOLUTION\n bottom: \"conv3\"\n top: \"conv4\"\n blobs_lr: 1\n blobs_lr: 2\n weight_decay: 1\n weight_decay: 0\n convolution_param {\n num_output: 384\n pad: 1\n kernel_size: 3\n group: 2\n weight_filler {\n type: \"gaussian\"\n std: 0.01\n }\n bias_filler {\n type: \"constant\"\n value: 1\n }\n }\n}\nlayers {\n name: \"relu4\"\n type: RELU\n bottom: \"conv4\"\n top: \"conv4\"\n}\nlayers {\n name: \"conv5\"\n type: CONVOLUTION\n bottom: \"conv4\"\n top: \"conv5\"\n blobs_lr: 1\n blobs_lr: 2\n weight_decay: 1\n weight_decay: 0\n convolution_param {\n num_output: 256\n pad: 1\n kernel_size: 3\n group: 2\n weight_filler {\n type: \"gaussian\"\n std: 0.01\n }\n bias_filler {\n type: \"constant\"\n value: 1\n }\n }\n}\nlayers {\n name: \"relu5\"\n type: RELU\n bottom: \"conv5\"\n top: \"conv5\"\n}\nlayers {\n name: \"pool5\"\n type: POOLING\n bottom: \"conv5\"\n top: \"pool5\"\n pooling_param {\n pool: MAX\n kernel_size: 3\n stride: 2\n }\n}\nlayers {\n name: \"fc6\"\n type: INNER_PRODUCT\n bottom: \"pool5\"\n top: \"fc6\"\n blobs_lr: 1\n blobs_lr: 2\n weight_decay: 1\n 
weight_decay: 0\n inner_product_param {\n num_output: 4096\n weight_filler {\n type: \"gaussian\"\n std: 0.005\n }\n bias_filler {\n type: \"constant\"\n value: 1\n }\n }\n}\nlayers {\n name: \"relu6\"\n type: RELU\n bottom: \"fc6\"\n top: \"fc6\"\n}\nlayers {\n name: \"drop6\"\n type: DROPOUT\n bottom: \"fc6\"\n top: \"fc6\"\n dropout_param {\n dropout_ratio: 0.5\n }\n}\nlayers {\n name: \"fc7\"\n type: INNER_PRODUCT\n bottom: \"fc6\"\n top: \"fc7\"\n blobs_lr: 1\n blobs_lr: 2\n weight_decay: 1\n weight_decay: 0\n inner_product_param {\n num_output: 4096\n weight_filler {\n type: \"gaussian\"\n std: 0.005\n }\n bias_filler {\n type: \"constant\"\n value: 1\n }\n }\n}\nlayers {\n name: \"relu7\"\n type: RELU\n bottom: \"fc7\"\n top: \"fc7\"\n}\nlayers {\n name: \"drop7\"\n type: DROPOUT\n bottom: \"fc7\"\n top: \"fc7\"\n dropout_param {\n dropout_ratio: 0.5\n }\n}\nlayers {\n name: \"fc8\"\n type: INNER_PRODUCT\n bottom: \"fc7\"\n top: \"fc8\"\n blobs_lr: 1\n blobs_lr: 2\n weight_decay: 1\n weight_decay: 0\n inner_product_param {\n num_output: 2\n weight_filler {\n type: \"gaussian\"\n std: 0.01\n }\n bias_filler {\n type: \"constant\"\n value: 0\n }\n }\n}\nlayers {\n name: \"accuracy\"\n type: ACCURACY\n bottom: \"fc8\"\n bottom: \"label\"\n top: \"accuracy\"\n include: { phase: TEST }\n}\nlayers {\n name: \"loss\"\n type: SOFTMAX_LOSS\n bottom: \"fc8\"\n bottom: \"label\"\n top: \"loss\"\n}\n\ntest.prototxt\nname: \"CaffeNet\"\nlayers \n{\n name: \"data\"\n type: MEMORY_DATA\n top: \"data\"\n top: \"label\"\n memory_data_param \n {\n batch_size: 1\n channels: 3\n height: 80\n width: 64\n }\n transform_param \n {\n crop_size: 64\n mirror: false\n mean_file: \"../../examples/Alexnet_2/mean.binaryproto\"\n }\n}\nlayers {\n name: \"conv1\"\n type: CONVOLUTION\n bottom: \"data\"\n top: \"conv1\"\n convolution_param {\n num_output: 96\n kernel_size: 11\n stride: 2\n }\n}\nlayers {\n name: \"relu1\"\n type: RELU\n bottom: \"conv1\"\n top: \"conv1\"\n}\nlayers {\n name: 
\"pool1\"\n type: POOLING\n bottom: \"conv1\"\n top: \"pool1\"\n pooling_param {\n pool: MAX\n kernel_size: 3\n stride: 2\n }\n}\nlayers {\n name: \"norm1\"\n type: LRN\n bottom: \"pool1\"\n top: \"norm1\"\n lrn_param {\n local_size: 5\n alpha: 0.0001\n beta: 0.75\n }\n}\nlayers {\n name: \"conv2\"\n type: CONVOLUTION\n bottom: \"norm1\"\n top: \"conv2\"\n convolution_param {\n num_output: 256\n pad: 2\n kernel_size: 5\n group: 2\n }\n}\nlayers {\n name: \"relu2\"\n type: RELU\n bottom: \"conv2\"\n top: \"conv2\"\n}\nlayers {\n name: \"pool2\"\n type: POOLING\n bottom: \"conv2\"\n top: \"pool2\"\n pooling_param {\n pool: MAX\n kernel_size: 3\n stride: 2\n }\n}\nlayers {\n name: \"norm2\"\n type: LRN\n bottom: \"pool2\"\n top: \"norm2\"\n lrn_param {\n local_size: 5\n alpha: 0.0001\n beta: 0.75\n }\n}\nlayers {\n name: \"conv3\"\n type: CONVOLUTION\n bottom: \"norm2\"\n top: \"conv3\"\n convolution_param {\n num_output: 384\n pad: 1\n kernel_size: 3\n }\n}\nlayers {\n name: \"relu3\"\n type: RELU\n bottom: \"conv3\"\n top: \"conv3\"\n}\nlayers {\n name: \"conv4\"\n type: CONVOLUTION\n bottom: \"conv3\"\n top: \"conv4\"\n convolution_param {\n num_output: 384\n pad: 1\n kernel_size: 3\n group: 2\n }\n}\nlayers {\n name: \"relu4\"\n type: RELU\n bottom: \"conv4\"\n top: \"conv4\"\n}\nlayers {\n name: \"conv5\"\n type: CONVOLUTION\n bottom: \"conv4\"\n top: \"conv5\"\n convolution_param {\n num_output: 256\n pad: 1\n kernel_size: 3\n group: 2\n }\n}\nlayers {\n name: \"relu5\"\n type: RELU\n bottom: \"conv5\"\n top: \"conv5\"\n}\nlayers {\n name: \"pool5\"\n type: POOLING\n bottom: \"conv5\"\n top: \"pool5\"\n pooling_param {\n pool: MAX\n kernel_size: 3\n stride: 2\n }\n}\nlayers {\n name: \"fc6\"\n type: INNER_PRODUCT\n bottom: \"pool5\"\n top: \"fc6\"\n inner_product_param {\n num_output: 4096\n }\n}\nlayers {\n name: \"relu6\"\n type: RELU\n bottom: \"fc6\"\n top: \"fc6\"\n}\nlayers {\n name: \"drop6\"\n type: DROPOUT\n bottom: \"fc6\"\n top: \"fc6\"\n 
dropout_param {\n dropout_ratio: 0.5\n }\n}\nlayers {\n name: \"fc7\"\n type: INNER_PRODUCT\n bottom: \"fc6\"\n top: \"fc7\"\n inner_product_param {\n num_output: 4096\n }\n}\nlayers {\n name: \"relu7\"\n type: RELU\n bottom: \"fc7\"\n top: \"fc7\"\n}\nlayers {\n name: \"drop7\"\n type: DROPOUT\n bottom: \"fc7\"\n top: \"fc7\"\n dropout_param {\n dropout_ratio: 0.5\n }\n}\nlayers {\n name: \"fc8\"\n type: INNER_PRODUCT\n bottom: \"fc7\"\n top: \"fc8\"\n inner_product_param {\n num_output: 2\n }\n}\nlayers {\n name: \"prob\"\n type: SOFTMAX\n bottom: \"fc8\"\n top: \"prob\"\n}\nlayers {\n name: \"output\"\n type: ARGMAX\n bottom: \"prob\"\n top: \"output\"\n}\n</code></pre>\n" } ]
32,038,757
1
<python><windows><theano><canopy>
2015-08-16T18:54:53.610
null
3,656,049
Installing Theano with Canopy EPD on windows 7, 64 bit
<p>I have successfully installed theano on Canopy EPD, windows 7, 64 bit. While importing theano (for testing at first time), I am getting this error. Can anybody help. Thanks.</p> <p>It is similar to this question: <a href="https://stackoverflow.com/questions/10270871/installing-theano-on-epd-windows-x64">Installing Theano on EPD (Windows x64)</a></p> <pre></pre>
[ { "AnswerId": "37845271", "CreationDate": "2016-06-15T20:34:50.737", "ParentId": null, "OwnerUserId": "2280645", "Title": null, "Body": "<p>Since this question wasnt properly closed by the author and it's also relevant to most EPD users, including me when i first started with EPD + theano, i will answer to it.</p>\n\n<p>First, remove any other python environment if possible.\nBe sure there is no other conflicting environment path of python, just in case.</p>\n\n<p><strong>1.Install EPD CANOPY</strong></p>\n\n<p><strong>2.Install the MinGW package that is on package installer available on Canopy application</strong></p>\n\n<p>That should be easy to do, you just open the package manager from inside the Canopy application.</p>\n\n<p><strong>3.Install Theano from inside EPD</strong></p>\n\n<p><strong>Before</strong> you go spamming </p>\n\n<p>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git</p>\n\n<p>in every prompt of command you see, <strong>DONT DO IT</strong>.</p>\n\n<p>--\n<strong>Instead, Open Canopy.</strong></p>\n\n<p>In Canopy screen, go to <strong>\"Tools\"</strong>, and then <strong>open \"Canopy Command Prompt\"</strong></p>\n\n<p>A screen exactly like the CMD from windows will open.</p>\n\n<p>On that screen, <strong>execute</strong> :</p>\n\n<p><strong>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git</strong></p>\n\n<p>Should work fine now.</p>\n\n<p>--</p>\n\n<p>However, dont forget you will need , obviously the Windows SDK , depending on the windows version of course.\nYou can try installing the last Visual Studio with comes with everything you need.</p>\n\n<p>If you need more info on this, check this related Stackoverflow topic :</p>\n\n<p><a href=\"https://stackoverflow.com/questions/35630020/installing-theano/37780769#37780769\">Installing theano</a></p>\n" } ]
32,039,549
1
<protocol-buffers><caffe>
2015-08-16T20:22:16.110
null
5,074,889
protobuf common.h "No such file"
<p>I am trying to install Caffe, and I am running into this frustrating error. When I run I get the following:</p> <pre></pre> <p>I am using the Google protocol buffer 2.6.1 (<a href="https://developers.google.com/protocol-buffers/docs/downloads" rel="noreferrer">https://developers.google.com/protocol-buffers/docs/downloads</a>), and I have indeed added the directory to the PATH. The common.h file is definitely there in the directory (I see it with my own eyes), but somehow it is unable to detect it. I have no clue what to do, and all the solutions from <a href="https://github.com/BVLC/caffe/issues/19" rel="noreferrer">this issue</a> don't seem to work for me.</p> <p>Any insight would be appreciated. I suspect I am neglecting a step somewhere, as I am rather new to Linux.</p> <p>Thank you very much.</p>
[ { "AnswerId": "32042334", "CreationDate": "2015-08-17T03:24:42.990", "ParentId": null, "OwnerUserId": "2686899", "Title": null, "Body": "<p><code>PATH</code> tells your shell where to search for commands. It does not tell your compiler where to search for headers. To tell your compiler to find headers in a particular directory, you need to use the <code>-I</code> flag. For example:</p>\n\n<pre><code>g++ -I/path/to/protobuf/include -c my-source.cc\n</code></pre>\n\n<p>You will need to convince your build system to add this flag to the compiler command-line. All reasonable build systems have some way to do this, but the details vary. For autoconf you can specify when you run configure:</p>\n\n<pre><code>./configure CXXFLAGS=-I/path/to/protobuf/include\n</code></pre>\n\n<p>For cmake I think you can do something like this (not tested):</p>\n\n<pre><code>cmake -DCMAKE_CXX_FLAGS=-I/path/to/protobuf/include\n</code></pre>\n\n<p>Alternatively, you would probably not have this problem if you installed protobuf to the standard location -- either <code>/usr</code> or <code>/usr/local</code> (hence placing the headers in <code>/usr/include/google/protobuf</code> or <code>/usr/local/include/google/protobuf</code>).</p>\n\n<p>Also note that almost all Linux distributions have a Protobuf package, and you should probably use that rather than installing Protobuf from source code. You will need the <code>-dev</code> or <code>-devel</code> package in order to get headers. On Debian/Ubuntu:</p>\n\n<pre><code>sudo apt-get install libprotobuf-dev protobuf-compiler\n</code></pre>\n" } ]
32,045,279
1
<macos><gpu><theano>
2015-08-17T07:55:09.117
32,046,938
2,394,303
How can I enable my MacBook Pro GPU optimization for theano?
<ul> <li>MacBook Pro (Retina, 13-inch, Mid 2014)</li> <li>OS X Yosemite 10.10.3</li> <li>Intel Iris 1536MB</li> </ul> <p><a href="http://deeplearning.net/software/theano/install.html#using-the-gpu" rel="nofollow">Installing Theano</a> shows I need CUDA, but I do not have NVIDIA, that means I can never enable GPU optimization? </p>
[ { "AnswerId": "32046938", "CreationDate": "2015-08-17T09:28:44.983", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>You are correct: an NVIDIA graphics processor is currently required to enable Theano's GPU operation. However, that does not prevent you from running Theano -- it works just fine on the CPU.</p>\n\n<p>Theano's current GPU implementation is based on CUDA and thus requires an NVIDIA GPU. A new implementation based on OpenCL is in development which should enable operation on non-NVIDIA GPUs, but this implementation is incomplete and not yet practically useful.</p>\n\n<p>The CPU implementation will work fine. In many ways it is easier to use than the GPU implementation and, if you use OpenMP, can still perform reasonably well by utilizing many CPU cores.</p>\n" } ]
32,068,712
1
<matplotlib><ipython-notebook><theano><lasagne>
2015-08-18T09:34:34.703
32,076,978
2,902,280
Live plot losses while training neural net in Theano/lasagne
<p>I'm training a neural net in Theano and lasagne, running the code in an iPython notebook. I like having the train and valid loss displayed at each iteration, like this:</p> <pre></pre> <p>but I would also like to see the live/dynamic plot of the two losses. Is there a built-in way to do so?</p> <p>I have tried creating a custom class and adding it to my net's , but either I get a new plot at each iteration (I'd like a single one, updating), either I have to erase the previous output at each iteration, and thus cannot see the text output (which I want to keep).</p>
[ { "AnswerId": "32076978", "CreationDate": "2015-08-18T15:47:17.567", "ParentId": null, "OwnerUserId": "2902280", "Title": null, "Body": "<p>I finally managed to update the loss plot as I wanted, using the following class:</p>\n\n<pre><code>from IPython import display\nfrom matplotlib import pyplot as plt\nimport numpy as np\nfrom lasagne import layers\nfrom lasagne.updates import nesterov_momentum\nfrom nolearn.lasagne import NeuralNet\nfrom lasagne import nonlinearities\n\nclass PlotLosses(object):\n def __init__(self, figsize=(8,6)):\n plt.plot([], []) \n\n def __call__(self, nn, train_history):\n train_loss = np.array([i[\"train_loss\"] for i in nn.train_history_])\n valid_loss = np.array([i[\"valid_loss\"] for i in nn.train_history_])\n\n plt.gca().cla()\n plt.plot(train_loss, label=\"train\") \n plt.plot(valid_loss, label=\"test\")\n\n plt.legend()\n plt.draw()\n</code></pre>\n\n<p>and the code sample to reproduce:</p>\n\n<pre><code>net_SO = NeuralNet(\n layers=[(layers.InputLayer, {\"name\": 'input', 'shape': (None, 1, 28, 28)}),\n (layers.Conv2DLayer, {\"name\": 'conv1', 'filter_size': (3,3,), 'num_filters': 5}),\n (layers.DropoutLayer, {'name': 'dropout1', 'p': 0.2}),\n (layers.DenseLayer, {\"name\": 'hidden1', 'num_units': 50}),\n (layers.DropoutLayer, {'name': 'dropout2', 'p': 0.2}),\n (layers.DenseLayer, {\"name\": 'output', 'nonlinearity': nonlinearities.softmax, 'num_units': 10})],\n # optimization method:\n update=nesterov_momentum,\n update_learning_rate=10**(-2),\n update_momentum=0.9,\n\n regression=False, \n max_epochs=200, \n verbose=1,\n\n on_epoch_finished=[PlotLosses(figsize=(8,6))], #this is the important line\n )\n\nnet_SO.fit(X, y) #X and y from the MNIST dataset\n</code></pre>\n" } ]
32,079,552
1
<protocol-buffers><caffe>
2015-08-18T18:02:27.083
null
4,996,964
Error during "make all" in Caffe
<p>I had an error after running the command while compiling Caffe. Here is what I got (it is a snippet):</p> <pre></pre> <p>How can I fix this issue?</p> <p>EDIT: I'm installing it in Ubuntu 12.04.</p>
[ { "AnswerId": "38563673", "CreationDate": "2016-07-25T09:07:08.233", "ParentId": null, "OwnerUserId": "4265775", "Title": null, "Body": "<p>comment out line 113 in</p>\n\n<pre><code>/usr/local/cuda/include/host_config.h.\n</code></pre>\n\n<p>it is because gcc 5 error</p>\n" } ]
32,080,017
2
<python><numpy><attributes><caffe><pycaffe>
2015-08-18T18:25:44.323
32,080,111
1,245,262
Why does assigning an ndarray to an ndarray in PyCaffe raise an Attribute Error?
<p>When reading through a Caffe tutorial (<a href="http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb" rel="nofollow">http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/00-classification.ipynb</a>), I came across the following statement:</p> <pre></pre> <p>It basically serves to assign a single image to . </p> <p> is a 4D ndarray and returns a 3D ndarray, so the ellipsis serve to copy the 3D array over the 0th axis. This made me think I should be able to rewrite the code to avoid the ellipsis as follows:</p> <pre></pre> <p>However, when I do, I get </p> <pre></pre> <p>Even though,</p> <pre></pre> <p>works fine. Does this make sense to anyone?</p> <p>I've verified the shapes and type of my variables as follows:</p> <pre></pre> <p>Why does cause problems?</p>
[ { "AnswerId": "37271679", "CreationDate": "2016-05-17T09:04:34.873", "ParentId": null, "OwnerUserId": "6064933", "Title": null, "Body": "<p>The problem with the first method(<code>net.blobs['data'].data = z4</code>) is that 'data' is an attribute of net.blobs['data'](which is Caffe <strong>Blob</strong> object) which can not be assigned. If you assign numpy array to the data attribute, you mean \"instead of using the memory allocated for data, use the memory of the numpy array\", which is <strong>not acceptable</strong>.</p>\n\n<p>But if you use <code>net.blobs['data'].data[...] = z4</code>, you mean \"copy the data from the numpy array to the memory allocated for the data attribute\", which is <strong>acceptable</strong>.</p>\n\n<hr>\n\n<p>For more information, you can read <a href=\"https://groups.google.com/forum/#!topic/caffe-users/1TqHBsfj1Zc\" rel=\"nofollow\">a similar question</a> in the Caffe Users Group.</p>\n" }, { "AnswerId": "32080111", "CreationDate": "2015-08-18T18:31:11.597", "ParentId": null, "OwnerUserId": "1427416", "Title": null, "Body": "<p>Doing <code>obj.attr = blah</code> is setting an attribute on the object <code>obj</code>, so <code>obj</code> controls this. Doing <code>obj.attr[...] = blah</code> is setting an <em>item</em> (e.g., the \"contents\" of some array-like object) on the object referred to by <code>obj.attr</code>, so the object <code>obj.attr</code> controls this.</p>\n\n<p>In your example, <code>net.blobs['data']</code> is some kind of object that won't allow its <code>data</code> attribute to be set, so you can't do <code>net.blobs['data'].data = blah</code>. But <code>net.blobs['data'].data</code> is an array that <em>does</em> allow you to change its contents, so you can do <code>net.blobs['data'].data[...] = stuff</code>. You're operating on two different objects with those two syntaxes (<code>net.blobs['data']</code> in one case and <code>net.blobs['data'].data</code> in the other).</p>\n" } ]
32,082,506
0
<neural-network><reinforcement-learning><torch><lstm><temporal-difference>
2015-08-18T21:03:48.150
null
5,240,885
Neural Network Reinforcement Learning Requiring Next-State Propagation For Backpropagation
<p>I am attempting to construct a neural network incorporating convolution and LSTM (using the Torch library) to be trained by Q-learning or Advantage-learning, both of which require propagating state T+1 through the network before updating the weights for state T.</p> <p>Having to do an extra propagation would cut performance and that's bad, but not <em>too</em> bad; However, the problem is that there is all kinds of state bound up in this. First of all, the Torch implementation of backpropagation has some efficiency shortcuts that rely on the back propagation happening immediately after the forward propagation, which an additional propagation would mess up. I could possibly get around this by having a secondary cloned network sharing the weight values, but we come to the second problem.</p> <p>Every forward propagation involving LSTMs is stateful. How can I update the weights at T+1 when propagating network(T+1) may have changed the contents of the LSTMs? I have tried to look at the discussion of TD weight updates as done in TD-Gammon, but it's obtuse to me and that's for feedforward anyway, not recurrent.</p> <p>How can I update the weights of a network at T without having to advance the network to T+1, or how do I advance the network to T+1 and then go back and adjust the weights as if it were still T?</p>
[]
32,082,696
1
<python><gpu><pickle><theano>
2015-08-18T21:16:40.267
32,084,018
4,899,439
Theano - NameError: name 'register_gpu_opt' is not defined
<p>I have trained a Theano model on a GPU and now want to set it up to run on a server (with no GPU).</p> <p>First I faced the problem that my model could not be unpickled, due to the missing type. Then, following a recommendation from <a href="https://stackoverflow.com/questions/25237039/converting-a-theano-model-built-on-gpu-to-cpu">this post</a>, I set the option to .</p> <p>But then I got this error:</p> <pre></pre> <p>The file was produced the following way:</p> <pre></pre>
[ { "AnswerId": "32084018", "CreationDate": "2015-08-18T23:05:57.640", "ParentId": null, "OwnerUserId": "5226958", "Title": null, "Body": "<p>Yes that is a typical problem.\nWhy does it happen? Because theano compiles the model into c/cuda c codes, so an error will happen when the model finds codes from inconsistent compiler.</p>\n\n<p>How to handler is problem? I would choose to save all the parameters as numpy value. For example</p>\n\n<pre><code>values_to_pickle = [p.get_value() for p in model.all_parameters()]\n</code></pre>\n" } ]
32,086,106
2
<neural-network><torch>
2015-08-19T03:35:49.427
32,086,325
5,118,777
[torch]how to read weights in nn model
<p>I constructed the nn model using itorch notebook.</p> <pre></pre> <hr> <p>Input data to the model</p> <pre></pre> <p>Then, I print the model and got this.</p> <pre></pre> <p>how to read the weight in nn.linear ?</p> <p>Thanks in advance.</p>
[ { "AnswerId": "32086325", "CreationDate": "2015-08-19T04:00:16.477", "ParentId": null, "OwnerUserId": "5118777", "Title": null, "Body": "<p>Oh, it is similar to php</p>\n\n<pre><code>model.modules[2].weight\n</code></pre>\n" }, { "AnswerId": "41345372", "CreationDate": "2016-12-27T12:38:06.743", "ParentId": null, "OwnerUserId": "7341318", "Title": null, "Body": "<p>I find that <code>model.modules[1].weight</code> is similar to <code>model:get(1).weight</code>, but both can't get the parameters from the table layer like residual block. In this way the residual block as a layer.</p>\n\n<p>however, we can use <code>params, gradParams = model:parameters()</code> to get the parameters for each layer even in the table layer. </p>\n\n<p>It is worth noting that, in the second way, each layer of the network parameters are divided into two layers and arranged in layers</p>\n" } ]
32,087,747
0
<python><c++><machine-learning><neural-network><theano>
2015-08-19T06:11:28.027
null
1,377,127
Can I use Theano with C++?
<p>I've been using the Theano library in Python and had some success, although my main program where I'm extracting sections of images to classify is written in C++. Is there any way to call Theano via C++ or is it only built for Python?</p>
[]
32,096,840
1
<cuda><gpu><nvidia><caffe>
2015-08-19T13:25:42.123
null
2,778,860
Caffe using GPU with NVidia Quadro 2200
<p>I'm using the deep learning framework <a href="http://caffe.berkeleyvision.org/" rel="nofollow">Caffe</a> on an Ubuntu 14.04 machine. I compiled Caffe with the option, i.e. I disabled GPU and CUDA usage. I have an NVidia Quadro K2200 graphics card and CUDA version 5.5. </p> <p>I would like to know if it is possible to use Caffe with CUDA enabled on my GPU. On the NVidia page, it is written that the Quadro K2200 has a compute capability of 5.0. Does that mean I can use it with CUDA versions up to release 5.0? If it is possible to use Caffe with the GPU enabled on a Quadro K2200, how can I choose the appropriate CUDA version for that?</p>
[ { "AnswerId": "32097095", "CreationDate": "2015-08-19T13:35:26.207", "ParentId": null, "OwnerUserId": "2771245", "Title": null, "Body": "<p>CUDA version is not the same thing as Compute Capability. For one, CUDA is current (7.5 prerelease), while CC is only at 5.2. K2200 supports CC 5.0.</p>\n\n<p>The difference:</p>\n\n<p>CUDA version means the library/toolkit/SDK/etc version. You should always use the highest one available.</p>\n\n<p>Compute Capability is your GPU's capability to perform certain instructions, etc. Every CUDA function has a minimum CC requirement. When you write a CUDA program, it's CC requirement is the maximum of the requirements of all the features you used.</p>\n\n<hr>\n\n<p>That said, I've no idea what Caffe is, but a quick search shows they require CC of 2.0, so you should be good to go. CC 5.0 is pretty recent, so very few things won't work on it.</p>\n" } ]
32,101,249
2
<macos><theano><osx-yosemite>
2015-08-19T16:46:13.547
32,110,990
5,244,344
Use theano in a macbook pro without NVIDIA card
<p>I have:</p> <ul> <li>MacBook Pro (Retina, 13-inch, Mid 2014)</li> <li>OS X Yosemite</li> <li>Intel Iris 1536MB</li> </ul> <p>I heard that I cannot use Theano with the GPU, only with the CPU, but I want to know whether the programming will be the same, with Theano internally working with the CPU or, in the other case, with the GPU. Or whether, when programming, I have to program differently for each one.</p> <p>Thanks a lot</p>
[ { "AnswerId": "33896455", "CreationDate": "2015-11-24T14:39:13.543", "ParentId": null, "OwnerUserId": "5244344", "Title": null, "Body": "<p>Yes, effectively Theano understand if you have GPU or not, and decide to use CPU or GPU to create variables, the only difficult is that when you create a model and variables with Theano with or without GPU config of variables change, in other words if you create a model with GPU (or CPU) and save them in a *.pickle (e.g.) and you go to another pc without CPU (or GPU respectively) this model and their variables saved will not work.</p>\n" }, { "AnswerId": "32110990", "CreationDate": "2015-08-20T06:28:46.980", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>For the most part a Theano program that runs well on a CPU will also run well on a GPU, with no changes to the code required. There are, however, a few things to bear in mind.</p>\n\n<ul>\n<li><p>Not all operations have GPU versions so it's possible to create a computation that contains a component that cannot run on the GPU at all. When you run one of these computations on a GPU it will silently fall back to the CPU, so the program will run without failing and the result will be correct, but the computation is not running as efficiently as it might and will be slower because it has to copy data backwards and forwards between main and GPU memory.</p></li>\n<li><p>How you access your data can affect GPU performance quite a lot but has little impact on CPU performance. Main memory tends to be larger than GPU memory so it's often the case that your entire dataset can fit in main memory but not in GPU memory. 
There are techniques that can be used to avoid this problem but you need to bear in mind the GPU limitations in advance.</p></li>\n</ul>\n\n<p>If you stick to conventional neural network techniques, and follow the patterns used in the Theano sample/tutorial code, then it will probably run fine on the GPU since this is the primary use-case for Theano.</p>\n" } ]
32,111,311
1
<caffe><conv-neural-network>
2015-08-20T06:47:40.933
null
2,467,772
Parameter fine tuning for training Alexnet with smaller image size
<p>Alexnet is intended to use a 227x227x3 image size. If I would like to train with a smaller image size like 32x80x3, which parameters should be fine-tuned?</p> <p>I initially trained with a 64x80x3 image size, with all parameters the same as provided except the stride in the first Conv1 layer, which was changed to 2. I achieved very high testing accuracy, as high as 0.999. Then in real use as well, I get reasonably high accuracy in detection.</p> <p>Now I would prefer to use the smaller image size 32x80x3. I used the same parameters as trained with the 64x80x3 image size, but the accuracy is as low as 0.9671. I tried to fine-tune parameters, like changing the Conv1 layer's filter size to 5 and making the Gaussian weight filler's std 10 times and 100 times smaller. But none of them helped achieve the accuracy reached when training on 64x80x3 images.</p> <p>When training on smaller image sizes, which parameters should be fine-tuned to achieve higher accuracy? I used a dataset of 24000 images: 20000 for training and 4000 for testing.</p> <p>For both 32x80x3 and 64x80x3, I used the same images; only the image size was edited to be 32x80 or 64x80.</p>
[ { "AnswerId": "33314545", "CreationDate": "2015-10-24T04:26:21.580", "ParentId": null, "OwnerUserId": "5458952", "Title": null, "Body": "<p>Maybe you can train to resize the <code>32x80x3 images to 64x80x3</code> and then use the similar parameter settings.\nAlso, maybe you could find some thing useful here <a href=\"https://github.com/BVLC/caffe/tree/master/examples/cifar10\" rel=\"nofollow\">https://github.com/BVLC/caffe/tree/master/examples/cifar10</a>.\nThere are some solver and train_val files for fine tuning over <code>CIFAR-10</code>, which is a dataset consists of small images too.</p>\n" } ]
32,113,915
1
<theano><lasagne><nolearn>
2015-08-20T09:00:17.167
32,115,199
2,902,280
Aggregate predictions with data augmentation in lasagne
<p>I am working on the MNIST dataset and using data augmentation to train a neural network. I have a BatchIterator which randomly extracts a 24x24 subimage from each picture and uses it as input for the NN. </p> <p>As far as training is concerned, everything goes fine. But for prediction, I want to extract 5 sub-images from a given image and average the predictions, but I cannot get it to work:</p> <p>Here's my BatchIterator:</p> <pre></pre> <p>Fitting my net to the training data works, but when I do , I get an error because is, I believe, called with equal to .</p> <p>Here's the full call stack:</p> <pre></pre> <p>Any idea on how to fix it in the testing part of ?</p>
[ { "AnswerId": "32115199", "CreationDate": "2015-08-20T10:00:05.627", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>Looking at the <a href=\"https://github.com/dnouri/nolearn/blob/f8e8eb7837b0d620a66a4537266d2639efcf34f6/nolearn/lasagne/base.py#L63\" rel=\"nofollow\">code for <code>nolearn.lasagne.BatchIterator</code></a> and <a href=\"https://github.com/dnouri/nolearn/blob/f8e8eb7837b0d620a66a4537266d2639efcf34f6/nolearn/lasagne/base.py#L520\" rel=\"nofollow\">how it is used</a> by the <code>nolearn.lasagne.NeuralNet</code> class, it looks like <code>BatchIterator</code>s need to work when <code>y</code> is not provided, i.e. in prediction mode. Note the call at <a href=\"https://github.com/dnouri/nolearn/blob/f8e8eb7837b0d620a66a4537266d2639efcf34f6/nolearn/lasagne/base.py#L520\" rel=\"nofollow\">line 520</a> where <code>X</code> is provided but no value is given for <code>y</code> so it defaults to <code>None</code>.</p>\n\n<p>Your <code>CropIterator</code> currently assumes that <code>yb</code> is always a non-<code>None</code> value. I don't know if it makes sense to do anything useful when <code>yb</code> is not provided but I assume you could just transform <code>Xb</code> and return <code>None</code> for <code>y_new</code> if <code>yb</code> is <code>None</code>.</p>\n" } ]
32,118,273
1
<python><theano>
2015-08-20T12:27:29.540
null
650,654
Theano: Cannot allocate memory error for "shape.eval()" but not "np.asarray"
<p>When I try to print the shape of a shared variable using shape.eval(), I get a "Cannot allocate memory" error. But when I convert it to a numpy array (thereby moving it to the CPU), I can get not only the shape but also the array itself. See below where it is stopped in pdb; the data has been loaded into CPU/RAM and I am trying to print the shape of p1:</p> <pre></pre> <p>borrow is set to True for p1, but this should not matter since I am using the GPU, right?</p> <p>There is enough free space in memory:</p> <pre></pre> <p>This error is a RAM (as opposed to GPU memory) error, right? Why am I able to print the shape after getting the variable to the CPU (using np.asarray), but shape.eval() prints an error?</p> <p>What's a good way to address this issue (besides, maybe, getting more RAM)?</p>
[ { "AnswerId": "32163446", "CreationDate": "2015-08-23T05:20:37.060", "ParentId": null, "OwnerUserId": "650654", "Title": null, "Body": "<p>Found the issue. See <a href=\"https://stackoverflow.com/a/13329386/650654\">here</a>.</p>\n\n<p><code>eval()</code> invokes <code>subprocess()</code> (for compiling) which in turn invokes <code>fork()</code>.</p>\n\n<p>This process had a very large amount of virtual memory (<code>vsize</code>) allocated to it. Also, there was no swap configured on this host. Even though the actual memory needed to execute this <code>eval()</code> would have been tiny, I believe the <code>fork</code> failed in allocating virtual memory and hence the error.</p>\n\n<p>Adding swap space made the issue go away. The swap space never got used, but I believe was sufficient to satisfy the virtual memory allocation.</p>\n" } ]
32,130,304
1
<theano><deep-learning><mnist>
2015-08-20T23:55:54.547
32,134,434
5,249,666
How do I use a trained Theano artificial neural network on single examples?
<p>I have been following the tutorial on how to train an ANN to classify the MNIST numbers. I am now at the "Convolutional Neural Networks" chapter. I want to use the trained network on single examples (MNIST images) and get the predictions. Is there a way to do that?</p> <p>I have looked ahead in the tutorial and on google but can't find anything.</p> <p>Thanks a lot in advance for any kind of help!</p>
[ { "AnswerId": "32134434", "CreationDate": "2015-08-21T07:14:43.627", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The material in the Theano tutorial in the earlier chapters, before reaching the Convolutional Neural Networks (CNN) chapter, give a good overview of how Theano works and some of the components the CNN sample code uses. It might be reasonable to assume that students reaching this point have developed their understanding of Theano sufficiently to figure out how to modify the code to extract the model's predictions. Here's a few hints.</p>\n\n<p>The CNN's output layer, called <code>layer3</code>, is an instance of the <code>LogisticRegression</code> class, introduced in an earlier chapter.</p>\n\n<p>The <code>LogisticRegression</code> class has an attribute called <code>y_pred</code>. The comments next to the code which assigns that attribute's values says</p>\n\n<blockquote>\n <p>symbolic description of how to compute prediction as class whose\n probability is maximal</p>\n</blockquote>\n\n<p>Looking for places where <code>y_pred</code> is used in the logistic regression sample will highlight a function called <code>predict()</code>. This does for the logistic regression sample what is desired of the CNN example.</p>\n\n<p>If one follows the same approach, using <code>layer3.y_pred</code> as the output of a new Theano function, the model's predictions will become apparent.</p>\n" } ]