Column            dtype          min    max
QuestionId        int64          388k   59.1M
AnswerCount       int64          0      47
Tags              stringlengths  7      102
CreationDate      stringlengths  23     23
AcceptedAnswerId  float64        388k   59.1M
OwnerUserId       float64        184    12.5M
Title             stringlengths  15     150
Body              stringlengths  12     29.3k
answers           listlengths    0      47
31,030,032
1
<lua><deep-learning><torch><conv-neural-network>
2015-06-24T14:59:12.673
null
3,805,104
How to prepare data for torch7 deep learning convolutional neural network example?
<p>I have been trying to use the convolutional neural network example in the torch7 deep learning library (<a href="https://github.com/nicholas-leonard/dp/blob/master/examples/convolutionneuralnetwork.lua" rel="nofollow">convolutionalneuralnetwork.lua</a>) for my own dataset. I have a dataset of 100x100 binary jpg images, and they are in the following directories:<br> /home/akshay/project/data/train -- training data<br> /home/akshay/project/data/valid -- validation data</p> <p>I have changed the dataset to ImageSource and made the other necessary changes, as in the code:</p> <pre></pre> <p>But when I ran the code, I got the error below:</p> <pre></pre> <p>1) How do I prepare the data differently?<br> 2) Are the parameters passed wrong, and how do I correct them?</p>
[ { "AnswerId": "32724840", "CreationDate": "2015-09-22T19:01:42.470", "ParentId": null, "OwnerUserId": "5364826", "Title": null, "Body": "<p>I was getting the same error. Change your --loadSize to be '1,100,100' instead of '{1,100,100}'. Similarly for --sampleSize. </p>\n" } ]
31,032,814
1
<linear-algebra><torch>
2015-06-24T17:08:38.730
31,033,424
2,113,367
Mean Centering a Tensor in torch7
<p>Is there an easier (or more efficient) way to perform a mean-centering operation?</p> <p>Currently, I'm doing the following:</p> <pre></pre>
[ { "AnswerId": "31033424", "CreationDate": "2015-06-24T17:43:47.670", "ParentId": null, "OwnerUserId": "117844", "Title": null, "Body": "<pre><code>mean = data:mean(1)\ndata:add(-1, mean:expandAs(data))\n</code></pre>\n" } ]
31,034,293
1
<theano><pymc3>
2015-06-24T18:32:12.050
null
1,215,364
Is it possible to use NUTS in PyMC3 with a model that involves the eigendecomposition of parameters?
<p>I have a model where the likelihood involves computing the sum of all the terms in the matrix</p> <p>P = U exp(tD) U^-1</p> <p>where</p> <p>U D U^-1 = Q</p> <p>and Q is my matrix of parameters. If I wanted to use NUTS in PyMC3, NUTS would have to be able to compute the derivative of all the elements in P with respect to each of the elements in Q. Is this possible using the symbolic differentiator in Theano that PyMC3 uses?</p>
[ { "AnswerId": "31089162", "CreationDate": "2015-06-27T13:10:17.100", "ParentId": null, "OwnerUserId": "2288595", "Title": null, "Body": "<p>PyMC3 uses Theano for computation and autodiff. Theano has very good support for tensor algebra (of which matrix algebra is a subset) so I think that your model should be supported. </p>\n" } ]
31,036,340
1
<linux><lua><torch>
2015-06-24T20:26:09.090
null
864,128
Torch7 Lua, error loading module 'libpaths' (Linux)
<p>I am a new user to Torch7. I have trouble loading the module 'libpaths' (on Linux). The <strong>error log is</strong>:</p> <blockquote> <p>Exception in thread "main" com.naef.jnlua.LuaRuntimeException: error loading module 'libpaths' from file '/usr/local/lib/lua/5.1/libpaths.so': /usr/local/lib/lua/5.1/libpaths.so: undefined symbol: lua_gettop at com.naef.jnlua.LuaState.lua_pcall(Native Method) at com.naef.jnlua.LuaState.call(LuaState.java:555) at org.eclipse.koneki.ldt.support.lua51.internal.interpreter.JNLua51Launcher.run(JNLua51Launcher.java:128) at org.eclipse.koneki.ldt.support.lua51.internal.interpreter.JNLua51DebugLauncher.main(JNLua51DebugLauncher.java:24)</p> </blockquote> <p>What might be the problem? Thanks in advance!</p>
[ { "AnswerId": "31160347", "CreationDate": "2015-07-01T11:44:16.743", "ParentId": null, "OwnerUserId": "5039693", "Title": null, "Body": "<p>This is how to configure torch + eclipse:</p>\n\n<p><strong>1) Configure the Lua interpreter with torch</strong>:</p>\n\n<p>Go to Windows -> Preference -> Lua -> interpreter:</p>\n\n<ul>\n<li><p>Interpreter Type : Lua 5.2</p></li>\n<li><p>Interpreter executable : /opt/torch/install/bin/qlua (-> this is\nrequired to use qt features)</p></li>\n<li><p>Interpreter name : Qt + Torch Interpreter arguments : -lenv -e\n\"io.stdout:setvbuf('no'); if os.getenv('DEBUG_MODE') then require\n'debugger' ; require 'debugger.plugins.ffi'end\"</p></li>\n<li><p>LinkedExecution argument : Lua 5.2</p></li>\n</ul>\n\n<p><strong>2)</strong> Pick this interpreter as a default interpreter </p>\n\n<p><strong>3)</strong> Also Working with an external interpreter, require that \"LuaSocket\" packet is installed, \nYou will get a message error of \"libsocket.so not found\" when debugging if it is not installed</p>\n\n<p><strong>To install LuaSocket, you may try</strong> :</p>\n\n<pre><code>sudo luarocks install luasocket --only-server=http://luarocks.org/repositories/rocks-scm\n</code></pre>\n\n<p>or</p>\n\n<pre><code>sudo luarocks install luasocket\n</code></pre>\n\n<p>or</p>\n\n<pre><code>luarocks install luasocket\n</code></pre>\n\n<p>Credits to <a href=\"https://groups.google.com/forum/#!msg/torch7/wWwPjL9PgBU/MkloTN7QqT8J\" rel=\"nofollow\">STRUB Floriab</a></p>\n" } ]
31,036,680
1
<python><numpy><ffi><theano><python-cffi>
2015-06-24T20:45:33.167
null
669,329
Getting a C pointer to a function generated by Theano?
<p>I would like to use a Theano function from C/Fortran code (in particular, I want to use an implicit ODE solver written in Fortran with a function created in Theano). Are there any examples/resources on how to do that?</p>
[ { "AnswerId": "31044667", "CreationDate": "2015-06-25T08:08:49.840", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>You've tagged your question with ffi/cffi but that's for calling foreign code from Python. However it sounds like you actually want to call Python/Theano code from C/Fortran. For that, the documentation on <a href=\"https://docs.python.org/2/extending/embedding.html\" rel=\"nofollow\">Embedding Python in Another Application</a> might be helpful.</p>\n\n<p>In principle you could just run Theano Python code from your C/Fortran code via facilities in <code>Python.h</code>.</p>\n\n<p>Although Theano compiles some operations via C code, I don't believe it produces an natively executable function/library for the entire computation graph that could then be linked in by some other, non-Python, application.</p>\n\n<p>Update: via the <a href=\"https://groups.google.com/forum/#!topic/theano-users/0hl6O8f260U\" rel=\"nofollow\">thread on the Theano mailing list</a>... apparently <a href=\"https://github.com/Theano/Theano/issues/1408#issuecomment-106029111\" rel=\"nofollow\">a prototype for having Theano create a linkable library</a> was done some time ago but isn't currently integrated into Theano.</p>\n" } ]
31,050,976
2
<python><anaconda><theano>
2015-06-25T12:53:59.893
null
2,998,813
python.exe crashes when importing `theano`
<p>I am using Anaconda (2.2, 64-bit) on a Windows 7 64-bit machine. When I try to</p> <pre><code>import theano</code></pre> <p>Python crashes without information. I installed Theano using Anaconda.</p> <p>Does anyone know where this problem comes from?</p>
[ { "AnswerId": "31051951", "CreationDate": "2015-06-25T13:35:30.117", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>This could be caused by a missing, or incorrectly configured C compiler. There's a lot of help about installing Theano on Windows <a href=\"http://deeplearning.net/software/theano/install_windows.html\" rel=\"nofollow\">in the documentation</a>.</p>\n\n<p>In particular, make sure you've run</p>\n\n<pre><code>conda install mingw libpython\n</code></pre>\n" }, { "AnswerId": "31063985", "CreationDate": "2015-06-26T02:40:33.773", "ParentId": null, "OwnerUserId": "2998813", "Title": null, "Body": "<p>I solve this problem by using anaconda 2.10 instead of anaconda 2.20.but I still have no idear about the reason.If anyone can tell me,I will appreiciate</p>\n" } ]
31,055,033
1
<python><neural-network><deep-learning><caffe><lmdb>
2015-06-25T15:48:16.763
31,081,253
4,561,745
Caffe: Extremely high loss while learning simple linear functions
<p>I'm trying to train a neural net to learn the function . The objective is to play around with Caffe in order to learn and understand it better. The data required are synthetically generated in Python and written out as an LMDB database file.</p> <p>Code for data generation:</p> <pre></pre> <p>Solver.prototxt file:</p> <pre></pre> <p>Caffe model:</p> <pre></pre> <p>The loss on the test data that I'm getting is . This is shocking, as the loss is three orders of magnitude greater than the numbers in the training and test data sets. Also, the function to be learned is a simple linear function. I can't seem to figure out what is wrong in the code. Any suggestions/inputs are much appreciated.</p>
[ { "AnswerId": "31081253", "CreationDate": "2015-06-26T20:03:38.397", "ParentId": null, "OwnerUserId": "4561745", "Title": null, "Body": "<p>The loss generated is a lot in this case because Caffe only accepts data (i.e. <code>datum.data</code>) in the <code>uint8</code> format and labels (<code>datum.label</code>) in <code>int32</code> format. However, for the labels, <code>numpy.int64</code> format also seems to be working. I think <code>datum.data</code> is accepted only in <code>uint8</code> format because Caffe was primarily developed for Computer Vision tasks where inputs are images, which have RGB values in [0,255] range. <code>uint8</code> can capture this using the least amount of memory. I made the following changes to the data generation code:</p>\n\n<pre><code>Xtrain = np.uint8(np.random.randint(0,256, size = (Ntrain,K,H,W)))\nXtest = np.uint8(np.random.randint(0,256, size = (Ntest,K,H,W)))\n\nytrain = int(Xtrain[:,0,0,0]) + int(Xtrain[:,1,0,0]) + int(Xtrain[:,2,0,0])\nytest = int(Xtest[:,0,0,0]) + int(Xtest[:,1,0,0]) + int(Xtest[:,2,0,0])\n</code></pre>\n\n<p>After playing around with the net parameters (learning rate, number of iterations etc.) I'm getting an error of the order of 10^(-6) which I think is pretty good!</p>\n" } ]
31,057,219
1
<python><python-2.7><gpu><theano><pydot>
2015-06-25T17:45:08.717
null
612,837
Using GPU with Theano
<p>I'm trying to execute the following code <a href="https://github.com/erogol/KLP_KMEANS/blob/master/klp_kmeans.py" rel="nofollow">https://github.com/erogol/KLP_KMEANS/blob/master/klp_kmeans.py</a> using my GPU.</p> <p>I execute:</p> <p>THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python klp_kmeans.py</p> <p>But it says:</p> <pre></pre> <p>After doing a little debugging, I noticed that it detected CPU usage due to an instance of class 'Gemm' (checked in line #71).</p> <p>Why is it not using the GPU?</p> <p>Thanks in advance</p>
[ { "AnswerId": "31069130", "CreationDate": "2015-06-26T09:06:14.697", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>You really need to contact the author of this script for support. The '<code>Used the cpu</code>' message is coming from this script, not from Theano. It's the author's code that is doing the detection, and that detection logic may be faulty.</p>\n\n<p>As far as Theano is concerned, given your <code>THEANO_FLAGS</code> and the '<code>Using gpu device 0: GeForce GTX 750 Ti</code>' message you see at startup, it will use the GPU for all computation graphs that can be run on the GPU.</p>\n\n<p>Are you running the script as is? If so, it looks like parts of it are intended to be run on the CPU, and other parts on the GPU (it seems to be doing a speed comparison between the two). Only those calls to <code>klp_kmeans</code> where <code>use_gpu=True</code> will run on the GPU because of the way the variables are typed (e.g. <code>theano.tensor.dmatrix</code> vs. <code>theano.tensor.matrix</code>).</p>\n" } ]
31,060,970
1
<python><theano>
2015-06-25T21:18:06.437
31,068,759
3,023,426
Customize operation/function on Theano
<p>I am new to the Theano library, which is used for deep learning on GPU devices. I have noticed that there are several built-in operations which can support GPU computation (I guess they are specially written in a way that supports the GPU):</p> <pre></pre> <p>1. What's the difference if I use Python's built-in function sum() instead of T.sum()? Will sum() still work, just maybe slower?</p> <ol start="2"> <li><p>Suppose sum() doesn't work for GPU computing; then if I need any operation/function that works on the GPU, I need to implement it in such a way. E.g. I want to calculate sin(x), where x is a vector or matrix stored in GPU memory. Is there any hint on implementing sin(x) so that it can operate on the GPU device? (This might not be suitable or easy to answer.)</p></li> <li><p>I have trouble understanding T.grad(). How can T.grad do the symbolic calculation for any given smooth symbolic function? I am very curious about it.</p></li> </ol>
[ { "AnswerId": "31068759", "CreationDate": "2015-06-26T08:47:10.033", "ParentId": null, "OwnerUserId": "4592059", "Title": null, "Body": "<p>In theano you have to use <code>T.sum(), T.neq(), T.argmax(),T.grad()</code> for symbolic computation with theano variables like <code>T.matrix</code>. You can't use built in <code>sum()</code> for example.\nIf you use theano you have to follow theano's own methods because theano uses a different form of computing to utilize the gpu's architecture.</p>\n\n<p>However if you want to use <code>sum()</code> you can do the computation with it, and then create a <code>theano.shared</code> variable where you can store the results in, this way you storing it in the gpu's memory at runtime.</p>\n\n<p>Regarding T.grad() perhaps you should ask the theano developers. :)\nHowever I think when theano is running, it can compute the function's gradient runtime, using the actual variables utilizing the gpu's computing capacity.\nI hope this can help.</p>\n\n<p><strong>Glad to have been of help! Feel free to accept my answer if you feel it was useful to you. :-)</strong></p>\n" } ]
31,080,866
1
<python><function><input><theano>
2015-06-26T19:35:16.763
31,082,580
5,009,112
Theano function that can take input arrays of different shapes in python
<p>In Theano, I want to make a function that can take several different inputs, such as both matrices and vectors.</p> <p>Normally I would do something like this:</p> <pre></pre> <p>However, when I enter a vector instead of a matrix, for example:</p> <pre></pre> <p>then I get a dimension-mismatch error: 'Wrong number of dimensions: expected 2, got 1 with shape (3,).'</p> <p>Is there any way to define a more general input symbol in Theano that can take matrices but also differently shaped arrays such as vectors or 3-dimensional arrays and still work?</p> <p>Thanks.</p>
[ { "AnswerId": "31082580", "CreationDate": "2015-06-26T21:41:20.817", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The number of dimensions must be fixed at the time the Theano function is compiled. Part of the compilation process is to select operation variants that depend on the number of dimensions.</p>\n\n<p>You could always compile the function for a high-dimensional tensor and just stack your inputs such that they have the required shape.</p>\n\n<p>So</p>\n\n<pre><code>x = theano.tensor.tensor3()\ny = 3*x\nf = theano.function([x],y)\n</code></pre>\n\n<p>will accept and of these</p>\n\n<pre><code>f(numpy.array([[[1,2]]])) # (1,1,3) vector wrapped as a tensor3\nf(numpy.array([[[1,2],[3,4]]])) # (1,2,2) matrix wrapped as a tensor3\nf(numpy.array([[[1,2],[3,4]],[[5,6],[7,8]]])) # (2,2,2) tensor3\n</code></pre>\n" } ]
31,099,233
1
<c++><deep-learning><caffe>
2015-06-28T11:25:13.047
31,132,209
1,544,186
Euclidean Loss Layer in Caffe
<p>I am currently trying to implement my own loss layer in caffe, and while attempting to do so, am using other layers as a reference. One thing that puzzles me, however, is the use of the top blob's diff in loss layers. I will be using the Euclidean loss layer as a reference. Here are my questions:</p> <ul> <li><p>It is my understanding that the top diff holds the error derivative from the next layer, but what if there is no next layer — how is it initialised? It is used without any checks being performed:</p> <pre></pre></li> <li><p>Again, in the Euclidean loss layer, the derivative of the error with respect to the activations is calculated using the following code snippet:</p> <pre></pre> <p>If my first assumption is correct, and the top diff does indeed hold the error derivative for the layer above, why do we use only its first element, as opposed to multiplying by the whole vector?</p></li> </ul>
[ { "AnswerId": "31132209", "CreationDate": "2015-06-30T07:32:27.687", "ParentId": null, "OwnerUserId": "809993", "Title": null, "Body": "<p>For loss layers, there is no next layer, and so the top diff blob is technically undefined and unused - but Caffe is using this preallocated space to store unrelated data: Caffe supports multiplying loss layers with a user-defined weight (loss_weight in the prototxt), this information (a single scalar floating point number) is stored in the first element of the diff array of the top blob. That's why you'll see in every loss layer, that they multiply by that amount to support that functionality. This is explained in <a href=\"http://caffe.berkeleyvision.org/tutorial/loss.html\">Caffe's tutorial about the loss layer</a>.</p>\n\n<p>This weight is usually used to add auxiliary losses to the network. You can read more about it in Google's <a href=\"http://arxiv.org/abs/1409.4842\">Going Deeper with Convoltions</a> or in <a href=\"http://arxiv.org/abs/1409.5185\">Deeply-Supervised Nets</a>.</p>\n" } ]
31,105,548
2
<python><visualization><gradient><deep-learning><caffe>
2015-06-28T22:46:58.017
32,271,804
5,013,415
Implementation of the paper "Deep inside convolutional networks: Visualising image classification models and saliency maps", Simonyan et al
<p>When visualizing gradient data in convolutional neural networks with the Caffe framework, having already visualized gradient data with respect to all classes, it is interesting to take the gradient with respect to a specific class. In the deploy.prototxt file of the "bvlc_reference_caffenet" model, I have set:</p> <pre></pre> <p>and have commented out the last part:</p> <pre></pre> <p>which is before:</p> <pre></pre> <p>and added instead of it:</p> <pre></pre> <p>In the Python code, by calling:</p> <pre></pre> <p>we forward towards the last layer, and afterwards, by calling:</p> <pre></pre> <p>we get the visualization of the gradient. Firstly, I'd like to ask whether this is called a saliency map; and if I want to do the backward pass with respect to a specific class, e.g. 281 for cat, what shall I do?</p> <p>Thanks in advance for your guidance.</p> <p>P.S. I benefited from Yangqing's notebook code on filter visualization.</p> <pre></pre>
[ { "AnswerId": "32271804", "CreationDate": "2015-08-28T13:06:36.303", "ParentId": null, "OwnerUserId": "5013415", "Title": null, "Body": "<p>also for full visualization you can refer to my github, which is more complete and visualize the saliency map as well as the visualization of class models and the gradient visualization in backpropagation.</p>\n\n<p><a href=\"https://github.com/smajida/Deep_Inside_Convolutional_Networks\" rel=\"nofollow\">https://github.com/smajida/Deep_Inside_Convolutional_Networks</a></p>\n" }, { "AnswerId": "31374054", "CreationDate": "2015-07-13T00:49:17.620", "ParentId": null, "OwnerUserId": "5013415", "Title": null, "Body": "<p>using the following code it can be done:</p>\n\n<pre><code>label_index = 281 # Index for cat class\ncaffe_data = np.random.random((1,3,227,227))\ncaffeLabel = np.zeros((1,1000,1,1))\ncaffeLabel[0,label_index,0,0] = 1;\n\nbw = net.backward(**{net.outputs[0]: caffeLabel})\n</code></pre>\n" } ]
31,106,263
1
<neural-network><convolution><theano><conv-neural-network><keras>
2015-06-29T00:36:25.100
null
null
Keras / Theano: How to add Convolution2D Layers?
<p>I'm having trouble understanding how convolution layers are added. I'm trying to add convolution layers, but I get this error:</p> <pre class="lang-py prettyprint-override"></pre> <hr> <p>I'm trying to understand what nb_filter, stack_size, nb_row, and nb_col are on a convolutional layer.</p> <p>My objective is to copy the VGG model.</p> <pre class="lang-py prettyprint-override"></pre> <p>-- I'm currently using Theano and Keras.</p> <p>Please, any tip is appreciated.</p>
[ { "AnswerId": "41734801", "CreationDate": "2017-01-19T06:00:26.427", "ParentId": null, "OwnerUserId": "3698136", "Title": null, "Body": "<p>You need to correct the output shape for the convolutional layer. Output of a CNN layer depends on many factors such as input size, number of kernels, stride and padding. Generally for an input of size BxCxW1xH1, the output would be BxFxW2xH2 where B is the batch size, C is the input channels, F is the number of output features, W1xH1 is the input size and you can compute the value of W2 and H2 using W1, H1, stride and padding. It is illustrated very well in this tutorial from Stanford: <a href=\"http://cs231n.github.io/convolutional-networks/#comp\" rel=\"nofollow noreferrer\">http://cs231n.github.io/convolutional-networks/#comp</a></p>\n\n<p>Hope it helps!</p>\n" } ]
31,107,776
3
<python-2.7><graphviz><anaconda><theano><deep-learning>
2015-06-29T04:26:22.833
null
5,059,640
Theano Import error
<p>I am trying to install Theano on a CPU machine (running Intel HD graphics, without an NVIDIA GPU). I get the following import error while testing in Python.</p> <pre></pre> <p>I do have g++ installed, though.</p> <p>Thanks.</p>
[ { "AnswerId": "41894442", "CreationDate": "2017-01-27T12:53:35.603", "ParentId": null, "OwnerUserId": "7173909", "Title": null, "Body": "<p>Try this:</p>\n\n<pre><code>pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n\n<p>ten,</p>\n\n<pre><code>from theano import *\n</code></pre>\n\n<p>It is Worked for me</p>\n" }, { "AnswerId": "41732135", "CreationDate": "2017-01-19T01:08:23.297", "ParentId": null, "OwnerUserId": "6936471", "Title": null, "Body": "<p>If you use Pycharm on windows, following the following steps:</p>\n\n<ul>\n<li>install anaconda</li>\n<li>Change your interpreter to anaconda python</li>\n<li>install theano</li>\n<li>install mingw</li>\n<li>install libpython</li>\n</ul>\n" }, { "AnswerId": "31109547", "CreationDate": "2015-06-29T06:58:35.620", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p><a href=\"http://deeplearning.net/software/theano/install_windows.html\">As described in the documentation</a>, make sure you have done this when using Anaconda.</p>\n\n<pre><code>conda install mingw libpython\n</code></pre>\n" } ]
31,108,507
2
<python><c++><windows><cuda><theano>
2015-06-29T05:38:37.800
31,128,511
891,010
Installing Theano on windows for gpu - suspected nvcc version issue
<p>I have been following <a href="http://deeplearning.net/software/theano/install_windows.html" rel="nofollow">instructions</a> to set up Theano to use a GPU on Windows.</p> <p>The issue is that I cannot follow these instructions exactly, because I have a new graphics card, the GeForce GTX 980M, and it only works with CUDA 7.0. (The instructions suggest CUDA 5.5.) Everything works fine except when it comes time to run on the GPU, when I get an error:</p> <pre></pre> <p>The version of nvcc I have installed does not accept the 2008 compiler version (that looks to have been deprecated by the latest CUDA 7.0 release, but 2010 is allowed). What is the best way to fix it? Should I hard-code it in Theano in the file cuda\nvcc_compiler.py? I tried that, and it seems to try to use the 2008 version anyway. Is there a later version of Theano that would use the later nvcc version?</p>
[ { "AnswerId": "31128511", "CreationDate": "2015-06-30T02:19:24.140", "ParentId": null, "OwnerUserId": "5063541", "Title": null, "Body": "<p>I ran into a similar problem when trying to install Theano on Win 8.1 64bit with CUDA 7.0., using a GTX 750Ti graphics card. I was able to get it working by following these <a href=\"https://my6266blog.wordpress.com/2015/01/21/installing-theano-pylearn2-and-even-gpu-on-windows/\">instructions</a>. </p>\n" }, { "AnswerId": "35959271", "CreationDate": "2016-03-12T15:03:07.573", "ParentId": null, "OwnerUserId": "5231110", "Title": null, "Body": "<p>For me it started working when I replaced</p>\n\n<pre><code>[nvcc]\nflags = --use-local-env --cl-version=2008\n</code></pre>\n\n<p>by</p>\n\n<pre><code>[nvcc]\ncompiler_bindir=C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin\n</code></pre>\n\n<p>in the .theanorc file.</p>\n" } ]
31,117,261
1
<python><numpy><scipy><theano>
2015-06-29T13:40:53.537
null
3,891,733
DeprecationWarning: Module scipy.linalg.blas.fblas is deprecated, use scipy.linalg.blas instead
<p>I've just installed Theano on my machine, but when I try to use it I get this 'DeprecationWarning':</p> <pre></pre> <p>When I comment out the above line, the warning doesn't show. I've tried updating scipy, numpy, theano... but nothing works.</p> <p>Any ideas on what's causing this warning and how to get rid of it?</p>
[ { "AnswerId": "33343545", "CreationDate": "2015-10-26T10:33:29.450", "ParentId": null, "OwnerUserId": "2251058", "Title": null, "Body": "<p>You could avoid a Deprecation Warning. If you don't want it to show up you could add the following to your code.</p>\n\n<pre><code>import warnings\nwarnings.filterwarnings(\"ignore\", category=DeprecationWarning)\n</code></pre>\n" } ]
31,126,636
1
<lua><torch>
2015-06-29T22:29:21.573
31,129,029
3,113,501
Torch tensors swapping dimensions
<p>I came across these two lines (back-to-back) of code in a torch project:</p> <pre></pre> <p>What do these two lines do? I assumed they did some sort of swapping.</p>
[ { "AnswerId": "31129029", "CreationDate": "2015-06-30T03:27:14.673", "ParentId": null, "OwnerUserId": "2726734", "Title": null, "Body": "<p>This is covered in indexing in the <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor--dim1dim2--or--dim1sdim1e-dim2sdim2e-\" rel=\"noreferrer\">Torch Tensor Documentation</a></p>\n\n<p>Indexing using the empty table <code>{}</code> is shorthand for all indices in that dimension. Below is a demo which uses <code>{}</code> to copy an entire row from one matrix to another:</p>\n\n<pre><code>&gt; a = torch.Tensor(3, 3):fill(0)\n 0 0 0\n 0 0 0\n 0 0 0\n\n&gt; b = torch.Tensor(3, 3)\n&gt; for i=1,3 do for j=1,3 do b[i][j] = (i - 1) * 3 + j end end\n&gt; b\n 1 2 3\n 4 5 6\n 7 8 9\n\n&gt; a[{1, {}}] = b[{3, {}}]\n&gt; a\n 7 8 9\n 0 0 0\n 0 0 0\n</code></pre>\n\n<p>This assignment is equivalent to: <code>a[1] = b[3]</code>.</p>\n\n<p>Your example is similar:</p>\n\n<pre><code> im4[{1,{},{}}] = im3[{3,{},{}}]\n im4[{3,{},{}}] = im3[{1,{},{}}]\n</code></pre>\n\n<p>which is more clearly stated as:</p>\n\n<pre><code> im4[1] = im3[3]\n im4[3] = im3[1]\n</code></pre>\n\n<p>The first line assigns the values from <code>im3</code>'s third row (a 2D sub-matrix) to <code>im4</code>'s first row and the second line assigns the first row of <code>im3</code> to the third row of <code>im4</code>.</p>\n\n<p>Note that this is not a swap, as <code>im3</code> is never written and <code>im4</code> is never read from.</p>\n" } ]
31,126,785
2
<python><neural-network><gpu><theano>
2015-06-29T22:42:09.703
31,166,170
2,938,232
Theano gradient calculation creates float64
<p>I have some standard NN code on Theano with two separate compiled functions: one that calculates the cost, and one that calculates the cost with AdaGrad updates.</p> <p>For GPU speed, I'm trying to keep everything float32. The problem is that I'm getting a warning that the gradient calculation is creating a float64. In particular, for the following line of code:</p> <pre></pre> <p>If I comment out the gradient calculation and replace the second line with a placeholder, everything is fine. Obviously this is a junk update, but it helps pinpoint the problem.</p> <pre></pre> <p>For reference, this is the loss function:</p> <pre></pre> <p>The cost-only compiled function is:</p> <pre></pre> <p>whereas the cost + update function is:</p> <pre></pre> <p><strong>UPDATE</strong></p> <p>I pinned the problem down to an LSTM layer. Here is minimal code:</p> <pre></pre> <p>The problem occurs when I call T.grad to get gradients (here it is a junk update, for example). I've found that commenting it out, or replacing it with something that repeats to the required shape, fixes things.</p> <p><strong>UPDATE 2</strong></p> <p>Debugging reveals a mess of float64 operations in the function.</p> <pre></pre>
[ { "AnswerId": "31146833", "CreationDate": "2015-06-30T19:28:56.767", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>You could use <code>theano.printing.debugprint</code> with <code>print_type=True</code> to see what the type of the various components are.</p>\n\n<p>For example,</p>\n\n<pre><code>class TestLSTMLayer(object):\n def __init__(self, inputs, outputSize, dropout=0.9, inputSize=None, adagradInit=1, forgetGateBias=3, srng=None):\n self.h0 = theano.shared(np.random.randn(outputSize).astype(floatX))\n\n self.params = [self.h0]\n\n def _recurrence(hBelow):\n print 'hBelow', theano.printing.debugprint(hBelow, print_type=True) \n return hBelow\n\n print 'h0', theano.printing.debugprint(self.h0, print_type=True) \n hOutputs, _ = theano.scan(\n fn=_recurrence,\n outputs_info=self.h0,\n n_steps=inputs.shape[0]\n )\n self.hOutputs = hOutputs\n\n def getUpdates(self):\n print 'sum', theano.printing.debugprint(TT.sum(self.hOutputs), print_type=True) \n gradients = TT.grad(TT.sum(self.hOutputs), self.params)\n paramUpdates = [(self.params[0], self.params[0])]\n return paramUpdates\n</code></pre>\n" }, { "AnswerId": "31166170", "CreationDate": "2015-07-01T15:53:27.097", "ParentId": null, "OwnerUserId": "2938232", "Title": null, "Body": "<p>I solved this by updating to the bleeding edge (latest version on GitHub) version. <a href=\"http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\" rel=\"nofollow\">http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions</a></p>\n\n<p>Somewhat odd, but a solution nonetheless.</p>\n" } ]
31,143,452
2
<python><class><theano>
2015-06-30T16:21:02.827
31,146,656
4,480,756
Theano Shared Variables on Python
<p>I am now learning the Theano library, and I am confused about Theano shared variables. Reading the tutorial, I don't think I understood their detailed meaning. The following is the definition of Theano shared variables from the tutorial:</p> <p>"Variable with Storage that is shared between functions that it appears in. These variables are meant to be created by registered shared constructors."</p> <p>Also, I am wondering whether a Theano shared variable can be a Python class data member. For example:</p> <pre></pre> <p>Can "data" be, or be initialized as, a Theano shared variable? I would really appreciate it if anyone could help me.</p>
[ { "AnswerId": "31146656", "CreationDate": "2015-06-30T19:20:11.807", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>Theano shared variables behave more like ordinary Python variables. They have an explicit value that is persistent. In contrast, symbolic variables are not given an explicit value until one is assigned on the execution of a compiled Theano function.</p>\n\n<p>Symbolic variables can be thought of as representing state for the duration of a single execution. Shared variables on the other hand represent state that remains in memory for the lifetime of the Python reference (often similar to the lifetime of the program).</p>\n\n<p>Shared variables are usually used to store/represent neural network weights because we want these values to remain around across many executions of a Theano training or testing function. Often, the purpose of a Theano training function is to update the weights stored in a shared variable. And a testing function needs the current weights to perform the network's forward pass.</p>\n\n<p>As far as Python is concerned Theano variables (shared or symbolic) are just objects -- instances of classes defined within the Theano library. So, yes, references to shared variables can be stored in your own classes, just like any other Python object.</p>\n" }, { "AnswerId": "33017372", "CreationDate": "2015-10-08T13:33:14.590", "ParentId": null, "OwnerUserId": "3212998", "Title": null, "Body": "<p>Shared variable helps in simplifying the operations over a pre-defined variable. An example to @danien-renshaw 's answer, suppose we want to add two matrix, let's say a and b, where the value of b matrix will remain constant throughout the lifetime of the program, we can have the b matrix as the shared variable and do the required operation.</p>\n\n<p>Code without using shared variable:</p>\n\n<pre><code>a = theano.tensor.matrix('a')\nb = theano.tensor.matrix('b')\nc = a + b\nf = theano.function(inputs = [a, b], outputs = [c])\noutput = f([[1, 2, 3, 4]], [[5, 5, 6, 7]])\n</code></pre>\n\n<p>Code using shared variable :</p>\n\n<pre><code>a = theano.tensor.matrix('a')\nb = theano.tensor.shared( numpy.array([[5, 6, 7, 8]]))\nc = a + b\nf = theano.function(inputs = [a], outputs = [c])\noutput = f([[1, 2, 3, 4]])\n</code></pre>\n" } ]
31,144,564
1
<lua><torch>
2015-06-30T17:25:49.497
31,149,198
3,113,501
What are the differences between doFile and require in Lua
<p>What are the differences between dofile and require in Lua, especially in Torch? When do you call one but not the other? When will one work but the other won't? (I'm using Lua 5.1, torch7.)</p>
[ { "AnswerId": "31149198", "CreationDate": "2015-06-30T21:54:03.813", "ParentId": null, "OwnerUserId": "646619", "Title": null, "Body": "<p><code>dofile</code> loads and executes a file right then and there.</p>\n\n<p><code>require</code> is more complicated; it keeps a table of modules that have already been loaded and their return results, to ensure that the same code isn't loaded twice. It also keeps a list of module loaders that handle loading a module, one of which that can load from <code>dll</code>/<code>so</code> files.</p>\n\n<p>You probably want <code>require</code>, as if you're just loading functions, you don't want to duplicate them.</p>\n" } ]
31,147,894
1
<emacs><theano><jedi>
2015-06-30T20:26:36.667
31,148,056
1,776,544
jedi:complete produces a deferred error when using the theano library
<p>In Emacs, I am trying to get jedi to work with theano. To do so, I have the following minimal bit of code.</p> <pre></pre> <p>When I place my cursor at the sign and run jedi:complete, I am met with the following error, and no autocompletion is offered.</p> <pre></pre> <p>I wonder if this is an incompatibility with a source file in jedi and theano. But I am not sure, and I do not know what to do to further resolve the issue.</p> <p>I get a similar error when I try to use other jedi commands.</p> <p>I have installed all of my packages through the Emacs package manager, and they are updated to the latest versions.</p>
[ { "AnswerId": "31148056", "CreationDate": "2015-06-30T20:36:16.410", "ParentId": null, "OwnerUserId": "1776544", "Title": null, "Body": "<p>I resolved this by looking through <code>m-x packages-list-packages</code> and seeing that there was (for some reason) an old deprecated version of jedi installed alongside another version of jedi. I deleted all the deprecated installations I had, and the error has gone away, but <code>jedi</code> doesnt seem to be able to autocomplete the above code still. It now just says <code>No completion found</code>.</p>\n" } ]
31,148,167
0
<python><neural-network><deep-learning><caffe><lmdb>
2015-06-30T20:44:24.920
null
4,561,745
Caffe: Can't seem to learn y = x^2 function
<p>I was trying to train a neural network to learn the function y = x^2 in the deep learning framework Caffe. Here is my code:</p> <p>Data generation code:</p> <pre></pre> <p>Solver file:</p> <pre></pre> <p>Caffe Model:</p> <pre></pre> <p>I'm getting an error of the order of 10^8, which is unbelievable. The net is supposed to take a single input and produce a single output. The inputs are integers in [0,255] range and the outputs are supposed to be the squares of the respective inputs. Any ideas why such a huge error is obtained?</p>
[]
31,162,021
1
<python><environment-variables><spyder><theano>
2015-07-01T12:58:24.673
31,162,799
492,372
Why do the environment variables set in command prompt have no effect when I start Spyder
<p>I am using the Spyder Anaconda IDE for Python. I am writing code in the Spyder IDE that requires a few environment variables to be set ($CPATH, $LIBRARY_PATH and $LD_LIBRARY_PATH) for the Theano library.</p> <p>I am starting Spyder using the command</p> <pre></pre> <p>and it starts fine. Even though I set the environment variables in my</p> <pre></pre> <p>file, the code still fails to pick up the paths, and if I try printing</p> <pre></pre> <p>it raises a KeyError.</p> <p>I tried all of the above as a normal user, but it still fails. How can I get the Spyder IDE to see files in the above paths, and where can I set them inside Spyder?</p>
[ { "AnswerId": "31162799", "CreationDate": "2015-07-01T13:29:05.577", "ParentId": null, "OwnerUserId": "1596068", "Title": null, "Body": "<p>You need to tell the <code>sudoers</code> file which Environmental Variables to keep when using the <code>sudo</code> command.</p>\n\n<p>To edit the sudoers file run.</p>\n\n<pre><code>sudo visudo\n</code></pre>\n\n<p>Then add the following line to the end of it.</p>\n\n<pre><code>Defaults env_keep = \"LD_LIBRARY_PATH CPATH LIBRARY_PATH\"\n</code></pre>\n\n<p>Then <code>export</code> your variable.</p>\n\n<pre><code>export LD_LIBRARY_PATH=\"/path/to/library\"\n</code></pre>\n\n<p>Now you should be able to run it.</p>\n\n<p>More info can be found here <a href=\"https://stackoverflow.com/questions/8633461/how-to-keep-environment-variables-when-using-sudo\">How to keep Environment Variables when Using SUDO</a></p>\n" } ]
31,169,441
3
<amazon-web-services><amazon-ec2><theano><deep-learning><caffe>
2015-07-01T18:55:23.860
31,177,427
5,067,505
Library for deep learning on Amazon EC2 with CPU and GPU support for convolutional neural network
<p>I want to train a CNN on a bunch of images. I want to run it on Amazon EC2 CPU or GPU clusters. For running deep learning on a cluster, I figured that some of the options are:</p> <ol> <li>h2o (with Spark)</li> <li>Caffe</li> <li>Theano</li> </ol> <p>I am not sure which of these options suits my needs. I read through the <a href="http://h2o-release.s3.amazonaws.com/h2o/rel-shannon/25/docs-website/h2o-docs/index.html#Data%20Science%20Algorithms-Deep%20Learning-Introduction" rel="nofollow">h2o documentation on deep learning</a>; they do not seem to support CNNs. Any ideas on how I should proceed?</p> <p>Another side question: How do I upload my images to the cluster for training the CNN? I am fairly new to cluster computing.</p>
[ { "AnswerId": "53548765", "CreationDate": "2018-11-29T22:50:43.797", "ParentId": null, "OwnerUserId": "2136741", "Title": null, "Body": "<p>AWS provides the Deep Learning AMI with various Deep Learning Frameworks installed into it, which covers your use case since it has Theano as well as Caffe.\n Link to Deep Learning AMI <a href=\"https://aws.amazon.com/machine-learning/amis/\" rel=\"nofollow noreferrer\">https://aws.amazon.com/machine-learning/amis/</a>.</p>\n\n<blockquote>\n <p>How do I upload my images to the cluster for training the CNN? I am\n fairly new to cluster computing?</p>\n</blockquote>\n\n<p>There are many AWS storage services which gives you way to store your training data (images) which will be accessible to your cluster. Few of them are</p>\n\n<ol>\n<li>S3</li>\n<li>EBS </li>\n<li><p>EFS </p>\n\n<p>Explore them and see what works best for you. </p></li>\n</ol>\n" }, { "AnswerId": "31177427", "CreationDate": "2015-07-02T06:52:11.320", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>If you follow the instructions here <a href=\"https://github.com/deeplearningparis/dl-machine\" rel=\"nofollow\">https://github.com/deeplearningparis/dl-machine</a> then you can set up an AMI image with Theano and Torch. There is also a PR on the config to have caffe by default as well (if you need it, just checkout the branch and run the install script as soon as the instance is up).</p>\n" }, { "AnswerId": "41111751", "CreationDate": "2016-12-13T00:10:26.380", "ParentId": null, "OwnerUserId": "3867406", "Title": null, "Body": "<p>Just came accross your question. In this <strong><a href=\"http://vict0rsch.github.io/2016/12/03/aws_gpu/\" rel=\"nofollow noreferrer\">tutorial</a></strong> you'll also find how to set up an Amazon instance with a GPU to run Deep Learning frameworks.</p>\n\n<p>The AMI (~computer model) is pre-configured with:</p>\n\n<blockquote>\n <ul>\n <li>Ubuntu Server 16.04 as OS</li>\n <li>Anaconda 4.2.0 (scientific Python distribution)</li>\n <li>Python 3.5</li>\n <li>Cuda 8.0 (“parallel computing platform and programming model”, used to send code to the GPU)</li>\n <li>cuDNN 5.1 (Cuda’s library for Deep Learning used by Tensorflow and Theano)</li>\n <li><strong>Tensorflow 0.12</strong> for Python 3.5 and GPU-enabled</li>\n <li><strong>Keras 1.1.2</strong> (use with Tensorflow backend)</li>\n </ul>\n</blockquote>\n\n<p>I believe you can use this set-up with <a href=\"https://aws.amazon.com/fr/ec2/Elastic-GPUs/\" rel=\"nofollow noreferrer\">elastic GPUs</a> to scale the system according to your needs or use a <a href=\"https://aws.amazon.com/en/ec2/instance-types/p2/\" rel=\"nofollow noreferrer\">P2 instance</a></p>\n\n<p>Anyway you can follow the tutorial and use another AMI like <a href=\"https://aws.amazon.com/fr/amazon-ai/amis/\" rel=\"nofollow noreferrer\">Amazon's Deep Learning AMI</a></p>\n" } ]
31,171,277
1
<python><numpy><multiprocessing><shared-memory><caffe>
2015-07-01T20:42:18.650
null
1,322,301
Sharing contiguous numpy arrays between processes in python
<p>While I have found numerous answers to questions similar to mine, I don't believe it has been directly addressed here--and I have several additional questions. The motivation for sharing contiguous numpy arrays is as follows:</p> <ul> <li>I'm using a convolutional neural network run on Caffe to perform a regression on images to a series of continuous-value labels. </li> <li>The images require specific preprocessing and data augmentation.</li> <li>The constraints of (1) the continuous nature of the labels (they're floats) and (2) the data augmentation means that I'm preprocessing the data in python and then serving it up as contiguous numpy arrays using the in-memory data layer in Caffe.</li> <li>Loading the training data into memory is comparatively slow. I'd like to parallelize it such that:</li> </ul> <p>(1) The python I'm writing creates a "data handler" class which instantiates two contiguous numpy arrays. (2) A worker process alternates between those numpy arrays, loading the data from the disk, performing preprocessing, and inserting the data into the numpy array. (3) Meanwhile, the python Caffe wrappers send data from the <em>other</em> array to the GPU to be run through the net. </p> <p>I have a few questions:</p> <ol> <li><p>Is it possible to allocate memory in a contiguous numpy array then wrap it in a shared memory object (I'm not sure if 'object' is the correct term here) using something like the Array class from python's multiprocessing? </p></li> <li><p>Numpy arrays have a .ctypes attribute, I presume this is useful for the instantiation of shared memory arrays from Array(), but can't seem to determine precisely how to use them. </p></li> <li><p>If the shared memory is instantiated <em>without</em> the numpy array, does it remain contiguous? If not, is there a way to ensure it does remain contiguous? </p></li> </ol> <p>Is it possible to do something like:</p> <pre></pre> <p>Then instantiate the worker with</p> <pre></pre> <p>Thanks!</p> <p>Edit: I'm aware there are a number of libraries that have similar functions in varying states of maintenance. I would prefer to restrict this to pure python and numpy, but if that's not possible I would of course be willing to use one. </p>
[ { "AnswerId": "43680177", "CreationDate": "2017-04-28T12:22:52.107", "ParentId": null, "OwnerUserId": "794539", "Title": null, "Body": "<h2>Wrap numpy's <code>ndarray</code> around multiprocessing's <code>RawArray()</code></h2>\n\n<p>There are multiple ways to share <em>numpy</em> arrays in memory across processes. Let's have a look at how you can do it using the <em>multiprocessing</em> module.</p>\n\n<p>The first important observation is that <em>numpy</em> provides the <strong><code>np.frombuffer()</code> function to wrap an <em>ndarray</em> interface around a preexisting object</strong> that supports the buffer protocol (such as <code>bytes()</code>, <code>bytearray()</code>, <code>array()</code> and so on). This creates read-only arrays from read-only objects and writable arrays from writable objects.</p>\n\n<p>We can combine that with the <strong>shared memory <code>RawArray()</code></strong> that <em>multiprocessing</em> provides. Note that <code>Array()</code> doesn't work for that purpose, as it is a proxy object with a lock and doesn't directly expose the buffer interface. Of course that means that we need to provide for proper synchronization of our <em>numpified RawArrays</em> ourselves.</p>\n\n<p>There is one complicating issue regarding <em>ndarray</em>-wrapped <em>RawArrays</em>: When <em>multiprocessing</em> sends such an array between processes - and indeed it will need to send our arrays, once created, to both workers - it pickles and then unpickles them. Unfortunately, that results in it creating copies of the <em>ndarrays</em> instead of sharing them in memory.</p>\n\n<p>The solution, while a bit ugly, is to <strong>keep the <em>RawArrays</em> as is</strong> until they are transferred to the workers and <strong>only wrap them in <em>ndarrays</em> once each worker process has started</strong>.</p>\n\n<p>Furthermore, it would have been preferable to communicate arrays, be it a plain <em>RawArray</em> or an <em>ndarray</em>-wrapped one, directly via a <code>multiprocessing.Queue</code>, but that doesn't work, either. A <em>RawArray</em> cannot be put inside such a <em>Queue</em> and an <em>ndarray</em>-wrapped one would have been pickled and unpickled, so in effect copied.</p>\n\n<p>The workaround is to send a list of all pre-allocated arrays to the worker processes and <strong>communicate indices into that list over the <em>Queues</em></strong>. 
It's very much like passing around tokens (the indices) and whoever holds the token is allowed to operate on the associated array.</p>\n\n<p>The structure of the main program could look like this:</p>\n\n\n\n<pre class=\"lang-py prettyprint-override\"><code>#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\nimport numpy as np\nimport queue\n\nfrom multiprocessing import freeze_support, set_start_method\nfrom multiprocessing import Event, Process, Queue\nfrom multiprocessing.sharedctypes import RawArray\n\n\ndef create_shared_arrays(size, dtype=np.int32, num=2):\n dtype = np.dtype(dtype)\n if dtype.isbuiltin and dtype.char in 'bBhHiIlLfd':\n typecode = dtype.char\n else:\n typecode, size = 'B', size * dtype.itemsize\n\n return [RawArray(typecode, size) for _ in range(num)]\n\n\ndef main():\n my_dtype = np.float32\n\n # 125000000 (size) * 4 (dtype) * 2 (num) ~= 1 GB memory usage\n arrays = create_shared_arrays(125000000, dtype=my_dtype)\n q_free = Queue()\n q_used = Queue()\n bail = Event()\n\n for arr_id in range(len(arrays)):\n q_free.put(arr_id) # pre-fill free queue with allocated array indices\n\n pr1 = MyDataLoader(arrays, q_free, q_used, bail,\n dtype=my_dtype, step=1024)\n pr2 = MyDataProcessor(arrays, q_free, q_used, bail,\n dtype=my_dtype, step=1024)\n\n pr1.start()\n pr2.start()\n\n pr2.join()\n print(\"\\n{} joined.\".format(pr2.name))\n\n pr1.join()\n print(\"{} joined.\".format(pr1.name))\n\n\nif __name__ == '__main__':\n freeze_support()\n\n # On Windows, only \"spawn\" is available.\n # Also, this tests proper sharing of the arrays without \"cheating\".\n set_start_method('spawn')\n main()\n</code></pre>\n\n<p>This prepares a list of two arrays, two <em>Queues</em> - a \"free\" queue where <em>MyDataProcessor</em> puts array indices it is done with and <em>MyDataLoader</em> fetches them from as well as a \"used\" queue where <em>MyDataLoader</em> puts indices of readily filled arrays and <em>MyDataProcessor</em> fetches them from - and a <code>multiprocessing.Event</code> to start a concerted bail out of all workers. We could do away with the latter for now, as we have only one producer and one consumer of arrays, but it doesn't hurt being prepared for more workers.</p>\n\n<p>Then we pre-fill the \"empty\" <em>Queue</em> with all indices of our <em>RawArrays</em> in the list and instantiate one of each type of workers, passing them the necessary communication objects. 
We start both of them and just wait for them to <code>join()</code>.</p>\n\n<p>Here's how <em>MyDataProcessor</em> could look like, which consumes array indices from the \"used\" <em>Queue</em> and sends the data off to some external black box (<code>debugio.output</code> in the example):</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>class MyDataProcessor(Process):\n def __init__(self, arrays, q_free, q_used, bail, dtype=np.int32, step=1):\n super().__init__()\n self.arrays = arrays\n self.q_free = q_free\n self.q_used = q_used\n self.bail = bail\n self.dtype = dtype\n self.step = step\n\n def run(self):\n # wrap RawArrays inside ndarrays\n arrays = [np.frombuffer(arr, dtype=self.dtype) for arr in self.arrays]\n\n from debugio import output as writer\n\n while True:\n arr_id = self.q_used.get()\n if arr_id is None:\n break\n\n arr = arrays[arr_id]\n\n print('(', end='', flush=True) # just visualizing activity\n for j in range(0, len(arr), self.step):\n writer.write(str(arr[j]) + '\\n')\n print(')', end='', flush=True) # just visualizing activity\n\n self.q_free.put(arr_id)\n\n writer.flush()\n\n self.bail.set() # tell loaders to bail out ASAP\n self.q_free.put(None, timeout=1) # wake up loader blocking on get()\n\n try:\n while True:\n self.q_used.get_nowait() # wake up loader blocking on put()\n except queue.Empty:\n pass\n</code></pre>\n\n<p>The first it does is wrap the received <em>RawArrays</em> in <em>ndarrays</em> using 'np.frombuffer()' and keep the new list, so they are usable as <em>numpy</em> arrays during the process' runtime and it doesn't have to wrap them over and over again.</p>\n\n<p>Note also that <em>MyDataProcessor</em> only ever writes to the <code>self.bail</code> <em>Event</em>, it never checks it. Instead, if it needs to be told to quit, it will find a <code>None</code> mark on the queue instead of an array index. 
This is done for when a <em>MyDataLoader</em> has no more data available and starts the tear down procedure, <em>MyDataProcessor</em> can still process all valid arrays that are in the queue without prematurely exiting.</p>\n\n<p>This is how <em>MyDataLoader</em> could look like:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>class MyDataLoader(Process):\n def __init__(self, arrays, q_free, q_used, bail, dtype=np.int32, step=1):\n super().__init__()\n self.arrays = arrays\n self.q_free = q_free\n self.q_used = q_used\n self.bail = bail\n self.dtype = dtype\n self.step = step\n\n def run(self):\n # wrap RawArrays inside ndarrays\n arrays = [np.frombuffer(arr, dtype=self.dtype) for arr in self.arrays]\n\n from debugio import input as reader\n\n for _ in range(10): # for testing we end after a set amount of passes\n if self.bail.is_set():\n # we were asked to bail out while waiting on put()\n return\n\n arr_id = self.q_free.get()\n if arr_id is None:\n # we were asked to bail out while waiting on get()\n self.q_free.put(None, timeout=1) # put it back for next loader\n return\n\n if self.bail.is_set():\n # we were asked to bail out while we got a normal array\n return\n\n arr = arrays[arr_id]\n\n eof = False\n print('&lt;', end='', flush=True) # just visualizing activity\n for j in range(0, len(arr), self.step):\n line = reader.readline()\n if not line:\n eof = True\n break\n\n arr[j] = np.fromstring(line, dtype=self.dtype, sep='\\n')\n\n if eof:\n print('EOF&gt;', end='', flush=True) # just visualizing activity\n break\n\n print('&gt;', end='', flush=True) # just visualizing activity\n\n if self.bail.is_set():\n # we were asked to bail out while we filled the array\n return\n\n self.q_used.put(arr_id) # tell processor an array is filled\n\n if not self.bail.is_set():\n self.bail.set() # tell other loaders to bail out ASAP\n # mark end of data for processor as we are the first to bail out\n self.q_used.put(None)\n</code></pre>\n\n<p>It is very similar in structure to the other worker. The reason it is bloated up a bit is that it checks the <code>self.bail</code> <em>Event</em> at many points, so as to reduce the likelihood to get stuck. (It's not completely foolproof, as there is a tiny chance that the <em>Event</em> could get set between checking and accessing the <em>Queue</em>. If that's a problem, one needs to use some synchronization primitive arbitrating access to both the <em>Event</em> and the <em>Queue</em> combined.)</p>\n\n<p>It also wraps the received <em>RawArrays</em> in <em>ndarrays</em> at the very beginning and reads data from an external black box (<code>debugio.input</code> in the example).</p>\n\n<p>Note that by playing around with the <code>step=</code> arguments to both workers in the <code>main()</code> function, we can change the ratio of how much reading and writing is done (strictly for testing purposes - in a production environment <code>step=</code> would be <code>1</code>, reading and writing all <em>numpy</em> array members).</p>\n\n<p>Increasing both values makes the workers only access a few of the values in the <em>numpy</em> arrays, thereby significantly speeding everything up, which goes to show that the performance is not limited by the communication between the worker processes. 
Had we put <em>numpy</em> arrays directly onto the <em>Queues</em>, copying them forth and back between the processes in whole, increasing the step size would not have significantly improved the performance - it would have remained slow.</p>\n\n<p>For reference, here is the <code>debugio</code> module I used for testing:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\nfrom ast import literal_eval\nfrom io import RawIOBase, BufferedReader, BufferedWriter, TextIOWrapper\n\n\nclass DebugInput(RawIOBase):\n def __init__(self, end=None):\n if end is not None and end &lt; 0:\n raise ValueError(\"end must be non-negative\")\n\n super().__init__()\n self.pos = 0\n self.end = end\n\n def readable(self):\n return True\n\n def read(self, size=-1):\n if self.end is None:\n if size &lt; 0:\n raise NotImplementedError(\"size must be non-negative\")\n end = self.pos + size\n elif size &lt; 0:\n end = self.end\n else:\n end = min(self.pos + size, self.end)\n\n lines = []\n while self.pos &lt; end:\n offset = self.pos % 400\n pos = self.pos - offset\n if offset &lt; 18:\n i = (offset + 2) // 2\n pos += i * 2 - 2\n elif offset &lt; 288:\n i = (offset + 12) // 3\n pos += i * 3 - 12\n else:\n i = (offset + 112) // 4\n pos += i * 4 - 112\n\n line = str(i).encode('ascii') + b'\\n'\n line = line[self.pos - pos:end - pos]\n self.pos += len(line)\n size -= len(line)\n lines.append(line)\n\n return b''.join(lines)\n\n def readinto(self, b):\n data = self.read(len(b))\n b[:len(data)] = data\n return len(data)\n\n def seekable(self):\n return True\n\n def seek(self, offset, whence=0):\n if whence == 0:\n pos = offset\n elif whence == 1:\n pos = self.pos + offset\n elif whence == 2:\n if self.end is None:\n raise ValueError(\"cannot seek to end of infinite stream\")\n pos = self.end + offset\n else:\n raise NotImplementedError(\"unknown whence value\")\n\n self.pos = max((pos if self.end is None else min(pos, self.end)), 0)\n return self.pos\n\n\nclass DebugOutput(RawIOBase):\n def __init__(self):\n super().__init__()\n self.buf = b''\n self.num = 1\n\n def writable(self):\n return True\n\n def write(self, b):\n *lines, self.buf = (self.buf + b).split(b'\\n')\n\n for line in lines:\n value = literal_eval(line.decode('ascii'))\n if value != int(value) or int(value) &amp; 255 != self.num:\n raise ValueError(\"expected {}, got {}\".format(self.num, value))\n\n self.num = self.num % 127 + 1\n\n return len(b)\n\n\ninput = TextIOWrapper(BufferedReader(DebugInput()), encoding='ascii')\noutput = TextIOWrapper(BufferedWriter(DebugOutput()), encoding='ascii')\n</code></pre>\n" } ]
31,186,650
1
<lua><torch>
2015-07-02T13:58:36.130
31,189,561
1,366,749
What are the most idiomatic ways to combine tensors in torch?
<p>I'm confronted with concatenating three tensors together, so that three px1 tensors become one 3px1 tensor. </p> <p>The most succinct approach I could come up with was:</p> <pre></pre> <p>Are there ways to do this without converting to tables and back to tensors? It seems like there should be a generic way of concatenating tensors along some specified dimension, assuming they have compatible shapes.</p> <p>I can see how it would be possible to write such a function; does one not exist?</p>
[ { "AnswerId": "31189561", "CreationDate": "2015-07-02T16:08:17.993", "ParentId": null, "OwnerUserId": "117844", "Title": null, "Body": "<pre><code>a = torch.randn(3,1)\nb = torch.randn(3,1)\nc = torch.randn(3,1)\n\nd = torch.cat(a,b,1):cat(c,1)\n\nprint(d)\n</code></pre>\n" } ]
31,187,549
1
<image-processing><face-recognition><torch><conv-neural-network>
2015-07-02T14:37:54.847
null
4,961,048
False Positives with Face recognition
<p>I have a CNN trained on the images (cropped faces) of Mark Ruffalo. For my positive class I have around 200 images, and for the negative datapoints I have sampled 200 random faces.</p> <p>The model has a high recall but a very low precision. How could I increase the precision? Also, I am constrained by the number of positive images that I have. I am ready to compromise the recall in this tradeoff.</p> <p>I have tried increasing the number of negative samples, but that introduces a form of bias and the model starts classifying everything as negative to attain a local optimum.</p> <p>I have based my CNN upon overfeat:</p> <pre></pre> <p>Kindly help.</p>
[ { "AnswerId": "31192847", "CreationDate": "2015-07-02T19:16:32.630", "ParentId": null, "OwnerUserId": "4992019", "Title": null, "Body": "<p>Try playing with the raw output of the CNN instead of taking the sign() of the output node (since it is a positive and negative class I assume there is only one output in the range <code>[-1,1]</code>). </p>\n\n<p>For instance, for one sample, the output could be <code>[0.9]</code> indicating that the positive class should be picked. But if you play with this values, you can find a specific <code>threshold</code> value, <em>hopefully</em>, that gives you the precision you need. In other words, if you find that anything greater than <code>[-0.35]</code> should actually be chosen as the positive class because it gived you better precision, then <code>-0.35</code> should be your <code>threshold</code> value.</p>\n\n<p>This is where <a href=\"https://en.wikipedia.org/wiki/Receiver_operating_characteristic\" rel=\"nofollow\">ROC analysis</a> comes in handy. </p>\n\n<p>Let me know if this helps.</p>\n" } ]
31,193,977
1
<torch>
2015-07-02T20:22:34.900
null
1,082,019
Torch, which command to insert data in a Torch Tensor?
<p>I'm using a Torch command to insert data into a simple table, and it works fine:</p> <pre></pre> <p>Now someone pointed out to me that using Torch Tensors would make everything more efficient. So I replaced the first line with </p> <pre></pre> <p>but unfortunately I cannot find any Tensor function able to replace the table.insert() function.</p> <p>Do you have any idea?</p>
[ { "AnswerId": "31195580", "CreationDate": "2015-07-02T22:19:32.600", "ParentId": null, "OwnerUserId": "2726734", "Title": null, "Body": "<p>There is no function which corresponds to the append functionality of insert since Tensor objects are a fixed size.\nWhat I see your code doing is concatenating three tables into one. If you are using Tensors:</p>\n\n<pre><code> firstHalf = torch.Tensor(firstHalf)\n secondHalf = torch.Tensor(secondHalf)\n presentWord = torch.Tensor(presentWord)\n</code></pre>\n\n<p>then concatenating them together is easy:</p>\n\n<pre><code> completeProfile = firstHalf:cat(secondHalf):cat(presentWord)\n</code></pre>\n\n<p>Another option is to store the last index you inserted to so you know where to \"append\" onto the tensor. The function below creates a closure that will keep track of that last index for you. </p>\n\n<pre><code>function appender(t)\n local last = 0\n return function(i, v)\n last = last + 1\n t[last] = v\n end\nend\n\ncompleteProfile = torch.Tensor(#firstHalf + #secondHalf + #presentWord)\n\nprofile_append = appender(completeProfile)\n\ntable.foreach(firstHalf, profile_append)\ntable.foreach(secondHalf, profile_append)\ntable.foreach(presentWord, profile_append)\n</code></pre>\n" } ]
31,196,016
0
<gnuplot><torch>
2015-07-02T23:05:56.483
null
3,917,668
plotting a 3D+colour scatter with gnuplot (on torch7)
<p>I'm working with torch7, and I created a PCA function, which gives me an Nx3 tensor which I wish to plot (3D scatter).</p> <p>I stored it in a file (). Now I want to plot it, so I wrote the following lines:</p> <blockquote> <p>NOTE: those lines are in (lua), but you don't really need to know the language, because the command uses the regular commands.</p> <p>NOTE 2: I followed helpers on this forum to create this part, so I probably read a relevant thread you might want to link here. If you do, please explain what the difference is between the linked explanation and what I did.</p> </blockquote> <pre></pre> <p>cols 1 through 3 in are the x,y,z coordinates; col 4 is either 1 or 2 (determines colour).</p> <blockquote> <p>LAST NOTE: my script doesn't print an error of any kind; it just doesn't plot the desired 3D scatter.</p> </blockquote> <p>Thanks in advance</p>
[]
31,198,353
1
<python><machine-learning><logistic-regression><theano>
2015-07-03T04:15:26.460
31,417,748
4,765,036
Python + Theano: Logistic regression weights do not update
<p>I've compared extensively to existing tutorials but I can't figure out why my weights don't update. Here is the function that returns the list of updates:</p> <pre></pre> <p>It is defined at the top level, outside of any classes. This is standard gradient descent for each param. The 'params' parameter here is fed in as mlp.params, which is simply the concatenated lists of the param lists for each layer. I removed every layer except for a logistic regression one to isolate the reason as to why my cost was not decreasing. The following is the definition of mlp.params in MLP's constructor. It follows the definition of each layer and their respective param lists. </p> <pre></pre> <p>The following is the train function, which I call for each minibatch during each epoch:</p> <pre></pre> <p>If you require further details, the entire file is available here: <a href="http://pastebin.com/EeNmXfGD" rel="nofollow">http://pastebin.com/EeNmXfGD</a></p> <p>I don't know how many people use Theano (it doesn't seem like many); if you've read to this point, thank you. </p> <p>Fixed: I've determined that I can't use average squared error as the cost function. It works as expected after replacing it with a negative log-likelihood.</p>
[ { "AnswerId": "31417748", "CreationDate": "2015-07-14T21:30:23.060", "ParentId": null, "OwnerUserId": "5116849", "Title": null, "Body": "<p>This behavior it caused by a few things but it comes down to the cost not being properly computed. In your implementation , the output of the LogisticRegression layer is the predicted class for every input digit (obtained with the argmax operation) and you take the squared difference between it and the expected prediction.</p>\n\n<p>This will give you gradients of 0s wrt to any parameter in your model because the gradient of the output of the argmax (predicted class) wrt the input of the argmax (class probabilities) will be 0. </p>\n\n<p>Instead, the LogisticRegression should output the probabilities of the classes : </p>\n\n<pre><code>def output(self, input):\n input = input.flatten(2)\n self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)\n return self.p_y_given_x\n</code></pre>\n\n<p>And then in the MLP class, you compute the cost. You can used mean squared error between the desired probabilities for each class and the probabilities computed by the model but people tend to use the Negative Log Likelihood of the expected classes and you can implement it as such in the MLP class : </p>\n\n<pre><code>def neg_log_likelihood(self, x, y):\n p_y_given_x = self.output(x)\n return -T.mean(T.log(p_y_given_x)[T.arange(y.shape[0]), y])\n</code></pre>\n\n<p>Then you can use this function to compute your cost and the model trains :</p>\n\n<pre><code>cost = mlp.neg_log_likelihood(x_, y)\n</code></pre>\n\n<p>A few additional things: </p>\n\n<ul>\n<li>At line 215, when you print your cost, you format it as an integer value but it is a floating point value; this will lose precision in the monitoring.</li>\n<li>Initializing all the weights to 0s as you do in your LogisticRegression class is often not recommended. Weights should differ in their original values so as to help break symmetry</li>\n</ul>\n" } ]
31,201,861
1
<python><numpy><anaconda><theano>
2015-07-03T08:14:59.840
31,208,750
3,624,880
importing theano on anaconda 3.10.0
<p>I'm trying to import theano and I'm using Anaconda version 3.10.0. Can anyone give me directions on how to proceed?</p> <p>Thanks in advance</p> <blockquote> <p>Problem occurred during compilation with the command line below: g++ -shared -g -D NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -m64 -DMS_WIN64 -IC:\Users\Supreeth\Anaconda\lib\site-packages\numpy\core\include -IC:\Users\Supreeth\Anaconda\include -o C:\Users\Supreeth\AppData\Local\Theano\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_60_Stepping_3_GenuineIntel-2.7.9-64\lazylinker_ext\lazylinker_ext.pyd C:\Users\Supreeth\AppData\Local\Theano\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_60_Stepping_3_GenuineIntel-2.7.9-64\lazylinker_ext\mod.cpp -LC:\Users\Supreeth\Anaconda\libs -LC:\Users\Supreeth\Anaconda -lpython27</p> </blockquote>
[ { "AnswerId": "31208750", "CreationDate": "2015-07-03T14:04:59.587", "ParentId": null, "OwnerUserId": "3624880", "Title": null, "Body": "<p>I find the solution from the <a href=\"http://deeplearning.net/software/theano/install_windows.html\" rel=\"nofollow\">Theano Installation Document</a></p>\n\n<blockquote>\n <p>Specifically for Anaconda users just use the command </p>\n</blockquote>\n\n<pre><code>$ conda install mingw libpython\n</code></pre>\n\n<p>in the Anaconda Command Prompt</p>\n" } ]
31,204,702
1
<torch>
2015-07-03T10:35:00.550
31,210,528
3,284,343
Torch7 Access one element from a Tensor as a Tensor
<p>I'm working with Torch7 to train some neural nets. I've got a Tensor of dim 1 (a vector), and I want to access element i in this vector. Unfortunately, it gives me a plain number instead of a Tensor of size 1.</p> <p>I got this:</p> <pre></pre> <p>I want this:</p> <pre></pre> <p>I'm obliged to do this:</p> <pre></pre>
[ { "AnswerId": "31210528", "CreationDate": "2015-07-03T15:51:55.597", "ParentId": null, "OwnerUserId": "1688185", "Title": null, "Body": "<p>You can use <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor--dim1dim2--or--dim1sdim1e-dim2sdim2e-\" rel=\"nofollow\">torch indexing operator</a> as follow:</p>\n\n<pre><code>&gt; t = matrix[{ {1} }]\n&gt; = t\n 1\n[torch.DoubleTensor of size 1]\n</code></pre>\n" } ]
31,209,252
2
<linux><makefile><fedora><caffe><gflags>
2015-07-03T14:33:38.897
null
4,068,560
Caffe compilation fails: make: *** [.build_release/src/caffe/data_transformer.o] Error 1
<p>I am trying to build caffe following the instructions at <a href="http://caffe.berkeleyvision.org/installation.html#prerequisites" rel="nofollow">http://caffe.berkeleyvision.org/installation.html#prerequisites</a>. When compiling I get the following error (I use Fedora 22):</p> <pre></pre> <p>What am I doing wrong?</p>
[ { "AnswerId": "41313192", "CreationDate": "2016-12-24T12:18:33.797", "ParentId": null, "OwnerUserId": "4301883", "Title": null, "Body": "<p>You have to install missing dependencies (gflags). </p>\n\n<p>Fedora/RHEL/CentOS: <code>sudo yum install gflags-devel</code></p>\n\n<p>Ubuntu: <code>sudo apt-get install libgflags-dev</code></p>\n\n<p>There are also instructions for other dependencies: </p>\n\n<p>Fedora/RHEL/CentOS :<a href=\"http://caffe.berkeleyvision.org/install_yum.html\" rel=\"nofollow noreferrer\">http://caffe.berkeleyvision.org/install_yum.html</a></p>\n\n<p>Ubuntu: <a href=\"http://caffe.berkeleyvision.org/install_apt.html\" rel=\"nofollow noreferrer\">http://caffe.berkeleyvision.org/install_apt.html</a></p>\n" }, { "AnswerId": "37355261", "CreationDate": "2016-05-20T20:23:21.250", "ParentId": null, "OwnerUserId": "2950746", "Title": null, "Body": "<p>To install missing gflag dependencies </p>\n\n<pre><code>wget https://github.com/schuhschuh/gflags/archive/master.zip\nunzip master.zip\ncd gflags-master\nmkdir build &amp;&amp; cd build\nexport CXXFLAGS=\"-fPIC\" &amp;&amp; cmake .. &amp;&amp; make VERBOSE=1\nmake \nsudo make install\n</code></pre>\n" } ]
31,214,799
1
<matlab><deep-learning><caffe>
2015-07-03T22:18:14.400
null
3,121,945
Why are the features extracted with matcaffe_demo.m and matcaffe_batch.m for the same input different?
<p>I am using Caffe to extract features with the MATLAB wrapper. I have 5011 images as a test data set. I chopped all the layers after in . I found out that if you take the same image as input of and , you will get different 4096-dim features.<br> Could someone tell me why?<br> What is the difference between extracting features from all these images one by one with , and extracting features by listing all these images with ? </p>
[ { "AnswerId": "32183343", "CreationDate": "2015-08-24T13:18:00.233", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>You can find the answer to this question <a href=\"https://github.com/BVLC/caffe/issues/2689\" rel=\"nofollow\">at caffe github</a>.<br>\nBasically, <code>matcaffe_demo</code> is used for classification and it <strong>averages</strong> results of 10 crops of the input image, while <code>matcaffe_bathc</code> uses only a single input. </p>\n\n<p>Moreover, note that these m-files are no longer available in recent caffe versions.</p>\n" } ]
31,218,110
1
<ios><iphone><static><caffe>
2015-07-04T07:24:05.243
null
3,132,189
How to solve Unknown layer type in Caffe iOS?
<p>1. I downloaded the source from <a href="https://github.com/aleph7/caffe/" rel="nofollow noreferrer">https://github.com/aleph7/caffe/</a> and built Caffe as a static library for iOS and iPhone. 2. Created sample demo code, linked the Caffe static lib (.a) and executed the code. 3. Now I get a runtime error:</p> <p>F0519 14:54:12.494139 14504 layer_factory.hpp:77] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: Convolution (known types: MemoryData)</p> <p>4. I searched a lot and found one solution at the link below:</p> <p><a href="https://stackoverflow.com/questions/30325108/caffe-layer-creation-failure">Caffe layer creation failure</a></p> <p>5. If I create a dynamic library instead of a static library, it works.</p> <p>6. I tried to convert the static library into a dynamic library. I got an error that Xcode cannot open the project. I referred to the link below.</p> <p><a href="https://stackoverflow.com/questions/1349115/how-do-i-change-an-existing-xcode-target-from-dynamic-to-static">How do I change an existing XCode target from dynamic to static?</a></p> <p>Can you help me solve this?</p>
[ { "AnswerId": "42093978", "CreationDate": "2017-02-07T15:42:00.683", "ParentId": null, "OwnerUserId": "2572084", "Title": null, "Body": "<p>Caffe register layer classes through <code>REGISTER_LAYER_CLASS</code> macro. Some build tools (e.g. Xcode) will optimize some part of it out (a little complex to explain). You can add <code>-Wl,-force_load path/to/libcaffe.a</code> to the <code>Other Linker Flags</code> build option. It will force Xcode to load all the things in libcaffe.a to the final target. </p>\n\n<p>And more, <code>Unknown layer type</code> error can also caused by stale code. The caffe code in <a href=\"https://github.com/aleph7/caffe/\" rel=\"nofollow noreferrer\">https://github.com/aleph7/caffe/</a> is out of date. You can try my port at <a href=\"https://github.com/solrex/caffe-mobile\" rel=\"nofollow noreferrer\">https://github.com/solrex/caffe-mobile</a>. It inlcudes a demo iOS app, works with the newest build tool. The Caffe source is up to date and you can sync the latest caffe code your self.</p>\n" } ]
31,228,557
2
<python><debugging><theano>
2015-07-05T08:37:27.760
31,229,234
379,539
Printing expressions of variables in Theano
<p>If I want to print some variable for debugging in theano, it is easy: just write<br> , and then use instead of in the following computations. But what if I want to print some expression of , for example . How can I do it?<br> If I write then I will need to insert into the computation graph later; what is the recommended way to do it?</p>
[ { "AnswerId": "38439306", "CreationDate": "2016-07-18T14:29:47.180", "ParentId": null, "OwnerUserId": "86430", "Title": null, "Body": "<p>This is a nasty hack, but I've resorted to things like:</p>\n\n<pre><code>x = 1e-11 * Print(\"mean of x\")(x.mean()) + x\n</code></pre>\n\n<p>If you make it <code>0 * Print(...)</code> then it gets optimised away.</p>\n" }, { "AnswerId": "31229234", "CreationDate": "2015-07-05T10:09:21.107", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>The result of a print operation must be reachable (via some path in the computation graph) from an output of the Theano function. If you want to print an expression that is not used then a simple solution is to just include the result of that expression in the outputs of the Theano function.</p>\n\n<p>Suppose you are interested in <code>x*y</code> but would like to print <code>x+y</code>, then</p>\n\n<pre><code>x = theano.tensor.scalar()\ny = theano.tensor.scalar()\nz = printing.Print('x+y is: ')(x+y)\nf1 = theano.function([x, y], [x * y]\nf2 = theano.function([x, y], [z]\nf3 = theano.function([x, y], [x * y, z]\n</code></pre>\n\n<p>f1 will fail to print <code>x+y</code> because z is not reachable from an output of the function; f2 will print <code>x+y</code> but will not compute <code>x*y</code>; f3 will do both.</p>\n" } ]
31,230,862
1
<arrays><python-3.x><numpy><theano>
2015-07-05T13:29:34.217
31,231,051
3,190,076
White spaces in Theano arrays
<p>I just started playing around with Theano and I was surprised by the result of this code.</p> <pre></pre> <p>Using python3 I get:</p> <pre></pre> <p>The array itself is correct, as it contains the right values; however, the printed output is odd. I would expect something like this:</p> <pre></pre> <p>or</p> <pre></pre> <p>Why is it so? What are the extra white spaces? Should I be concerned?</p>
[ { "AnswerId": "31231051", "CreationDate": "2015-07-05T13:51:02.973", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>What you're printing is a <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html\" rel=\"nofollow\"><code>numpy.ndarray</code></a>. By default they format themselves like this when printed.</p>\n\n<p>The output array is a floating point array because, by default, Theano uses floating point tensors.</p>\n\n<p>If you want to use integer tensors then you need to specify a <code>dtype</code>:</p>\n\n<pre><code>a = T.vector(dtype='int64')\n</code></pre>\n\n<p>Or use a bit of syntactic sugar:</p>\n\n<pre><code>a = T.lvector()\n</code></pre>\n\n<p>Compare your output with the output of the following:</p>\n\n<pre><code>print numpy.array([0, 2, 1026], dtype=numpy.float64)\nprint numpy.array([0, 2, 1026], dtype=numpy.int64)\n</code></pre>\n\n<p>You can change the default printing options of numpy using <a href=\"http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html\" rel=\"nofollow\"><code>numpy.set_printoptions</code></a>.</p>\n" } ]
31,247,616
1
<caffe>
2015-07-06T13:50:27.040
null
5,085,490
Caffe fails: relu_layer.cu: 29] check failed: error==cudaSuccess <9 vs 0> invalid configuration argument
<p>When I run my Caffe implementation, it fails as follows. </p> <p><strong>...relu_layer.cu: 29] check failed: error==cudaSuccess &lt;9 vs 0> invalid configuration argument</strong></p> <p>Note that it does not fail when running other Caffe examples (e.g. MNIST recognition). My GPU is an NVIDIA GeForce GT 620 with compute capability 2.1, on Windows 7.</p> <p>If you have come across similar questions, please let me know and give me some help.</p>
[ { "AnswerId": "39583565", "CreationDate": "2016-09-19T23:31:11.473", "ParentId": null, "OwnerUserId": "6850895", "Title": null, "Body": "<p>The driver is not the most updated. You can use <a href=\"http://www.nvidia.com/Download/Scan.aspx?lang=en-us\" rel=\"nofollow\">http://www.nvidia.com/Download/Scan.aspx?lang=en-us</a> \nto test your driver and update it. </p>\n" } ]
31,253,870
6
<c++><opencv><python-3.x><opencv3.0><caffe>
2015-07-06T19:17:09.283
null
1,561,108
Caffe: opencv error
<p>I've built OpenCV 3.0 from source and can run a few sample apps and build against the headers OK, so I presume it's installed successfully. </p> <p>I'm also using python3, and I now want to install and build caffe. I set a few variables in Makefile.config, as I'm using the CPU (due to having an AMD GPU) and also Anaconda.</p> <p>When I run make all I get this error:</p> <pre></pre> <p>From searching, I think this has something to do with using OpenCV 3, but I'm not sure where to start looking for a solution. Any help?</p> <p>And yes, I'm one of the horde of inexperienced users looking to fiddle with the Google Inception learning technique.</p>
[ { "AnswerId": "49849765", "CreationDate": "2018-04-16T04:42:07.530", "ParentId": null, "OwnerUserId": "7360523", "Title": null, "Body": "<p>The problem report is very clear. There is a problem with linking library libraries.The reason may be the difference between 3.0 and 2.x.\nYou need to add</p>\n\n<pre><code>opencv_core opencv_highgui opencv_imgproc opencv_imgcodecs\n</code></pre>\n\n<p>into LIBRARIES +=.</p>\n" }, { "AnswerId": "56014995", "CreationDate": "2019-05-07T03:08:50.383", "ParentId": null, "OwnerUserId": "11462352", "Title": null, "Body": "<p>You can edit <code>Makefile.config</code> with the following 2 lines like this and it worked for me. Note that your opencv path <strong>must be set before default path</strong>!</p>\n\n<pre><code>INCLUDE_DIRS := $(PYTHON_INCLUDE) /home/young/Soft/openCV-3.3.1/include \\\n /usr/local/include /usr/include/hdf5/serial\nLIBRARY_DIRS := $(PYTHON_LIB) /home/young/Soft/openCV-3.3.1/lib \\\n /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial \n</code></pre>\n" }, { "AnswerId": "31289673", "CreationDate": "2015-07-08T10:15:13.747", "ParentId": null, "OwnerUserId": "1561108", "Title": null, "Body": "<p>I used <code>cmake</code> instead with the <code>-DBUILD_TIFF=ON</code> flag and got a successful build.</p>\n" }, { "AnswerId": "41065595", "CreationDate": "2016-12-09T17:14:58.483", "ParentId": null, "OwnerUserId": "4234099", "Title": null, "Body": "<p>It could be that you are using OpenCV version 3. If yes just uncomment the following line in your <code>Makefile.config</code>:</p>\n\n<pre><code># OPENCV_VERSION := 3\n</code></pre>\n\n<p>So it will look like</p>\n\n<pre><code>OPENCV_VERSION := 3\n</code></pre>\n\n<p>You could verify the version currently in use by doing:</p>\n\n<pre><code>$ python\n&gt;&gt;&gt; import cv2\n&gt;&gt;&gt; cv2.__version__\n'3.1.0-dev'\n</code></pre>\n" }, { "AnswerId": "58792879", "CreationDate": "2019-11-10T20:59:45.993", "ParentId": null, "OwnerUserId": "8128190", "Title": null, "Body": "<p>A quick workaround is to add <code>-lopencv_imgcodecs</code> flag when you're compiling your code.<br>\nThis worked for me:<br>\n<code>g++ test.cpp -o test &lt;Some flags&gt; -lopencv_imgcodecs</code></p>\n" }, { "AnswerId": "32922800", "CreationDate": "2015-10-03T12:51:44.833", "ParentId": null, "OwnerUserId": "2297751", "Title": null, "Body": "<p>You can also add the <code>opencv_imgcodecs</code> to the MakeFile in line <a href=\"https://github.com/BVLC/caffe/pull/3140/files#diff-b67911656ef5d18c4ae36cb6741b7965L187\">187</a>, see this <a href=\"https://github.com/BVLC/caffe/pull/3140\">pull</a>.</p>\n" } ]
31,275,282
1
<python-3.x><neural-network><theano>
2015-07-07T17:18:30.010
31,295,512
2,077,723
theano function not updating parameters during gradient optimization in feed forward neural net
<p>Trying to get my feet wet with theano and deep nets by starting with a very simple implementation of a three layer feed forward neural network and testing it on the mnist data set. </p> <p>I am using a rudimentary implementation of stochastic gradient descent to start out with, and the network is not training properly. The parameters of the network are not being updated. </p> <p>Was wondering if anyone could point out what I'm doing wrong. </p> <p>The following code is my lstm module. I've called it that because I plan on implementing lstm networks in the future. </p> <pre></pre> <p>The following code is where I create, train, and test a simple three-layer feed forward network on the mnist data set. </p> <pre></pre> <p>The problem I'm facing is that the parameters are not being updated properly. I'm not sure if that's because I'm not calculating the gradient properly, or if I'm not using the theano function correctly. </p>
[ { "AnswerId": "31295512", "CreationDate": "2015-07-08T14:21:59.527", "ParentId": null, "OwnerUserId": "1985353", "Title": null, "Body": "<p>You have to make more than one pass on the dataset when using stochastic gradient descent.\nIt is not unusual that the classification error and the confusion matrix do not change much during the first epoch, especially if the dataset is small.</p>\n\n<p>I made the following change in your code to train for 100 epochs</p>\n\n<pre><code>for i in xrange(100):\n for X, y in zip(X_train, labels_train):\n c = update(X, y)\n</code></pre>\n\n<p>The confusion matrix seems to have started improving:</p>\n\n<pre><code>[[ 0 0 18 0 13 4 5 0 5 0]\n [ 0 42 0 2 0 0 0 0 2 0]\n [ 0 0 51 0 0 0 0 1 0 0]\n [ 0 0 0 45 0 1 0 1 2 0]\n [ 0 0 0 0 33 0 0 0 0 0]\n [ 0 0 0 0 0 47 0 0 0 0]\n [ 0 0 0 0 0 0 45 0 0 0]\n [ 0 0 0 0 1 0 0 48 0 0]\n [ 0 2 1 0 0 0 0 0 34 0]\n [ 0 1 0 25 0 3 0 2 16 0]]\n</code></pre>\n" } ]
31,277,820
0
<caffe><atlas>
2015-07-07T19:37:39.583
null
5,064,653
Error building caffe - undefined reference to cblas_sgemv
<p>I am trying to build caffe using Atlas, CUDA and cuDNN on Fedora 22 and I'm getting this error.</p> <pre></pre> <p>The fix mentioned <a href="https://github.com/BVLC/caffe/issues/2348" rel="nofollow">here</a> is to add <em>opencv_imgcodecs</em> to the list of libraries but I'm still getting the same error.</p> <p>Is this problem related to Atlas or is the fix in the Makefile? I had to edit FindAtlas.cmake earlier to get it to find the libraries on my system.</p>
[]
31,286,024
4
<python><neural-network><artificial-intelligence><caffe><deep-dream>
2015-07-08T07:22:25.923
null
3,792,198
DeepDream taking too long to render image
<p>I managed to install #DeepDream on my server.</p> <p>I have a dual-core CPU and 2 GB of RAM, but it takes 1 minute to process an image of size 100 KB.</p> <p>Any advice?</p>
[ { "AnswerId": "31620288", "CreationDate": "2015-07-24T21:46:55.907", "ParentId": null, "OwnerUserId": "2353472", "Title": null, "Body": "<p>Unless you can move to a better workstation/get a GPU, you'll have to do with resizing the image.</p>\n\n<pre><code>img = PIL.Image.open('sky1024px.jpg')\nimg = np.float32(img.resize( [int(0.5 * s) for s in img.size] ))\n</code></pre>\n" }, { "AnswerId": "36781301", "CreationDate": "2016-04-21T22:07:09.870", "ParentId": null, "OwnerUserId": "4709102", "Title": null, "Body": "<p>Taking 1 minute to process a 100kb image is a sensible turnaround time for #deepdream, and we accept that these renders have an incredibly long baking time. Often, experimental research software will run too slow, hungry for a future of faster computers. That said, there are a couple ways that come to mind about making your setup execute faster.</p>\n\n<ul>\n<li><p>Thread! Increase thread count with a command line argument. Here's one way to enable multi-threading in Caffe <a href=\"https://stackoverflow.com/questions/31395729/how-to-enable-multithreading-with-caffe\">How to enable multithreading with Caffe?</a></p></li>\n<li><p>GPU! Install CUDA and switch from CPU rendering to GPU rendering. If your server doesn't have a special GPU, try getting a GPU instance on amazon ec2. <a href=\"https://github.com/BVLC/caffe/wiki/Install-Caffe-on-EC2-from-scratch-(Ubuntu,-CUDA-7,-cuDNN)\" rel=\"nofollow noreferrer\">https://github.com/BVLC/caffe/wiki/Install-Caffe-on-EC2-from-scratch-(Ubuntu,-CUDA-7,-cuDNN)</a></p></li>\n</ul>\n" }, { "AnswerId": "31430476", "CreationDate": "2015-07-15T12:31:56.650", "ParentId": null, "OwnerUserId": "3533322", "Title": null, "Body": "<p>Do you run it in a Virtual Machine on Windows or OS X? If so, then it's probably not going to work any faster. In a Virtual Machine (I'm using Docker) you're most of the time not able to use CUDA to render the Images. I have the same problem and I'm going to try it by installing Ubuntu and then install the NVidia drivers for CUDA. At the moment I'm rendering 1080p images which are around 300kb and it takes 15 minutes to do 1 image on an Intel core i7 with 8gb of ram. </p>\n" }, { "AnswerId": "39757574", "CreationDate": "2016-09-28T21:00:31.093", "ParentId": null, "OwnerUserId": "5256795", "Title": null, "Body": "<p>As a rule of thumb deep learning is hard on both compute and memory resources. A 2gb RAM Core Duo machine is just not a good choice for deep learning. Keep in mind a lot of the people who pioneered this field did much of their research using GTX Titan cards because CPU computation even on xeon servers is prohibitivly slow when training deep learning networks.</p>\n" } ]
31,288,156
0
<machine-learning><neural-network><deep-learning><caffe><conv-neural-network>
2015-07-08T09:07:14.543
null
1,348,187
Convolutional Neural Networks with Caffe and NEGATIVE IMAGES
<p>When training a set of classes (let's say #classes <em>(number of classes)</em> = N) on Caffe Deep Learning (or any CNN framework) and I make a query to the <em>caffemodel</em>, I get a probability of how likely that image is to match each class.</p> <p>So, let's take a picture similar to Class 1, and I get the result:</p> <blockquote> <p>1.- 90%</p> <p>2.- 10%</p> <p>rest... 0%</p> </blockquote> <p>The problem is: when I take a random picture (for example, of my environment), <strong>I keep getting the same result</strong>, where one of the classes is predominant (>90% probability) even though the picture doesn't belong to any class.</p> <p>So what I'd like to hear is opinions/answers from people who have experienced this and have worked out how to deal with nonsense inputs to the neural network.</p> <p>My proposals are:</p> <ol> <li>Train one extra class with negative images (like with <a href="http://docs.opencv.org/doc/user_guide/ug_traincascade.html" rel="nofollow">train_cascade</a>).</li> <li>Train one extra class with all the positive images in the TRAIN set, and the negative ones in the VAL set.</li> </ol> <p>But these proposals don't have any scientific basis, and that's why I ask you this question.</p> <p>What would you do?</p> <p>Thank you very much in advance.</p> <p>Rafael.</p>
[]
31,297,745
1
<c++><boost><caffe>
2015-07-08T15:54:00.313
31,299,048
3,742,823
Installing caffe on Yosemite Boost Error
<p>I am trying to install caffe on Yosemite and I am getting the following error:</p> <p></p> <p>As suggested by this blog post, <a href="https://stackoverflow.com/questions/30745837/compiling-caffe-on-yosemite">compiling caffe on Yosemite</a>, I downgraded Boost to v1.57. </p> <p>Any suggestions on how to move forward?</p>
[ { "AnswerId": "31299048", "CreationDate": "2015-07-08T16:59:54.200", "ParentId": null, "OwnerUserId": "3742823", "Title": null, "Body": "<p>This solved the issue:</p>\n\n<pre><code>make clean\nmake all\n</code></pre>\n" } ]
31,304,452
0
<python-2.7><numpy><bayesian><theano><pymc3>
2015-07-08T21:49:51.363
null
5,095,886
Using the pymc3 likelihood/posterior outside of pymc3: how?
<p>For comparison purposes, I want to utilize the posterior density function outside of PyMC3.</p> <p>For my research project, I want to find out how well PyMC3 is performing compared to my own custom made code. As such, I need to compare it to our own in-house samplers and likelihood functions.</p> <p>I think I figured out how to call the internal PyMC3 posterior, but it feels very awkward, and I want to know if there is a better way. Right now I am hand-transforming variables, whereas I should just be able to pass pymc a parameter dictionary and get the posterior density. Is this possible in a straightforward manner?</p> <p>Thanks a lot!</p> <p>Demo code:</p> <pre></pre>
[]
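One way to evaluate the joint log-posterior outside the sampler is the model's logp function, which takes a plain parameter dictionary. This is a minimal stand-alone sketch on a toy model (the model, names, and priors are assumptions here, not taken from the question):

<pre><code>import numpy as np
import pymc3 as pm

# Toy model: Normal likelihood with an unknown mean.
data = np.random.randn(100)

with pm.Model() as model:
    mu = pm.Normal('mu', mu=0.0, sd=10.0)
    pm.Normal('obs', mu=mu, sd=1.0, observed=data)

# model.logp maps a dict of parameter values to the joint log-density,
# so it can be compared directly against an in-house implementation.
print(model.logp({'mu': 0.5}))
print(model.logp(model.test_point))   # the model's default starting point
</code></pre>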
31,304,601
1
<arrays><lua><torch>
2015-07-08T21:59:55.900
31,305,512
1,082,019
Torch / Lua, how to select a subset of an array or tensor?
<p>I'm working on Torch/Lua and have an array of 10 elements.</p> <pre></pre> <p>If I write , I can read the structure of the 1st element of the array.</p> <pre></pre> <p>I need to select just 3 elements among all the 10, but I don't know which command to use. If I were working on Matlab, I would write: , but here does not work.</p> <p>Do you have any suggestions?</p>
[ { "AnswerId": "31305512", "CreationDate": "2015-07-08T23:20:33.693", "ParentId": null, "OwnerUserId": "2726734", "Title": null, "Body": "<h1>In Torch</h1>\n\n<pre><code>th&gt; x = torch.Tensor{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}\n</code></pre>\n\n<p>To select a range, like the first three, use <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor--dim1dim2--or--dim1sdim1e-dim2sdim2e-\" rel=\"noreferrer\">the index operator</a>:</p>\n\n<pre><code>th&gt; x[{{1,3}}]\n1\n2\n3\n</code></pre>\n\n<p>Where 1 is the 'start' index, and 3 is the 'end' index. </p>\n\n<p>See <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#extracting-sub-tensors\" rel=\"noreferrer\">Extracting Sub-tensors</a> for more alternatives using Tensor.sub and Tensor.narrow </p>\n\n<hr>\n\n<h1>In Lua 5.2 or less</h1>\n\n<p>Lua tables, such as your <code>dataset</code> variable, do not have a method for selecting sub-ranges.</p>\n\n<pre><code>function subrange(t, first, last)\n local sub = {}\n for i=first,last do\n sub[#sub + 1] = t[i]\n end\n return sub\nend\n\ndataset = {11,12,13,14,15,16,17,18,19,20}\n\nsub = subrange(dataset, 1, 3)\nprint(unpack(sub))\n</code></pre>\n\n<p>which prints</p>\n\n<pre><code>11 12 13\n</code></pre>\n\n<hr>\n\n<h1>In Lua 5.3</h1>\n\n<p>In Lua 5.3 you can use <a href=\"http://www.lua.org/manual/5.3/manual.html#pdf-table.move\" rel=\"noreferrer\"><code>table.move</code></a>.</p>\n\n<pre><code>function subrange(t, first, last)\n return table.move(t, first, last, 1, {})\nend\n</code></pre>\n" } ]
31,304,860
1
<lua><torch><mnist>
2015-07-08T22:23:16.043
31,320,458
225,814
Torch: luajit out of memory on simple task
<p>I am trying to load the MNIST dataset in the repl and do mean subtraction by the following:</p> <pre></pre> <p>The last line causes the following error:</p> <p></p> <p>I am running this on a laptop with 16GB of RAM. Also, MNIST has already been loaded into , so I'm not sure why doing would cause this issue. Any ideas?</p> <p>Thanks</p>
[ { "AnswerId": "31320458", "CreationDate": "2015-07-09T14:27:53.000", "ParentId": null, "OwnerUserId": "225814", "Title": null, "Body": "<p>The problem was that it was trying to print the whole matrix (which is large) to the console. </p>\n\n<p>This can be overcome by doing either\n<code>data = data:add(-mean)</code>\nor\n<code>data:add(-mean);</code> - notice the semicolon </p>\n\n<p>Answer provided by Soumith Chintala on the torch gitter.</p>\n" } ]
31,307,181
1
<protocol-buffers><caffe>
2015-07-09T02:38:18.153
null
3,742,823
Installation issue with protobuf and caffe
<p>From the last 2 days, I have been trying to install caffe on OSX 10.10 </p> <p>I was able to run all the installation commands successfully for caffe but when I tried to import caffe in ipython I got the exact same error: <a href="https://github.com/BVLC/caffe/issues/2092" rel="nofollow">https://github.com/BVLC/caffe/issues/2092</a> </p> <p>So, as suggested in the thread, I tried to downgrade from 3.0.0 to 2.6.1. I was successfully able to install and the new version does say 2.6.1 </p> <p>But now I not able to install python library. I am following the instructions mentioned here: <a href="https://github.com/google/protobuf/tree/v2.6.1/python" rel="nofollow">https://github.com/google/protobuf/tree/v2.6.1/python</a> I get the following error while running the command :</p> <blockquote> <p>from google.protobuf import descriptor_pb2</p> <p>File "/path/to/protobuf-2.6.1/python/google/protobuf/descriptor_pb2.py", line 21, in module></p> <p>80\x80\x80\x02\"}\n\x10\x45numValueOptions\x12\x19\n\ndeprecated\x18\x01 \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"{\n\x0eServiceOptions\x12\x19\n\ndeprecated\x18! \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"z\n\rMethodOptions\x12\x19\n\ndeprecated\x18! \x01(\x08:\x05\x66\x61lse\x12\x43\n\x14uninterpreted_option\x18\xe7\x07 \x03(\x0b\x32$.google.protobuf.UninterpretedOption*\t\x08\xe8\x07\x10\x80\x80\x80\x80\x02\"\x9e\x02\n\x13UninterpretedOption\x12;\n\x04name\x18\x02 \x03(\x0b\x32-.google.protobuf.UninterpretedOption.NamePart\x12\x18\n\x10identifier_value\x18\x03 \x01(\t\x12\x1a\n\x12positive_int_value\x18\x04 \x01(\x04\x12\x1a\n\x12negative_int_value\x18\x05 \x01(\x03\x12\x14\n\x0c\x64ouble_value\x18\x06 \x01(\x01\x12\x14\n\x0cstring_value\x18\x07 \x01(\x0c\x12\x17\n\x0f\x61ggregate_value\x18\x08 \x01(\t\x1a\x33\n\x08NamePart\x12\x11\n\tname_part\x18\x01 \x02(\t\x12\x14\n\x0cis_extension\x18\x02 \x02(\x08\"\xb1\x01\n\x0eSourceCodeInfo\x12:\n\x08location\x18\x01 \x03(\x0b\x32(.google.protobuf.SourceCodeInfo.Location\x1a\x63\n\x08Location\x12\x10\n\x04path\x18\x01 \x03(\x05\x42\x02\x10\x01\x12\x10\n\x04span\x18\x02 \x03(\x05\x42\x02\x10\x01\x12\x18\n\x10leading_comments\x18\x03 \x01(\t\x12\x19\n\x11trailing_comments\x18\x04 \x01(\tB)\n\x13\x63om.google.protobufB\x10\x44\x65scriptorProtosH\x01')</p> <p>TypeError: <strong>init</strong>() got an unexpected keyword argument 'syntax'</p> </blockquote> <p>There isn't much on Google. Please help.</p>
[ { "AnswerId": "35203808", "CreationDate": "2016-02-04T14:35:07.503", "ParentId": null, "OwnerUserId": "834565", "Title": null, "Body": "<p>I'm using <a href=\"http://brew.sh/\" rel=\"nofollow\">brew</a> and pip and things work out fine, try a fresh install for protobuf 2.6.0 as follows :</p>\n\n<pre><code># First, uninstall protobuf\n# Then let's install protobuf 2.6.0 for Mac\nbrew install homebrew/versions/protobuf260\n# And install the corresponding python library version\npip install protobuf==2.6.0\n</code></pre>\n\n<p>I used 2.6.0 here because 2.6.1 seems not yet available on brew.</p>\n" } ]
31,316,455
1
<python><machine-learning><theano>
2015-07-09T11:44:03.953
31,318,434
2,889,087
Deconvolutional autoencoder in theano
<p>I'm new to theano and trying to use the examples <a href="http://deeplearning.net/tutorial/lenet.html" rel="nofollow">convolutional network</a> and <a href="http://deeplearning.net/tutorial/dA.html" rel="nofollow">denoising autoencoder</a> to make a denoising convolutional network. I am currently struggling with how to make W', the reverse weights. <a href="http://people.idsia.ch/~masci/papers/2011_icann.pdf" rel="nofollow">In this paper</a> they use tied weights for W' that are flipped in both dimensions.</p> <p>I'm currently working on a 1d signal, so my image shape is (batch_size, 1, 1, 1000) and filter/W size is (num_kernels, 1, 1, 10) for example. The output of the convolution is then (batch_size, num_kernels, 1, 991). Since I want to W' to be just the flipped in 2 dimensions (or 1d in my case), I'm tempted to do this</p> <pre></pre> <p>where I reverse flip it in the relevant dimension and repeat those weights so that they are the same dimension as the feature maps from the hidden layer.</p> <p>With this setup, do I only have to get the gradients for W to update or should W_prime also be a part of the grad computation?</p> <p>When I do it like this, the MSE drops a lot after the first minibatch and then stops changing. Using cross entropy gives NaN from the first iteration. I don't know if that is related to this issue or if it's one of many other potential bugs I have in my code.</p>
[ { "AnswerId": "31318434", "CreationDate": "2015-07-09T13:07:05.673", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>I can't comment on the validity of your <code>W_prime</code> approach but I can say that you only need to compute the gradient of the cost with respect to each of the original shared variables. Your <code>W_prime</code> is a symbolic function of <code>W</code>, not a shared variable itself so you don't need to compute gradients with respect to <code>W_prime</code>.</p>\n\n<p>Whenever you get NaNs, the first thing to try is to reduce the size of the learning rate.</p>\n" } ]
31,319,023
1
<python><theano>
2015-07-09T13:31:02.553
31,319,619
133,374
Theano advanced indexing for tensor, shared index
<p>I have a tensor with .</p> <p>And I have a tensor with where the values are label indices, i.e. for the third dimension in .</p> <p>Now I want to get a tensor with where the third dimension is the index in . Basically</p> <pre></pre> <p>for all .</p> <p>How can I achieve this?</p> <p>A similar problem with solution was posted <a href="https://stackoverflow.com/a/31043630/133374">here</a>.</p> <p>The solution there, if I understand correctly, would be:</p> <pre></pre> <p>But that doesn't seem to work. I get: .</p> <p>Also, isn't the creation of the temporary a bit costly? Especially when I try to work around it by really making it a full dense integer array. There should be a better way.</p> <p>Maybe ? But as far as I understand, that doesn't parallelize the code, so this is also not a solution.</p>
[ { "AnswerId": "31319619", "CreationDate": "2015-07-09T13:53:18.760", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>This works for me:</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\nmax_time, num_batches, num_labels = 3, 4, 6\nmax_seq_len = 5\n\nprobs_ = np.arange(max_time * num_batches * num_labels).reshape(\n max_time, num_batches, num_labels)\n\ntargets_ = np.arange(num_batches * max_seq_len).reshape(max_seq_len, \n num_batches) % (num_batches - 1) # mix stuff up\n\nprobs, targets = map(theano.shared, (probs_, targets_))\n\nprint probs_\nprint targets_\n\nprobs_y = probs[:, T.arange(targets.shape[1])[:, np.newaxis], targets.T]\n\nprint probs_y.eval()\n</code></pre>\n\n<p>Above used a transposed version of your indices. Your exact proposition also works</p>\n\n<pre><code>probs_y2 = probs[:, T.arange(targets.shape[1])[np.newaxis, :], targets]\n\nprint probs_y2.eval()\nprint (probs_y2.dimshuffle(0, 2, 1) - probs_y).eval()\n</code></pre>\n\n<p>So maybe your problem is somewhere else.</p>\n\n<p>As for speed, I am at a loss as to what could be faster than this. <code>map</code>, which is a specialization of <code>scan</code> almost certainly is not. I do not know to what extent the <code>arange</code> is actually built rather than simply iterated over.</p>\n" } ]
31,324,404
0
<python><c++><caffe>
2015-07-09T17:30:18.727
null
1,452,257
Preventing C++ error check from stopping Python
<p>I'm using <a href="http://caffe.berkeleyvision.org/" rel="nofollow">caffe</a> for training convolutional neural networks. Sometimes the training will diverge, which caffe will find using <a href="https://github.com/BVLC/caffe/pull/1479" rel="nofollow">one of the error checks</a>. However, this check will stop both the C++ and Python scripts. How can I catch that C++ has stopped in Python (response in log is: with the previous line refering to the previously mentioned error check) and continue the Python script?</p>
[]
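A common workaround (an assumption here, since the question has no answers) is to run the training as a separate process, so that a fatal glog check aborts only the child and the Python script can react to the exit code; the command line below is a placeholder:

<pre><code>import subprocess

# Placeholder command line; point it at your own solver file.
cmd = ['caffe', 'train', '--solver=solver.prototxt']

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
log, _ = proc.communicate()

if proc.returncode != 0:
    # The run was killed (e.g. training diverged); inspect the log and
    # continue, for instance by restarting with a lower learning rate.
    print('caffe exited with code %d' % proc.returncode)
else:
    print('training finished cleanly')
</code></pre>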
31,324,633
1
<lua><parallel-processing><neural-network><torch>
2015-07-09T17:42:46.753
31,403,876
1,082,019
Torch / Lua, how to correctly implement minibatch training in a siamese neural network?
<p>I'm still working on my implementation of a siamese neural network in Torch, as mentioned in some of my previous questions. I finally got a good working implementation of it, but now I'd like to add mini-batch training. That is, I would like to train the siamese neural network with a set of training elements, instead of using just one.</p> <p>Unfortunately, my implementation for 2 minibatches does not work. There's a problem in the back-propagation of the error that I cannot solve. Here's the main architecture:</p> <pre></pre> <p>I have an upper neural network, put together with a lower neural network. They are all inserted into a parallel table. This parallel table is then inserted into a perceptron. The same is done for a second parallel table. Then the two parallel-table perceptrons are put together into a general parallel table, which is inserted into a general perceptron.</p> <p>I think this architecture is right, but I'm missing something with the gradient_update function.</p> <p>Here's my code:</p> <pre></pre> <p>The problem comes with the call to the backwards() function. Possibly there's a problem in the dimensions...</p> <p>Do you have any ideas on how to solve this?</p>
[ { "AnswerId": "31403876", "CreationDate": "2015-07-14T10:19:14.000", "ParentId": null, "OwnerUserId": "1688185", "Title": null, "Body": "<blockquote>\n <p>The problem comes with the call to backwards() function. Possibly there's a problem in the dimensions...</p>\n</blockquote>\n\n<p>Technically speaking regarding the structure of <code>perceptron_general</code> when you perform a backward the 2nd argument (= <code>gradOutput</code>) should be <strong>a table made of 2 x 1D tensors</strong> (i.e. one <code>gradOutput</code> per branch of your top parallel table) which gives something like:</p>\n\n<pre><code>gradientWrtOutput = {\n torch.Tensor{realTarget[1]},\n torch.Tensor{realTarget[2]}\n}\n</code></pre>\n\n<p><em>Note: right after there is another error within your main training loop.</em></p>\n" } ]
31,324,739
2
<python><c++><neural-network><deep-learning><caffe>
2015-07-09T17:48:20.607
31,847,179
1,452,257
Finding gradient of a Caffe conv-filter with regards to input
<p>I need to find the gradient with respect to the input layer for a single convolutional filter in a convolutional neural network (CNN) as a way to <a href="http://research.google.com/pubs/pub38115.html" rel="nofollow noreferrer">visualize the filters</a>.<br> Given a trained network in the Python interface of <a href="http://caffe.berkeleyvision.org/" rel="nofollow noreferrer">Caffe</a> such as the one in <a href="https://github.com/BVLC/caffe/blob/master/examples/01-learning-lenet.ipynb" rel="nofollow noreferrer">this example</a>, how can I then find the gradient of a conv-filter with respect to the data in the input layer?</p> <p><strong>Edit:</strong></p> <p>Based on the <a href="https://stackoverflow.com/a/31349941/1714410">answer by cesans</a>, I added the code below. The dimensions of my input layer are . My first conv-layer, , has 11 filters with a size of , resulting in the dimensions .</p> <pre></pre> <p>As you can see from the output, the dimensions of the arrays returned by are equal to the dimensions of my layers in Caffe. After some testing I've found that this output contains the gradients of the loss with respect to the layer and the layer, respectively.</p> <p>However, my question was how to find the gradient of a single conv-filter with respect to the data in the input layer, which is something else. How can I achieve this?</p>
[ { "AnswerId": "31349941", "CreationDate": "2015-07-10T20:42:29.847", "ParentId": null, "OwnerUserId": "1899150", "Title": null, "Body": "<p>You can get the gradients in terms of any layer when you run the <code>backward()</code> pass. Just specify the list of layers when calling the function. To show the gradients in terms of the data layer:</p>\n\n<pre><code>net.forward()\ndiffs = net.backward(diffs=['data', 'conv1'])`\ndata_point = 16\nplt.imshow(diffs['data'][data_point].squeeze())\n</code></pre>\n\n<p>In some cases you may want to force all layers to carry out backward, look at the <code>force_backward</code> parameter of the model.</p>\n\n<p><a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto\">https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto</a></p>\n" }, { "AnswerId": "31847179", "CreationDate": "2015-08-06T05:02:05.603", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Caffe net juggles two \"streams\" of numbers.<br>\nThe first is the data \"stream\": images and labels pushed through the net. As these inputs progress through the net they are converted into high-level representation and eventually into class probabilities vectors (in classification tasks).<br>\nThe second \"stream\" holds the parameters of the different layers, the weights of the convolutions, the biases etc. These numbers/weights are changed and learned during the train phase of the net.</p>\n\n<p>Despite the fundamentally different role these two \"streams\" play, caffe nonetheless use the same data structure, <code>blob</code>, to store and manage them.<br>\nHowever, for each layer there are two <strong>different</strong> blobs vectors one for each stream.</p>\n\n<p>Here's an example that I hope would clarify:</p>\n\n<pre><code>import caffe\nsolver = caffe.SGDSolver( PATH_TO_SOLVER_PROTOTXT )\nnet = solver.net\n</code></pre>\n\n<p>If you now look at</p>\n\n<pre><code>net.blobs\n</code></pre>\n\n<p>You will see a dictionary storing a \"caffe blob\" object for each layer in the net. Each blob has storing room for both data and gradient</p>\n\n<pre><code>net.blobs['data'].data.shape # &gt;&gt; (32, 3, 224, 224)\nnet.blobs['data'].diff.shape # &gt;&gt; (32, 3, 224, 224)\n</code></pre>\n\n<p>And for a convolutional layer:</p>\n\n<pre><code>net.blobs['conv1/7x7_s2'].data.shape # &gt;&gt; (32, 64, 112, 112)\nnet.blobs['conv1/7x7_s2'].diff.shape # &gt;&gt; (32, 64, 112, 112)\n</code></pre>\n\n<p><code>net.blobs</code> holds the first data stream, it's shape matches that of the input images up to the resulting class probability vector.</p>\n\n<p>On the other hand, you can see another member of <code>net</code></p>\n\n<pre><code>net.layers\n</code></pre>\n\n<p>This is a caffe vector storing the parameters of the different layers.<br>\nLooking at the first layer (<code>'data'</code> layer):</p>\n\n<pre><code>len(net.layers[0].blobs) # &gt;&gt; 0\n</code></pre>\n\n<p>There are no parameters to store for an input layer.<br>\nOn the other hand, for the first convolutional layer</p>\n\n<pre><code>len(net.layers[1].blobs) # &gt;&gt; 2\n</code></pre>\n\n<p>The net stores one blob for the filter weights and another for the constant bias. Here they are</p>\n\n<pre><code>net.layers[1].blobs[0].data.shape # &gt;&gt; (64, 3, 7, 7)\nnet.layers[1].blobs[1].data.shape # &gt;&gt; (64,)\n</code></pre>\n\n<p>As you can see, this layer performs 7x7 convolutions on 3-channel input image and has 64 such filters.</p>\n\n<p>Now, how to get the gradients? 
Well, as you noted</p>\n\n<pre><code>diffs = net.backward(diffs=['data','conv1/7x7_s2'])\n</code></pre>\n\n<p>Returns the gradients of the <em>data</em> stream. We can verify this by</p>\n\n<pre><code>np.all( diffs['data'] == net.blobs['data'].diff ) # &gt;&gt; True\nnp.all( diffs['conv1/7x7_s2'] == net.blobs['conv1/7x7_s2'].diff ) # &gt;&gt; True\n</code></pre>\n\n<p>(<strong>TL;DR</strong>) You want the gradients of the parameters; these are stored in <code>net.layers</code> with the parameters:</p>\n\n<pre><code>net.layers[1].blobs[0].diff.shape # &gt;&gt; (64, 3, 7, 7)\nnet.layers[1].blobs[1].diff.shape # &gt;&gt; (64,)\n</code></pre>\n\n<hr>\n\n<p>To help you map between the names of the layers and their indices into the <code>net.layers</code> vector, you can use <code>net._layer_names</code>. </p>\n\n<hr>\n\n<p><strong>Update</strong> regarding the use of gradients to visualize filter responses:<br>\nA gradient is normally defined for a <strong>scalar</strong> function. The loss is a scalar, and therefore you can speak of the gradient of a pixel/filter weight with respect to the scalar loss. This gradient is a single number per pixel/filter weight.<br>\nIf you want to get the input that results in maximal activation of a <strong>specific</strong> internal hidden node, you need an \"auxiliary\" net whose loss is exactly a measure of the activation of the specific hidden node you want to visualize. Once you have this auxiliary net, you can start from an arbitrary input and change this input based on the gradients of the auxiliary loss with respect to the input layer: </p>\n\n<pre><code>update = prev_in + lr * net.blobs['data'].diff\n</code></pre>\n" } ]
31,326,015
7
<cuda><computer-vision><caffe><conv-neural-network><cudnn>
2015-07-09T18:58:39.783
31,349,250
3,785,114
How to verify CuDNN installation?
<p>I have searched many places but ALL I get is HOW to install it, not how to verify that it is installed. I can verify my NVIDIA driver is installed, and that CUDA is installed, but I don't know how to verify CuDNN is installed. Help will be much appreciated, thanks!</p> <p>PS.<br> This is for a caffe implementation. Currently everything is working without CuDNN enabled.</p>
[ { "AnswerId": "46200018", "CreationDate": "2017-09-13T14:20:49.950", "ParentId": null, "OwnerUserId": "249226", "Title": null, "Body": "<p>You first need to find the installed cudnn file and then parse this file. To find the file, you can use:</p>\n\n<pre><code>whereis cudnn.h\nCUDNN_H_PATH=$(whereis cudnn.h)\n</code></pre>\n\n<p>If that doesn't work, see \"Redhat distributions\" below.</p>\n\n<p>Once you find this location you can then do the following (replacing <code>${CUDNN_H_PATH}</code> with the path):</p>\n\n<pre><code>cat ${CUDNN_H_PATH} | grep CUDNN_MAJOR -A 2\n</code></pre>\n\n<p>The result should look something like this:</p>\n\n<pre><code>#define CUDNN_MAJOR 7\n#define CUDNN_MINOR 5\n#define CUDNN_PATCHLEVEL 0\n--\n#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)\n</code></pre>\n\n<p>Which means the version is 7.5.0.</p>\n\n<h1>Ubuntu 18.04 (via sudo apt install nvidia-cuda-toolkit)</h1>\n\n<p>This method of installation installs cuda in /usr/include and /usr/lib/cuda/lib64, hence the file you need to look at is in /usr/include/cudnn.h.</p>\n\n<pre><code>CUDNN_H_PATH=/usr/include/cudnn.h\ncat ${CUDNN_H_PATH} | grep CUDNN_MAJOR -A 2\n</code></pre>\n\n<h1>Debian and Ubuntu</h1>\n\n<p>From CuDNN v5 onwards (at least when you install via <code>sudo dpkg -i &lt;library_name&gt;.deb</code> packages), it looks like you might need to use the following:</p>\n\n<pre><code>cat /usr/include/x86_64-linux-gnu/cudnn_v*.h | grep CUDNN_MAJOR -A 2\n</code></pre>\n\n<p>For example:</p>\n\n<pre><code>$ cat /usr/include/x86_64-linux-gnu/cudnn_v*.h | grep CUDNN_MAJOR -A 2 \n#define CUDNN_MAJOR 6\n#define CUDNN_MINOR 0\n#define CUDNN_PATCHLEVEL 21\n--\n#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)\n\n#include \"driver_types.h\"\n</code></pre>\n\n<p>indicates that CuDNN version 6.0.21 is installed.</p>\n\n<h1>Redhat distributions</h1>\n\n<p>On CentOS, I found the location of CUDA with:</p>\n\n<pre><code>$ whereis cuda\ncuda: /usr/local/cuda\n</code></pre>\n\n<p>I then used the procedure about on the cudnn.h file that I found from this location:</p>\n\n<pre><code>$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2\n</code></pre>\n" }, { "AnswerId": "48928917", "CreationDate": "2018-02-22T13:41:41.180", "ParentId": null, "OwnerUserId": "9277085", "Title": null, "Body": "<p>Run <code>./mnistCUDNN</code> in <code>/usr/src/cudnn_samples_v7/mnistCUDNN</code>\n<p>Here is an example:</p>\n\n<pre><code>cudnnGetVersion() : 7005 , CUDNN_VERSION from cudnn.h : 7005 (7.0.5)\nHost compiler version : GCC 5.4.0\nThere are 1 CUDA capable devices on your machine :\ndevice 0 : sms 30 Capabilities 6.1, SmClock 1645.0 Mhz, MemSize (Mb) 24446, MemClock 4513.0 Mhz, Ecc=0, boardGroupID=0\nUsing device 0\n</code></pre>\n" }, { "AnswerId": "49590972", "CreationDate": "2018-03-31T18:20:34.170", "ParentId": null, "OwnerUserId": "843442", "Title": null, "Body": "<p>When installing on ubuntu via <code>.deb</code> you can use <code>sudo apt search cudnn | grep installed</code></p>\n" }, { "AnswerId": "36978616", "CreationDate": "2016-05-02T08:56:47.373", "ParentId": null, "OwnerUserId": "562769", "Title": null, "Body": "<p>The installation of CuDNN is just copying some files. 
Hence to check if CuDNN is installed (and which version you have), you only need to check those files.</p>\n\n<h2>Install CuDNN</h2>\n\n<p>Step 1: Register an nvidia developer account and <a href=\"https://developer.nvidia.com/cudnn\" rel=\"noreferrer\">download cudnn here</a> (about 80 MB). You might need <code>nvcc --version</code> to get your cuda version.</p>\n\n<p>Step 2: Check where your cuda installation is. For most people, it will be <code>/usr/local/cuda/</code>. You can check it with <code>which nvcc</code>.</p>\n\n<p>Step 3: Copy the files:</p>\n\n<pre><code>$ cd folder/extracted/contents\n$ sudo cp include/cudnn.h /usr/local/cuda/include\n$ sudo cp lib64/libcudnn* /usr/local/cuda/lib64\n$ sudo chmod a+r /usr/local/cuda/lib64/libcudnn*\n</code></pre>\n\n<h2>Check version</h2>\n\n<p>You might have to adjust the path. See step 2 of the installation.</p>\n\n<pre><code>$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2\n</code></pre>\n\n<h2>Notes</h2>\n\n<p>When you get an error like</p>\n\n<pre><code>F tensorflow/stream_executor/cuda/cuda_dnn.cc:427] could not set cudnn filter descriptor: CUDNN_STATUS_BAD_PARAM\n</code></pre>\n\n<p>with TensorFlow, you might consider using CuDNN v4 instead of v5.</p>\n\n<p><strong>Ubuntu users who installed it via <code>apt</code></strong>: <a href=\"https://askubuntu.com/a/767270/10425\">https://askubuntu.com/a/767270/10425</a></p>\n" }, { "AnswerId": "31349250", "CreationDate": "2015-07-10T19:56:28.280", "ParentId": null, "OwnerUserId": "1899150", "Title": null, "Body": "<p>Installing CuDNN just involves placing the files in the CUDA directory. If you have specified the routes and the CuDNN option correctly while installing caffe it will be compiled with CuDNN.</p>\n\n<p>You can check that using <code>cmake</code>. Create a directory <code>caffe/build</code> and run <code>cmake ..</code> from there. 
If the configuration is correct you will see these lines:</p>\n\n<pre><code>-- Found cuDNN (include: /usr/local/cuda-7.0/include, library: /usr/local/cuda-7.0/lib64/libcudnn.so)\n\n-- NVIDIA CUDA:\n-- Target GPU(s) : Auto\n-- GPU arch(s) : sm_30\n-- cuDNN : Yes\n</code></pre>\n\n<p>If everything is correct just run the <code>make</code> orders to install caffe from there.</p>\n" }, { "AnswerId": "47436840", "CreationDate": "2017-11-22T14:12:13.637", "ParentId": null, "OwnerUserId": "5468983", "Title": null, "Body": "<p><strong>To check installation of CUDA, run below command</strong>, if it’s installed properly then below command will not throw any error and will print correct version of library.</p>\n\n<pre><code>function lib_installed() { /sbin/ldconfig -N -v $(sed 's/:/ /' &lt;&lt;&lt; $LD_LIBRARY_PATH) 2&gt;/dev/null | grep $1; }\nfunction check() { lib_installed $1 &amp;&amp; echo \"$1 is installed\" || echo \"ERROR: $1 is NOT installed\"; }\ncheck libcuda\ncheck libcudart\n</code></pre>\n\n<p><strong>To check installation of CuDNN, run below command</strong>, if CuDNN is installed properly then you will not get any error.</p>\n\n<pre><code>function lib_installed() { /sbin/ldconfig -N -v $(sed 's/:/ /' &lt;&lt;&lt; $LD_LIBRARY_PATH) 2&gt;/dev/null | grep $1; }\nfunction check() { lib_installed $1 &amp;&amp; echo \"$1 is installed\" || echo \"ERROR: $1 is NOT installed\"; }\ncheck libcudnn \n</code></pre>\n\n<p><strong>OR</strong></p>\n\n<p>you can run below command from any directory </p>\n\n<pre><code>nvcc -V\n</code></pre>\n\n<p>it should give output something like this</p>\n\n<pre><code> nvcc: NVIDIA (R) Cuda compiler driver\n Copyright (c) 2005-2016 NVIDIA Corporation\n Built on Tue_Jan_10_13:22:03_CST_2017\n Cuda compilation tools, release 8.0, V8.0.61\n</code></pre>\n" }, { "AnswerId": "51202754", "CreationDate": "2018-07-06T03:48:53.263", "ParentId": null, "OwnerUserId": "207661", "Title": null, "Body": "<p><strong>Getting cuDNN Version [Linux]</strong></p>\n\n<p>Use following to find path for cuDNN:</p>\n\n<pre><code>$ whereis cuda\ncuda: /usr/local/cuda\n</code></pre>\n\n<p>Then use this to get version from header file,</p>\n\n<pre><code>$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2\n</code></pre>\n\n<p><strong>Getting cuDNN Version [Windows]</strong></p>\n\n<p>Use following to find path for cuDNN:</p>\n\n<pre><code>C:\\&gt;where cudnn*\nC:\\Program Files\\cuDNN6\\cuda\\bin\\cudnn64_6.dll\n</code></pre>\n\n<p>Then use this to dump version from header file,</p>\n\n<pre><code>type \"%PROGRAMFILES%\\cuDNN6\\cuda\\include\\cudnn.h\" | findstr \"CUDNN_MAJOR CUDNN_MINOR CUDNN_PATCHLEVEL\"\n</code></pre>\n\n<p><strong>Getting CUDA Version</strong></p>\n\n<p>This works on Linux as well as Windows:</p>\n\n<pre><code>nvcc --version\n</code></pre>\n" } ]
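<p>For a programmatic check, here is a minimal Python sketch using ctypes. It assumes libcudnn.so is visible to the dynamic loader (the library name and path vary by install); cudnnGetVersion() is the library's own version query, encoded as MAJOR*1000 + MINOR*100 + PATCHLEVEL per the header shown above.</p>
<pre><code>import ctypes

lib = ctypes.CDLL("libcudnn.so")  # adjust the name/path to your install
lib.cudnnGetVersion.restype = ctypes.c_size_t

v = lib.cudnnGetVersion()  # e.g. 7005
print("cuDNN %d.%d.%d" % (v // 1000, (v % 1000) // 100, v % 100))
</code></pre>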
31,342,134
1
<numpy><theano>
2015-07-10T13:33:18.847
31,343,583
133,374
argmax result as a subtensor
<p>I want to use the argmax with kept dimensions as a subtensor. I have:</p> <pre></pre> <p>And I want to set those values to zero in . I.e. I need to use . To use that, I need to specify the subtensor of at but I'm not exactly sure what that looks like. is wrong for multiple dimensions.</p> <p>This should hold:</p> <pre></pre> <p>In the end, I want to do:</p> <pre></pre> <p>My current solution:</p> <pre></pre> <p>However, does not work.</p>
[ { "AnswerId": "31343583", "CreationDate": "2015-07-10T14:38:22.757", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>You need to use Theano's <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#indexing\" rel=\"nofollow\">advanced indexing features</a> which, unfortunately, differ from <a href=\"http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing\" rel=\"nofollow\">numpy's advanced indexing</a>.</p>\n\n<p>Here's an example that does what you want.</p>\n\n<p><strong>Update:</strong> Now works with parametrized axis but note that <code>axis</code> cannot be symbolic.</p>\n\n<pre><code>import numpy\n\nimport theano\nimport theano.tensor as tt\n\ntheano.config.compute_test_value = 'raise'\n\naxis = 2\n\nx = tt.tensor3()\nx.tag.test_value = numpy.array([[[3, 2, 6], [5, 1, 4]], [[2, 1, 6], [6, 1, 5]]],\n dtype=theano.config.floatX)\n\n# Identify the largest value in each row\nx_argmax = tt.argmax(x, axis=axis, keepdims=True)\n\n# Construct a row of indexes to the length of axis\nindexes = tt.arange(x.shape[axis]).dimshuffle(\n *(['x' for dim1 in xrange(axis)] + [0] + ['x' for dim2 in xrange(x.ndim - axis - 1)]))\n\n# Create a binary mask indicating where the maximum values appear\nmask = tt.eq(indexes, x_argmax)\n\n# Alter the original matrix only at the places where the maximum values appeared\nx_prime = tt.set_subtensor(x[mask.nonzero()], 0)\n\nprint x_prime.tag.test_value\n</code></pre>\n" } ]
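<p>A quick usage sketch of the snippet above (x and x_prime are the variables defined there; axis is still 2):</p>
<pre><code>import numpy
import theano

f = theano.function([x], x_prime)
out = f(numpy.array([[[3, 2, 6], [5, 1, 4]], [[2, 1, 6], [6, 1, 5]]],
                    dtype=theano.config.floatX))
print(out)  # each row's maximum along axis 2 is zeroed, e.g. [3, 2, 6] -> [3, 2, 0]
</code></pre>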
31,343,525
1
<neural-network><deep-learning><caffe>
2015-07-10T14:36:08.700
31,374,909
395,857
How to generate the predicted labels in Caffe through the CLI?
<p>I trained a neural network model using Caffe:</p> <pre></pre> <p>I then scored the learned model on the validation set:</p> <pre></pre> <p>But how can I get the labels predicted by the trained neural network model in Caffe?</p> <hr> <p>I know I can use the Python or Matlab bindings for that purpose, but I am curious to know whether we can get the predicted labels in Caffe directly through the command line interface.</p> <p>It does not seem to be mentioned in <a href="http://caffe.berkeleyvision.org/tutorial/interfaces.html" rel="nofollow">Caffe's official tutorial on interfaces</a>, and looking at the tool's help didn't help:</p> <pre></pre>
[ { "AnswerId": "31374909", "CreationDate": "2015-07-13T03:14:08.703", "ParentId": null, "OwnerUserId": "395857", "Title": null, "Body": "<p>If you don't want to go through Python, you can add a <a href=\"https://github.com/BVLC/caffe/blob/master/docs/tutorial/layers.md#hdf5-output\" rel=\"nofollow\">HDF5_OUTPUT</a> layer: it will save the predicted outputs in an HDF5 file.</p>\n\n<p>Otherwise if you feel like going in the code, you could print or save <code>bottom_data_vector[k].second</code> at around <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/layers/accuracy_layer.cpp#L74\" rel=\"nofollow\">https://github.com/BVLC/caffe/blob/master/src/caffe/layers/accuracy_layer.cpp#L74</a></p>\n" } ]
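<p>If the Python route is acceptable after all, a minimal pycaffe sketch for dumping predicted labels looks like this. The file names are placeholders, the net's data layer is assumed to feed the validation set, and the softmax output blob is assumed to be named 'prob':</p>
<pre><code>import numpy as np
import caffe

net = caffe.Net('train_val.prototxt', 'model.caffemodel', caffe.TEST)
out = net.forward()              # one batch through the net
probs = out['prob']              # assumed softmax blob name
print(np.argmax(probs, axis=1))  # predicted label per example in the batch
</code></pre>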
31,345,255
1
<python><neural-network><deep-learning><caffe>
2015-07-10T15:57:19.757
null
395,857
make pycaffe -> "fatal error: cublas_v2.h: No such file or directory"
<p>I compiled <code>caffe</code> and am now trying to compile <code>pycaffe</code>. When I run <code>make pycaffe</code> in the root folder, I get:</p> <pre></pre> <p>How do I fix that?</p>
[ { "AnswerId": "31345318", "CreationDate": "2015-07-10T15:59:59.970", "ParentId": null, "OwnerUserId": "395857", "Title": null, "Body": "<p>If you don't plan to use the GPU, you can circumvent the issue by uncommenting <code>CPU_ONLY := 1</code> in Makefile.config:</p>\n\n<pre><code># CPU-only switch (uncomment to build without GPU support).\nCPU_ONLY := 1\n</code></pre>\n" } ]
31,352,311
2
<python><loops><dictionary><clone><theano>
2015-07-11T00:47:06.113
31,352,379
5,009,112
Creating an alternating dictionary with a loop in Python
<p>I'm trying to create a dictionary with a loop, with alternating entries, although the entries do not necessarily need to be alternating; as long as they are all in the dictionary, that's fine. I just need the simplest solution to get them all in one dictionary. A simple example of what I'm trying to achieve:</p> <p>Normally, for creating a dictionary with a loop I would do this:</p> <pre></pre> <p>Now I'm trying the following:</p> <pre></pre> <p>The desired output is:</p> <pre></pre> <p>Or, this output would also be fine, as long as all elements are in the dictionary (order does not matter):</p> <pre></pre> <p>The context is that I'm using </p> <pre></pre> <p>where all items you want to replace must be given in one dictionary. </p> <p>Thanks in advance!</p>
[ { "AnswerId": "31352379", "CreationDate": "2015-07-11T00:58:36.363", "ParentId": null, "OwnerUserId": "2617068", "Title": null, "Body": "<p>Comprehensions are cool, but not strictly necessary. Also, standard dictionaries cannot have duplicate keys. There are <a href=\"https://stackoverflow.com/questions/10664856/make-dictionary-with-duplicate-keys-in-python\">data structures for that</a>, but you can also try having a <code>list</code> of values for that key:</p>\n\n<pre><code>d = {}\nd[8] = []\nfor i in range(3):\n d[i] = 9\n d[8].append(i)\n</code></pre>\n\n<p>&nbsp;</p>\n\n<pre><code>&gt;&gt;&gt; d\n{8: [0, 1, 2], 0: 9, 2: 9, 1: 9}\n</code></pre>\n\n<p>For your new example without duplicate keys:</p>\n\n<pre><code>d = {}\nfor i in range(3):\n d[i] = 9\n d[i+5] = 8\n</code></pre>\n\n<p>&nbsp;</p>\n\n<pre><code>&gt;&gt;&gt; d\n{0: 9, 1: 9, 2: 9, 5: 8, 6: 8, 7: 8}\n</code></pre>\n" }, { "AnswerId": "31352492", "CreationDate": "2015-07-11T01:18:12.530", "ParentId": null, "OwnerUserId": "1013719", "Title": null, "Body": "<p>You can pull this off in a single line with <code>itertools</code></p>\n\n<pre><code>dict(itertools.chain.from_iterable(((i, 9), (i+5, 8)) for i in range(3)))\n</code></pre>\n\n<hr>\n\n<p><strong>Explained:</strong></p>\n\n<p>The inner part creates a bunch of tuples</p>\n\n<pre><code>((i, 9), (i+5, 8)) for i in range(3)\n</code></pre>\n\n<p>which in list form expands to</p>\n\n<pre><code>[((0, 9), (5, 8)), ((1, 9), (6, 8)), ((2, 9), (7, 8))]\n</code></pre>\n\n<p>The <code>chain.from_iterable</code> then flattens it by a level to produce</p>\n\n<pre><code>[(0, 9), (5, 8), (1, 9), (6, 8), (2, 9), (7, 8)]\n</code></pre>\n\n<p>This of course works with the <code>dict</code> init that takes a sequence of tuples</p>\n" } ]
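<p>For completeness, the same result can also be had with a plain dict comprehension over paired tuples, no imports needed (key order may vary by Python version):</p>
<pre><code>d = {k: v for i in range(3) for k, v in ((i, 9), (i + 5, 8))}
print(d)  # {0: 9, 5: 8, 1: 9, 6: 8, 2: 9, 7: 8}
</code></pre>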
31,358,451
4
<python><macos><compiler-errors><neural-network><caffe>
2015-07-11T14:55:59.287
31,359,361
5,106,073
make pycaffe fatal error: 'Python.h' file not found
<p>I compiled caffe on a mac running OS X 10.9.5 and am now trying to compile pycaffe. When I run make pycaffe in the caffe root folder, I get:</p> <pre></pre> <p>How can I fix this?</p> <p>Perhaps something is wrong with Makefile.config. How do I know what my PYTHONPATH is?</p>
[ { "AnswerId": "31359361", "CreationDate": "2015-07-11T16:35:28.027", "ParentId": null, "OwnerUserId": "395857", "Title": null, "Body": "<p>Looking at the comments, I see that you use Anaconda. In <a href=\"https://github.com/BVLC/caffe/blob/master/Makefile.config.example\" rel=\"noreferrer\"><code>Makefile.config</code></a>, you should uncomment the lines dedicated to Anaconda:</p>\n\n<pre><code># Anaconda Python distribution is quite popular. Include path:\n# Verify anaconda location, sometimes it's in root.\n# ANACONDA_HOME := $(HOME)/anaconda\n# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \\\n # $(ANACONDA_HOME)/include/python2.7 \\\n # $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \\\n\n# We need to be able to find libpythonX.X.so or .dylib.\nPYTHON_LIB := /usr/lib\n# PYTHON_LIB := $(ANACONDA_HOME)/lib\n</code></pre>\n\n<p><code>Python.h</code> is in <code>$(ANACONDA_HOME)/include/python2.7</code> as you can see running <code>sudo find / -name 'Python.h'</code>.</p>\n" }, { "AnswerId": "46387511", "CreationDate": "2017-09-24T07:06:03.343", "ParentId": null, "OwnerUserId": "6908096", "Title": null, "Body": "<p>I uncommented the below code in Makefile.config</p>\n\n<pre><code>PYTHON_INCLUDE := /usr/include/python3.5m \\\n /usr/lib/python3.5/dist-packages/numpy/core/include\n</code></pre>\n\n<p>Then did sudo make pycaffe.</p>\n\n<p>It worked.</p>\n" }, { "AnswerId": "40734269", "CreationDate": "2016-11-22T05:18:51.353", "ParentId": null, "OwnerUserId": "1904943", "Title": null, "Body": "<p>I just finished a tedious Caffe install on Arch Linux; hopefully my install notes (link below) will help others.</p>\n\n<p>While specific to my Caffe install, those notes address the \"Python.h\" install error (this Question), as well as a downstream issue mentioned in another SO question,</p>\n\n<p><a href=\"https://stackoverflow.com/questions/28177298/import-caffe-error\">Import caffe error</a>.</p>\n\n<pre><code>https://stackoverflow.com/questions/28177298/import-caffe-error\n</code></pre>\n\n<p>My gist file (notes):</p>\n\n<p><a href=\"https://gist.github.com/victoriastuart/fb2cb22209ccb2771963a25c06221213\" rel=\"nofollow noreferrer\">Caffe Installation Notes</a></p>\n\n<pre><code>https://gist.github.com/victoriastuart/fb2cb22209ccb2771963a25c06221213\n</code></pre>\n" }, { "AnswerId": "44726134", "CreationDate": "2017-06-23T16:28:47.703", "ParentId": null, "OwnerUserId": "8119677", "Title": null, "Body": "<p>I met this problem too.\nI have set the <code>PYTHON_INCLUDE</code> PATH</p>\n\n<pre><code> PYTHON_INCLUDE := $(ANACONDA_HOME)/include \\\n $(ANACONDA_HOME)/include/python2.7\n</code></pre>\n\n<p>But it still can't find the <code>Python.h</code></p>\n\n<p>So I just give the include path manually to the compiler as follows:</p>\n\n<pre><code> export CPLUS_INCLUDE_PATH=/home/woolawren/anaconda2/include/python2.7/:$CPLUS_INCLUDE_PATH\n</code></pre>\n\n<p>if you don't use anaconda2, you can use:</p>\n\n<pre><code> export CPLUS_INCLUDE_PATH=/usr/include/python2.7:$CPLUS_INCLUDE_PATH\n</code></pre>\n\n<p>I have successfully done \"make pycaffe\" by doing this.</p>\n" } ]
31,361,377
1
<python><numpy><scipy><theano>
2015-07-11T20:13:34.260
31,362,146
1,293,964
How to get value from a theano tensor variable backed by a shared variable?
<p>I have a theano tensor variable created from casting a shared variable. How can I extract the original or casted values? (I need that so I don't have to carry the original shared/numpy values around.)</p> <pre></pre>
[ { "AnswerId": "31362146", "CreationDate": "2015-07-11T21:43:50.760", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p><code>get_value</code> only works for shared variables. <code>TensorVariables</code> are general expressions and thus potentially need extra input in order to be able to determine their value (Imagine you set <code>y = x + z</code>, where <code>z</code> is another tensor variable. You would need to specify <code>z</code> before being able to calculate <code>y</code>). You can either create a function to provide this input or provide it in a dictionary using the <code>eval</code> method.</p>\n\n<p>In your case, <code>y</code> only depends on <code>x</code>, so you can do</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\nx = theano.shared(numpy.asarray([1, 2, 3], dtype='float32'))\ny = T.cast(x, 'int32')\ny.eval()\n</code></pre>\n\n<p>and you should see the result</p>\n\n<pre><code>array([1, 2, 3], dtype=int32)\n</code></pre>\n\n<p>(And in the case <code>y = x + z</code>, you would have to do <code>y.eval({z : 3.})</code>, for example)</p>\n" } ]
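<p>If the value is needed repeatedly, compiling a function once is cheaper than calling eval() each time. A small sketch:</p>
<pre><code>import numpy
import theano
import theano.tensor as T

x = theano.shared(numpy.asarray([1, 2, 3], dtype='float32'))
y = T.cast(x, 'int32')

f = theano.function([], y)  # no inputs: y depends only on the shared x
print(f())                  # array([1, 2, 3], dtype=int32)
</code></pre>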
31,362,140
1
<neural-network><deep-learning><caffe>
2015-07-11T21:43:38.443
null
395,857
Splitting train_test.prototxt into train.prototxt and test.prototxt
<p>In Caffe, when configuring a neural network architecture, one can either define one single train_test.prototxt, or 2 prototxt files train.prototxt and test.prototxt.</p> <p>For instance, in the examples, <a href="https://github.com/BVLC/caffe/tree/master/examples/hdf5_classification" rel="nofollow">hdf5_classification</a> uses 2 prototxt files (<a href="https://github.com/BVLC/caffe/blob/master/examples/hdf5_classification/nonlinear_auto_train.prototxt" rel="nofollow">nonlinear_auto_train.prototxt</a> and <a href="https://github.com/BVLC/caffe/blob/master/examples/hdf5_classification/nonlinear_auto_test.prototxt" rel="nofollow">nonlinear_auto_test.prototxt</a>), while <a href="https://github.com/BVLC/caffe/tree/master/examples/mnist" rel="nofollow">mnist</a> uses 1 prototxt file (<a href="https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt" rel="nofollow">lenet_train_test.prototxt</a>).</p> <p>What difference does it make to use 2 prototxt files instead of 1?</p>
[ { "AnswerId": "32092488", "CreationDate": "2015-08-19T10:07:22.257", "ParentId": null, "OwnerUserId": "2466336", "Title": null, "Body": "<p>Merging train and val .prototxt files into a single file is an attempt at reducing the number of files required to drive an experiment. See <a href=\"https://groups.google.com/forum/#!topic/caffe-users/LpXNE42hfG0\" rel=\"nofollow\">this</a> post in the caffe-users group regarding the general intent of merging files.\nI don't know if the plan is to have single-file solutions and eliminate any splitting, but I can imagine it being kept for advanced purposes, where the user wishes to vary test conditions while keeping training conditions constant.</p>\n" } ]
31,362,914
1
<lua><machine-learning><torch>
2015-07-11T23:36:46.033
31,370,295
4,747,870
What is correct usage of DataSource with torch/dp library
<p>I am new to both the Lua programming language and the Torch library. I am trying to get some machine learning algorithms to work ASAP. I tried to build neural nets with the dp library using the example <a href="https://github.com/nicholas-leonard/dp/blob/master/examples/neuralnetwork.lua" rel="nofollow">here</a>. But I am unable to get my dataset into the form needed to feed into the learning algorithm. I think my best (and initial) guess was to do this: </p> <pre></pre> <p>Which gives the error: </p> <p>Here dataset[1] is a torch.Tensor containing information about the data and dataset[2] is a torch.Tensor of the binary information about the data that I would like to know.</p> <p>I hope it is not a stupid syntax error.</p>
[ { "AnswerId": "31370295", "CreationDate": "2015-07-12T17:00:08.670", "ParentId": null, "OwnerUserId": "4850610", "Title": null, "Body": "<p>Yep, this is a syntax error. Lua has no named arguments. Lua adepts <a href=\"http://www.lua.org/pil/5.3.html\" rel=\"nofollow\">use tables to emulate such a feature</a>.</p>\n\n<p>So, try this:\n<code>dp.DataSource({train_set=train_set, test_set=test_set})\n</code>\nor just\n<code>dp.DataSource{train_set=train_set, test_set=test_set}\n</code> (you can omit the parentheses when a function's single argument is a table constructor).</p>\n" } ]
31,365,592
5
<python><matlab><cmake><caffe><matcaffe>
2015-07-12T07:44:27.903
null
2,711,403
Cannot find -lpython2 : MatCaffe installation error
<p>During the build of MatCaffe (the Matlab wrapper of Caffe), I face the following error:</p> <pre></pre> <p>On closer examination, using the following command, I found that this file is responsible for the above error: </p> <pre></pre> <p>It revealed to me the following:</p> <pre></pre> <p>So, I changed the corresponding -lpython2 to -lpython2.7 with the hope of resolving the problem. But to no avail.</p> <p>I also tried the following:</p> <ol> <li>Deleting the CMakeCache.txt and doing the make again. But it did not work.</li> <li><p>I edited the default CMakeLists.txt file in /cmake-master, to change some default settings. I found that the default python version setting in the CMakeLists.txt in Caffe is 2 .</p> <p>//Specify which python version to use python_version:STRING=2.7</p></li> </ol> <p>I changed it to 2.7, and repeated the whole configure-generate-make process in a fresh build folder. But to no avail. Every time the same matlab/build.make file shows -lpython2, and changing that to 2.7 directly does not help.</p> <ol start="3"> <li>I tried to look into the matlab/build.make file but could not find anything there that I could directly connect with this error.</li> </ol> <p>Any solid help would be deeply appreciated. I use MATLAB 2014a, on Ubuntu 14.04.</p>
[ { "AnswerId": "44247396", "CreationDate": "2017-05-29T17:00:28.913", "ParentId": null, "OwnerUserId": "6322527", "Title": null, "Body": "<p>I had this problem and it seems that it roots in the unability of Cmake to get different versions of python, such as \"libpython.so.1.0\". I changed my CMakeCache.txt file to \"libpython.so\" and the problem solved. It is not only for python, I had this issue with my \"cudnn\" as well, and this solution fixed that. </p>\n" }, { "AnswerId": "31371971", "CreationDate": "2015-07-12T19:59:09.637", "ParentId": null, "OwnerUserId": "3440745", "Title": null, "Body": "<p>According to error message and command, causes it, it seems that python libary is installed at unusual location, so <code>ld</code>(linker) cannot find it in its default paths. As CMake script has found headers, it should also setup <code>mex</code> executable for work with library itself, but for some reason it doesn't.</p>\n\n<p>The simplest way to make building package work is to set <code>LD_LIBRARY_PATH</code> to the directory, where you python library is located, and run <code>make</code>. If you want to fix CMake script, this <a href=\"http://www.cmake.org/Wiki/CMake_RPATH_handling\" rel=\"nofollow\">wiki</a> may help you.</p>\n" }, { "AnswerId": "31394301", "CreationDate": "2015-07-13T22:07:05.193", "ParentId": null, "OwnerUserId": "2711403", "Title": null, "Body": "<p>Thanks to @Tsyvarev for the answer. I found a rather simple solution. I just made a symbolic link (libpython2.so) which points to libpython2.7.so in /usr/lib folder. This solved the problem. libpython2.7.so was also present in /usr/lib so i dont think it was the issue of an unusual install.</p>\n" }, { "AnswerId": "31472654", "CreationDate": "2015-07-17T09:33:07.500", "ParentId": null, "OwnerUserId": "5126916", "Title": null, "Body": "<p>I had the same problem. In despair I just removed the <code>-lpython2</code> from <code>build-matlab/matlab/CMakeFiles/matlab.dir/build.make</code></p>\n\n<p>It did compile after that, seems that it found whatever it needed regardless.</p>\n" }, { "AnswerId": "35286218", "CreationDate": "2016-02-09T07:28:11.373", "ParentId": null, "OwnerUserId": "1686769", "Title": null, "Body": "<p>This is because of a bug in <code>caffe_parse_linker_libs</code> function in Utils.cmake which converts something like <code>/usr/lib/x86_64-linux-gnu/libpython2.7.so</code> to <code>-lpython2</code></p>\n\n<p>This can be fixed by <strong>replacing</strong> (in cmake/Utils.cmake)</p>\n\n<pre><code>elseif(IS_ABSOLUTE ${lib})\n get_filename_component(name_we ${lib} NAME_WE)\n get_filename_component(folder ${lib} PATH)\n\n string(REGEX MATCH \"^lib(.*)\" __match ${name_we})\n list(APPEND libflags -l${CMAKE_MATCH_1})\n list(APPEND folders ${folder})\nelse()\n</code></pre>\n\n<p><strong>with</strong></p>\n\n<pre><code>elseif(IS_ABSOLUTE ${lib})\n get_filename_component(folder ${lib} PATH)\n get_filename_component(filename ${lib} NAME)\n string(REGEX REPLACE \"\\\\.[^.]*$\" \"\" filename_without_shortest_ext ${filename})\n\n string(REGEX MATCH \"^lib(.*)\" __match ${filename_without_shortest_ext})\n list(APPEND libflags -l${CMAKE_MATCH_1})\n list(APPEND folders ${folder})\nelse()\n</code></pre>\n\n<p>The updated function correctly converts something like <code>/usr/lib/x86_64-linux-gnu/libpython2.7.so</code> to <code>-lpython2.7</code></p>\n" } ]
31,373,615
0
<opencv><cmake><caffe>
2015-07-12T23:35:58.977
null
5,013,415
Error in installing caffe in Ubuntu 14.04 LTS
<p>Having already installed the required dependencies for the caffe installation: in the cmake step, when I use:</p> <pre></pre> <p>I get this error:</p> <pre></pre> <p>in which, under Libraries, it has detected the wrong path instead of /home/majid/anaconda/lib/libpython2.7.so</p> <p>I tried to force it to use the correct path with:</p> <pre></pre> <p>but after:</p> <pre></pre> <p>I get this warning a couple of times:</p> <pre></pre> <p>and in the end it stops with this error: </p> <pre></pre> <p>whereas when installing opencv I used this cmake:</p> <pre></pre> <p>in which LIBTIFF=ON. </p> <p>After making opencv I set the packages' path with:</p> <pre></pre> <p>and also:</p> <pre></pre> <p>I don't know why I am having this problem with x86-64-linux-gnu while I have the same libopencv-highgui.so in /home/majid/opencv/lib</p> <p>I would really appreciate it if someone could help me out with this.</p> <p>The result of the forced cmake:</p> <pre></pre>
[]
31,379,971
1
<python><numpy><theano>
2015-07-13T09:39:02.387
31,384,142
133,374
`uniq` for 2D Theano tensor
<p>I have this Numpy code:</p> <pre></pre> <p>Now, I want to extend this to support 2D arrays and make it use Theano. It should be fast on the GPU.</p> <p>I will get an array with multiple sequences as multiple batches in the format (time,batch), and a which specifies indirectly the length of each sequence.</p> <p>My current try:</p> <pre></pre> <p>How to construct directly?</p> <p>I would do something like but I'm not exactly sure how to express that.</p>
[ { "AnswerId": "31384142", "CreationDate": "2015-07-13T13:01:13.390", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>Here's a quick answer that only addresses part of your task:</p>\n\n<pre><code>def compile_theano_uniq(x):\n    # 1 where the value differs from its predecessor; the first element is always kept\n    diffs = x[1:] - x[:-1]\n    diffs = tt.concatenate([tt.ones_like([x[0]], dtype=diffs.dtype), diffs])\n    y = x[diffs.nonzero()]\n    return theano.function(inputs=[x], outputs=y)\n\ntheano_uniq = compile_theano_uniq(tt.vector(dtype='int32'))\n</code></pre>\n\n<p>The key is <code>nonzero()</code>: it gives the positions where the value changes, and indexing <code>x</code> with those positions keeps the corresponding elements.</p>\n\n<p><strong>Update:</strong> I can't imagine any way to do this without using <code>theano.scan</code>. To be clear, and using 0 as padding, I'm assuming that given the input</p>\n\n<pre><code>1 1 2 3 3 4 0\n1 2 2 2 3 3 4\n1 2 3 4 5 0 0\n</code></pre>\n\n<p>you would want the output to be</p>\n\n<pre><code>1 2 3 4 0 0 0\n1 2 3 4 0 0 0\n1 2 3 4 5 0 0\n</code></pre>\n\n<p>or even</p>\n\n<pre><code>1 2 3 4 0\n1 2 3 4 0\n1 2 3 4 5\n</code></pre>\n\n<p>You could identify the indexes of the items you want to keep without using scan. Then either a new tensor needs to be constructed from scratch or the values you want to keep somehow moved to make the sequences contiguous. Neither approach seems feasible without <code>theano.scan</code>.</p>\n" } ]
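<p>For reference, a common NumPy formulation of the 1-D consecutive-uniq operation under discussion, offered as a sketch only, since the question's original code block was not preserved here:</p>
<pre><code>import numpy as np

def np_uniq(seq):
    # keep each element that differs from its predecessor; the first is always kept
    keep = np.concatenate(([True], seq[1:] != seq[:-1]))
    return seq[keep]

print(np_uniq(np.array([0, 0, 1, 2, 2, 0, 3, 3])))  # [0 1 2 0 3]
</code></pre>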
31,385,427
1
<python-2.7><caffe><deep-dream>
2015-07-13T13:58:53.687
null
1,339,643
How to train a caffe model?
<p>Has anyone successfully trained a caffe model? I have a training ready image set that I would like to use to create a caffe model for use with Google's Deep Dream. </p> <p>The only resources I've been able to find on how to train a model are these:<br> <a href="https://github.com/jgoode21/caffe-oxford102" rel="noreferrer">ImageNet Tutorial</a><br> EDIT: Here's another, but it's not creating a deploy.prototxt file. When I try to use one from another model it "works" but isn't correct.<br> <a href="https://github.com/jgoode21/caffe-oxford102" rel="noreferrer">caffe-oxford 102</a><br> Can anyone point me in the right direction to training my own model?</p>
[ { "AnswerId": "31659332", "CreationDate": "2015-07-27T17:18:24.427", "ParentId": null, "OwnerUserId": "395857", "Title": null, "Body": "<p>I have written a simple example to train a Caffe model on the Iris data set in Python. It also gives the predicted outputs given some user-defined inputs. The network as well as the solver settings need some more tuning but I just wanted to have some code skeleton to get started. Feel free to edit to improve.</p>\n\n<p>(<a href=\"https://github.com/Franck-Dernoncourt/caffe_demos\" rel=\"nofollow noreferrer\">GitHub repository</a>)</p>\n\n<p><code>iris_tuto.py</code></p>\n\n<pre><code>'''\n\nRequirements:\n - Caffe (script to install Caffe and pycaffe on a new Ubuntu 14.04 LTS x64 or Ubuntu 14.10 x64. \n CPU only, multi-threaded Caffe. https://stackoverflow.com/a/31396229/395857)\n - sudo pip install pydot\n - sudo apt-get install -y graphviz\n\nInteresting resources on Caffe:\n - https://github.com/BVLC/caffe/tree/master/examples\n - http://nbviewer.ipython.org/github/joyofdata/joyofdata-articles/blob/master/deeplearning-with-caffe/Neural-Networks-with-Caffe-on-the-GPU.ipynb\n\nInteresting resources on Iris with ANNs:\n - iris data set test bed: http://deeplearning4j.org/iris-flower-dataset-tutorial.html\n - http://se.mathworks.com/help/nnet/examples/iris-clustering.html\n - http://lab.fs.uni-lj.si/lasin/wp/IMIT_files/neural/doc/seminar8.pdf\n\nSynonyms:\n - output = label = target\n - input = feature \n'''\n\nimport subprocess\nimport platform\nimport copy\n\nfrom sklearn.datasets import load_iris\nimport sklearn.metrics \nimport numpy as np\nfrom sklearn.cross_validation import StratifiedShuffleSplit\nimport matplotlib.pyplot as plt\nimport h5py\nimport caffe\nimport caffe.draw\n\n\ndef load_data():\n '''\n Load Iris Data set\n '''\n data = load_iris()\n print(data.data)\n print(data.target)\n targets = np.zeros((len(data.target), 3))\n for count, target in enumerate(data.target):\n targets[count][target]= 1 \n print(targets)\n\n new_data = {}\n #new_data['input'] = data.data\n new_data['input'] = np.reshape(data.data, (150,1,1,4))\n new_data['output'] = targets\n #print(new_data['input'].shape)\n #new_data['input'] = np.random.random((150, 1, 1, 4))\n #print(new_data['input'].shape) \n #new_data['output'] = np.random.random_integers(0, 1, size=(150,3)) \n #print(new_data['input'])\n\n return new_data\n\ndef save_data_as_hdf5(hdf5_data_filename, data):\n '''\n HDF5 is one of the data formats Caffe accepts\n '''\n with h5py.File(hdf5_data_filename, 'w') as f:\n f['data'] = data['input'].astype(np.float32)\n f['label'] = data['output'].astype(np.float32)\n\n\ndef train(solver_prototxt_filename):\n '''\n Train the ANN\n '''\n caffe.set_mode_cpu()\n solver = caffe.get_solver(solver_prototxt_filename)\n solver.solve()\n\n\ndef print_network_parameters(net):\n '''\n Print the parameters of the network\n '''\n print(net)\n print('net.inputs: {0}'.format(net.inputs))\n print('net.outputs: {0}'.format(net.outputs))\n print('net.blobs: {0}'.format(net.blobs))\n print('net.params: {0}'.format(net.params)) \n\ndef get_predicted_output(deploy_prototxt_filename, caffemodel_filename, input, net = None):\n '''\n Get the predicted output, i.e. 
perform a forward pass\n '''\n if net is None:\n net = caffe.Net(deploy_prototxt_filename,caffemodel_filename, caffe.TEST)\n\n #input = np.array([[ 5.1, 3.5, 1.4, 0.2]])\n #input = np.random.random((1, 1, 1))\n #print(input)\n #print(input.shape)\n out = net.forward(data=input)\n #print('out: {0}'.format(out))\n return out[net.outputs[0]]\n\n\nimport google.protobuf \ndef print_network(prototxt_filename, caffemodel_filename):\n '''\n Draw the ANN architecture\n '''\n _net = caffe.proto.caffe_pb2.NetParameter()\n f = open(prototxt_filename)\n google.protobuf.text_format.Merge(f.read(), _net)\n caffe.draw.draw_net_to_file(_net, prototxt_filename + '.png' )\n print('Draw ANN done!')\n\n\ndef print_network_weights(prototxt_filename, caffemodel_filename):\n '''\n For each ANN layer, print weight heatmap and weight histogram \n '''\n net = caffe.Net(prototxt_filename,caffemodel_filename, caffe.TEST)\n for layer_name in net.params: \n # weights heatmap \n arr = net.params[layer_name][0].data\n plt.clf()\n fig = plt.figure(figsize=(10,10))\n ax = fig.add_subplot(111)\n cax = ax.matshow(arr, interpolation='none')\n fig.colorbar(cax, orientation=\"horizontal\")\n plt.savefig('{0}_weights_{1}.png'.format(caffemodel_filename, layer_name), dpi=100, format='png', bbox_inches='tight') # use format='svg' or 'pdf' for vectorial pictures\n plt.close()\n\n # weights histogram \n plt.clf()\n plt.hist(arr.tolist(), bins=20)\n plt.savefig('{0}_weights_hist_{1}.png'.format(caffemodel_filename, layer_name), dpi=100, format='png', bbox_inches='tight') # use format='svg' or 'pdf' for vectorial pictures\n plt.close()\n\n\ndef get_predicted_outputs(deploy_prototxt_filename, caffemodel_filename, inputs):\n '''\n Get several predicted outputs\n '''\n outputs = []\n net = caffe.Net(deploy_prototxt_filename,caffemodel_filename, caffe.TEST)\n for input in inputs:\n #print(input)\n outputs.append(copy.deepcopy(get_predicted_output(deploy_prototxt_filename, caffemodel_filename, input, net)))\n return outputs \n\n\ndef get_accuracy(true_outputs, predicted_outputs):\n '''\n\n '''\n number_of_samples = true_outputs.shape[0]\n number_of_outputs = true_outputs.shape[1]\n threshold = 0.0 # 0 if SigmoidCrossEntropyLoss ; 0.5 if EuclideanLoss\n for output_number in range(number_of_outputs):\n predicted_output_binary = []\n for sample_number in range(number_of_samples):\n #print(predicted_outputs)\n #print(predicted_outputs[sample_number][output_number]) \n if predicted_outputs[sample_number][0][output_number] &lt; threshold:\n predicted_output = 0\n else:\n predicted_output = 1\n predicted_output_binary.append(predicted_output)\n\n print('accuracy: {0}'.format(sklearn.metrics.accuracy_score(true_outputs[:, output_number], predicted_output_binary)))\n print(sklearn.metrics.confusion_matrix(true_outputs[:, output_number], predicted_output_binary))\n\n\ndef main():\n '''\n This is the main function\n '''\n\n # Set parameters\n solver_prototxt_filename = 'iris_solver.prototxt'\n train_test_prototxt_filename = 'iris_train_test.prototxt'\n deploy_prototxt_filename = 'iris_deploy.prototxt'\n deploy_prototxt_filename = 'iris_deploy.prototxt'\n deploy_prototxt_batch2_filename = 'iris_deploy_batchsize2.prototxt'\n hdf5_train_data_filename = 'iris_train_data.hdf5' \n hdf5_test_data_filename = 'iris_test_data.hdf5' \n caffemodel_filename = 'iris__iter_5000.caffemodel' # generated by train()\n\n # Prepare data\n data = load_data()\n print(data)\n train_data = data\n test_data = data\n save_data_as_hdf5(hdf5_train_data_filename, data)\n 
save_data_as_hdf5(hdf5_test_data_filename, data)\n\n # Train network\n train(solver_prototxt_filename)\n\n # Print network\n print_network(deploy_prototxt_filename, caffemodel_filename)\n print_network(train_test_prototxt_filename, caffemodel_filename)\n print_network_weights(train_test_prototxt_filename, caffemodel_filename)\n\n # Compute performance metrics\n #inputs = input = np.array([[[[ 5.1, 3.5, 1.4, 0.2]]],[[[ 5.9, 3. , 5.1, 1.8]]]])\n inputs = data['input']\n outputs = get_predicted_outputs(deploy_prototxt_filename, caffemodel_filename, inputs)\n get_accuracy(data['output'], outputs)\n\n\nif __name__ == \"__main__\":\n main()\n #cProfile.run('main()') # if you want to do some profiling\n</code></pre>\n\n<p><code>iris_train_test.prototxt</code>:</p>\n\n<pre><code>name: \"IrisNet\"\nlayer {\n name: \"iris\"\n type: \"HDF5Data\"\n top: \"data\"\n top: \"label\"\n include {\n phase: TRAIN\n }\n hdf5_data_param {\n source: \"iris_train_data.txt\"\n batch_size: 1\n\n }\n}\n\nlayer {\n name: \"iris\"\n type: \"HDF5Data\"\n top: \"data\"\n top: \"label\"\n include {\n phase: TEST\n }\n hdf5_data_param {\n source: \"iris_test_data.txt\"\n batch_size: 1\n\n }\n}\n\n\n\n\nlayer {\n name: \"ip1\"\n type: \"InnerProduct\"\n bottom: \"data\"\n top: \"ip1\"\n param {\n lr_mult: 1\n }\n param {\n lr_mult: 2\n }\n inner_product_param {\n num_output: 50\n weight_filler {\n type: \"xavier\"\n }\n bias_filler {\n type: \"constant\"\n }\n }\n}\nlayer {\n name: \"relu1\"\n type: \"ReLU\"\n bottom: \"ip1\"\n top: \"ip1\"\n}\nlayer {\n name: \"drop1\"\n type: \"Dropout\"\n bottom: \"ip1\"\n top: \"ip1\"\n dropout_param {\n dropout_ratio: 0.5\n }\n}\n\n\nlayer {\n name: \"ip2\"\n type: \"InnerProduct\"\n bottom: \"ip1\"\n top: \"ip2\"\n param {\n lr_mult: 1\n }\n param {\n lr_mult: 2\n }\n inner_product_param {\n num_output: 50\n weight_filler {\n type: \"xavier\"\n }\n bias_filler {\n type: \"constant\"\n }\n }\n}\nlayer {\n name: \"drop2\"\n type: \"Dropout\"\n bottom: \"ip2\"\n top: \"ip2\"\n dropout_param {\n dropout_ratio: 0.4\n }\n}\n\n\n\nlayer {\n name: \"ip3\"\n type: \"InnerProduct\"\n bottom: \"ip2\"\n top: \"ip3\"\n param {\n lr_mult: 1\n }\n param {\n lr_mult: 2\n }\n inner_product_param {\n num_output: 3\n weight_filler {\n type: \"xavier\"\n }\n bias_filler {\n type: \"constant\"\n }\n }\n}\n\nlayer {\n name: \"drop3\"\n type: \"Dropout\"\n bottom: \"ip3\"\n top: \"ip3\"\n dropout_param {\n dropout_ratio: 0.3\n }\n}\n\nlayer {\n name: \"loss\"\n type: \"SigmoidCrossEntropyLoss\" \n # type: \"EuclideanLoss\" \n # type: \"HingeLoss\" \n bottom: \"ip3\"\n bottom: \"label\"\n top: \"loss\"\n}\n</code></pre>\n\n<p><code>iris_deploy.prototxt</code>:</p>\n\n<pre><code>name: \"IrisNet\"\ninput: \"data\"\ninput_dim: 1 # batch size\ninput_dim: 1\ninput_dim: 1\ninput_dim: 4\n\n\nlayer {\n name: \"ip1\"\n type: \"InnerProduct\"\n bottom: \"data\"\n top: \"ip1\"\n param {\n lr_mult: 1\n }\n param {\n lr_mult: 2\n }\n inner_product_param {\n num_output: 50\n weight_filler {\n type: \"xavier\"\n }\n bias_filler {\n type: \"constant\"\n }\n }\n}\nlayer {\n name: \"relu1\"\n type: \"ReLU\"\n bottom: \"ip1\"\n top: \"ip1\"\n}\nlayer {\n name: \"drop1\"\n type: \"Dropout\"\n bottom: \"ip1\"\n top: \"ip1\"\n dropout_param {\n dropout_ratio: 0.5\n }\n}\n\n\nlayer {\n name: \"ip2\"\n type: \"InnerProduct\"\n bottom: \"ip1\"\n top: \"ip2\"\n param {\n lr_mult: 1\n }\n param {\n lr_mult: 2\n }\n inner_product_param {\n num_output: 50\n weight_filler {\n type: \"xavier\"\n }\n bias_filler {\n type: \"constant\"\n 
}\n }\n}\nlayer {\n name: \"drop2\"\n type: \"Dropout\"\n bottom: \"ip2\"\n top: \"ip2\"\n dropout_param {\n dropout_ratio: 0.4\n }\n}\n\n\nlayer {\n name: \"ip3\"\n type: \"InnerProduct\"\n bottom: \"ip2\"\n top: \"ip3\"\n param {\n lr_mult: 1\n }\n param {\n lr_mult: 2\n }\n inner_product_param {\n num_output: 3\n weight_filler {\n type: \"xavier\"\n }\n bias_filler {\n type: \"constant\"\n }\n }\n}\n\nlayer {\n name: \"drop3\"\n type: \"Dropout\"\n bottom: \"ip3\"\n top: \"ip3\"\n dropout_param {\n dropout_ratio: 0.3\n }\n}\n</code></pre>\n\n<p><code>iris_solver.prototxt</code>:</p>\n\n<pre><code># The train/test net protocol buffer definition\nnet: \"iris_train_test.prototxt\"\n# test_iter specifies how many forward passes the test should carry out.\ntest_iter: 1\n# Carry out testing every test_interval training iterations.\ntest_interval: 1000\n# The base learning rate, momentum and the weight decay of the network.\nbase_lr: 0.0001\nmomentum: 0.001\nweight_decay: 0.0005\n# The learning rate policy\nlr_policy: \"inv\"\ngamma: 0.0001\npower: 0.75\n# Display every 100 iterations\ndisplay: 1000\n# The maximum number of iterations\nmax_iter: 5000\n# snapshot intermediate results\nsnapshot: 5000\nsnapshot_prefix: \"iris_\"\n# solver mode: CPU or GPU\nsolver_mode: CPU # GPU\n</code></pre>\n\n<p>FYI: <a href=\"https://stackoverflow.com/a/31396229/395857\">Script to install Caffe and pycaffe on Ubuntu</a>.</p>\n" } ]
31,390,427
0
<python><neural-network><anaconda><caffe><conda>
2015-07-13T18:08:20.203
null
5,106,073
ImportError: No module named caffe
<p>I am trying to run <a href="http://googleresearch.blogspot.co.uk/2015/07/deepdream-code-example-for-visualizing.html" rel="nofollow">Google Research's DeepDream code</a> on a mac running OS X 10.9.5.<br> There are a few dependencies that I had to install. I am using the Anaconda distribution of python and I made sure that I have all the packages required.</p> <p>The hardest thing was to install Caffe. I have ATLAS installed using fink. Then I compiled caffe and pycaffe. When I ran 'make runtest' all tests passed. I also ran 'make distribute'.</p> <p>When I run <a href="https://github.com/google/deepdream" rel="nofollow">the notebook released by Google</a>, I get the following error:</p> <pre></pre> <p>How can this be? What can I try in order to fix it?</p>
[]
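<p>The usual cause of this error is that the pycaffe directory is not on the Python path. A minimal sketch of the standard fix; the checkout path below is a placeholder:</p>
<pre><code>import sys
sys.path.insert(0, '/path/to/caffe/python')  # directory containing the caffe/ package
import caffe
</code></pre>
<p>The equivalent permanent fix is to export that directory in PYTHONPATH from the shell profile.</p>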
31,391,527
1
<python><theano><nolearn><lasagne>
2015-07-13T19:11:26.030
null
1,879,940
Neural network for more than one class not working
<p>I am trying to use a neural network for a classification problem. I have 6 possible classes and the same input may be in more than one class.</p> <p>The problem is that when I try to train one NN for each class, I set output_num_units = 1 and, during training, I pass the first column of y, y[:,0]. I get the following output and error:</p> <pre></pre> <p>If I try to use (6) and the full y (all six fields), I first get an error from the StratifiedKFold, because it seems that it does not expect y to have multiple columns. If I set , then I get the following error:</p> <pre></pre> <p>The only configuration that works is setting more than one output unit and passing only one column to y. Then it trains the NN, but it does not seem right, as it is giving me 2 outputs and I have only one y to compare to.</p> <p><strong>What am I doing wrong? Why can't I use only one output? Should I convert my y classes from a vector of 6 columns to a vector of only one column with a number?</strong></p> <p>I use the following code (extract):</p> <pre></pre>
[ { "AnswerId": "31533004", "CreationDate": "2015-07-21T07:38:53.153", "ParentId": null, "OwnerUserId": "1646409", "Title": null, "Body": "<p>I then wanted to use CNNs in Lasagne, but didn't get it to work the same way, as the predictions were always 0... I recommend you look at <a href=\"https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py\" rel=\"nofollow\">the MNIST example</a>. I find that one much better to use and to extend, as old code snippets didn't fully work due to API changes over time. I've amended the MNIST example: my target vector has labels 0 or 1, and I create the output layer for the NN this way:</p>\n\n<pre><code># Finally, we'll add the fully-connected output layer, of 2 softmax units:\nl_out = lasagne.layers.DenseLayer(\n l_hid2_drop, num_units=2,\n nonlinearity=lasagne.nonlinearities.softmax)\n</code></pre>\n\n<p>And for the CNN:</p>\n\n<pre><code> layer = lasagne.layers.DenseLayer(\n lasagne.layers.dropout(layer, p=.5),\n num_units=2,\n nonlinearity=lasagne.nonlinearities.softmax)\n</code></pre>\n" } ]
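<p>For the multi-label setup itself, one hedged sketch: treat the six columns as independent sigmoid outputs and flag the task as regression so nolearn skips the stratified split and accepts a 2-D float y. Parameter names follow nolearn.lasagne of that era and should be checked against your version; num_features is a placeholder for your input width:</p>
<pre><code>from lasagne import layers, nonlinearities
from nolearn.lasagne import NeuralNet

net = NeuralNet(
    layers=[('input', layers.InputLayer),
            ('hidden', layers.DenseLayer),
            ('output', layers.DenseLayer)],
    input_shape=(None, num_features),            # placeholder input width
    hidden_num_units=200,
    output_num_units=6,                          # one sigmoid unit per class
    output_nonlinearity=nonlinearities.sigmoid,
    regression=True,                             # allows 2-D float y, no StratifiedKFold
    update_learning_rate=0.01,
    max_epochs=100,
)
# X and y must be float32; y has shape (n_samples, 6) with 0/1 entries
# net.fit(X.astype('float32'), y.astype('float32'))
</code></pre>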
31,393,897
1
<python><macos><neural-network><anaconda><caffe>
2015-07-13T21:33:17.657
null
5,106,073
ImportError: dlopen(...) library not open
<p>I am trying to run <a href="http://googleresearch.blogspot.co.uk/2015/07/deepdream-code-example-for-visualizing.html" rel="nofollow noreferrer">Google Research's DeepDream code</a> on a mac running OS X 10.9.5.<br> There are a few dependencies that I had to install. I am using the Anaconda distribution of python and I made sure that I have all the packages required.</p> <p>The hardest thing was to install Caffe. I have ATLAS installed using fink. Then I compiled caffe and pycaffe. When I ran 'make runtest' all tests passed. I also ran 'make distribute'.</p> <p>When I run <a href="https://github.com/google/deepdream" rel="nofollow noreferrer">the notebook released by Google</a>, I get the following error:</p> <pre></pre> <p>What can I do to fix this?</p> <hr> <p><img src="https://i.stack.imgur.com/aGCqO.png" alt="here is the screendump requested in a comment"></p>
[ { "AnswerId": "31408146", "CreationDate": "2015-07-14T13:31:54.857", "ParentId": null, "OwnerUserId": "4378596", "Title": null, "Body": "<p>libcudart.7.0.dylib is a GPU related library.</p>\n\n<p>Does the machine you're running on have a GPU? If not, then you need to specify CPU mode in Makefile.config for caffe.</p>\n\n<p>If you do have a GPU, then please take a look here.\n<a href=\"https://github.com/BVLC/caffe/issues/779\" rel=\"nofollow\">https://github.com/BVLC/caffe/issues/779</a></p>\n" } ]
31,395,027
1
<python><theano>
2015-07-13T23:13:01.277
null
864,128
how to convert numpy.ndarray object to theano.tensor?
<p>I have a csv file to convert to a theano.tensor, and I plan to do it in 2 steps. 1. csv to ndarray: easy by using the <strong>genfromtxt</strong> method. 2. ndarray to theano.tensor: how do I do this step? Is there any sample code?</p> <p>Thanks!</p>
[ { "AnswerId": "31395847", "CreationDate": "2015-07-14T00:47:32.863", "ParentId": null, "OwnerUserId": "2156909", "Title": null, "Body": "<p>You can use <code>_shared</code>, apparently: <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#converting-from-python-objects\" rel=\"nofollow noreferrer\">http://deeplearning.net/software/theano/library/tensor/basic.html#converting-from-python-objects</a></p>\n\n<pre><code>from theano.tensor import _shared\nimport numpy as np\nx = _shared(np.arange(10))\n</code></pre>\n" } ]
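<p>Putting the two steps together, a small sketch (the CSV path is a placeholder):</p>
<pre><code>import numpy as np
import theano

data = np.genfromtxt('data.csv', delimiter=',', dtype='float32')  # step 1: csv -> ndarray
x = theano.shared(data, name='x')                                 # step 2: ndarray -> shared tensor
print(x.get_value())
</code></pre>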
31,395,729
3
<multithreading><ubuntu><neural-network><deep-learning><caffe>
2015-07-14T00:32:46.923
31,396,229
395,857
How to enable multithreading with Caffe?
<p>I would like to compile / configure Caffe so that when I trained an artificial neural network with it, the training is multi-threaded (CPU only, no GPU). How to enable multithreading with Caffe? I use Caffe on Ubuntu 14.04 LTS x64.</p>
[ { "AnswerId": "44172201", "CreationDate": "2017-05-25T04:20:12.773", "ParentId": null, "OwnerUserId": "3698136", "Title": null, "Body": "<p>While building caffe, you have to add the -fopenmp to the CXXFLAGS and LINKFLAGS to support OPENMP. If you have a flag named OPENMP in the Makefil.config, you can simply set that to 1. You can use either OPENBLAS or Intel MKL BLAS library. While building the OPENBLAS you need to set USE_OPENMP=1 flag so that it supports OPENMP. After building caffe, please export the number of threads you want to use during runtime by setting up OMP_NUM_THREADS=n where n is the number of threads you want. Here is a good discussion related to multi-threading in Caffe: <a href=\"https://github.com/BVLC/caffe/pull/439\" rel=\"nofollow noreferrer\">https://github.com/BVLC/caffe/pull/439</a></p>\n" }, { "AnswerId": "51347019", "CreationDate": "2018-07-15T09:21:23.927", "ParentId": null, "OwnerUserId": "5330223", "Title": null, "Body": "<p>This is to just extend <strong>Franck's</strong> <a href=\"https://stackoverflow.com/a/31396229/5330223\">answer</a> where he used <code>sed</code> to modify the <code>config</code> file. If you are having problems with that, here is another way to get the same thing done.</p>\n\n<p>The difference is that instead of changing the config file you directly change the <code>camke</code> flag <code>cmake -DCPU_ONLY=1 -DBLAS=open ..</code></p>\n\n<pre><code>$sudo apt update &amp;&amp; sudo apt-get install -y libopenblas-dev\n$git clone -b 1.0 --depth 1 https://github.com/BVLC/caffe.git . &amp;&amp; \\\n pip install --upgrade pip &amp;&amp; \\\n cd python &amp;&amp; pip install -r requirements.txt &amp;&amp; cd .. &amp;&amp; \\\n mkdir build &amp;&amp; cd build &amp;&amp; \\\n cmake -DCPU_ONLY=1 -DBLAS=open .. &amp;&amp; \\\n make -j\"$(nproc)\"\n</code></pre>\n" }, { "AnswerId": "31396229", "CreationDate": "2015-07-14T01:37:41.620", "ParentId": null, "OwnerUserId": "395857", "Title": null, "Body": "<p>One way is to use OpenBLAS instead of the default ATLAS. To do so, </p>\n\n<ol>\n<li><code>sudo apt-get install -y libopenblas-dev</code></li>\n<li>Before compiling Caffe, edit <a href=\"https://github.com/BVLC/caffe/blob/master/Makefile.config.example\" rel=\"noreferrer\"><code>Makefile.config</code></a>, replace <code>BLAS := atlas</code> by <code>BLAS := open</code></li>\n<li>After compiling Caffe, running <code>export OPENBLAS_NUM_THREADS=4</code> will cause Caffe to use 4 cores.</li>\n</ol>\n\n<hr>\n\n<p>If interested, here is a script to install Caffe and pycaffe on a new Ubuntu 14.04 LTS x64 or Ubuntu 14.10 x64. CPU only, multi-threaded Caffe. It can probably be improved, but it's good enough for me for now:</p>\n\n<pre class=\"lang-bash prettyprint-override\"><code># This script installs Caffe and pycaffe on Ubuntu 14.04 x64 or 14.10 x64. CPU only, multi-threaded Caffe.\n# Usage: \n# 0. Set up here how many cores you want to use during the installation:\n# By default Caffe will use all these cores.\nNUMBER_OF_CORES=4\n# 1. Execute this script, e.g. \"bash compile_caffe_ubuntu_14.04.sh\" (~30 to 60 minutes on a new Ubuntu).\n# 2. Open a new shell (or run \"source ~/.bash_profile\"). You're done. 
You can try \n# running \"import caffe\" from the Python interpreter to test.\n\n#http://caffe.berkeleyvision.org/install_apt.html : (general install info: http://caffe.berkeleyvision.org/installation.html)\ncd\nsudo apt-get update\n#sudo apt-get upgrade -y # If you are OK getting prompted\nsudo DEBIAN_FRONTEND=noninteractive apt-get upgrade -y -q -o Dpkg::Options::=\"--force-confdef\" -o Dpkg::Options::=\"--force-confold\" # If you are OK with all defaults\n\nsudo apt-get install -y libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev\nsudo apt-get install -y --no-install-recommends libboost-all-dev\nsudo apt-get install -y libatlas-base-dev \nsudo apt-get install -y python-dev \nsudo apt-get install -y python-pip git\n\n# For Ubuntu 14.04\nsudo apt-get install -y libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler \n\n# LMDB\n# https://github.com/BVLC/caffe/issues/2729: Temporarily broken link to the LMDB repository #2729\n#git clone https://gitorious.org/mdb/mdb.git\n#cd mdb/libraries/liblmdb\n#make &amp;&amp; make install \n\ngit clone https://github.com/LMDB/lmdb.git \ncd lmdb/libraries/liblmdb\nsudo make \nsudo make install\n\n# More pre-requisites \nsudo apt-get install -y cmake unzip doxygen\nsudo apt-get install -y protobuf-compiler\nsudo apt-get install -y libffi-dev python-dev build-essential\nsudo pip install lmdb\nsudo pip install numpy\nsudo apt-get install -y python-numpy\nsudo apt-get install -y gfortran # required by scipy\nsudo pip install scipy # required by scikit-image\nsudo apt-get install -y python-scipy # in case pip failed\nsudo apt-get install -y python-nose\nsudo pip install scikit-image # to fix https://github.com/BVLC/caffe/issues/50\n\n\n# Get caffe (http://caffe.berkeleyvision.org/installation.html#compilation)\ncd\nmkdir caffe\ncd caffe\nwget https://github.com/BVLC/caffe/archive/master.zip\nunzip -o master.zip\ncd caffe-master\n\n# Prepare Python binding (pycaffe)\ncd python\nfor req in $(cat requirements.txt); do sudo pip install $req; done\necho \"export PYTHONPATH=$(pwd):$PYTHONPATH \" &gt;&gt; ~/.bash_profile # to be able to call \"import caffe\" from Python after reboot\nsource ~/.bash_profile # Update shell \ncd ..\n\n# Compile caffe and pycaffe\ncp Makefile.config.example Makefile.config\nsed -i '8s/.*/CPU_ONLY := 1/' Makefile.config # Line 8: CPU only\nsudo apt-get install -y libopenblas-dev\nsed -i '33s/.*/BLAS := open/' Makefile.config # Line 33: to use OpenBLAS\n# Note that if one day the Makefile.config changes and these line numbers change, we're screwed\n# Maybe it would be best to simply append those changes at the end of Makefile.config \necho \"export OPENBLAS_NUM_THREADS=($NUMBER_OF_CORES)\" &gt;&gt; ~/.bash_profile \nmkdir build\ncd build\ncmake ..\ncd ..\nmake all -j$NUMBER_OF_CORES # 4 is the number of parallel threads for compilation: typically equal to number of physical cores\nmake pycaffe -j$NUMBER_OF_CORES\nmake test\nmake runtest\n#make matcaffe\nmake distribute\n\n# Bonus for other work with pycaffe\nsudo pip install pydot\nsudo apt-get install -y graphviz\nsudo pip install scikit-learn\n\n# At the end, you need to run \"source ~/.bash_profile\" manually or start a new shell to be able to do 'python import caffe', \n# because one cannot source in a bash script. 
(http://stackoverflow.com/questions/16011245/source-files-in-a-bash-script)\n</code></pre>\n\n<p>I have placed this script on GitHub: <br><a href=\"https://github.com/Franck-Dernoncourt/caffe_demos/tree/master/caffe_installation\" rel=\"noreferrer\">https://github.com/Franck-Dernoncourt/caffe_demos/tree/master/caffe_installation</a> . </p>\n" } ]
31,404,890
2
<neural-network><caffe><conv-neural-network>
2015-07-14T11:07:48.377
null
1,348,187
Convolutional Neural Networks with Caffe and NEGATIVE or FALSE IMAGES
<p>When training a set of classes (let's say #classes (number of classes) = N) on Caffe Deep Learning (or any CNN framework) and I make a query to the caffemodel, I get a probability (%) of how well that image matches each class.</p> <p>So, let's take a picture similar to Class 1, and I get the result:</p> <blockquote> <p>1.- 96%</p> <p>2.- 4%</p> </blockquote> <p>rest... 0% The problem is: when I take a random picture (for example of my environment), I keep getting the same kind of result, where one of the classes is predominant (>90% probability) but it doesn't belong to any class.</p> <p>So what I'd like to hear is opinions/answers from people who have experienced this and have worked out how to deal with nonsense inputs to the neural network.</p> <p>My ideas are:</p> <ul> <li>Train one extra class with negative images (like with train_cascade).</li> <li>Train one extra class with all the positive images in the TRAIN set, and the negatives in the VAL set. But these ideas don't have any scientific basis behind them, which is why I ask you this question.</li> </ul> <p>What would you do?</p> <p>Thank you very much in advance.</p> <p>Rafael.</p> <hr> <p><strong>EDIT:</strong></p> <p>After two months, a colleague of mine threw me a clue: <strong>the activation function.</strong></p> <p>I've seen that I use <strong>ReLU</strong> in every layer, which means that the value for x is x when x > 0 and 0 otherwise. These are my layers:</p> <pre></pre> <p>If I make ReLU return x for any x (so negative values for x &lt; 0), my network converges to accuracy = 0...</p> <p>Is there a better way to do it?</p>
[ { "AnswerId": "31405126", "CreationDate": "2015-07-14T11:18:47.690", "ParentId": null, "OwnerUserId": "213615", "Title": null, "Body": "<p>Train an extra class with negative examples.<br>\nOr - this will probably work - use pre-trained network and weights if the network definition satisfies you, for example from ImageNet, and add you classes as additional labels. In that way you have higher chances not to overfit to that additional (the negative) class. If your network is different you can train it from scratch on a larger dataset instead of using the pre-trained weights.</p>\n" }, { "AnswerId": "31447289", "CreationDate": "2015-07-16T06:56:10.197", "ParentId": null, "OwnerUserId": "5122242", "Title": null, "Body": "<p>well i am also working on a similar problem , what i dont understand is even if you are to tell the neural network that this is a +ve image or -ve image , i dont understand how thats going to alter the cascade .I think you have to pick out features from the training image .May be you can build a hybrid system where it alters the XML cascade </p>\n" } ]
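<p>Orthogonal to retraining, a cheap query-time mitigation is to refuse low-confidence predictions. A pycaffe sketch follows: file names are placeholders, the softmax blob is assumed to be named 'prob', and the threshold is an arbitrary tunable. Note the question itself observes that softmax can be over-confident on random inputs, so this complements rather than replaces a negative/background class:</p>
<pre><code>import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
out = net.forward()
probs = out['prob'][0]        # softmax vector for the first image in the batch
if probs.max() &lt; 0.9:         # tunable rejection threshold
    print('rejected: no class matched confidently')
else:
    print('class %d (p=%.2f)' % (int(np.argmax(probs)), probs.max()))
</code></pre>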
31,414,341
1
<ubuntu><compilation><neural-network><deep-learning><caffe>
2015-07-14T18:19:46.523
31,414,383
395,857
Compiling Caffe: undefined reference to `PyString_FromString'
<p>I am trying to compile Caffe from the official GitHub sources, plus a couple of layer cpp files added by a user. When compiling, I get the following error:</p> <pre></pre>
[ { "AnswerId": "31414383", "CreationDate": "2015-07-14T18:22:10.457", "ParentId": null, "OwnerUserId": "395857", "Title": null, "Body": "<p>The compilation error means that at least some cpp files added or altered by the user use Python. To remediate the issue, you should uncomment <code>WITH_PYTHON_LAYER := 1</code> in the <a href=\"https://github.com/BVLC/caffe/blob/master/Makefile.config.example\" rel=\"nofollow\"><code>Makefile.config</code></a> before compiling.</p>\n" } ]
31,419,921
1
<windows><lua><torch><luarocks>
2015-07-15T01:17:26.080
null
3,234,562
Installing Torch7 on Win7; cmake and PATH problems
<p>I'm trying to install Torch 7 on my Win7 system to run an RNN, and it's insane. I installed it easily on my Ubuntu VM, but that can't access my GPU for CUDA acceleration, so either I try experimental PCI passthrough software, or I try to get Torch on Windows. I've managed to install Lua and LuaRocks so far (but I can't run it from anywhere but the C:\Program Files (x86)\LuaRocks\2.2 path). I've installed mingw and cmake. I tried installing Torch using the following command:</p> <pre></pre> <p>(source: <a href="https://stackoverflow.com/questions/27276822/installing-torch7-with-luarocks-on-windows-with-mingw-build-error">Installing Torch7 with Luarocks on Windows with mingw build error</a>)</p> <p>but I get:</p> <pre></pre> <p>I don't know where the cl compiler is, or even if I have it on my system. Regarding my PATH variable, it's apparently a user variable, not a system one (I don't have a system PATH variable). I don't know if that's a problem. It currently looks like this: </p> <pre></pre> <p>I have no clue if that's correct, but if it's meant to let me run lua or luarocks from outside their respective bin directories, it fails at that. If anyone has an easier way of installing Torch on Windows, please let me know (or heck, even a way of enabling GPU acceleration in a VM. Anything to get out of this stuck situation).</p>
[ { "AnswerId": "31420920", "CreationDate": "2015-07-15T03:22:12.650", "ParentId": null, "OwnerUserId": "1442917", "Title": null, "Body": "<p><code>cl</code> is command line compiler from Visual Studio. CMake is looking for it as its default settings use it. To use mingw that you have, you need to provide an additional option (<a href=\"https://stackoverflow.com/a/28058692/1442917\">as I described here</a>), but I'm not sure how to pass it to luarocks as I usually do it from the command line directly.</p>\n\n<p>You may try to follow the steps in the answer I linked; there are details in the torch7 ticket referenced. In short, the steps will involve:</p>\n\n<ol>\n<li>Clone, compile and install <a href=\"https://github.com/torch/paths\" rel=\"nofollow noreferrer\">torch/paths</a>;</li>\n<li>Clone, compile and install <a href=\"https://github.com/torch/cwrap\" rel=\"nofollow noreferrer\">torch/cwrap</a>;</li>\n<li>Clone, compile and install <a href=\"https://github.com/torch/torch7\" rel=\"nofollow noreferrer\">torch/torch</a>; make sure you grab the latest code as it includes the changes I submitted for mingw compilation.</li>\n<li>Clone, compile and install <a href=\"https://github.com/torch/nn\" rel=\"nofollow noreferrer\">torch/nn</a>. See the discussion in <a href=\"https://github.com/torch/torch7/pull/287\" rel=\"nofollow noreferrer\">this ticket</a> for one change you may need to apply.</li>\n</ol>\n\n<p>The ticket also provides specific commands you can run to compile from the command line.</p>\n" } ]
31,427,094
1
<image-processing><machine-learning><deep-learning><computer-vision><caffe>
2015-07-15T09:53:01.303
31,431,716
5,118,798
A guide to convert_imageset.cpp
<p>I am relatively new to machine learning/Python/Ubuntu.</p> <p>I have a set of images in .jpg format, where half contain a feature I want Caffe to learn and half don't. I'm having trouble finding a way to convert them to the required lmdb format.</p> <p>I have the necessary text input files.</p> <p>My question is: can anyone provide a step-by-step guide on how to use convert_imageset in the Ubuntu terminal?</p> <p>Thanks</p>
[ { "AnswerId": "31431716", "CreationDate": "2015-07-15T13:26:29.693", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<h2>A quick guide to Caffe's <code>convert_imageset</code></h2>\n\n<h3>Build</h3>\n\n<p>First thing you must do is build caffe and caffe's tools (<code>convert_imageset</code> is one of these tools).<br>\nAfter installing caffe and <code>make</code>ing it make sure you ran <code>make tools</code> as well.<br>\nVerify that a binary file <code>convert_imageset</code> is created in <code>$CAFFE_ROOT/build/tools</code>.</p>\n\n<h3>Prepare your data</h3>\n\n<p><em>Images:</em> put all images in a folder (I'll call it here <code>/path/to/jpegs/</code>).<br>\n<em>Labels:</em> create a text file (e.g., <code>/path/to/labels/train.txt</code>) with a line per input image . For example: </p>\n\n<blockquote>\n <p>img_0000.jpeg 1<br>\n img_0001.jpeg 0<br>\n img_0002.jpeg 0 </p>\n</blockquote>\n\n<p>In this example the first image is labeled <code>1</code> while the other two are labeled <code>0</code>. </p>\n\n<h3>Convert the dataset</h3>\n\n<p>Run the binary in shell</p>\n\n<pre><code>~$ GLOG_logtostderr=1 $CAFFE_ROOT/build/tools/convert_imageset \\\n --resize_height=200 --resize_width=200 --shuffle \\\n /path/to/jpegs/ \\\n /path/to/labels/train.txt \\\n /path/to/lmdb/train_lmdb\n</code></pre>\n\n<p>Command line explained: </p>\n\n<ul>\n<li><code>GLOG_logtostderr</code> flag is set to 1 <em>before</em> calling <code>convert_imageset</code> indicates the logging mechanism to redirect log messages to stderr. </li>\n<li><code>--resize_height</code> and <code>--resize_width</code> resize <strong>all</strong> input images to same size <code>200x200</code>. </li>\n<li><code>--shuffle</code> randomly change the order of images and does not preserve the order in the <code>/path/to/labels/train.txt</code> file. </li>\n<li>Following are the path to the images folder, the labels text file and the output name. Note that the output name should not exist prior to calling <code>convert_imageset</code> otherwise you'll get a scary error message.</li>\n</ul>\n\n<p>Other flags that might be useful:</p>\n\n<ul>\n<li><code>--backend</code> - allows you to choose between an <code>lmdb</code> dataset or <code>levelDB</code>.</li>\n<li><code>--gray</code> - convert all images to gray scale.</li>\n<li><code>--encoded</code> and <code>--encoded_type</code> - keep image data in encoded (jpg/png) compressed form in the database.</li>\n<li><code>--help</code> - shows some help, see all relevant flags under <em>Flags from tools/convert_imageset.cpp</em> </li>\n</ul>\n\n<p>You can check out <a href=\"https://github.com/BVLC/caffe/blob/master/examples/imagenet/create_imagenet.sh\" rel=\"noreferrer\"><code>$CAFFE_ROOT/examples/imagenet/convert_imagenet.sh</code></a>\nfor an example how to use <code>convert_imageset</code>.</p>\n" } ]
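<p>To complement the guide above, here is a hedged Python sketch that generates the labels file from a folder-per-class layout. The root path and the one-subfolder-per-class layout are assumptions about your data, not requirements of convert_imageset; class indices are assigned in sorted folder-name order so they start at 0.</p>
<pre><code>import os

root = '/path/to/jpegs'  # assumed layout: one subfolder per class
classes = sorted(d for d in os.listdir(root)
                 if os.path.isdir(os.path.join(root, d)))

with open('/path/to/labels/train.txt', 'w') as f:
    for label, cls in enumerate(classes):
        for name in sorted(os.listdir(os.path.join(root, cls))):
            if name.lower().endswith(('.jpg', '.jpeg', '.png')):
                # paths are relative to the folder later passed
                # to convert_imageset
                f.write('%s/%s %d\n' % (cls, name, label))
</code></pre>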
31,433,577
2
<python><deep-learning><caffe>
2015-07-15T14:43:13.260
31,457,745
894,903
Passing arguments to the forward function in caffe
<p>I have found this line of code to send inputs to the network in Caffe:</p> <pre></pre> <p>I tried adapting this code for my work as follows:</p> <pre></pre> <p>where (just for testing) but I get the following error:</p> <pre></pre> <p>I am unable to understand why I can't run the forward method this way. I am able to call it regularly, though:</p> <pre></pre> <hr> <p>From the comments I understand that I am supposed to initialize the input array first, possibly with a dedicated function.</p> <p>I tried the following loop:</p> <pre></pre> <p>But this still causes the same error.</p>
[ { "AnswerId": "31446685", "CreationDate": "2015-07-16T06:25:04.330", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>the <code>net</code> object is expecting a <strong>list</strong> of images as input, you supply only one image.</p>\n\n<p>Try</p>\n\n<pre><code>out = net.forward(**{net.inputs[0]: [ im_proc ] })\n</code></pre>\n\n<p>Note the square brackets (<code>[]</code>) around <code>im_proc</code> converting it into a list containing a single image. </p>\n" }, { "AnswerId": "31457745", "CreationDate": "2015-07-16T14:55:45.797", "ParentId": null, "OwnerUserId": "894903", "Title": null, "Body": "<p>The solution to this was to switch out the prototext. The C++ code PyCaffe wraps around causes this behavior. This thread has the solution: <a href=\"https://stackoverflow.com/questions/29124840/prediction-in-caffe-exception-input-blob-arguments-do-not-match-net-inputs\">Prediction in Caffe - Exception: Input blob arguments do not match net inputs</a></p>\n" } ]
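<p>For reference, a minimal sketch of the other common way to feed pycaffe: reshape the input blob and copy the data into it instead of passing keyword arguments to forward(). The file names are placeholders, the blob name 'data' is an assumption about the deploy prototxt, and the constructor signature assumes a reasonably recent pycaffe.</p>
<pre><code>import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

im_proc = np.random.rand(3, 224, 224).astype(np.float32)  # stand-in image

# Make the input blob's batch dimension match what we pass in, then
# copy the preprocessed image into the blob and run the network.
net.blobs['data'].reshape(1, *im_proc.shape)
net.blobs['data'].data[0, ...] = im_proc
out = net.forward()
</code></pre>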
31,444,173
4
<python-2.7><theano>
2015-07-16T01:58:57.050
null
5,121,662
Error importing theano "cannot import name gof"
<p>I am currently getting the error</p> <blockquote> <p>ImportError: cannot import name gof</p> </blockquote> <p>when importing theano.</p> <pre></pre> <p>I am using Python 2.7.10 (). Theano is installed using . I hope to get your suggestions to solve this problem.</p>
[ { "AnswerId": "43197332", "CreationDate": "2017-04-04T02:35:05.487", "ParentId": null, "OwnerUserId": "395287", "Title": null, "Body": "<p>This <code>ImportError</code> can be caused because Theano is <a href=\"https://github.com/Theano/Theano/issues/2406\" rel=\"nofollow noreferrer\">unable to compile the <code>gof</code> module itself</a>. If this is the case, you will see an error message that looks like \"<code>Exception: Compilation Failed (return status=1): C:\\Long\\Path\\...\\mod.cpp:1: sorry, unimplemented: 64-bit mode not compiled in</code>\".</p>\n\n<h2>Fixing With Conda</h2>\n\n<p>If you are installing <code>theano</code> into a <code>conda</code> environment, make sure that you have a C compiler available to that environment.</p>\n\n<p>The command</p>\n\n<pre><code>conda install m2w64-toolchain\n</code></pre>\n\n<p>will provide a C compiler to your environment that's isolated from the rest of the machine.</p>\n\n<p>After the <code>m2w64-toolchain</code> package is installed, <code>import theano</code> should work.</p>\n\n<h2>Fixing Manually</h2>\n\n<p>If you are installing Theano yourself, two points from <a href=\"https://github.com/Theano/Theano/issues/2406\" rel=\"nofollow noreferrer\">these</a> <a href=\"https://github.com/Theano/Theano/issues/2732#issuecomment-93009341\" rel=\"nofollow noreferrer\">threads</a> may help:</p>\n\n<ul>\n<li>Install the <a href=\"http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\" rel=\"nofollow noreferrer\">bleeding edge version of Theano</a></li>\n<li>Install <code>libpython</code> from <a href=\"http://www.lfd.uci.edu/%7Egohlke/pythonlibs/\" rel=\"nofollow noreferrer\">http://www.lfd.uci.edu/%7Egohlke/pythonlibs/</a></li>\n</ul>\n" }, { "AnswerId": "31530395", "CreationDate": "2015-07-21T04:28:21.183", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>Most of the time, when I see this error, it is caused by one of these 2 problems:</p>\n\n<p>1) A syntax error in Theano. Update Theano and make sure to have no local modifications. I never saw this error in the master of Theano, but just in case.</p>\n\n<p>2) Multiple versions of Theano are installed.</p>\n\n<p>In both cases, remove all versions of Theano. Do it multiple times to be sure there are none left. Then install again.</p>\n\n<p>From memory, this always solved the problem when it wasn't a syntax error during development (but not in the master version of Theano that you use).</p>\n" }, { "AnswerId": "47970697", "CreationDate": "2017-12-25T17:24:40.887", "ParentId": null, "OwnerUserId": "5108509", "Title": null, "Body": "<p>I assume you're using Windows 7 or later.</p>\n\n<p>If you have installed Python Anaconda, then open Windows Powershell or Command Prompt and type <code>conda install mingw libpython</code> before typing <code>pip install theano</code></p>\n\n<blockquote>\n <p>Alternatively, if you don't have Anaconda, download those packages from </p>\n \n <ul>\n <li><a href=\"https://anaconda.org/anaconda/mingw/files\" rel=\"nofollow noreferrer\">anaconda.org/anaconda/mingw/files</a></li>\n <li><a href=\"https://anaconda.org/anaconda/libpython/files\" rel=\"nofollow noreferrer\">anaconda.org/anaconda/libpython/files</a></li>\n <li><a href=\"https://github.com/Theano/Theano\" rel=\"nofollow noreferrer\">github.com/Theano/Theano</a></li>\n </ul>\n \n <p>Then open Command Prompt, navigate to each folder and type <code>python setup.py install</code></p>\n</blockquote>\n\n<p>Now run Python and <code>import theano</code></p>\n\n<p><strong>Possible errors:</strong></p>\n\n<p>If you get the RuntimeError: \"<a href=\"https://github.com/Theano/Theano/issues/6507#issuecomment-342519898\" rel=\"nofollow noreferrer\">To use MKL 2018 with Theano you MUST set \"MKL_THREADING_LAYER=GNU\" in your environement</a>\" then</p>\n\n<ol>\n<li><p>Go to Control Panel &gt; System &gt; Advanced system settings and select \"Environment Variables\".</p></li>\n<li><p>In the \"System variables\" section, make a new variable named <code>MKL_THREADING_LAYER</code> and set its value to <code>GNU</code></p></li>\n</ol>\n\n<p>If you get other kinds of errors, then try the following:</p>\n\n<ol>\n<li><p>Make an empty file called <code>.theanorc</code> (a file extension without a file name) in your home folder C:\\Users\\&lt;username&gt;. If you get the error \"You must type a file name\" then see <a href=\"https://stackoverflow.com/q/5004633\">stackoverflow.com/q/5004633</a></p></li>\n<li><p>Open <code>.theanorc</code> and write this:</p>\n\n<pre><code>[global]\ncxx=C:\\&lt;path to Anaconda&gt;\\Anaconda3\\MinGW\\bin\\g++.exe\n</code></pre></li>\n<li><p>Run Python again and import theano. If it works, then you can probably delete <code>.theanorc</code></p></li>\n</ol>\n" }, { "AnswerId": "55789385", "CreationDate": "2019-04-22T05:07:47.223", "ParentId": null, "OwnerUserId": "1682419", "Title": null, "Body": "<p>In my case, the fix was to install a python build that's callable as a shared library:</p>\n\n<pre><code>PYTHON_CONFIGURE_OPTS=\"--enable-shared\" pyenv install 2.7.15\n</code></pre>\n" } ]
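<p>A quick way to check for the multiple-installation case described in the second answer (a Python 2 sketch, since the question uses 2.7): list every location on sys.path from which theano could be imported; more than one hit usually means leftover installs that should all be removed.</p>
<pre><code>import imp
import sys

seen = []
for path in sys.path:
    try:
        # for a package, find_module returns (None, directory, description)
        seen.append(imp.find_module('theano', [path])[1])
    except ImportError:
        pass
print('\n'.join(seen) or 'theano not found on sys.path')
</code></pre>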
31,444,272
1
<lua><luajit><torch>
2015-07-16T02:10:43.730
31,448,164
3,106,889
Read images from local disk using torch7 when those images are saved in different subfolders?
<p>I have images saved on disk. The data are organized as follows: 4 main folders (1, 2, 3 and 4); each folder has 26 subfolders (these subfolders represent the class of the images: A, B, C, D, ..., Z). Each of these subfolders contains more than 500 images. I am looking for a file or code in Torch that can read these images. In MATLAB I could write such code, but here I find it confusing. Could you please advise me?</p>
[ { "AnswerId": "31448164", "CreationDate": "2015-07-16T07:41:16.587", "ParentId": null, "OwnerUserId": "1688185", "Title": null, "Body": "<p>What you can do is use <a href=\"http://stevedonovan.github.io/Penlight/api/index.html\" rel=\"nofollow\">Penlight</a> (the library <a href=\"https://github.com/torch/distro/blob/54f060f/install.sh#L68\" rel=\"nofollow\">is installed</a> when you install Torch).</p>\n\n<p>Penlight provides <a href=\"http://stevedonovan.github.io/Penlight/api/libraries/pl.dir.html\" rel=\"nofollow\"><code>pl.dir</code></a> that makes it easy to scan files in (sub-)folders. For example what you can do is:</p>\n\n<pre><code>local pl = require('pl.import_into')()\nlocal t = {}\nfor i,f in ipairs(pl.dir.getallfiles('/data/foo', '*.jpg')) do\n t[i] = { f, pl.path.basename(pl.path.dirname(f)) }\nend\n</code></pre>\n\n<p>This creates a list of pairs (filename, class label = \"A\" or \"B\" ...). Of course you are free to change the file pattern (<code>*.jpg</code>) or to omit it (in such a case Penlight will simply list all files). You can also load the images on the fly:</p>\n\n<pre><code>t[i] = { image.load(f), pl.path.basename(pl.path.dirname(f)) }\n</code></pre>\n\n<p>Or do that right after when manipulating <code>t</code>.</p>\n" } ]
31,454,740
2
<deep-learning><caffe>
2015-07-16T12:53:26.693
null
5,118,798
Caffe: can't open imagenet_mean_test.binaryproto
<p>Upon running ; creating the layer data, setting up the data, loading the training file and opening the training lmdb file all works.</p> <p>However when it comes to loading the test file for the test data I get the following error:</p> <blockquote> <p>Loading mean file from: /home/pwhc/caffe/Learn/imagenet_mean_test.binaryproto<br> F0716 13:12:13.917732 3385 db.hpp:109] Check failed: mdb_status == 0 (2 vs. 0) No such file or directory<br> *** Check failure stack trace: ***<br> @ 0x7f8337946daa (unknown)<br> @ 0x7f8337946ce4 (unknown)<br> @ 0x7f83379466e6 (unknown)<br> @ 0x7f8337949687 (unknown)<br> @ 0x7f8337cbf5be caffe::db::LMDB::Open()<br> @ 0x7f8337d16b82 caffe::DataLayer&lt;>::DataLayerSetUp()<br> @ 0x7f8337d806f9 caffe::BasePrefetchingDataLayer&lt;>::LayerSetUp()<br> @ 0x7f8337ca3db3 caffe::Net&lt;>::Init()<br> @ 0x7f8337ca5b22 caffe::Net&lt;>::Net()<br> @ 0x7f8337cb0a24 caffe::Solver&lt;>::InitTestNets()<br> @ 0x7f8337cb111b caffe::Solver&lt;>::Init()<br> @ 0x7f8337cb12e6 caffe::Solver&lt;>::Solver()<br> @ 0x40c4c0 caffe::GetSolver&lt;>()<br> @ 0x406503 train()<br> @ 0x404ab1 main<br> @ 0x7f8336e58ec5 (unknown)<br> @ 0x40505d (unknown)<br> @ (nil) (unknown)<br> Aborted (core dumped)</p> </blockquote> <p>I modified the to point the to appropriate files (using absolute paths) and have checked and double checked to make sure everything matches. </p> <p>Any thoughts would be greatly appreciated. </p>
[ { "AnswerId": "31549397", "CreationDate": "2015-07-21T20:52:35.483", "ParentId": null, "OwnerUserId": "2404152", "Title": null, "Body": "<p>See my answer here:\n<a href=\"https://github.com/BVLC/caffe/issues/2780#issuecomment-123385714\" rel=\"nofollow\">https://github.com/BVLC/caffe/issues/2780#issuecomment-123385714</a></p>\n\n<p>Can you post your data layers? It seems like you've switched up <code>data_param.source</code> and <code>transform_param.mean_file</code>.</p>\n" }, { "AnswerId": "57052650", "CreationDate": "2019-07-16T08:00:17.687", "ParentId": null, "OwnerUserId": "5495014", "Title": null, "Body": "<p>When you creating new LMDB database from image net, please delete the previous LMDB. This error will come to happen when writing the new image net data for existing LMDB database.</p>\n" } ]
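<p>One hedged way to confirm the first answer's diagnosis (a switched data_param.source and transform_param.mean_file) is to check that the .binaryproto actually parses as a mean blob; if it does, it is a mean file and must not be handed to data_param.source, which expects an LMDB directory. The path below comes from the question's log.</p>
<pre><code>import caffe
from caffe.proto import caffe_pb2

blob = caffe_pb2.BlobProto()
with open('/home/pwhc/caffe/Learn/imagenet_mean_test.binaryproto',
          'rb') as f:
    blob.ParseFromString(f.read())

mean = caffe.io.blobproto_to_array(blob)
print(mean.shape)  # expect (1, channels, height, width) for a mean file
</code></pre>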
31,458,983
1
<python><numpy><caffe>
2015-07-16T15:51:31.077
31,459,123
894,903
Correct way to broadcast a 100x9 to a 100x9x1x1 numpy array for computation in Caffe
<p>I am trying to input my own data into a caffe model using the python wrappers. I read the data from HDF5 as a numpy array with dimension 100x9. But for the input to model, I use the following code:</p> <pre></pre> <p>So basically I need to fill out input_ from a 100x9 array.</p>
[ { "AnswerId": "31459123", "CreationDate": "2015-07-16T15:57:26.930", "ParentId": null, "OwnerUserId": "3854029", "Title": null, "Body": "<p>Heres how you would convert a 100x9 array to a 100x9x1x1 array:</p>\n\n<pre><code>x = np.zeros((100,9))\ny = x[:,:,np.newaxis,np.newaxis]\n</code></pre>\n" } ]
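<p>As a usage note on the answer above, an explicit reshape produces the same 100x9x1x1 array, which can read more clearly when the target blob shape is known; a quick check:</p>
<pre><code>import numpy as np

x = np.random.rand(100, 9)
y1 = x[:, :, np.newaxis, np.newaxis]  # the answer's broadcasting form
y2 = x.reshape(100, 9, 1, 1)          # an equivalent explicit reshape
assert y1.shape == y2.shape == (100, 9, 1, 1)
assert np.array_equal(y1, y2)
</code></pre>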
31,460,102
0
<python><python-2.7><ipython><caffe>
2015-07-16T16:47:38.693
null
1,623,856
Caffe import working with python2 but not python2.7 / ipython notebook, despite both being 2.7.10?
<p>I've successfully compiled the caffe library and the python module.</p> <p>I can do this:</p> <pre></pre> <p>But, bizarrely, this fails:</p> <pre></pre> <p>I cannot understand this at all! Whenever I try to run using iPython notebook I get the same crash.</p> <p>Any ideas as to what may be causing this, and how I might fix it, or at least get iPython Notebook to use the different python version so I can run this thing?</p> <p>Thanks!</p>
[]
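<p>Since the question has no answers, one hedged debugging step: when an import works in one interpreter but not another, the two environments are usually not resolving to the same executable or the same sys.path. Running this snippet in both plain python2 and the IPython notebook makes the difference visible.</p>
<pre><code>import sys

print(sys.executable)  # which interpreter binary is actually running
print(sys.version)
# any path entries that could supply the caffe python module
print('\n'.join(p for p in sys.path if 'caffe' in p.lower()))
</code></pre>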
31,475,307
1
<image-processing><deep-learning><torch>
2015-07-17T11:56:22.440
31,875,549
2,598,910
How to load images and labels in Torch for a Convolutional Neural Network
<p>I'm new to Torch and would like to load some images from two directories (one for each label). I'm trying to build a convolutional neural network that will classify images as belonging to one class or another (i.e. a binary classifier), but I am unsure how to load images, label those images and get the data into the correct format. I'm using the following tutorial; however, the training data there is loaded in a different way which I am not familiar with.</p> <p><a href="http://code.madbits.com/wiki/doku.php?id=tutorial_supervised" rel="noreferrer">http://code.madbits.com/wiki/doku.php?id=tutorial_supervised</a></p> <p>Hope someone can help me get started and point me in the right direction.</p> <p>Many thanks in advance.</p>
[ { "AnswerId": "31875549", "CreationDate": "2015-08-07T10:29:01.187", "ParentId": null, "OwnerUserId": "5072453", "Title": null, "Body": "<p>Check <a href=\"https://github.com/torch/demos/tree/master/train-face-detector\" rel=\"nofollow\">this link</a>\nThe data.lua file in that is very simple to understand. They have used torch tensors, and the code is self explanatory. I had achieved a clear understanding of loading the data, post reading this link; hope it helps you too. </p>\n" } ]
31,477,724
1
<python><artificial-intelligence><neural-network><deep-learning><caffe>
2015-07-17T14:05:15.817
31,498,402
238,971
Setting batch_size in data_param for Caffe has no effect
<p>When I set batch_size in the <a href="https://github.com/BVLC/caffe/blob/master/models/bvlc_googlenet/deploy.prototxt" rel="nofollow">deploy.prototxt file</a> of Google Deep Dream's <em>bvlc_googlenet</em> to lower GPU memory requirements, it has no effect on speed or memory requirements. It's as if it were ignored. I know the file itself (deploy.prototxt) is being used because other changes are reflected in the results, so that's not the issue. I also tried to set batch_size on all related layers as well ("inception_4c/1x1", etc.); again, no difference.</p> <p>This is how I'm setting it:</p> <pre></pre> <p>When I time the runtime of the script, it's the same with batch_size 1 and with batch_size 512; there is no difference.</p> <p>What am I doing wrong?</p>
[ { "AnswerId": "31498402", "CreationDate": "2015-07-19T05:52:39.730", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p><code>data_param</code> is a parameter of the input data layer. You can set <code>batch_size</code> only for the input and this value propagates through the network.</p>\n\n<p>In the <a href=\"https://github.com/BVLC/caffe/blob/master/models/bvlc_googlenet/deploy.prototxt#L3\" rel=\"nofollow\"><em>deploy.prototxt</em></a> the batch size is set by the first <code>'input_dim'</code> argument (<a href=\"https://github.com/BVLC/caffe/blob/master/models/bvlc_googlenet/deploy.prototxt#L3\" rel=\"nofollow\">third line</a>), try changing this value and see if it has any effect on the memory consumption of your network.</p>\n\n<p>The first few lines of the <em>deploy.prototxt</em> file should be interpreted as</p>\n\n<pre><code>input: \"data\" # the \"top\" name of input layer\ninput_dim: 10 # 4th dimension - batch size\ninput_dim: 3 # 3rd dimension - number of channels\ninput_dim: 224 # 2nd dimension - height of input\ninput_dim: 224 # 1st dimension - width of input\n</code></pre>\n\n<p>Thus, you expect at your fist conv layer (<code>\"conv1/7x7_s2\"</code>) a \"bottom\" named \"data\" with shape <code>10</code>-by-<code>3</code>-by-<code>224</code>-by-<code>224</code>.</p>\n" } ]
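<p>The same change can also be made at runtime from pycaffe instead of editing the prototxt; a hedged sketch, with placeholder file names and the usual assumption that the input blob is called 'data':</p>
<pre><code>import caffe

net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Equivalent to editing the first input_dim line: shrink the batch
# dimension of the input blob, then propagate shapes through the net.
n, c, h, w = net.blobs['data'].data.shape
net.blobs['data'].reshape(1, c, h, w)
net.reshape()
</code></pre>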
31,478,194
1
<lua><torch>
2015-07-17T14:28:21.587
null
3,755,060
Torch Tutorial: meaning of "trainData.data[{ {},i,{},{} }]:mean()" in 1_data.lua
<p>In the torch tutorial, I found the line:</p> <pre>trainData.data[{ {},i,{},{} }]:mean()</pre> <p>Is there anyone who can explain what the indexing { {},i,{},{} } is doing? I could guess, but I wanted to know the exact mechanism.</p> <p>Thanks in advance.</p>
[ { "AnswerId": "31479970", "CreationDate": "2015-07-17T16:00:19.873", "ParentId": null, "OwnerUserId": "1417179", "Title": null, "Body": "<p>This is actually a concise syntax for tensor narrowing / slicing, detailed <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor--dim1dim2--or--dim1sdim1e-dim2sdim2e-\" rel=\"nofollow\">here</a> in the documentation.</p>\n\n<p>Inside the <code>[{ ... }]</code>, you can for each dimension of a tensor:</p>\n\n<ul>\n<li>pass a number <code>n</code> to only keep the <code>n</code>-th component along this dimension,</li>\n<li>pass a range <code>{start,end}</code> to keep all the components from <code>start</code> to <code>end</code> along this dimension,</li>\n<li>pass <code>{}</code> to keep all the components along this dimension.</li>\n</ul>\n\n<p>In this precise case, it's a narrowing from a <code>u * v * w * x</code> tensor to a <code>u * 1 * w * x</code> tensor by keeping only the <code>i</code>-th component along the 2nd dimension.</p>\n" } ]
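<p>For readers coming from Python, a hedged numpy analogue of the same narrowing; remember that Lua indexing is 1-based while numpy is 0-based, so Torch's i maps to i - 1 below.</p>
<pre><code>import numpy as np

data = np.random.rand(5, 3, 32, 32)  # stand-in for trainData.data
i = 1                                # the torch (1-based) index

# keep everything along dims 1, 3 and 4; fix dim 2 at index i
channel_mean = data[:, i - 1, :, :].mean()
</code></pre>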
31,492,699
2
<python><theano>
2015-07-18T15:30:57.023
31,498,732
5,130,560
Use of None in Array indexing in Python
<p>I am using the LSTM tutorial for Theano (<a href="http://deeplearning.net/tutorial/lstm.html">http://deeplearning.net/tutorial/lstm.html</a>). In the lstm.py (<a href="http://deeplearning.net/tutorial/code/lstm.py">http://deeplearning.net/tutorial/code/lstm.py</a>) file, I don't understand the following line:</p> <pre></pre> <p>What does indexing with <code>None</code> mean? In this case one operand is a Theano vector while the other is a matrix.</p>
[ { "AnswerId": "31498732", "CreationDate": "2015-07-19T06:55:06.570", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>This question has been asked and answered on the Theano mailing list, but is actually about the basics of numpy indexing.</p>\n\n<p>Here are the question and answer\n<a href=\"https://groups.google.com/forum/#!topic/theano-users/jq92vNtkYUI\">https://groups.google.com/forum/#!topic/theano-users/jq92vNtkYUI</a></p>\n\n<p>For completeness, here is another explanation: slicing with <code>None</code> adds an axis to your array, see the relevant numpy documentation, because it behaves the same in both numpy and Theano:</p>\n\n<p><a href=\"http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis\">http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis</a></p>\n\n<p>Note that <code>np.newaxis is None</code>:</p>\n\n<pre><code>import numpy as np\na = np.arange(30).reshape(5, 6)\n\nprint a.shape # yields (5, 6)\nprint a[np.newaxis, :, :].shape # yields (1, 5, 6)\nprint a[:, np.newaxis, :].shape # yields (5, 1, 6)\nprint a[:, :, np.newaxis].shape # yields (5, 6, 1)\n</code></pre>\n\n<p>Typically this is used to adjust shapes to be able to broadcast to higher dimensions. E.g. tiling 7 times in the middle axis can be achieved as</p>\n\n<pre><code>b = a[:, np.newaxis] * np.ones((1, 7, 1))\n\nprint b.shape # yields (5, 7, 6), 7 copies of a along the second axis\n</code></pre>\n" }, { "AnswerId": "31492800", "CreationDate": "2015-07-18T15:40:43.960", "ParentId": null, "OwnerUserId": "4354477", "Title": null, "Body": "<p>I think the Theano vector's <code>__getitem__</code> method expects a tuple as an argument! like this:</p>\n\n<pre><code>class Vect (object):\n def __init__(self,data):\n self.data=list(data)\n\n def __getitem__(self,key):\n return self.data[key[0]:key[1]+1]\n\na=Vect('hello')\nprint a[0,2]\n</code></pre>\n\n<p>Here <code>print a[0,2]</code> when <code>a</code> is an ordinary list will raise an exception:</p>\n\n<pre><code>&gt;&gt;&gt; a=list('hello')\n&gt;&gt;&gt; a[0,2]\nTraceback (most recent call last):\n File \"&lt;string&gt;\", line 1, in &lt;module&gt;\nTypeError: list indices must be integers, not tuple\n</code></pre>\n\n<p>But here the <code>__getitem__</code> method is different and it accepts a tuple as an argument.</p>\n\n<p>You can pass the <code>:</code> sign to <code>__getitem__</code> like this as <code>:</code> means <em>slice</em>:</p>\n\n<pre><code>class Vect (object):\n def __init__(self,data):\n self.data=list(data)\n\n def __getitem__(self,key):\n return self.data[0:key[1]+1]+list(key[0].indices(key[1]))\n\na=Vect('hello')\nprint a[:,2]\n</code></pre>\n\n<p>Speaking about <code>None</code>, it can be used when indexing in plain Python as well: </p>\n\n<pre><code>&gt;&gt;&gt; 'hello'[None:None]\n'hello'\n</code></pre>\n" } ]
31,494,687
1
<python><gpu><caffe>
2015-07-18T19:15:31.453
null
3,211,036
Caffe - inconsistency in the activation feature values - GPU mode
<p>Hi, I am using <strong>caffe</strong> on <strong>Ubuntu 14.04</strong>, <strong>CUDA version 7.0</strong> (latest), <strong>cudnn version 2</strong> (latest), <strong>GPU: NVIDIA GT 730</strong></p> <p>In caffe, first I get the initialization done and then I load the ImageNet model (AlexNet). I also initialize the GPU. After that I take an image and copy it onto the caffe source blob. Then I perform a forward pass for this image and extract the 4096-dimensional fc7 output (the activation features of the fc7 layer).</p> <p>The problem I am facing is that when I run the same code multiple times, I obtain a different result every time. That is, in GPU mode, the activation features are different for the same image on every run. For a forward pass, the function of the network is supposed to be deterministic, right? So I should get the same output every time for the same image.</p> <p>On the other hand, when I run caffe on the CPU, everything works perfectly, i.e., I get the same output each time. The code used and the outputs obtained are shown below. I am not able to understand what the problem is. Is the problem caused by GPU rounding? But the errors are very large. Or is it due to some issue with the latest cuDNN version? Or is it something else altogether?</p> <p><strong>Following is the CODE</strong></p> <h1>1) IMPORT libraries</h1> <pre></pre> <h1>2) IMPORT Caffe Models and define utility functions</h1> <pre></pre> <h1>3) LOADING Image and setting constants</h1> <pre></pre> <h1>4) Setting the source image and making the forward pass to obtain fc7 activation features</h1> <pre></pre> <p><strong>FOLLOWING is the output that I obtained for 'print dst.data' when I ran the above code multiple times</strong></p> <h1>output on 1st execution of code</h1> <pre></pre> <h1>output on 2nd execution of code</h1> <pre></pre> <h1>output on 3rd execution of code</h1> <pre></pre> <h1>output on 4th execution of code</h1> <pre></pre> <p>The output values keep becoming larger and larger and then become smaller again after some time. I am not able to understand the issue.</p>
[ { "AnswerId": "31654008", "CreationDate": "2015-07-27T13:10:33.560", "ParentId": null, "OwnerUserId": "5160755", "Title": null, "Body": "<p>Switch your network to Test mode to prevent the effect of dropout which is non-deterministic and needed for training mode.</p>\n\n<p>Add the following line right after initializing your network:</p>\n\n<p>net.set_phase_test()</p>\n\n<p>So that you'll always have the same results.</p>\n\n<p>Soner</p>\n" } ]
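<p>A hedged repeatability check built on the answer's advice: with dropout disabled via the (old-API) set_phase_test() call, two forward passes over the same input should agree to within floating-point error. net and im_proc are assumed to be set up as in the question, and 'fc7' exists because the model is AlexNet.</p>
<pre><code>import numpy as np

net.set_phase_test()  # old pycaffe API, as used in the answer

net.blobs['data'].data[0, ...] = im_proc
net.forward()
a = net.blobs['fc7'].data.copy()

net.blobs['data'].data[0, ...] = im_proc
net.forward()
b = net.blobs['fc7'].data.copy()

print(np.abs(a - b).max())  # expect ~0 once dropout is off
</code></pre>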
31,496,020
4
<python><opencv><caffe>
2015-07-18T22:03:02.830
null
5,125,298
ImportError on cv2.so
<p>I am trying to run fast-rcnn on a cluster where cv2.so is not installed for public use. So I directly moved the cv2.so into a directory on my path, but it fails with:</p> <p><strong>/lib64/libc.so.6: version `GLIBC_2.14' not found</strong></p> <p>So I had to install OpenCV under my local path again; this time it says:</p> <p><strong>ImportError: /home/username/.local/lib/python2.7/site-packages/cv2.so: undefined symbol: _ZN2cv11arrowedLineERNS_3MatENS_6Point_IiEES3_RKNS_7Scalar_IdEEiiid</strong></p> <p>This really confused me. Could anyone give me a hand?</p>
[ { "AnswerId": "42237346", "CreationDate": "2017-02-14T22:22:38.400", "ParentId": null, "OwnerUserId": "3633525", "Title": null, "Body": "<p>I know this is a little late, but I just got this same error with python 2.7 and opencv 3.1.0 on Ubuntu. Turns out I had to reinstall <code>opencv-python</code>. Running <code>sudo pip install opencv-python</code> did the trick.</p>\n" }, { "AnswerId": "32407947", "CreationDate": "2015-09-04T23:31:43.283", "ParentId": null, "OwnerUserId": "4965916", "Title": null, "Body": "<p>I ran into the same issue, but for me <code>PYTHONPATH</code> looked something like:</p>\n\n<pre><code>PYTHONPATH=/usr/local/lib/python2.7/dist-packages:/opt/opencv2.4.9/lib/python2.7/dist-packages\n</code></pre>\n\n<p>Removing <code>/opt/opencv2.4.9/lib/python2.7/dist-packages</code> from the path provided the fix.</p>\n" }, { "AnswerId": "31497506", "CreationDate": "2015-07-19T02:43:11.973", "ParentId": null, "OwnerUserId": "5125298", "Title": null, "Body": "<p>The problem has been solved after some trial and error.</p>\n\n<p>Since I installed under my ~/.local path, note that [include], [bin] and [lib] should all point to the local version; this is done by modifying the bashrc.</p>\n\n<p>I had only changed the lib path while the other 2 paths remained unchanged, still pointing to the cluster's OpenCV version 2.4.9 (mine is 2.4.11).</p>\n" }, { "AnswerId": "46825407", "CreationDate": "2017-10-19T08:14:03.990", "ParentId": null, "OwnerUserId": "3930957", "Title": null, "Body": "<p>After struggling with the above solutions, the following one (<a href=\"https://github.com/CharlesShang/FastMaskRCNN/issues/111\" rel=\"nofollow noreferrer\">source</a>) solved my problem:</p>\n\n<pre><code>sudo pip install --upgrade opencv-python\n</code></pre>\n" } ]
31,507,865
1
<image-processing><neural-network><directory-structure><deep-learning><caffe>
2015-07-20T01:54:50.050
31,509,621
4,282,823
Preparing image dataset for input into Caffe deep learning
<p>I know the first step is to create two file lists with the corresponding labels, one for the training and one for the test set. Suppose the former is called train.txt and the latter val.txt. The paths in these file lists should be relative. The labels should start at 0 and look similar to this:</p> <pre></pre> <p>For each of these two sets, we will create a separate LevelDB. Is this formatted as a text file? I thought I would create a directory with several subdirectories for each of my classes. Do I manually have to create a text file?</p>
[ { "AnswerId": "31509621", "CreationDate": "2015-07-20T05:54:39.490", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>Please see <a href=\"https://stackoverflow.com/a/31431716/1714410\">this tutorial</a> on how to use <code>convert_imageset</code> to build <code>levelDb</code> or <code>lmdb</code> datasets for caffe's training. </p>\n\n<p>As you can see from these instruction it does not matter how you arrange the image files on your disk (same folder/different folders...) as long as you have the correct paths in your <code>'train.txt'</code>/<code>'val.txt'</code> files relative to <code>'/path/to/jpegs/'</code> argument. But if you want to use <code>convert_imageset</code> tool, you'll have to create a text file listing all the images you want to use.</p>\n" } ]
31,509,756
1
<image><format><theano><deep-learning>
2015-07-20T06:05:51.597
null
5,121,855
How to change a dataset of images to train-images-idx3-ubyte format
<p>I have 10000 images. I want to convert them to a format like 'train-images-idx3-ubyte'. This format comes from <a href="http://yann.lecun.com/exdb/mnist/" rel="nofollow">here</a>. I want to use them with the deep learning methods described <a href="https://github.com/Newmu/Theano-Tutorials" rel="nofollow">here</a>.</p> <p>I appreciate any help.</p>
[ { "AnswerId": "31510287", "CreationDate": "2015-07-20T06:49:18.253", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>Take a look at how these files are loaded <a href=\"https://github.com/Newmu/Theano-Tutorials/blob/master/load.py#L17\" rel=\"nofollow\">here</a>.</p>\n\n<p>The use of <code>numpy.fromfile</code> indicates that the data are simply saved as raw bytes of a specific dtype. You can achieve this using <code>numpy.tofile</code>.</p>\n\n<p>However, make sure that this is really what you want to do. If you want to use certain networks on other images, these images will likely need to be of exactly the same size. It is worth digging further into the tutorials - after a while the transposition to other datasets will become easier.</p>\n" } ]
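<p>Building on the answer's pointer to numpy.tofile, here is a sketch of the idx container itself. The big-endian header layout and the magic numbers (0x00000803 for image files, 0x00000801 for label files) come from the MNIST page linked in the question; the images are assumed to be already decoded into a uint8 array of identical sizes.</p>
<pre><code>import struct

import numpy as np

def write_idx3_ubyte(images, path):
    """Write an (N, rows, cols) uint8 array in idx3-ubyte layout."""
    n, rows, cols = images.shape
    with open(path, 'wb') as f:
        f.write(struct.pack('>IIII', 0x00000803, n, rows, cols))
        images.astype(np.uint8).tofile(f)

def write_idx1_ubyte(labels, path):
    """Write a length-N uint8 label vector in idx1-ubyte layout."""
    labels = np.asarray(labels, dtype=np.uint8)
    with open(path, 'wb') as f:
        f.write(struct.pack('>II', 0x00000801, len(labels)))
        labels.tofile(f)
</code></pre>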
31,509,912
2
<image><lua><data-visualization><convolution><torch>
2015-07-20T06:18:53.833
31,510,973
4,357,605
Visualize images in intermediate layers in torch (lua)
<p>In the conv-nets model, I know how to visualize the filters: we can do itorch.image(model:get(1).weight).</p> <p>But how can I efficiently visualize the output images after the convolution, especially those in the second or third layer of a deep neural network?</p> <p>Thanks.</p>
[ { "AnswerId": "31510973", "CreationDate": "2015-07-20T07:32:37.617", "ParentId": null, "OwnerUserId": "117844", "Title": null, "Body": "<p>Similarly to weight, you can use:</p>\n\n<pre><code>itorch.image(model:get(1).output)\n</code></pre>\n" }, { "AnswerId": "32471992", "CreationDate": "2015-09-09T06:05:21.330", "ParentId": null, "OwnerUserId": "309653", "Title": null, "Body": "<p>To visualize the weights:</p>\n\n<pre><code>-- visualizing weights\nn = nn.SpatialConvolution(1,64,16,16)\nitorch.image(n.weight)\n</code></pre>\n\n<p>To visualize the feature maps:</p>\n\n<pre><code>-- initialize a simple conv layer\nn = nn.SpatialConvolution(1,16,12,12)\n\n-- push lena through net :)\nres = n:forward(image.rgb2y(image.lena())) \n\n-- res here is a 16x501x501 volume. We view it now as 16 separate sheets of size 1x501x501 using the :view function\nres = res:view(res:size(1), 1, res:size(2), res:size(3))\nitorch.image(res)\n</code></pre>\n\n<p>For more: <a href=\"https://github.com/torch/tutorials/blob/master/1_get_started.ipynb\" rel=\"noreferrer\">https://github.com/torch/tutorials/blob/master/1_get_started.ipynb</a></p>\n" } ]
31,511,115
1
<output><convolution><theano><deep-learning>
2015-07-20T07:40:36.567
null
4,744,673
How to find the output of convolutional_mlp of Theano tutorial based on python
<p>I have 10000 images fed to a CNN in the Theano tutorial described <a href="https://github.com/lisa-lab/DeepLearningTutorials" rel="nofollow">here</a>.</p> <p>In the classification step, I want to classify those images into 40 classes, so the number of units in the last layer would be 40. I want to get the predicted value from there. Layer3 calls the 'LogisticRegression' function that is available in this package. I think the CNN goes to the 'LogisticRegression' function to evaluate the predicted values. How can I access these values?<br> The info related to the layers is:</p> <pre></pre> <p>I think if we could get the output from 'layer3', it would be great. I want this value because I want to count how many samples in each class have been predicted accurately. Is there anybody who can help?</p>
[ { "AnswerId": "34728844", "CreationDate": "2016-01-11T18:49:37.553", "ParentId": null, "OwnerUserId": "5775130", "Title": null, "Body": "<p>I'm not too familiar with Theano but the reason you don't have access to layer0 is because when training the convolutional_mlp, you have to not only save layer3, you have to save all the layers. (Alternatively, you can choose to save the parameters from each layer and recreate them).</p>\n\n<p>For example, in the while loop you can add the following:</p>\n\n<pre><code>with gzip.open('./testing/model.pkl.gz', 'w') as f:\n cPickle.dump([layer0_input, layer0, layer1, layer2_input, layer2, layer3], f)\n</code></pre>\n\n<p>Then you can do something like the following as a predict function. </p>\n\n<pre><code> def predict(model='./testing/model.pkl.gz', \n testset='./testing/testset.pkl.gz',\n batch_size=5):\n\n \"\"\" Load a trained model and use it to predict labels.\n\n :type model: Layers to accept inputs and produce outputs.\n \"\"\"\n\n # Load the saved model.\n classifiers = cPickle.load(gzip.open(model))\n\n # Pick out the individual layer\n layer0_input = classifiers[0]\n layer0 = classifiers[1]\n layer1 = classifiers[2]\n layer2_input = classifiers[3]\n layer2 = classifiers[4]\n layer3 = classifiers[5]\n\n # Apply it to our test set\n testsets = load_data(testset)\n test_set_x = testsets.get_value()\n\n # compile a predictor function\n index = T.lscalar()\n\n predict_model = theano.function(\n [layer0_input],\n layer3.y_pred,\n )\n\n predicted_values = predict_model(\n test_set_x[:batch_size].reshape((batch_size, 1, 28, 23))\n )\n\n print('Prediction complete.')\n return predicted_values\n</code></pre>\n\n<p>The annoying thing is that by design of the tutorial, you have to pass in batches of the same size as you trained by. Here I only did one batch but you'd want to loop through all the batches in the test_set_x. </p>\n" } ]
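<p>Once the predictions are out, the per-class counting the question asks for is a few numpy lines. The variable names below are assumptions tying into the answer's predict() sketch; y_true is whatever ground-truth label array you hold for the test set.</p>
<pre><code>import numpy as np

y_pred = np.asarray(predicted_values)  # from the answer's predict()
y_true = np.asarray(test_labels)       # assumed ground-truth labels

# correctly predicted samples per class, over 40 classes
correct = np.bincount(y_true[y_pred == y_true], minlength=40)
total = np.bincount(y_true, minlength=40)
print(correct.astype(float) / np.maximum(total, 1))  # per-class accuracy
</code></pre>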
31,530,360
1
<c><lua><luajit><torch>
2015-07-21T04:25:24.140
31,530,405
678,392
How does this C code (from lua library, Torch) even compile/work?
<p>See <a href="https://github.com/torch/nn/blob/master/generic/Tanh.c" rel="nofollow">https://github.com/torch/nn/blob/master/generic/Tanh.c</a></p> <p>For example,</p> <pre></pre> <p>First, I don't know how to interpret the first line:</p> <pre></pre> <p>What are the arguments here? What does Tanh_updateOutput refer to? Does "nn_" have a special meaning?</p> <p>Second, "TH_TENSOR_APPLY2" and "THTensor_(...)" are both used, but I don't see where they are defined; there are no other includes in this file.</p>
[ { "AnswerId": "31530405", "CreationDate": "2015-07-21T04:29:19.847", "ParentId": null, "OwnerUserId": "2357112", "Title": null, "Body": "<p><code>nn_</code> is a macro. You can find the definition by searching the repository for <code>\"#define nn_\"</code>; it's in <a href=\"https://github.com/torch/nn/blob/fdfcd12d789a885458222e4f7475e74a3ebc516f/init.c\"><code>init.c</code></a>:</p>\n\n<pre><code>#define nn_(NAME) TH_CONCAT_3(nn_, Real, NAME)\n</code></pre>\n\n<p>You can keep following the chain of macro definitions, and you'll probably end up with some token pasting thing that makes <code>nn_(Tanh_updateOutput)</code> expand to the name of the function.</p>\n\n<p>(It's weird that <code>generic/Tanh.c</code> doesn't have any includes; <code>generic/Tanh.c</code> must be included by some other file. That's unusual for <code>.c</code> files.)</p>\n" } ]
31,531,807
1
<python><hash><computer-vision><caffe><locality-sensitive-hash>
2015-07-21T06:27:51.773
null
196,048
Trouble shooting locality sensitive hash
<p>I am using <a href="/questions/tagged/caffe" class="post-tag" title="show questions tagged &#39;caffe&#39;" rel="tag">caffe</a>, a deep neural network <a href="http://caffe.berkeleyvision.org" rel="nofollow">library</a>, to generate image features for image-based retrieval. The particular network I am using generates a 4096-dimensional feature.</p> <p>I am using <a href="https://github.com/kayzh/LSHash" rel="nofollow">LSHash</a> to generate hash buckets from the features. When I do a brute-force comparison of all available features, sorting images by Euclidean distance, I find the features represent image similarity well. When I use LSHash, however, I find that similar features rarely land in the same bucket.</p> <p>Are the source features too large for use with LSH? Are there other ways to reduce the dimensions of the image features before attempting to hash them?</p>
[ { "AnswerId": "31532224", "CreationDate": "2015-07-21T06:55:33.420", "ParentId": null, "OwnerUserId": "1714410", "Title": null, "Body": "<p>If you are looking for intelligent dimensionality reduction, you can simply add another <code>\"InnerProduct\"</code> layer on top of your net with lower output dimension.<br>\nTo train only this layer without altering the rest of the weights you can set the <code>lr_mult</code> values for all the layers (apart from the new one) to zero thus training (aka \"finetuning\") only the top dim-reduction layer.</p>\n" } ]
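<p>As a non-learned alternative to the finetuned InnerProduct layer suggested above, plain PCA (a different technique, named as such) is a common pre-hashing reduction. A hedged sklearn sketch; the 128-dimensional target is an assumption to be validated against the brute-force ranking.</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA

features = np.random.rand(10000, 4096)  # stand-in for the fc7 features

pca = PCA(n_components=128)
reduced = pca.fit_transform(features)   # feed these to LSHash instead
</code></pre>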
31,549,129
0
<python><cuda><theano>
2015-07-21T20:36:24.520
null
5,140,643
Importing theano - nvcc : fatal error : nvcc cannot find a supported version of Microsoft Visual Studio
<p>I get this warning/error message whenever I try importing theano-based packages for the first time (it disappears the second time...). I'm wondering how to get rid of it or disable it, since it seems not to affect anything except for being ugly... Thanks in advance for your help!</p> <p>I have Visual Studio 2010 with C++ installed and added the path of cl.exe to the system path. Below is the original error message:</p> <p>===============================</p> <blockquote> <p>nvcc : fatal error : nvcc cannot find a supported version of Microsoft Visual Studio. Only the versions 2008, 2010, and 2012 are supported ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: ('nvcc return status', -1, 'for cmd', 'nvcc -shared -O3 -Xlinker /DEBUG -D HAVE_ROUND -m64 -Xcompiler -DCUDA_NDARRAY_CUH=47047b939bf764f5977458e316cf235a,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,/Zi,/MD -IC:\Anaconda\lib\site-packages\theano\sandbox\cuda -IC:\Anaconda\lib\site-packages\numpy\core\include -IC:\Anaconda\include -IC:\Anaconda\lib\site-packages\theano\gof -o C:\Users\YDuan\AppData\Local\Theano\compiledir_Windows-7-6.1.7601-SP1-Intel64_Family_6_Model_58_Stepping_9_GenuineIntel-2.7.9-64\cuda_ndarray\cuda_ndarray.pyd mod.cu -LC:\Anaconda\libs -LC:\Anaconda -lpython27 -lcublas -lcudart') ERROR:theano.sandbox.cuda:Failed to compile cuda_ndarray.cu: ('nvcc return status', -1, 'for cmd', 'nvcc -shared -O3 -Xlinker /DEBUG -D HAVE_ROUND -m64 -Xcompiler -DCUDA_NDARRAY_CUH=47047b939bf764f5977458e316cf235a,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,/Zi,/MD -IC:\Anaconda\lib\site-packages\theano\sandbox\cuda -IC:\Anaconda\lib\site-packages\numpy\core\include -IC:\Anaconda\include -IC:\Anaconda\lib\site-packages\theano\gof -o C:\Users\YDuan\AppData\Local\Theano\compiledir_Windows-7-6.1.7601-SP1-Intel64_Family_6_Model_58_Stepping_9_GenuineIntel-2.7.9-64\cuda_ndarray\cuda_ndarray.pyd mod.cu -LC:\Anaconda\libs -LC:\Anaconda -lpython27 -lcublas -lcudart')</p> <p>['nvcc', '-shared', '-O3', '-Xlinker', '/DEBUG', '-D HAVE_ROUND', '-m64', '-Xcompiler', '-DCUDA_NDARRAY_CUH=47047b939bf764f5977458e316cf235a,-DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION,/Zi,/MD', '-IC:\Anaconda\lib\site-packages\theano\sandbox\cuda', '-IC:\Anaconda\lib\site-packages\numpy\core\include', '-IC:\Anaconda\include', '-IC:\Anaconda\lib\site-packages\theano\gof', '-o', 'C:\Users\YDuan\AppData\Local\Theano\compiledir_Windows-7-6.1.7601-SP1-Intel64_Family_6_Model_58_Stepping_9_GenuineIntel-2.7.9-64\cuda_ndarray\cuda_ndarray.pyd', 'mod.cu', '-LC:\Anaconda\libs', '-LC:\Anaconda', '-lpython27', '-lcublas', '-lcudart']</p> </blockquote>
[]
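<p>The question has no answers; one hedged avenue is Theano's nvcc.compiler_bindir config flag, which tells nvcc which cl.exe to use. The flag exists in Theano's configuration; the Visual Studio path below is an assumption, so substitute your own VC bin directory. It must be set before the first import.</p>
<pre><code>import os

os.environ['THEANO_FLAGS'] = (
    'nvcc.compiler_bindir='
    r'C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin'
)
import theano  # the flag must be set before this import runs
</code></pre>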
31,549,838
1
<python><anaconda><theano>
2015-07-21T21:21:49.797
31,556,146
3,775,577
Import error while using scikit-neuralnetwork
<p>I'm using Anaconda with Python 3.4 64-bit on Windows 8.</p> <p>While trying to use the package scikit-neuralnetwork, this line of code raises the following exception:</p> <pre></pre> <p>I installed the GCC and g++ (4.8.1) compilers and added them correctly to PATH; however, the following error continues to appear:</p> <pre></pre>
[ { "AnswerId": "31556146", "CreationDate": "2015-07-22T07:06:43.333", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>This error is often solved by running the following, as mentioned in the <a href=\"http://deeplearning.net/software/theano/install_windows.html\" rel=\"nofollow\">documentation</a>.</p>\n\n<pre><code>conda install mingw libpython\n</code></pre>\n\n<p>Since you've now installed your own installation of GCC, you may need to play around with environment variables, especially <code>PATH</code> to get things working.</p>\n" } ]
31,556,268
2
<python><neural-network><xor><keras>
2015-07-22T07:13:16.683
null
5,142,261
How to use keras for XOR
<p>I want to practice Keras by coding an XOR, but the result is not right. The following is my code; thanks to everybody for helping me.</p> <pre></pre> <blockquote> <p>Output</p> </blockquote> <pre></pre>
[ { "AnswerId": "31884273", "CreationDate": "2015-08-07T18:20:16.793", "ParentId": null, "OwnerUserId": "12981", "Title": null, "Body": "<p>If I increase the number of epochs in your code to 50000 it does often converge to the right answer for me, just takes a little while :)</p>\n\n<p>It does often get stuck, though. I get better convergence properties if I change your loss function to 'mean_squared_error', which is a smoother function.</p>\n\n<p>I get still faster convergence if I use the Adam or RMSProp optimizers. My final compile line, which works:</p>\n\n<pre><code>model.compile(loss='mse', optimizer='adam')\n...\nmodel.fit(train_data, label, nb_epoch = 10000,batch_size = 4,verbose = 1,shuffle=True,show_accuracy = True)\n</code></pre>\n" }, { "AnswerId": "41686065", "CreationDate": "2017-01-16T22:31:24.203", "ParentId": null, "OwnerUserId": "7427701", "Title": null, "Body": "<p>I used a single hidden layer with 4 hidden nodes, and it almost always converges to the right answer within 500 epochs. I used sigmoid activations.</p>\n" } ]
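<p>Pulling the two answers together, a sketch of a configuration that converges reliably: one hidden layer of 4 sigmoid units, mean squared error, and the Adam optimizer. It is written against the current Keras API (the 2015-era answers used nb_epoch instead of epochs), so treat it as an assumption about your installed version.</p>
<pre><code>import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

model = Sequential()
model.add(Dense(4, input_dim=2, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mse', optimizer='adam')

model.fit(X, y, epochs=2000, verbose=0)
print(model.predict(X).round().ravel())  # expect [0, 1, 1, 0]
</code></pre>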
31,558,162
1
<python><theano>
2015-07-22T08:42:39.370
31,558,574
5,130,560
updates argument in theano functions
<p>What does the "updates" argument do when called this way?</p> <pre></pre> <p>All the documentation I have seen about the "updates" argument in theano functions talks about pairs of the form (shared variable, expression used to update the shared variable). However, here there is only an expression, so how do I know which shared variable is updated?</p> <p>I guess the shared variables are somehow implicit, but <code>zgup</code> and <code>rg2up</code> both depend on different shared variables:</p> <pre></pre> <p>This code comes from lstm.py in <a href="http://deeplearning.net/tutorial/lstm.html" rel="nofollow">http://deeplearning.net/tutorial/lstm.html</a></p> <p>Thanks</p>
[ { "AnswerId": "31558574", "CreationDate": "2015-07-22T09:01:32.750", "ParentId": null, "OwnerUserId": "127480", "Title": null, "Body": "<p>It is correct to think that <code>updates</code> should be a list (or dictionary) of key value pairs where the key is a shared variable and the value is a symbolic expression describing how to update the corresponding shared variable.</p>\n\n<p>These two lines create the pairs:</p>\n\n<pre><code>zgup = [(zg, g) for zg, g in zip(zipped_grads, grads)]\nrg2up = [(rg2, 0.95 * rg2 + 0.05 * (g ** 2))\n for rg2, g in zip(running_grads2, grads)]\n</code></pre>\n\n<p><code>zipped_grads</code> and <code>running_grads2</code> were created in the previous lines are each just a list of shared variables. Here, those shared variables are linked to updates using the Python <code>zip</code> function, which emits a list of pairs. In fact, the first of these lines could be replaced with</p>\n\n<pre><code>zgup = zip(zipped_grads, grads)\n</code></pre>\n\n<p>This code is quite complex because it is implementing the AdaDelta update mechanism. If you want to see how <code>updates</code> works in a simpler setting, take a look at the basic stochastic gradient descent update in the <a href=\"http://deeplearning.net/tutorial/mlp.html\" rel=\"nofollow\">Theano MLP tutorial</a>.</p>\n\n<pre><code>updates = [\n (param, param - learning_rate * gparam)\n for param, gparam in zip(classifier.params, gparams)\n ]\n</code></pre>\n" } ]
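<p>A stripped-down illustration of the (shared variable, update expression) pairing the answer describes, away from the AdaDelta machinery; this is the classic Theano accumulator.</p>
<pre><code>import theano
import theano.tensor as T

state = theano.shared(0, name='state')
inc = T.iscalar('inc')

# updates is a list of (shared_variable, new_value_expression) pairs;
# each call returns the old state, then applies the update.
accumulate = theano.function([inc], state,
                             updates=[(state, state + inc)])

accumulate(1)               # state becomes 1
accumulate(10)              # state becomes 11
print(state.get_value())
</code></pre>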
31,573,856
1
<numpy><theano>
2015-07-22T20:57:46.247
31,579,778
87,240
theano: row-wise outer product between two matrices
<p>I'm trying to compute the row-wise outer-product between two matrices in theano, without using scan. I can do this in numpy by using einsum, which isn't available in theano.</p> <pre></pre>
[ { "AnswerId": "31579778", "CreationDate": "2015-07-23T06:34:01.173", "ParentId": null, "OwnerUserId": "3489247", "Title": null, "Body": "<p>This should be doable using some reshaping: Many of the simple einsum operations boil down to that. The complicated ones don't.</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nimport numpy as np\n\na = np.array([[1,1,1],[2,2,2]]).astype('float32')\nb = np.array([[3,3,3,3],[4,4,4,4]]).astype('float32')\n\nA = T.fmatrix()\nB = T.fmatrix()\n\nC = A[:, :, np.newaxis] * B[:, np.newaxis, :]\n\nprint C.eval({A:a, B:b})\n</code></pre>\n\n<p>results in</p>\n\n<pre><code>[[[ 3. 3. 3., 3.]\n [ 3. 3. 3., 3.]\n [ 3. 3. 3.. 3.]]\n\n [[ 8. 8. 8., 8.]\n [ 8. 8. 8., 8.]\n [ 8. 8. 8., 8.]]]\n</code></pre>\n" } ]
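<p>To confirm that the answer's broadcasting form matches the einsum the question mentions, a quick numpy check on the same inputs:</p>
<pre><code>import numpy as np

a = np.array([[1, 1, 1], [2, 2, 2]], dtype='float32')
b = np.array([[3, 3, 3, 3], [4, 4, 4, 4]], dtype='float32')

ref = np.einsum('ij,ik->ijk', a, b)              # numpy-only reference
alt = a[:, :, np.newaxis] * b[:, np.newaxis, :]  # the broadcasting form
assert alt.shape == (2, 3, 4)                    # row-wise outer products
assert np.allclose(ref, alt)
</code></pre>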
31,588,281
0
<python><gpu><theano><numba><numba-pro>
2015-07-23T13:07:31.900
null
1,088,305
What is the main difference between Numba Pro and Theano/pyautodiff for GPU calculations?
<p>Both Numba Pro and pyautodiff (based on Theano) support conversion of Python code into GPU machine code. Theano will also allow symbolic differentiation of the resulting syntax tree, but this is outside the scope of my question.</p> <p>My question is whether there are technical limitations in one or the other framework that would make the code less efficient.</p>
[]
31,612,074
1
<python><theano><deep-learning>
2015-07-24T13:39:32.137
null
200,340
Theano continue training
<p>I am looking for some suggestions about how to continue training in Theano. For example, I have the following:</p> <pre> classifier = my_classifier() cost = () updates = [] train_model = theano.function(...) eval_model = theano.function(...) best_accuracy = 0 while (epoch &lt; n_epochs): train_model() current_accuracy = eval_model() if current_accuracy > best_accuracy: save classifier or save theano functions? best_accuracy = current_accuracy else: load saved classifier or save theano functions? if we saved classifier previously, do we need to redefine train_model and eval_model functions? epoch+=1 #training is finished save classifier </pre> <p>I want to save the currently trained model if it has higher accuracy than previously trained models, and load the saved model later if the current model's accuracy is lower than the best accuracy.</p> <p>My questions are:</p> <p>When saving, should I save the classifier, or the theano functions?</p> <p>If the classifier needs to be saved, do I need to redefine the theano functions when loading it, since the classifier has changed?</p>
[ { "AnswerId": "31778280", "CreationDate": "2015-08-03T01:32:00.797", "ParentId": null, "OwnerUserId": "949321", "Title": null, "Body": "<p>When pickling models, it is always better to save the parameters and, when loading, re-create the shared variables and rebuild the graph from them. This allows swapping the device between CPU and GPU.</p>\n\n<p>But you can pickle Theano functions. If you do that, pickle all associated functions at the same time. Otherwise, each of them will have a different copy of the shared variables: each call to load() will create new shared variables if they were pickled. This is a limitation of pickle.</p>\n" } ]
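<p>A sketch of the parameter-saving route the answer recommends (Python 2, to match the tutorials; classifier.params is an assumption that your model object exposes its shared variables as a list). Note that the compiled train_model/eval_model functions do not need to be redefined after loading, because set_value updates the same shared variables they already reference.</p>
<pre><code>import cPickle

def save_params(classifier, path):
    # persist only the raw parameter values, not the compiled graph
    with open(path, 'wb') as f:
        cPickle.dump([p.get_value() for p in classifier.params], f)

def load_params(classifier, path):
    # push saved values back into the existing shared variables
    with open(path, 'rb') as f:
        values = cPickle.load(f)
    for p, v in zip(classifier.params, values):
        p.set_value(v)
</code></pre>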