QuestionId (int64, 388k-59.1M) | AnswerCount (int64, 0-47) | Tags (string, 7-102 chars) | CreationDate (string, 23 chars) | AcceptedAnswerId (float64, 388k-59.1M, ⌀ = null) | OwnerUserId (float64, 184-12.5M, ⌀ = null) | Title (string, 15-150 chars) | Body (string, 12-29.3k chars) | answers (list, 0-47 items)
---|---|---|---|---|---|---|---|---|
30,376,730 | 0 |
<multithreading><mpi><caffe>
|
2015-05-21T14:28:26.980
| null | 4,925,131 |
Parallel Caffe installation : This MPI version is NOT support multi-thread! *
|
<p>The mpi_train.sh has the following line:</p>
<pre><code>mpiexec.hydra -prepend-rank -host node11 -n 16</code></pre>
<p>While running examples/cifar10/mpi_train_quick.sh in parallel caffe I get the following error:</p>
<pre></pre>
<p>I have installed the mpich-3.1.4 version.
When I do <code>which mpirun</code>, <code>which mpiexec</code>, and <code>which mpicc</code>, I get the path to the mpich-3.1.4 folder from all of them.
The system also has both Open MPI and MPICH-3.1.4 installed, since Open MPI gets installed automatically while installing OpenCV, which is a prerequisite for parallel Caffe. How can I resolve this error?</p>
|
[] |
30,383,404 | 1 |
<python><theano><keras>
|
2015-05-21T20:08:14.430
| 30,384,484 | 2,991,243 |
What is data type for Python Keras deep learning package?
|
<p>I didn't find anything about the data type we need to work with at <a href="http://keras.io/" rel="noreferrer">this</a> link. I tried an array and a list, but both returned errors. Any clue?</p>
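<p>For context, a minimal sketch of the kind of numpy array I mean (the shapes and values here are made up):</p>
<pre><code>import numpy as np

# Keras/Theano models generally expect numpy arrays of a floating point
# dtype rather than plain Python lists
X = np.asarray([[0.1, 0.2, 0.3],
                [0.4, 0.5, 0.6]], dtype='float32')
y = np.asarray([0, 1], dtype='float32')
</code></pre>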
|
[
{
"AnswerId": "30384484",
"CreationDate": "2015-05-21T21:16:32.537",
"ParentId": null,
"OwnerUserId": "2929337",
"Title": null,
"Body": "<p>Keras uses <code>numpy</code> arrays containing the <code>theano.config.floatX</code> floating point type. This can be configured in your <a href=\"http://deeplearning.net/software/theano/library/config.html#envvar-THEANORC\" rel=\"nofollow\"><code>.theanorc</code></a> file.</p>\n\n<p>Typically, it will be <code>float64</code> for CPU computations and <code>float32</code> for GPU computations, although you can also set it to <code>float32</code> when working on the CPU if you prefer.</p>\n\n<p>You can create a zero-filled array of the proper type by the command</p>\n\n<pre><code>X = numpy.zeros((4,3), dtype=theano.config.floatX)\n</code></pre>\n"
}
] |
30,384,908 | 1 |
<python><numpy><canopy><theano><keras>
|
2015-05-21T21:45:01.803
| 30,386,018 | 2,991,243 |
Python keras neural network (Theano) package returns an error about data dimensions
|
<p>I have this code:</p>
<pre></pre>
<p>Returns this error:</p>
<pre></pre>
<p>What is the problem?</p>
|
[
{
"AnswerId": "30386018",
"CreationDate": "2015-05-21T23:27:57.407",
"ParentId": null,
"OwnerUserId": "2929337",
"Title": null,
"Body": "<p>You specified the wrong output dimensions for your internal layers. See for instance this example from the Keras documentation:</p>\n\n<pre><code>model = Sequential()\nmodel.add(Dense(20, 64, init='uniform'))\nmodel.add(Activation('tanh'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(64, 64, init='uniform'))\nmodel.add(Activation('tanh'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(64, 2, init='uniform'))\nmodel.add(Activation('softmax'))\n</code></pre>\n\n<p>Note how the output size of one layer matches the input size of the next one:</p>\n\n<pre><code>20x64 -> 64x64 -> 64x2\n</code></pre>\n\n<p>The first number is always the input size (number of neurons on the previous layer), the second number the output size (number of neurons on the next layer). So in this example you have four layers:</p>\n\n<ul>\n<li>an input layer with 20 neurons</li>\n<li>a hidden layer with 64 neurons</li>\n<li>a hidden layer with 64 neurons</li>\n<li>an output layer with 2 neurons</li>\n</ul>\n\n<p>The only hard restriction you have is that the first (input) layer needs to have as many neurons as you have features, and the last (output) layer needs to have as many neurons as you need for your task.</p>\n\n<p>For your example, since you have three features, you need to change the input layer size to 3, and you can keep the two output neurons from this example to do binary classification (or use one, as you did, with logistic loss).</p>\n"
}
] |
30,385,351 | 0 |
<neural-network><deep-learning><caffe>
|
2015-05-21T22:23:36.170
| null | 2,284,821 |
how to setup Caffe imagenet_solver.prototxt file for fewer jpgs, program exited after iteration 0
|
<p>We need help understanding the parameters to use for a smaller set of training (6000) and validation (170) JPEGs. Our execution was killed and exited after test score 0/1 in Iteration 0. </p>
<p>We are trying to run the imagenet sample on the caffe website tutorial at</p>
<pre></pre>
<p>Instead of using the full set of ILSVRC2 images in the package, we use our own training set of 6000 JPEGs and val set of 170 JPEG images. They are each 256 x 256 JPEG files in the train and val directories as instructed. We ran the script to get the auxiliary data: </p>
<pre></pre>
<p>The train.txt and val.txt files are set up to describe one of two possible categories for each JPEG file.
Then we ran the script to compute the mean image data, which appeared to run correctly:</p>
<pre></pre>
<p>We used the model definitions supplied in the tutorial for imagenet_train.prototxt and imagenet_val.prototxt.
Since we are training on much fewer images we modified the imagenet_solver.prototxt as follows:</p>
<pre></pre>
<p>When we run it using:</p>
<pre></pre>
<p>We get the following output where it hangs:</p>
<pre></pre>
|
[] |
30,398,000 | 0 |
<python><macos><theano>
|
2015-05-22T13:29:05.270
| null | 4,248,384 |
How to make Theano operate on Mac Lion?
|
<p>I need to have Theano operate on my Mac for a school project. I downloaded the sources from <a href="https://pypi.python.org/pypi/Theano#downloads" rel="nofollow">here</a> (first line of the table) and put the untarred folder in the directory of my school project. I didn't know if this was sufficient, so I also used . Everything seems fine since I got this:</p>
<pre></pre>
<p>As you can see, I'm using Anaconda. </p>
<p>However, when I run a test in a Python console, I get these messages:</p>
<pre></pre>
<p>Is there any problem? Are Theano and its compiler installed correctly?
I tried the first example of <a href="http://deeplearning.net/software/theano/tutorial/adding.html" rel="nofollow">this tutorial</a> and got the same results without any bugs. But I'm wondering whether there will be bugs if I use more sophisticated functions and classes. </p>
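<p>This is the minimal check I run to confirm which Theano the interpreter picks up and that its compiler works (a toy function, nothing from my project):</p>
<pre><code>import theano
import theano.tensor as T

print(theano.__version__)

# compiling a tiny graph forces Theano to invoke its C compiler
x = T.dscalar('x')
f = theano.function([x], x ** 2)
print(f(4.0))  # expected: 16.0
</code></pre>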
|
[] |
30,403,590 | 2 |
<computer-vision><neural-network><deep-learning><caffe><matcaffe>
|
2015-05-22T18:31:14.680
| null | 242,871 |
number of parameters in Caffe LENET or Imagenet models
|
<p>How can I calculate the number of parameters in a model, e.g. LeNet for MNIST, or the ConvNet for the ImageNet model, etc.?
Is there any specific function in Caffe that returns or saves the number of parameters in a model?
Regards</p>
|
[
{
"AnswerId": "31399560",
"CreationDate": "2015-07-14T06:46:59.320",
"ParentId": null,
"OwnerUserId": "4788536",
"Title": null,
"Body": "<p>I can offer an explicit way to do this via the Matlab interface (make sure the matcaffe is installed first).\nBasically, you extract set of parameters from each network layer and count them.\nIn Matlab:</p>\n\n<pre><code>% load the network\nnet_model = <path to your *deploy.prototxt file>\nnet_weights = <path to your *.caffemodel file>\nphase = 'test';\ntest_net = caffe.Net(net_model, net_weights, phase);\n\n% get the list of layers\nlayers_list = test_net.layer_names;\n% for those layers which have parameters, count them\ncounter = 0;\nfor j = 1:length(layers_list),\n if ~isempty(test_net.layers(layers_list{j}).params)\n feat = test_net.layers(layers_list{j}).params(1).get_data();\n counter = counter + numel(feat)\n end\nend\n</code></pre>\n\n<p>In the end, 'counter' contains the number of parameters. </p>\n"
},
{
"AnswerId": "39687866",
"CreationDate": "2016-09-25T14:19:35.313",
"ParentId": null,
"OwnerUserId": "1621562",
"Title": null,
"Body": "<p>Here is a python snippet to compute the number of parameters in a Caffe model:</p>\n\n<pre><code>import caffe\ncaffe.set_mode_cpu()\nimport numpy as np\nfrom numpy import prod, sum\nfrom pprint import pprint\n\ndef print_net_parameters (deploy_file):\n print \"Net: \" + deploy_file\n net = caffe.Net(deploy_file, caffe.TEST)\n print \"Layer-wise parameters: \"\n pprint([(k, v[0].data.shape) for k, v in net.params.items()])\n print \"Total number of parameters: \" + str(sum([prod(v[0].data.shape) for k, v in net.params.items()]))\n\ndeploy_file = \"/home/ubuntu/deploy.prototxt\"\nprint_net_parameters(deploy_file)\n\n# Sample output:\n# Net: /home/ubuntu/deploy.prototxt\n# Layer-wise parameters: \n#[('conv1', (96, 3, 11, 11)),\n# ('conv2', (256, 48, 5, 5)),\n# ('conv3', (384, 256, 3, 3)),\n# ('conv4', (384, 192, 3, 3)),\n# ('conv5', (256, 192, 3, 3)),\n# ('fc6', (4096, 9216)),\n# ('fc7', (4096, 4096)),\n# ('fc8', (819, 4096))]\n# Total number of parameters: 60213280\n</code></pre>\n\n<p><a href=\"https://gist.github.com/kaushikpavani/a6a32bd87fdfe5529f0e908ed743f779\" rel=\"nofollow\">https://gist.github.com/kaushikpavani/a6a32bd87fdfe5529f0e908ed743f779</a></p>\n"
}
] |
30,407,358 | 1 |
<function><lua><lua-table><torch>
|
2015-05-22T23:42:37.093
| null | 3,113,501 |
How does this function: input.nn.MSECriterion_updateOutput(self, input, target) work (in Lua/Torch)?
|
<p>I have this function:</p>
<pre></pre>
<p>Now, </p>
<pre></pre>
<p>returns a number. I have no idea how it does it. I have walked step by step in the debugger and it seems this just computes a number with no intermediate steps.</p>
<pre></pre>
<p>I'm confused as to how this can compute a number.</p>
<p>I'm confused as to why this is even allowed. The parameter input is a tensor, which shouldn't even have a method called <code>nn.MSECriterion_updateOutput</code>.</p>
|
[
{
"AnswerId": "30416207",
"CreationDate": "2015-05-23T18:30:54.167",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<p>When you perform <code>require \"nn\"</code> this loads <code>init.lua</code> which in turns performs a <code>require('libnn')</code>. This is the C extension of torch/nn.</p>\n\n<p>If you look at <code>init.c</code> you can find <a href=\"https://github.com/torch/nn/blob/3dd5d1d/init.c#L127\"><code>luaopen_libnn</code></a>: this is the initialization function called when <code>libnn.so</code> is <code>require</code>-ed.</p>\n\n<p>This function takes care to initialize all parts of torch/nn, including the native parts of <code>MSECriterion</code> via <code>nn_FloatMSECriterion_init(L)</code> and <code>nn_DoubleMSECriterion_init(L)</code>.</p>\n\n<p>If you look at <code>generic/MSECriterion.c</code> you can find the <em>generic</em> (i.e macro expanded for <code>float</code> and <code>double</code>) <a href=\"https://github.com/torch/nn/blob/3dd5d1d/generic/MSECriterion.c#L47-L52\">initialization function</a>:</p>\n\n<pre><code>static void nn_(MSECriterion_init)(lua_State *L)\n{\n luaT_pushmetatable(L, torch_Tensor);\n luaT_registeratname(L, nn_(MSECriterion__), \"nn\");\n lua_pop(L,1);\n}\n</code></pre>\n\n<p>This init function modifies the metatable of any <code>torch.FloatTensor</code> and <code>torch.DoubleTensor</code> so that it is filled with a bunch of functions under the <code>nn</code> key (see <a href=\"https://github.com/torch/torch7/blob/9b51907/lib/luaT/README.md\">Torch7 Lua C API</a> for more details). These functions are defined right before:</p>\n\n<pre><code>static const struct luaL_Reg nn_(MSECriterion__) [] = {\n {\"MSECriterion_updateOutput\", nn_(MSECriterion_updateOutput)},\n {\"MSECriterion_updateGradInput\", nn_(MSECriterion_updateGradInput)},\n {NULL, NULL}\n};\n</code></pre>\n\n<p>In other words any tensor has these functions <em>attached</em> thanks to its metatable:</p>\n\n<pre><code>luajit -lnn\n> print(torch.Tensor().nn.MSECriterion_updateOutput)\nfunction: 0x40921df8\n> print(torch.Tensor().nn.MSECriterion_updateGradInput)\nfunction: 0x40921e20\n</code></pre>\n\n<p><em>Note: this mechanism is the same for all torch/nn modules that have a C native implementation counterpart.</em></p>\n\n<p>So <code>input.nn.MSECriterion_updateOutput(self, input, target)</code> has for effect to call <code>static int nn_(MSECriterion_updateOutput)(lua_State *L)</code> as you can see on <a href=\"https://github.com/torch/nn/blob/3dd5d1d/generic/MSECriterion.c#L5\">generic/MSECriterion.c</a>.</p>\n\n<p>This function computes the mean squared error between the input tensors.</p>\n"
}
] |
30,420,807 | 1 |
<c><lua><torch>
|
2015-05-24T06:41:50.947
| 30,420,916 | 916,451 |
Strange C syntax in Lua library
|
<p>I see functions like <a href="https://github.com/torch/torch7/blob/master/lib/TH/generic/THTensor.c#L11" rel="nofollow">this</a> throughout the <a href="https://github.com/torch/torch7" rel="nofollow">torch library</a>'s C code:</p>
<pre></pre>
<p>Is this a preprocessor thing, or something Lua-specific? The idea, I think, has something to do with the fact that it is a method of sorts on the "class", but I've never seen this kind of syntax.</p>
|
[
{
"AnswerId": "30420916",
"CreationDate": "2015-05-24T07:01:23.807",
"ParentId": null,
"OwnerUserId": "1322972",
"Title": null,
"Body": "<p>It is a preprocessor macro</p>\n\n<pre><code>lib/TH/THTensor.h:\n#define THTensor_(NAME) TH_CONCAT_4(TH,Real,Tensor_,NAME)\n</code></pre>\n\n<p>which leads to...</p>\n\n<pre><code>lib/TH/THGeneral.h.in:\n#define TH_CONCAT_4(x,y,z,w) TH_CONCAT_4_EXPAND(x,y,z,w)\n</code></pre>\n\n<p>and finally...</p>\n\n<pre><code>lib/TH/THGeneral.h.in:\n#define TH_CONCAT_4_EXPAND(x,y,z,w) x ## y ## z ## w\n</code></pre>\n\n<p>Therefore, </p>\n\n<pre><code>long THTensor_(storageOffset)(const THTensor *self)\n</code></pre>\n\n<p>ultimately becomes this:</p>\n\n<pre><code>long THRealTensor_storageOffset(const THTensor *self)\n</code></pre>\n\n<p>Aren't preprocessors just <em>grand</em> ?</p>\n"
}
] |
30,426,216 | 1 |
<python><theano><pymc3>
|
2015-05-24T17:21:23.403
| null | 2,213,825 |
Using Theano shared variables with PyMC3
|
<p>My observable data is very big and its likelihood function is quite complex so I load all the observable data in the GPU and then use a theano function to get their likelihood depending on the parameters that I'm trying to estimate.</p>
<pre></pre>
<p>I don't know if this is the right approach or not. I am getting the error:</p>
<pre></pre>
<hr>
<p>I tried to make a simple example based on the one given by John Salvatier.
P.S. This model doesn't make any sense; I am just trying to figure out how Theano works with pymc3:</p>
<pre></pre>
<p>And I get this error:</p>
<pre></pre>
|
[
{
"AnswerId": "30426896",
"CreationDate": "2015-05-24T18:30:21.710",
"ParentId": null,
"OwnerUserId": "359944",
"Title": null,
"Body": "<p>The error you're getting is because you're compiling the logp function into get_logp and then passing it to DensityDist which actually expects a function that returns a theano variable, not a compiled theano function.</p>\n\n<p>I think you want something like: </p>\n\n<pre><code>with mc.Model() as model:\n data_values = load_data()\n X = theano.shared(np.asarray(data_values,\n dtype=theano.config.floatX),\n borrow=borrow)\n\n sigma_r = mc.Uniform('sigma_r',0,1,testval=0.5,dtype='float32')\n sigma_u = mc.Uniform('sigma_u',0,1,testval=0.5,dtype='float32')\n B = mc.Uniform('B',0,1,testval=np.array([0.5,0.5,.5,.5],dtype=np.float32),shape=4,dtype='float32')\n\n def logp(x):\n #need to pass B sigma r/u to the function\n P = complexTheanoFunction(x, B, sigma_r, sigma_u) \n return T.sum(T.log(P))\n\n obs = mc.DensityDist('observations',logp, observed=X) \n start = mc.find_MAP()\n step = mc.NUTS(state=start)\n</code></pre>\n"
}
] |
30,426,734 | 1 |
<linux><caffe><fpic><gflags>
|
2015-05-24T18:12:51.783
| null | 1,596,734 |
caffe recompile libgflags.a with -fPIC error
|
<p>I'm getting an error when I try to install Caffe on Linux Ubuntu 64.
The error is as follows:</p>
<blockquote>
<p>/usr/bin/ld: /usr/local/lib/libgflags.a(gflags.cc.o): relocation R_X86_64_32S against `.rodata' can not be used when making a shared object; recompile with -fPIC<br>
/usr/local/lib/libgflags.a: error adding symbols: Bad value</p>
</blockquote>
<p>I tried recompiling the gflags library with <code>-fPIC</code>, but the error changed to the following: </p>
<blockquote>
<p>src/caffe/common.cpp: In function ‘void caffe::GlobalInit(int*, char***)’:<br>
src/caffe/common.cpp:35:5: error: ‘::gflags’ has not been declared<br>
::gflags::ParseCommandLineFlags(pargc, pargv, true);</p>
</blockquote>
<p>I also tried changing Caffe's CMakeCache.txt to set the flag, but that did not work either.</p>
|
[
{
"AnswerId": "31129223",
"CreationDate": "2015-06-30T03:53:33.607",
"ParentId": null,
"OwnerUserId": "3285719",
"Title": null,
"Body": "<p>This error arrises because gflags 2.1 changed the name of the namespace from <code>google</code> to <code>gflags</code>. There are attempts by members of the caffe community to fix this error albeit they are not finalized. You should reassign the namespace from google to gflags as follows.</p>\n\n<p>In files</p>\n\n<ul>\n<li>caffe/include/caffe/common.hpp</li>\n<li>caffe/examples/mnist/convert_mnist_data.cpp</li>\n</ul>\n\n<p>Comment out the <code>ifndef</code></p>\n\n<pre><code>// #ifndef GFLAGS_GFLAGS_H_\nnamespace gflags = google;\n// #endif // GFLAGS_GFLAGS_H_\n</code></pre>\n\n<p>This should work temporarily. You should fork and occasionally sync your caffe repo with the BVLC/caffe repo on github so that you get the latest updates of the code.</p>\n"
}
] |
30,438,900 | 1 |
<python><slice><pca><theano>
|
2015-05-25T12:56:24.413
| null | 1,544,186 |
Slicing a matrix in the givens argument of a theano function
|
<p>I have the following piece of code, in which I attempt to apply PCA to the MNIST dataset.</p>
<pre></pre>
<p>As can be seen in the code above, I try to substitute the input with a slice of the training dataset in the <code>givens</code> params. The above code, however, results in the following error:</p>
<pre></pre>
<p>This implies that I am trying to assign a vector to a matrix, which isn't the behaviour I would expect (I double-checked using numpy). I also tried a different approach, whereby I index the training dataset X_train with the array of booleans directly instead of using indices and performing the slices myself, but that also didn't work.</p>
<pre></pre>
<p>Which gives the following error:</p>
<pre></pre>
<p>The only approach that did work was disregarding the <code>givens</code> param and using only the inputs and outputs, as such:</p>
<pre></pre>
<p>Nonetheless, I am curious as to why my first two approaches do not work, and I would appreciate it if anyone could point me to where I might be going wrong, since I am just starting with Theano.</p>
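<p>For reference, here is a minimal sketch (the variable names are mine, not from the code above) where the <code>givens</code> substitution type-checks, because a range slice of a matrix is still a matrix:</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

X = T.matrix('X')
data = theano.shared(np.random.randn(100, 10).astype(theano.config.floatX))
index = T.lscalar('index')

# data[index] would be a vector and fail; data[index:index + 20] stays 2-D
f = theano.function([index], X.sum(), givens={X: data[index:index + 20]})
print(f(0))
</code></pre>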
|
[
{
"AnswerId": "30441790",
"CreationDate": "2015-05-25T15:43:31.683",
"ParentId": null,
"OwnerUserId": "2929337",
"Title": null,
"Body": "<p>The <a href=\"http://www.deeplearning.net/software/theano/library/compile/function.html#module-function\" rel=\"nofollow\">theano documentation</a> states:</p>\n\n<blockquote>\n <p><code>givens</code> (iterable over pairs <code>(Var1, Var2)</code> of Variables. List, tuple or\n dict. The <code>Var1</code> and <code>Var2</code> in each pair must have the same Type.) –\n specific substitutions to make in the computation graph (<code>Var2</code> replaces\n <code>Var1</code>).</p>\n</blockquote>\n\n<p>And in the <a href=\"http://www.deeplearning.net/software/theano/tutorial/examples.html#basictutexamples\" rel=\"nofollow\">tutorial examples</a>, there is the statement (emphasis mine)</p>\n\n<blockquote>\n <p>In practice, a good way of thinking about the <code>givens</code> is as a mechanism\n that allows you to replace any part of your formula with a different\n expression that evaluates <strong>to a tensor of same shape and dtype</strong>.</p>\n</blockquote>\n\n<p>So, you cannot replace a matrix by a vector by means of the <code>givens</code> parameter since they don't have the same shape.</p>\n"
}
] |
30,470,517 | 1 |
<python><multithreading><machine-learning><theano>
|
2015-05-26T23:10:49.067
| 30,500,400 | 1,821,593 |
Theano/Pylearn2. How to parallelize training?
|
<p>I have a Convolutional Neural Network model described in YAML. When I run pylearn2's training script, I see that only one of four cores is used.</p>
<p><em>Is there a way to run training multi-threaded?</em></p>
<p>Yeah, maybe it's rather a Theano question. I followed this <a href="http://deeplearning.net/software/theano/tutorial/multi_cores.html" rel="nofollow">http://deeplearning.net/software/theano/tutorial/multi_cores.html</a> Theano tutorial about multi-core support, and it doesn't work for me - I see only one thread running. And a further question:
<em>can training be parallelized with ?</em> Because I can't check it, since it doesn't do the thing. <em>Should I worry about my BLAS then?</em></p>
<p>I have BLAS with LAPACK connected, Python 2.7.9, and my system is Ubuntu 15.04 on a Core i5 4300U. </p>
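<p>For what it's worth, here is how I inspect the relevant settings from Python (these should be standard Theano config flags, as far as I know):</p>
<pre><code>import theano

print(theano.config.openmp)        # False means elemwise ops stay single-threaded
print(theano.config.blas.ldflags)  # shows which BLAS Theano links against
</code></pre>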
<p>Thank you, warm wishes!</p>
|
[
{
"AnswerId": "30500400",
"CreationDate": "2015-05-28T07:51:49.753",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>The most direct answer to your question is: you can't parallelize training in the way you desire.</p>\n\n<p>BLAS, OpenMP, and/or running on a GPU only allow certain operations to be parallelized. The training itself can only be parallelized, in the way you want, if the training algorithm is designed to be parallelized. By default PyLearn2 uses the ordinary stochastic gradient descent (SGD) training algorithm which is not parallelizable. There are version of SGD that support parallelization (e.g. <a href=\"http://papers.nips.cc/paper/4687-large-scale-distributed-deep-networks.pdf\" rel=\"nofollow noreferrer\">Google's DistBelief</a>) but these are not available in PyLearn2 off-the-shelf. This is mostly because PyLearn2 is built on top of Theano and Theano is very much designed for shared memory operations.</p>\n\n<p>If you have a GPU then you'll almost certainly get faster training by switching to the GPU. If that isn't an option you should see more than one core being used some of time as long as your BLAS and OpenMP are set up correctly. The fact that <code>check_blas.py</code> doesn't show any improvement when <code>OMP_NUM_THREADS > 2</code> suggests you don't have them set up correctly. I suggest opening a new question if you need help with this, providing more information about what you've done, and the settings shown by numpy when you print its config (see <a href=\"https://stackoverflow.com/questions/9000164/how-to-check-blas-lapack-linkage-in-numpy-scipy\">here</a> for example).</p>\n"
}
] |
30,473,429 | 1 |
<theano>
|
2015-05-27T04:50:16.733
| 30,476,856 | 2,097,312 |
Bad input argument to theano function
|
<p>I am new to theano. I am trying to implement simple linear regression, but my program throws the following error:</p>
<blockquote>
<p>TypeError: ('Bad input argument to theano function with name "/home/akhan/Theano-Project/uog/theano_application/linear_regression.py:36" at index 0(0-based)', 'Expected an array-like object, but found a Variable: maybe you are trying to call a function on a (possibly shared) variable instead of a numeric array?')</p>
</blockquote>
<p>Here is my code:</p>
<pre></pre>
<p>What is the explanation behind this error? (I didn't get it from the error message.) Thanks in advance.</p>
|
[
{
"AnswerId": "30476856",
"CreationDate": "2015-05-27T08:19:14.680",
"ParentId": null,
"OwnerUserId": "2929337",
"Title": null,
"Body": "<p>There's an important distinction in Theano between defining a computation graph and a function which uses such a graph to compute a result.</p>\n\n<p>When you define</p>\n\n<pre><code>out = T.dot(X, W)\npredict = theano.function(inputs=[X], outputs=out)\n</code></pre>\n\n<p>you first set up a computation graph for <code>out</code> in terms of <code>X</code> and <code>W</code>. Note that <code>X</code> is a purely symbolic variable, it doesn't have any value, but the definition for <code>out</code> tells Theano, \"given a value for <code>X</code>, this is how to compute <code>out</code>\".</p>\n\n<p>On the other hand, <code>predict</code> is a <code>theano.function</code> which takes the computation graph for <code>out</code> and actual numeric values for <code>X</code> to produce a numeric output. What you pass into a <code>theano.function</code> when you call it always has to have an actual numeric value. So it simply makes no sense to do</p>\n\n<pre><code>y = predict(X)\n</code></pre>\n\n<p>because <code>X</code> is a symbolic variable and doesn't have an actual value.</p>\n\n<p>The reason you want to do this is so that you can use <code>y</code> to further build your computation graph. But there is no need to use <code>predict</code> for this: the computation graph for <code>predict</code> is already available in the variable <code>out</code> defined earlier. So you can simply remove the line defining <code>y</code> altogether and then define your cost as</p>\n\n<pre><code>cost = T.mean(T.sqr(out - Y))\n</code></pre>\n\n<p>The rest of the code will then work unmodified.</p>\n"
}
] |
30,475,415 | 3 |
<c++><makefile><deep-learning><caffe><gflags>
|
2015-05-27T07:06:46.597
| null | 2,163,392 |
Caffe Compilation Error: gflags.cc' is being linked both statically and dynamically into this executable
|
<p>I am trying to install caffe following this <a href="http://caffe.berkeleyvision.org/installation.html#compilation" rel="nofollow noreferrer">tutorial</a></p>
<p>Basically I have the following error when I type the last make command:</p>
<pre></pre>
<p>I don't understand how to solve this error. Did anybody encounter this error before? How can I solve it?</p>
|
[
{
"AnswerId": "31229250",
"CreationDate": "2015-07-05T10:10:59.927",
"ParentId": null,
"OwnerUserId": "2452553",
"Title": null,
"Body": "<p>Whether or not you've already solved this somewhere else, I'm posting the answer here in-case others run into the same problem.</p>\n\n<p>Primarily, this problem seems to have come about because <strong>we don't always read things properly</strong> and blindly follow all instructions thinking they all apply to our case. <em>hint: <strong>they don't.</em></strong></p>\n\n<p>In the installation instructions for Caffe (presuming Ubuntu instructions), there is a section which states:</p>\n\n<blockquote>\n <p><strong>Everything is packaged in 14.04.</strong></p>\n\n<pre><code>sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler\n</code></pre>\n</blockquote>\n\n<p>Blindly ignoring the next title, which states clearly:</p>\n\n<blockquote>\n <p><strong>Remaining dependencies, 12.04</strong></p>\n</blockquote>\n\n<p>we go on to install these dependencies, building and installing as required, resulting in the unfortunate side-effect of having 2 versions of libgflags, one dynamic (in <code>/usr/lib[/x86_x64]</code> and one static in <code>/usr/local/lib</code></p>\n\n<p><strong>Resolution</strong></p>\n\n<ol>\n<li>Promise ourselves failthfully we'll read instructions properly next time around.</li>\n<li><p>Uninstall libgflags</p>\n\n<pre><code>sudo apt-get remove -y libgflags\n</code></pre></li>\n<li><p>Delete <code>make install</code> versions</p>\n\n<pre><code>sudo rm -f /usr/local/lib/libgflags.a /usr/local/lib/libgflags_nothreads.a\nsudo rm -rf /usr/local/include/gflags\n</code></pre></li>\n<li><p>Clean Caffe build</p>\n\n<pre><code>cd <path>/<to>/caffe\nmake clean\n</code></pre></li>\n<li><p>Re-install libgflags package</p>\n\n<pre><code>sudo apt-get install -y libgflags-dev\n</code></pre></li>\n<li><p>Rebuild Caffe</p>\n\n<pre><code>make all\nmake test\nmake runtest\n</code></pre></li>\n</ol>\n\n<p>Et Voila. All tests should now run and you're ready to rock the deep-learning boat.</p>\n"
},
{
"AnswerId": "56551179",
"CreationDate": "2019-06-11T20:20:36.067",
"ParentId": null,
"OwnerUserId": "7083698",
"Title": null,
"Body": "<p>I've worked out a way to debug this issue analytically. In my case, I was cross-compiling for an older ABI, so apt-get wasn't an option and I was compiling all dependencies manually.</p>\n\n<p>First let's take a look at what this issue actually is. In the Google GFlags library, flags are declared through global objects. When the global object's constructor is run, it calls into the GFlags library to register that command line flag. If the global constructor gets run multiple times (due to multiple versions of the library containing it being loaded into memory), then the GFlags register method dies with an error.</p>\n\n<p>What does GLog have to do with this? Well, GLog uses GFlags, and it has globally declared flag objects. Even if GFlags is linked correctly, if the GLog library gets loaded multiple times, you get an error pointing to logging.cc in GLog.</p>\n\n<p>Sounds like quite a mess, huh. Even if GLog and GFlags are linked as shared in most cases, if another library links to a static version or some other version, kaboom. </p>\n\n<p>Luckily, we can debug this issue using GDB and other tools, if you're willing to delve through some tricky symbol analysis.\nFirst, you'll want to run GDB on the Python interpreter when it tries to import caffe: </p>\n\n<pre><code>gdb --args python -c 'import caffe'\n</code></pre>\n\n<p>Now, run the program once through so that GDB can pick up all the libraries it imports:</p>\n\n<pre><code>(gdb) r\n</code></pre>\n\n<p>Now, we can set a breakpoint on the place in the function (<code>FlagRegistry::RegisterFlag()</code>) that prints the error message, and run it again. Note that this line number is from my version of GFlags (2.2.2), you may have to look at the source code of your GFlags version and get the line number.</p>\n\n<pre><code>(gdb) break gflags.c:728\n(gdb) r\n</code></pre>\n\n<p>Hopefully, GDB should then break on the first instance of the error (if not, check that gflags has been built with debugging symbols).\nLook at the backtrace:</p>\n\n<pre><code>(gdb) bt\n#0 google::(anonymous namespace)::FlagRegistry::RegisterFlag (this=0xa33b30, flag=0x1249d20) at dev/gflags-2.2.2/src/gflags.cc:728\n#1 0x00007ffff0f3247a in _GLOBAL__sub_I_logging.cc () from prefix/lib/libcaffe2.so\n#2 0x00007ffff7de76ca in call_init (l=<optimized out>, argc=argc@entry=3, argv=argv@entry=0x7fffffffdb08, env=env@entry=0x7fffffffdb28) at dl-init.c:72\n#3 0x00007ffff7de77db in call_init (env=0x7fffffffdb28, argv=0x7fffffffdb08, argc=3, l=<optimized out>) at dl-init.c:30\n#4 _dl_init (main_map=main_map@entry=0xd9c2a0, argc=3, argv=0x7fffffffdb08, env=0x7fffffffdb28) at dl-init.c:120\n#5 0x00007ffff7dec8f2 in dl_open_worker (a=a@entry=0x7fffffffcf70) at dl-open.c:575\n#6 0x00007ffff7de7574 in _dl_catch_error (objname=objname@entry=0x7fffffffcf60, errstring=errstring@entry=0x7fffffffcf68, mallocedp=mallocedp@entry=0x7fffffffcf5f, \n operate=operate@entry=0x7ffff7dec4e0 <dl_open_worker>, args=args@entry=0x7fffffffcf70) at dl-error.c:187\n#7 0x00007ffff7debdb9 in _dl_open (file=0x9aee70 \"prefix/lib/python2.7/site-packages/caffe2/python/caffe2_pybind11_state.so\", mode=-2147483646, \n caller_dlopen=0x51bb39 <_PyImport_GetDynLoadFunc+233>, nsid=-2, argc=<optimized out>, argv=<optimized out>, env=0x7fffffffdb28) at dl-open.c:660\n#8 0x00007ffff75ecf09 in dlopen_doit (a=a@entry=0x7fffffffd1a0) at dlopen.c:66\n#9 0x00007ffff7de7574 in _dl_catch_error (objname=0xabf9f0, errstring=0xabf9f8, mallocedp=0xabf9e8, operate=0x7ffff75eceb0 <dlopen_doit>, args=0x7fffffffd1a0) at 
dl-error.c:187\n#10 0x00007ffff75ed571 in _dlerror_run (operate=operate@entry=0x7ffff75eceb0 <dlopen_doit>, args=args@entry=0x7fffffffd1a0) at dlerror.c:163\n#11 0x00007ffff75ecfa1 in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87\n#12 0x000000000051bb39 in _PyImport_GetDynLoadFunc ()\n<snip>\n</code></pre>\n\n<p>Well that's a lot to deal with, but let's focus on the line that's actually important:</p>\n\n<pre><code>#1 0x00007ffff0f3247a in _GLOBAL__sub_I_logging.cc () from prefix/lib/libcaffe2.so\n</code></pre>\n\n<p>This is the call to the constructor for the global variables in logging.cc (which is part of GLog). As you can see, this call is in libcaffe2.so, meaning that GLog has been statically linked to libcaffe2.so [I was using caffe2, but this procedure should be the same for both].</p>\n\n<p>You can then set a breakpoint on <code>google::(anonymous namespace)::FlagRegistry::RegisterFlag</code> and rerun the program from the start. Look at each call to RegisterFlag(), and figure out where this particular flag was registered the first time. If the library providing the flag is a shared library, then it should only ever get registered from that .so file, and nowhere else.</p>\n\n<p>To confirm the diagnosis, you can use </p>\n\n<pre><code>nm <library> | grep _GLOBAL__sub_I_logging.cc\n</code></pre>\n\n<p>to check for that init function in a library file. Once you've found your culprit, you'll need to rebuild it so that it doesn't link to GFlags/GLog statically.</p>\n"
},
{
"AnswerId": "33550606",
"CreationDate": "2015-11-05T17:06:12.433",
"ParentId": null,
"OwnerUserId": "5530303",
"Title": null,
"Body": "<p>I also had two libraries installed, a shared .so library and a static .a library. I removed them all as well as the /usr/local/include/glog folder.\nThe .so file I had brought over when I (cross) compiled the system, while the .a was from a native and up-to-date build.\nUltimately it came down to building glog (natively) in such a way that it provided the .so files.\nI started with a clean download:</p>\n\n<p><code>git clone git://github.com/google/glog</code></p>\n\n<p>Then I edited CMakeLists.txt.\nWhere it says:</p>\n\n<pre><code>add_library (glog\n ${GLOG_SRCS}\n)\n</code></pre>\n\n<p>I changed it to:</p>\n\n<pre><code>add_library (glog SHARED\n ${GLOG_SRCS}\n)\n</code></pre>\n\n<p>Next you should be able to follow the other instructions. For my particular case I had to use slightly different instructions, not saying you have to do this. For me it was:\nmkdir build\ncd build</p>\n\n<pre><code>export CXXFLAGS=\"-fPIC\"\ncmake ..\nmake\nsudo make install\n</code></pre>\n\n<p>This gave me the .so files and put them in the right place. Then I started over with caffe and it fixed the error for me.</p>\n"
}
] |
30,486,033 | 2 |
<c++><machine-learning><neural-network><deep-learning><caffe>
|
2015-05-27T14:52:41.100
| 30,497,907 | 4,841,248 |
Tackling Class Imbalance: scaling contribution to loss and sgd
|
<p><strong>(An update to this question has been added.)</strong></p>
<p>I am a graduate student at the university of Ghent, Belgium; my research is about emotion recognition with deep convolutional neural networks. I'm using the <a href="http://caffe.berkeleyvision.org/" rel="noreferrer">Caffe</a> framework to implement the CNNs.</p>
<p>Recently I've run into a problem concerning class imbalance. I'm using 9216 training samples, approx. 5% are labeled positively (1), the remaining samples are labeled negatively (0).</p>
<p>I'm using the <a href="http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1SigmoidCrossEntropyLossLayer.html" rel="noreferrer">SigmoidCrossEntropyLoss</a> layer to calculate the loss. When training, the loss decreases and the accuracy is extremely high after even a few epochs. This is due to the imbalance: the network simply always predicts negative (0). <em>(Precision and recall are both zero, backing this claim)</em></p>
<p>To solve this problem, I would like to <strong>scale the contribution to the loss depending on the prediction-truth combination</strong> (punish false negatives severely). My mentor/coach has also advised me to <strong>use a scale factor when backpropagating</strong> through stochastic gradient descent (sgd): the factor would be correlated to the imbalance in the batch. A batch containing only negative samples would not update the weights at all.</p>
<p><em>I have only added one custom-made layer to Caffe: to report other metrics such as precision and recall. My experience with Caffe code is limited but I have a lot of expertise writing C++ code.</em></p>
<hr>
<p><strong>Could anyone help me or point me in the right direction on how to adjust the <a href="http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1SigmoidCrossEntropyLossLayer.html" rel="noreferrer">SigmoidCrossEntropyLoss</a> and <a href="http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1SigmoidLayer.html" rel="noreferrer">Sigmoid</a> layers to accomodate the following changes:</strong></p>
<ol>
<li>adjust the contribution of a sample to the total loss depending on the prediction-truth combination (true positive, false positive, true negative, false negative).</li>
<li>scale the weight update performed by stochastic gradient descent depending on the imbalance in the batch (negatives vs. positives).</li>
</ol>
<p>Thanks in advance!</p>
<hr>
<h2>Update</h2>
<p>I have incorporated the <strong><a href="http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1InfogainLossLayer.html#details" rel="noreferrer">InfogainLossLayer</a> as suggested by <a href="https://stackoverflow.com/a/30497907/1714410">Shai</a></strong>. I've also added another custom layer that builds the infogain matrix based on the imbalance in the current batch.</p>
<p>Currently, the matrix is configured as follows:</p>
<pre></pre>
<p><em>I'm planning on experimenting with different configurations for the matrix in the future.</em></p>
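<p>As an illustration only (this is not the exact matrix from the elided configuration above), building a per-batch weighting matrix could be sketched like this:</p>
<pre><code>import numpy as np

def infogain_matrix(labels):
    # sketch: weight each class inversely to its frequency in the batch;
    # labels is a 1-D array of 0/1 ground-truth values
    freq = np.array([(labels == 0).mean(), (labels == 1).mean()])
    H = np.diag(1.0 / np.maximum(freq, 1e-8))
    return (H / H.sum()).astype('float32')

print(infogain_matrix(np.random.binomial(1, 0.05, size=64)))
</code></pre>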
<p>I have tested this on a 10:1 imbalance. The results have shown that the network is learning useful things now: <em>(results after 30 epochs)</em></p>
<ul>
<li>Accuracy is ~70% (down from ~97%);</li>
<li>Precision is ~20% (up from 0%);</li>
<li>Recall is ~60% (up from 0%).</li>
</ul>
<p>These numbers were reached at around 20 epochs and didn't change significantly after that.</p>
<p><em>!! The results stated above are merely a proof of concept, they were obtained by training a simple network on a 10:1 imbalanced dataset. !!</em></p>
|
[
{
"AnswerId": "30497907",
"CreationDate": "2015-05-28T05:28:51.720",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>Why don't you use the <a href=\"http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1InfogainLossLayer.html\" rel=\"nofollow noreferrer\"><strong>InfogainLoss</strong></a> layer to compensate for the imbalance in your training set?</p>\n\n<p>The Infogain loss is defined using a weight matrix <code>H</code> (in your case 2-by-2) The meaning of its entries are</p>\n\n<pre><code>[cost of predicting 1 when gt is 0, cost of predicting 0 when gt is 0\n cost of predicting 1 when gt is 1, cost of predicting 0 when gt is 1]\n</code></pre>\n\n<p>So, you can set the entries of <code>H</code> to reflect the difference between errors in predicting 0 or 1.</p>\n\n<p>You can find how to define matrix <code>H</code> for caffe in <a href=\"https://stackoverflow.com/questions/27632440/infogain-loss-layer\">this thread</a>.</p>\n\n<p>Regarding sample weights, you may find <a href=\"http://deepdish.io/2014/11/04/caffe-with-weighted-samples/\" rel=\"nofollow noreferrer\">this post</a> interesting: it shows how to modify the <strong>SoftmaxWithLoss</strong> layer to take into account sample weights.</p>\n\n<hr>\n\n<p>Recently, a modification to cross-entropy loss was proposed by <a href=\"https://arxiv.org/abs/1708.02002\" rel=\"nofollow noreferrer\"><em>Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, Piotr Dollár</em> <strong>Focal Loss for Dense Object Detection</strong>, (ICCV 2017)</a>.<br>\nThe idea behind focal-loss is to assign different weight for each example based on the relative difficulty of predicting this example (rather based on class size etc.). From the brief time I got to experiment with this loss, it feels superior to <code>\"InfogainLoss\"</code> with class-size weights. </p>\n"
},
{
"AnswerId": "47626265",
"CreationDate": "2017-12-04T03:55:01.993",
"ParentId": null,
"OwnerUserId": "6064933",
"Title": null,
"Body": "<p>I have also come across this class imbalance problem in my classification task. Right now I am using CrossEntropyLoss with weight (documentation <a href=\"http://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss\" rel=\"nofollow noreferrer\">here</a>) and it works fine. The idea is to give more loss to samples in classes with smaller number of images.</p>\n\n<h3>Calculating the weight</h3>\n\n<p>weight for each class in inversely proportional to the image number in this class. Here is a snippet to calculate weight for all class using numpy,</p>\n\n<pre><code>cls_num = []\n# train_labels is a list of class labels for all training samples\n# the labels are in range [0, n-1] (n classes in total)\ntrain_labels = np.asarray(train_labels)\nnum_cls = np.unique(train_labels).size\n\nfor i in range(num_cls):\n cls_num.append(len(np.where(train_labels==i)[0]))\n\ncls_num = np.array(cls_num)\n\ncls_num = cls_num.max()/cls_num\nx = 1.0/np.sum(cls_num)\n\n# the weight is an array which contains weight to use in CrossEntropyLoss\n# for each class.\nweight = x*cls_num\n</code></pre>\n"
}
] |
30,496,978 | 1 |
<theano><pymc3>
|
2015-05-28T03:56:25.587
| null | 4,430,895 |
Is there a workaround for not fusing the observed data into model definition in Pymc3?
|
<p>Problem definition: consider the "Simpletest" model (from the pymc3 examples), which is something similar to the following one:</p>
<pre></pre>
<p>I'd like to change it so that I'll have a fixed model structure but run the sampling for several iterations, each time adding a new data point to the previous (observed) dataset. Since the observed data is somehow embedded inside the model definition, the only way I know to do this is to put the whole model definition inside a loop:</p>
<pre></pre>
<p>This may produce some unnecessary overhead, especially if the model is large. To refrain from the overhead of repeatedly defining the same model, I wonder if there is a solution such that the same results could be achieved with something similar to the following idea:</p>
<pre></pre>
<p>or even better:</p>
<pre></pre>
|
[
{
"AnswerId": "30585998",
"CreationDate": "2015-06-02T01:42:27.943",
"ParentId": null,
"OwnerUserId": "4430895",
"Title": null,
"Body": "<p>It seems that it's not possible right know. But there is an open issue on this topic with possible solutions here : <a href=\"https://github.com/pymc-devs/pymc/issues/10\" rel=\"nofollow\">https://github.com/pymc-devs/pymc/issues/10</a> </p>\n"
}
] |
30,500,332 | 1 |
<python><theano><hessian-matrix>
|
2015-05-28T07:48:03.257
| null | 3,282,028 |
How to use theano.gradient.hessian? Example needed
|
<p>I tried the code below:</p>
<pre></pre>
<p>Then I ran it with the following real values</p>
<pre></pre>
<p>Then I encountered the following error</p>
<pre></pre>
<p>I am new to python and am exploring Theano for building neural networks.</p>
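<p>For reference, here is a minimal complete example (using my own toy cost, not the code above) of calling <code>theano.gradient.hessian</code> with the arguments passed separately:</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

x = T.dvector('x')
cost = T.sum(x ** 2)               # hessian requires a scalar cost
H = theano.gradient.hessian(cost, wrt=x)
h = theano.function([x], H)

print(h(np.array([1.0, 2.0])))     # [[2. 0.]
                                   #  [0. 2.]]
</code></pre>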
|
[
{
"AnswerId": "30500423",
"CreationDate": "2015-05-28T07:53:02.103",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p><code>h</code> is a function that accepts two parameters. You are giving it a single parameter which is a list containing two elements.</p>\n\n<p>Try changing <code>h([x,y])</code> to <code>h(x,y)</code>.</p>\n"
}
] |
30,500,977 | 2 |
<caffe><ubuntu-15.04>
|
2015-05-28T08:21:32.407
| null | 601,314 |
Caffe fails to build on ubuntu 15.04
|
<p>Having followed the <a href="http://caffe.berkeleyvision.org/installation.html#compilation" rel="nofollow noreferrer">Caffe build instructions</a>, I get the following error</p>
<pre></pre>
<p>I checked the install of the package with apt-get:</p>
<pre></pre>
<p>cuda7 is installed, opencv 3 ...</p>
|
[
{
"AnswerId": "30501122",
"CreationDate": "2015-05-28T08:28:24.237",
"ParentId": null,
"OwnerUserId": "2929337",
"Title": null,
"Body": "<p>Maybe try installing the entire <a href=\"https://launchpad.net/ubuntu/+source/hdf5\" rel=\"nofollow noreferrer\">hdf5</a> package, not only the dev portion.</p>\n\n<p>If that doesn't work, verify that you have the hdf5.h header on your system and check its path.</p>\n\n<p>You can check gcc's include path with the command <a href=\"https://stackoverflow.com/a/6666338/2929337\">[source]</a></p>\n\n<pre><code>gcc -xc -E -v -\n</code></pre>\n"
},
{
"AnswerId": "33025755",
"CreationDate": "2015-10-08T20:53:11.157",
"ParentId": null,
"OwnerUserId": "1079075",
"Title": null,
"Body": "<p>The steps required to make it build on Ubuntu 15.04 and Debian 8.x can be found in this <a href=\"https://github.com/BVLC/caffe/issues/2347\" rel=\"nofollow\">GitHub issue</a>.</p>\n\n<p>To summarise:</p>\n\n<pre><code>#!/bin/bash\n# manipulate header path, before building caffe on debian jessie\n# usage:\n# 1. cd root of caffe\n# 2. bash <this_script>\n# 3. build\n\n# transformations :\n# #include \"hdf5/serial/hdf5.h\" -> #include \"hdf5/serial/hdf5.h\"\n# #include \"hdf5_hl.h\" -> #include \"hdf5/serial/hdf5_hl.h\"\n\nfind . -type f -exec sed -i -e 's^\"hdf5.h\"^\"hdf5/serial/hdf5.h\"^g' -e 's^\"hdf5_hl.h\"^\"hdf5/serial/hdf5_hl.h\"^g' '{}' \\;\n</code></pre>\n\n<p>Followed by</p>\n\n<p>Modify <code>INCLUDE_DIRS</code> in <code>Makefile.config</code></p>\n\n<pre><code>INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/\n</code></pre>\n\n<p>And finally, make some simlinks to HD5</p>\n\n<pre><code>cd /usr/lib/x86_64-linux-gnu\nsudo ln -s libhdf5_serial.so.8.0.2 libhdf5.so\nsudo ln -s libhdf5_serial_hl.so.8.0.2 libhdf5_hl.so\n</code></pre>\n"
}
] |
30,501,043 | 2 |
<python><theano>
|
2015-05-28T08:25:12.967
| null | 2,213,825 |
How to apply a Gaussian Blur using theano
|
<p>I am not able to understand the <code>conv2d</code> function. I want to blur a set of images with a Gaussian kernel:</p>
<pre></pre>
<p>The above code doesn't work it gives the error:</p>
<pre></pre>
<p>Either way, since the Gaussian filter is separable, I would rather apply two linear 1-D filters, one along each axis of the images. But I don't understand how to pull this off, and the documentation of <a href="http://deeplearning.net/software/theano/library/tensor/signal/conv.html#theano.tensor.signal.conv.conv2d" rel="nofollow">conv2d</a> is not helping me much.</p>
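<p>For concreteness, this is the kind of 1-D kernel I would build for the two separable passes (my own helper, not part of the failing code above):</p>
<pre><code>import numpy as np

def gaussian_kernel_1d(size=5, sigma=1.0):
    # symmetric 1-D Gaussian, normalised to sum to 1
    x = np.arange(size) - size // 2
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return (k / k.sum()).astype('float32')

print(gaussian_kernel_1d())
</code></pre>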
|
[
{
"AnswerId": "30501266",
"CreationDate": "2015-05-28T08:35:30.547",
"ParentId": null,
"OwnerUserId": "2100935",
"Title": null,
"Body": "<p>Your filter is bigger than your input image. conv2d cannot handle this situation when <code>border_mode='valid'</code>, which is the default border mode.</p>\n\n<p>Why? Because the valid region of your image has the size 1 x 1 x 37 x -2. Negative dimensions make no sense.</p>\n\n<p>Solution: use <code>R2 = conv2d(M,G_kernel,border_mode='full')</code></p>\n\n<p>As for the second part of the question, you can use the seperated kernels using <code>R2 = conv2d(conv2d(M,G_x_kernel,border_mode='full'),G_y_kernel,border_mode='full')</code>\nHowever, I'm not sure it will increase speed with these small filters and running on a GPU.</p>\n"
},
{
"AnswerId": "30505604",
"CreationDate": "2015-05-28T11:41:53.870",
"ParentId": null,
"OwnerUserId": "2100935",
"Title": null,
"Body": "<p>The pixel dimensions need to be the last dimensions for convolutions in Theano (because those convolutions are usually used for neural nets, not images). In theano, the convention for these feature maps is (batch_size, image channels, x, y). So instead of</p>\n\n<pre><code>images = mpimg.imread('Lenna.png')\n</code></pre>\n\n<p>try</p>\n\n<pre><code>images = np.rollaxis(mpimg.imread('Lenna.png'), 2, 0)\n</code></pre>\n\n<p>To convert your image to 3x512x512 instead of 512x512x3.</p>\n"
}
] |
30,502,464 | 1 |
<numpy><theano>
|
2015-05-28T09:26:21.897
| 30,504,930 | 1,367,788 |
How is theano dot product broadcasted
|
<p>Could anyone explain how the Theano dot product broadcasts? It seems to be different from numpy.</p>
<pre></pre>
<p>Here are the experiments I tried with theano.</p>
<pre></pre>
|
[
{
"AnswerId": "30504930",
"CreationDate": "2015-05-28T11:11:36.810",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>Theano does broadcasting just like numpy.\nTo demonstrate, this code compares Theano and numpy directly:</p>\n\n<pre><code>import numpy\n\nimport theano\nimport theano.tensor as T\n\nTENSOR_TYPES = dict([(0, T.scalar), (1, T.vector), (2, T.matrix), (3, T.tensor3), (4, T.tensor4)])\n\nrand = numpy.random.rand\n\n\ndef theano_dot(x, y):\n sym_x = TENSOR_TYPES[x.ndim]('x')\n sym_y = TENSOR_TYPES[y.ndim]('y')\n return theano.function([sym_x, sym_y], theano.dot(sym_x, sym_y))(x, y)\n\n\ndef compare_dot(x, y):\n print theano_dot(x, y).shape, numpy.dot(x, y).shape\n\n\nprint compare_dot(rand(3, 5, 10), rand(2, 5, 10, 4))\nprint compare_dot(rand(3, 10), rand(2, 5, 10, 4))\nprint compare_dot(rand(3, 10), rand(10, 4))\nprint compare_dot(rand(3, 10), rand(2, 10, 4))\nprint compare_dot(rand(5, 10), rand(2, 10, 10))\n</code></pre>\n\n<p>The output is</p>\n\n<pre><code>(3L, 5L, 2L, 5L, 4L) (3L, 5L, 2L, 5L, 4L)\n(3L, 2L, 5L, 4L) (3L, 2L, 5L, 4L)\n(3L, 4L) (3L, 4L)\n(3L, 2L, 4L) (3L, 2L, 4L)\n(5L, 2L, 10L) (5L, 2L, 10L)\n</code></pre>\n\n<p>Theano and numpy produce results with the same shape in every case you describe.</p>\n"
}
] |
30,510,722 | 1 |
<machine-learning><neural-network><deep-learning><caffe>
|
2015-05-28T15:20:54.500
| 30,525,487 | 4,841,248 |
solver parameter 'test_iter' changes label values during test phase
|
<p>I'm using the <a href="http://caffe.berkeleyvision.org/" rel="nofollow"><strong>Caffe</strong></a> framework to construct and research convolutional neural networks.</p>
<p>I've discovered (what I believe to be) a bug by accident. <em>(I've already reported it on <a href="https://github.com/BVLC/caffe/issues/2524" rel="nofollow">Github</a>.)</em></p>
<p><strong>This is the issue:</strong> during the test phase, label values get changed depending on the value of the <code>test_iter</code> parameter (defined in the solver file).</p>
<hr>
<p>I'm using 10240 images to train and test a network. Each image has 38 labels, each label can have two (0 or 1) values. I'm using the HDF5 file format to get my image data and labels into Caffe; each file stores 1024 images and their respective labels. <em>(I've checked the HDF5 files, everything is correct there.)</em></p>
<p>I'm using 9216 (= 9 files) images for training and 1024 (= 1 file) for testing. My Nvidia 540M graphics card merely has 1GB of memory, which means I have to process in batch (usually 32 or 64 images per batch).</p>
<p>I'm using the following network to replicate the problem:</p>
<pre></pre>
<p>This network simply outputs all label values. I'm using the following solver for this network: <em>(Mostly copied from my real network.)</em></p>
<pre></pre>
<p>The following results were obtained by changing the <code>test_iter</code> and <code>batch_size</code> parameters. According to <a href="http://caffe.berkeleyvision.org/gathered/examples/mnist.html#define-the-mnist-network" rel="nofollow">this</a> tutorial, the <code>batch_size</code> of the test data and <code>test_iter</code> in the solver should balance out to make sure all test samples are used during testing. In my case, I'll make sure that <code>test_iter * batch_size = 1024</code>.</p>
<p><strong>These are my results when changing the values:</strong><br>
<code>test_iter: 1, batch_size: 1024</code>: Everything is okay.<br>
<code>test_iter: 2, batch_size: 512</code>: Labels that were '1' changed to '0.50'.<br>
<code>test_iter: 4, batch_size: 256</code>: Labels that were '1' changed to '0.50' or '0.25'.<br>
<code>test_iter: 8, batch_size: 128</code>: Labels that were '1' changed to '0.50' or '0.25' or '0.125'.<br>
The pattern continues. </p>
<hr>
<p><strong>What is going on that affects the values of the labels during testing?</strong> Am I simply interpreting the use of <code>test_iter</code> and <code>batch_size</code> wrong, or am I missing something else?</p>
|
[
{
"AnswerId": "30525487",
"CreationDate": "2015-05-29T09:03:45.257",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>The results shown in the output log are averages of the iterations, so if you have 2 iterations labels that are one are averaged to 0.5.</p>\n\n<p>So, if batch size is 1024, you have 1024 outputs displayed and everything is ok. When batch size is 512, you only have 512 outputs displayed each is an <em>average</em> of two labels the <code>i</code>-th and the <code>i+512</code>-th label, most chances the labels do not co-inside.</p>\n\n<p>To verify this, you can arrange your test data such that labels 1 are placed at even places, so when changing the batch_size the labels 1 still coincide and you should get exactly 1 for output. </p>\n"
}
] |
30,517,570 | 1 |
<for-loop><theano><pymc3>
|
2015-05-28T21:41:30.283
| 30,585,944 | 4,484,463 |
Improving performance of a Theano for loop
|
<p>I have the following code that accomplishes what I want to do. But I'm wondering if there's a better way of doing it by avoiding for loops. Performance is important here, since I call these operations many times. </p>
<p>I think it could be improved by using "scan" and "function" but I'm not experienced enough with Theano for it to be obvious to me. I did try putting everything inside a theano.function but it didn't work. </p>
<pre></pre>
<p>By the way, this is an implementation of the constrained Probabilistic Matrix Factorization (equation 7 in the paper by Salakhutdinov and Mnih). I'm doing it with pymc3 so "W" and "Y" are really stochastic pymc3 tensors (which I believe are just theano tensors). </p>
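<p>To make the rewrite I'm hoping for concrete, here is the loop-to-dot equivalence in plain numpy (toy shapes, my own names):</p>
<pre><code>import numpy as np

I = np.random.rand(4, 6)   # (n, m)
W = np.random.rand(6, 3)   # (m, dim)

# loop version, as in my code above
Ui = np.zeros(3)
for k in range(6):
    Ui += I[0, k] * W[k, :]

# vectorised version: a single dot product replaces the loop over k
assert np.allclose(Ui, I[0].dot(W))
</code></pre>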
<p>Thanks!</p>
|
[
{
"AnswerId": "30585944",
"CreationDate": "2015-06-02T01:35:11.793",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>You need to understand how to vectorise your code. For example:</p>\n\n<pre><code>Ui=np.zeros(dim)\nfor k in range(m):\n Ui+=t.dot(I[i,k],W[k,:])\n</code></pre>\n\n<p>can be implemented as:</p>\n\n<pre><code>Ui = I[None, i] * W\n</code></pre>\n\n<p>Learn on numpy broadcasting. This is a really powerful way of thinking and it do computation faster and with less memory. This work for NumPy and Theano code. <a href=\"http://deeplearning.net/software/theano/tutorial/numpy.html#broadcasting\" rel=\"nofollow\">http://deeplearning.net/software/theano/tutorial/numpy.html#broadcasting</a></p>\n\n<p>This can be done at other place I think to speed it up even more.</p>\n"
}
] |
30,520,523 | 1 |
<theano>
|
2015-05-29T03:16:04.780
| 30,529,673 | 211,450 |
Why do I get int64 when creating itensor3 in Theano?
|
<p>I'm quite new to Theano.
I'm trying to create a tensor of int32 using itensor3, but for some reason I get int64 instead of int32.
Do I need to specify anything in the config file?</p>
<pre></pre>
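<p>A minimal illustration of what I am seeing (freshly typed, not my original code):</p>
<pre><code>import theano.tensor as T

x = T.itensor3('x')
print(x.dtype)           # int32 -- the tensor itself is int32 as requested
print(x.shape[0].dtype)  # int64 -- shape components are int64
</code></pre>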
|
[
{
"AnswerId": "30529673",
"CreationDate": "2015-05-29T12:30:12.333",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>In Theano I believe shapes are always specified in <code>int64</code> values.</p>\n\n<p>The result of your Theano function, <code>f</code>, is a shape size, i.e. <code>l.shape[0]</code> so the type of the result returned by <code>f</code> is going to be <code>int64</code>. This does not change the fact that the input is of type <code>int32</code>.</p>\n"
}
] |
30,521,390 | 1 |
<python><neural-network><lstm><keras>
|
2015-05-29T04:53:54.437
| null | 4,206,428 |
Keras LSTM predicting only 1 category, in multi-category classification - how to fix?
|
<p>I have a text dataset that has an equal number of samples for each label. I ran the example (the imdb example) on their website with my dataset and the compile line changed to </p>
<pre></pre>
<p>But the model predicts only 1 category; that is, the accuracy stays at a constant value.</p>
<p>Could you please help me fix it / change settings as required?</p>
|
[
{
"AnswerId": "34077294",
"CreationDate": "2015-12-03T22:17:08.203",
"ParentId": null,
"OwnerUserId": "1879926",
"Title": null,
"Body": "<p>You'll need to modify the with</p>\n\n<pre><code>model.add(Dense(nb_classes))\n</code></pre>\n\n<p>where nb_classes corresponds to the number of categorical classes. </p>\n"
}
] |
30,521,680 | 1 |
<python><arrays><numpy><theano><mnist>
|
2015-05-29T05:22:24.430
| 30,524,636 | 3,748,690 |
Python Numpy Error: ValueError: setting an array element with a sequence
|
<p>I am trying to build a dataset similar to the mnist.pkl.gz provided in the theano logistic_sgd.py implementation. The following is my code snippet. </p>
<pre></pre>
<p>Error Message:
Traceback (most recent call last):</p>
<pre></pre>
<hr>
<p>The CSV file contains two fields: image name and classification label.
When I run this in the Python interpreter, it seems to be working for me, as follows; I don't get the error saying 'setting an array element with a sequence' here.</p>
<p>---------python interpreter output----------</p>
<pre></pre>
<p>Even though I am running the same set of instructions (logically), when I run sample.py I get 'ValueError: setting an array element with a sequence'. I am trying to understand this behavior; any help would be great.</p>
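<p>For what it's worth, here is a tiny standalone reproduction of the same error (toy row lengths, not my real image data):</p>
<pre><code>import numpy as np

rows = [np.zeros(4), np.zeros(5)]         # rows of different lengths
data = np.asarray(rows, dtype='float64')  # raises: ValueError: setting an
                                          # array element with a sequence
</code></pre>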
|
[
{
"AnswerId": "30524636",
"CreationDate": "2015-05-29T08:21:16.367",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>The problem is probably similar to that of <a href=\"https://stackoverflow.com/questions/4674473/valueerror-setting-an-array-element-with-a-sequence\">this question</a>.</p>\n\n<p>You're trying to create a matrix of pixel values with a row per image. But each image has a different size so the number of pixels in each row is different.</p>\n\n<p>You can't create a \"jagged\" float typed array in numpy -- every row must be of the same length.</p>\n\n<p>You'll need to pad each row to the length of the largest image.</p>\n"
}
] |
30,548,699 | 1 |
<python><numpy><k-means><theano>
|
2015-05-30T16:43:08.150
| null | 1,544,186 |
Theano function: Unused input
|
<p>I am trying to implement mini-batch Kmeans. The part that seems to be giving me a really hard time is specifying the minibatches as inputs to theano. I have a class with an function, where is in this case the minibatch, and is the size of the batch. I also have a function which takes no arguments, but instead uses the data passed to .</p>
<p>My main script consists of the following:</p>
<pre></pre>
<p>What I intended to do was to initialize a KmeansMiniBatch object with a symbolic variable , which gets replaced by a minibatch at each iteration. Each of the minibatches is generated by the  function, which basically takes as input the entire dataset and, using , returns only a subset of that dataset, which is a  array. Unfortunately, I cannot seem to accomplish what I set out to achieve, as the above code results in the following error message:</p>
<blockquote>
<p>theano.compile.function_module.UnusedInputError: theano.function was asked to create a function computing outputs given certain inputs, but the provided input variable at index 0 is not part of the computational graph needed to compute the outputs: X.
To make this error into a warning, you can pass the parameter on_unused_input='warn' to theano.function. To disable it completely, use on_unused_input='ignore'.</p>
</blockquote>
<p>I am not sure why exactly I get this error, since I do replace the symbolic variable  by the function input . Furthermore, if I do set on_unused_input='ignore' I end up with the following error message when evaluating :</p>
<blockquote>
<p>theano.gof.fg.MissingInputError: ("An input of the graph, used to compute Shape(X), was not provided and not given a value.Use the Theano flag exception_verbosity='high',for more information on this error.", X)</p>
</blockquote>
<p>Any help would be very much appreciated !</p>
<hr>
<p>Edit:</p>
<p>So I finally got it working! My  function used to update a matrix , which is an attribute of the class KmeanMiniBatch, but didn't return it, which apparently caused  to complain, since the input  was indeed not used in the output. What I did was modify  to return , and that basically solved the issue. Here is my modified </p>
<pre></pre>
<p>Apparently theano functions do not play well with void Python functions.</p>
<hr>
<p>Edit 2:</p>
<p>Just to shed some more light on the problem that I would like to solve. So, the version of  I am implementing is also known as , whereby a dictionary  basically provides a compression for the dataset  into . Initially, the part of  concerning  was as follows:</p>
<pre></pre>
<p>So basically, at every iteration the dictionary  would be updated, and it would therefore make no sense to return , which I had to do in order to stop  from complaining:</p>
<pre></pre>
<p>D is initialized as follows in :</p>
<pre></pre>
<p>What I would like to achieve is:
1. Not have to return , but instead update and evaluate  in place, which I can then retrieve through 
2. I am not sure about my choice of having  as a symbolic variable. Perhaps a shared variable would be a better choice?
3. Most importantly, at each of the 30 iterations, I would like to substitute a minibatch for the model's data, hence my use of the givens param. Is there a better way to achieve that?</p>
|
[
{
"AnswerId": "30568157",
"CreationDate": "2015-06-01T07:43:42.610",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>As far as you as a user of Theano are concerned, symbolic variables do not support a concept of \"current value\" or \"updating\". For that you would need a shared variable.</p>\n\n<p>You need to be clearer on how you would like to use your <code>KmeansMiniBatch</code> class. At the moment it does not encapsulate the D-updating behaviour since the Theano function is compiled and executed outside of <code>KmeansMiniBatch</code>. You might prefer a usage like this:</p>\n\n<pre><code>kmeans = KmeansMiniBatch()\n\ndata = load_data()\nfor i in xrange(30):\n kmeans.update(get_batch(data, batch_size=10000))\n\nimage = Image.fromarray(\ntile_raster_images(X=np.transpose(kmeans.get_D()),\n img_shape=(12, 12), tile_shape=(10, 30),\n tile_spacing=(1, 1)))\n</code></pre>\n\n<p>Note that there is no Theano functionality visible here, that's all encapsulated into the <code>KmeansMiniBatch</code> class. We also don't need to tell <code>KmeansMiniBatch</code> what the batch size is because that doesn't change the symbolic expression; instead we tell <code>get_batch</code> how large a batch to get.</p>\n\n<p>Inside <code>KmeansMiniBatch</code> you have two possible approaches.</p>\n\n<ol>\n<li><p>Make <code>D</code> a shared variable and use <code>updates=...</code> in your Theano function to change its contents on each <code>update</code>.</p>\n\n<pre><code>class KmeansMiniBatch:\n def __init__(dimensions, K):\n # ... init srng ...\n D = srng.normal(size=(dimensions, K))\n D = D / numpy.sqrt(numpy.sum(numpy.sqr(D), axis=0)))\n self.D = theano.shared(D, 'D')\n mini_batch = T.matrix('mini_batch', dtype='float64')\n self.func = theano.function(inputs=[mini_batch], updates=fit_once(mini_batch))\n\n def update(batch):\n self.func(batch)\n\n def fit_once(mini_batch):\n # ... do work to create S symbolically ...\n D_update = T.dot(mini_batch, T.transpose(S))\n D_update = D_update / T.sqrt(T.sum(T.sqr(D_update), axis=0))\n return [(self.D, D_update)]\n\n def get_D():\n return self.D.get_value()\n</code></pre>\n\n<p>Note that the init of D has changed from a Theano operation to a numpy operation.</p></li>\n<li><p>Make <code>D</code> a regular numpy array, pass it in as an input to your Theano function, and change the value to the output of your Theano function, on each <code>update</code>.</p>\n\n<pre><code>class KmeansMiniBatch:\n def __init__(dimensions, K):\n # ... init srng ...\n self.D = srng.normal(size=(dimensions, K))\n self.D = self.D / numpy.sqrt(numpy.sum(numpy.sqr(self.D), axis=0)))\n mini_batch = T.matrix('mini_batch', dtype='float64')\n self.func = theano.function(inputs=[mini_batch], outputs=fit_once())\n\n def update(batch):\n self.D = self.func(batch)\n\n def fit_once(mini_batch):\n # ... do work to create S symbolically ...\n D_update = T.dot(mini_batch, T.transpose(S))\n D_update = D_update / T.sqrt(T.sum(T.sqr(D_update), axis=0))\n return D_update\n\n def get_D():\n return self.D\n</code></pre></li>\n</ol>\n\n<p>As far as I can see there's no need to use <code>givens=...</code> at all.</p>\n"
}
] |
30,556,013 | 1 |
<python><theano>
|
2015-05-31T09:45:09.610
| 30,568,365 | 3,301,357 |
How to flatten a calculation graph for a loop with accumulator(symbolic variables)?
|
<p>I'm a newbie in theano, but I've already googled and read the official theano documentation, and
I haven't found any clue how to solve my problem.</p>
<p>I'm trying to reinvent the wheel: I'm implementing my own batch convolution using theano.
(I'm doing so to learn this library)</p>
<p>So, here's what I'm trying to do:</p>
<pre></pre>
<p>This results in a very deep calculation graph, because inc_subtensor is added on top of each previous operation:</p>
<p>inc_subtensor_stepN(inc_subtensor_stepN-1(inc_subtensor_stepN-2...</p>
<p>So I tried to flatten it. As all variables are symbolic, I realised that I have to substitute them in the graph in some way.</p>
<p>I tried theano.clone, but it results into the same situation as inc_subtensor.</p>
<p>Then I tried to use theano.scan:</p>
<pre></pre>
<p>But still, I'm getting "RuntimeError: maximum recursion depth exceeded in comparison"
the first time sym_im2col_prev_layer_curr_batch = theano.clone is executed.</p>
<p>The latter code snippet shows exactly what I'm trying to do, but I have no idea why I'm getting 'maximum recursion depth exceeded',
because each time I do theano.clone, theano is supposed to substitute sym_im2col_prev_layer_batch_idx (which is already used in scan) with
its exact symbolic value - im2col_prev_layer[batch_idx] - and give me a copy of this subgraph.
I might have missed something...</p>
<p>How are such (or similar) tasks solved in theano, and how can I avoid overly deep calculation graphs when
doing them?</p>
<p>I've also tried this approach:</p>
<pre></pre>
<p>But when trying to print the value of 'convolved' right after the 'for' loop, I'm getting:</p>
<pre></pre>
<p>So, the same story.</p>
<p>Increasing recursion depth for python is NOT an option.</p>
<p>Any ideas how to flatten the calculation graph for my case?</p>
|
[
{
"AnswerId": "30568365",
"CreationDate": "2015-06-01T07:56:27.487",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>In general <code>theano.scan</code> is the solution for recursive situations. In a case like yours <code>theano.scan</code> should be used to <em>replace</em> a Python <code>for</code> loop, not in addition to the <code>for</code> loop.</p>\n\n<p>It's difficult to see exactly what you're trying to achieve but the extensive use of <code>set_subtensor</code> and <code>inc_subtensor</code> suggests you're thinking about this in a way that is not well matched with how Theano wants to work. <code>theano.scan</code> might allow you to achieve what you want using the approach you're currently taking but, after a quick scan through the code you've provided, it doesn't look like even <code>theano.scan</code> is required. If one iteration does not rely on results from a previous iteration, as appears to be the case, then you can probably do this without any loops at all (neither Python <code>for</code> loops or a <code>theano.scan</code>) by judicious use of Theano tensor operations. The non-loop approach would almost certainly be far more efficient and speedy than doing things via a loop of some kind. Admittedly, these can be more difficult to wrap your head around than sequential, one-row-at-a-time type, operations.</p>\n\n<p>If you can't see how your computation might be achieved via plain multi-dimensional tensor operations without loops then I would suggest looking into how you can replace your Python <code>for</code> loops with as few <code>theano.scan</code> operations as you can get away with.</p>\n"
}
] |
30,558,439 | 1 |
<for-loop><matrix><lua><deep-learning><torch>
|
2015-05-31T14:16:44.340
| 30,560,653 | 1,056,274 |
Fast way to initialize a tensor in torch7
|
<p>I need to initialize a 3D tensor with an index-dependent function in torch7, i.e.</p>
<pre></pre>
<p>then I initialize a 3D tensor A like this:</p>
<pre></pre>
<p>But this code runs very slowly, and I found it takes up 92% of the total running time. Is there a more efficient way to initialize a 3D tensor in torch7?</p>
|
[
{
"AnswerId": "30560653",
"CreationDate": "2015-05-31T17:51:03.243",
"ParentId": null,
"OwnerUserId": "2726734",
"Title": null,
"Body": "<p>See the documentation for the <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#applying-a-function-to-a-tensor\"><code>Tensor:apply</code></a></p>\n\n<blockquote>\n <p>These functions apply a function to each element of the tensor on\n which the method is called (self). These methods are much faster than\n using a for loop in Lua.</p>\n</blockquote>\n\n<p>The example in the docs initializes a 2D array based on its index i (in memory). Below is an extended example for 3 dimensions and below that one for N-D tensors. Using the apply method is much, <em>much</em> faster on my machine:</p>\n\n<pre><code>require 'torch'\n\nA = torch.Tensor(100, 100, 1000)\nB = torch.Tensor(100, 100, 1000)\n\nfunction func(i,j,k) \n return i*j*k \nend\n\nt = os.clock()\nfor i=1,A:size(1) do\n for j=1,A:size(2) do\n for k=1,A:size(3) do\n A[{i, j, k}] = i * j * k\n end\n end\nend\nprint(\"Original time:\", os.difftime(os.clock(), t))\n\nt = os.clock()\nfunction forindices(A, func)\n local i = 1\n local j = 1\n local k = 0\n local d3 = A:size(3)\n local d2 = A:size(2) \n return function()\n k = k + 1\n if k > d3 then\n k = 1\n j = j + 1\n if j > d2 then\n j = 1\n i = i + 1\n end\n end\n return func(i, j, k)\n end\nend\n\nB:apply(forindices(A, func))\nprint(\"Apply method:\", os.difftime(os.clock(), t))\n</code></pre>\n\n<hr>\n\n<p><strong>EDIT</strong></p>\n\n<p>This will work for any Tensor object:</p>\n\n<pre><code>function tabulate(A, f)\n local idx = {}\n local ndims = A:dim()\n local dim = A:size()\n idx[ndims] = 0\n for i=1, (ndims - 1) do\n idx[i] = 1\n end\n return A:apply(function()\n for i=ndims, 0, -1 do\n idx[i] = idx[i] + 1\n if idx[i] <= dim[i] then\n break\n end\n idx[i] = 1\n end\n return f(unpack(idx))\n end)\nend\n\n-- usage for 3D case.\ntabulate(A, function(i, j, k) return i * j * k end)\n</code></pre>\n"
}
] |
30,558,465 | 1 |
<theano><deep-learning>
|
2015-05-31T14:19:30.220
| 30,567,486 | 379,539 |
Theano multiplying by zero
|
<p>Can anybody explain to me the meaning behind these two lines of code from here: <a href="https://github.com/Newmu/Theano-Tutorials/blob/master/4_modern_net.py" rel="nofollow">https://github.com/Newmu/Theano-Tutorials/blob/master/4_modern_net.py</a></p>
<pre></pre>
<p>Is it a mistake? Why do we instantiate acc to zero and then multiply it by rho in the next line? It looks like it will not achieve anything this way, and acc will remain zero. Will there be any difference if we replace "rho * acc" by just "acc"?</p>
<p>The full function is given below:</p>
<pre></pre>
|
[
{
"AnswerId": "30567486",
"CreationDate": "2015-06-01T07:01:17.077",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>This is just a way to tell Theano \"create a shared variable and initialize its value to be zero <em>in the same shape as p</em>.\"</p>\n\n<p>This <code>RMSprop</code> method is a <em>symbolic</em> method. It does not actually compute the RmsProp parameter updates, it only tells Theano how parameter updates should be computed when the eventual Theano function is executed.</p>\n\n<p>If you look further down <a href=\"https://github.com/Newmu/Theano-Tutorials/blob/master/4_modern_net.py\" rel=\"noreferrer\">the tutorial code you linked to</a> you'll see the symbolic execution graph for the parameter updates are constructed by <code>RMSprop</code> via a call on line 67. These updates are then compiled into a Theano function called <code>train</code> in Python on line 69 and the train function is executed many times on line 74 within the for loops of lines 72 and 73. The Python function <code>RMSprop</code> will be called only once, irrespective of how many times the <code>train</code> function is called within the for loops on lines 72 and 73.</p>\n\n<p>Within <code>RMSprop</code>, we are telling Theano that, for each parameter <code>p</code>, we need a new Theano variable whose <em>initial</em> value has the same shape as <code>p</code> and is 0. throughout. We then go on to tell Theano how it should update both this new variable (unnamed as far as Theano is concerned but named <code>acc</code> in Python) and how to update the parameter <code>p</code> itself. These commands do not alter either <code>p</code> or <code>acc</code>, they just tell Theano how <code>p</code> and <code>acc</code> should be updated later, once the function has been compiled (line 69) each time it is executed (line 74).</p>\n\n<p>The function executions on line 74 will <em>not</em> call the <code>RMSprop</code> Python function, they execute a compiled version of <code>RMSprop</code>. There will be no initialization inside the compiled version because that already happened in the Python version of RMSprop. Each <code>train</code> execution of the line <code>acc_new = rho * acc + (1 - rho) * g ** 2</code> will use the <em>current</em> value of <code>acc</code> not its initial value.</p>\n"
}
] |
30,566,793 | 1 |
<image><qt><lua><window><torch>
|
2015-06-01T06:16:13.520
| 30,575,161 | 1,457,196 |
Preserving aspect ratio when using torch's image.display
|
<p>I have the following very simple script written in lua. I am running it with qlua.</p>
<p></p>
<p>If the image is large the qt window simply takes the whole screen, which also stretches the image to fit the screen.</p>
<p>I can't figure out a way to keep this from happening.</p>
<p>Thanks!</p>
|
[
{
"AnswerId": "30575161",
"CreationDate": "2015-06-01T13:42:45.787",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>If the image is large, resize it down to what you can configure as \"Max height/Max width\", while preserving the aspect ratio.</p>\n\n<p>Sample code:</p>\n\n<pre><code>maxSize = 480\n-- find the smaller dimension, and resize it to maxSize (while keeping aspect ratio)\nlocal iW = input:size(3)\nlocal iH = input:size(2)\nif iW < iH then\n input = image.scale(input, maxSize, maxSize * iH / iW)\nelse\n input = image.scale(input, maxSize * iW / iH, maxSize)\nend\n</code></pre>\n"
}
] |
30,569,647 | 2 |
<python><theano><deep-learning>
|
2015-06-01T09:10:21.540
| 30,800,488 | 4,924,192 |
Unable to run the Theano CNN.py file for a different dataset
|
<p>I have converted my dataset into exactly the form of the mnist.pkl.gz file and it runs with the logistic_sgd.py and mlp.py programs given in the Theano deep learning tutorial PDF (University of Montreal). However, on running the CNN.py file it gives a huge error which is difficult to understand. Can anyone who has successfully run the CNN.py program on a different dataset help me out here, as I am totally clueless about what the error is?
My training set has 1176 entries and my validation and test sets have 168 entries each. Maybe the problem is with the batch size. If so, can someone please suggest an appropriate batch size?</p>
<p>I am using the Spyder GUI for Python 2.7 that comes with the Anaconda Bundle.</p>
<p>Error occurs soon after printing '... building model'</p>
<p>Code Snippet:</p>
<pre></pre>
<p>Prompt at console as soon as error occurs:</p>
<pre></pre>
<p>Then the following error message appears.</p>
<p>Error message:</p>
<pre></pre>
|
[
{
"AnswerId": "30571192",
"CreationDate": "2015-06-01T10:30:37.090",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>I encouraged you to add more information (thanks for that) so felt obliged to help as far as I can, but unfortunately that may not be much. Hopefully others can add better answers.</p>\n\n<p>I don't think the error is occurring within the code snippet you provided. Yes, it's after the '... building the model' message, but I think it's further on. It's probably generating the error at a <code>theano.function</code> call.</p>\n\n<p>Your problem appears to be in the C++ toolchain on your system. I would suggest searching for more information with respect to the \"undefined reference to __imp_PyExc_ImportError\" message.</p>\n\n<p>For example, <a href=\"https://stackoverflow.com/questions/2842469/python-undefined-reference-to-imp-py-initmodule4\">this StackOverflow question</a> suggests adding <code>-D MS_WIN64</code> to the command line. But this is probably an internal command line, not the one you are using originally.</p>\n\n<p>I don't understand why it works for one dataset but not another.</p>\n"
},
{
"AnswerId": "30800488",
"CreationDate": "2015-06-12T10:09:10.017",
"ParentId": null,
"OwnerUserId": "4924192",
"Title": null,
"Body": "<p>There was some problem in my laptop only. It is running fine on a different computer.</p>\n"
}
] |
30,576,509 | 1 |
<python><theano>
|
2015-06-01T14:45:30.767
| 30,596,963 | 661,935 |
Why is indexing a Theano tensor variable with a list of `slice` objects invalid?
|
<p>In an <a href="https://github.com/goodfeli/theano_exercises/blob/master/01_basics/03_advanced_expressions/02_nd_indexing.py" rel="nofollow">exercise</a> of Dr. Goodfellow's Theano tutorial, it's ok to slice with a tuple , but Theano will raise an exception for . </p>
<p>Exception info:</p>
<blockquote>
<p>theano.tensor.var.AsTensorError: ('Cannot convert [slice(, Elemwise{neg,no_inplace}.0, None), slice(, Elemwise{neg,no_inplace}.0, None), slice(, Elemwise{neg,no_inplace}.0, None)] to TensorType', )</p>
</blockquote>
<p>Why doesn't it work with ? BTW, slicing a tensor variable with an integer list is ok.
I've read the <a href="https://theano.readthedocs.org/en/latest/library/tensor/basic.html#indexing" rel="nofollow">document</a>, but didn't find the reason.</p>
|
[
{
"AnswerId": "30596963",
"CreationDate": "2015-06-02T13:04:23.403",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>This was a bug in Theano. It has been fixed by Frédéric Bastien via <a href=\"https://github.com/Theano/Theano/pull/2992\" rel=\"nofollow\">https://github.com/Theano/Theano/pull/2992</a></p>\n\n<p>More here: <a href=\"https://groups.google.com/forum/#!topic/theano-users/nTRfigJD19w\" rel=\"nofollow\">https://groups.google.com/forum/#!topic/theano-users/nTRfigJD19w</a></p>\n"
}
] |
30,582,388 | 2 |
<macos><lua><osx-yosemite><libpng><torch>
|
2015-06-01T20:11:47.577
| 30,620,988 | 3,993,741 |
Lua error loading module 'libpng' (Torch, MacOSX)
|
<p>How do I make libpng load properly in Lua? I am running Lua/Torch in iTorch Notebook in Mac OSX 10.10.3, where other basic functions in Lua work, such as plotting and calculations. </p>
<pre></pre>
<blockquote>
<p>Warning: libpng-1.6.17 already installed</p>
</blockquote>
<p>If I run:</p>
<pre></pre>
<blockquote>
<p>error loading module 'libpng' from file '/usr/local/lib/lua/5.1/libpng.so':
dlopen(/usr/local/lib/lua/5.1/libpng.so, 6): Library not loaded: /usr/local/lib/libpng15.15.dylib
Referenced from: /usr/local/lib/lua/5.1/libpng.so
Reason: Incompatible library version: libpng.so requires version 33.0.0 or later, but libpng15.15.dylib provides version 29.0.0
warning: could not be loaded (is it installed?)
/usr/local/share/lua/5.1/dok/inline.lua:736: libpng package not found, please install libpng
stack traceback:
[C]: in function 'error'
/usr/local/share/lua/5.1/dok/inline.lua:736: in function 'error'
/usr/local/share/lua/5.1/image/init.lua:142: in function 'saver'
/usr/local/share/lua/5.1/image/init.lua:355: in function 'save'
/Users/MY/torch/install/share/lua/5.1/itorch/gfx.lua:25: in function 'f'
[string "local f = function() return itorch.image(iii)..."]:1: in main chunk
[C]: in function 'xpcall'
/Users/MY/torch/install/share/lua/5.1/itorch/main.lua:177: in function
/Users/MY/torch/install/share/lua/5.1/lzmq/poller.lua:75: in function 'poll'
/Users/MY/torch/install/share/lua/5.1/lzmq/impl/loop.lua:307: in function 'poll'
/Users/MY/torch/install/share/lua/5.1/lzmq/impl/loop.lua:325: in function 'sleep_ex'
/Users/MY/torch/install/share/lua/5.1/lzmq/impl/loop.lua:370: in function 'start'
/Users/MY/torch/install/share/lua/5.1/itorch/main.lua:344: in main chunk
[C]: in function 'require'
[string "arg={'/Users/MY/.ipython/profile_default/secu..."]:1: in main chunk</p>
</blockquote>
|
[
{
"AnswerId": "30583013",
"CreationDate": "2015-06-01T20:48:17.250",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>Reinstalling the image package as well as forcing the linkage of libpng might fix it:</p>\n\n<pre><code>brew link libpng --force\nluarocks install image\n</code></pre>\n"
},
{
"AnswerId": "30620988",
"CreationDate": "2015-06-03T13:06:13.423",
"ParentId": null,
"OwnerUserId": "4455483",
"Title": null,
"Body": "<p>I had a similar problem (OSX 10.9.5). You probably have multiple versions of libpng installed, with the one called during install of luarocks having architecture i386 (x86_64 required).</p>\n\n<p>To solve this:</p>\n\n<ol>\n<li><p>Try installing image again, and reading the log:</p>\n\n<p>luarocks install image</p></li>\n<li><p>Check the log to see if you get a message of type:</p>\n\n<p>ld: warning: ignoring file /Library/Frameworks//libpng.framework/libpng, missing required architecture x86_64 in file /Library/Frameworks//libpng.framework/libpng (2 slices)</p></li>\n<li><p>If this is the case (assuming using brew) remove the libpng framework in /Library/Frameworks and do a </p>\n\n<p>brew install libpng --universal</p></li>\n<li><p>Reinstall image and run.</p></li>\n</ol>\n\n<p>This worked for me, I hope it works for you too.</p>\n"
}
] |
30,591,099 | 0 |
<python-2.7><installation><neural-network><theano>
|
2015-06-02T08:34:03.110
| null | 4,774,186 |
Installation nolearn issue
|
<p>I want to use nolearn but I am not able to install it. I can't use pip to install it, so I tried what is shown in the figure. Any idea how I can solve this? Thank you
<img src="https://i.stack.imgur.com/ZeDQs.png" alt="enter image description here"></p>
|
[] |
30,594,858 | 2 |
<python><installation><theano><nvcc>
|
2015-06-02T11:29:35.930
| 30,596,660 | 4,774,186 |
Theano installation, nvcc not in the path
|
<p>I have installed theano on Windows 7 64-bit with winpython using their guide <a href="http://deeplearning.net/software/theano/install_windows.html" rel="nofollow">http://deeplearning.net/software/theano/install_windows.html</a> and I thought it worked, since when I ran their first example I got the expected results and no errors. I wanted to go ahead and install the part <strong>Configure Theano for GPU use</strong>, but when I ran it again I had this in the python console: </p>
<pre></pre>
<p>the .theanorc file I am using is:</p>
<pre></pre>
<p>and I added it in C:\SciSoft\WinPython-64bit-2.7.9.4\settings as I understood from the guide. </p>
<p>By the way, I checked C:\SciSoft\env.bat, and when I write <strong>where nvcc</strong> it says no file found, whereas I have no problems with the other checks. Is that because I don't have an NVIDIA card? I am totally lost. Any help? Thank you </p>
|
[
{
"AnswerId": "30596660",
"CreationDate": "2015-06-02T12:51:01.147",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>Theano is designed to work (almost) identically on both CPU and GPU. You don't need a GPU to use Theano and if you don't have a Nvidia GPU then you shouldn't try installing any GPU-specific stuff at all.</p>\n"
},
{
"AnswerId": "40499036",
"CreationDate": "2016-11-09T01:21:54.670",
"ParentId": null,
"OwnerUserId": "6831692",
"Title": null,
"Body": "<p>aleju, if you don't want (or can't) use theano with GPU, you just need to change .theanorc to use only cpu. This will not cause any problem, except for poor performance.</p>\n\n<pre><code>[global]\ndevice = cpu\n...\n</code></pre>\n"
}
] |
30,595,818 | 1 |
<numpy><linear-algebra><theano>
|
2015-06-02T12:15:37.683
| 30,596,094 | 1,937,197 |
How to express c[i,j,k] = a[i,j] * b[i,k] in Numpy/Theano?
|
<p>The definition </p>
<pre></pre>
<p>is an element-wise product with respect to i, and an outer product with respect to j and k. Is there any way to express this in NumPy/Theano without loops?</p>
|
[
{
"AnswerId": "30596094",
"CreationDate": "2015-06-02T12:28:16.050",
"ParentId": null,
"OwnerUserId": "1937197",
"Title": null,
"Body": "<p>I found a solution that works with both Numpy and Theano:</p>\n\n<pre><code>c = a[:, :, np.newaxis] * b[:, np.newaxis, :]\n</code></pre>\n"
}
] |
30,627,302 | 1 |
<python-2.7><numpy><theano>
|
2015-06-03T17:54:18.383
| 30,651,282 | 2,890,103 |
Cannot update a subset of a shared tensor variable after a cast
|
<p>I have the following code:</p>
<pre></pre>
<p>This compiles fine (where  is a numpy array of dtype ).</p>
<p>To prevent future type errors I want to cast my shared tensor into  (or , which is equivalent, as I have set  to  in the config file).</p>
<p>So I add  and then I get the following error: </p>
<p>. </p>
<p>I do not understand why. According to this <a href="https://stackoverflow.com/questions/15917849/how-can-i-assign-update-subset-of-tensor-shared-variable-in-theano">question</a>, using should allow me to update a subset of the shared variable.</p>
<p>How can I cast my shared tensor while being able to update it?</p>
|
[
{
"AnswerId": "30651282",
"CreationDate": "2015-06-04T18:19:07.840",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>The problem is that you are trying to update a symbolic variable, not a shared variable.</p>\n\n<pre><code>U = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float64)\nWords = theano.shared(value=U, name='Words')\nzero_vec_tensor = T.vector()\nset_zero = theano.function([zero_vec_tensor], updates=[(Words, T.set_subtensor(Words[0, :], zero_vec_tensor))])\n</code></pre>\n\n<p>works fine because the thing you are updating, <code>Words</code> is a shared variable.</p>\n\n<pre><code>U = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.float64)\nWords = theano.shared(value=U, name='Words')\nWords = T.cast(Words, dtype = theano.config.floatX)\nzero_vec_tensor = T.vector()\nset_zero = theano.function([zero_vec_tensor], updates=[(Words, T.set_subtensor(Words[0, :], zero_vec_tensor))])\n</code></pre>\n\n<p>does not work because now Words is no longer a shared variable, it is a symbolic variable that, when executed, will compute a cast the values in the shared variable to <code>theano.config.floatX</code>.</p>\n\n<p>The <code>dtype</code> of a shared variable is determined by the value assigned to it. So you probably just need to change the type of U:</p>\n\n<pre><code>U = np.array([[1, 2, 3], [4, 5, 6]], dtype=theano.config.floatX)\n</code></pre>\n\n<p>Or cast it using numpy instead of symbolically:</p>\n\n<pre><code>U = np.dtype(theano.config.floatX).type(U)\n</code></pre>\n"
}
] |
30,628,900 | 1 |
<python><eclipse><python-2.7><cuda><theano>
|
2015-06-03T19:26:14.697
| null | 3,878,508 |
Theano Test File Will Not Compile
|
<p>I have been trying to install Theano on a Windows 7 64-bit machine based on the tutorial found on the website <a href="http://deeplearning.net/software/theano/install_windows.html#visual-studio-and-cuda" rel="nofollow">here</a>. I have gotten almost everything to work, but after installing CUDA 5.5 I continued on to verifying the programs with these commands:</p>
<p>"Please do so, and verify that the following programs are found:</p>
<ol>
<li>where gcc</li>
<li>where gendef</li>
<li>where cl</li>
<li>where nvcc"</li>
</ol>
<p>The first three work fine but the last one returns "INFO: Could not find files for the given pattern(s)." I am not sure why because I installed CUDA and nvcc should be found. This is causing a larger problem because when I try to run this test file:</p>
<pre></pre>
<p>Eclipse throws the error at the fourth line under config saying "Undefined variable from import: config". Then when I run it anyway, the error in the console is "AttributeError: 'module' object has no attribute 'config'"</p>
<p>Any suggestions or advice on any of this is much appreciated.</p>
|
[
{
"AnswerId": "30666848",
"CreationDate": "2015-06-05T12:42:25.017",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>The first problem was fixed in the comments on the question.</p>\n\n<p>For the import, I suggest that you uninstall Theano. Do this many times to make sure you remove all version. Depending how you installed python, it could have installed an old version of Theano at the same time.</p>\n\n<p>Then install Theano development version.</p>\n\n<p>Then it can't find theano.config, most of the time is because there is problem in the installation or you use an old version that had problems related to Windows.</p>\n"
}
] |
30,632,204 | 1 |
<python><theano>
|
2015-06-03T22:52:52.313
| null | 1,758,727 |
What is the 'name' field used for in the definition of a theano shared variable?
|
<p>I notice that you can define a theano shared variable by:</p>
<pre></pre>
<p>In theano doc <a href="http://deeplearning.net/software/theano/library/compile/shared.html#module-shared" rel="nofollow">here</a>, it says the name field is "the name for this variable". I see in most cases people just pass a exactly same string to the name field as the shared variable's name in Python. For example:</p>
<pre></pre>
<p>What is the rule of thumb to define the name field? What is the name field used for?</p>
|
[
{
"AnswerId": "30651047",
"CreationDate": "2015-06-04T18:04:57.397",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>Assigning names to Theano objects helps during debugging. For example, <code>theano.printing.debugprint</code> will include the name, if present, in its output; this can make understanding a complex computation graph easier.</p>\n\n<p>You can also give names to input variables (e.g. <code>x=theano.tensor.scalar('x')</code>) and functions (e.g. <code>f=theano.function(inputs, outputs, name='f')</code>.</p>\n\n<p>The output of <code>theano.printing.debugprint</code> can be limited by naming nodes whose content you don't need to see and using the <code>stop_on_name</code> parameter.</p>\n\n<p>Keep Python names and Theano names in sync can be tedious and error prone. In some circumstances you might like to use functions such as:</p>\n\n<pre><code>def name_node(variable, variable_names):\n for name, query_variable in variable_names.iteritems():\n if query_variable is variable:\n variable.name = name\n\n return variable\n\n\ndef name_nodes(variables, variable_names):\n if not isinstance(variables, (tuple, list)):\n return name_node(variables, variable_names)\n\n for variable in variables:\n for name, query_variable in variable_names.iteritems():\n if query_variable is variable:\n variable.name = name\n\n return variables\n</code></pre>\n\n<p>In a symbolic function you can use these like this:</p>\n\n<pre><code>def create_graph(w_filename):\n x = T.scalar()\n w = theano.shared(numpy.load(w_filename))\n y = T.tanh(theano.dot(x, w))\n return name_nodes([x, w, y], locals())\n</code></pre>\n\n<p>This is a bit of a Python hack and won't be suitable in all circumstances.</p>\n"
}
] |
30,633,181 | 1 |
<python><theano>
|
2015-06-04T00:43:07.473
| 30,653,254 | 3,353,215 |
The output size of theano.tensor.nnet.conv.conv2d
|
<p>The function that is currently being used widely in tutorials and other places is of the form:</p>
<pre></pre>
<ol>
<li><p>If for the first layer of a CNN, I have  as , with the number of kernels being 20, each 7 x 7, what does the '1' stand for? My  is .</p></li>
<li><p>This convolution now outputs a tensor of shape , which I understand. My next layer now takes the parameters  = ,  =  and produces an output of shape . I seem to kind of understand this operation, except: if I want to use a layer of 50 filters, each working on the 20 feature maps previously produced, shouldn't I produce 1000 feature maps in all instead of just 50? To restate my question, I have a stack of 20 feature maps, each running through 50 convolution kernels; shouldn't my output shape be  instead of ?</p></li>
</ol>
|
[
{
"AnswerId": "30653254",
"CreationDate": "2015-06-04T20:12:42.910",
"ParentId": null,
"OwnerUserId": "3489247",
"Title": null,
"Body": "<p>To answer your questions:</p>\n\n<ol>\n<li><p>The <code>1</code> stands for the number of input channels. As you seem to be using gray scale images, this is one. For color images it can be 3. For other convolutional layers as in your second question, it must be equal to the number of outputs that the previous layer generated.</p></li>\n<li><p>Using a filter of size <code>[50, 20, 5, 5]</code> on an input signal of <code>[100, 20, 26, 26]</code> is actually a good example for your first question, as well. You have here 50 filters of shape <code>[20, 5, 5]</code>. Every image is of shape <code>[20, 26, 26]</code>. The convolution uses all the 20 channels each time: Filter 0 gets applied to image channel 0, filter 1 gets applied to image 1, and the whole result gets summed up. Does that make sense?</p></li>\n</ol>\n"
}
] |
30,640,102 | 1 |
<theano>
|
2015-06-04T09:29:39.813
| 30,642,674 | 1,157,605 |
Theano subtraction when you can't tile
|
<p>Say we have a theano matrix X that is nxm, and another one u that is nx1. We want to do X - u, but if we do that we'll get an input dimension mismatch. We could try tiling u, but tile only accepts constants and not variables. How do we do this?</p>
<pre></pre>
<p>I then get the error </p>
|
[
{
"AnswerId": "30642674",
"CreationDate": "2015-06-04T11:28:46.863",
"ParentId": null,
"OwnerUserId": "3489247",
"Title": null,
"Body": "<p><code>X - u</code> should work exactly as you write it by broadcasting:</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\nn = 10\nm = 20\n\nX = T.arange(n * m).reshape((n, m))\nu = T.arange(0, n * m, m).reshape((n, 1))\n\nr = X - u\n\nr.eval()\n</code></pre>\n\n<p>Similar to your updated question, you can do</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\nX = T.dmatrix()\nu = T.addbroadcast(T.dmatrix(), 1)\n\nr = X - u\n\nf = theano.function([X, u], r)\n\nXX = np.arange(20.).reshape(2, 10)\nuu = np.array([1., 100.]).reshape(2, 1)\n\nf(XX, uu)\n</code></pre>\n"
}
] |
30,663,780 | 2 |
<debugging><gdb><torch>
|
2015-06-05T10:03:48.793
| null | 1,851,417 |
how to configure gdb to debug a script not a binary [gdb : file format not recognized]
|
<p>I'm trying to use gdb to debug the <a href="http://torch.ch/" rel="nofollow">Torch library</a> binary file . When I run from the command line:
gdb --args th </p>
<p>I get the following error:</p>
<pre></pre>
<p>I checked whether my current installation of  is 64-bit; I installed gdb64, and when I run</p>
<pre></pre>
<p>I still get the same error, the output of :</p>
<pre></pre>
<p>is : </p>
<pre></pre>
<p>I have learned that the problem is that the executable file is not a binary, but a script, so gdb is trying to debug the script instead. </p>
<p>My question is how to overcome this and let gdb debug the execution of the command itself, or even replace the Torch installation with a binary executable instead of a script.</p>
|
[
{
"AnswerId": "30679176",
"CreationDate": "2015-06-06T05:13:46.517",
"ParentId": null,
"OwnerUserId": "1851417",
"Title": null,
"Body": "<p>with some help from the comments i was able to run gdb over the torch script, through : </p>\n\n<pre><code>gdb64 /bin/bash # check your gdb configuration either it's i686 or x86_64 \nrun /path/to/th # th is the torch running script to be debugged\n</code></pre>\n"
},
{
"AnswerId": "31819576",
"CreationDate": "2015-08-04T21:22:10.613",
"ParentId": null,
"OwnerUserId": "55075",
"Title": null,
"Body": "<p>Try <a href=\"https://en.wikipedia.org/wiki/LLDB_(debugger)\" rel=\"nofollow\">LLDB Debugger</a> (<code>lldb</code>) instead which aims to replace GNU Debugger (<code>gdb</code>).</p>\n\n<p>It's available by default on BSD/OS X, on Linux install via: <code>sudo apt-get install lldb</code> (or use <code>yum</code>). </p>\n\n<p>For usage, check <a href=\"http://lldb.llvm.org/lldb-gdb.html\" rel=\"nofollow\">gdb to lldb command map</a> page.</p>\n"
}
] |
30,663,902 | 1 |
<matlab><caffe><conv-neural-network><leveldb><matcaffe>
|
2015-06-05T10:10:32.150
| 30,869,166 | 2,191,652 |
Convert data to leveldb for caffe
|
<p>I have a bunch of 2D data matrices in Matlab (no image data, but some single precision data). </p>
<p>Does anyone know how to convert 2D matlab matrices to the leveldb format which is required by caffe to train a custom neural network?</p>
<p>I already did the tutorial on how to train on images (using the imagenet architecture) and on mnist (digit recognition dataset). However in the latter example they didn't show how to create the respective database. In the tutorial the database was already provided.</p>
|
[
{
"AnswerId": "30869166",
"CreationDate": "2015-06-16T13:34:33.200",
"ParentId": null,
"OwnerUserId": "2191652",
"Title": null,
"Body": "<p>I still don't know to create a leveldb database of my 2D data matrices for usage in caffe but I finally solved by problem:<br>\nI ended up using <a href=\"https://stackoverflow.com/questions/30663902/convert-data-to-leveldb-for-caffe#comment49431386_30663902\">Shai's proposal</a> to convert the data to HDF5 format. It is quite easy to read and write HDF5 databases in Matlab. You just have to use the functions <code>hdf5info()</code>,<code>h5read()</code>,<code>h5create()</code> and <code>h5write()</code> which are already implemented in Matlab.</p>\n\n<p><strong>Example:</strong><br>\n- Change the data type in your caffe prototxt file to \"hdf5layer\", like this:</p>\n\n<pre><code>name: \"LeNet\"\nlayer {\n name: \"mnist\"\n type: \"HDF5Data\"\n top: \"data\"\n top: \"label\"\n include {\n phase: TRAIN\n }\n hdf5_data_param {\n source: \"/path/to/your/database/myMnist_train.txt\"\n batch_size: 64\n }\n}\n</code></pre>\n\n<p>Use Matlab to create HDF5 databases:<br>\n- Caffe: Your input training data has to be a 4-D matrix where the last two dimensions are equal to the size of your 2D input data matrix in matlab.<br>\n- Example: Take a 2d matrix (image or single precision data) of size 54x24 (#rows x cols)<br>\n- -> transpose it, and stack it into a 24x54x1xN matrix, where N is the number of 2d matrices (training samples)<br>\n- The labels are in a 1xN row vectors in matlab.<br>\n- Now create your hdf5 database:</p>\n\n<pre><code>h5create(['train.h5'],'/data',[24 54 1 length(trainLabels)]);\nh5create(['train.h5'],'/label',[1 length(trainLabels)]);\nh5write(['train.h5'],'/data',trainData);\nh5write(['train.h5'],'/label',trainLabels);\n</code></pre>\n\n<ul>\n<li>As you can see, caffe expects a hdf5 database with the variables \"data\" and \"label\"</li>\n<li>Reading a database:<br>\nUse <code>hdf5info(filename)</code> to get the dataset names inside a hdf5 database.\nThen use <code>data = h5read(filename,dataset)</code> to read the dataset</li>\n</ul>\n"
}
] |
30,687,089 | 2 |
<c++><armadillo><caffe>
|
2015-06-06T20:07:21.983
| 30,700,276 | 3,733,814 |
Armadillo in Caffe for pseudo-inverse and transpose
|
<p>I need pseudo-inverse and transpose functions to implement a layer in caffe, so I am using the Armadillo library for that. But how can I convert Caffe blobs (2-D) to Armadillo Mat and vice versa?</p>
|
[
{
"AnswerId": "30688162",
"CreationDate": "2015-06-06T22:24:07.333",
"ParentId": null,
"OwnerUserId": "4092300",
"Title": null,
"Body": "<p>A Caffe blob is a 4-dimensional matrix, I believe armadillo's <code>pinv(A)</code> and <code>.t()</code> or <code>trans(A)</code> are meant to be used with 2 dimensional matrices of type <code>mat</code>.</p>\n\n<p>You could get a vector of a vector of 2-dimensional armadillo matrices to represent a 4-dimensional Caffe blob. You can do so with something like:</p>\n\n<pre><code>using namespace arma;\nusing namespace caffe;\n\nvector<vector<mat>> blob2vvmat (Blob m) {\n\n vector<vector<mat>> vvm;\n\n for (int i=0; i<m.shape().at(0); i++) {\n\n vector<mat> vm;\n\n for (int j=0; i<m.shape().at(1); i++) {\n\n mat M (m.shape().at(2), m.shape().at(3));\n\n for (int k=0; i<m.shape().at(2); i++) {\n\n for (int l=0; i<m.shape().at(3); i++) {\n\n M(k,l) = m.data_at(i, j, k, l);\n\n }\n\n }\n\n vm.push_back(M);\n\n }\n\n vvm.push_back(vm);\n\n }\n\n return vvm;\n}\n</code></pre>\n\n<p>I haven't testing the code, but that should theoretically work, unless you're not looking for a vector of vector of matrices.</p>\n\n<p>If you know your row, column, height and 4d sizes, you can use the symbolic library on MATLAB or octave, or even better: Mathematica to derive and equation to calculate the pseudo-inverse and transpose of a 4d matrix and port it to your program using the <code>ccode(expr)</code> function.</p>\n\n<p>Hope this helps!</p>\n"
},
{
"AnswerId": "30700276",
"CreationDate": "2015-06-08T01:53:11.040",
"ParentId": null,
"OwnerUserId": "3733814",
"Title": null,
"Body": "<p>Although, I haven''t found a way of conversion between armadillo and caffe blobs.\nBut I noticed that caffe uses MKL library for CPU computations, which has both of the required functions i.e. pseudo inverse and transpose.\nIn case, caffe is not configured to use MKL, I can easily implement a function for transpose (which would be better than getting it done using armadillo and then converting data types in O(mn) time).</p>\n"
}
] |
30,689,685 | 1 |
<python><matrix><gpgpu><theano>
|
2015-06-07T03:07:23.297
| null | 1,293,964 |
How to toggle theano matrix based on vector of int position
|
<p>Using theano tensor operations, how can I toggle one cell on each row of a matrix based on an integer position indicator at the corresponding row index of a vector (i.e. |v| = number of rows of the matrix)? For example, given a 100x5 matrix of zeros</p>
<pre></pre>
<p>and a 100-element vector of integers in the range [0, 4],</p>
<pre></pre>
<p>update (or create another) matrix M to</p>
<pre></pre>
<p>(I know how to do this iteratively using conventional code, but I want to run it as part of an algorithm on the GPU without complicating my input, which is currently the vector V, so a direct theano implementation would be great.)</p>
|
[
{
"AnswerId": "30690168",
"CreationDate": "2015-06-07T04:51:41.450",
"ParentId": null,
"OwnerUserId": "1293964",
"Title": null,
"Body": "<p>I figured out the answer myself. This operation is known as <a href=\"http://en.wikipedia.org/wiki/One-hot\" rel=\"nofollow\">one-hot</a> and it is supported as the \"to_one_hot\" in <a href=\"http://www.deeplearning.net/software/theano/library/tensor/extra_ops.html\" rel=\"nofollow\">Theano's extra_ops package</a>. Code:</p>\n\n<pre><code>M_one_hot = theano.tensor.extra_ops.to_one_hot(V, 5, dtype='int32')\n</code></pre>\n"
}
] |
30,692,742 | 1 |
<theano>
|
2015-06-07T11:00:34.527
| 30,696,815 | 4,983,163 |
Creating square matrix with eigenvalues on the diagonal in theano
|
<p>I would like to create a square matrix with the eigenvalues on the diagonal:</p>
<pre></pre>
<p>However, apparently theano does not treat the D matrix I created as a standard matrix, so I cannot use it in the succeeding computations.</p>
<pre></pre>
|
[
{
"AnswerId": "30696815",
"CreationDate": "2015-06-07T18:10:28.147",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>You're using an operation class as if it were an operation function.</p>\n\n<p>Instead of </p>\n\n<pre><code>D = T.nlinalg.AllocDiag(eigen_values)\n</code></pre>\n\n<p>try</p>\n\n<pre><code>D = T.nlinalg.AllocDiag()(eigen_values)\n</code></pre>\n\n<p>or</p>\n\n<pre><code>D = T.nlinalg.alloc_diag(eigen_values)\n</code></pre>\n"
}
] |
30,700,455 | 2 |
<caffe>
|
2015-06-08T02:17:34.007
| null | 4,984,613 |
Caffe layer registry error
|
<p>I'm new to Caffe and I have a problem running the Caffe mnist example. The error message is as follows:</p>
<pre></pre>
<p>I've searched for solutions and tried linking against the dynamic library as suggested in <a href="https://stackoverflow.com/questions/30325108/caffe-layer-creation-failure">this post</a>. However, it does not work. I can see the list of known layers is empty. What could be the cause? Please help me out. Thanks. I'm using Ubuntu 15.04.</p>
|
[
{
"AnswerId": "39996488",
"CreationDate": "2016-10-12T10:36:56.293",
"ParentId": null,
"OwnerUserId": "7005820",
"Title": null,
"Body": "<p>I'm not sure if you are using the original solver.prototxt, the problem seems to be you define a wrong layer in the prototxt</p>\n"
},
{
"AnswerId": "57052580",
"CreationDate": "2019-07-16T07:56:22.603",
"ParentId": null,
"OwnerUserId": "5495014",
"Title": null,
"Body": "<p>Please use the CMake when build the caffe from source. I also had the different types of layer mismatching. CMake will fix the al issue.</p>\n"
}
] |
30,710,355 | 1 |
<python><linux><anaconda><caffe>
|
2015-06-08T13:25:24.033
| 30,713,187 | 2,191,652 |
Cannot import caffe into python, libjpeg.so.62 not found
|
<p>I cannot import caffe into (anaconda-) python.
I'm following a <a href="http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/hdf5_classification.ipynb" rel="nofollow">notebook example</a> on "logistic regression on non-image HDF5 data". When I execute the line</p>
<pre></pre>
<p>I get the following error:</p>
<pre></pre>
<p>The library is definitely installed under . I don't know what is going wrong here or how to tell anaconda python where to look for .</p>
<p>I already tried out  but apt-get says </p>
<p>I compiled caffe while modifying "Makefile.config" such that it was pointing to the anaconda python path. I also exported the PYTHONPATH and PATH of my anaconda directory:</p>
<pre></pre>
|
[
{
"AnswerId": "30713187",
"CreationDate": "2015-06-08T15:25:47.970",
"ParentId": null,
"OwnerUserId": "2191652",
"Title": null,
"Body": "<p>Ok I finally found the solution:</p>\n\n<p>I had to <code>sudo apt-get install libjpeg62</code></p>\n\n<p>After that a new error occurred while trying to <code>import caffe</code>, namely</p>\n\n<pre><code>ImportError: /home/myName/libs/anaconda/bin/../lib/libm.so.6: version `GLIBC_2.15' not found (required by /usr/lib/x86_64-linux-gnu/libx264.so.142)\n</code></pre>\n\n<p>That could be solved by removing some buggy anaconda libraries thus resorting to the system libraries,quote shelhamer:\n\"Some versions of Anaconda seem to come with a bad libm. <code>rm ~/anaconda/lib/libm.*</code> takes care of this by reverting to the system libm.\"</p>\n\n<p>see <a href=\"https://github.com/BVLC/caffe/issues/985\" rel=\"nofollow\">github bvlc</a></p>\n"
}
] |
30,717,208 | 2 |
<python><deep-learning><caffe>
|
2015-06-08T19:11:33.030
| 30,717,972 | 4,561,745 |
Two errors while running Caffe
|
<p>I've installed Caffe on my Ubuntu 14.04 machine. The test suite runs perfectly fine, with 581 tests passed. I'm trying to work with the command line and Python interfaces and am getting the following two errors:</p>
<ol>
<li><p><strong>Command Line Interface:</strong> When I try to run the command , I'm getting the following error:</p>
<pre></pre></li>
<li><p><strong>Python Interface:</strong> When I run the command , I'm getting the following error:</p>
<pre></pre></li>
</ol>
|
[
{
"AnswerId": "34112166",
"CreationDate": "2015-12-05T23:28:00.577",
"ParentId": null,
"OwnerUserId": "3138238",
"Title": null,
"Body": "<p>Regarding the second problem you had, I had the same problem, I solved uncommenting this line inside the Makefile.config:</p>\n\n<pre><code># Decomment le line uncommented below:\n# Homebrew installs numpy in a non standard path (keg only)\nPYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include\n# PYTHON_LIB += $(shell brew --prefix numpy)/lib\n</code></pre>\n"
},
{
"AnswerId": "30717972",
"CreationDate": "2015-06-08T19:58:15.953",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p>Make sure that numpy is installed correctly and the path is mentioned to detect the newly installed library. The <a href=\"http://caffe.berkeleyvision.org/installation.html#prerequisites\" rel=\"nofollow\">steps</a> are provided in Caffe website itself.</p>\n\n<p>For the 'Caffe' command to work, you will have to step into the folder where the 'Caffe' executable is created and then try running the executable through the terminal.</p>\n"
}
] |
30,729,071 | 1 |
<python><amazon-web-services><neural-network><theano>
|
2015-06-09T10:09:23.167
| null | 1,372,512 |
Run out of VRAM using Theano on Amazon cluster
|
<p>I'm trying to execute the <a href="http://deeplearning.net/tutorial/code/logistic_sgd.py" rel="nofollow noreferrer">logistic_sgd.py</a> code on an Amazon cluster running the ami-b141a2f5 (Theano - CUDA 7) image. </p>
<p>Instead of the included MNIST database I am using the SD19 database, which requires changing a few dimensional constants, but otherwise no code has been touched. The code runs fine locally, on my CPU, but once I SSH the code and data to the Amazon cluster and run it there, I get this output:
<img src="https://i.stack.imgur.com/lakAK.png" alt="enter image description here"></p>
<p>It looks to me like it is running out of VRAM, but it was my understanding that the code should run on a GPU already, without any tinkering on my part necessary. After following the suggestion from the error message, the error persists.</p>
|
[
{
"AnswerId": "33219275",
"CreationDate": "2015-10-19T16:17:28.783",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>There's nothing especially strange here. The error message is almost certainly accurate: there really isn't enough VRAM. Often, a script will run fine on CPU but then fail like this on GPU simply because there is usually much more system memory available than GPU memory, especially since the system memory is virtualized (and can page out to disk if required) while the GPU memory isn't.</p>\n\n<p>For this script, there needs to be enough memory to store the training, validation, and testing data sets, the model parameters, and enough working space to store intermediate results of the computation. There are two options available:</p>\n\n<ol>\n<li><p>Reduce the amount of memory needed for one or more of these three components. Reducing the amount of training data is usually easiest; reducing the size of the model next. Unfortunately both of those two options will often impair the quality of the result that is being looked for. Reducing the amount of memory needed for intermediate results is usually beyond the developers control -- it is managed by Theano, but there is sometimes scope for altering the computation to achieve this goal once a good understanding of Theano's internals is achieved.</p></li>\n<li><p>If the model parameters and working memory can fit in GPU memory then the most common solution is to change the code so that the data is no longer stored in GPU memory (i.e. just store it as numpy arrays, not as Theano shared variables) then pass each batch of data in as <code>inputs</code> instead of <code>givens</code>. The <a href=\"http://deeplearning.net/tutorial/code/lstm.py\" rel=\"nofollow\">LSTM sample code</a> is an example of this approach.</p></li>\n</ol>\n"
}
] |
30,729,737 | 1 |
<theano>
|
2015-06-09T10:41:16.797
| null | 2,782,619 |
shuffling the first ith column in a theano matrix
|
<p>I need to shuffle the first i columns of each row of a theano tensor matrix.</p>
<p>The only method I could find is raw_random.shuffle_row_elements, but it shuffles each entire row (all columns).</p>
<p>Can anyone give me a hint?</p>
|
[
{
"AnswerId": "30793100",
"CreationDate": "2015-06-11T23:27:31.443",
"ParentId": null,
"OwnerUserId": "4013571",
"Title": null,
"Body": "<p>Use the following</p>\n\n<p><code>theano.tensor.basic.swapaxes(y, axis1, axis2)</code></p>\n\n<p>This is defined as:</p>\n\n<pre><code>def swapaxes(y, axis1, axis2):\n \"swap axes of inputted tensor\"\n y = as_tensor_variable(y)\n ndim = y.ndim\n li = range(0, ndim)\n li[axis1], li[axis2] = li[axis2], li[axis1]\n return y.dimshuffle(li)\n</code></pre>\n"
}
] |
30,745,647 | 1 |
<python><caffe>
|
2015-06-10T01:37:17.180
| 30,858,102 | 4,561,745 |
Python interface of Caffe: Error in "import caffe"
|
<p>I'm trying to run Caffe in its Python interface. I've already run the command  in the caffe directory and it worked fine. Now, when I run the command  in the python environment in the terminal (Ubuntu 14.04), I'm getting the following error:</p>
<pre></pre>
<p>I tried to search my computer for 'caffe.io' but couldn't find any file by that name. Any idea why this error is occurring and how to correct it?</p>
|
[
{
"AnswerId": "30858102",
"CreationDate": "2015-06-16T03:03:14.510",
"ParentId": null,
"OwnerUserId": "4973198",
"Title": null,
"Body": "<p>You need to add Python Caffe to PYTHONPATH. In your case:<br>\n<strong>export PYTHONPATH=$PYTHONPATH:/home/pras/caffe/python</strong></p>\n"
}
] |
30,745,837 | 2 |
<c++><gcc><osx-yosemite><caffe>
|
2015-06-10T01:59:12.720
| 30,751,674 | 1,871,528 |
compiling caffe on Yosemite
|
<p>I'm trying to install caffe on Yosemite, and my C is not the strongest. Here is my error:</p>
<pre></pre>
<p>I'm guessing that the problem is with the compiler, so I installed gcc from brew and tried running it using</p>
<pre></pre>
<p>which still did not help.</p>
<p>Any suggestions?</p>
|
[
{
"AnswerId": "30751674",
"CreationDate": "2015-06-10T08:58:27.990",
"ParentId": null,
"OwnerUserId": "1871528",
"Title": null,
"Body": "<p>its a boost version problem.</p>\n\n<p>If you using brew do the following:</p>\n\n<p><a href=\"http://itinerantbioinformaticist.blogspot.com/2015/05/caffe-incompatible-with-boost-1580.html\" rel=\"noreferrer\">http://itinerantbioinformaticist.blogspot.com/2015/05/caffe-incompatible-with-boost-1580.html</a></p>\n"
},
{
"AnswerId": "31306966",
"CreationDate": "2015-07-09T02:11:57.520",
"ParentId": null,
"OwnerUserId": "3508192",
"Title": null,
"Body": "<p>I had the same problem. As someone else said, it turned out for me to be a problem with boost 1.58.0</p>\n\n<p>I fixed it by doing the following (assuming you have brew installed)</p>\n\n<pre><code> $ cd /usr/local/Library/Formula \n $ cd Library/Formula/\n $ cp boost.rb boost_backup.rb\n $ cp boost-python.rb boost-python_backup.rb\n $ wget https://raw.githubusercontent.com/Homebrew/homebrew/6fd6a9b6b2f56139a44dd689d30b7168ac13effb/Library/Formula/boost.rb\n $ mv boost.rb.1 boost.rb\n $ wget https://raw.githubusercontent.com/Homebrew/homebrew/3141234b3473717e87f3958d4916fe0ada0baba9/Library/Formula/boost-python.rb\n $ mv boost-python.rb.1 boost-python.rb\n $ brew uninstall --force boost\n $ brew install boost\n</code></pre>\n\n<p>After doing this I was able to make all with GPU support no problem.</p>\n\n<p><a href=\"http://playittodeath.ru/how-to-install-caffe-on-mac-os-x-yosemite-10-10-4/\" rel=\"nofollow\">source</a></p>\n"
}
] |
30,761,257 | 1 |
<python><csv><pickle><theano>
|
2015-06-10T15:49:03.727
| null | 4,995,648 |
theano csv to pkl file
|
<p>I am trying to make a pkl file to be loaded into theano from a csv starting point</p>
<pre class="lang-py prettyprint-override"></pre>
<p>When I run the resulting pkl file through Theano (as a DBN or SdA), it pretrains just fine, which makes me think the data is stored correctly. </p>
<p>However when it comes to finetune I get the following error:</p>
<pre>
epoch 1, minibatch 2775/2775, validation error 0.000000 %
Traceback (most recent call last):
File "SdA_custom.py", line 489, in
test_SdA()
File "SdA_custom.py", line 463, in test_SdA
test_losses = test_model()
File "SdA_custom.py", line 321, in test_score
return [test_score_i(i) for i in xrange(n_test_batches)]
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 606, in __call__
storage_map=self.fn.storage_map)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 595, in __call__
outputs = self.fn()
ValueError: Input dimension mis-match. (input[0].shape[0] = 10, input[1].shape[0] = 3)
Apply node that caused the error: Elemwise{neq,no_inplace}(argmax, Subtensor{int64:int64:}.0)
Inputs types: [TensorType(int64, vector), TensorType(int32, vector)]
Inputs shapes: [(10,), (3,)]
Inputs strides: [(8,), (4,)]
Inputs values: ['not shown', array([0, 0, 0], dtype=int32)]
Backtrace when the node is created:
File "/home/dean/Documents/DeepLearningRepo/DeepLearningTutorials-master/code/logistic_sgd.py", line 164, in errors
return T.mean(T.neq(self.y_pred, y))
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
</pre>
<p>10 is my batch size; if I change to a batch size of 1, I get the following:</p>
<pre>
ValueError: Input dimension mis-match. (input[0].shape[0] = 1, input[1].shape[0] = 0)
</pre>
<p>I think I am storing the labels wrong when I make the pkl, but I can't seem to spot what is happening or why changing the batch size alters the error.</p>
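<p>For reference, the pkl is built roughly like this (a sketch of my pipeline; the file names are illustrative):</p>
<pre><code>import cPickle
import gzip
import numpy as np

data = np.loadtxt('data.csv', delimiter=',')  # illustrative file name
x = data[:, :-1].astype('float32')            # features
y = data[:, -1].astype('int32')               # labels in the last column
n = len(y)
train_set = (x[:n // 2], y[:n // 2])
valid_set = (x[n // 2:3 * n // 4], y[n // 2:3 * n // 4])
test_set = (x[3 * n // 4:], y[3 * n // 4:])
with gzip.open('data.pkl.gz', 'wb') as f:
    cPickle.dump((train_set, valid_set, test_set), f,
                 protocol=cPickle.HIGHEST_PROTOCOL)
</code></pre>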
<p>Hope you can help!</p>
|
[
{
"AnswerId": "36026008",
"CreationDate": "2016-03-16T02:53:28.697",
"ParentId": null,
"OwnerUserId": "6051033",
"Title": null,
"Body": "<p>Saw this just now as was looking for similar error I was getting. Posting a reply so that it might help someone looking for similar error. For me the error resolved when I changed n_out to 2 from 1 in dbn_test() parameter list. n_out was the number of labels rather than number of output layers.</p>\n"
}
] |
30,761,433 | 3 |
<python><c++><deep-learning><caffe>
|
2015-06-10T15:56:36.377
| 30,763,062 | 1,348,187 |
[Caffe]: Check failed: ShapeEquals(proto) shape mismatch (reshape not set)
|
<p>I have this error and I have tried searching the Internet, but found nothing conclusive.</p>
<p>I trained my net with Caffe successfully, with around 82% accuracy.</p>
<p>Now I'm trying it on an image with this code:</p>
<p></p>
<p>yes, my images are 64x64, </p>
<p>these are the last lines I'm getting:</p>
<blockquote>
<p>I0610 15:33:44.868100 28657 net.cpp:194] conv3 does not need backward computation.
I0610 15:33:44.868110 28657 net.cpp:194] norm2 does not need backward computation.
I0610 15:33:44.868120 28657 net.cpp:194] pool2 does not need backward computation.
I0610 15:33:44.868130 28657 net.cpp:194] relu2 does not need backward computation.
I0610 15:33:44.868142 28657 net.cpp:194] conv2 does not need backward computation.
I0610 15:33:44.868152 28657 net.cpp:194] norm1 does not need backward computation.
I0610 15:33:44.868162 28657 net.cpp:194] pool1 does not need backward computation.
I0610 15:33:44.868173 28657 net.cpp:194] relu1 does not need backward computation.
I0610 15:33:44.868182 28657 net.cpp:194] conv1 does not need backward computation.
I0610 15:33:44.868192 28657 net.cpp:235] This network produces output fc8_pascal
I0610 15:33:44.868214 28657 net.cpp:482] Collecting Learning Rate and Weight Decay.
I0610 15:33:44.868238 28657 net.cpp:247] Network initialization done.
I0610 15:33:44.868249 28657 net.cpp:248] Memory required for data: 3136120
F0610 15:33:45.025965 28657 blob.cpp:458] Check failed: ShapeEquals(proto) shape mismatch (reshape not set)
<strong>* Check failure stack trace: *</strong>
Aborted (core dumped)</p>
</blockquote>
<p>I've tried not setting the --mean_file and other things, but I'm out of ideas. </p>
<p>This is my imagenet_deploy.prototxt which I've modified in some parameters to debug, but didn't work anything.</p>
<pre></pre>
<p>Could anyone give me a clue?
Thank you very much.</p>
<hr>
<p>The same happens with C++ and the <strong>classification bin</strong> they provide:</p>
<blockquote>
<p>F0610 18:06:14.975601 7906 blob.cpp:455] Check failed: ShapeEquals(proto) shape mismatch (reshape not set)
<strong>* Check failure stack trace: *</strong>
@ 0x7f0e3c50761c google::LogMessage::Fail()
@ 0x7f0e3c507568 google::LogMessage::SendToLog()
@ 0x7f0e3c506f6a google::LogMessage::Flush()
@ 0x7f0e3c509f01 google::LogMessageFatal::~LogMessageFatal()
@ 0x7f0e3c964a80 caffe::Blob<>::FromProto()
@ 0x7f0e3c89576e caffe::Net<>::CopyTrainedLayersFrom()
@ 0x7f0e3c8a10d2 caffe::Net<>::CopyTrainedLayersFrom()
@ 0x406c32 Classifier::Classifier()
@ 0x403d2b main
@ 0x7f0e3b124ec5 (unknown)
@ 0x4041ce (unknown)
Aborted (core dumped)</p>
</blockquote>
|
[
{
"AnswerId": "31251378",
"CreationDate": "2015-07-06T16:48:53.573",
"ParentId": null,
"OwnerUserId": "1865831",
"Title": null,
"Body": "<p>I just had the same error. In my case my output parameters of the final layer were incorrect: Switching datasets, I changed the number of classes in the train.prototxt and failed to do so in test.prototxt (or deploy.prototxt). Correcting this mistake solved the problem for me.</p>\n"
},
{
"AnswerId": "30763062",
"CreationDate": "2015-06-10T17:16:09.263",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p>Let me confirm whether the basic steps are correct.</p>\n\n<pre><code>input_dim: 10\ninput_dim: 3\ninput_dim: 64\ninput_dim: 64\n</code></pre>\n\n<p>Have you tried changing the first parameter to 1 as you are only passing a single image.</p>\n\n<p>The above mentioned error occurs when the dimensions of the top or bottom blobs are not correct. And there is no where that could go wrong other than the input blobs.</p>\n\n<p><strong>Edit 2:</strong></p>\n\n<p><code>ShapeEquals(proto) shape mismatch (reshape not set)</code> error message occurs when 'reshape' parameter is set to false for the <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/blob.cpp#L455\" rel=\"nofollow\"><em>fromproto</em> function call</a>.</p>\n\n<p>I did a quick search for the fromproto function call within the library as shown <a href=\"https://github.com/BVLC/caffe/search?utf8=%E2%9C%93&q=FromProto\" rel=\"nofollow\">here</a>. Other than 'CopyTrainedLayersFrom' function no other function actually set the above mentioned parameter as <code>false</code>.</p>\n\n<p>This is actually confusing. Two methods that I would suggest is:</p>\n\n<ol>\n<li>Check whether the caffe source code is updated from the repository.</li>\n<li>Try running the <em>test</em> portion of <em>caffe.bin</em> executable found in /build/tools/.</li>\n</ol>\n"
},
{
"AnswerId": "32349239",
"CreationDate": "2015-09-02T09:19:09.917",
"ParentId": null,
"OwnerUserId": "1754513",
"Title": null,
"Body": "<p>In my case, the size of the kernel in the second convolutional layer in my solver file differed from the one in the train file. Changing the size in the solver file solved the problem.</p>\n"
}
] |
30,762,476 | 2 |
<file-io><lua><torch>
|
2015-06-10T16:46:07.373
| null | 3,113,501 |
How to read a Bunch of files in a directory in lua
|
<p>I have a path (as a string) to a directory. In that directory are a bunch of text files. I want to go into that directory, open each text file, and read its data.</p>
<p>I've tried</p>
<pre></pre>
<p>I get the error "nil Is a directory"</p>
<p>I've tried:</p>
<pre></pre>
<p>I get the error: "Permission denied"</p>
<p>Is it just me, or is it a lot harder than it should be to do basic file I/O in Lua?</p>
|
[
{
"AnswerId": "30762613",
"CreationDate": "2015-06-10T16:52:56.807",
"ParentId": null,
"OwnerUserId": "258523",
"Title": null,
"Body": "<p>A directory isn't a file. You can't just open it.</p>\n\n<p>And yes, lua itself has (intentionally) limited functionality.</p>\n\n<p>You can use <a href=\"https://keplerproject.github.io/luafilesystem/\" rel=\"nofollow\">luafilesystem</a> or <a href=\"https://github.com/luaposix/luaposix\" rel=\"nofollow\">luaposix</a> and similar modules to get more features in this area.</p>\n"
},
{
"AnswerId": "30762936",
"CreationDate": "2015-06-10T17:09:05.360",
"ParentId": null,
"OwnerUserId": "343123",
"Title": null,
"Body": "<p>You can also use the following script to list the names of the files in a given directory (assuming Unix/Posix):</p>\n\n<pre><code>dirname = '.'\nf = io.popen('ls ' .. dirname)\nfor name in f:lines() do print(name) end\n</code></pre>\n"
}
] |
30,766,364 | 1 |
<python><indexing><theano>
|
2015-06-10T20:10:43.213
| 30,770,321 | 3,112,506 |
Remove inf from Theano array
|
<p>Supposing I have a vector in Theano and some of the elements are <code>inf</code>, how do I remove them? Consider the following example:</p>
<pre></pre>
<p>According to the Theano documentation, this should remove the elements via indexing. However, this is not the case: the <code>inf</code> entries are still returned. </p>
<p>How can I do this so that the compiled function returns only the finite entries?</p>
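<p>In NumPy terms, what I want is simply the equivalent of:</p>
<pre><code>import numpy as np

v = np.array([1.0, 2.0, np.inf])
v[~np.isinf(v)]  # -> array([ 1.,  2.])
</code></pre>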
|
[
{
"AnswerId": "30770321",
"CreationDate": "2015-06-11T01:58:43.443",
"ParentId": null,
"OwnerUserId": "4013571",
"Title": null,
"Body": "<p>I found an awkward workaround</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nimport numpy as np\n\nvec = T.vector()\ncompare = T.isinf(vec)\nout = vec[(1-compare).nonzero()]\n\nv = [ 1., 1., 1., 1., np.inf, 3., 4., 5., 6., np.inf]\nv = np.asarray(v)\n\nout.eval({var:v})\narray([ 1., 1., 1., 1., 3., 4., 5., 6.])\n</code></pre>\n\n<p>For your example:</p>\n\n<pre><code>fin = vec[(1-T.isinf(vec)).nonzero()]\nf = theano.function([vec], fin)\n\nf([1,2,np.inf])\narray([ 1., 2.])\n</code></pre>\n"
}
] |
30,769,048 | 3 |
<python><numpy><anaconda><caffe><lmdb>
|
2015-06-10T23:28:40.987
| 30,788,664 | 4,561,745 |
Error in creating LMDB database file in Python for Caffe
|
<p>I'm trying to create an LMDB database file in Python to be used with Caffe according to <a href="http://deepdish.io/2015/04/28/creating-lmdb-in-python/" rel="noreferrer">this</a> tutorial. The initial commands run perfectly fine. However, when I try to run <code>import lmdb</code>, I'm getting the following errors:</p>
<pre></pre>
<p>I'm running Python 2.7.9 through Anaconda 2.2.0 (64-bit) on Ubuntu 14.04. While installing the dependencies for Caffe according to <a href="http://caffe.berkeleyvision.org/install_apt.html" rel="noreferrer">this</a> page, I've already installed the lmdb package through <code>sudo apt-get install liblmdb-dev</code>.</p>
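<p>For context, the tutorial code I am trying to run is essentially the following (a sketch; the array shapes are illustrative):</p>
<pre><code>import numpy as np
import lmdb
import caffe

N = 10
X = np.zeros((N, 3, 32, 32), dtype=np.uint8)  # dummy images
y = np.zeros(N, dtype=np.int64)               # dummy labels

env = lmdb.open('mylmdb', map_size=X.nbytes * 10)
with env.begin(write=True) as txn:
    for i in range(N):
        datum = caffe.proto.caffe_pb2.Datum()
        datum.channels, datum.height, datum.width = X[i].shape
        datum.data = X[i].tobytes()
        datum.label = int(y[i])
        txn.put('{:08}'.format(i), datum.SerializeToString())
</code></pre>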
<p>Any ideas why this error might be occurring?</p>
|
[
{
"AnswerId": "37486716",
"CreationDate": "2016-05-27T15:02:36.787",
"ParentId": null,
"OwnerUserId": "4624901",
"Title": null,
"Body": "<p>If you're using Anaconda, then this can solve your problem (it worked for me):</p>\n\n<pre><code>conda install -c https://conda.binstar.org/dougal lmdb\n</code></pre>\n"
},
{
"AnswerId": "52155660",
"CreationDate": "2018-09-03T20:03:43.910",
"ParentId": null,
"OwnerUserId": "9954163",
"Title": null,
"Body": "<p>For Anaconda users, installing <code>python-lmdb</code> package from <code>conda-forge</code> should fix the <code>lmdb</code> import error:</p>\n\n<pre><code>conda install -c conda-forge python-lmdb\n</code></pre>\n\n<p>This was tested on <code>conda 4.5.11</code> on an <code>lxc</code>-containerized system running <code>Ubuntu 18.04</code>. </p>\n\n<p>Note that there is a <code>conda</code> package named <code>lmdb</code> (without <code>python-</code>), installable via:</p>\n\n<pre><code>conda install -c conda-forge lmdb\n</code></pre>\n\n<p>that does not fix the import error.</p>\n"
},
{
"AnswerId": "30788664",
"CreationDate": "2015-06-11T18:28:08.230",
"ParentId": null,
"OwnerUserId": "4561745",
"Title": null,
"Body": "<p>Well, the <code>apt-get install liblmdb-dev</code> might work with bash (in the terminal) but apparently it doesn't work with Anaconda Python. I figured Anaconda Python might require it's own module for lmdb and I followed <a href=\"https://lmdb.readthedocs.org/en/release/#installation-unix\" rel=\"noreferrer\">this</a> link. The Python installation for lmdb module can be performed by running the command <code>pip install lmdb</code> in the terminal. And then <code>import lmdb</code> in Python works like a charm!</p>\n\n<p>The above installation commands may require sudo.</p>\n"
}
] |
30,784,588 | 1 |
<python><debugging><neural-network><theano>
|
2015-06-11T15:05:56.003
| 30,792,985 | 2,312,926 |
Print output of a Theano network
|
<p>I am sorry, this is a very newbie question... I trained a neural network with Theano and now I want to see what it outputs for a certain input.</p>
<p>So I can say:</p>
<pre></pre>
<p>where output_layer is my network.
Now, the last layer happens to be a softmax, so if I say:</p>
<pre></pre>
<p>I get</p>
<pre><code>Softmax.0
</code></pre>
<p>I think I see why I get this (namely, because the output is a symbolic tensor variable), but I don't see how to view the actual values.</p>
<p>And just so you know, I did read <a href="https://stackoverflow.com/questions/17445280/theano-print-value-of-tensorvariable">this post</a> and also the <a href="http://deeplearning.net/software/theano/library/printing.html" rel="nofollow noreferrer">documentation on printing</a> and <a href="http://deeplearning.net/software/theano/tutorial/debug_faq.html#how-do-i-print-an-intermediate-value-in-a-function-method" rel="nofollow noreferrer">FAQ</a>, which I am also not fully grasping, I am afraid...</p>
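<p>For concreteness, this is the kind of inspection I am after (a toy sketch, not my actual network):</p>
<pre><code>import theano.tensor as T

x = T.matrix('x')
y = T.nnet.softmax(x)
print y.eval({x: [[1.0, 2.0, 3.0]]})  # prints the actual softmax values
</code></pre>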
|
[
{
"AnswerId": "30792985",
"CreationDate": "2015-06-11T23:13:43.360",
"ParentId": null,
"OwnerUserId": "4013571",
"Title": null,
"Body": "<ol>\n<li>Use <code>.eval()</code> to evaluate the symbolic expression</li>\n<li>Use <a href=\"http://deeplearning.net/software/theano/tutorial/debug_faq.html#using-test-values\" rel=\"nofollow\">Test Values</a></li>\n</ol>\n"
}
] |
30,785,836 | 1 |
<python><nan><theano><gradient-descent>
|
2015-06-11T16:00:45.620
| null | 4,999,948 |
Theano stochastic gradient descent NaN output
|
<p>I am using Theano stochastic gradient descent to solve a minimization problem. When running my code, the first iterations seem to work, but after a while, all of a sudden, the optimized parameter (eta) becomes NaN (as do the derivatives g_eta). It seems to be a Theano technical issue more than a bug in my code, since I have checked the code in several different ways. </p>
<p>Anyone has an idea of which could be the reason? My code is the following:</p>
<pre></pre>
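<p>The core of the update is the usual SGD step (a self-contained sketch with a stand-in cost; my real cost function is more complex):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

eta = theano.shared(np.zeros(5, dtype=theano.config.floatX), name='eta')
cost = T.sum(eta ** 2)                          # stand-in for my real cost
g_eta = T.grad(cost, eta)                       # gradient w.r.t. eta
learning_rate = 0.01
updates = [(eta, eta - learning_rate * g_eta)]  # vanilla SGD step
train = theano.function(inputs=[], outputs=cost, updates=updates)
</code></pre>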
<p>Thank you!</p>
|
[
{
"AnswerId": "30793051",
"CreationDate": "2015-06-11T23:21:56.690",
"ParentId": null,
"OwnerUserId": "4013571",
"Title": null,
"Body": "<p>The fact that you are getting the same problem but more slowly with a slow learning rate suggests that you possible have an instability in your function which blows up near where you start SGD.</p>\n\n<ol>\n<li>Try different starting values</li>\n<li>Adjust your cost function to penalise the nasty area that is blowing up</li>\n<li>Try a different gradient descent method</li>\n</ol>\n"
}
] |
30,792,665 | 1 |
<lua><neural-network><backpropagation><training-data><torch>
|
2015-06-11T22:43:23.257
| 30,818,191 | 1,082,019 |
Torch Lua: Why is my gradient descent not optimizing the error?
|
<p>I've been trying to implement a siamese neural network in Torch/Lua, as <a href="https://stackoverflow.com/questions/30581199/torch-lua-is-this-a-good-implementation-of-a-siamese-neural-network">I already explained here</a>.
Now I have my first implementation, which I believe should be correct.</p>
<p>Unfortunately, I'm facing a problem: during training back-propagation, the gradient descent does not update the error. That is, it always computes the same value (that is +1 or -1), without changing it.
In a correct implementation, the error should go from +1 to -1 or from -1 to +1. In my case, it's just stuck at the upper value and nothing changes.</p>
<p>Why? I'm really looking for someone that could give me some hints.</p>
<p>Here's my working code, that you might try to run:</p>
<pre></pre>
<p>The question is: <strong>why is the <em>predictionValue</em> variable always the same? Why doesn't it get updated?</strong></p>
<p><strong>EDIT</strong>: I now realized that the problem was that I was using only 1 output layer dimension. I moved it to 6, but unfortunately I have a new problem. The gradient is not updating in the right direction.
For example, here's what happens by using my previous code with output_layer_number=6</p>
<pre></pre>
<p><strong>That is, the predictionValue never goes towards -1. Why?</strong></p>
|
[
{
"AnswerId": "30818191",
"CreationDate": "2015-06-13T11:34:38.207",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<blockquote>\n <p>why the predictionValue variable is always the same? Why doesn't it get updates?</p>\n</blockquote>\n\n<p>First of all you should perform the backward propagation only if <code>predictionValue*targetValue < 1</code> to make sure you back-propagate only if the pairs need to be pushed together (<code>targetValue = 1</code>) or pulled apart (<code>targetValue = -1</code>).</p>\n\n<p><em>See also this torch/nn <a href=\"https://github.com/torch/nn/blob/3bbed8d/doc/table.md#cosinedistance\" rel=\"nofollow noreferrer\">official example</a> that illustrates this.</em></p>\n\n<p>That being said you have <strong>only 1 output unit</strong> (<code>output_layer_number = 1</code>). That means that each branch of your siamese network produces a single scalar, resp. <code>u</code> and <code>v</code>. This pair of scalars are then compared by the cosine distance:</p>\n\n<pre><code>C(u,v) = cosine(u, v) = (u / |u|) x (v / |v|)\n</code></pre>\n\n<p><em>Note: this criterion can only take two values here: 1 or -1 (see below in blue).</em></p>\n\n<p>When it is time to back-propagate you compute the derivatives of this criterion with respect to the inputs, i.e. <code>dC/du</code> and <code>dC/dv</code>. But these <strong>derivatives are null</strong> and undefined at 0 (see below in red):</p>\n\n<p><img src=\"https://i.stack.imgur.com/Guhom.png\" alt=\"enter image description here\"></p>\n\n<p>This is why the back-propagation does nothing here, i.e. it remains static (and you can verify this in practice by printing out the norms of these derivatives).</p>\n"
}
] |
30,797,301 | 2 |
<python-2.7><theano>
|
2015-06-12T07:11:37.340
| null | 4,680,426 |
Fail importing theano
|
<p>I could import theano fine yesterday. Today, when I want to import theano, it says "No module named theano". </p>
<p>The strange thing is that numpy and scipy can still be imported, and they are installed in the same location as theano. At first I thought theano itself might be broken, but even after uninstalling and reinstalling it, it still doesn't work.</p>
<p>I upgraded my Mac to the "El Capitan" preview today. Could there be any relation between the two?</p>
|
[
{
"AnswerId": "31890735",
"CreationDate": "2015-08-08T07:23:02.170",
"ParentId": null,
"OwnerUserId": "3497273",
"Title": null,
"Body": "<p>In a python shell, type:</p>\n\n<pre><code>>>> pip install theano\n</code></pre>\n\n<p>If you don't have pip, download <a href=\"https://bootstrap.pypa.io/get-pip.py\" rel=\"nofollow\">getpip.py</a> and execute it (outside of the python shell).</p>\n\n<pre><code>python getpip.py\n</code></pre>\n\n<p>Then,</p>\n\n<pre><code>python\n>>> pip install theano\n</code></pre>\n"
},
{
"AnswerId": "31902313",
"CreationDate": "2015-08-09T08:24:24.407",
"ParentId": null,
"OwnerUserId": "1196752",
"Title": null,
"Body": "<p>Are you using something like <a href=\"https://store.continuum.io/cshop/anaconda/\" rel=\"nofollow\">Anaconda</a> or non-stock python?</p>\n\n<p>You might want to check the <code>~/.bash_profile</code> and add:</p>\n\n<pre><code>export PATH = /path/to/anaconda/bin:$PATH\n</code></pre>\n\n<p>If you have multiple installations of python, maybe you're executing the one that doesn't have theano installed.</p>\n"
}
] |
30,808,735 | 4 |
<python><python-2.7><caffe>
|
2015-06-12T17:15:04.243
| 30,809,008 | 2,938,494 |
Error when using classify in caffe
|
<p>I am using Caffe in Python for classification. I got the code from <a href="http://www.openu.ac.il/home/hassner/projects/cnn_agegender/">here</a>. In it, I just use simple code such as</p>
<pre></pre>
<p>However, I get an error such as</p>
<pre></pre>
<p>Could you help me to resolve it? Thanks</p>
|
[
{
"AnswerId": "30809008",
"CreationDate": "2015-06-12T17:31:47.217",
"ParentId": null,
"OwnerUserId": "3755376",
"Title": null,
"Body": "<p>Let go to line 253-254 in caffe/python/caffe/io.py \nReplace </p>\n\n<pre><code>if ms != self.inputs[in_][1:]:\n raise ValueError('Mean shape incompatible with input shape.')\n</code></pre>\n\n<p>By </p>\n\n<pre><code>if ms != self.inputs[in_][1:]:\n print(self.inputs[in_])\n in_shape = self.inputs[in_][1:]\n m_min, m_max = mean.min(), mean.max()\n normal_mean = (mean - m_min) / (m_max - m_min)\n mean = resize_image(normal_mean.transpose((1,2,0)),in_shape[1:]).transpose((2,0,1)) * (m_max - m_min) + m_min\n #raise ValueError('Mean shape incompatible with input shape.')\n</code></pre>\n\n<p>Rebuild. Hope it help</p>\n"
},
{
"AnswerId": "47394617",
"CreationDate": "2017-11-20T14:43:26.910",
"ParentId": null,
"OwnerUserId": "1947377",
"Title": null,
"Body": "<p>Edit deploy_gender.prototxt and set:\ninput_dim: 256\ninput_dim: 256</p>\n\n<p>Don't know why it was written wrong...</p>\n"
},
{
"AnswerId": "39298492",
"CreationDate": "2016-09-02T18:11:30.457",
"ParentId": null,
"OwnerUserId": "6788794",
"Title": null,
"Body": "<p>I was having the same problem, based in the imagenet web demo I modified the script using this way to load the mean file in line 95</p>\n\n<p><code>mean = np.load(args.mean_file).mean(1).mean(1)</code></p>\n"
},
{
"AnswerId": "45254585",
"CreationDate": "2017-07-22T12:31:23.083",
"ParentId": null,
"OwnerUserId": "2496554",
"Title": null,
"Body": "<p>I am pretty scared to rebuild the code as caffe installation did not come easy for me.\nBut to fix, the solution to resize mean require in_shape (user8264's response), which is set internally in caffe/classifier.py</p>\n\n<p>Anyway, I debugged and found the value for in_shape = (3, 227, 227) for age_net.caffemodel</p>\n\n<p>So the model used for age and gender prediction would the following change:</p>\n\n<pre><code>age_net_pretrained='./age_net.caffemodel'\nage_net_model_file='./deploy_age.prototxt'\nage_net = caffe.Classifier(age_net_model_file, age_net_pretrained,\n mean=mean,\n channel_swap=(2,1,0),\n raw_scale=255,\n image_dims=(227, 227))\n</code></pre>\n\n<p>But mean needs to be modified first: </p>\n\n<pre><code>m_min, m_max = mean.min(), mean.max()\nnormal_mean = (mean - m_min) / (m_max - m_min)\nin_shape=(227, 227)\nmean = caffe.io.resize_image(normal_mean.transpose((1,2,0)),in_shape)\n .transpose((2,0,1)) * (m_max - m_min) + m_min\n</code></pre>\n\n<p>This will get rid of \"ValueError: Mean shape incompatible with input shape\". But I am not sure about the accuracy though. Apparently, for me skipping mean parameter gave better age prediction :)</p>\n"
}
] |
30,812,280 | 2 |
<python><deep-learning><caffe>
|
2015-06-12T21:15:22.890
| 31,043,900 | 2,824,962 |
Caffe net.predict() outputs random results (GoogleNet)
|
<p>I used pretrained GoogleNet from <a href="https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet">https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet</a> and finetuned it with my own data (~ 100k images, 101 classes).
After one day training I achieved 62% in top-1 and 85% in top-5 classification and try to use this network to predict several images. </p>
<p>I just followed the example from <a href="https://github.com/BVLC/caffe/blob/master/examples/classification.ipynb">https://github.com/BVLC/caffe/blob/master/examples/classification.ipynb</a>,</p>
<p>Here is my Python code:</p>
<pre></pre>
<p>In my deploy.prototxt I changed the last layer only to predict my 101 classes.</p>
<pre></pre>
<p>Here is the distribution of softmax output:</p>
<pre></pre>
<p>It just seems like a random distribution that makes no sense.</p>
<p>Thank you for any help or hint and best regards,
Alex</p>
|
[
{
"AnswerId": "30826566",
"CreationDate": "2015-06-14T06:28:56.253",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>Please check the image transformation you are using - is it the same in training and test-time?</p>\n\n<p>AFAIK bvlc_googlenet subtract image mean with one value per channel, while your python <code>classifier</code> uses different mean <code>mean=np.load('ilsvrc_2012_mean.npy').mean(1).mean(1)</code>. This might cause the net to be unable to classify your inputs correctly.</p>\n"
},
{
"AnswerId": "31043900",
"CreationDate": "2015-06-25T07:28:50.537",
"ParentId": null,
"OwnerUserId": "2824962",
"Title": null,
"Body": "<p>The solution is really simple: I just forgot to rename the last layer in deploy file:</p>\n\n<pre><code>layer {\n name: \"loss3/classifier\"\n type: \"InnerProduct\"\n bottom: \"pool5/7x7_s1\"\n top: \"loss3/classifier\"\n param {\n lr_mult: 1\n decay_mult: 1\n }\n</code></pre>\n"
}
] |
30,815,035 | 1 |
<python><python-2.7><opencv><caffe>
|
2015-06-13T04:20:42.443
| null | 2,938,494 |
How to convert Mat from opencv to caffe format
|
<p>I am using OpenCV to crop a face from my camera feed, and then I use Caffe to predict whether that image belongs to a male or a female. I have original code that loads a static image, but I want to use the camera image instead. This is the original Caffe code:</p>
<pre></pre>
<p>Now, I use OpenCV to capture a frame and call the predict method:</p>
<pre></pre>
<p>After obtaining resized_image, I convert it to the Caffe input format with a function like this:</p>
<pre></pre>
<p>However, when I call that function, I don't know what <code>self</code> should be. Could you help me fix it?</p>
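<p>What I am effectively after is a standalone conversion along these lines (a sketch, without <code>self</code>; the sizes are illustrative):</p>
<pre><code>import cv2
import numpy as np

def mat_to_caffe_input(img_bgr, size=(227, 227)):
    """Convert an OpenCV BGR image to a float32 CHW array for Caffe."""
    img = cv2.resize(img_bgr, size).astype(np.float32)
    img = img.transpose((2, 0, 1))  # HWC -> CHW (channels first)
    return img
</code></pre>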
<p>Thank you for your help!</p>
|
[
{
"AnswerId": "30858059",
"CreationDate": "2015-06-16T02:56:35.907",
"ParentId": null,
"OwnerUserId": "4973198",
"Title": null,
"Body": "<p>You can use <strong>CVMatToDatum</strong> function in caffe.io. More info here: <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/util/io.cpp\" rel=\"nofollow\">https://github.com/BVLC/caffe/blob/master/src/caffe/util/io.cpp</a></p>\n\n<p>Edit: I think you can use <strong>array_to_datum</strong> from\n<a href=\"https://github.com/BVLC/caffe/blob/master/python/caffe/io.py\" rel=\"nofollow\">https://github.com/BVLC/caffe/blob/master/python/caffe/io.py</a>, \nthough it might be necessary to convert Mat to ndarray first</p>\n"
}
] |
30,822,009 | 2 |
<python><computer-vision><neural-network><deep-learning><caffe>
|
2015-06-13T18:16:28.797
| 32,440,865 | 2,938,494 |
How to speed up caffe classifier in python
|
<p>I am using Python with the Caffe classifier. I get images from my camera and perform prediction against the training set. It works well, but the problem is that it is very slow: I get only about 4 frames/second. Could you suggest some way to improve the computational time of my code?
The problem can be explained as follows. I have to load a network model whose size is about 80MB with the following code:</p>
<pre><code>age_net = caffe.Classifier(age_net_model_file, age_net_pretrained,
                           mean=mean,
                           channel_swap=(2, 1, 0),
                           raw_scale=255,
                           image_dims=(256, 256))
</code></pre>
<p>And for each input image, I call the predict function:</p>
<pre><code>prediction = age_net.predict([caffe_input])
</code></pre>
<p>I think that because the network is very large, the predict function takes a long time per image; that seems to be where the slowness comes from.<br>
This is my full reference code, with my changes: </p>
<pre></pre>
|
[
{
"AnswerId": "45878676",
"CreationDate": "2017-08-25T09:43:52.343",
"ParentId": null,
"OwnerUserId": "4265775",
"Title": null,
"Body": "<p>You can also try <a href=\"https://arxiv.org/abs/1707.06168\" rel=\"nofollow noreferrer\">channel pruning</a> your network. It's an algorithm that effectively prune channels in each layer, which could speed up network 2-5x. The github address is : <a href=\"https://github.com/yihui-he/channel-pruning\" rel=\"nofollow noreferrer\">https://github.com/yihui-he/channel-pruning</a></p>\n"
},
{
"AnswerId": "32440865",
"CreationDate": "2015-09-07T14:19:58.660",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>Try calling <code>age_net.predict([caffe_input])</code> with <a href=\"https://github.com/BVLC/caffe/blob/master/python/caffe/classifier.py#L47\" rel=\"nofollow\"><code>oversmaple=False</code></a>:</p>\n\n<pre><code>prediction = age_net.predict([caffe_input], oversample=False)\n</code></pre>\n\n<p>The default behavior of <code>predict</code> is to create 10, slightly different, crops of the input image and feed them to the network to classify, by disabling this option you should get a x10 speedup.</p>\n"
}
] |
30,828,095 | 1 |
<cuda><theano>
|
2015-06-14T10:03:59.320
| 30,858,008 | 5,008,021 |
MAC getting Theano to use the GPU
|
<p>I am having quite a bit of trouble setting up Theano to work with my graphics card on a mac, I really hope you can give me some help.</p>
<p>WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu0 is not available (error: Unable to get the number of gpus available: CUDA driver version is insufficient for CUDA runtime version)</p>
|
[
{
"AnswerId": "30858008",
"CreationDate": "2015-06-16T02:48:22.827",
"ParentId": null,
"OwnerUserId": "4013571",
"Title": null,
"Body": "<p>You need an NVIDIA graphics card to use CUDA as CUDA is an NVIDIA product.</p>\n\n<p>To check if your mac has an NVIDIA graphics card:</p>\n\n<ul>\n<li>Click on the in the top left of the screen</li>\n<li>About This Mac</li>\n<li>Check the graphics card under Displays...</li>\n<li>If like me you don't see NVIDIA or GeForce anywhere, you can't use CUDA. </li>\n<li>See <a href=\"https://stackoverflow.com/a/30701002/4013571\">this post</a> about alternatives for non NVIDIA for deep learning...</li>\n</ul>\n\n<p><img src=\"https://i.stack.imgur.com/6WIeU.png\" alt=\"Non Nvidia graphics card\"></p>\n"
}
] |
30,830,987 | 1 |
<python><python-2.7><face-recognition><caffe>
|
2015-06-14T15:12:37.140
| 30,966,717 | 2,938,494 |
How to create a caffemodel file from training images and their labels?
|
<p>I am working on age classification based on the open-source project <a href="http://www.openu.ac.il/home/hassner/projects/cnn_agegender/" rel="nofollow">here</a>.
The Python code has </p>
<pre></pre>
<p>The file is shown below. One file is still missing: the trained caffemodel. The author provided it with the source code, but I would like to create it again based on my own face database. Do you have any tutorial, or some way to create it? Assume that I have an image folder that includes 100 images, divided into age groups, such as:</p>
<pre></pre>
<p>This is the prototxt file. Thanks in advance.</p>
<pre></pre>
|
[
{
"AnswerId": "30966717",
"CreationDate": "2015-06-21T16:23:48.903",
"ParentId": null,
"OwnerUserId": "1899150",
"Title": null,
"Body": "<p>To get a caffemodel you need to train the network. That prototxt file is only to deploy the model and cannot be used to train it. </p>\n\n<p>You need to add a data layer that points to your database. To use a list of files as you mention, the source of the layer should be HDF5. You will probably want to add a transform_param with the mean value. The image files can be replaced by a LMDB or LevelDB database for efficiency purposes.</p>\n\n<p>At the end of the network you will have to substitute the 'prob' layer with a 'loss' layer. Something like this:</p>\n\n<p>layers {\n name: \"loss\"\n type: SoftmaxWithLoss\n bottom: \"fc8\"\n top: \"loss\"\n}</p>\n\n<p>The layer catalogue can be found here:</p>\n\n<p><a href=\"http://caffe.berkeleyvision.org/tutorial/layers.html\" rel=\"nofollow\">http://caffe.berkeleyvision.org/tutorial/layers.html</a></p>\n\n<p>Or, as your network is a well known one... just look at this tutorial :P.</p>\n\n<p><a href=\"http://caffe.berkeleyvision.org/gathered/examples/imagenet.html\" rel=\"nofollow\">http://caffe.berkeleyvision.org/gathered/examples/imagenet.html</a></p>\n\n<p>The correct prototxt file for training is included in caffe ('train_val.prototxt').</p>\n"
}
] |
30,836,029 | 1 |
<cmake><caffe>
|
2015-06-15T00:44:51.917
| null | 1,282,043 |
installing caffe cmake error
|
<p>I've followed all the steps to install the caffe dependencies. I've bumped into the following error:</p>
<pre></pre>
<p>I'm not really sure what's going on. It seems as if cmake hasn't been installed properly? What's curious is that I have two cmake folders: one in my caffe folder and the other inside an extra folder I called "dependencies", in which I am compiling all the other dependencies for caffe:</p>
<p>doing <code>ls</code> from my /caffe/ folder:</p>
<pre></pre>
<p>doing ls from my /caffe/dependencies/ folder:</p>
<pre></pre>
<p>Was I supposed to install all the dependencies inside my /caffe/ folder?</p>
|
[
{
"AnswerId": "35313238",
"CreationDate": "2016-02-10T10:55:29.903",
"ParentId": null,
"OwnerUserId": "2324271",
"Title": null,
"Body": "<p>I suggest you should use the <code>cmake-gui</code> for <code>making</code> Caffe. There you can set the appropriate paths easily. No need to install all the dependencies in the Caffe root directory. You may install them in your system home directory.</p>\n"
}
] |
30,850,543 | 1 |
<python><pip><ubuntu-14.04><caffe><lmdb>
|
2015-06-15T16:43:33.190
| 30,853,705 | 4,561,745 |
Error while installing deepdish
|
<p>I'm trying to create an LMDB database file to be used with Caffe according to <a href="http://deepdish.io/" rel="nofollow">this</a> tutorial on an Ubuntu 14.04 machine using Anaconda Python 2.7.9. However, when I do <code>pip install deepdish</code>, I'm getting the following error:</p>
<pre></pre>
<p>Any ideas why this error might be occurring and how to go about correcting it? Any help is much appreciated. Thank you.</p>
|
[
{
"AnswerId": "30853705",
"CreationDate": "2015-06-15T19:44:16.920",
"ParentId": null,
"OwnerUserId": "4561745",
"Title": null,
"Body": "<p>Can't seem to figure out why the error occurred, but after struggling for a while, I did the following and it seemed to work:</p>\n\n<ol>\n<li>Download the zip file from Github. </li>\n<li>Unzip the file after navigating to the directory where you've downloaded the file using the command <code>unzip deepdish-master.zip</code>.</li>\n<li>Navigate to the unzipped folder: <code>cd deepdish-master</code>.</li>\n<li>Make the setup.py file executable: <code>chmod +x setup.py</code>.</li>\n<li>Run the file with the 'install' option: <code>./setup.py install</code>.</li>\n<li>If you want, you may delete the zip folder and the unzipped folder.</li>\n</ol>\n\n<p>Now if you go into the python environment in the terminal and do <code>import deepdish as dd</code>, it works like a charm!</p>\n"
}
] |
30,860,533 | 2 |
<python><closures><theano>
|
2015-06-16T06:46:10.897
| null | 4,491,330 |
What's the advantage of using this piece of code in python ? (set i = [0] first, and later i[0] += size)
|
<p>I'm reading a Python script that uses the <a href="http://deeplearning.net/software/theano/" rel="nofollow">Theano</a> module. I'm confused by the following piece of code.</p>
<pre><code>i = [0]
def read_param(size):
    ret = lparam[i[0]:i[0] + size]
    i[0] += size
    return ret
</code></pre>
<p>The function read_param is used in the functions below. </p>
<pre></pre>
<p>I was told that it uses the concept of a "closure" and is advantageous for writing and reading. But I'm not sure why, or how it works. </p>
|
[
{
"AnswerId": "30860625",
"CreationDate": "2015-06-16T06:50:32.440",
"ParentId": null,
"OwnerUserId": "248296",
"Title": null,
"Body": "<p>My only guess is that the author didn't want to define <code>i</code> as a global variable:</p>\n\n<pre><code>i = 0\ndef read_param(size):\n global i\n ret = lparam[i:i+size]\n i += size\n return ret\n</code></pre>\n\n<p>I don't know why (maybe the author doesn't know Python well).</p>\n\n<p>There is no advantage in using <code>i[0]</code>, only disadvantages -- bad readibility and more CPU used.</p>\n\n<p>The trick here is that <code>i</code> is a mutable object -- <code>list</code> and you mutate it instead of reassigning (which would require to define it as <code>global</code>). I would use a more pythonic approach like in @Marius's answer.</p>\n\n<p>I think the author considers this a cool trick (though this one is cooler: <code>def read_param(size, i=[0]):</code>) and will defend his approach, but <a href=\"https://www.python.org/dev/peps/pep-0020/\" rel=\"nofollow\">The Zen of Python</a> should be followed:</p>\n\n<ul>\n<li>Explicit is better than implicit.</li>\n<li>Simple is better than complex.</li>\n<li>Readability counts.</li>\n<li>Special cases aren't special enough to break the rules.</li>\n<li>In the face of ambiguity, refuse the temptation to guess.</li>\n<li>There should be one-- and preferably only one --obvious way to do it.</li>\n</ul>\n"
},
{
"AnswerId": "30860735",
"CreationDate": "2015-06-16T06:55:59.257",
"ParentId": null,
"OwnerUserId": "1222578",
"Title": null,
"Body": "<p>I'm struggling to see the advantages of the current code. <code>i</code> is created as a list so it can be mutated within the function, but as warvariuc shows, you could just use <code>global</code> to achieve the same thing.</p>\n\n<p>Here is how I would do the same thing \"pythonically\":</p>\n\n<pre><code>class ParamReader(object):\n def __init__(self, params):\n self.params = params\n self.i = 0\n\n def read(self, size):\n ret = self.params[self.i:(self.i + size)]\n self.i += size\n return ret\n\n# Dummy values for lparam as I don't have theano\nlparam = list(range(100))\nreader = ParamReader(lparam)\n\nreader.read(5)\nOut[8]: [0, 1, 2, 3, 4]\n\nreader.read(6)\nOut[9]: [5, 6, 7, 8, 9, 10]\n</code></pre>\n\n<p>The read function has to maintain state, so it seems like an obvious choice to just use a simple class.</p>\n"
}
] |
30,867,336 | 1 |
<cuda><deep-learning><caffe>
|
2015-06-16T12:16:17.737
| null | 2,281,121 |
Caffe Framework Runtest Core dumped error
|
<p>I have been installing the Caffe framework with the following GPU:
GeForce 9500 GT
CUDA 6.5 (it does not work with 7.0)</p>
<p>When I run <code>make runtest</code>, the following errors appeared and I don't know the reason:</p>
<pre></pre>
|
[
{
"AnswerId": "30883960",
"CreationDate": "2015-06-17T06:38:33.953",
"ParentId": null,
"OwnerUserId": "2281121",
"Title": null,
"Body": "<p>It turns out that (as @Jez suggested) my GPU does not support double precision which used by Caffe math functions. That's the reason for crashes. I have searched for workaround on this issue but haven't found one. Maybe the only solution is to use more modern GPU</p>\n"
}
] |
30,879,506 | 1 |
<python><variables><shape><theano>
|
2015-06-16T22:43:10.687
| null | 4,480,756 |
Problems on Python Code using Theano
|
<p>I am learning the Python Theano Library. I just encountered a block of code using Theano shown below:</p>
<pre></pre>
<p>My problem is that I cannot understand this part of the code:</p>
<pre><code>n1 = x.shape[0]
n2 = x.shape[1]
</code></pre>
<p>Also, I am wondering if there is any way in Python to check the values as well as the dimensions of Theano tensor variables. I would really appreciate it if anyone could help me solve this problem.</p>
|
[
{
"AnswerId": "30879859",
"CreationDate": "2015-06-16T23:21:17.500",
"ParentId": null,
"OwnerUserId": "2594119",
"Title": null,
"Body": "<p>According to the docs <code>shape</code> returns an lvector representing the shape of x. You can read more about that <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html#shaping-and-shuffling\" rel=\"nofollow\">here</a></p>\n\n<p>In the code block you are referencing <code>n1</code> will return the object at index 0 and <code>n2</code> will return the object at index 1.</p>\n\n<p>You can read a bit more about lists in python <a href=\"http://www.tutorialspoint.com/python/python_lists.htm\" rel=\"nofollow\">here</a>.</p>\n\n<p>If you are running this script from the command line, you can use a <code>print</code> statement to see what is contained is those variables by adding a line like this:</p>\n\n<p><code>n1 = x.shape[0]\nprint n1</code></p>\n"
}
] |
30,885,342 | 2 |
<python><numpy><theano>
|
2015-06-17T07:46:23.767
| 30,898,018 | 1,740,705 |
Optional parameter to theano function
|
<p>I have a function in theano which takes two parameters, one of them optional. When I call the function with the optional parameter being <code>None</code>, the check inside fails. This script reproduces the error:</p>
<pre></pre>
<p>Fails with the error message</p>
<pre></pre>
<p>It makes sense that <code>None</code> has no input shapes nor input strides. But I wonder why the check inside the function does not seem to work.</p>
<p>How can I make the check inside work such that the optional parameter is handled correctly?</p>
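<p>A reduced sketch of what I am doing (the names are illustrative):</p>
<pre><code>import theano
import theano.tensor as T

def f(y, c=None):
    if c is not None:  # evaluated once, at graph-construction time
        return y + c
    return y

y = T.vector('y')
c = T.vector('c')
fn = theano.function(inputs=[y, c], outputs=f(y, c))
fn([1.0, 2.0], None)  # fails: None is not a valid value for the vector c
</code></pre>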
|
[
{
"AnswerId": "30898018",
"CreationDate": "2015-06-17T17:07:09.387",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>Theano does not support optional parameters. By specifying the function's input parameters as <code>ins=[y,c]</code> you are telling Theano that the function has two 1-dimensional (vector) parameters. As far as Theano is concerned, both are mandatory. When you try to pass <code>None</code> in for <code>c</code> Theano checks that the types of the values you pass in match the types declared when you compiled the function (i.e. two vectors) but clearly <code>None</code> is not a vector so this exception is raised.</p>\n\n<p>A solution is to compile two Theano functions, one that accepts just one parameter and the other that accepts both. You could even use your existing Python function <code>f</code> for both.</p>\n"
},
{
"AnswerId": "30902424",
"CreationDate": "2015-06-17T21:07:37.027",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>I'm going to try a more complete response.</p>\n\n<p>1) the condition \"c is not None\" is run only once when you build the graph. As c is a symbolic variable, the else path will always be executed. If you want to execution condition at run time see this documentation page: </p>\n\n<p><a href=\"http://deeplearning.net/software/theano/tutorial/conditions.html\" rel=\"nofollow\">http://deeplearning.net/software/theano/tutorial/conditions.html</a></p>\n\n<p>2) Theano have a special Type for None. But I do not recommand that you use it. It is not useful most of the time and it is not documented. So don't use it until you get more familiar with Theano.</p>\n\n<p>3) The other answer that tell to use 2 functions will work.</p>\n\n<p>4) In that simple case, you could pass a vector of the right size with only one instead of None. That would also work, but be slower.</p>\n"
}
] |
30,898,411 | 1 |
<nginx><lua><luajit><torch><openresty>
|
2015-06-17T17:28:56.177
| null | 5,019,970 |
Openresty torch module loading issue
|
<p>I'm trying to use OpenResty with Torch for a REST API for a neural network.
The first query works; any query after that fails. </p>
<h1>Nginx Config</h1>
<pre></pre>
<h1>testFile.lua</h1>
<pre></pre>
<h2>The error:</h2>
<pre></pre>
<p>Would appreciate any help</p>
|
[
{
"AnswerId": "30951592",
"CreationDate": "2015-06-20T07:46:02.620",
"ParentId": null,
"OwnerUserId": "282536",
"Title": null,
"Body": "<p>You didn't <code>require</code> the torch library.\nAdd <code>local torch = require \"torch\"</code> at the top.</p>\n"
}
] |
30,900,477 | 1 |
<python><theano><deep-learning>
|
2015-06-17T19:17:35.350
| 31,148,806 | 1,357,690 |
Scan function from Theano replicates non_sequences shared variables
|
<p>I'm trying to implement a custom convolutional layer for a CNN network in Theano, and in order to do so I'm using the scan function. The idea is to apply the new convolution mask to each pixel.</p>
<p>The function compiles correctly, but for some reason I get an out-of-memory error. The debug (see below) indicates that the variables are replicated for each instance of the loop (for each pixel), which of course kills my GPU memory:</p>
<pre></pre>
<p>Here's the output from the debugger:</p>
<pre></pre>
<p>The input as you can see is shown as a 1025x2000x3x32x32 tensor, while the original tensor is of size 2000x3x32x32, and the 1025 is the number of iterations of scan + 1.</p>
<p>Why are the variables replicated for each iteration instead of simply being reused, and how can I fix it?</p>
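<p>A stripped-down version of the scan call looks like this (a sketch; the real step function applies my custom convolution mask):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

images = theano.shared(np.zeros((2000, 3, 32, 32), dtype='float32'))
b = theano.shared(np.zeros(3, dtype='float32'))

def step(idx, imgs, bias):
    # per-pixel computation; the real mask logic goes here
    return imgs[:, :, idx // 32, idx % 32] + bias

out, _ = theano.scan(step,
                     sequences=T.arange(32 * 32),
                     non_sequences=[images, b])
</code></pre>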
<p>EDIT:</p>
<p>Both <code>self.input</code> and <code>self.b</code> are shared variables. <code>self.input</code> is passed to the class when initialized, while <code>self.b</code> is defined inside the class as follows:</p>
<pre></pre>
|
[
{
"AnswerId": "31148806",
"CreationDate": "2015-06-30T21:26:35.567",
"ParentId": null,
"OwnerUserId": "1985353",
"Title": null,
"Body": "<p>It is possible that, when the scan is first created or at some point during the optimization process, a symbolic <code>Alloc</code> with that shape is created.\nHowever, it should be optimized at a later stage of the optimization process.</p>\n\n<p>We are aware that there was a but related to that recently, which should now be fixed in the development (\"bleeding-edge\") version of Theano. In fact, I just tried your snippet (slightly edited) with a recent development version, and had no memory error. Moreover, there was no 5D tensor anywhere in the computation graph, which would suggest the bug has indeed been fixed.</p>\n\n<p>Finally, please be aware that operations such as convolutions, which are not really recurrent, will probably be much slower when expressed with <code>scan</code> rather than with one of the existing convolution operations. In particular, <code>scan</code> will not be able to parallelize efficiently when the iterations of the loop do not depend on each other.</p>\n"
}
] |
30,902,056 | 0 |
<python><lua><protocol-buffers><caffe><torch>
|
2015-06-17T20:45:28.757
| null | 4,468,092 |
Generate caffemodel file
|
<p>I am using both Torch and Caffe for deep learning. I wonder if there is a way to export an nn model to a caffemodel file. It may involve protobuf, which I am not familiar with.</p>
<p>The other direction is fairly easy, as there are <a href="https://github.com/szagoruyko/loadcaffe" rel="nofollow">libraries</a> to read a caffemodel file into Torch.</p>
|
[] |
30,919,682 | 1 |
<matlab><deep-learning><leveldb><caffe>
|
2015-06-18T15:29:50.903
| 30,930,801 | 2,163,392 |
Convert a bunch of images to lmdb format in matlab
|
<p>I am using the Caffe deep network library and I have to use GoogLeNet on my data (images).</p>
<p>The problem is that I want to do a lot of pre-processing operations, and I don't want to save the result of every image-processing operation as image files and then execute the create_imagenet.sh script included in the library. This library needs text files to indicate where the train and validation image files are. </p>
<p>I don't want to save a lot of images and then convert them to LMDB.</p>
<p>All I want is to apply the image-processing operations to a series of images and save them, one after another, into an LMDB file to be read by Caffe later.</p>
<p>Is that possible?</p>
|
[
{
"AnswerId": "30930801",
"CreationDate": "2015-06-19T05:50:59.980",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p>This github repository might be the one you are actually looking for:\n<a href=\"https://github.com/kyamagu/matlab-lmdb\" rel=\"nofollow\">https://github.com/kyamagu/matlab-lmdb</a></p>\n"
}
] |
30,921,359 | 1 |
<python><draw><ubuntu-14.04><deep-learning><caffe>
|
2015-06-18T16:54:58.637
| 30,943,006 | 4,561,745 |
Error while drawing net in Caffe
|
<p>I'm trying to draw my net in Caffe. The net is defined in a prototxt file, and I want to render it to an image file. When I run Caffe's <code>draw_net.py</code> script, I get the following error:</p>
<pre></pre>
<p>I'm running Caffe on Ubuntu 14.04 in CPU mode using the Anaconda Python interface. Any ideas why this error might be occurring and how to go about correcting it?</p>
|
[
{
"AnswerId": "30943006",
"CreationDate": "2015-06-19T16:30:51.023",
"ParentId": null,
"OwnerUserId": "4561745",
"Title": null,
"Body": "<p>The error log mentions that GraphViz's executables are not found. So, I did the following:</p>\n\n<ol>\n<li>Installed GraphViz on Ubuntu: <code>sudo apt-get install GraphViz</code>.</li>\n<li>Installed GraphViz for Python: <code>pip install GraphViz</code>.</li>\n</ol>\n\n<p>I'm not sure if step 2 is required or not, but step 1 is definitely required. After doing that, the command to draw the net in Caffe works like a charm!</p>\n"
}
] |
30,927,369 | 1 |
<neural-network><deep-learning><torch>
|
2015-06-18T23:13:27.900
| null | 2,817,198 |
What are gradInput and gradOutput in Torch7's 'nn' package?
|
<p>Hi, I am a beginner with Torch's 'nn' package. Over the past two weeks, I have been extremely confused about the meaning of gradInput and gradOutput in Torch's 'nn' library. I believe 'grad' here means gradient, but what exactly do those two variables refer to?</p>
<p>Thanks for anyone's help! </p>
|
[
{
"AnswerId": "30927733",
"CreationDate": "2015-06-18T23:50:04.897",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>gradOutput: gradient w.r.t. the output of the module. This is passed in either from the loss function, or from the module next to the current module. it is used to compute the gradient w.r.t. the input (gradInput) and gradient w.r.t. the parameters of the module (gradWeight / gradBias)</p>\n"
}
] |
30,942,172 | 1 |
<neural-network><deep-learning><caffe>
|
2015-06-19T15:42:06.863
| 30,946,985 | 4,561,745 |
How does Caffe determine the number of neurons in each layer?
|
<p>Recently, I've been trying to use Caffe for some of the deep learning work that I'm doing. Although writing a model in Caffe is very easy, I've not been able to find the answer to this question. <strong>How does Caffe determine the number of neurons in a hidden layer?</strong> I do know that the number of neurons in a layer and the number of hidden layers themselves cannot be determined analytically, and that the use of 'rules of thumb' is imperative in this regard. But is there a way to define or know the number of neurons in each layer in Caffe? And by default, how does Caffe determine this internally?</p>
<p>Any help is much appreciated!</p>
|
[
{
"AnswerId": "30946985",
"CreationDate": "2015-06-19T20:44:12.920",
"ParentId": null,
"OwnerUserId": "5029761",
"Title": null,
"Body": "<p>Caffe doesn't determine the number of neurons--the user does.<br>\nThis is pulled straight from Caffe's website, here: <a href=\"http://caffe.berkeleyvision.org/tutorial/layers.html\" rel=\"nofollow\">http://caffe.berkeleyvision.org/tutorial/layers.html</a></p>\n\n<p>For example, this is a convolution layer of 96 nodes (or neurons):</p>\n\n<pre><code>layer {\n name: \"conv1\"\n type: \"Convolution\"\n bottom: \"data\"\n top: \"conv1\"\n # learning rate and decay multipliers for the filters\n param { lr_mult: 1 decay_mult: 1 }\n # learning rate and decay multipliers for the biases\n param { lr_mult: 2 decay_mult: 0 }\n convolution_param {\n num_output: 96 # learn 96 filters\n kernel_size: 11 # each filter is 11x11\n stride: 4 # step 4 pixels between each filter application\n weight_filler {\n type: \"gaussian\" # initialize the filters from a Gaussian\n std: 0.01 # distribution with stdev 0.01 (default mean: 0)\n }\n bias_filler {\n type: \"constant\" # initialize the biases to zero (0)\n value: 0\n }\n }\n}\n</code></pre>\n"
}
] |
30,942,693 | 1 |
<caffe>
|
2015-06-19T16:12:21.170
| 30,942,976 | 2,191,652 |
Caffe for feed forward networks
|
<p>Does anyone have experience using Caffe as a feed-forward network instead of a convolutional neural network?</p>
<p>My input data is one-dimensional. Everything is fine, but when I want to use layers like max pooling, Caffe assumes a square-shaped kernel. I need a 1-dimensional max-pooling kernel instead.</p>
|
[
{
"AnswerId": "30942976",
"CreationDate": "2015-06-19T16:29:29.733",
"ParentId": null,
"OwnerUserId": "2191652",
"Title": null,
"Body": "<p>Found it:</p>\n\n<pre><code>layer {\n name: \"pool1\"\n type: \"Pooling\"\n bottom: \"ip1\"\n top: \"pool1\"\n pooling_param {\n pool: MAX\n kernel_h: 3\n kernel_w: 1 \n }\n}\n</code></pre>\n"
}
] |
30,953,368 | 1 |
<lua><neural-network><torch>
|
2015-06-20T11:18:37.877
| 30,955,861 | 4,961,048 |
Convolution Neural Network in torch. Error when training the network
|
<p>I am trying to base my Convolution neural network upon the following tutorial:</p>
<p><a href="https://github.com/torch/tutorials/tree/master/2_supervised" rel="nofollow">https://github.com/torch/tutorials/tree/master/2_supervised</a></p>
<p>The issue is that my images are of different dimensions than those used in the tutorial. (3x200x200). Also I have only two classes.</p>
<p>The following are the changes that I made :</p>
<p>Changing the dataset to be loaded in 1_data.lua.</p>
<pre></pre>
<p>and </p>
<pre></pre>
<p>in 3_loss.lua and 4_train.lua.</p>
<p>My model is the same as that being trained in the tutorial. For Convenience I'll put the code below :</p>
<pre></pre>
<p>I get the following error when I run the doall.lua file :</p>
<pre></pre>
<p>I have been stuck at this for over a day now. Please help.</p>
|
[
{
"AnswerId": "30955861",
"CreationDate": "2015-06-20T15:43:41.190",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<p>The problem is the convolutional neural network from this tutorial has been made to work with a <strong>fixed size input resolution</strong> of 32x32 pixels.</p>\n\n<p>Right after the 2 convolutional / pooling layers you obtain 64 feature maps with a 5x5 resolution. This gives an input of 64x5x5 = 1,600 elements for the following fully-connected layers.</p>\n\n<p>As you can see in the tutorial there is a dedicated <em>reshape</em> operation that transforms the 3D input tensor into a 1D tensor with 1,600 elements:</p>\n\n<pre><code>-- nstates[2]*filtsize*filtsize = 64x5x5 = 1,600\nmodel:add(nn.Reshape(nstates[2]*filtsize*filtsize))\n</code></pre>\n\n<p>When you work with a higher resolution input you produce higher resolution output feature maps, here a 200x200 pixels input gives 64 output feature maps of size 47x47. This is why you obtain this <em>wrong size</em> error.</p>\n\n<p>So you need to adapt the reshape and following linear layers accordingly:</p>\n\n<pre><code>model:add(nn.Reshape(nstates[2]*47*47))\nmodel:add(nn.Linear(nstates[2]*47*47, nstates[3]))\n</code></pre>\n"
}
] |
30,959,402 | 1 |
<python><neural-network><theano>
|
2015-06-20T22:17:14.300
| null | 1,213,014 |
"ValueError: Shape mismatch" error in Lasagne for Neural Network
|
<p>I'm trying to create a neural network based on theano/lasagne that will (essentially) attempt to do a multi-variable regression. </p>
<p>The meat of the code is: </p>
<pre></pre>
<p>Here, train_value is just 1 column of (numerical) data that I want to train my NN to predict, and the following 57 columns (train_data) are all the parameters/values (all numbers) which should be weighted appropriately to predict the value in the first column. </p>
<p>However, when I run this script, I get the following error: </p>
<pre></pre>
<p>I'm not sure where it is getting this shape--none of my data has 83 columns or rows. (note: I've tried to adapt this script, which originally was written to look at pictures of faces and guess where different parts were (eyes, nose, mouth, etc)). </p>
<p>I have written a much simpler version of this (sans-dropout method) in pybrain, but am trying to migrate to sklearn/lasagne/theano as it opens more doors. </p>
|
[
{
"AnswerId": "31931119",
"CreationDate": "2015-08-11T00:08:26.247",
"ParentId": null,
"OwnerUserId": "1646409",
"Title": null,
"Body": "<p>Since you want to do regression, make sure to set the output type correctly:</p>\n\n<pre><code>output_nonlinearity = linear\n</code></pre>\n\n<p>Are you sure you also have 10 output units? I experienced some weird behavior in Lasagne. I think the API has changed over time and contains some bugs. I succeeded using the latest <a href=\"https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py\" rel=\"nofollow\">API demo</a> and adapting it to my needs.</p>\n"
}
] |
30,962,899 | 2 |
<image-processing><machine-learning><computer-vision><neural-network><torch>
|
2015-06-21T09:06:32.177
| 30,982,411 | 4,961,048 |
Using a Convolutional Neural Network as a Binary Classifier
|
<p>Given any image, I want my classifier to tell whether it is a sunflower or not. How can I go about creating the second class? Keeping the set of all possible images minus {Sunflower} in the second class is overkill. Is there any research in this direction? Currently my classifier uses a neural network in the final layer. I have based it upon the following tutorial:</p>
<p><a href="https://github.com/torch/tutorials/tree/master/2_supervised" rel="nofollow">https://github.com/torch/tutorials/tree/master/2_supervised</a></p>
<p>I am taking 254x254 images as the input.
Would an SVM help in the final layer? I am also open to using any other classifier/features that might help me with this.</p>
|
[
{
"AnswerId": "30982411",
"CreationDate": "2015-06-22T14:23:35.527",
"ParentId": null,
"OwnerUserId": "3633250",
"Title": null,
"Body": "<p>The standard approach in ML is that:</p>\n\n<p>1) Build model\n2) Try to train on some data with positive\\negative examples (start with 50\\50 of pos\\neg in training set)\n3) Validate it on test set (again, try 50\\50 of pos\\neg examples in test set)\nIf results not fine:\na) Try different model?\nb) Get more data</p>\n\n<p>For case #b, when deciding which additional data you need the rule of thumb which works for me nicely would be:\n1) If classifier gives lots of false positive (tells that this is a sunflower when it is actually not a sunflower at all) - get more negative examples\n2) If classifier gives lots of false negative (tells that this is not a sunflower when it is actually a sunflower) - get more positive examples</p>\n\n<p>Generally, start with <em>some</em> reasonable amount of data, check the results, if results on train set or test set are bad - get more data. Stop getting more data when you get the optimal results.</p>\n\n<p>And another thing you need to consider, is if your results with current data and current classifier are not good you need to understand if the problem is high bias (well, bad results on train set and test set) or if it is a high variance problem (nice results on train set but bad results on test set). If you have high bias problem - more data or more powerful classifier will definitely help. If you have a high variance problem - more powerful classifier is not needed and you need to thing about the generalization - introduce regularization, remove couple of layers from your ANN maybe. Also possible way of fighting high variance is geting <em>much, MUCH</em> more data.</p>\n\n<p>So to sum up, you need to use iterative approach and try to increase the amount of data step by step, until you get good results. There is no magic stick classifier and there is no simple answer on how much data you should use.</p>\n"
},
{
"AnswerId": "43321129",
"CreationDate": "2017-04-10T10:45:05.680",
"ParentId": null,
"OwnerUserId": "7434191",
"Title": null,
"Body": "<p>It is a good idea to use CNN as the feature extractor, peel off the original fully connected layer that was used for classification and add a new classifier. This is also known as the transfer learning technique that has being widely used in the Deep Learning research community. For your problem, using the one-class SVM as the added classifier is a good choice. </p>\n\n<p>Specifically,</p>\n\n<ul>\n<li>a good CNN feature extractor can be trained on a large dataset, e.g. ImageNet,</li>\n<li>the one-class SVM can then be trained using your 'sunflower' dataset.</li>\n</ul>\n\n<p>The essential part of solving your problem is the implementation of the one-class SVM, which is also known as anomaly detection or novelty detection. You may refer <a href=\"http://scikit-learn.org/stable/modules/outlier_detection.html\" rel=\"nofollow noreferrer\">http://scikit-learn.org/stable/modules/outlier_detection.html</a> for some insights about the method.</p>\n"
}
] |
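<p>To make the one-class suggestion from the last answer concrete, here is a rough sketch assuming scikit-learn's OneClassSVM and that CNN features have already been extracted into numpy arrays (the .npy file names are hypothetical):</p>
<pre><code>import numpy as np
from sklearn.svm import OneClassSVM

sunflower_features = np.load('sunflower_cnn_features.npy')  # one row per sunflower image

clf = OneClassSVM(nu=0.1, kernel='rbf', gamma=0.1)
clf.fit(sunflower_features)          # train on positives only

test_features = np.load('test_cnn_features.npy')
pred = clf.predict(test_features)    # +1 = sunflower-like, -1 = outlier
</code></pre>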
30,978,275 | 1 |
<windows-10><flashlight><torch><camera-flash>
|
2015-06-22T11:02:08.230
| 30,979,837 | 4,953,112 |
Windows 10 Phone app Turn on flash (lamp)
|
<p>I have a problem. I am a new Windows Phone 10 developer.</p>
<p>How can I turn on the flashlight on a Lumia 930? I can't find any answer on the internet.</p>
<p>Thanks a lot,
Regards</p>
|
[
{
"AnswerId": "30979837",
"CreationDate": "2015-06-22T12:18:08.620",
"ParentId": null,
"OwnerUserId": "61529",
"Title": null,
"Body": "<p>The Windows.Devices.Lights.Lamp namespace is where you need to look, there's an example of Lamp usage in Windows 10 <a href=\"https://github.com/Microsoft/Windows-universal-samples/tree/master/lamp\" rel=\"nofollow\">https://github.com/Microsoft/Windows-universal-samples/tree/master/lamp</a> that might be of help. \nHere's a small example based on this as an alternative to the link:</p>\n\n<pre><code>using (var lamp = await Lamp.GetDefaultAsync()) \n{ \n lamp.BrightnessLevel = 1.0F;\n lamp.IsEnabled = true;\n}\n</code></pre>\n"
}
] |
30,979,264 | 1 |
<parsing><nlp><neural-network><theano><dimensionality-reduction>
|
2015-06-22T11:48:52.107
| null | 4,038,352 |
How to deal with different sizes of sentences when giving them as input to a Neural Network?
|
<p>I am giving a sentence as input to a tree structured Neural Network, where the leaf nodes will be the word vectors of the words in the sentence. </p>
<p>That tree will be a <a href="https://en.wikipedia.org/wiki/Branching_(linguistics)" rel="nofollow noreferrer">binarized constituency</a> (see the binary vs n-ary branching section) parse tree. </p>
<p>I am trying to develop a semantic representation of the sentence. </p>
<p><strong>The problem is</strong> that since each sentence will have a different parse tree and a different length, each sentence will have a different neural network. Due to this ever-changing structure of the neural network, I can't train it.</p>
<p>But this paper develops a tree structured Neural network using constituency and dependency based parse trees-</p>
<p>1) <a href="http://arxiv.org/pdf/1503.00075v3.pdf" rel="nofollow noreferrer">Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks</a> by Kai Sheng Tai, Richard Socher, and Christopher Manning</p>
<p>And this paper uses CNNs to extract a semantic representation.</p>
<p>2) <a href="http://nal.co/papers/KalchbrennerBlunsom_EMNLP13" rel="nofollow noreferrer">Recurrent Continuous Translation Models</a> by Nal Kalchbrenner,and Phil Blunsom. This picture gives a rough idea.<img src="https://i.stack.imgur.com/qvcI4.png" alt="CSM"></p>
<p><strong>Possible Solution</strong>-<br>
I can map the sentence vector to a fixed number of neurons and then use those neurons to create a tree structure. For example, if the sentence length is 10 and the max sentence length is 20, I can create a fixed-dimensionality layer of 20 neurons and then (in this particular case) map the first word to the first 2 neurons, the 2nd word to the 3rd and 4th neurons, and so on.
Dynamic mapping can be done based on the sentence length.</p>
<p>The weight matrix of the sentence layer to the fixed dimensionality layer will be fixed(the weights should be kept 1). No biases.</p>
<p>But I think there will be some problems in this representation - for example - if the sentence "I had a lovely icecream and a pastry for dessert." is mapped to the fixed-dimensionality layer, it will become "I I had had a a lovely lovely icecream icecream and and a a pastry pastry for for dessert dessert..". That means that shorter sentences will have a more profound effect on the neural network compared to longer sentences. This bias towards shorter sentences should also create duplicate words in the output (of a sequence generator). Could someone correct me if I am wrong?</p>
<p>I would welcome more solutions, especially ones that do not remove the relationships that words have between them in a sentence.</p>
<p>I will be implementing this using theano and python. I am considering a layer based approach and using theano.scan to iterate over the layers to finally form the sentence representation.</p>
|
[
{
"AnswerId": "33152671",
"CreationDate": "2015-10-15T15:33:26.793",
"ParentId": null,
"OwnerUserId": "2805751",
"Title": null,
"Body": "<p>The papers you cited both use recurrent architectures to deal with variable data lengths but from the graphs you posted and described, I'm not sure if you want to use recurrent structures or not. If you don't, you'll need to pad the input to some fixed length vector based on your dataset. (This is likely to be a poor model.) If you do, I'd look at some of the theano framework (lasagne, keras, blocks) implementations of RNNs.</p>\n"
}
] |
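<p>As a minimal sketch of the padding idea from the answer above (one assumption among several possible schemes), each sentence's word-vector matrix can be zero-padded up to a fixed maximum length so the network always sees the same input shape:</p>
<pre><code>import numpy as np

def pad_sentence(word_vectors, max_len):
    """word_vectors: (sentence_len, embedding_dim) array."""
    n, dim = word_vectors.shape
    padded = np.zeros((max_len, dim), dtype=word_vectors.dtype)
    padded[:n] = word_vectors  # words first, zeros after
    return padded

sent = np.random.randn(10, 50)       # 10 words, 50-dim embeddings
print(pad_sentence(sent, 20).shape)  # (20, 50)
</code></pre>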
30,980,008 | 1 |
<matlab><deep-learning><caffe>
|
2015-06-22T12:27:49.553
| 30,980,841 | 2,163,392 |
.mat files as input for caffe deep learning network
|
<p>Is it possible to use .mat files to store input data for the caffe deep learning framework? If yes, how can I change the 'data' of my train_val.prototxt? I am using the googlenet network.</p>
<p>If not, which other file formats can I use besides mdb and hd5?</p>
<p>For me, using .mat files is better because of their size: my .hd5 files are huge (caffe can't read them because of lack of memory), and I can't figure out how to save my data as .mdb files in Matlab.</p>
|
[
{
"AnswerId": "30980841",
"CreationDate": "2015-06-22T13:09:35.657",
"ParentId": null,
"OwnerUserId": "2732801",
"Title": null,
"Body": "<p>Mat files version 7.3 are hdf5 files, just make sure to use this format when writing. (check documenting for save) </p>\n\n<p>Any hdf5 library which supports gzip compression can read mat files. </p>\n"
}
] |
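<p>Since -v7.3 .mat files are HDF5, as the answer notes, any HDF5 library can read them. A sketch with h5py, assuming the file was saved in Matlab with save('train_data.mat', 'data', '-v7.3') (the file and variable names are hypothetical):</p>
<pre><code>import h5py

with h5py.File('train_data.mat', 'r') as f:
    print(list(f.keys()))  # variable names stored in the file
    data = f['data'][()]   # load one variable as a numpy array
    # note: arrays come back transposed, since Matlab stores column-major
</code></pre>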
30,980,338 | 2 |
<matlab><image-processing><computer-vision><deep-learning><caffe>
|
2015-06-22T12:44:36.543
| 30,987,871 | 2,163,392 |
LMDB files and how they are used for caffe deep learning network
|
<p>I am quite new to deep learning and I am having some problems using the caffe deep learning network. Basically, I didn't find any documentation explaining how I can solve a series of questions and problems I am dealing with right now.</p>
<p>Please, let me explain my situation first. </p>
<p>I have thousands of images and I must do a series of pre-processing operations on them. For each pre-processing operation, I have to save these pre-processed images as 4D matrices and also store a vector with the images' labels. I will store this information as LMDB files that will be used as input for the caffe googlenet deep learning network. </p>
<p>I tried to save my images as .HD5 files, but the final file size is 80GB, which is impossible to process with the memory I have.</p>
<p>So, the other option is using LMDB files, right? I am quite a newbie with this file format and I appreciate your help in understanding how to create them in Matlab. Basically, my rookie questions are: </p>
<p>1- These LMDB files have the extension .MDB, right? Is this extension the same one used by Microsoft Access? Or is the right format .lmdb and they are different? </p>
<p>2- I found this solution for creating .mdb files (<a href="https://github.com/kyamagu/matlab-leveldb" rel="nofollow">https://github.com/kyamagu/matlab-leveldb</a>); does it create the file format needed by caffe?</p>
<p>3- For caffe, do I have to create one .mdb file for labels and another for images, or can both be fields of the same .mdb file?</p>
<p>4- When I create an .mdb file I have to label the database fields. Can I label one field as image and another as label? Does caffe understand what each field means?</p>
<p>5- What do the functions (in <a href="https://github.com/kyamagu/matlab-leveldb" rel="nofollow">https://github.com/kyamagu/matlab-leveldb</a>) database.put('key1', 'value1') and database.put('key2', 'value2') do? Do I have to save my 4-D matrices in one field and the label vector in another?</p>
|
[
{
"AnswerId": "30987871",
"CreationDate": "2015-06-22T19:07:04.557",
"ParentId": null,
"OwnerUserId": "987599",
"Title": null,
"Body": "<p>There is no connection between LMDB files and MS Access files.</p>\n\n<p>As I see it you have two options:</p>\n\n<ol>\n<li>Use the \"convert_imageset\" tool - it is located in caffe under the tools folder to convert a list of image files and label to lmdb.</li>\n<li>Instead of \"data layer\" use \"image data layer\" as an input to the network. This type of layer takes a file with a list of image file names and labels as source so you don't have to build a database (another benefit for training - you can use the shuffle option and get slightly better training results)</li>\n</ol>\n\n<p>In order to use an image data layer just replace the layer type from Data to ImageData. The source file is the path to a file containing in each line a path of an image file and the label seperated by space. For example:</p>\n\n<pre><code>/path/to/filnename.png 23\n</code></pre>\n\n<p>If you want to do some preprocessing of the data without saving the preprocessed file to disk you can use the transformations available by caffe (mirror and cropping) (see here for information <a href=\"http://caffe.berkeleyvision.org/tutorial/data.html\">http://caffe.berkeleyvision.org/tutorial/data.html</a>) or implement your own <code>DataTransformer</code>.</p>\n"
},
{
"AnswerId": "30987646",
"CreationDate": "2015-06-22T18:54:38.300",
"ParentId": null,
"OwnerUserId": "1296661",
"Title": null,
"Body": "<p>Caffe doesn't use LevelDB - but <a href=\"https://github.com/BVLC/caffe/blob/50ab52cbbf738cc92b0eda042661b8c0172774c1/scripts/travis/travis_install.sh#L50\" rel=\"nofollow\">it uses</a> <a href=\"http://symas.com/mdb/\" rel=\"nofollow\">LMDB 'Lightning' db from Symas</a></p>\n\n<p>You can try using <a href=\"https://github.com/kyamagu/matlab-lmdb\" rel=\"nofollow\">this</a> Matlab LMDB wrapper\nI personally had no experience with using LMDB with Matlab, but there is nice library for doing this from Python: <a href=\"https://lmdb.readthedocs.org/en/release/\" rel=\"nofollow\">py-lmdb</a></p>\n\n<p>LMDB database is a Key/Value db (similar to HashMap in Java or dict in Python). In order to store 4D matrices you need to understand the convention Caffe uses to save images into LMDB format.</p>\n\n<p>This means that the best approach to convert images to LMDB for Caffe would be doing this with Caffe.</p>\n\n<p><a href=\"https://github.com/BVLC/caffe/blob/50ab52cbbf738cc92b0eda042661b8c0172774c1/examples/mnist/create_mnist.sh\" rel=\"nofollow\">There are examples in Caffe</a> on how to convert images into LMDB - I would try to repeat them and then modify scripts to use your images.</p>\n"
}
] |
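<p>Following the second answer's pointer to py-lmdb, a sketch of writing images to LMDB in Caffe's convention from Python (assuming caffe's Python bindings for the Datum protobuf; the arrays here are placeholder data):</p>
<pre><code>import lmdb
import numpy as np
import caffe

# placeholder data: 10 RGB images as (channels, height, width) uint8 arrays
images = (np.random.rand(10, 3, 64, 64) * 255).astype(np.uint8)
labels = np.random.randint(0, 10, 10)

env = lmdb.open('train_lmdb', map_size=int(1e9))
with env.begin(write=True) as txn:
    for i, (img, label) in enumerate(zip(images, labels)):
        datum = caffe.proto.caffe_pb2.Datum()
        datum.channels, datum.height, datum.width = img.shape
        datum.data = img.tobytes()  # raw bytes in channel-height-width order
        datum.label = int(label)
        txn.put('{:08d}'.format(i).encode('ascii'), datum.SerializeToString())
</code></pre>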
30,982,758 | 1 |
<neural-network><svm><torch>
|
2015-06-22T14:37:33.333
| 31,038,255 | 4,961,048 |
Support Vector Machine in Torch7
|
<p>I have based my model upon the following tutorial:</p>
<p><a href="https://github.com/torch/tutorials/tree/master/2_supervised" rel="nofollow">https://github.com/torch/tutorials/tree/master/2_supervised</a></p>
<p>For the last stage a neural network is used on the features extracted from the CNN. I want to use an SVM in the final layer. How can I add that to my existing model?</p>
<p>It has been shown in some papers that SVMs seem to function better than a neural network as the final layer of a CNN, and therefore I wanted to try them out to increase the accuracy of the model. Also, SVMs can be used for one-class classification, which neural networks lack. I need a one-class classifier in the end, hence the need for adding an SVM to the CNN.</p>
<p>Kindly help</p>
|
[
{
"AnswerId": "31038255",
"CreationDate": "2015-06-24T22:30:58.560",
"ParentId": null,
"OwnerUserId": "2113367",
"Title": null,
"Body": "<p><strong><em>Edit:</em></strong> My old answer was complete rubbish since you <a href=\"https://stats.stackexchange.com/questions/158712/train-an-svm-via-back-propagation/158721#158721\">cannot code a (linear) SVM as a complete module</a>. Instead, you can think of an SVM as </p>\n\n<blockquote>\n <p>a 1-layer NN with linear activation on the output node and trained via hinge loss </p>\n</blockquote>\n\n<p>(see accepted answer's comments.)</p>\n\n<p>This means that in Torch, you can mock up a (linear) SVM with something like</p>\n\n<pre><code>linearSVM = nn.Sequential()\nlinearSVM:add(nn.Linear(ninputs, 1))\ncriterion = nn.MarginCriterion()\n</code></pre>\n\n<p>see the following question in the <a href=\"https://groups.google.com/forum/#!searchin/torch7/SVM/torch7/2YKxbyVkAs0/cn_tqsyxG8cJ\" rel=\"nofollow noreferrer\">Torch7 google code mailing list</a>...</p>\n"
}
] |
30,983,213 | 2 |
<caffe>
|
2015-06-22T14:56:18.570
| 30,991,590 | 2,191,652 |
How to use 1-dim vector as input for caffe?
|
<p>I'd like to train a neural network (NN) on my own 1-dim data, which I stored in a hdf5 database for caffe. According to the documentation this should work. It also works for me as long as I only use "Fully Connected Layers", "Relu" and "Dropout". However I get an error when I try to use "Convolution" and "Max Pooling" layers in the NN architecture. The error complains about the input dimension of the data. </p>
<pre></pre>
<p>This is the error when I only want to use a "Pooling" layer behind an "InnerProduct" layer: </p>
<pre></pre>
<p>However I don't know how to change the input dimensions such that it works.
This is the beginning of my prototxt file specifying the network architecture: </p>
<pre></pre>
<p>And this is how I output my 4D-database (with two singleton dimensions) using Matlab's h5write function:</p>
<pre></pre>
|
[
{
"AnswerId": "30991590",
"CreationDate": "2015-06-22T23:33:56.860",
"ParentId": null,
"OwnerUserId": "1452257",
"Title": null,
"Body": "<p>You seem to be outputting your data using the wrong shape. Caffe blobs have the dimensions <code>(n_samples, n_channels, height, width)</code> .</p>\n\n<p>Other than that your prototxt seems to be fine for doing predictions based on a 1D input.</p>\n"
},
{
"AnswerId": "30986393",
"CreationDate": "2015-06-22T17:39:12.240",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p>As I have no experience in using the <code>h5create</code> and <code>h5write</code> in Matlab, I am not sure on whether the training dataset is generated with the dimensions that you expect it to generate.</p>\n\n<p>The error msg for the convolution layer says that <code>shape[i] = -9</code>. This means that either the width, height, channels or number of images in a batch is being set to -9.</p>\n\n<p>The error msg when using pooling layer alone says that the network could detect only an input of 2D while the network is expecting an input of 4D.</p>\n\n<p>The error messages in both the layers are related to reshaping the blobs and this is a clear indication that the dimensions of the input are not as expected.</p>\n\n<p>Try debugging the Reshape functions present in blob.cpp & layers/pooling_layer.cpp to get an insight on which value is actually going rogue.</p>\n"
}
] |
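<p>To illustrate the blob convention from the accepted answer, a sketch of writing 1-dim samples as a 4D HDF5 blob in (n_samples, n_channels, height, width) order; the 'data'/'label' dataset names and array sizes are assumptions that must match your prototxt:</p>
<pre><code>import h5py
import numpy as np

X = np.random.randn(1000, 128).astype(np.float32)  # 1,000 samples, 128 features
y = np.random.randn(1000, 1).astype(np.float32)

with h5py.File('train.h5', 'w') as f:
    f['data'] = X.reshape(1000, 1, 1, 128)  # two singleton dims: channels=1, height=1
    f['label'] = y
</code></pre>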
30,983,354 | 3 |
<lua><torch>
|
2015-06-22T15:01:43.873
| 31,024,198 | 3,551,854 |
Is there special meaning for ()() syntax in Lua
|
<p>I see this type of syntax a lot in some Lua source files I was reading lately. What does it mean, especially the second pair of brackets?
An example is line 8 in
<a href="https://github.com/karpathy/char-rnn/blob/master/model/LSTM.lua">https://github.com/karpathy/char-rnn/blob/master/model/LSTM.lua</a></p>
<pre></pre>
<p>The source code of
<a href="https://github.com/torch/nn/blob/master/Identity.lua">https://github.com/torch/nn/blob/master/Identity.lua</a></p>
<p>********** UPDATE **************</p>
<p>The ()() pattern is used a lot in the torch library 'nn'. The first pair of brackets creates an object of the container/node, and the second pair of brackets references the node it depends on.</p>
<p>For example, y = nn.Linear(2,4)(x) means x connects to y, and the transformation is linear from 1*2 to 1*4.
I just understand the usage; how it is wired seems to be answered by one of the answers below.</p>
<p>Anyway, the usage of the interface is well documented below.
<a href="https://github.com/torch/nngraph/blob/master/README.md">https://github.com/torch/nngraph/blob/master/README.md</a></p>
|
[
{
"AnswerId": "30983647",
"CreationDate": "2015-06-22T15:14:32.377",
"ParentId": null,
"OwnerUserId": "1009479",
"Title": null,
"Body": "<p>No, <code>()()</code> has no special meaning in Lua, it's just two call operators <code>()</code> together.</p>\n\n<p>The operand is possibly a function that returns a function(or, a table that implements <code>call</code> metamethod). For example:</p>\n\n<pre><code>function foo()\n return function() print(42) end\nend\n\nfoo()() -- 42\n</code></pre>\n"
},
{
"AnswerId": "31024198",
"CreationDate": "2015-06-24T10:29:18.583",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<p>In complement to Yu Hao's answer let me give some Torch related precisions:</p>\n\n<ul>\n<li><code>nn.Identity()</code> creates an identity module,</li>\n<li><code>()</code> called on this module triggers <code>nn.Module</code> <a href=\"https://github.com/torch/nn/blob/b7aa53d96fbb6c0f2eaa1976b28c5cf12edf1ced/Module.lua#L232-L240\"><code>__call__</code></a> (thanks to Torch class system that properly hooks up this into the metatable),</li>\n<li>by default this <code>__call__</code> method performs a forward / backward,</li>\n<li>but here <a href=\"https://github.com/torch/nngraph\">torch/nngraph</a> is used and <strong>nngraph overrides</strong> this method as you can see <a href=\"https://github.com/torch/nngraph/blob/eb65b3602cdf3214ff671d5422521ad414f1c3b0/init.lua#L22-L45\">here</a>.</li>\n</ul>\n\n<p>In consequence every <code>nn.Identity()()</code> calls has here for effect to return a <code>nngraph.Node({module=self})</code> node where self refers to the current <code>nn.Identity()</code> instance.</p>\n\n<p>--</p>\n\n<p><strong>Update</strong>: an illustration of this syntax in the context of <a href=\"https://en.wikipedia.org/wiki/Long_short_term_memory\">LSTM-s</a> can be found <a href=\"http://apaszke.github.io/lstm-explained.html#computing-gate-values\">here</a>:</p>\n\n<pre><code>local i2h = nn.Linear(input_size, 4 * rnn_size)(input) -- input to hidden\n</code></pre>\n\n<blockquote>\n <p>If you’re unfamiliar with <code>nngraph</code> it probably seems strange that we’re constructing a module and already calling it once more with a graph node. What actually happens is that <strong>the second call converts the <code>nn.Module</code> to <code>nngraph.gModule</code> and the argument specifies it’s parent in the graph</strong>.</p>\n</blockquote>\n"
},
{
"AnswerId": "37304606",
"CreationDate": "2016-05-18T15:56:16.840",
"ParentId": null,
"OwnerUserId": "2036809",
"Title": null,
"Body": "<ul>\n<li>The first () calls the init function and the second () calls the call function </li>\n<li>If the class doesn't posses either of these functions then the parent functions are called .</li>\n<li><p>In the case of nn.Identity()() the nn.Identity has neither init function nor a call function hence the Identity parent nn.Module's init and call functions called .Attaching an illustration </p>\n\n<pre><code>require 'torch'\n\n-- define some dummy A class\nlocal A = torch.class('A')\nfunction A:__init(stuff)\n self.stuff = stuff\n print('inside __init of A')\nend\n\nfunction A:__call__(arg1)\nprint('inside __call__ of A')\nend\n\n-- define some dummy B class, inheriting from A\nlocal B,parent = torch.class('B', 'A')\n\nfunction B:__init(stuff)\n self.stuff = stuff\n print('inside __init of B')\nend\n\nfunction B:__call__(arg1)\nprint('inside __call__ of B')\nend\na=A()()\nb=B()()\n</code></pre>\n\n<p><strong>Output</strong></p>\n\n<pre><code>inside __init of A\ninside __call__ of A\ninside __init of B\ninside __call__ of B\n</code></pre></li>\n</ul>\n\n<p>Another code sample </p>\n\n<pre><code> require 'torch'\n\n -- define some dummy A class\n local A = torch.class('A')\n function A:__init(stuff)\n self.stuff = stuff\n print('inside __init of A')\n end\n\n function A:__call__(arg1)\n print('inside __call__ of A')\n end\n\n -- define some dummy B class, inheriting from A\n local B,parent = torch.class('B', 'A')\n\n b=B()()\n</code></pre>\n\n<p><strong>Output</strong></p>\n\n<pre><code> inside __init of A\n inside __call__ of A\n</code></pre>\n"
}
] |
30,994,563 | 2 |
<artificial-intelligence><neural-network><deep-learning><caffe><deep-dream>
|
2015-06-23T05:32:04.760
| 31,028,871 | 2,795,733 |
Google Deep Dream art: how to pick a layer in a neural network and enhance it
|
<p>I am interested in a recent blog post by Google that describes the use of neural networks to make art. </p>
<p>I am particularly interested in one technique: </p>
<blockquote>
<p>'In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.' </p>
</blockquote>
<p>The post is <a href="http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html?m=1" rel="nofollow">http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html?m=1</a>. </p>
<p><strong>My question</strong>: the post describes this as a 'simple' case--is there an open-source implementation of an NN that could be used for this purpose in a relatively plug-and-play process?
For just the technique described, does the network need to be trained? </p>
<p>No doubt for other techniques mentioned in the paper one needs a network already trained on a large number of images, but for the one I've described is there already some kind of open-source network layer visualization package?</p>
|
[
{
"AnswerId": "31708507",
"CreationDate": "2015-07-29T18:34:46.807",
"ParentId": null,
"OwnerUserId": "5086088",
"Title": null,
"Body": "<p>In the link to Ipython notebook Dmitry provided, it says that it does <strong>gradient</strong> <strong>ascent</strong> with <strong>maximizing</strong> L2 normalization. I believe this is what Google means to be enhance the feature from a algorithmic perspective. </p>\n\n<p>If you think about it, it's really the case, minimizing L2 would prevent over-fitting, i.e. make the curve looks smoother. If you do the opposite, you are making the feature more obvious.</p>\n\n<p>Here is a great link to understand <a href=\"http://www.onmyphd.com/?p=gradient.descent\" rel=\"noreferrer\">gradient ascent</a>, though it talks about gradient descent mainly.</p>\n\n<p>I don't know much about implementation details in caffe, since I use theano mostly. Hope it helps!</p>\n\n<p><strong>Update</strong></p>\n\n<p>So I read about the detailed articles [1],[2],[3],[4] today and find out that <a href=\"http://arxiv.org/pdf/1312.6034v2.pdf\" rel=\"noreferrer\">[3]</a> actually talks about the algorithm in details</p>\n\n<blockquote>\n <p>A locally-optimal <em>I</em> can be found by the back-propagation\n method. The procedure is related to the ConvNet training procedure, where the back-propagation is\n used to optimise the layer weights. The difference is that in our case the optimisation is performed\n with respect to the input image, while the weights are fixed to those found during the training stage.\n We initialised the optimisation with the zero image (in our case, the ConvNet was trained on the\n zero-centred image data), and then added the training set mean image to the result.</p>\n</blockquote>\n\n<p>Therefore, after training the network on classification, you train it again w.r.t to the input image, using gradient ascent in order to get higher score for the class.</p>\n"
},
{
"AnswerId": "31028871",
"CreationDate": "2015-06-24T14:07:25.683",
"ParentId": null,
"OwnerUserId": "2145920",
"Title": null,
"Body": "<p>UPD: Google posted more detail instructions how they implemented it: <a href=\"https://github.com/google/deepdream/blob/master/dream.ipynb\" rel=\"noreferrer\">https://github.com/google/deepdream/blob/master/dream.ipynb</a></p>\n\n<p>There's also another project: <a href=\"https://317070.github.io/Dream/\" rel=\"noreferrer\">https://317070.github.io/Dream/</a></p>\n\n<p>If you read <a href=\"https://i.stack.imgur.com/6NRkY.jpg\" rel=\"noreferrer\">1</a>,[2],[3],[4] from your link, you'll see that they used Caffe. This framework already contains the trained networks to play with. You don't need to train anything manually, just download the models using .sh scripts in the <code>models/</code> folder.</p>\n\n<p>You want \"plug-and-play process\", it's not so easy because besides the framework, we need the code of the scripts they used and, probably, patch Caffe. I tried to make something using their description. Caffe has Python and Matlab interface but there's more in its internals. </p>\n\n<p>The text below describes my thoughts on how it could be possibly implemented. I'm not sure about my words so it's more like an invitation to research with me than the \"plug-and-play process\". But as no one still answered, let me put it here. Maybe someone will fix me.</p>\n\n<p>So</p>\n\n<p>As far as I understand, they run optimization</p>\n\n<p><code>[sum((net.forwardTo(X, n) - enchanced_layer).^2) + lambda * R(X)] -> min</code></p>\n\n<p>I.e. look for such input <code>X</code> so that the particular layer of the netword would produce the \"enchanced\" data instead of the \"original\" data.</p>\n\n<p>There's a regularization constraint <code>R(X)</code>: <code>X</code> should look like \"natural image\" (without high-frequency noise).</p>\n\n<p><code>X</code> is our target image. The initial point <code>X0</code> is the original image.\n<code>forwardTo(X, n)</code> is what our network produces in the layer <code>n</code> when we feed the input with X. If speak about Caffe, you can make full-forward pass (<code>net.forward</code>) and look at the blob you are interested in (<code>net.blob_vec(n).get_data()</code>).</p>\n\n<p><code>enchanced_layer</code> - we take the original layer blob and \"enchance\" signals in it. What does it mean, I don't know. Maybe they just multiply the values by coefficient, maybe something else.</p>\n\n<p>Thus <code>sum((forwardTo(X, n) - enchanced_net).^2)</code> will become zero when your input image produces exactly what you want in the layer <code>n</code>.</p>\n\n<p><code>lambda</code> is the regularization parameter and <code>R(X)</code> is how <code>X</code> looks natural. I didn't implement it and my results look very noisy. As for it's formula, you can look for it at [2].</p>\n\n<p>I used Matlab and <code>fminlbfgs</code> to optimize.</p>\n\n<p>The key part was to find the gradient of the formula above because the problem has too many dimensions to calculate the gradient numerically.</p>\n\n<p>As I said, I didn't manage to find the gradient of <code>R(X)</code>. As for the main part of the formula, I managed to find it this way:</p>\n\n<ul>\n<li>Set diff blob at the layer <code>n</code> to <code>forwardTo(X, n) - enchanced_net</code>. 
(see caffe documentation for <code>set_diff</code> and <code>set_data</code>, <code>set_data</code> is used for forward and waits for data and <code>set_diff</code> is used for backward propagation and waits for data errors).</li>\n<li>Perform <em>partial</em> backpropagation from layer <code>n-1</code> to the input.</li>\n<li>Input diff blob would contain the gradient we need.</li>\n</ul>\n\n<p>Python and Matlab interfaces do NOT contain partial backward propagation but Caffe C++ internals contain it. I added a patch below to make it available in Matlab.</p>\n\n<p>Result of enhancing the 4th layer:</p>\n\n<p><img src=\"https://i.stack.imgur.com/6NRkY.jpg\" alt=\"Result of enhancing the 4th layer\"></p>\n\n<p>I'm not happy with the results but I think there's something in common with the article. </p>\n\n<ul>\n<li>Here's the code that produces the picture above \"as is\". The entry point is \"run2.m\", \"fit2.m\" contains the fitness function: <a href=\"https://github.com/galchinsky/caf\" rel=\"noreferrer\">https://github.com/galchinsky/caf</a></li>\n<li>Here's caffe patch to Matlab interface to make partial backpropagation available: <a href=\"https://gist.github.com/anonymous/53d7cb44c072ae6320ff\" rel=\"noreferrer\">https://gist.github.com/anonymous/53d7cb44c072ae6320ff</a></li>\n</ul>\n"
}
] |
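<p>For a concrete picture of the procedure sketched in these answers, here is a rough single-step sketch with pycaffe in the spirit of the released dream.ipynb. The model file paths and layer name are hypothetical, the deploy prototxt needs force_backward: true, and the data blob is assumed to already hold a preprocessed image:</p>
<pre><code>import numpy as np
import caffe

net = caffe.Classifier('deploy.prototxt', 'model.caffemodel')
end = 'inception_4c/output'  # layer to enhance; the name depends on the model

src = net.blobs['data']
net.forward(end=end)
dst = net.blobs[end]
dst.diff[:] = dst.data       # objective: boost the layer's own activations
net.backward(start=end)
g = src.diff[0]
src.data[0] += (1.5 / np.abs(g).mean()) * g  # one normalized ascent step on the image
</code></pre>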
31,005,463 | 3 |
<python><neural-network><caffe>
|
2015-06-23T14:22:06.590
| null | 1,107,562 |
How to train new fast-rcnn imageset
|
<p>I am using <strong>fast-rcnn</strong> and trying to train the system on a new class (label).
I followed this: <a href="https://github.com/EdisonResearch/fast-rcnn/tree/master/help/train" rel="nofollow noreferrer">https://github.com/EdisonResearch/fast-rcnn/tree/master/help/train</a> </p>
<ol>
<li><p>Placed the images</p></li>
<li><p>Placed the annotations </p></li>
<li><p>Prepare the ImageSet with all the image name prefix</p></li>
<li><p>Prepared selective search output: train.mat</p></li>
</ol>
<p><strong>Training failed while running train_net.py, with the following error:</strong></p>
<pre></pre>
<p>My questions are:</p>
<ol>
<li>Why am I getting this error?</li>
<li>Do I need to rescale the images to a fixed 256x256 before training?</li>
<li>Do I need to prepare something in order to set the class?</li>
</ol>
|
[
{
"AnswerId": "46239725",
"CreationDate": "2017-09-15T12:34:31.907",
"ParentId": null,
"OwnerUserId": "7596504",
"Title": null,
"Body": "<p>Check out the solution described in the following blog post, Part 4, Issue #4. The solution is to flip the x1 and x2 coordinate values.</p>\n\n<p><a href=\"https://huangying-zhan.github.io/2016/09/22/detection-faster-rcnn.html\" rel=\"nofollow noreferrer\">https://huangying-zhan.github.io/2016/09/22/detection-faster-rcnn.html</a></p>\n\n<p>Following is copied from link:</p>\n\n<p>box [:, 0] > box[:, 2]</p>\n\n<p>Solution: add the following code block in imdb.py</p>\n\n<pre><code>def append_flipped_images(self):\nnum_images = self.num_images\nwidths = self._get_widths()\nfor i in xrange(num_images):\n boxes = self.roidb[i]['boxes'].copy()\n oldx1 = boxes[:, 0].copy()\n oldx2 = boxes[:, 2].copy()\n boxes[:, 0] = widths[i] - oldx2\n boxes[:, 2] = widths[i] - oldx1\n for b in range(len(boxes)):\n if boxes[b][2] < boxes[b][0]:\n boxes[b][0]=0\n assert (boxes[:, 2] >= boxes[:, 0]).all()\n</code></pre>\n"
},
{
"AnswerId": "31603956",
"CreationDate": "2015-07-24T06:47:29.577",
"ParentId": null,
"OwnerUserId": "3985163",
"Title": null,
"Body": "<ol>\n<li>it says that there exists <code>boxes[:,2] < boxes[:, 0]</code>, <code>boxes[:, 2]</code> is the x-max of bounding box while <code>boxes[:, 0]</code> is x-min. So the problem is related to region proposal. I came across with this problem too. I found that it was causes by overflow. I remember that the dtype for boxes is np.uint8(need to check), if the image is too big, you get this error.</li>\n<li>rescale is one solution, however this may influence the performance. You can change the dtype from uint8 to float instead.</li>\n<li>As far as I know, there is no need for that.</li>\n</ol>\n"
},
{
"AnswerId": "31678091",
"CreationDate": "2015-07-28T13:43:07.810",
"ParentId": null,
"OwnerUserId": "1323010",
"Title": null,
"Body": "<p>I'm late to the party, but when I was editing the code this was my yakkity-hack of a solution</p>\n\n<pre><code> for b in range(len(boxes)):\n if boxes[b][2] < boxes[b][0]:\n boxes[b][0] = 0\n assert (boxes[:, 2] >= boxes[:, 0]).all()\n</code></pre>\n\n<p>There are smarter ways to do this, as <em>every single grad student</em> seems to point out, but this works fine.</p>\n"
}
] |
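<p>A tiny demonstration of the uint8 overflow described in the second answer - coordinates above 255 wrap around, which can make x-max smaller than x-min:</p>
<pre><code>import numpy as np

boxes = np.array([[100, 20, 300, 320]])  # x1, y1, x2, y2 on a large image
print(boxes.astype(np.uint8))            # [[100  20  44  64]] -- x2 wrapped below x1
print(boxes.astype(np.float32))          # keeps the true coordinates
</code></pre>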
31,008,493 | 2 |
<python><caffe>
|
2015-06-23T16:34:42.337
| null | 286,579 |
Caffe: Drawing CNN Net
|
<p>I used python code to draw a Net defined in a prototxt file as:</p>
<pre></pre>
<p>It fails to draw. It does not show any error, but the resulting test.png file is a blank white image. Can anyone please help me fix it? It would really help to design new nets quickly.</p>
|
[
{
"AnswerId": "35239450",
"CreationDate": "2016-02-06T09:33:51.583",
"ParentId": null,
"OwnerUserId": "1044366",
"Title": null,
"Body": "<p>Somwhere in mid 2014, Caffe <a href=\"https://github.com/BVLC/caffe/releases/tag/v0.999\" rel=\"nofollow\">changed their proto definition for extensibility</a> which causes this problem. As a result of this change, all the proto files have to be updated to the newer definition. </p>\n\n<p>To do this, Caffe provides the following tools in the <code>distribute/bin/</code> or <code>.build_release/tools</code> directory:</p>\n\n<ol>\n<li><code>upgrade_net_proto_binary.bin</code> </li>\n<li><code>upgrade_net_proto_text.bin</code></li>\n</ol>\n\n<p>Here is a simple example of how to convert your proto text file to a newer format:</p>\n\n<pre><code>./upgrade_net_proto_text.bin /path/to/older_proto_file /path/to/newer_ouput_proto_file\n</code></pre>\n"
},
{
"AnswerId": "33389706",
"CreationDate": "2015-10-28T11:21:49.580",
"ParentId": null,
"OwnerUserId": "2528545",
"Title": null,
"Body": "<p>I had same problem. Based on <a href=\"https://groups.google.com/forum/#!topic/caffe-users/d6fpZTOq5M0\" rel=\"nofollow\">this thread</a>, I've managed to solve this by using older Proto syntax as suggested. For instance I had to do this:</p>\n\n<p>Rename layers definition from <code>layers</code> to <code>layer</code>. All layer type rename by caffe documentation (or by example proto files) - i.e. layer <code>type: CONVOLUTION</code> to <code>type: \"Convolution\"</code>, etc. Substitute newer syntax:</p>\n\n<pre><code>blobs_lr: 1 \nblobs_lr: 1 \nweight_decay: 1\nweight_decay: 0\n</code></pre>\n\n<p>for </p>\n\n<pre><code>param {\n name: \"conv1_w\"\n lr_mult: 1 \n decay_mult: 1\n}\nparam {\n name: \"conv1_b\"\n lr_mult: 2 \n decay_mult: 0\n}\n</code></pre>\n\n<p>Now parsing and new-drawing works just fine. Refer to example .prototxt files in caffe package to get better intuition, how working proto syntax looks like.</p>\n"
}
] |
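<p>Once the prototxt has been upgraded as described above, a drawing call along these lines should produce a non-blank image. This is only a sketch of the usual pycaffe drawing helpers (the file names are placeholders, and pydot/graphviz must be installed):</p>
<pre><code>from google.protobuf import text_format
import caffe
import caffe.draw
from caffe.proto import caffe_pb2

net = caffe_pb2.NetParameter()
with open('train_val.prototxt') as f:
    text_format.Merge(f.read(), net)  # parse the (upgraded) prototxt
caffe.draw.draw_net_to_file(net, 'test.png', rankdir='LR')
</code></pre>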
31,008,673 | 2 |
<python><exception-handling><pycharm><pydev><theano>
|
2015-06-23T16:44:08.180
| null | 5,041,148 |
PyCharm break on exception does not work with Theano
|
<p>I cannot get PyCharm to stop on the line of code where an exception is raised when I import Theano.</p>
<p>My code:</p>
<pre></pre>
<p>I expect the PyCharm debugger to stop on the line, but it throws an exception and exits the debugger:</p>
<pre></pre>
|
[
{
"AnswerId": "34999303",
"CreationDate": "2016-01-25T18:09:00.103",
"ParentId": null,
"OwnerUserId": "648265",
"Title": null,
"Body": "<p>Looks like a bug in one of the libs (maybe both :^)).</p>\n\n<p>For some reason, <code>theano</code>'s and <code>PyCharm</code>'s excepthooks both think of the other one as its ancestor.</p>\n\n<p>Add debug printing into both libs at points where <code>sys.excepthook</code> and member variables that point to the previous handler are set to reveal the setting order. Someone appears to be breaking the handler chaining rules.</p>\n"
},
{
"AnswerId": "34994645",
"CreationDate": "2016-01-25T14:10:54.230",
"ParentId": null,
"OwnerUserId": "5041148",
"Title": null,
"Body": "<p>One hack is to comment out this line <code>sys.excepthook = thunk_hook</code> in <code>.../lib/python2.7/site-packages/theano/gof/link.py</code></p>\n"
}
] |
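<p>A small probe along the lines of the debug printing the first answer suggests - checking who owns sys.excepthook before and after the import (the exact hook shown will depend on the versions involved):</p>
<pre><code>import sys

print(sys.excepthook is sys.__excepthook__)  # True until something hooks it

import theano  # noqa: E402

print(sys.excepthook)  # check whether theano's thunk_hook (gof/link.py) took over
</code></pre>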
31,011,485 | 0 |
<build><lua><compiler-errors><deep-learning><torch>
|
2015-06-23T19:12:20.703
| null | 3,113,501 |
Attempting to install FBLuaLib
|
<p>FbLuaLib: <a href="https://github.com/facebook/fblualib" rel="nofollow">https://github.com/facebook/fblualib</a></p>
<p>To install FBLuaLib, I need to install thpp: <a href="https://github.com/facebook/thpp" rel="nofollow">https://github.com/facebook/thpp</a></p>
<p>I can build folly, fbthrift just fine, but when I run ./build.sh for thpp I get the following error message:</p>
<pre></pre>
|
[] |
31,019,371 | 1 |
<machine-learning><artificial-intelligence><regression><theano><pylearn>
|
2015-06-24T06:40:26.440
| 31,021,241 | 3,993,741 |
Pylearn2 example for time series or sequence prediction
|
<p>Can Pylearn2 be used for time series or sequence prediction of continuous numerical data? Can an LSTM recurrent neural network in Pylearn2 be used for this? If so, can someone post example code in Pylearn2/Theano/Python? </p>
|
[
{
"AnswerId": "31021241",
"CreationDate": "2015-06-24T08:15:29.670",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>My understanding is that PyLearn2 is still not great for any kind of recurrent network, though I believe they are intending to improve support for these kinds of models.</p>\n\n<p>Having said that, there is experimental support, including an LSTM implementation.</p>\n\n<p>Take a look in the PyLearn2 source code in the directory <a href=\"https://github.com/lisa-lab/pylearn2/tree/master/pylearn2/sandbox/rnn\" rel=\"nofollow\">pylearn2/sandbox/rnn</a>, and in particular at the contents of <a href=\"https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/sandbox/rnn/models/rnn.py\" rel=\"nofollow\">pylearn2/sandbox/rnn/models/rnn.py</a> where you'll find an LSTM implementation.</p>\n\n<p>Because of its experimental nature, this code may not work properly, may not be supported fully, and the documentation may be incomplete or inaccurate.</p>\n\n<p>If you're willing to forego the intended ease of use benefits of PyLearn2 and work at a more detailled level then recurrent neural newtworks can be implemented just fine in Theano. There are many tutorials for this, including:</p>\n\n<ul>\n<li><a href=\"http://deeplearning.net/tutorial/lstm.html\" rel=\"nofollow\">LSTM Networks for Sentiment Analysis</a></li>\n<li><a href=\"http://deeplearning.net/tutorial/rnnrbm.html\" rel=\"nofollow\">Modeling and generating sequences of polyphonic music with the RNN-RBM</a></li>\n<li><a href=\"http://deeplearning.net/tutorial/rnnslu.html\" rel=\"nofollow\">Recurrent Neural Networks with Word Embeddings</a></li>\n</ul>\n"
}
] |
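<p>As a bare-bones illustration of the Theano route the answer mentions, here is a minimal recurrent step built with theano.scan - the mechanism the linked tutorials build on (a sketch with arbitrary sizes, not a full LSTM):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')  # (time_steps, input_dim)
W_in = theano.shared(np.random.randn(3, 4).astype('float32'))
W_rec = theano.shared(np.random.randn(4, 4).astype('float32'))

def step(x_t, h_prev):
    return T.tanh(T.dot(x_t, W_in) + T.dot(h_prev, W_rec))

h, _ = theano.scan(step, sequences=x,
                   outputs_info=T.zeros((4,), dtype='float32'))
f = theano.function([x], h[-1])  # final hidden state summarizes the sequence
print(f(np.random.randn(10, 3).astype('float32')))
</code></pre>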
31,025,226 | 1 |
<python><theano>
|
2015-06-24T11:23:52.030
| 31,044,506 | 664,456 |
Theano set_value for casted shared variable
|
<p>In the Theano deep learning tutorial, y is a shared variable that is cast:</p>
<pre></pre>
<p>I later want to set a new value for y.</p>
<p>For GPU this works:</p>
<pre></pre>
<p>For CPU this works:</p>
<pre></pre>
<p>Why does this require a different syntax between GPU and CPU? I would like my code to work for both cases, am I doing it wrong?</p>
|
[
{
"AnswerId": "31044506",
"CreationDate": "2015-06-25T08:00:19.167",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>This is a very similar problem to that described in another <a href=\"https://stackoverflow.com/questions/30627302/can-not-update-a-subset-of-shared-tensor-variable-after-a-cast/30651282#30651282\">StackOverflow question</a>.</p>\n\n<p>The problem is that you are using a <em>symbolic</em> cast operation which turns the shared variable into a symbolic variable.</p>\n\n<p>The solution is to cast the shared variable's value rather than the shared variable itself.</p>\n\n<p>Instead of</p>\n\n<pre><code>y = theano.shared(numpy.asarray(data, dtype=theano.config.floatX))\ny = theano.tensor.cast(y, 'int32')\n</code></pre>\n\n<p>Use</p>\n\n<pre><code>y = theano.shared(numpy.asarray(data, dtype='int32'))\n</code></pre>\n\n<p>Navigating the Theano computational graph via the <code>owner</code> attribute is considered bad form. If you want to alter the shared variable's value, maintain a Python reference to the shared variable and set its value directly.</p>\n\n<p>So, with y being just a shared variable, and not a symbolic variable, you can now just do:</p>\n\n<pre><code>y.set_value(np.asarray(data2, dtype='int32'))\n</code></pre>\n\n<p>Note that the casting is happening in numpy again, instead of Theano.</p>\n"
}
] |
31,028,519 | 2 |
<computer-vision><deep-learning><caffe>
|
2015-06-24T13:52:36.120
| null | 2,163,392 |
GoogleNet can't read images when ImageData type field is used in train_val.prototxt
|
<p>I am trying to use caffe's implementation of GoogleNet. I want to train the deep network according to a list of files and labels in a text file, but the problem is that when I train the deep network, it can't read the files.</p>
<p>Here are the train_val.prototxt definitions, where I use the ImageData type instead of big LMDB files with the 'Data' type:</p>
<pre></pre>
<p>Here I used ImageData type for the googlenet rather than type Data as suggested here: <a href="https://stackoverflow.com/questions/30980338/lmdb-files-and-how-they-are-used-for-caffe-deep-learning-network">LMDB files and how they are used for caffe deep learning network</a></p>
<p>So, I have the text file (file_paths_and_labels.txt) where each line contains the following:</p>
<pre></pre>
<p>where the path to the image is the image's address and the label is the label of the image (there are 10 different labels).</p>
<p>I want to know exactly where I am wrong, because when I run the deep network training command</p>
<pre></pre>
<p>I get the following error:</p>
<pre></pre>
<p>I think GoogleNet is not finding the data in my text file. What is the problem? Is it the syntax of my train_val.prototxt file?</p>
|
[
{
"AnswerId": "31141286",
"CreationDate": "2015-06-30T14:37:15.650",
"ParentId": null,
"OwnerUserId": "809993",
"Title": null,
"Body": "<p>You're specifying the source using the wrong parameter. For IMAGE_DATA you need to use <strong>image_data_param</strong> instead of <strong>data_param</strong>. Because you specify your source in data_param, and ImageDataLayer looks at image_data_param, the value of source is the empty string. You can see that in the log here:</p>\n\n<pre><code>I0624 10:36:11.525106 15246 image_data_layer.cpp:36] Opening file \n</code></pre>\n\n<p>The format of this line should be:</p>\n\n<pre><code>Opening file <filename>\n</code></pre>\n\n<p>while in your log there's an empty space following \"Opening file\".</p>\n"
},
{
"AnswerId": "33526867",
"CreationDate": "2015-11-04T16:20:20.513",
"ParentId": null,
"OwnerUserId": "1351629",
"Title": null,
"Body": "<p>That my be helpful for people who will land here in future.<br>\nI had kind of the same problem, but I was loading images from <code>leveldb</code> file.<br>\nAnd this error appeared when I copied <code>leveldb</code> files, generated on machine <strong>A</strong>, to another machine <strong>B</strong> and tried to run caffe on <strong>B</strong>.\nThe problem was solved by regenerating <code>leveldb</code> files once again on the machine <strong>B</strong>.</p>\n\n<p>BTW. may be anybody knows why does the machine on which was <code>leveldb</code> generated matter?</p>\n"
}
] |
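<p>For reference, a sketch of what the corrected layer definition might look like - the accepted answer's point is that ImageData layers read their source from image_data_param, not data_param (the field values here are placeholders):</p>
<pre><code>layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  image_data_param {    # not data_param -- this is what the ImageData layer reads
    source: "file_paths_and_labels.txt"
    batch_size: 32
    shuffle: true
  }
}
</code></pre>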