QuestionId int64 388k 59.1M | AnswerCount int64 0 47 | Tags stringlengths 7 102 | CreationDate stringlengths 23 23 | AcceptedAnswerId float64 388k 59.1M ⌀ | OwnerUserId float64 184 12.5M ⌀ | Title stringlengths 15 150 | Body stringlengths 12 29.3k | answers listlengths 0 47 |
---|---|---|---|---|---|---|---|---|
27,789,597 | 1 |
<python><python-2.7><theano>
|
2015-01-05T23:18:13.470
| 27,831,625 | 4,416,268 |
Theano installation problems on a cluster
|
<p>I am trying to install Theano on a cluster node running "Red Hat Enterprise Linux Client release 5.10 (Tikanga)". I do not have admin permissions on the cluster. Hence, I installed Theano on my local user profile. The following are the version details of my installation:</p>
<blockquote>
<ol>
<li>The Python version is : Python 2.7.3</li>
<li>The version installed on the cluster is: version 1.6.2</li>
<li></li>
<li>nose version 1.3.4</li>
</ol>
</blockquote>
<p>I installed Theano in the following manner</p>
<blockquote>
<ol>
<li></li>
<li></li>
<li>The installed Theano version is: 0.6.0</li>
</ol>
</blockquote>
<p>I then tried to run theano.test() inside a python2.7 console. The test ran smoothly for a couple of minutes before I got the following errors:</p>
<pre></pre>
<p>Please help me install Theano correctly. I have come across solutions on this group suggesting re-installing the latest version of Theano. However, I have already installed Theano from the git repository. I have tried this possible solution already: <a href="https://stackoverflow.com/a/18238732/4416268">https://stackoverflow.com/a/18238732/4416268</a> but still I get the same error.</p>
|
[
{
"AnswerId": "27831625",
"CreationDate": "2015-01-08T01:33:53.683",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>Use Theano flag: <code>blas.ldflags=-lblas -lgfortran</code></p>\n\n<p>If you do not know how to use Theano flag, check this page: <a href=\"http://deeplearning.net/software/theano/library/config.html\" rel=\"nofollow\">http://deeplearning.net/software/theano/library/config.html</a></p>\n\n<p>Your problem is discussed on that page: <a href=\"http://deeplearning.net/software/theano/install_ubuntu.html\" rel=\"nofollow\">http://deeplearning.net/software/theano/install_ubuntu.html</a></p>\n"
}
] |
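The same flags can also be supplied from Python without editing .theanorc; a minimal sketch, assuming a standard Theano install (THEANO_FLAGS is read once, at import time):

```python
# Sketch: pass the BLAS link flags from the accepted answer via THEANO_FLAGS.
import os
os.environ["THEANO_FLAGS"] = "blas.ldflags=-lblas -lgfortran"

import theano  # the flags must be set before this first import
```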
27,816,260 | 2 |
<python><theano>
|
2015-01-07T09:33:03.903
| 27,823,964 | 4,397,268 |
What is equivalent to a[a < 0] = 0 in Theano?
|
<p>What is the equivalent of NumPy's a[a < 0] = 0 in Theano (for a tensor variable)?
I want to set all matrix elements that are smaller than a number to zero.</p>
|
[
{
"AnswerId": "27823964",
"CreationDate": "2015-01-07T16:27:19.703",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>This work:</p>\n\n<pre><code>import theano\na=theano.tensor.matrix()\nidxs=(a<0).nonzero()\nnew_a=theano.tensor.set_subtensor(a[idxs], 0)\n</code></pre>\n\n<p>Do not forget, Theano is a symbolic language. So the variable a isn't changed in the user graph. It is the new variable new_a that contain the new value and a still have the old value.</p>\n\n<p>Theano will optimize this to work inplace if possible.</p>\n"
},
{
"AnswerId": "38614428",
"CreationDate": "2016-07-27T13:30:40.920",
"ParentId": null,
"OwnerUserId": "5424661",
"Title": null,
"Body": "<p>This also works, and can also add upper bound limit</p>\n\n<pre><code>import theano\nimport theano.tensor as T\na = T.matrix()\nb = a.clip(0.0)\n</code></pre>\n\n<p>or if you want upper bound as well, you might like to try:</p>\n\n<pre><code>b = T.clip(a, 0.0, 1.0)\n</code></pre>\n\n<p>where 1.0 the place one want set upper bound.</p>\n\n<p>check the documents <a href=\"http://deeplearning.net/software/theano/library/tensor/basic.html\" rel=\"nofollow\">here</a></p>\n"
}
] |
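A side-by-side check of the two answers against the NumPy idiom from the title; a minimal sketch with arbitrary values:

```python
import numpy as np
import theano
import theano.tensor as T

a = T.matrix()
f_subtensor = theano.function([a], T.set_subtensor(a[(a < 0).nonzero()], 0))
f_clip = theano.function([a], T.clip(a, 0.0, np.inf))

x = np.array([[-1.0, 2.0], [3.0, -4.0]], dtype=theano.config.floatX)
expected = x.copy()
expected[expected < 0] = 0  # the NumPy idiom from the question title
print(np.allclose(f_subtensor(x), expected), np.allclose(f_clip(x), expected))
```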
27,816,788 | 1 |
<python><numpy><theano>
|
2015-01-07T09:59:39.383
| null | 1,465,156 |
Better way than T.eye in theano
|
<p>The problem is: given an arbitrary 1-d vector, expand it into basis vectors of dimension n.</p>
<p>The rule of the expansion is: each element in the vector is the index of a column in the n*n identity matrix.</p>
<p>For example:</p>
<pre></pre>
<p>Given n, we have the identity matrix:</p>
<pre></pre>
<p>Expanding each element using the rule, we have:</p>
<pre></pre>
<p>I want to solve this problem using theano, with n very large (>50k) and the vector very long (>10k), so efficiency is important.</p>
<p>The solution using numpy is trivial, but the numpy.eye function may cost too much; we may use another method to make it faster. Comparing the following methods:</p>
<pre></pre>
<p>However, the second method may not work in theano, since looping over a symbolic variable is not trivial. Is there a method that can avoid using T.eye?</p>
<p>The input can be an arbitrary 1-d vector.</p>
|
[
{
"AnswerId": "27817819",
"CreationDate": "2015-01-07T10:53:49.357",
"ParentId": null,
"OwnerUserId": "764322",
"Title": null,
"Body": "<p>You can try another approach. In my computer:</p>\n\n<pre><code>>>> %timeit np.eye(n)[y_value]\n1 loops, best of 3: 544 ms per loop\n</code></pre>\n\n<p>However, you don't need to create the whole array if you know in advance the rows you want. You can do this:</p>\n\n<pre><code>>>> n = 25500\n>>> n_rows = y_value.size\n>>> r = np.zeros((n_rows, n))\n>>> r[range(n_rows), y_value] = 1\n</code></pre>\n\n<p>You create a way smaller array, only <code>y x n</code> where <code>y</code> is the size of your index vector, and populate it in every row. The timing in my computer is:</p>\n\n<pre><code>>>> %%timeit \n..: r = np.zeros((n_rows, n))\n..: r[range(n_rows), y_value] = 1\n\n100 loops, best of 3: 3.8 ms per loop\n</code></pre>\n\n<p><code>x151</code> speedup in my laptop.</p>\n\n<p>Additionally, if you don't want an array full of zeros at the rear (x-axis), you could do:</p>\n\n<pre><code>>>> %%timeit \n..: r = np.zeros((n_rows, y_value.max()+1))\n..: r[range(n_rows), y_value] = 1\n\n100000 loops, best of 3: 16 µs per loop\n</code></pre>\n\n<p>Which is even faster, but the resulting array is <code>y x ymax</code>, in this case <code>99 x 100</code>, which might not be what you want.</p>\n"
}
] |
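The accepted trick carries over to Theano without T.eye; a minimal symbolic sketch, assuming the index vector and dimension are called y and n:

```python
import numpy as np
import theano
import theano.tensor as T

y = T.ivector("y")  # the 1-d index vector from the question
n = T.iscalar("n")  # dimension of the basis vectors
rows = T.arange(y.shape[0])
onehot = T.set_subtensor(T.zeros((y.shape[0], n))[rows, y], 1)

expand = theano.function([y, n], onehot)
print(expand(np.array([0, 2, 1], dtype="int32"), 3))  # rows of the 3x3 identity
```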
27,824,979 | 1 |
<python><theano>
|
2015-01-07T17:21:12.387
| 27,831,513 | 4,397,268 |
normalization in theano for any image?
|
<p>I wrote the code below, but it is very slow and, of course, does not work correctly!</p>
<p><strong>What do I have to do?</strong></p>
<pre></pre>
|
[
{
"AnswerId": "27831513",
"CreationDate": "2015-01-08T01:20:08.733",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>First that section of your code do nothing. Just remove this line</p>\n\n<pre><code>T.set_subtensor(self.pooled_out[i], self.pooled_out[i])\n</code></pre>\n\n<p>Without full code, I can't test my solution, but I think this would do what you want:</p>\n\n<pre><code>pmin = self.pooled_out.min(axis=[2,3], keepdims=True)\npmax = self.pooled_out.max(axis=[2,3], keepdims=True)\nnormalized_pooled_out = (self.pooled_out - pmin)/pmax\n</code></pre>\n\n<p>Then normalized_pooled_out contain the symbolic variable that have the value I think you want.</p>\n"
}
] |
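The keepdims trick in the answer can be verified with NumPy alone; a small sketch with a random 4-d tensor standing in for pooled_out:

```python
import numpy as np

pooled_out = np.random.rand(2, 3, 5, 5)  # (batch, feature maps, height, width)
pmin = pooled_out.min(axis=(2, 3), keepdims=True)
pmax = pooled_out.max(axis=(2, 3), keepdims=True)
normalized = (pooled_out - pmin) / pmax
print(normalized.shape)  # broadcasting keeps the original (2, 3, 5, 5) shape
```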
27,827,656 | 1 |
<cuda><deep-learning><caffe>
|
2015-01-07T19:59:57.043
| null | 4,096,101 |
Caffe Installation Issues
|
<p>I'm having some problem while installing Caffe. Please let me know if anyone have come across the same issue. Thanks.</p>
<blockquote>
<p>make runtest<br>
.build_release/test/test_all.testbin 0 --gtest_shuffle<br>
Cuda number of devices: 1<br>
Setting to use device 0<br>
Current device id: 0<br>
Note: Randomizing tests' orders with a seed of 88789 .<br>
[==========] Running 838 tests from 169 test cases.<br>
[----------] Global test environment set-up.<br>
[----------] 3 tests from ImageDataLayerTest/3, where TypeParam = caffe::DoubleGPU<br>
[ RUN ] ImageDataLayerTest/3.TestResize<br>
F0107 14:26:04.664185 3079 math_functions.cpp:91] Check failed: error == cudaSuccess (11 vs. 0) invalid argument<br>
<strong>* Check failure stack trace: *</strong><br>
@ 0x2ab3f5243daa (unknown)<br>
@ 0x2ab3f5243ce4 (unknown)<br>
@ 0x2ab3f52436e6 (unknown)<br>
@ 0x2ab3f5246687 (unknown)<br>
@ 0x6bdc35 caffe::caffe_copy<>()<br>
@ 0x7439af caffe::BasePrefetchingDataLayer<>::Forward_gpu()<br>
@ 0x428da2 caffe::Layer<>::Forward()<br>
@ 0x62ff53 caffe::ImageDataLayerTest_TestResize_Test<>::TestBody()<br>
@ 0x657363 testing::internal::HandleExceptionsInMethodIfSupported<>()<br>
@ 0x64de07 testing::Test::Run()<br>
@ 0x64deae testing::TestInfo::Run()<br>
@ 0x64dfb5 testing::TestCase::Run()<br>
@ 0x6512f8 testing::internal::UnitTestImpl::RunAllTests()<br>
@ 0x651587 testing::UnitTest::Run()<br>
@ 0x41d3a0 main<br>
@ 0x2ab3f8396ec5 (unknown)<br>
@ 0x4243d7 (unknown)<br>
@ (nil) (unknown)<br>
make: *** [runtest] Aborted (core dumped) </p>
</blockquote>
<p>Ubuntu 14.04</p>
<pre></pre>
|
[
{
"AnswerId": "27973046",
"CreationDate": "2015-01-15T21:10:46.567",
"ParentId": null,
"OwnerUserId": "3646384",
"Title": null,
"Body": "<p>There seems to be a problem with the GPU support. Maybe it does not support your GPU. I would try installing Caffe without GPU support. All you need to do is to uncomment </p>\n\n<p><div class=\"snippet\" data-lang=\"js\" data-hide=\"false\">\r\n<div class=\"snippet-code\">\r\n<pre class=\"snippet-code-html lang-html prettyprint-override\"><code>CPU_ONLY := 1</code></pre>\r\n</div>\r\n</div>\r\n\nin Makefile.config and then make again. <a href=\"http://caffe.berkeleyvision.org/installation.html\" rel=\"nofollow\">Here</a> are the instructions. </p>\n"
}
] |
27,845,374 | 1 |
<python><gpu><theano>
|
2015-01-08T16:50:56.730
| 27,996,360 | 4,397,268 |
memory error in my program when I set theano.config.device to gpu
|
<p>My graphics card is a GT 550M.</p>
<p>When I run my program on the GPU, it gives the following error, and I don't know how to fix this problem:</p>
<pre></pre>
|
[
{
"AnswerId": "27996360",
"CreationDate": "2015-01-17T05:07:58.200",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>The error by cuda are returned async. So part of the error message can be unrelated. Here is the first line:</p>\n\n<p>MemoryError: error freeing device pointer 0x0000000500C60000 (the launch timed out and was terminated)</p>\n\n<p>The answer is in the second part: the launch timed out and was terminated</p>\n\n<p>You GPU is attached to a monitor. In that case, there is a limit of 5s for each GPU kernel call. It happen that it got busted and the driver killed that kernel. This is to prevent the screen from being not responsive.</p>\n\n<p>possible solution:\n1) use a different gpu for the monitor.\n2) make the kernel faster by using small input data (lower batch size for example)\n3) buy a faster GPU, not sure it will work and if it work with your current size, the problem will appear with bigger size.</p>\n"
}
] |
27,873,828 | 1 |
<ubuntu><lua><torch>
|
2015-01-10T07:20:01.877
| null | 4,439,342 |
undefined symbol: spotrs_ in torch7
|
<p>I am having a problem with torch7 in Ubuntu 14.04. Error:</p>
<p>error loading module "libtorch" from "libtorch.so": undefined symbol: spotrs_</p>
<p>I am trying to import it in Lua 5.1.5. I don't have any problem on Mac.</p>
<p>Thanks in advance.</p>
|
[
{
"AnswerId": "27898847",
"CreationDate": "2015-01-12T09:32:37.670",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<p>Have you used Torch <a href=\"https://github.com/torch/ezinstall\" rel=\"nofollow\">ezinstall</a> installer? <code>spotrs</code> is a LAPACK function, so you need to make sure you have e.g <a href=\"https://github.com/torch/ezinstall/blob/83d8ccb/install-deps#L7-L22\" rel=\"nofollow\">OpenBLAS</a> properly installed.</p>\n"
}
] |
27,878,446 | 3 |
<android><java-native-interface><caffe>
|
2015-01-10T16:30:44.100
| 28,625,180 | 1,282,137 |
Storing file on Android for Native reading
|
<p>I'm writing an app for Android and I'm using the caffe library. My problem is that on start I need to initialize caffe, which is done by passing two files (the structures of the network) to caffe.
The problem is that I don't know how to store the extra files on the device. I've added the model file to assets, but I don't know how I can read it using a file path. Can you tell me where to store these files so they can be accessed using a file path?</p>
<p>Thanks for any ideas.</p>
|
[
{
"AnswerId": "27878470",
"CreationDate": "2015-01-10T16:34:14.953",
"ParentId": null,
"OwnerUserId": "3933089",
"Title": null,
"Body": "<p>Put them into your project as assets, and then when the app starts, you can read them from the assets and copy them into the app's private storage area. You can find this directory using <a href=\"http://developer.android.com/reference/android/content/Context.html#getFilesDir()\" rel=\"nofollow\"><code>Context.getFilesDir()</code></a>.</p>\n\n<p>From there, you'll be able to pass the files to Caffe.</p>\n"
},
{
"AnswerId": "27878463",
"CreationDate": "2015-01-10T16:33:24.010",
"ParentId": null,
"OwnerUserId": "4440363",
"Title": null,
"Body": "<p>This should do it. Just copy those files to data directory from asset folder. If you already have those files there just load them.</p>\n\n<pre><code>String toPath = \"/data/data/\" + getPackageName(); // Your application path\n\n\n\n\nprivate static boolean copyAssetFolder(AssetManager assetManager,\n String fromAssetPath, String toPath) {\n try {\n String[] files = assetManager.list(fromAssetPath);\n new File(toPath).mkdirs();\n boolean res = true;\n for (String file : files)\n if (file.contains(\".\"))\n res &= copyAsset(assetManager, \n fromAssetPath + \"/\" + file,\n toPath + \"/\" + file);\n else \n res &= copyAssetFolder(assetManager, \n fromAssetPath + \"/\" + file,\n toPath + \"/\" + file);\n return res;\n } catch (Exception e) {\n e.printStackTrace();\n return false;\n }\n }\n\nprivate static boolean copyAsset(AssetManager assetManager,\n String fromAssetPath, String toPath) {\n InputStream in = null;\n OutputStream out = null;\n try {\n in = assetManager.open(fromAssetPath);\n new File(toPath).createNewFile();\n out = new FileOutputStream(toPath);\n copyFile(in, out);\n in.close();\n in = null;\n out.flush();\n out.close();\n out = null;\n return true;\n } catch(Exception e) {\n e.printStackTrace();\n return false;\n }\n}\n\nprivate static void copyFile(InputStream in, OutputStream out) throws IOException {\n byte[] buffer = new byte[1024];\n int read;\n while((read = in.read(buffer)) != -1){\n out.write(buffer, 0, read);\n }\n}\n</code></pre>\n"
},
{
"AnswerId": "28625180",
"CreationDate": "2015-02-20T09:08:48.020",
"ParentId": null,
"OwnerUserId": "1282137",
"Title": null,
"Body": "<p>Assets are packaged and access only using special methods, so i solved problem by access file and then copy it to new location which i passed to native method.</p>\n"
}
] |
27,890,137 | 1 |
<c++><macos><opencv><osx-yosemite><caffe>
|
2015-01-11T17:44:37.577
| 27,904,334 | 2,288,621 |
Undefined symbols for architecture x86_64: for caffe build
|
<p>I got this error for <a href="http://caffe.berkeleyvision.org/" rel="noreferrer">caffe</a> build. How can I fix it?
I'm using Mac OSX Yosemite 10.10.1.</p>
<p>CONSOLE LOG</p>
<pre></pre>
<p>src/caffe/layers/window_data_layer.cpp includes</p>
<pre></pre>
<p>It seems I have the correct libraries. (Is this the right thing to check?)</p>
<pre></pre>
<p>OpenCV libraries seem to be built with libstdc++, because <a href="http://caffe.berkeleyvision.org/installation.html" rel="noreferrer">the official install guide</a> suggests so.</p>
<pre></pre>
<p>All the build commands correctly use the -stdlib=libstdc++ option.</p>
<pre></pre>
<p>Thank for your help in advance!</p>
<hr>
<p>Now I find I can use ld with the -v option.
I'll keep investigating.</p>
<pre></pre>
<hr>
<p>The dylibs seem to contain the appropriate symbols.</p>
<pre></pre>
<hr>
<p>The dylibs seem to have the appropriate architecture. Hmm...</p>
<pre></pre>
<hr>
<p>Found a nice option for ld, which logs each file that ld loads.</p>
<pre></pre>
|
[
{
"AnswerId": "27904334",
"CreationDate": "2015-01-12T14:32:25.050",
"ParentId": null,
"OwnerUserId": "2288621",
"Title": null,
"Body": "<p>Solved!</p>\n\n<p><code>cv::imread(cv::String const&, int)</code> is defined on <code>libopencv_imgcodecs.dylib</code> and\nMakefile is missing it.</p>\n\n<p>So, I added <code>opencv_imgcodecs</code> to Makefile.</p>\n\n<pre><code>LIBRARIES += glog gflags protobuf leveldb snappy \\\n lmdb \\\n boost_system \\\n hdf5_hl hdf5 \\\n opencv_imgcodecs opencv_highgui opencv_imgproc opencv_core pthread\n</code></pre>\n"
}
] |
27,948,324 | 2 |
<python><machine-learning><theano>
|
2015-01-14T16:54:47.867
| null | 4,178,757 |
Implementing LeCun Local Contrast Normalization with Theano
|
<p>I'm trying to use code that I found to implement the LeCun Local Contrast Normalization, but I get an incorrect result. The code is in Python and uses the Theano library.</p>
<pre></pre>
<p>Here is the testing code:</p>
<pre></pre>
<p>Here is the result: </p>
<p><img src="https://i.stack.imgur.com/UnCq9.jpg" alt="Example image processing results"></p>
<p><em>(left to right: origin, my result, the expected result)</em> </p>
<p>Could someone tell me what I did wrong with the code?</p>
|
[
{
"AnswerId": "29746766",
"CreationDate": "2015-04-20T11:35:44.277",
"ParentId": null,
"OwnerUserId": "4810149",
"Title": null,
"Body": "<p>I think these two lines may have some mistakes on the matrix axes:</p>\n\n<pre><code>per_img_mean = denom.mean(axis=[1, 2])\ndivisor = T.largest(per_img_mean.dimshuffle(0, 'x', 'x', 1), denom)\n</code></pre>\n\n<p>and it should be rewritten as:</p>\n\n<pre><code>per_img_mean = denom.mean(axis=[2, 3])\ndivisor = T.largest(per_img_mean.dimshuffle(0, 1, 'x', 'x'), denom)\n</code></pre>\n"
},
{
"AnswerId": "35259710",
"CreationDate": "2016-02-07T22:05:46.557",
"ParentId": null,
"OwnerUserId": "5896264",
"Title": null,
"Body": "<p>Here is how I implemented local contrast normalization as reported in Jarrett et al (<a href=\"http://yann.lecun.com/exdb/publis/pdf/jarrett-iccv-09.pdf\">http://yann.lecun.com/exdb/publis/pdf/jarrett-iccv-09.pdf</a>). You can use it as a separate layer.</p>\n\n<p>I tested it on the code from the LeNet tutorial of theano in which I applied LCN to the input and to each convolutional layer which yields slightly better results.</p>\n\n<p>You can find the full code here:\n<a href=\"https://github.com/jostosh/theano_utils/blob/master/lcn.py\">https://github.com/jostosh/theano_utils/blob/master/lcn.py</a> </p>\n\n<pre><code>class LecunLCN(object):\ndef __init__(self, X, image_shape, threshold=1e-4, radius=9, use_divisor=True):\n \"\"\"\n Allocate an LCN.\n\n :type X: theano.tensor.dtensor4\n :param X: symbolic image tensor, of shape image_shape\n\n :type image_shape: tuple or list of length 4\n :param image_shape: (batch size, num input feature maps,\n image height, image width)\n :type threshold: double\n :param threshold: the threshold will be used to avoid division by zeros\n\n :type radius: int\n :param radius: determines size of Gaussian filter patch (default 9x9)\n\n :type use_divisor: Boolean\n :param use_divisor: whether or not to apply divisive normalization\n \"\"\"\n\n # Get Gaussian filter\n filter_shape = (1, image_shape[1], radius, radius)\n\n self.filters = theano.shared(self.gaussian_filter(filter_shape), borrow=True)\n\n # Compute the Guassian weighted average by means of convolution\n convout = conv.conv2d(\n input=X,\n filters=self.filters,\n image_shape=image_shape,\n filter_shape=filter_shape,\n border_mode='full'\n )\n\n # Subtractive step\n mid = int(numpy.floor(filter_shape[2] / 2.))\n\n # Make filter dimension broadcastable and subtract\n centered_X = X - T.addbroadcast(convout[:, :, mid:-mid, mid:-mid], 1)\n\n # Boolean marks whether or not to perform divisive step\n if use_divisor:\n # Note that the local variances can be computed by using the centered_X\n # tensor. If we convolve this with the mean filter, that should give us\n # the variance at each point. We simply take the square root to get our\n # denominator\n\n # Compute variances\n sum_sqr_XX = conv.conv2d(\n input=T.sqr(centered_X),\n filters=self.filters,\n image_shape=image_shape,\n filter_shape=filter_shape,\n border_mode='full'\n )\n\n\n # Take square root to get local standard deviation\n denom = T.sqrt(sum_sqr_XX[:,:,mid:-mid,mid:-mid])\n\n per_img_mean = denom.mean(axis=[2,3])\n divisor = T.largest(per_img_mean.dimshuffle(0, 1, 'x', 'x'), denom)\n # Divisise step\n new_X = centered_X / T.maximum(T.addbroadcast(divisor, 1), threshold)\n else:\n new_X = centered_X\n\n self.output = new_X\n\n\ndef gaussian_filter(self, kernel_shape):\n x = numpy.zeros(kernel_shape, dtype=theano.config.floatX)\n\n def gauss(x, y, sigma=2.0):\n Z = 2 * numpy.pi * sigma ** 2\n return 1. / Z * numpy.exp(-(x ** 2 + y ** 2) / (2. * sigma ** 2))\n\n mid = numpy.floor(kernel_shape[-1] / 2.)\n for kernel_idx in xrange(0, kernel_shape[1]):\n for i in xrange(0, kernel_shape[2]):\n for j in xrange(0, kernel_shape[3]):\n x[0, kernel_idx, i, j] = gauss(i - mid, j - mid)\n\n return x / numpy.sum(x)\n</code></pre>\n"
}
] |
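A hypothetical usage sketch for the LecunLCN class in the answer above; the batch shape and random input are arbitrary:

```python
import numpy
import theano
import theano.tensor as T

# Assumes the LecunLCN class from the answer above is in scope.
X = T.tensor4("X")
lcn = LecunLCN(X, image_shape=(1, 1, 64, 64))  # batch of one grayscale image
f = theano.function([X], lcn.output)

img = numpy.random.rand(1, 1, 64, 64).astype(theano.config.floatX)
print(f(img).shape)
```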
27,971,707 | 1 |
<python><opencl><theano>
|
2015-01-15T19:43:18.510
| 28,749,022 | 1,085,946 |
Using Python+Theano with OpenCL in an AMD GPU
|
<p>I'm trying to use Python with Theano to accelerate some code with OpenCL. I installed and as instructed (I think), and got no errors. The installation detected the OpenCL runtime installed.</p>
<p>I just cannot run the Theano example for OpenCL, mainly because I don't know how to specify my GPU. My GPU is a , according to . All code in the Theano documentation uses and the only place where OpenCL is mentioned, it says where is a number.</p>
<p>I tried and got an error saying that the correct format is . I have since tried any number of combinations of numbers ( and such), and always get an error.</p>
<p>My system is Ubuntu 14.04 x64 and my hardware is a Toshiba Satellite, 15". I installed with , and later installed following the instructions on their site.</p>
<p>What am I doing wrong?</p>
|
[
{
"AnswerId": "28749022",
"CreationDate": "2015-02-26T17:33:51.500",
"ParentId": null,
"OwnerUserId": "1828289",
"Title": null,
"Body": "<p>opencl0:0 is correct. Could you confirm that pyopencl works? You may have a problem with your opencl (or drivers/cl compiler).</p>\n\n<p>However, I think Theano does not quite work with OpenCL at the moment. The current state is there is partial support, enough for \"hello world\", but not enough to run any significant code.</p>\n\n<p>See:</p>\n\n<p><a href=\"https://github.com/Theano/Theano/issues/2189\">https://github.com/Theano/Theano/issues/2189</a></p>\n\n<p><a href=\"https://github.com/Theano/Theano/issues/1471\">https://github.com/Theano/Theano/issues/1471</a></p>\n\n<p><a href=\"https://github.com/Theano/Theano/issues/2190\">https://github.com/Theano/Theano/issues/2190</a></p>\n\n<p><a href=\"https://github.com/Theano/Theano/pull/1732\">https://github.com/Theano/Theano/pull/1732</a></p>\n\n<p>To summarize, no, most stuff is not ported (including Elemwise, for example, which is a common op). I would really like to see Theano on OpenCL though. That would be a great thing for AMD to pitch in on. Soon :)</p>\n"
}
] |
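The pyopencl check the answer suggests is a short device enumeration; a minimal sketch, assuming pyopencl is installed:

```python
import pyopencl as cl

# List every OpenCL platform and device the runtime can see;
# the AMD GPU should appear here if the driver/ICD is set up correctly.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(platform.name, "->", device.name)
```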
27,972,386 | 1 |
<deep-learning><caffe>
|
2015-01-15T20:27:34.277
| null | 678,392 |
Caffe Deep Learning Library example
|
<p>Here is an example from
<a href="http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html" rel="nofollow">http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html</a></p>
<p>I'm kind of lost. What am I supposed to infer from this example? </p>
<pre></pre>
|
[
{
"AnswerId": "31407205",
"CreationDate": "2015-07-14T12:54:43.357",
"ParentId": null,
"OwnerUserId": "2466336",
"Title": null,
"Body": "<p>This snippet is to explain a feature of Caffe's Blob class shielding the user from the details of CPU<->GPU memory transfer.</p>\n\n<p>My attempt at elaborating on the comments in the code:</p>\n\n<p>It assumes you've already declared a Blob object and populated with data. What the data represents is irrelevant. The actual declaration of the Blob object and its initialization are missing from this snippet.</p>\n\n<pre><code>// Assuming that data are on the CPU initially, and we have a blob.\nconst Dtype* foo;\nDtype* bar;\n</code></pre>\n\n<p>Since the data populating the blob resides in CPU memory, making use of it on a GPU device requires a transfer.</p>\n\n<pre><code>foo = blob.gpu_data(); // data copied cpu->gpu.\n</code></pre>\n\n<p>But if you're getting copying that same data to somewhere else on CPU memory, the Blob object won't have to perform the expensive copy operations required for CPU<->GPU transfers.</p>\n\n<pre><code>foo = blob.cpu_data(); // no data copied since both have up-to-date contents.\n</code></pre>\n\n<p>Initializing the 'bar' object with data residing in GPU memory. It's already copied them once. No need to repeat the costly copy operations.</p>\n\n<pre><code>bar = blob.mutable_gpu_data(); // no data copied.\n// ... some operations ...\nbar = blob.mutable_gpu_data(); // no data copied when we are still on GPU.\n</code></pre>\n\n<p>The Blob class keeps track of whether the CPU and CPU copies are identical or if one was modified, requiring a renewed copy in order to keep them identical.</p>\n\n<pre><code>foo = blob.cpu_data(); // data copied gpu->cpu, since the gpu side has modified the data\nfoo = blob.gpu_data(); // no data copied since both have up-to-date contents\n</code></pre>\n\n<p>Now we're just going back and forth and seeing what will trigger a copy and will fall back on a cached copy of the data.</p>\n\n<pre><code>bar = blob.mutable_cpu_data(); // still no data copied.\nbar = blob.mutable_gpu_data(); // data copied cpu->gpu.\nbar = blob.mutable_cpu_data(); // data copied gpu->cpu.\n</code></pre>\n"
}
] |
27,984,493 | 1 |
<theano><deep-learning>
|
2015-01-16T12:58:58.327
| 27,989,893 | 1,926,931 |
Evaluating and modifying theano tensors when stuff is on GPU
|
<p>I have been seriously stuck on something for ages now. I need some help.</p>
<p>I am running a Theano conv network on the GPU.
The network has a loss function as such:</p>
<pre><code>def mse(x, t):
    return T.mean((x - t) ** 2)
</code></pre>
<p>Here x is the predicted value of a rectified linear unit and t is the expected value.</p>
<p>Now for a particular learning problem I am trying to modify the function such that I want to threshold the value of x. So essentially something as simple as this:</p>
<p>x[x>ts] = ts</p>
<p>But I am really struggling with this. I tried so many things.</p>
<p></p>
<p>Apart from the three prints, which all print, everything else gives me an error.
So I am at my wits' end about how to do this simple stuff.
Is it because this stuff is on the GPU?</p>
<p>I did test the code at a local Python prompt, by constructing a numpy array and converting it into a tensor shared variable. The different approaches above work.
But I am conscious that the type is theano.tensor.sharedvar.TensorSharedVariable and not theano.tensor.var.TensorVariable.</p>
<p>I would really appreciate if some one gives me a helping hand here.</p>
<p>Regards</p>
|
[
{
"AnswerId": "27989893",
"CreationDate": "2015-01-16T17:53:48.647",
"ParentId": null,
"OwnerUserId": "1926931",
"Title": null,
"Body": "<p>Please find the answer to this question given by pascal at\n<a href=\"https://groups.google.com/forum/#!topic/theano-users/cNnvw2rUHc8\" rel=\"nofollow\">https://groups.google.com/forum/#!topic/theano-users/cNnvw2rUHc8</a></p>\n\n<p>The failures are correct because the input values are not being provided at the time the function is being called, since it is symbolic.</p>\n\n<p>The answer is to use T.minimum(x,threshold)</p>\n"
}
] |
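Following the accepted answer, the thresholded loss can be written symbolically with T.minimum instead of in-place assignment; a minimal sketch with a hypothetical threshold ts:

```python
import theano
import theano.tensor as T

ts = 1.0  # hypothetical threshold value

def mse_thresholded(x, t):
    # T.minimum clamps x at ts symbolically, replacing x[x > ts] = ts
    return T.mean((T.minimum(x, ts) - t) ** 2)

x, t = T.matrix("x"), T.matrix("t")
loss = theano.function([x, t], mse_thresholded(x, t))
```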
27,986,339 | 1 |
<machine-learning><neural-network><deep-learning><caffe>
|
2015-01-16T14:39:56.837
| 28,009,396 | 4,462,028 |
Where can I find the label map between a trained model like GoogleNet's output and the real class labels?
|
<p>Everyone, I am new to caffe. Currently, I am trying to use the trained GoogleNet downloaded from the model zoo to classify some images. However, the network's output seems to be a vector rather than a real label (like dog or cat).
Where can I find the label map between a trained model like GoogleNet's output and the real class labels?
Thanks.</p>
|
[
{
"AnswerId": "28009396",
"CreationDate": "2015-01-18T11:57:49.263",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>If you got <code>caffe</code> from git you should find in <code>data/ilsvrc12</code> folder a shell script <code>get_ilsvrc_aux.sh</code>.<br>\nThis script should download several files used for ilsvrc (sub set of imagenet used for the large scale image recognition challenge) training. </p>\n\n<p>The most interesting file (for you) that will be downloaded is <code>synset_words.txt</code>, this file has 1000 lines, one line per class identified by the net.<br>\nThe format of the line is</p>\n\n<blockquote>\n <p>nXXXXXXXX description of class</p>\n</blockquote>\n"
}
] |
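Once synset_words.txt is downloaded, mapping the network's output to a class name is a small lookup; a sketch, with a random vector standing in for the net's 1000-dimensional output:

```python
import numpy as np

# Each line of synset_words.txt is "nXXXXXXXX description of class".
with open("data/ilsvrc12/synset_words.txt") as f:
    labels = [line.strip().split(" ", 1)[1] for line in f]

prob = np.random.rand(1000)   # stand-in for the net's output vector
print(labels[prob.argmax()])  # human-readable class of the top prediction
```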
27,987,691 | 2 |
<cuda><caffe>
|
2015-01-16T15:48:13.890
| 28,422,572 | 1,115,169 |
Issues with compiling Caffe with cuDNN
|
<p>I am trying to compile Caffe on Ubuntu 14 with a GeForce 750 Ti GPU, but I can't. I installed the cuDNN library in /usr/local/cuda/lib64 and the cudnn.h header file in /usr/local/cuda/include, but there still seems to be a problem. I really think the problem comes from enabling cudNN=1 in Makefile.config.</p>
<pre></pre>
<p>That's where the problem is. What exactly are these errors?</p>
<pre></pre>
<p>I tested the CUDA samples and the GPU is working fine with CUDA. Here is the result...</p>
<pre></pre>
|
[
{
"AnswerId": "28422572",
"CreationDate": "2015-02-10T01:27:00.183",
"ParentId": null,
"OwnerUserId": "4548565",
"Title": null,
"Body": "<p>A version of Caffe that works with cuDNNv2 is available from <a href=\"https://github.com/slayton58/caffe\" rel=\"nofollow\">S. Layton's github page</a>.</p>\n\n<p>His Caffe master branch works with cuDNNv2. You can download it from the github page.</p>\n\n<p>He made a pull request to the official Caffe github and the full discussion is available <a href=\"https://github.com/BVLC/caffe/pull/1739\" rel=\"nofollow\">here</a> if you want the details.</p>\n"
},
{
"AnswerId": "27988298",
"CreationDate": "2015-01-16T16:21:34.000",
"ParentId": null,
"OwnerUserId": "1115169",
"Title": null,
"Body": "<p>I downloaded cuDNN R1 and now its working. Looks cudNN R2 latest one is not compatible with caffe.</p>\n"
}
] |
28,009,021 | 1 |
<python><logistic-regression><theano>
|
2015-01-18T11:11:31.710
| 28,078,636 | 3,656,081 |
Symbolic variables automatically update in theano
|
<p>I'm following the theano tutorial given <a href="http://deeplearning.net/tutorial/logreg.html" rel="nofollow">here</a> for simple Stochastic Gradient Descent. However, I am unable to understand in this block how the values of p_y_given_x and y_pred are getting automatically updated according to the values of W and b, since later on, when we run test_logistic(), we are only updating the values of W and b.
Thanks</p>
<pre></pre>
|
[
{
"AnswerId": "28078636",
"CreationDate": "2015-01-21T23:12:34.023",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p><code>p_y_given_x</code> and <code>y_pred</code> are symbolic variable (just python object from Theano). Those python variable that point to the Theano object do not get updated. They just represent the computation we want to do. Think like in pseudo-code.</p>\n\n<p>They will be used when compiling the Theano function. It is only then that the value will be computed. But this do not cause any change to the python variable that point to the object <code>p_y_given_x</code> and <code>y_pred</code>. The object are not changed.</p>\n\n<p>Understanding this distinction take time for some people. It is a new way of thinking. So don't hesitate to ask questions. One thing that help is to always ask yourself if you are in the symbolic world or the numerical world. the numerical world happen only with compiled Theano function.</p>\n"
}
] |
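The distinction the answer draws between the symbolic and numerical worlds fits in a few lines; a minimal sketch:

```python
import theano
import theano.tensor as T

x = T.dvector("x")
y = x * 2       # symbolic world: builds a graph, computes nothing
print(type(y))  # a TensorVariable, not an array of numbers

f = theano.function([x], y)  # compilation: entry into the numerical world
print(f([1.0, 2.0]))         # only now is a value computed: [ 2.  4.]
```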
28,011,527 | 1 |
<theano>
|
2015-01-18T15:41:46.923
| null | 1,382,757 |
How can I pool gradients in Theano?
|
<p>I'm using Theano for the first time to build a large statistical model. I'm performing a kind of stochastic gradient descent, but for each sample in the minibatch I need to perform a sampling procedure to compute the gradient. Is there a way in Theano to pool the gradients while I perform the sampling procedure for each datapoint in a minibatch, and only afterward perform the gradient update? </p>
|
[
{
"AnswerId": "28078965",
"CreationDate": "2015-01-21T23:41:44.450",
"ParentId": null,
"OwnerUserId": "1985353",
"Title": null,
"Body": "<p>I don't understand what you mean by \"pool\".\nWhen you compute the gradient of your cost wrt some variables, the cost has to be a scalar. So, when using minibatches, you have to combine the individual costs for the examples in the minibatch. That can be done by a sum, a mean, a weighted sum... And then that cost is backpropagated.\nThe gradient of that cost wrt parameters will correspond (mathematically) to the sum/mean/weigted sum of the individual gradients (on each of the examples), but that is not the way it is computed.\nThe gradient of that cost wrt intermediate variables that are function of the inputs (hidden representations, etc.) will have the same format as the original minibatch, with the gradient wrt each of the minibatches in a different row.</p>\n\n<p>So, maybe what you want is expressing your final cost as a result of your sampling procedure, and then backpropagate the gradient of that cost.\nOr maybe you do not want to backpropagate the gradient of the true cost all the way, and backpropagate something that depends on the gradient instead.\nIn that case, you can do something like:</p>\n\n<pre><code># minibatch of inputs\ninputs = tt.matrix()\ninterm_result = f(input)\ncost = g(interm_result).sum()\ngrad_wrt_interm_result = th.grad(cost, interm_result)\nsampled_grad = sampling_procedure(grad_wrt_interm_result)\ngrad_wrt_params = th.grad(cost, params,\n known_grads={inter_result: sampled_grad})\n</code></pre>\n\n<p>That way, you can perform some of the backpropagation to interm_result, then change the gradient wrt inter_result to sampled_grad, and then finish the backpropagation towards the parameters.</p>\n"
}
] |
28,011,551 | 3 |
<python><gpu><theano>
|
2015-01-18T15:44:05.900
| null | 22,996 |
How configure theano on Windows?
|
<p>I have Installed Theano on Windows machine and followed the configuration <a href="http://deeplearning.net/software/theano/library/config.html" rel="nofollow noreferrer">instructions</a>.</p>
<p>I placed the following .theanorc.txt file in C:\Users\my_username folder:</p>
<pre></pre>
<p>I tried to run the test, but haven't managed to run it on GPU. I guess the values from .theanorc.txt are not read, because I added the line print config.device and it outputs "cpu".</p>
<p>Below is the basic test script and the output:</p>
<pre></pre>
<p>output:</p>
<pre></pre>
<p>I have installed CUDA Toolkit successfully but haven't managed to install pyCUDA. I guess Theano should work without pyCUDA installed anyway.</p>
<p>I would be very thankful if anyone could help out solving this problem. I have followed <a href="https://stackoverflow.com/questions/25729969/installing-theano-on-windows-8-with-gpu-enabled">these</a> instructions but don't know why the configuration values in the program don't match the values in .theanorc.txt file.</p>
|
[
{
"AnswerId": "44518712",
"CreationDate": "2017-06-13T10:19:59.247",
"ParentId": null,
"OwnerUserId": "5859283",
"Title": null,
"Body": "<p>Contrary to what has been said on a couple of pages, my installation (Windows 10, Python 2.7, Theano 0.10.0.dev1) would not interpret config instructions within a <code>.theanorc.txt</code> file in my user profile folder, but would read a <code>.theanorc</code> file.</p>\n\n<p>If you are having trouble creating a file with that style of name, use the following commands at a terminal:</p>\n\n<pre><code>cd %USERPROFILE%\ntype NUL > .theanorc\n</code></pre>\n\n<p>Sauce: <a href=\"http://ankivil.com/making-theano-faster-with-cudnn-and-cnmem-on-windows-10/\" rel=\"noreferrer\">http://ankivil.com/making-theano-faster-with-cudnn-and-cnmem-on-windows-10/</a></p>\n"
},
{
"AnswerId": "42520401",
"CreationDate": "2017-02-28T23:04:23.077",
"ParentId": null,
"OwnerUserId": "2531309",
"Title": null,
"Body": "<p>Try to change the content in .theanorc.txt as indicating by Theano website ( <a href=\"http://deeplearning.net/software/theano/install_windows.html\" rel=\"nofollow noreferrer\">http://deeplearning.net/software/theano/install_windows.html</a>). The path needs to be changed accordingly based on your installation.</p>\n\n<pre><code>[global]\nfloatX = float32\ndevice = gpu\n\n[nvcc]\nflags=-LC:\\Users\\cchan\\Anaconda3\\libs\ncompiler_bindir=C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin\n</code></pre>\n"
},
{
"AnswerId": "28078667",
"CreationDate": "2015-01-21T23:15:26.967",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>You are right that Theano does not need PyCUDA.</p>\n\n<p>It is strange that Theano does not read your configuration file. The exact path that gets read is this. Just run this in Python and you'll see where to put it:</p>\n\n<p><code>os.path.expanduser('~/.theanorc.txt')</code></p>\n"
}
] |
28,028,253 | 2 |
<lua><dataframe><data-analysis><torch>
|
2015-01-19T15:39:11.107
| 37,419,889 | 297,094 |
Lua library for data analysis (data frames)
|
<p>Is there any Lua implementation of data frames (structures for data analysis)? Something like Python's pandas. I want to do some statistical operations using LuaJIT.</p>
|
[
{
"AnswerId": "28029533",
"CreationDate": "2015-01-19T16:42:43.267",
"ParentId": null,
"OwnerUserId": "1442917",
"Title": null,
"Body": "<p>You may want to look at <a href=\"https://en.wikipedia.org/wiki/Torch_%28machine_learning%29\" rel=\"nofollow\">Torch7</a> that provides N-dimensional arrays with support for various statistical and mathematical operations and is based on LuaJIT.</p>\n"
},
{
"AnswerId": "37419889",
"CreationDate": "2016-05-24T17:09:56.153",
"ParentId": null,
"OwnerUserId": "409508",
"Title": null,
"Body": "<p>Yes, there is now. Check out <a href=\"https://github.com/AlexMili/torch-dataframe\" rel=\"nofollow\">torch-dataframe</a> that I and Alex are developing. Our main priority is reliability that we check with a rich test suite. Performance comes second although we try to avoid performance hogs within the limits of Lua.</p>\n\n<p>The package is currently far from the pandas sophistication, but feel free to contribute with any methods you feel lacking.</p>\n"
}
] |
28,031,841 | 1 |
<python><numpy><hdf5><caffe>
|
2015-01-19T19:08:02.153
| null | 1,115,169 |
HDF5-DIAG: Error detected in HDF5 (1.8.11)
|
<p>I am trying to load HDF5 in Caffe, but it is not working. I checked the paths and am even able to view the HDF file using a viewer. Everything is OK, but Caffe can't seem to load it.</p>
<p>I write the HDF5 using a Python script like this, where X and labels are numpy arrays:</p>
<pre></pre>
<p>Here is the entire problem:</p>
<pre></pre>
|
[
{
"AnswerId": "28032495",
"CreationDate": "2015-01-19T19:53:11.043",
"ParentId": null,
"OwnerUserId": "1115169",
"Title": null,
"Body": "<p>Solved :)</p>\n\n<p>i created a text file, placed the path of real .hd5 file inside it. The caffe prototxt file points to text file and it worked :)</p>\n\n<pre><code> hdf5_data_param {\n source: \"train.txt\"\n batch_size: 10\n }\n</code></pre>\n\n<p>train.txt contains line..</p>\n\n<pre><code>facialkp.hd5\n</code></pre>\n"
}
] |
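For reference, the whole arrangement, the HDF5 file plus the one-line list file the HDF5 data layer expects as its source; a sketch with stand-in arrays for X and labels:

```python
import numpy as np
import h5py

X = np.random.rand(10, 1, 96, 96).astype(np.float32)  # stand-in image batch
labels = np.random.rand(10, 30).astype(np.float32)    # stand-in targets

# Write the datasets Caffe's HDF5 data layer reads ('data' and 'label').
with h5py.File("facialkp.hd5", "w") as f:
    f.create_dataset("data", data=X)
    f.create_dataset("label", data=labels)

# The prototxt's 'source' must point at a text file listing HDF5 paths,
# not at the .hd5 file directly; that was the fix.
with open("train.txt", "w") as f:
    f.write("facialkp.hd5\n")
```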
28,054,916 | 3 |
<lua><windows-8.1><luajit><torch>
|
2015-01-20T20:52:32.163
| 28,058,692 | 2,352,671 |
How to install Torch on windows 8.1?
|
<p><a href="http://torch.ch/" rel="noreferrer">Torch</a> is a scientific computing framework with wide support for machine learning algorithms. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.</p>
<h2>Q:</h2>
<p>Is there a way to install torch on MS Windows 8.1?</p>
|
[
{
"AnswerId": "30012889",
"CreationDate": "2015-05-03T10:45:21.620",
"ParentId": null,
"OwnerUserId": "1070480",
"Title": null,
"Body": "<p><a href=\"http://cilvr.nyu.edu/doku.php?id=software:torch:start#torch_tutorial_for_machine_learning\" rel=\"nofollow\">This webpage</a> hosted by New York University recommends installing a Linux virtual machine in order to run Torch7 on Windows through Linux. Another option would off course be to install a Linux dist in parallel with Windows 8.</p>\n\n<p>Otherwise, if you don't mind running an older version of Torch, there is a Windows installer for <a href=\"http://torch5.sourceforge.net/download.html\" rel=\"nofollow\">Torch5</a> at SourceForge.</p>\n"
},
{
"AnswerId": "31260640",
"CreationDate": "2015-07-07T05:44:55.643",
"ParentId": null,
"OwnerUserId": "1970830",
"Title": null,
"Body": "<p>I think to use a GPU from inside the virtual machine, the processor and the motherboard should not only support VT-x , but VT-d should be supported too.</p>\n\n<p>But the question is, if I use a CPU with VT-d supported, do you think there will be a significant loss in PCIe connections efficiency?</p>\n\n<p>From what I understand,\nVT-d is important if I want to give the virtual machines direct access to my hardware components (like PCI Express cards). Like directly attach graphics card to vm instead of host machine. Isn't that mean that the PCIe connections efficiency will be the same just like if it was the host?</p>\n"
},
{
"AnswerId": "28058692",
"CreationDate": "2015-01-21T02:37:32.490",
"ParentId": null,
"OwnerUserId": "1442917",
"Title": null,
"Body": "<p>I got it installed and running on Windows (although not 8.1, but I don't expect the process to be different) following instructions in <a href=\"https://github.com/torch/DEPRECEATED-torch7-distro\" rel=\"nofollow noreferrer\">this repository</a>; it's now deprecated, but wasn't deprecated few months ago when I built it. The new instructions point to <a href=\"https://github.com/torch/torch7\" rel=\"nofollow noreferrer\">torch/torch7</a> repository, but it has a different structure and I haven't been able to build it on Windows yet.</p>\n\n<p>There are instructions on how to install Torch7 from luarocks, but you may <a href=\"https://stackoverflow.com/questions/27276822/installing-torch7-with-luarocks-on-windows-with-mingw-build-error\">run into issues on windows</a> as well; I haven't tried this process. It seems like there is <a href=\"https://groups.google.com/forum/#!topic/torch7/rOxupFTyb28\" rel=\"nofollow noreferrer\">no official support for Windows yet</a>, but some work is being done by contributors (there is a link to a pull request in that thread).</p>\n\n<p><s>Based on my experience, compiling that deprecated repo may be your best option on Windows at the moment.</s></p>\n\n<p>Update (7/9/2015): I've recently submitted <a href=\"https://github.com/torch/torch7/pull/287\" rel=\"nofollow noreferrer\">several changes</a> that fix compilation issues with mingw, so you may try the most recent version of torch7 and follow the build instructions in the ticket. Note that the changes only apply to the core lib and additional libraries may need similar changes.</p>\n"
}
] |
28,057,957 | 1 |
<machine-learning><deep-learning><caffe><kaggle>
|
2015-01-21T01:08:04.857
| 28,369,288 | 1,115,169 |
Multi label regression in Caffe
|
<p>I am extracting 30 facial keypoints (x, y) from an input image, as per the Kaggle facial keypoints competition.</p>
<p>How do I set up Caffe to run a regression and produce a 30-dimensional output?</p>
<pre></pre>
<p>How do I set up Caffe accordingly? I am using EUCLIDEAN_LOSS (sum of squares) to get the regressed output. Here is a simple logistic regressor model using Caffe, but it is not working. It looks like the accuracy layer cannot handle multi-label output.</p>
<pre></pre>
<p>Here is the layer file:</p>
<pre></pre>
|
[
{
"AnswerId": "28369288",
"CreationDate": "2015-02-06T15:36:26.367",
"ParentId": null,
"OwnerUserId": "1115169",
"Title": null,
"Body": "<p>i found it :)</p>\n\n<p>I replaced the SOFTLAYER to EUCLIDEAN_LOSS function and changed the number of outputs. It worked.</p>\n\n<pre><code>layers {\n name: \"loss\"\n type: EUCLIDEAN_LOSS\n bottom: \"ip1\"\n bottom: \"label\"\n top: \"loss\"\n}\n</code></pre>\n\n<p>HINGE_LOSS is also another option.</p>\n"
}
] |
28,090,797 | 4 |
<neural-network><reshape><deep-learning><caffe>
|
2015-01-22T14:08:53.840
| null | 1,926,094 |
How to reshape a blob in Caffe?
|
<p>How to reshape a blob of the shape to in Caffe?</p>
<p>I want to make a convolution layer the weights of which are identical between channels.</p>
<p>One way I came up with is to reshape the bottom blob of the shape to and place a convolution layer upon it. But I just don't know how to reshape a blob.</p>
<p>Please help me out, thank you.</p>
|
[
{
"AnswerId": "43199056",
"CreationDate": "2017-04-04T05:38:40.783",
"ParentId": null,
"OwnerUserId": "7812167",
"Title": null,
"Body": "<p>If I understand your final objective right, Caffe's convolution layer already can do multiple input-output convolution with common/shared filters like:</p>\n\n<pre><code>layer {\n name: \"conv\"\n type: \"Convolution\"\n bottom: \"in1\"\n bottom: \"in2\"\n bottom: \"in3\"\n top: \"out1\"\n top: \"out2\"\n top: \"out3\"\n convolution_param {\n num_output : 10 #the same 10 filters for all 3 inputs\n kernel_size: 3 \n }\n}\n</code></pre>\n\n<p>Assuming you have all streams split (slice layer can do that), and finally you may merge them if desired with a concat or eltwise layer.</p>\n\n<p>That avoid the needs of reshaping blob, convolved, and then reshaping it back, which might introduce cross-channel interference near the margins.</p>\n"
},
{
"AnswerId": "28611043",
"CreationDate": "2015-02-19T16:04:55.367",
"ParentId": null,
"OwnerUserId": "1580510",
"Title": null,
"Body": "<p>Not sure if this fits your specs exactly, but Caffe does have flattening layers. The blob goes from n * c * h * w to n * (c<em>h</em>w) * 1 * 1.</p>\n\n<p>See <a href=\"http://caffe.berkeleyvision.org/tutorial/layers.html\" rel=\"nofollow\">http://caffe.berkeleyvision.org/tutorial/layers.html</a></p>\n"
},
{
"AnswerId": "35005464",
"CreationDate": "2016-01-26T01:14:21.947",
"ParentId": null,
"OwnerUserId": "3130081",
"Title": null,
"Body": "<p>Caffe now has a reshapeLayer for you.\n<a href=\"http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ReshapeLayer.html\" rel=\"nofollow\">http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1ReshapeLayer.html</a></p>\n"
},
{
"AnswerId": "35008024",
"CreationDate": "2016-01-26T06:20:11.730",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>As pointed by <a href=\"https://stackoverflow.com/a/35005464/1714410\">whjxnyzh</a>, you can use <code>\"Reshape\"</code> layer. Caffe is quite flexible in the way it allows you to define the output shape.<br>\nSee <a href=\"https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto#L899\" rel=\"nofollow noreferrer\">the declaration of <code>reshap_param</code> in caffe.proto`</a>:</p>\n\n<blockquote>\n<pre><code>// Specify the output dimensions. If some of the dimensions are set to 0,\n// the corresponding dimension from the bottom layer is used (unchanged).\n// Exactly one dimension may be set to -1, in which case its value is\n// inferred from the count of the bottom blob and the remaining dimensions.\n</code></pre>\n</blockquote>\n\n<p>In your case I guess you'll have a layer like this:</p>\n\n<pre><code>layer {\n name: \"my_reshape\"\n type: \"Reshape\"\n bottom: \"in\"\n top: \"reshaped_in\"\n reshape_param { shape: {dim: 0 dim: 1 dim: -1 dim: 0 } }\n}\n</code></pre>\n\n<p>See also on <a href=\"http://caffe.help/manual/layers/reshape.html\" rel=\"nofollow noreferrer\">caffe.help</a>.</p>\n"
}
] |
28,093,753 | 1 |
<machine-learning><deep-learning><caffe>
|
2015-01-22T16:24:27.470
| 28,544,226 | 1,115,169 |
Convolution issue in Caffe
|
<p>I have 96x96-pixel grayscale images stored in HDF5 files. I am trying to do multi-output regression using Caffe; however, convolution is not working. What exactly is the problem here? Why is convolution not working?</p>
<pre></pre>
<p>My prototxt layer file is like this</p>
<pre></pre>
|
[
{
"AnswerId": "28544226",
"CreationDate": "2015-02-16T14:58:48.230",
"ParentId": null,
"OwnerUserId": "4572256",
"Title": null,
"Body": "<p>the lines</p>\n\n<pre>I0122 17:18:40.337906 5074 net.cpp:103] Top shape: 100 9216 1 1 (921600)\nI0122 17:18:40.337929 5074 net.cpp:103] Top shape: 100 30 1 1 (3000)</pre>\n\n<p>suggest that your input data is not in the correct shape. For an input of 100 batchs of 96x96 grey-scale image the shape should be: 100 1 96 96.\nTry to change this. (my guess is that for shape: N C H W, where N number of batches, c channels, h height, w weight)</p>\n"
}
] |
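Fixing that in practice means reshaping before the data is written to HDF5; a minimal sketch, with a stand-in for the flat array of 96x96 grayscale images:

```python
import numpy as np

X = np.random.rand(100, 9216)  # stand-in: 100 flattened 96x96 images
X = X.reshape(-1, 1, 96, 96).astype(np.float32)  # Caffe's N C H W layout
print(X.shape)  # (100, 1, 96, 96)
```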
28,120,712 | 1 |
<python><windows><visual-studio><installation><theano>
|
2015-01-24T00:01:48.690
| 28,199,278 | 1,242,679 |
import theano (0.6) on Windows 8 with device= gpu (and visual studios 12.0)
|
<p>I'm having an compilation issue when I try to use Theano with the GPU device (it works fine with the CPU). I'm getting almost exactly the same problem to that already reported <a href="https://stackoverflow.com/questions/25729969/installing-theano-on-windows-8-with-gpu-enabled">here</a>, however following the solution provided does not work for me. Following the original solution, I can get as far as verifying that pycuda has been installed successfully, but importing theano still throws the same error:</p>
<pre></pre>
<p>I have Python 2.7.8 32bit and MinGW set up and CUDA 6.5. I'm using the following .theanorc config:</p>
<pre></pre>
<p>In order to get the pycuda example to work, I had to add visual studio 12.0 to my user path. It doesn't work with visual studios 10.0 for some reason, despite using the visual studio 10 command prompt to build pycuda.</p>
<p>Incidentally, if I try , I get an error saying that version of visual studio is no good:</p>
<pre></pre>
<p>(I have visual studio express 2010 and 2013 installed)</p>
<p>I understand that the Theano+GPU support for Windows is still somewhat experimental, but it seems like it does work for some people. Any suggestions as to what try next?</p>
|
[
{
"AnswerId": "28199278",
"CreationDate": "2015-01-28T18:04:27.497",
"ParentId": null,
"OwnerUserId": "1242679",
"Title": null,
"Body": "<p>I found a bit of hack to fix it <a href=\"https://groups.google.com/forum/#!searchin/theano-users/warning$20C4273$3A$20$27round$27$20$3A$20inconsistent$20dll$20linkage/theano-users/C8ce20uE6Gs/9Xg7UHazoWAJ\" rel=\"nofollow\">here</a></p>\n\n<p>Essentially it entails finding the <code>cuda_ndarray.cuh</code> file in the <code><install-dir>/andbox\\cuda</code> and adding <code>#include <algorithm></code>.</p>\n\n<p>It still leaves a warning <code>warning C4273: 'round' : inconsistent dll linkage</code>. </p>\n\n<p>Which according to <a href=\"https://github.com/Theano/Theano/issues/2055\" rel=\"nofollow\">this reported issue</a> is due to a conflict between Python and CUDA and both providing an <code>round</code> functionality. This could perhaps be fixed by defining the macro HAVE_ROUND when linking with CUDA to tell Python not to try to redefine round.</p>\n\n<p>Not sure if this is a general fix that's applicable for everyone, but seemed to have worked for me - as far as enabling me to use the GPU with theano.</p>\n"
}
] |
28,134,363 | 1 |
<lua><torch>
|
2015-01-25T07:53:38.470
| 28,147,346 | 1,240,896 |
Given an image in Lua Torch, how can I find its dimensions?
|
<p>Let's say I want to know the height and width of image.lena(). What method should I call on it? A link to a resource for methods on images would be great, as the Torch command does not work in this case.</p>
|
[
{
"AnswerId": "28147346",
"CreationDate": "2015-01-26T09:18:29.913",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<p><code>image.lena()</code> returns a Torch 3-dimensional tensor where the first dimension is the number of channels (3 for a RGB image), and the last ones are resp. the height (nb. of rows) and width (nb. of columns) of the image.</p>\n\n<p>So all you need to do is use the <a href=\"https://github.com/torch/torch7/blob/e8155b4/doc/tensor.md#torch.Tensor.size\" rel=\"nofollow\"><code>size(dim)</code></a> method as follow:</p>\n\n<pre><code>require 'image'\n\nlocal img = image.lena()\nprint(torch.typename(img)) -- torch.DoubleTensor\n\nlocal nchan, height, width = img:size(1), img:size(2), img:size(3)\nprint('nb. channels: ' .. nchan) -- 3\nprint('width: ' .. width .. ', height: ' .. height) -- 512, 512\n</code></pre>\n"
}
] |
28,171,577 | 1 |
<python><lua><caffe><torch>
|
2015-01-27T13:22:44.347
| 28,189,121 | 2,380,470 |
How to get a layer from a caffe model using torch
|
<p>In Python, when I want to get the data from a layer using Caffe, I have the following code:</p>
<pre></pre>
<p>However, when I'm using Torch I'm a bit stuck, since I don't know how to perform the same action.
Currently I have the following code:</p>
<pre></pre>
<p>Any help would be appreciated </p>
|
[
{
"AnswerId": "28189121",
"CreationDate": "2015-01-28T09:51:23.283",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<p>First of all please note that <a href=\"https://github.com/szagoruyko/torch-caffe-binding\" rel=\"noreferrer\">torch-caffe-binding</a> (i.e the tool you use with <code>require 'caffe'</code>) is a direct wrapper around Caffe library thanks to LuaJIT FFI.</p>\n\n<p>This means that it allows you to conveniently do a forward or backward with a Torch tensor, <strong>but</strong> <a href=\"https://github.com/szagoruyko/torch-caffe-binding/blob/9b0892a/caffe.cpp#L30-L69\" rel=\"noreferrer\">behind the scenes</a> these operations are made on a <code>caffe::Net</code> and not on a Torch <code>nn</code> network.</p>\n\n<p>So if you want to manipulate a plain <a href=\"https://github.com/torch/nn\" rel=\"noreferrer\">Torch network</a> what you should use is the <a href=\"https://github.com/szagoruyko/loadcaffe\" rel=\"noreferrer\">loadcaffe</a> library which fully converts the network into a <a href=\"https://github.com/torch/nn/blob/master/doc/containers.md#nn.Sequential\" rel=\"noreferrer\"><code>nn.Sequential</code></a>:</p>\n\n<pre><code>require 'loadcaffe'\n\nlocal net = loadcaffe.load('net.prototxt', 'net.caffemodel')\n</code></pre>\n\n<p>Then you can use <a href=\"https://github.com/torch/nn/blob/master/doc/module.md#findmodulestypename\" rel=\"noreferrer\"><code>findModules</code></a>. However please note that you cannot use their initial label anymore (like <code>conv1</code> or <code>fc7</code>) as they are <a href=\"https://github.com/szagoruyko/loadcaffe/blob/7013690/loadcaffe.lua#L23-L40\" rel=\"noreferrer\">discarded after convert</a>.</p>\n\n<p>Here <code>fc7</code> (= <code>INNER_PRODUCT</code>) corresponds to the N-1 linear transformation. So you can get it as follow:</p>\n\n<pre><code>local nodes = net:findModules('nn.Linear')\nlocal fc7 = nodes[#nodes-1]\n</code></pre>\n\n<p>Then you can read the data (weights and biases) via <code>fc7.weight</code> and <code>fc7.bias</code> - these are regular <code>torch.Tensor</code>-s.</p>\n\n<hr>\n\n<p><strong>UPDATE</strong></p>\n\n<p>As of commit <a href=\"https://github.com/szagoruyko/loadcaffe/commit/2516fac\" rel=\"noreferrer\">2516fac</a> loadcaffe now saves layer names in addition. So to retrieve the <code>'fc7'</code> layer you can now do something like:</p>\n\n<pre><code>local fc7\nfor _,m in pairs(net:listModules()) do\n if m.name == 'fc7' then\n fc7 = m\n break\n end\nend\n</code></pre>\n"
}
] |
28,177,298 | 5 |
<python><caffe>
|
2015-01-27T18:17:48.620
| 28,235,061 | 1,115,169 |
Import caffe error
|
<p>I compiled Caffe successfully on my Ubuntu machine but cannot import it in Python.</p>
<p>Caffe is installed in /home/pbu/Desktop/caffe.</p>
<p>I tried adding the /home/pbu/caffe/python path to sys.path.append; it's still not working.</p>
<p>I am trying to import caffe:</p>
<pre></pre>
|
[
{
"AnswerId": "28235061",
"CreationDate": "2015-01-30T11:36:07.833",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p>This happens when you have not run <code>make</code> for the python files separately.</p>\n\n<p>Run <code>make pycaffe</code> soon after running <code>make</code> in the Caffe directory.</p>\n\n<p>You may have to set the path to the python library correctly in <code>Makefile.config</code></p>\n"
},
{
"AnswerId": "46405278",
"CreationDate": "2017-09-25T12:28:24.140",
"ParentId": null,
"OwnerUserId": "5258060",
"Title": null,
"Body": "<p>Adding to the above best answer. After you run <code>make</code> for python files by running <code>make pycaffe</code> where you ran your previous <code>make</code>s. Then you have to export that python path by running <code>export PYTHONPATH=<path-to-caffe>/python</code>. You can choose to run this everytime before running a python code which utilizes caffe or add it to your <code>~/.bashrc</code>. </p>\n"
},
{
"AnswerId": "36261669",
"CreationDate": "2016-03-28T11:47:15.893",
"ParentId": null,
"OwnerUserId": "5107003",
"Title": null,
"Body": "<p>You should build caffe and pycaffe using the command:</p>\n\n<pre><code>cd $FRCN_ROOT/caffe-fast-rcnn\nmake -j8 && make pycaffe\n</code></pre>\n\n<p>and before the compilation, you should create a <code>Makefile.config</code> file and set the corresponding library path, such as python. </p>\n\n<p>More details are presented on the web: <a href=\"https://github.com/rbgirshick/py-faster-rcnn\" rel=\"nofollow\">bgirshick/py-faster-rcnn</a>.</p>\n\n<p>What's more, when I run the \"Beyond the demo\" section, it seams that if I Create a symlink of the folder \"VOCdevkit\" as \"VOCdevkit2007\" which turns out to be \"can't find the dataset\". So, I change the folder name as \"VOCdevkit2007\", and it runs well.</p>\n"
},
{
"AnswerId": "40734483",
"CreationDate": "2016-11-22T05:35:42.237",
"ParentId": null,
"OwnerUserId": "1904943",
"Title": null,
"Body": "<p>I posted my Caffe install notes (my architecture: Arch Linux x86_64 | Intel i7 CPU ...) in an Anaconda Python 2.7 virtual environment here:</p>\n\n<p><a href=\"https://gist.github.com/victoriastuart/fb2cb22209ccb2771963a25c06221213\" rel=\"nofollow noreferrer\">Caffe Installation Notes</a></p>\n\n<pre><code>https://gist.github.com/victoriastuart/fb2cb22209ccb2771963a25c06221213\n</code></pre>\n\n<p>I also encountered the (downstream) \"Import caffe error,\" for which I needed to resolve my $PYTHONPATH to complete the make compilation and get Caffe finally installed, and also to be able to import it (in Python).</p>\n"
},
{
"AnswerId": "35312906",
"CreationDate": "2016-02-10T10:40:01.440",
"ParentId": null,
"OwnerUserId": "2324271",
"Title": null,
"Body": "<p>Well, I use the <code>cmake-gui</code> for <code>making</code> Caffe. There you need to set the Python paths to the Anaconda-python:</p>\n\n<pre><code>PYTHON_EXECUTABLE <path_to_anaconda_home>/bin/python2.7\nPYTHON_INCLUDE_DIRECTORY <path_to_anaconda_home>/include/PYTHON2.7\nPYTHON_LIBRARY <path_to_anaconda_home>/lib/libpython2.7.so\n</code></pre>\n"
}
] |
28,187,868 | 2 |
<cuda><theano><cublas>
|
2015-01-28T08:41:35.660
| 31,155,012 | 133,374 |
Theano: cublasSgemm failed (14) an internal operation failed
|
<p>Sometimes, after a while of running fine, I get such an error with Theano / CUDA:</p>
<pre></pre>
<p>As my code runs fine for a while (I do neural network training, and it usually runs all the way through; even when this error occurred, it had already run fine for >2000 mini-batches), I wonder about the cause of this. Maybe some hardware fault?</p>
<p>This is with CUDA 6.0 and a very recent Theano (yesterday from Git), Ubuntu 12.04, GTX 580.</p>
<p>I also got the error with CUDA 6.5 on a K20:</p>
<pre></pre>
<p>(Another error I sometimes got in the past is <a href="https://stackoverflow.com/questions/28221191/theano-cuda-error-an-illegal-memory-access-was-encountered">this</a> now instead. Not sure if this is related.)</p>
<p>Via <a href="https://stackoverflow.com/users/4580205/markus">Markus</a>, who got the same error:</p>
<pre></pre>
<p>With CUDA 6.5, Windows 8.1, Python 2.7, GTX 970M.</p>
<blockquote>
<p>The error only occurs in my own network; if I run the LeNet example from Theano, it runs fine. The network compiles and runs fine on the CPU, though (and also on the GPU for some colleagues using Linux). Does anyone have an idea what the problem could be?</p>
</blockquote>
|
[
{
"AnswerId": "34883787",
"CreationDate": "2016-01-19T17:56:26.523",
"ParentId": null,
"OwnerUserId": "5811925",
"Title": null,
"Body": "<p>Ran into a similar issue, and fwiw, in my case it was solved by eliminating the import of another library that used pycuda. It appears theano really does not like to share.</p>\n"
},
{
"AnswerId": "31155012",
"CreationDate": "2015-07-01T07:24:54.943",
"ParentId": null,
"OwnerUserId": "133374",
"Title": null,
"Body": "<p>Just for reference in case anyone stumbles upon this:</p>\n\n<p>This doesn't occur anymore for me. I'm not exactly sure what fixed it, but I think the main difference is that I avoid any multithreading and forks (without exec). This caused many similar problems, e.g. <a href=\"https://stackoverflow.com/questions/28221191/theano-cuda-error-an-illegal-memory-access-was-encountered\">Theano CUDA error: an illegal memory access was encountered (StackOverflow)</a>, and <a href=\"https://groups.google.com/d/topic/theano-users/Pu4YKlZKwm4/discussion\" rel=\"nofollow noreferrer\">Theano CUDA error: an illegal memory access was encountered (Google Groups discussion)</a>. Esp. that discussion on Google Groups is very helpful.</p>\n\n<p>Theano functions are not multithreading safe. However, that is not a\nproblem for me because I'm only using it in one thread. However, I\nstill think that other threads might cause these problems. Maybe it is\nrelated to the GC of Python which frees some Cuda_Ndarray in some\nother thread while the theano.function is running.</p>\n\n<p>I looked a bit at the <a href=\"https://github.com/Theano/Theano/blob/master/theano/sandbox/cuda/cuda_ndarray.cu\" rel=\"nofollow noreferrer\">relevant Theano code</a> and not sure if it covers\nall such cases.</p>\n\n<p>Note that you might even not be aware that you have some background\nthreads. Some Python stdlib code can spawn such background threads.\nE.g. multiprocessing.Queue will do that.</p>\n\n<p>I cannot avoid having multiple\nthreads, and until this is fixed in Theano, I create a new subprocess\nwith a single thread where I do all the Theano work. This also has\nseveral advantages such as: More clear separation of the code, being\nfaster in some cases because it all really runs in parallel, and being\nable to use multiple GPUs.</p>\n\n<p>Note that just using the multiprocessing module did not work for me\nthat well because there are a few libs (Numpy and others, and maybe\nTheano itself) which might behave bad in a forked process (depending\non the versions, the OS and race conditions). Thus, I needed a real\nsubprocess (fork + exec, not just fork).</p>\n\n<p>My code is <a href=\"https://gist.github.com/albertz/4177e40d41cb7f9f7c68\" rel=\"nofollow noreferrer\">here</a>, in case anyone is interested in this.</p>\n\n<p>There is ExecingProcess which is modeled after multiprocessing.Process\nbut does a fork+exec. (Btw, on Windows, the multiprocessing module\nwill anyway do this, because there is no fork on Windows.)\nAnd there is AsyncTask which adds up a duplex pipe to this which works\nwith both ExecingProcess and the standard multiprocessing.Process.</p>\n\n<p>See also: <a href=\"https://github.com/Theano/Theano/wiki/Using-Multiple-GPUs\" rel=\"nofollow noreferrer\">Theano Wiki: Using multiple GPUs</a></p>\n"
}
] |
28,204,536 | 1 |
<neural-network><connectivity><theano>
|
2015-01-28T23:40:37.007
| null | 4,496,286 |
Theanets: Removing individual connections
|
<p>How do you remove connections in Theanets? I'd like to create custom connectivity between an input layer, a single hidden layer, and an output layer. But the only defaults are feedforward all-to-all architectures or recurrent architectures. I'd like to remove specific connections from the all-to-all connectivity and then train the network.</p>
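<p>To illustrate what I mean by removing individual connections, here it is in plain numpy terms (a sketch; sizes and indices are just examples):</p>
<pre><code>import numpy as np

n_inputs, n_hidden = 4, 3
W = np.random.randn(n_inputs, n_hidden)

mask = np.ones((n_inputs, n_hidden))
mask[3, 2] = 0.0           # remove the connection from input 3 to hidden unit 2

effective_W = W * mask     # the kind of connectivity I would like to train with
</code></pre>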
<p>Thanks in advance.</p>
|
[
{
"AnswerId": "28290504",
"CreationDate": "2015-02-03T02:46:35.317",
"ParentId": null,
"OwnerUserId": "2014584",
"Title": null,
"Body": "<p>(Developer of <code>theanets</code> here.)</p>\n\n<p>This is currently not directly possible with <a href=\"http://theanets.readthedocs.org\" rel=\"nofollow\">theanets</a>. For computational efficiency the underlying computations in feedforward networks are implemented as simple matrix operations, which are fast and can be executed on a GPU for sometimes dramatic speedups.</p>\n\n<p>You can, however, <em>initialize</em> the weights in a layer so that some (or many) of the weights are zero. To do this, just pass a dictionary in the <code>layers</code> list, and include a <code>sparsity</code> key:</p>\n\n<pre><code>import theanets\n\nnet = theanets.Autoencoder(\n layers=(784, dict(size=1000, sparsity=0.9), 784))\n</code></pre>\n\n<p>This initializes the weights for the layer so that the given fraction of weights are zeros. The weights are, however, eligible for change during the training process, so this is only an initialization trick.</p>\n\n<p>You can, however, implement a custom <a href=\"http://theanets.readthedocs.org/en/latest/generated/theanets.layers.Layer.html\" rel=\"nofollow\"><code>Layer</code></a> subclass that does whatever you like, as long as you stay within the <a href=\"http://deeplearning.net/software/theano\" rel=\"nofollow\">Theano</a> boundaries. You could, for instance, implement a type of feedforward layer that uses a mask to ensure that some weights remain zeros during the feedforward computation.</p>\n\n<p>For more details you might want to ask on the <a href=\"https://groups.google.com/forum/#!forum/theanets\" rel=\"nofollow\">theanets mailing list</a>.</p>\n"
}
] |
28,205,589 | 1 |
<function><updates><theano>
|
2015-01-29T01:32:58.660
| 28,312,564 | 1,105,571 |
The update order of theano function's update list
|
<p>Theano functions' updates parameter takes a list of pairs, in which each pair specifies a shared symbolic variable and its new expression after the function outputs are calculated.
I wonder whether there is any order to the updating procedure.
The order would matter if two symbolic variables' new expressions rely on each other and the updating procedure used the already-updated symbolic variable when updating other symbolic variables that rely on it.
For example, this list might look like this:</p>
<pre class="lang-py prettyprint-override"></pre>
<p>I have written a function to test this. The result seems to show that it always uses the old value in the expression (the second term in the pair) to update the symbolic variable in the first term, i.e., </p>
<pre class="lang-py prettyprint-override"></pre>
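<p>A minimal sketch of the kind of test I ran (variable names are illustrative):</p>
<pre><code>import numpy as np
import theano

a = theano.shared(np.float32(1))
b = theano.shared(np.float32(2))

# both new expressions refer to the other shared variable
f = theano.function([], [], updates=[(a, a + b), (b, a + b)])
f()

# prints 3.0 3.0 if the old values are used for both updates
print(a.get_value(), b.get_value())
</code></pre>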
<p>Is this a defined behavior?</p>
<p>However I found the implementation of momentum <a href="http://nbviewer.ipython.org/github/craffel/theano-tutorial/blob/master/Theano%20Tutorial.ipynb" rel="nofollow">here</a>,
Here are the codes for generating the update list and param_update symbolic variables</p>
<pre class="lang-py prettyprint-override"></pre>
<p>Then in the first iteration, param will not be updated, because the param_updates are all zero. In my understanding, param_update should be updated first, and that updated value should then be used for updating param.</p>
|
[
{
"AnswerId": "28312564",
"CreationDate": "2015-02-04T02:33:30.420",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>For the update, it always use the previous value (the value before the Theano function call). So you found the right thing.</p>\n\n<p>For momentum, I think it is normal that there is a delay.</p>\n"
}
] |
28,209,743 | 1 |
<cuda><theano><cublas>
|
2015-01-29T08:07:10.543
| 31,155,045 | 133,374 |
Crash in Theano/CUDA exit in cuStreamDestroy
|
<p>I have an application which is linked against CPython and calls Theano+CUDA code from there.</p>
<p>The application itself also uses CUDA and Cublas. But as they create their own handles, I think they should not run into problems.</p>
<p>The GPU is in exclusive mode, i.e. only used by that process. I got that crash both on a Nvidia Tesla K20c and a Nvidia GeForce GTX 680. On Ubuntu 12.04. CUDA 6.0. Latest Theano from Git.</p>
<p>Sometimes, but not always, it crashes when it does the CPython cleanup, where Theano will indirectly clean up its Cublas handles (see the stacktrace). The CPython cleanup is done in an atexit call; maybe that is relevant.</p>
<p>This is the stacktrace:</p>
<pre></pre>
<p>Is this a known error? Where could I even start to try to debug this? How do I fix this?</p>
|
[
{
"AnswerId": "31155045",
"CreationDate": "2015-07-01T07:26:18.017",
"ParentId": null,
"OwnerUserId": "133374",
"Title": null,
"Body": "<p>I call <code>Py_Finalize</code> earlier now, not via <code>std::atexit</code>, and so far, I did not have this crash anymore. So I guess that is the solution.</p>\n"
}
] |
28,216,931 | 2 |
<theano>
|
2015-01-29T14:18:27.990
| null | 1,452,494 |
Error following first Theano program example
|
<p>I'm totally new to theano and following this simple intro exercise to theano found here: <a href="http://deeplearning.net/software/theano/introduction.html#introduction" rel="nofollow">http://deeplearning.net/software/theano/introduction.html#introduction</a></p>
<p>The idea is to simply declare some tensor variables and wrap them in a function; it is the simplest thing you could possibly do with Theano.</p>
<p>the exact code is:</p>
<pre></pre>
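<p>For reference, the introduction example on that page is essentially the following (reproduced from the tutorial; treat it as a sketch):</p>
<pre><code>import theano.tensor as T
from theano import function

x = T.dscalar('x')
y = T.dscalar('y')
z = x + y
f = function([x, y], z)  # this is the line where it fails for me
</code></pre>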
<p>However, I get this traceback:</p>
<pre></pre>
<p>My only thought is that it may be Python 3 related, but that should not be the case. Please help.</p>
|
[
{
"AnswerId": "28219065",
"CreationDate": "2015-01-29T15:57:51.160",
"ParentId": null,
"OwnerUserId": "1452494",
"Title": null,
"Body": "<p>Problem is not including the BLAS in the most recent version of theano. Solved when you pull the bleeding-edge version:</p>\n\n<pre><code>pip3 install --upgrade --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n"
},
{
"AnswerId": "28312623",
"CreationDate": "2015-02-04T02:39:59.137",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>Theano code base do not work out of the box for python2 and python3. It need to get converted. This is done during the installation of Theano. When installed via pip, this is done automatically. If you cloned/downloded the source code, you need to install it with:</p>\n\n<pre><code>python setup.py install\n</code></pre>\n\n<p>Here is a Theano ticket with more information:</p>\n\n<p><a href=\"https://github.com/Theano/Theano/issues/2317\" rel=\"nofollow\">https://github.com/Theano/Theano/issues/2317</a></p>\n\n<p>Also, for python 3 support, you should use the development version line the other answer:</p>\n\n<pre><code>pip3 install --upgrade --no-deps git+git://github.com/Theano/Theano.git\n</code></pre>\n\n<p>But this isn't related to BLAS as it is written.</p>\n"
}
] |
28,267,549 | 0 |
<c++><boost><osx-yosemite><boost-python><caffe>
|
2015-02-01T20:35:32.903
| null | 2,831,921 |
boost: command not found on mac OS 10.10
|
<p>I have been trying to install the Caffe deep learning framework on my Mac OS 10.10, but there is a command which needs to be executed in this process.</p>
<pre></pre>
<p>I installed the Boost library, version 1.55.0, but it still gives this error.</p>
<pre></pre>
<p>Also in this command: </p>
<p>I get this error:<br>
</p>
<p>Where am I going wrong?</p>
<p>Do environment variables need to be set for this command to run?</p>
<p>Also,</p>
<p>What is meant by changing a Homebrew formula?</p>
<p>I am a newbie in Python. Before posting this on Stack Overflow, I tried hard to figure out this error, but after almost a week I am still unable to overcome it. If possible, a step-by-step guide to Caffe installation on Mac 10.10 would be helpful.</p>
<p>Thanks a lot in advance.</p>
|
[] |
28,272,791 | 1 |
<lua><mnist><torch>
|
2015-02-02T07:19:41.963
| 28,325,844 | 2,138,524 |
Issue with neuralnetwork_turial.lua with data preprocessing
|
<p>I have installed the <a href="https://github.com/nicholas-leonard/dp" rel="nofollow">torch deep learning module</a> by first -ing and later using , and the installation was successful. The works well in the torch prompt.</p>
<p>But when I try to execute the <a href="https://github.com/nicholas-leonard/dp/blob/master/examples/neuralnetwork_tutorial.lua" rel="nofollow">neuralnetwork_tutorial.lua</a>(), it throws the following errors.</p>
<pre></pre>
<p>I put some statements in those scripts to understand the flow. I happened to notice that in <a href="https://github.com/torch/torch7/blob/master/File.lua" rel="nofollow">File.lua</a> the first step after getting the object is to determine the type of the object, of which 8 have been declared. The types have been declared through 0 to 7, 0 being . However, the code fails, as it detects a type 28 (??).</p>
<p>Any help on where I am going wrong, or on where to look to find the issue?</p>
<p>P.S.: The script downloads the data on its own; however, due to certain standard corporate proxy setting issues, it could not download it. Therefore, I downloaded the <a href="https://stife076.files.wordpress.com/2014/08/mnist2.zip" rel="nofollow">MNIST</a> data manually and stored it in the specific data directory. Could this be a clue?</p>
|
[
{
"AnswerId": "28325844",
"CreationDate": "2015-02-04T16:01:04.377",
"ParentId": null,
"OwnerUserId": "49985",
"Title": null,
"Body": "<p>Okay, so it was a bug in the code (serialized MNIST wasn't cross-platform). Fixed by serializing dataset using ascii format instead of binary.</p>\n"
}
] |
28,287,599 | 1 |
<caffe>
|
2015-02-02T21:59:54.243
| null | 2,831,921 |
make all gives linker command fail error in caffe installation
|
<p>After following all the steps of the Caffe installation, I get this error:</p>
<pre></pre>
<p>What can be done? I have tried everything I could think of, reinstalling and searching through the GitHub issues.</p>
<p>Thanks in advance.</p>
|
[
{
"AnswerId": "28315765",
"CreationDate": "2015-02-04T07:27:15.450",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p>Seems like you have not installed OpenCV in your system.\nIf you are using the <a href=\"http://brew.sh/\" rel=\"nofollow\">Homebrew</a> package manager, try executing the following commands</p>\n\n<pre><code>brew tap homebrew/science\nbrew install homebrew/science/opencv\n</code></pre>\n\n<p>If the above two line of commands do perform without any error, try building Caffe again.</p>\n"
}
] |
28,299,422 | 2 |
<python><math><gradient><theano><automatic-differentiation>
|
2015-02-03T12:52:36.083
| null | 4,056,875 |
How does theano implement computing every function's gradient?
|
<p>I have a question about Theano's implementation.
How does Theano get the gradient of every loss function via the following function (T.grad)? Thank you for your help.</p>
<pre></pre>
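<p>To make the question concrete, this is the kind of usage I mean (a minimal sketch):</p>
<pre><code>import theano
import theano.tensor as T

x = T.dscalar('x')
loss = x ** 2
g = T.grad(loss, x)          # symbolic expression for d(loss)/dx

f = theano.function([x], g)
print(f(3.0))                # prints 6.0 -- but how is g derived internally?
</code></pre>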
|
[
{
"AnswerId": "28301771",
"CreationDate": "2015-02-03T14:49:35.437",
"ParentId": null,
"OwnerUserId": "3088138",
"Title": null,
"Body": "<p>Look up <strong>Automatic differentiation</strong> and there the <em>backwards mode</em> that is used to efficiently evaluate gradients.</p>\n\n<p>Theano is, as far as I can see, a hybrid between the code-rewriting and operator based approach. It uses operator overloading in python to construct the computational graph, then optimizes it and generates from that graph (optimized) sequences of operations to evaluate the required inkds of derivatives.</p>\n"
},
{
"AnswerId": "39082247",
"CreationDate": "2016-08-22T14:33:40.580",
"ParentId": null,
"OwnerUserId": "38626",
"Title": null,
"Body": "<p><strong>Edit</strong>: this answer was wrong in saying that Theano uses Symbolic Differentiation. My apologies.</p>\n\n<p>Theano implements <strong>reverse mode autodiff</strong>, but confusingly they call it \"symbolic differentiation\". This is misleading because symbolic differentiation is something quite different. Let's look at both.</p>\n\n<p><strong>Symbolic differentiation</strong>: given a graph representing a function <code>f(x)</code>, it uses the chain rule to compute a new graph representing the derivative of that function <code>f'(x)</code>. They call this \"compiling\" <code>f(x)</code>. One problem with symbolic differentiation is that it can output a very inefficient graph, but Theano automatically simplifies the output graph.</p>\n\n<p>Example:</p>\n\n<pre><code>\"\"\"\nf(x) = x*x + x - 2\nGraph =\n ADD\n / \\\n MUL SUB\n / \\ / \\\n x x x 2\n\nChain rule for ADD=> (a(x)+b(x))' = a'(x) + b'(x)\nChain rule for MUL=> (a(x)*b(x))' = a'(x)*b(x) + a(x)*b'(x)\nChain rule for SUB=> (a(x)-b(x))' = a'(x) - b'(x)\nThe derivative of x is 1, and the derivative of a constant is 0.\n\nDerivative graph (not optimized yet) =\n ADD\n / \\\n ADD SUB\n / | | \\\n MUL MUL 1 0\n / | | \\\n 1 x x 1\n\nDerivative graph (after optimization) =\n ADD\n / \\\n MUL 1\n / \\\n 2 x\n\nSo: f'(x) = 2*x + 1\n\"\"\"\n</code></pre>\n\n<p><strong>Reverse mode autodiff</strong>: works in two passes through the computation graph, first going forward through the graph (from the inputs to the outputs), and then backwards using the chain rule (if you are familiar with backpropagation, this is exactly how it computes gradients).</p>\n\n<p>See <a href=\"http://alexey.radul.name/ideas/2013/introduction-to-automatic-differentiation/\" rel=\"noreferrer\">this great post</a> for more details on various automatic differentiation solutions and their pros&cons.</p>\n"
}
] |
28,398,705 | 1 |
<lua><libjpeg><torch>
|
2015-02-08T20:00:40.180
| null | 620,625 |
How do I get the right libjpeg dylib for Lua?
|
<p>So, I am running torch on OSX (See error in bold below):</p>
<pre></pre>
<p>Specifically this line:
<strong>error loading module 'libjpeg' from file '/usr/local/Cellar/jpeg/8d/lib/libjpeg.dylib':
dlsym(0x7fd564000320, luaopen_libjpeg): symbol not found
warning: could not be loaded (is it installed?)</strong></p>
<p>It seems like I don't have the right dylib. If so, where do I get it?</p>
|
[
{
"AnswerId": "28401169",
"CreationDate": "2015-02-09T00:39:01.163",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>this happens when there's two libjpeg's installed on your machine, and one conflicts with another.\nstart torch like this:</p>\n\n<blockquote>\n <p>export DYLD_LIBRARY_PATH=/usr/local/lib:$DYLD_LIBRARY_PATH</p>\n \n <p>th</p>\n</blockquote>\n\n<p>it should hopefully work with this.</p>\n"
}
] |
28,402,666 | 2 |
<python><machine-learning><theano><perceptron><deep-learning>
|
2015-02-09T04:04:44.593
| null | 4,544,800 |
How to access to a theano symbolic variable's value inside a class?
|
<p>I want to access the value of my_classifier.y_binary. My goal is to compute my_classifier.error.</p>
<p>I know how to get the value of my_classifier.y_hat using eval, but I don't know how to use it when the input is a self parameter.</p>
<p>Thanks</p>
<pre class="lang-py prettyprint-override"></pre>
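<p>For reference, this is roughly how I read y_hat today (a sketch; my_classifier refers to the class instance above, and I am assuming its symbolic input is stored as my_classifier.X):</p>
<pre><code>import numpy as np

features = np.random.randn(10, 5).astype('float32')  # dummy input data

# works for y_hat, but how do I do the same for y_binary / error?
y_hat_value = my_classifier.y_hat.eval({my_classifier.X: features})
</code></pre>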
|
[
{
"AnswerId": "28612148",
"CreationDate": "2015-02-19T16:52:39.107",
"ParentId": null,
"OwnerUserId": "2805751",
"Title": null,
"Body": "<p>If you want the value of a node in the graph you'll need to compile a function to get it. I think something like </p>\n\n<pre><code>y_binary = theano.function(inputs = [X,], outputs=my_classifier.y_binary, allow_input_downcast=True)\n</code></pre>\n\n<p>should give you the function <code>y_binary()</code> and calling <code>y_binary(features)</code> should forward propagate the network and yield the binarized output.</p>\n"
},
{
"AnswerId": "37125559",
"CreationDate": "2016-05-09T21:12:46.273",
"ParentId": null,
"OwnerUserId": "2498151",
"Title": null,
"Body": "<p>A compiled function is a much better choice, but while you're setting stuff up a quick and dirty way is like this:</p>\n\n<p>like this:</p>\n\n<pre><code>while (epoch < n_epochs): \n epoch = epoch + 1 \n for minibatch_index in range(n_train_batches):\n minibatch_avg_cost = train_model(minibatch_index)\n iter = (epoch - 1) * n_train_batches + minibatch_index\n print(\"**********************************\")\n print(classifier.hiddenLayer.W.get_value()) \n</code></pre>\n\n<p>full code here: <a href=\"https://github.com/timestocome/MiscDeepLearning/blob/master/MLP_iris2.py\" rel=\"nofollow\">https://github.com/timestocome/MiscDeepLearning/blob/master/MLP_iris2.py</a></p>\n\n<p>I think in your example you'd use 'my_classifier.w.get_value()'</p>\n"
}
] |
28,418,823 | 1 |
<python><theano><softmax>
|
2015-02-09T20:34:08.667
| 34,094,065 | 2,040,628 |
getting error with softmax and cross entropy in theano
|
<p>I'm implementing a DNN with Theano. At the last layer of the DNN, I'm using a softmax as the nonlinear function from </p>
<p>As a loss function I'm using cross entropy from 
But I get a strange error:
"The following error happened while compiling the node', GpuDnnSoftmaxGrad{tensor_format='bc01' ..."</p>
<p>I'm a newbie with Theano and can't figure out what's wrong with this model. Your help is appreciated.
PS: my guess is that it is somehow related to the fact that softmax takes a 2D tensor and returns a 2D tensor.</p>
<p>PS2: I'm using the bleeding-edge Theano (just cloned); my CUDA version is old (4.2), BUT I'm almost sure that's not the problem, since I'm working without error with other DNN tools written on top of Theano.
I'm using pylearn2 for acceleration, and that's not the problem either, since I have already used it successfully with the current Theano and CUDA in another DNN.</p>
<p>The error happens at this line: </p>
<p>The full error message is:
</p>
<p>The cross entropy function I'm using is defined as: 
where input is the output of the softmax layer and target_y is the labels.</p>
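<p>A sketch of the kind of graph I am building (with illustrative names, not my exact code):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

z = T.fmatrix('z')            # linear output of the last layer
target_y = T.fmatrix('y')     # labels, here assumed one-hot

p_y = T.nnet.softmax(z)
cost = T.nnet.binary_crossentropy(p_y, target_y).mean()

f = theano.function([z, target_y], cost, allow_input_downcast=True)
print(f(np.random.randn(4, 3), np.eye(3)[[0, 1, 2, 0]]))
</code></pre>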
|
[
{
"AnswerId": "34094065",
"CreationDate": "2015-12-04T17:27:48.567",
"ParentId": null,
"OwnerUserId": "2040628",
"Title": null,
"Body": "<p>solved. I had to use T.nnet.categorical_crossentropy since my target variable is an integer vector. </p>\n"
}
] |
28,427,145 | 1 |
<visual-studio><cuda><theano>
|
2015-02-10T08:30:44.227
| 28,428,405 | 2,771,315 |
CUDA and Theano install Windows 7: Is Visual Studio required?
|
<p>I'm in the process of installing Theano and combining it with CUDA on my Windows 7 PC. All the information/tutorials I've seen require Visual Studio to be installed. Most of the Visual Studio versions are >= 5GB in size, which seems ridiculously large, especially for an IDE. Is Visual Studio required to run/compile CUDA code, or can I install an alternative IDE? If it is required, what's the smallest (in size) Visual Studio version? </p>
|
[
{
"AnswerId": "28428405",
"CreationDate": "2015-02-10T09:41:35.637",
"ParentId": null,
"OwnerUserId": "2771315",
"Title": null,
"Body": "<p>Unfortunately, Visual Studio is explicitly required to run some of the CUDA tools. As of February 2015, your best bet is to download VS 2013 Community Edition (<a href=\"http://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx\" rel=\"nofollow noreferrer\">http://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx</a>) and CUDA 6.5 (<a href=\"https://developer.nvidia.com/cuda-downloads\" rel=\"nofollow noreferrer\">https://developer.nvidia.com/cuda-downloads</a>). <a href=\"https://stackoverflow.com/a/26073714/2771315\">This</a> is a great guide for installing CUDA on Windows 7 and 8.</p>\n"
}
] |
28,427,416 | 2 |
<python><matrix><normalization><theano>
|
2015-02-10T08:50:35.577
| 28,527,317 | 2,040,628 |
normalize a matrix row-wise in theano
|
<p>Let's say I have a matrix with size and I want to normalize it row-wise, i.e.,
the sum of each row should be one. How can I do this in theano? </p>
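<p>In plain numpy terms, the operation I am after looks like this (illustrative):</p>
<pre><code>import numpy as np

m = np.random.rand(5, 10)
m_normalized = m / m.sum(axis=1, keepdims=True)
print(m_normalized.sum(axis=1))   # every entry is 1.0
</code></pre>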
<p>Motivation: using softmax returns an error for me, so I am trying to sidestep it by implementing my own version of softmax.</p>
|
[
{
"AnswerId": "34205530",
"CreationDate": "2015-12-10T15:18:29.940",
"ParentId": null,
"OwnerUserId": "3731804",
"Title": null,
"Body": "<p>or you can also use</p>\n\n<pre><code>m/m.norm(1, axis=1).reshape((m.shape[0], 1))\n</code></pre>\n"
},
{
"AnswerId": "28527317",
"CreationDate": "2015-02-15T14:54:59.790",
"ParentId": null,
"OwnerUserId": "3489247",
"Title": null,
"Body": "<p>See if the following is useful for you:</p>\n\n<pre><code>import theano\nimport theano.tensor as T\n\nm = T.matrix(dtype=theano.config.floatX)\nm_normalized = m / m.sum(axis=1).reshape((m.shape[0], 1))\n\nf = theano.function([m], m_normalized)\n\nimport numpy as np\na = np.exp(np.random.randn(5, 10)).astype(theano.config.floatX)\n\nb = f(a)\nc = a / a.sum(axis=1)[:, np.newaxis]\n\nfrom numpy.testing import assert_array_equal\nassert_array_equal(b, c)\n</code></pre>\n"
}
] |
28,462,854 | 1 |
<python><matrix><shared><theano>
|
2015-02-11T19:27:58.677
| null | 3,214,017 |
Theano matrix multiplication
|
<p>I have a piece of code that is supposed to calculate a simple
matrix product, in Python (using Theano). The matrix that I intend to multiply with is a shared variable.</p>
<p>The example below is the smallest one that demonstrates my problem.</p>
<p>I have made use of two helper functions: floatX converts its input to something of type theano.config.floatX, and
init_weights generates a random matrix (of type floatX) with the given dimensions.</p>
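<p>They look roughly like this (a sketch; the 0.01 scaling factor is illustrative):</p>
<pre><code>import numpy as np
import theano

def floatX(X):
    # cast to theano.config.floatX (float32 when running on the GPU)
    return np.asarray(X, dtype=theano.config.floatX)

def init_weights(shape):
    # random matrix of the given dimensions, wrapped in a shared variable
    return theano.shared(floatX(np.random.randn(*shape) * 0.01))
</code></pre>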
<p>The last line causes the code to crash. In fact, it produces so much output on the command line that I can't even scroll to the top of it anymore.</p>
<p>So, can anyone tell me what I'm doing wrong?</p>
<pre></pre>
|
[
{
"AnswerId": "28463917",
"CreationDate": "2015-02-11T20:31:38.327",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>This work for me. So my guess is that you have a problem with your blas installation. Make sure to use Theano development version:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions\" rel=\"nofollow\">http://deeplearning.net/software/theano/install.html#bleeding-edge-install-instructions</a></p>\n\n<p>It have better default for some configuration. If that do not fix the problem, look at the error message. There is main part that is after the code dump. After the stack trace. This is what is the most useful normally.</p>\n\n<p>You can disable direct linking by Theano to blas with this Theano flag: blas.ldflags=</p>\n\n<p>This can cause slowdown. But it is a quick check to confirm the problem is blas.</p>\n\n<p>If you want more help, dump the error message to a text file and put it on the web and link to it from here.</p>\n"
}
] |
28,488,849 | 2 |
<python><opencv><ocr><caffe>
|
2015-02-12T22:13:03.463
| null | 1,214,547 |
What is the easiest way to obtain a state-of-the-art handwritten digit classifier?
|
<p>I am working on a project involving OCR-ing handwritten digits which uses a typical preprocessing-segmentation-recognition pipeline. I have done the first two stages manually by adjusting some standard algorithms from for my particular task. For the third stage (recognition), I'd like to use a ready-made classifier.</p>
<p>First I tried Tesseract, but it was <a href="https://stackoverflow.com/questions/27321553/forcing-tesseract-to-give-some-answer">really bad</a>. So I started looking into the progress on . Due to its popularity, I'd hoped it would be easy to get a nice high-quality classifier. Indeed, the top answer <a href="https://stackoverflow.com/questions/13319730/suggestions-for-digit-recognition">here</a> suggests using a tandem, which is conveniently implemented in <a href="https://github.com/Itseez/opencv/blob/master/samples/python2/digits.py" rel="nofollow noreferrer">this sample</a>. Unfortunately, it isn't as good as I'd hoped. It keeps confusing 's for 's (where it is obvious for my eye that it is actually a ), which accounts by far for the greatest amount of mistakes my algorithm makes.</p>
<p>Here are some examples of the errors made by :</p>
<p><img src="https://i.imgur.com/9oS7Umy.png" alt=""></p>
<p><img src="https://i.imgur.com/DlZp8Qr.png" alt=""></p>
<p><img src="https://i.imgur.com/ArhPfDK.png" alt=""></p>
<p>The top line shows the original digits extracted from the image (higher-resolution images do not exist), the middle line shows these digits deskewed, size-normalized and centered, and the bottom line is the output of .</p>
<p>I tried to hot-fix this error by applying a classifier after (if outputs an run and return its output instead), but the results were the same.</p>
<p>Then I tried to adapt <a href="https://github.com/lisa-lab/pylearn2/tree/master/pylearn2/scripts/papers/maxout" rel="nofollow noreferrer">this sample</a> which claims to achieve 0.45% test error. However, after spending a week with I couldn't make it work. It <a href="https://groups.google.com/forum/#!topic/pylearn-users/WLm8pZ2Vb0M" rel="nofollow noreferrer">keeps crashing randomly all the time</a>, even in an environment as sterile as an instance running <a href="http://thecloudmarket.com/image/ami-735bda04--ubuntu14-04-mkl-cuda-dl" rel="nofollow noreferrer">this image</a> (I don't even mention my own machine).</p>
<p>I know about the existence of , but I haven't tried it.</p>
<p>What would be the easiest way to set up a high-accuracy (say, MNIST test error <1%) handwritten digit classifier? Preferably, one that does not require an card to run. As far as I understand, (since it heavily relies on ) does. A interface and an ability to run on would be a pleasant bonus.</p>
<p>Note: I cannot create a new tag since I don't have enough reputation, but it surely should be there.</p>
|
[
{
"AnswerId": "28561241",
"CreationDate": "2015-02-17T12:15:35.197",
"ParentId": null,
"OwnerUserId": "987599",
"Title": null,
"Body": "<p>Go ahead and try caffe if you haven't done it already. \nIt is much easier than cuda-convnet to compile, it does not rely on cuda (although it speeds up things considerably) and it has an example for mnist with the Lenet algorithm.</p>\n\n<p>look here:\n<a href=\"https://github.com/BVLC/caffe/tree/dev/examples/mnist\" rel=\"nofollow\">https://github.com/BVLC/caffe/tree/dev/examples/mnist</a></p>\n"
},
{
"AnswerId": "28488917",
"CreationDate": "2015-02-12T22:17:40.380",
"ParentId": null,
"OwnerUserId": "764322",
"Title": null,
"Body": "<p>In the webpage of the MINST database you can find at the bottom a benchmark of the state of the art methods with links to their papers:</p>\n\n<p><a href=\"http://yann.lecun.com/exdb/mnist/\" rel=\"nofollow\">http://yann.lecun.com/exdb/mnist/</a></p>\n\n<p>The last entry of the table has the best result with a 0.23% of error (pretty impressive).</p>\n\n<p>Short answer: there is no <code>easy</code> way to achieve state of the art rate, unless you can accept some 2-5% of error (then use sklearn) or you find the code online.</p>\n"
}
] |
28,508,907 | 1 |
<theano>
|
2015-02-13T21:18:57.727
| null | 2,358,537 |
Error in using 'theano-nose' command
|
<p>After installing Theano from Enthought Canopy on Windows, following the steps here: <a href="http://deeplearning.net/software/theano/install.html#id9" rel="nofollow">http://deeplearning.net/software/theano/install.html#id9</a> , I tried to execute the command theano-nose from Canopy terminal. I got an error message saying "unable to find theano-nose". Can someone tell me what might be going wrong?</p>
|
[
{
"AnswerId": "28573484",
"CreationDate": "2015-02-17T23:52:01.777",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>theano-nose is from the command line. But your shell need to be able to find it.</p>\n\n<p>It is highly possible it didn't got installed correctly on Windows or that you need to reboot or log out/log in. I do not remember Windows particularities on that level.</p>\n\n<p>But a simple work around is to start python and run this:</p>\n\n<pre><code>import theano\ntheano.test()\n</code></pre>\n\n<p>This is equivalent for what you want to do.</p>\n"
}
] |
28,512,643 | 1 |
<lua><torch>
|
2015-02-14T05:10:14.647
| 28,515,146 | 620,625 |
Why does torch.Tensor have a libjpeg field?
|
<p>I see this:</p>
<pre></pre>
<p>How does Tensor get the field libjpeg? I don't see it referenced in Tensor.lua.</p>
<p>For more context, I am trying to debug this error:</p>
<pre></pre>
|
[
{
"AnswerId": "28515146",
"CreationDate": "2015-02-14T11:26:54.557",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>A common practice in Torch packages is to use the tensor table as a namespace. This is a trick that is useful to do quick and dirty templated function dispatch.\nFor example, if you load the <em>nn</em> package, you will find the functions</p>\n\n<pre><code>torch.DoubleTensor.nn.L1Cost_updateOutput\ntorch.FloatTensor.nn.L1Cost_updateOutput\n</code></pre>\n\n<p>These are usually called according to the type of input tensor. For example:</p>\n\n<pre><code>input = torch.FloatTensor()\ninput.nn.L1Cost_updateOutput(...) \n</code></pre>\n\n<p>This is what you observe with <em>torch.Tensor.libjpeg*</em>\nif you use the image loading packages, then you will notice that there will be <em>torch.FloatTensor.libjpeg*</em> and <em>torch.DoubleTensor.libjpeg*</em></p>\n\n<p>I suspect that you might have set the default tensor type to <em>torch.CudaTensor</em>, which is when you would observe this error.\nBecause the image package's functions are not defined for a Cuda tensor, the functions <em>torch.CudaTensor.libjpeg*</em> will not exist.</p>\n\n<p>The solution for you is to set your default tensor type to FloatTensor or DoubleTensor, and create any Cuda tensors as you need them.</p>\n"
}
] |
28,513,586 | 2 |
<lua><png><torch>
|
2015-02-14T07:43:05.743
| null | 4,565,952 |
How to read 16bit png using Lua?
|
<p>I am using Ubuntu and the torch7 library to deal with 16-bit images.</p>
<p>It would be best if Lua could read/write 16-bit PNG files.</p>
<p>However, I found that if I try to read them with the image.load function, it returns only the higher 8 bits of the values.</p>
<p>Currently I'm using preprocessed binary files instead, but that is quite cumbersome.</p>
<p>Is there any way to read/write 16 bit png file with Lua?</p>
|
[
{
"AnswerId": "30022598",
"CreationDate": "2015-05-04T04:25:39.457",
"ParentId": null,
"OwnerUserId": "456980",
"Title": null,
"Body": "<p>I have some OpenCV bindings for torch7. Simply because OpenCV has better image resizing/warping/loading than the image library written in torch. It handles the 16bit png images fine.</p>\n\n<p>They work on Height x Width x Channel images instead of the torch7 Channel x Height x Width images. This is no problem in practice, because they're convertible by transpose. </p>\n\n<p>It's not documented properly but should be very useful to someone! (read the init.lua for a description). </p>\n\n<p><a href=\"https://github.com/Saulzar/lua---opencv\" rel=\"nofollow\">https://github.com/Saulzar/lua---opencv</a></p>\n"
},
{
"AnswerId": "28513733",
"CreationDate": "2015-02-14T08:09:55.587",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p><a href=\"https://github.com/clementfarabet/graphicsmagick\" rel=\"nofollow\">https://github.com/clementfarabet/graphicsmagick</a></p>\n\n<p>The graphicsmagick package should work for 16-bit pngs.</p>\n\n<p>You can install it via </p>\n\n<pre><code>luarocks install graphicsmagick\n</code></pre>\n"
}
] |
28,552,966 | 0 |
<python><theano><theano-cuda>
|
2015-02-17T00:56:52.663
| null | 4,480,720 |
nvcc : fatal error : Unsupported host compiler 'bi'
|
<p>I'm trying to use the GPU in Theano. I made my .theanorc file and tried to run the following Python code: </p>
<pre></pre>
<p>I got this code from deeplearning.net.</p>
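<p>(From memory, the GPU test from deeplearning.net looks roughly like this, so treat the snippet as a sketch:)</p>
<pre><code>from theano import function, config, shared
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x num cores x num threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))

t0 = time.time()
for i in range(iters):
    r = f()
print('Looping %d times took %f seconds' % (iters, time.time() - t0))

# Elemwise ops left in the graph mean the CPU was used
if numpy.any([isinstance(n.op, T.Elemwise) for n in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
</code></pre>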
<p>The output:</p>
<pre></pre>
|
[] |
28,581,740 | 5 |
<python><cuda><theano><pycuda>
|
2015-02-18T10:49:59.943
| null | 1,537,631 |
Installing Theano with GPU on Windows 8.1 64-bit with Visual Studio 2013
|
<p>This Theano installation is making me mad :(</p>
<p>So, I've followed the instructions in the most-voted answer here, because it seemed like the configuration most similar to mine and the most up-to-date version: <a href="https://stackoverflow.com/questions/25729969/installing-theano-on-windows-8-with-gpu-enabled">Installing theano on Windows 8 with GPU enabled</a></p>
<p>1- I've installed Cuda v6.5, launched deviceQuery and it works fine.</p>
<p>2- I already have Visual Studio 2013 so I haven't installed Visual Studio 2010</p>
<p>3- > At the time of writing, Theano on GPU only allows working with 32-bit floats and is primarily built for 2.7 version of Python.</p>
<p>So I don't know exactly what the current state is now, but I have a friend with the same configuration as mine who managed to make it work, so I guess it's possible. I've installed Python through Anaconda.</p>
<p>4- I've installed MinGW and Cygwin</p>
<p>5- I've fixed msvc9compiler.py</p>
<p>6- Here's the bottleneck : the PyCUDA Installation</p>
<p>Here's what I've done:
- I've used Cygwin to extract the PyCUDA tar file
- I've executed python configure.py through the VS2013 x64 Native Tools Command Prompt, then configured siteconf.py as follows:</p>
<pre></pre>
<ul>
<li>I've executed python setup.py build --compiler="msvc" through VS2013 x64 Native Tools Command Prompt</li>
<li>I've executed python setup.py install through VS2013 x64 Native Tools Command Prompt</li>
<li><p>When I execute the little test in python, here's what's happening:</p>
<pre></pre></li>
</ul>
<p>Could you please tell me why the hell this doesn't work?</p>
|
[
{
"AnswerId": "29081282",
"CreationDate": "2015-03-16T15:54:34.390",
"ParentId": null,
"OwnerUserId": "4677134",
"Title": null,
"Body": "<p>You probably need to add the path to the executables for Visual Studio in your <code>nvcc.profile</code> </p>\n\n<p>(you can find it in your CUDA bin folder. On my system: <code>C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v6.5\\bin</code>).</p>\n\n<p>In my case, since I have Visual studio 2010, I added at the end of <code>nvcc.profile</code>:</p>\n\n<pre><code>\"compiler-bindir = C:\\Program Files (x86)\\Microsoft Visual Studio 10.0\\VC\\bin\\amd64\"\n</code></pre>\n"
},
{
"AnswerId": "29330809",
"CreationDate": "2015-03-29T15:11:24.843",
"ParentId": null,
"OwnerUserId": "4234099",
"Title": null,
"Body": "<p>I have written a practical guide for the whole process:</p>\n\n<p><a href=\"https://my6266blog.wordpress.com/2015/01/21/installing-theano-pylearn2-and-even-gpu-on-windows/\" rel=\"nofollow\">https://my6266blog.wordpress.com/2015/01/21/installing-theano-pylearn2-and-even-gpu-on-windows/</a></p>\n\n<p>Good luck! It's not that complicated, just follow the steps one by one.</p>\n"
},
{
"AnswerId": "31461817",
"CreationDate": "2015-07-16T18:25:50.893",
"ParentId": null,
"OwnerUserId": "5124900",
"Title": null,
"Body": "<p>I was able to get Theano installed on my ASUS K501LX Windows 8.1 laptop, with an NVIDIA GeForce 950M GPU, without any hassle whatsoever. I largely followed Maor's post above from March 29th. I was actually shocked at how easy it was! All I needed was the Community Edition of Visual Studio 2013 and the CUDA 7 Toolkit. I then installed Anaconda 3.4 (I used the latest version that's out there now). The one modification I made to Maor's post was installing mingw, via <code>conda install mingw libpython</code> immediately after installing Anaconda. Also, since I am using Python 3, I had to change the flags parameter in <code>.theanorc.txt</code> to point to <code>C:\\Anaconda3\\libs</code>. </p>\n\n<p>Upon importing theano, it returned that it was using my GeForce GTX 950M device, and running the <code>theano\\misc\\check_blas.py</code> check returned no errors and carried out its tests on my GPU. </p>\n\n<p>Happy times! </p>\n"
},
{
"AnswerId": "34842046",
"CreationDate": "2016-01-17T18:32:31.140",
"ParentId": null,
"OwnerUserId": "2924421",
"Title": null,
"Body": "<p>The process was a pretty big hassle, so here is a tutorial for anyone that is interested:</p>\n\n<p>All of this has been tested using a clean install of Windows 8.1, with nothing else on it, though it should work fine if you don't have a clean install because this will install all the required versions of the software for you.</p>\n\n<p>You need 64-bit windows, 32 bit will not work. You will also need a <a href=\"https://developer.nvidia.com/cuda-gpus\" rel=\"nofollow noreferrer\">CUDA compatible graphics card</a>, so if you don’t have one you’re stuck for now, sadly. This means that you need a relatively modern NVIDIA graphics card, AMD will not work (it can run OpenCL but not CUDA because CUDA is lame and proprietary).</p>\n\n<p>I am installing this on Windows 8.1, but I suspect it should all still work on windows 7 as well.</p>\n\n<p>First download <a href=\"http://sourceforge.net/projects/winpython/files/WinPython_2.7/2.7.10.3/WinPython-64bit-2.7.10.3.exe/download\" rel=\"nofollow noreferrer\">WinPython</a>, (make sure to get python 2.7, version 2.7.10.3, this link points to there) and install it TO A PATH THAT DOES NOT HAVE SPACES IN IT. OTHERWISE THINGS BREAK. I made an Other folder in C:\\ (C:\\Other) and then made a folder named Python27 (C:\\Other\\Python27) and tell the installer to install it in there.</p>\n\n<p>Once it is done installing, you will need to add it to your path. Press the windows key and type environment variables, then click “Edit the System Environment Variables”, click Environment Variables in the windows that pops up, scroll down to Path, and then append</p>\n\n<pre><code>C:\\Other\\Python27\\python-2.7.10.amd64\n</code></pre>\n\n<p>Or wherever else you installed WinPython to</p>\n\n<p>Then add a semicolon after it, so you get</p>\n\n<pre><code>C:\\Other\\Python27\\python-2.7.10.amd64;\n</code></pre>\n\n<p>This is how you add a specific path to the Path variables, in the future, I will just say to add it here and now give specific steps about how to do that. Note that if you update the system path, the current command prompt windows that are open won’t get that update, and you will have to open a new command prompt window to actually have it use the new path.</p>\n\n<p>The purpose of a path is so your command prompt window knows where programs are, because if you call, say</p>\n\n<p>python</p>\n\n<p>in the command prompt, it will look through every folder in your path until it finds a python.exe. If it can’t find any, it will get angry as it typically would if that program didn’t exist.</p>\n\n<p>If you don’t want to clutter your path variable/if your path variable is full, I put a tutorial <a href=\"https://stackoverflow.com/a/34216503/2924421\">here</a> about how to make it so your path is just appended to when you open a command prompt window via a text file that stores all the paths, instead of having to edit the environment variable itself, if you are interested.</p>\n\n<p>You then need to add</p>\n\n<pre><code>C:\\Other\\Python27\\python-2.7.10.amd64\\DLL;\nC:\\Other\\Python27\\python-2.7.10.amd64\\Scripts;\n</code></pre>\n\n<p>to your path as well (again, or wherever else you installed python. 
For later on I will just say where I installed it and if you installed it somewhere else it should be pretty easy to just tweak the commands accordingly)</p>\n\n<p>Next, install <a href=\"https://www.visualstudio.com/post-download-vs?sku=community&clcid=0x409\" rel=\"nofollow noreferrer\">visual studio 2015</a> community and <a href=\"https://www.visualstudio.com/en-us/downloads/download-visual-studio-vs.aspx\" rel=\"nofollow noreferrer\">visual studio 2013</a>, making sure to install all the tools related to c++ development as well (using custom installation, then under Programming Languages). These don’t need to be in a path without spaces, and they probably won’t let you store them anywhere else anyway and that is OK.</p>\n\n<p>Add</p>\n\n<pre><code>C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin;\n\nC:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin\\amd64;\n\nC:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\lib;\n\nC:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\lib\\amd64;\n</code></pre>\n\n<p>To your system path.</p>\n\n<p>Install NOT NEWEST DRIVER BECAUSE THEY ARE UNSTABLE, but instead <a href=\"http://www.nvidia.com/download/driverResults.aspx/89119/en-us\" rel=\"nofollow noreferrer\">355.60</a> because it is known to be very reliable, and new enough. Then install the <a href=\"https://developer.nvidia.com/cuda-toolkit-65\" rel=\"nofollow noreferrer\">CUDA toolkit</a> (it’s also okay to store this in a path with spaces, it probably won’t give you the option either, but even if it does, just let it store it in the default place it wants to store it to). Version 6.5 is needed because version 7 and above aren’t supported by pycuda. If you have a GTX 9__ you will need to download CUDA from <a href=\"https://developer.nvidia.com/cuda-downloads-geforce-gtx9xx\" rel=\"nofollow noreferrer\">here</a> instead.</p>\n\n<p>This will probably automatically append</p>\n\n<pre><code>C:\\Program Files (x86)\\Windows Kits\\8.1\\Redist\\D3D\\x64;\nC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v7.5\\bin;\nC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v7.5\\libnvvp;\nC:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;\n</code></pre>\n\n<p>To your path, if not you will need to do so now.</p>\n\n<p>Those are the only three things that can be stored to paths with spaces (these will be in like Program Files or Program Files (x86)), with everything else be very careful to store them to paths that don’t have spaces.</p>\n\n<p>Download the <a href=\"http://sourceforge.net/projects/boost/files/boost-binaries/1.55.0/boost_1_55_0-msvc-12.0-64.exe/download\" rel=\"nofollow noreferrer\">boost binaries</a> (1_55_0 for 64 bit which is the version this link points to), and run the installer, then select to store them to a path without spaces (I stored them to C:\\Other\\boost)</p>\n\n<p>Navigate to that directory in the command line, then run</p>\n\n<pre><code>bootstrap.bat\n</code></pre>\n\n<p>and then when it is done run</p>\n\n<pre><code>.\\b2\n</code></pre>\n\n<p>This will start building, and take a long time, and use a lot of space (about 6 GB).</p>\n\n<p>It will probably say that 8 targets failed, 8 targets were skipped, and 1075 were updated. 
This is what one should expect, and is not a problem.</p>\n\n<p>Install <a href=\"https://git-scm.com/download/win\" rel=\"nofollow noreferrer\">Git-2.7.0-64-bit</a> to some path without a space in it</p>\n\n<p>Choose to use Git from the Windows Command Prompt, checkout Windows-style, commit Unix-style endings, use Window’s default console window, and do not enable file system caching.</p>\n\n<p>Add</p>\n\n<pre><code>C:\\Other\\Git\\bin;\n</code></pre>\n\n<p>To your system path</p>\n\n<p>Next, run the installer for <a href=\"https://www.microsoft.com/en-us/download/details.aspx?id=44266\" rel=\"nofollow noreferrer\">VCForPython</a>. </p>\n\n<p>Download <a href=\"https://pypi.python.org/pypi/pycuda\" rel=\"nofollow noreferrer\">pycuda</a> source (pycuda-2015.1.3)</p>\n\n<p>Navigate inside that directory, then run</p>\n\n<pre><code>python configure.py\n</code></pre>\n\n<p>this will create a file named siteconf.py.</p>\n\n<p>Open this file, and it should look something like</p>\n\n<pre><code>BOOST_INC_DIR = []\nBOOST_LIB_DIR = []\nBOOST_COMPILER = 'gcc43'\nUSE_SHIPPED_BOOST = True\nBOOST_PYTHON_LIBNAME = ['boost_python-py27']\nBOOST_THREAD_LIBNAME = ['boost_thread']\nCUDA_TRACE = False\nCUDA_ROOT = 'C:\\\\Program Files\\\\NVIDIA GPU Computing Toolkit\\\\CUDA\\\\v6.5'\nCUDA_INC_DIR = ['${CUDA_ROOT}/include']\nCUDA_ENABLE_GL = False\nCUDA_ENABLE_CURAND = True\nCUDADRV_LIB_DIR = ['${CUDA_ROOT}/lib/Win32', '${CUDA_ROOT}/lib/x64']\nCUDADRV_LIBNAME = ['cuda']\nCUDART_LIB_DIR = ['${CUDA_ROOT}/lib/Win32', '${CUDA_ROOT}/lib/x64']\nCUDART_LIBNAME = ['cudart']\nCURAND_LIB_DIR = ['${CUDA_ROOT}/lib/Win32', '${CUDA_ROOT}/lib/x64']\nCURAND_LIBNAME = ['curand']\nCXXFLAGS = []\nLDFLAGS = []\n</code></pre>\n\n<p>modify it so it looks like:</p>\n\n<pre><code>BOOST_INC_DIR = ['C:/Other/boost']\nBOOST_LIB_DIR = ['C:/Other/boost/lib64-msvc-12.0']\nBOOST_COMPILER = 'msvc'\nUSE_SHIPPED_BOOST = True\nBOOST_PYTHON_LIBNAME = ['boost_python-vc120-mt-1_55']\nBOOST_THREAD_LIBNAME = ['boost_thread-vc110-mt-1_55']\nCUDA_TRACE = False\nCUDA_ROOT = 'C:\\\\Program Files\\\\NVIDIA GPU Computing Toolkit\\\\CUDA\\\\v6.5'\nCUDA_ENABLE_GL = False\nCUDA_ENABLE_CURAND = True\nCUDADRV_LIB_DIR = ['${CUDA_ROOT}/lib/Win32', '${CUDA_ROOT}/lib/x64']\nCUDADRV_LIBNAME = ['cuda']\nCUDART_LIB_DIR = ['${CUDA_ROOT}/lib/Win32', '${CUDA_ROOT}/lib/x64']\nCUDART_LIBNAME = ['cudart']\nCURAND_LIB_DIR = ['${CUDA_ROOT}/lib/Win32', '${CUDA_ROOT}/lib/x64']\nCURAND_LIBNAME = ['curand']\nCXXFLAGS = ['/DBOOST_PYTHON_STATIC_LIB', '/EHsc']\nLDFLAGS = ['/LIBPATH:C:\\\\Other\\\\boost\\\\/lib64-msvc-12.0', '/FORCE']\n</code></pre>\n\n<p>then run</p>\n\n<pre><code>python setup.py build\n</code></pre>\n\n<p>followed by</p>\n\n<pre><code>python setup.py install\n</code></pre>\n\n<p>This should install pycuda for you =)</p>\n\n<p>To install Theano (with GPU enabled), download <a href=\"https://github.com/Theano/Theano/releases/tag/rel-0.7\" rel=\"nofollow noreferrer\">release 0.7</a>, unzip it, navigate inside it using a command prompt, and then type</p>\n\n<pre><code>python setup.py install\n</code></pre>\n\n<p>then go and edit system environment variables, and create one named</p>\n\n<pre><code>THEANO_FLAGS\n</code></pre>\n\n<p>and set it’s value to</p>\n\n<pre><code>device=gpu,floatX=float32\n</code></pre>\n\n<p>Then open a new command prompt, and if you have completed all of the steps above, this should work well =) You can run the code <a href=\"http://deeplearning.net/software/theano/tutorial/using_gpu.html\" rel=\"nofollow noreferrer\">here</a> to make sure 
that you are actually running off the GPU.</p>\n"
},
{
"AnswerId": "29196085",
"CreationDate": "2015-03-22T15:20:00.377",
"ParentId": null,
"OwnerUserId": "3536271",
"Title": null,
"Body": "<p>Actually you don't need to install pycuda to get theano working on your windows machine. I'm not an expert but I have Theano installed on Windows 8.1.</p>\n\n<p>This is my laptop config: 64-bit, nvcc/cuda 6.5, Python 2.7.9, WinPython-64bit-2.7.9.3, Windows 8.1, VS2013 and two graphic units (Intel HD Graphics 4600 and NVIDIA GeForce GT 750M).</p>\n"
}
] |
28,608,114 | 1 |
<python><hdf5><deep-learning><caffe>
|
2015-02-19T13:51:41.163
| 28,627,633 | 4,448,562 |
Error using HDF5 data for training models in caffe
|
<p>I am working with caffe and I have been trying to train the caffenet model using HDF5 data. I used the prototxt files from ~/../caffe/examples/hdf5_classification, but I get the following error:</p>
<pre></pre>
<p>P.S.: I am a newbie in Python programming.</p>
<p>Can someone help me with this?
Cheers</p>
|
[
{
"AnswerId": "28627633",
"CreationDate": "2015-02-20T11:13:43.437",
"ParentId": null,
"OwnerUserId": "4448562",
"Title": null,
"Body": "<p>I didn't include the labels in the hdf file added labels data structure and the problem was solved </p>\n"
}
] |
28,648,345 | 1 |
<ipython><torch>
|
2015-02-21T16:50:33.603
| 28,654,908 | 659,156 |
itorch creates a python console, not a torch console
|
<p>When I call itorch I don't get a torch console but an IPython console:</p>
<pre></pre>
<p>What am I doing wrong?</p>
|
[
{
"AnswerId": "28654908",
"CreationDate": "2015-02-22T05:36:39.517",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>iTorch supports iPython v2.3 or above. Please see the required dependencies.\n You seem to have iPython v 0.1.2, maybe that's a reason you see this behavior.</p>\n"
}
] |
28,652,645 | 0 |
<logistic-regression><theano>
|
2015-02-21T23:28:50.717
| null | 1,536,499 |
Theano - logistic regression example weight vector becomes NaN?
|
<p>I am doing a tutorial (code <a href="https://github.com/Newmu/Theano-Tutorials/blob/master/2_logistic_regression.py" rel="nofollow">here</a>) and video <a href="https://www.youtube.com/watch?v=S75EdAcXHKk" rel="nofollow">here</a> (13:00 minutes in). </p>
<p>My only change is using the MNIST training set from a different location (creating a one-hot encoding), but it is not working. I literally copy-pasted all the code (except for the MNIST loading) in this example. Here is the code:</p>
<pre></pre>
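<p>For context, the relevant part of the linked tutorial code is roughly the following (paraphrased from the link above, so treat it as a sketch):</p>
<pre><code>import numpy as np
import theano
from theano import tensor as T

def floatX(X):
    return np.asarray(X, dtype=theano.config.floatX)

def init_weights(shape):
    return theano.shared(floatX(np.random.randn(*shape) * 0.01))

def model(X, w):
    return T.nnet.softmax(T.dot(X, w))

X = T.fmatrix()
Y = T.fmatrix()
w = init_weights((784, 10))

py_x = model(X, w)
y_pred = T.argmax(py_x, axis=1)

cost = T.mean(T.nnet.categorical_crossentropy(py_x, Y))
gradient = T.grad(cost=cost, wrt=w)
update = [[w, w - gradient * 0.05]]

train = theano.function(inputs=[X, Y], outputs=cost,
                        updates=update, allow_input_downcast=True)
</code></pre>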
<p>The weight vector updates once, and becomes all NaN on the second update. I am very new to Theano, but I am looking for tips to figure this out, especially if someone has already done this tutorial.</p>
<p><strong>UPDATE</strong>.
It looks like the gradient is the issue.</p>
<p>When I add this</p>
<pre></pre>
<p>It prints . This appears to be the correct usage of though.</p>
<p><strong>UPDATE 2.</strong>
When I change the cost function to this:</p>
<pre></pre>
<p>It is working now, but I only get 70% accuracy, which is really bad.</p>
<p><strong>UPDATE 3.</strong>
I downloaded the MNIST data used in the tutorial and it worked with 92% accuracy.</p>
<p>I am not sure why my first MNIST data source was dying with the cross-entropy cost and then performed really poorly with the mean squared error cost function.</p>
|
[] |
28,661,843 | 1 |
<opencv><osx-mavericks><caffe>
|
2015-02-22T18:52:01.570
| null | 4,594,305 |
caffe installation error linker error lib/libcaffe.dylib Error and src/caffe/CMakeFiles/caffe.dir/all Error
|
<p>I have been trying to install Caffe on my Mac OS X 10.9.5.
I have been following the official Caffe installation instructions from <a href="http://caffe.berkeleyvision.org/installation.html#compilation" rel="nofollow">http://caffe.berkeleyvision.org/installation.html#compilation</a>. </p>
<p>When following the cmake installation's "make all" in the build folder, I keep getting the following linking error. I have tried many of the suggestions that I found on the web, but to no avail. </p>
<p>Any suggestion is appreciated. Thank you in advance.</p>
<pre></pre>
<p>The full error log is at <a href="https://github.com/jackywang529/myOpenCV/blob/master/OpenCV/OpenCVTutorial2/errorLog" rel="nofollow">https://github.com/jackywang529/myOpenCV/blob/master/OpenCV/OpenCVTutorial2/errorLog</a></p>
<p>Thank you</p>
|
[
{
"AnswerId": "28691308",
"CreationDate": "2015-02-24T08:42:05.247",
"ParentId": null,
"OwnerUserId": "4594305",
"Title": null,
"Body": "<p>After upgrading from CUDA6.5 to CUDA7.0, the \"make all\" step completed successfully. I also made sure that I removed the edits that I had made to the formulas, which were necessary when I was using CUDA6.5. Such edits is described in (<a href=\"http://caffe.berkeleyvision.org/install_osx.html\" rel=\"nofollow\">http://caffe.berkeleyvision.org/install_osx.html</a> under section libstdc++ installation).</p>\n\n<p>Good luck with all caffe users!</p>\n"
}
] |
28,692,209 | 4 |
<python><caffe>
|
2015-02-24T09:33:33.283
| 28,979,649 | 1,452,257 |
Using GPU despite setting CPU_Only, yielding unexpected keyword argument
|
<p>I'm installing Caffe on an Ubuntu 14.04 virtual server with CUDA installed (without the driver) using <a href="https://github.com/BVLC/caffe/wiki/Ubuntu-14.04-VirtualBox-VM">https://github.com/BVLC/caffe/wiki/Ubuntu-14.04-VirtualBox-VM</a> as inspiration. During the installation process I edited the Makefile to include <code>CPU_ONLY := 1</code> before building it. However, it seems that Caffe is still trying to make use of the GPU. When I try to run a test example I get the following error:</p>
<pre></pre>
<p>How can I fix this and run entirely on CPU?</p>
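<p>For what it's worth, a minimal sketch of forcing CPU mode from pycaffe; the file names are hypothetical, and the three-argument <code>caffe.Net</code> call assumes a reasonably recent pycaffe interface:</p>
<pre><code>import caffe

caffe.set_mode_cpu()  # request CPU mode at runtime
# Hypothetical paths; newer pycaffe versions take a phase argument:
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
</code></pre>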
|
[
{
"AnswerId": "34022583",
"CreationDate": "2015-12-01T14:16:29.283",
"ParentId": null,
"OwnerUserId": "5476099",
"Title": null,
"Body": "<p>just one typo from user2696499</p>\n\n<pre><code>if ms != inputs[in_][1:]\n print(self.inputs[in_])\n in_shape = self.inputs[in_][1:]\n m_min, m_max = mean.min(), mean.max()\n normal_mean = (mean - m_min) / (m_max - m_min)\n mean = resize_image(normal_mean.transpose((1,2,0)),\n in_shape[1:]).transpose((2,0,1)) * \\\n (m_max - m_min) + m_min\n '''\n raise ValueError('Mean shape incompatible with input shape.')\n '''\n</code></pre>\n"
},
{
"AnswerId": "28829083",
"CreationDate": "2015-03-03T10:12:04.610",
"ParentId": null,
"OwnerUserId": "2927205",
"Title": null,
"Body": "<p>There are some problems currently due to a lot of interface changes introduced by the caffe developers. The Python Wrapper is not yet updated with these changes.</p>\n\n<p>See this PR which fixes the problem: <a href=\"https://github.com/BVLC/caffe/pull/1964\" rel=\"nofollow\">https://github.com/BVLC/caffe/pull/1964</a></p>\n"
},
{
"AnswerId": "33866398",
"CreationDate": "2015-11-23T08:23:39.993",
"ParentId": null,
"OwnerUserId": "5055922",
"Title": null,
"Body": "<p>The same error was solved in another google group:-</p>\n\n<p>Where all you need to do is to change this:</p>\n\n<pre><code> mean=np.load(mean_file)\n</code></pre>\n\n<p>to this:</p>\n\n<pre><code>mean=np.load(mean_file).mean(1).mean(1)\n</code></pre>\n\n<p><a href=\"https://groups.google.com/forum/#!msg/caffe-users/C1J5cO54oRE/bSOT3EViAgAJ\" rel=\"nofollow\">Google Group: Mean shape incompatible with input shape</a></p>\n"
},
{
"AnswerId": "28979649",
"CreationDate": "2015-03-11T06:02:18.843",
"ParentId": null,
"OwnerUserId": "2696499",
"Title": null,
"Body": "<p>I'm gonna add a few words to Mailerdaimon's answer.</p>\n\n<p>I followed the installation guide (<a href=\"https://github.com/BVLC/caffe/wiki/Ubuntu-14.04-VirtualBox-VM\">https://github.com/BVLC/caffe/wiki/Ubuntu-14.04-VirtualBox-VM</a>) to setup Caffe in my vagrant virtual machine. FYI, virtual machines DO NOT support GPU accelerating. Back to the point, after I fix 'CPU / GPU switch in example scripts'(<a href=\"https://github.com/BVLC/caffe/pull/2058\">https://github.com/BVLC/caffe/pull/2058</a>) and add '--print_results --labels_file' options(<a href=\"https://github.com/jetpacapp/caffe/blob/master/python/classify.py\">https://github.com/jetpacapp/caffe/blob/master/python/classify.py</a>) to 'python/classify.py', this command './python/classify.py ./examples/images/cat.jpg foo --print_results' still throws the following error:</p>\n\n<pre><code> Traceback (most recent call last):\n File \"./python/classify.py\", line 175, in <module>\n main(sys.argv)\n File \"./python/classify.py\", line 129, in main\n channel_swap=channel_swap)\n File \"/home/vagrant/caffe/python/caffe/classifier.py\", line 38, in __init__\n self.transformer.set_mean(in_, mean)\n File \"/home/vagrant/caffe/python/caffe/io.py\", line 267, in set_mean\n raise ValueError('Mean shape incompatible with input shape.')\n ValueError: Mean shape incompatible with input shape.\n</code></pre>\n\n<p>Then I dump the shape of 'mean'(which is 3*256*256) and 'input'(3*227*227). Obviously these two shapes are incompatible. But old versions of 'set_mean()' do NOT throw the error, so I dig into the python code and find out that old 'set_mean()' function looks like this(python/caffe/pycaffe.py, line 195-202, <a href=\"https://github.com/jetpacapp/caffe/\">https://github.com/jetpacapp/caffe/</a>):</p>\n\n<pre><code>if mode == 'elementwise':\n if mean.shape != in_shape[1:]:\n # Resize mean (which requires H x W x K input in range [0,1]).\n m_min, m_max = mean.min(), mean.max()\n normal_mean = (mean - m_min) / (m_max - m_min)\n mean = caffe.io.resize_image(normal_mean.transpose((1,2,0)),\n in_shape[2:]).transpose((2,0,1)) * (m_max - m_min) + m_min\n</code></pre>\n\n<p>But in latest Caffe, contributors encapsulate 'set_mean()' and other transformation functions into class \n'Transformer'. New 'set_mean()' function looks like this(python/caffe/io.py, line 253-254, <a href=\"https://github.com/BVLC/caffe/\">https://github.com/BVLC/caffe/</a>):</p>\n\n<pre><code>if ms != self.inputs[in_][1:]:\n raise ValueError('Mean shape incompatible with input shape.')\n</code></pre>\n\n<p>Jesus, how could these two be the same function? So I change the new 'set_mean()', comment out the error raising sentence, and add shape re-sizing procedure as in old 'set_mean()'.</p>\n\n<pre><code>if ms != ins:\n print(self.inputs[in_])\n in_shape = self.inputs[in_][1:]\n m_min, m_max = mean.min(), mean.max()\n normal_mean = (mean - m_min) / (m_max - m_min)\n mean = resize_image(normal_mean.transpose((1,2,0)),\n in_shape[1:]).transpose((2,0,1)) * \\\n (m_max - m_min) + m_min\n '''\n raise ValueError('Mean shape incompatible with input shape.')\n '''\n</code></pre>\n\n<p>Voila, problem solved.</p>\n\n<pre><code>Classifying 1 inputs.\nDone in 1.17 s.\n[('tabby', '0.27933'), ('tiger cat', '0.21915'), ('Egyptian cat', '0.16064'), ('lynx', '0.12844'), ('kit fox', '0.05155')]\n</code></pre>\n"
}
] |
28,710,350 | 1 |
<caffe>
|
2015-02-25T03:01:08.990
| 28,808,297 | 678,392 |
When does Caffe make copies of the data?
|
<pre></pre>
<p>Why do the last two lines copy data? Don't the GPU and CPU both have up-to-date contents?</p>
<p><a href="http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html" rel="noreferrer">http://caffe.berkeleyvision.org/tutorial/net_layer_blob.html</a></p>
|
[
{
"AnswerId": "28808297",
"CreationDate": "2015-03-02T11:08:33.943",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p><code>.gpu_data</code> and <code>.cpu_data</code> are used in cases were the <code>data</code> is used only as input and will not be modified by the algorithm. <code>.mutable_*</code> is used when the data itself gets updated while running the algorithm.</p>\n\n<p>Whenever a the data is called, it checks whether the previous statement was a <code>mutable_*</code> function call and that too using the same processor (gpu or cpu). If it is using the same processor, data need not be copied. If it is using the other processor, there is a chance that the data might have been updated in the previous <code>.mutable_*</code> call and hence a data copy is required.</p>\n\n<p><strong>Edit 1</strong>\nWhenever the previous instruction is 'mutable', data copy is to be done before the current instruction IF the current instruction is on a different processor. </p>\n\n<p>In no other case the data copy takes place except for a special initial condition, ie; when no data was present at all in the GPU memory and hence a copy of the data will take place before *_gpu_data() call.</p>\n"
}
] |
28,716,618 | 1 |
<neural-network><convolution><theano><conv-neural-network>
|
2015-02-25T10:28:44.867
| 28,752,057 | 4,078,391 |
Theano: Reconstructing convolutions with stride (subsampling) in an autoencoder
|
<p>I want to train a simple convolutional auto-encoder using Theano, which has been working great. However, I don't see how one can reverse the command when subsampling (stride) is used. Is there an efficient way to "invert" the convolution command when stride is used, like in the image below?</p>
<p><img src="https://i.stack.imgur.com/Zevzj.jpg" alt="Image shamelessly stolen and painted from http:/cs231n.github.io/convolutional-networks. "></p>
<p>For example, I want to change the following ... </p>
<pre></pre>
<p>... into the situation where . The first layer will work just as expected. However, the second layer will effectively "do a convolution with stride 1, then throw away half of the outputs". This is clearly a different operation than what I'm looking for - it won't even have the same number of neurons (length) as . What should the second command be to "reconstruct" the original ?</p>
|
[
{
"AnswerId": "28752057",
"CreationDate": "2015-02-26T20:24:33.360",
"ParentId": null,
"OwnerUserId": "3489247",
"Title": null,
"Body": "<p>I deduce from this that you intend to have tied weights, i.e. if the first operation were are matrix multiplication with <code>W</code>, then the output would be generated with <code>W.T</code>, the adjoint matrix. In your case you would thus be looking for the adjoint of the convolution operator followed by subsampling.</p>\n\n<p>(EDIT: I deduced wrongly, you can use any filter whatsoever to 'deconvolve', as long as you get the shapes right. Talking about the adjoint is still informative, though. You will be able to relax the assumption afterwards.)</p>\n\n<p>Since the convolution operator and subsampling operators are linear operator, lets denote them by <code>C</code> and <code>S</code> respectively and observe that convolution + subsampling an image <code>x</code> would be</p>\n\n<pre><code>S C x\n</code></pre>\n\n<p>and that the adjoint operation on <code>y</code> (which lives in the same space as <code>S C x</code>) would be </p>\n\n<pre><code>C.T S.T y\n</code></pre>\n\n<p>Now, S.T is nothing other than upsampling to the original image size by adding zeros around all entries of <code>y</code> until the right size is obtained.</p>\n\n<p>From your post, you seem to be aware of the adjoint of the convolution operator of stride (1, 1) - it is the convolution with reversed filters and reversed <code>border_mode</code>, i.e. with <code>filters.dimshuffle(1, 0, 2, 3)[:, :, ::-1, ::-1]</code> and switch from <code>border_mode='valid'</code> to <code>border_mode='full'</code>.</p>\n\n<p>Concatenate upsampling and this reverse filter convolution and you obtain the adjoint you seek.</p>\n\n<p>Note: There may be ways of exploiting the gradient <code>T.grad</code> or <code>T.jacobian</code> to obtain this automatically, but I am never sure how this is done exactly.</p>\n\n<p>EDIT: There, I wrote it down :)</p>\n\n<pre><code>import theano\nimport theano.tensor as T\nimport numpy as np\n\nfilters = theano.shared(np.random.randn(4, 3, 6, 5).astype('float32'))\n\ninp1 = T.tensor4(dtype='float32')\n\nsubsampled_convolution = T.nnet.conv2d(inp1, filters, border_mode='valid', subsample=(2, 2))\n\ninp2 = T.tensor4(dtype='float32')\nshp = inp2.shape\nupsample = T.zeros((shp[0], shp[1], shp[2] * 2, shp[3] * 2), dtype=inp2.dtype)\nupsample = T.set_subtensor(upsample[:, :, ::2, ::2], inp2)\nupsampled_convolution = T.nnet.conv2d(upsample,\n filters.dimshuffle(1, 0, 2, 3)[:, :, ::-1, ::-1], border_mode='full')\n\nf1 = theano.function([inp1], subsampled_convolution)\nf2 = theano.function([inp2], upsampled_convolution)\n\nx = np.random.randn(1, 3, 10, 10).astype(np.float32)\nf1x = f1(x)\ny = np.random.randn(*f1x.shape).astype(np.float32)\nf2y = f2(y)\n\np1 = np.dot(f1x.ravel(), y.ravel())\np2 = np.dot(x.ravel(), f2y[:, :, :-1].ravel())\n\nprint p1 - p2\n</code></pre>\n\n<p><code>p1</code> being equal to <code>p2</code> corroborates that f2 is the adjoint of f1</p>\n"
}
] |
28,717,780 | 2 |
<virtualenv><theano>
|
2015-02-25T11:24:39.870
| 28,720,491 | 326,849 |
Issue installing Theano in my virtualenv
|
<p>I'm trying to install Theano in a virtualenv:</p>
<pre></pre>
<p>but I get the following error:</p>
<pre></pre>
<p>I would like not to depend on any system package, so I didn't use the option "--system-site-packages" to create my virtualenv.</p>
<p>Can anybody help?</p>
|
[
{
"AnswerId": "28720491",
"CreationDate": "2015-02-25T13:39:12.883",
"ParentId": null,
"OwnerUserId": "326849",
"Title": null,
"Body": "<p>As pointed out by user1615070, I just had to install numpy and scipy in my virtualenv before installing Theano (to not use the system versions).</p>\n"
},
{
"AnswerId": "44598600",
"CreationDate": "2017-06-16T22:04:26.203",
"ParentId": null,
"OwnerUserId": "3287820",
"Title": null,
"Body": "<p>To be more specific, Theano (at least version 0.8) has the <a href=\"http://deeplearning.net/software/theano_versions/0.8.X/install_ubuntu.html#virtualenv\" rel=\"nofollow noreferrer\">specific command in its documentation</a>. Here it is:</p>\n\n<pre><code>virtualenv --system-site-packages -p python2.7 theano-env\n</code></pre>\n"
}
] |
28,756,503 | 1 |
<theano><pymc3>
|
2015-02-27T02:26:10.873
| null | 1,361,822 |
Unable to create lambda function in hierarchical pymc3 model
|
<p>I'm trying to create the model shown below with PyMC 3 but can't figure out how to properly map probabilities to the observed data with a lambda function.</p>
<pre></pre>
<p>The error I get is</p>
<pre></pre>
<p>In the model I'm attempting to build, the elements of indicate which element in gives the probability of the corresponding observed value in (placed in RV ). In other words,</p>
<pre></pre>
<p>I'm guessing I need to define the probability with a Theano expression or use Theano but I don't see how it can be done for this model.</p>
|
[
{
"AnswerId": "28838717",
"CreationDate": "2015-03-03T18:01:40.623",
"ParentId": null,
"OwnerUserId": "429726",
"Title": null,
"Body": "<p>You should specify your categorical <code>p</code> values as <code>Deterministic</code> objects before passing them on to <code>w</code>. Otherwise, the <code>as_op</code> implementation would look something like this:</p>\n\n<pre><code>@theano.compile.ops.as_op(itypes=[t.lscalar, t.dscalar, t.dscalar],otypes=[t.dvector])\ndef p(z=z, phi=phi):\n return [phi[z[i,j]] for i in range(D) for j in range(W)]\n</code></pre>\n"
}
] |
28,764,801 | 0 |
<python><spyder><theano>
|
2015-02-27T12:09:17.563
| null | 4,614,444 |
Installing Theano in Winpython using tutorial from deeplearning.net
|
<p>I'm trying to install Theano on my Windows Server 2008 OS (64-bit), and I'm following the tutorial given in </p>
<p><a href="http://deeplearning.net/software/theano/install_windows.html#install-windows" rel="nofollow">http://deeplearning.net/software/theano/install_windows.html#install-windows</a></p>
<p>I followed the "Winpython" approach and </p>
<p>have reached this step </p>
<p><a href="http://deeplearning.net/software/theano/install_windows.html#configuring-theano" rel="nofollow">http://deeplearning.net/software/theano/install_windows.html#configuring-theano</a></p>
<p>As Instructed I executed the command</p>
<p></p>
<p>The tutorial said "this step will add the directory to your environment variable"</p>
<p>However, after executing the code, when I try to import theano, it shows that the module cannot be found.</p>
<p>If I directly copy the directory into the folder of my directory, I get a lot of jumbled messages and errors (over 100 lines) in my ipython console.</p>
<p>Please tell me what I am doing wrong. </p>
<p>Is there a better tutorial that I can follow? </p>
|
[] |
28,776,650 | 0 |
<lua><scientific-computing><gsl><torch>
|
2015-02-28T00:32:42.440
| null | 297,094 |
How does GSL-shell compare to Torch
|
<p>It looks like <a href="http://torch.ch/" rel="nofollow">Lua Torch</a> supersedes <a href="http://www.nongnu.org/gsl-shell/" rel="nofollow">GSL Shell</a>, especially with the <a href="https://github.com/facebook/iTorch" rel="nofollow">iTorch</a> package. How do they compare to each other? When would someone prefer GSL Shell over Torch? Are they compatible (can one use GSL Shell objects in Torch functions)? </p>
|
[] |
28,779,133 | 1 |
<computer-vision><neural-network><deep-learning><caffe>
|
2015-02-28T07:05:03.637
| null | 115,781 |
How to tune a training schema for a different data set in Caffe?
|
<p>Currently I am following the <a href="http://caffe.berkeleyvision.org/gathered/examples/imagenet.html" rel="nofollow">caffe imagenet example</a> but applying it to my own training data set. My dataset has about 2000 classes with about 10 to 50 images per class. I was classifying vehicle images, and the images were cropped to the front, so the images within each class have the same size and (almost) the same view angle. </p>
<p>I've tried the imagenet schema, but it looks like it didn't work well: after about 3000 iterations the accuracy was down to 0. So I am wondering, is there a practical guide on how to tune the schema?</p>
|
[
{
"AnswerId": "29223230",
"CreationDate": "2015-03-24T01:02:59.893",
"ParentId": null,
"OwnerUserId": "2484466",
"Title": null,
"Body": "<p>You can delete the last layer in imagenet, add your own last layer with a different name(to fit the number of classes), specify it with a higher learning rate, and specify a lower overall learning rate. There does exist an official example here: <a href=\"http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html\" rel=\"nofollow\">http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html</a></p>\n\n<p>However, if the accuracy was 0 you should check the model parameters first, perhaps it's an overflow</p>\n"
}
] |
28,806,203 | 1 |
<lua><neural-network><deep-learning><torch>
|
2015-03-02T09:20:54.793
| null | 2,138,524 |
How to predict using model generated by Torch?
|
<p>I have executed the <a href="https://github.com/nicholas-leonard/dp/blob/master/examples/neuralnetwork_tutorial.lua" rel="nofollow">neuralnetwork_tutorial.lua</a>. Now that I have the model, I would like to test it with some of my own handwritten images. I have tried many ways: first by storing the weights, and now by storing the complete model using <a href="https://github.com/torch/torch7/blob/master/doc/serialization.md" rel="nofollow">torch save and load methods</a>.</p>
<p>However, now when I try to predict my own handwritten images (converted to a 28x28 DoubleTensor) using </p>
<pre></pre>
|
[
{
"AnswerId": "28814333",
"CreationDate": "2015-03-02T16:09:32.560",
"ParentId": null,
"OwnerUserId": "49985",
"Title": null,
"Body": "<p>You have two options. </p>\n\n<p>One. Use the encapsulated <a href=\"https://github.com/torch/nn/blob/master/doc/module.md#module\" rel=\"nofollow\">nn.Module</a> to forward your <a href=\"https://github.com/torch/torch7/blob/master/doc/tensor.md#tensor\" rel=\"nofollow\">torch.Tensor</a>:</p>\n\n<pre><code>mlp2 = mlp:toModule(datasource:trainSet():sub(1,2))\ninput = testImageTensor:view(1, 1, 32, 32)\noutput = mlp2:forward(input)\n</code></pre>\n\n<p>Two. Encapsulate your torch.Tensor into a <a href=\"http://dp.readthedocs.org/en/latest/view/index.html#dp.ImageView\" rel=\"nofollow\">dp.ImageView</a> and forward that through your <a href=\"http://dp.readthedocs.org/en/latest/model/index.html#dp.Model\" rel=\"nofollow\">dp.Model</a> :</p>\n\n<pre><code>inputView = dp.ImageView('bchw', testImageTensor:view(1, 1, 32, 32))\noutputView = mlp:forward(inputView, dp.Carry{nSample=1})\noutput = outputView:forward('b')\n</code></pre>\n"
}
] |
28,844,498 | 0 |
<machine-learning><anaconda><theano><intel-mkl><passage>
|
2015-03-04T00:14:38.263
| null | 3,718,836 |
mkl exception after installing anaconda accelerate
|
<p>When trying to train an RNN using Passage I get the following exception when running: model = RNN(layers=layers, cost='bce')</p>
<p>Exception: ('The following error happened while compiling the node', Dot22(Reshape{2}.0, Reshape{2}.0), '\n', 'Compilation failed (return status=1): ld: library not found for -lmkl. clang: error: linker command failed with exit code 1 (use -v to see invocation). ', '[Dot22(, )]')</p>
<p>This is a new issue since installing Anaconda Accelerate.</p>
<p>Appreciate any and all advice!</p>
|
[] |
28,848,133 | 1 |
<caffe>
|
2015-03-04T06:44:18.607
| 28,848,389 | 639,973 |
caffe: libglog.so.0 missing (error while loading shared libraries)
|
<p>I've installed caffe on a server a while ago, and back then it worked properly.</p>
<p>Now I'm following the LeNet MNIST tutorial again (<a href="http://caffe.berkeleyvision.org/gathered/examples/mnist.html" rel="nofollow">http://caffe.berkeleyvision.org/gathered/examples/mnist.html</a>), and running</p>
<pre></pre>
<p>returns </p>
<pre></pre>
<p>I've noticed that libglog.so.0 is not in /lib, which might be the reason for it, but I'm not allowed to copy that file into the /lib directory, since I'm not a root user.</p>
<p>Is there a workaround for this?</p>
|
[
{
"AnswerId": "28848389",
"CreationDate": "2015-03-04T07:02:10.050",
"ParentId": null,
"OwnerUserId": "391161",
"Title": null,
"Body": "<p>The easiest way to work around the lack of shared libraries in system directories is to use <code>LD_LIBRARY_PATH</code> with the directory where the shared library lives.</p>\n\n<p>Before running the the command that requires a library, run the following in the following the same shell.</p>\n\n<pre><code>export LD_LIBRARY_PATH=~/local/lib\n</code></pre>\n\n<p>You can also stick this in your <code>.bashrc</code> for convenience.</p>\n\n<p>An alternate solution is to use the following command line flag while compiling, but that requires mucking with other people's build scripts.</p>\n\n<pre><code> -Wl,-rpath,$(DEFAULT_LIB_INSTALL_PATH)\n</code></pre>\n"
}
] |
28,858,743 | 1 |
<theano>
|
2015-03-04T15:47:46.263
| null | 1,259,448 |
Theano, structured v.s. regular gradient
|
<p>I don't know whether SO is a good place to ask this question, but I can't find documentation about the difference between the structured gradient and the regular gradient in the Theano library. What are they?</p>
|
[
{
"AnswerId": "30510229",
"CreationDate": "2015-05-28T15:01:04.163",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>Do you mean with respect to sparse operations?</p>\n\n<p>As mentioned <a href=\"http://deeplearning.net/software/theano/extending/other_ops.html#sparse-gradient\" rel=\"nofollow\">here</a>.</p>\n\n<blockquote>\n <p>There are 2 types of gradients for sparse operations: normal gradient\n and structured gradient.</p>\n</blockquote>\n\n<p>The difference only applies to sparse matrices because with sparse matrices you may or may not care about the effect of the \"empty space\" in a sparse matrix on the gradients.</p>\n\n<p>More <a href=\"http://deeplearning.net/software/theano/sandbox/sparse.html#dot-vs-structureddot\" rel=\"nofollow\">here</a>.</p>\n"
}
] |
28,867,791 | 3 |
<ubuntu><boost><include><caffe>
|
2015-03-05T00:23:05.107
| null | 1,054,435 |
Building Caffe on Ubuntu: make can't find Boost's include files
|
<p>I am following <a href="http://caffe.berkeleyvision.org/installation.html" rel="noreferrer">these instructions</a> to install and build Caffe along with its dependencies. I built Boost and got this at the end:</p>
<pre></pre>
<p>When I run <code>make</code> in the caffe directory, I get this:</p>
<pre></pre>
<p>What do I need to modify for it to find the include (and lib) files? A specific environment variable? A setting in caffe's Makefile? Something else?</p>
|
[
{
"AnswerId": "43689645",
"CreationDate": "2017-04-28T22:21:09.837",
"ParentId": null,
"OwnerUserId": "4539629",
"Title": null,
"Body": "<p>This worked for me:</p>\n\n<pre><code>cd /usr/include/boost/thread\nsudo ln -s locks.hpp latch.hpp\n</code></pre>\n"
},
{
"AnswerId": "28952718",
"CreationDate": "2015-03-09T22:12:25.020",
"ParentId": null,
"OwnerUserId": "3646384",
"Title": null,
"Body": "<p>Just copy your boost folder that you have built (must be named as \"boost\") to your <code>/usr/local/include</code> or <code>/usr/include</code>. Then run <code>make all</code> again. </p>\n"
},
{
"AnswerId": "34915365",
"CreationDate": "2016-01-21T04:23:09.787",
"ParentId": null,
"OwnerUserId": "2351184",
"Title": null,
"Body": "<p>Probably you do not have <code>boost</code> installed on your machine</p>\n\n<pre><code> sudo apt-get install --no-install-recommends libboost-all-dev\n</code></pre>\n"
}
] |
28,877,653 | 1 |
<python><neural-network><theano><deep-learning>
|
2015-03-05T12:14:35.277
| null | 4,193,583 |
Using theano with single examples out of batch
|
<p>I'm currently using the machine learning library theano to produce a project which utilises deep learning techniques, to find an internal representation of the MNIST dataset.</p>
<p>So far my codebase is pretty much the same as this convolutional net tutorial <a href="http://deeplearning.net/tutorial/lenet.html" rel="nofollow">http://deeplearning.net/tutorial/lenet.html</a></p>
<p>My problem arises from the fact that I want to be able to select individual examples out of the test batches (or indeed my own handwritten characters for testing), but I can't seem to get theano to do this.</p>
<p>I have tried quite a few things (been stuck for 4+ days) but my latest attempt which I think is getting somewhere looks like this:</p>
<pre></pre>
<p>tData is a theano.shared variable which contains the test images, and layer3.y_pred is the output I'm interested in.</p>
<p>But I keep getting errors related to shapes, non-tensor types, and conversions. If anybody has experience with theano and knows what's going on under the hood, I would really appreciate some input.</p>
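<p>For context, here is a minimal sketch (with toy weights, not the tutorial's network) of compiling a single-example prediction function over a shared dataset using <code>givens</code> and a slice that keeps the batch dimension:</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

tData = theano.shared(np.random.randn(100, 784).astype(theano.config.floatX))
W = theano.shared(np.zeros((784, 10), dtype=theano.config.floatX))

index = T.lscalar('index')
x = T.matrix('x')
y_pred = T.argmax(T.dot(x, W), axis=1)  # toy linear "network"

predict_one = theano.function(
    [index], y_pred,
    givens={x: tData[index:index + 1]})  # the slice keeps the batch dimension

print(predict_one(0))
</code></pre>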
|
[
{
"AnswerId": "29837052",
"CreationDate": "2015-04-24T01:16:07.830",
"ParentId": null,
"OwnerUserId": "3098048",
"Title": null,
"Body": "<p>You can change <code>tData[index]</code> to <code>tData[index:index+1]</code>.</p>\n\n<p>If <code>tData</code> is an n by m matrix containing n samples, then theano is expecting training data with dimension (batch_size, m). In your case, <code>tData[index]</code> has dimension (m,) while <code>tData[index:index+1]</code> has dimension (1,m)</p>\n"
}
] |
28,912,683 | 1 |
<python><numpy><theano><deep-learning>
|
2015-03-07T07:56:54.990
| 28,919,818 | 844,373 |
Pass in matrix of images of variables sizes into Theano
|
<p>I'm trying to use Theano to do some recognition. All my images are different sizes, and I don't want to resize them because they're paintings, so they shouldn't be the same size. I was wondering how to pass a matrix of variable-sized images into the Theano function.</p>
<p>I'm under the impression that this is not possible with numpy. Is there an alternative?</p>
<pre></pre>
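<p>As a point of comparison, a small sketch of the usual numpy-side workaround, under the assumption that the images stay at their native sizes:</p>
<pre><code>import numpy as np

# Variable-sized images cannot form one rectangular ndarray; a plain
# Python list of 2-D arrays is the common workaround, processed one
# image at a time (or resized/cropped to a common shape for batching).
images = [np.random.randn(120, 80), np.random.randn(64, 64)]
features = [img.mean() for img in images]  # toy per-image computation
</code></pre>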
|
[
{
"AnswerId": "28919818",
"CreationDate": "2015-03-07T20:49:27.330",
"ParentId": null,
"OwnerUserId": "1461210",
"Title": null,
"Body": "<p>Unless I'm mistaken in my interpretation of your code, I don't think what you're trying to do makes sense.</p>\n\n<p>If I understand correctly, in <code>model()</code> you are computing a weighted sum over your image pixels using <code>dot(X, w)</code>, where I assume that <code>X</code> is an <code>(nimages, npixels)</code> array of image data, and <code>w</code> is a weight matrix with <em>fixed</em> dimensions <code>(784, 10)</code>.</p>\n\n<p>In order for that dot product to even be computable, <code>X.shape[1]</code> (the number of pixels in each of your input images) <em>must</em> be equal to <code>w.shape[0]</code>.</p>\n\n<p>If the sizes of your input images vary, how can you expect to learn a single weight matrix with fixed dimensions?</p>\n"
}
] |
28,954,294 | 1 |
<cuda><theano>
|
2015-03-10T00:40:06.090
| null | 2,004,857 |
Getting Theano to use the GPU and all CPU cores (at the same time)
|
<p>I managed to get Theano working with either GPU or multicore CPU on Ubuntu 14.04 by following <a href="http://deeplearning.net/software/theano/install.html" rel="nofollow">this tutorial</a>.</p>
<p>First I got multicore working (I could verify that in System Monitor).
Then, after adding the config below to .theanorc, I got GPU working:</p>
<pre></pre>
<p>I verified it by running the test from the tutorial and checking the execution times, and also by the log message when running my program: </p>
<blockquote>
<p>"Using gpu device 0: GeForce GT 525M"</p>
</blockquote>
<p>But as soon as GPU started working I wouldn't see multicore in System Monitor anymore. It uses just one core at 100% like before.</p>
<p>How can I use both? Is it even possible?</p>
|
[
{
"AnswerId": "28976870",
"CreationDate": "2015-03-11T01:06:30.300",
"ParentId": null,
"OwnerUserId": "4656401",
"Title": null,
"Body": "<p>You can't fully utilize both multicore and GPU at the same time. \nMaybe this can be impoved in the future.</p>\n"
}
] |
28,956,321 | 1 |
<gpu><restart><convolution><caffe>
|
2015-03-10T04:42:26.577
| 28,959,690 | 3,510,476 |
In CNN with caffe, Can I set up initial caffemodel?
|
<p>I was training a CNN using caffe;
however, the system was forcibly terminated.
I still have the caffemodel produced so far.</p>
<p>Can I resume training from the current caffemodel?</p>
<p>Thanks,</p>
|
[
{
"AnswerId": "28959690",
"CreationDate": "2015-03-10T09:06:52.487",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<p>Caffe supports resuming as explained <a href=\"http://caffe.berkeleyvision.org/gathered/examples/imagenet.html#resume-training\" rel=\"nofollow\">here</a>:</p>\n\n<blockquote>\n <p>We all experience times when the power goes out [...] Since we are snapshotting intermediate results during training, we will be able to resume from snapshots.</p>\n</blockquote>\n\n<p>This is available via the <code>--snapshot</code> option of the main caffe command-line tool, e.g:</p>\n\n<pre><code>./build/tools/caffe train [...] --snapshot=caffenet_train_10000.solverstate\n</code></pre>\n\n<p>As explained within the doc <code>caffenet_train_10000.solverstate</code> is:</p>\n\n<blockquote>\n <p>the solver state snapshot that stores all necessary information to recover the exact solver state.</p>\n</blockquote>\n\n<p>In particular you can find more precisions about how to configure snapshotting from the <a href=\"http://caffe.berkeleyvision.org/tutorial/solver.html#snapshotting-and-resuming\" rel=\"nofollow\">solver documentation</a>.</p>\n"
}
] |
28,958,117 | 1 |
<android><opencv><image-processing><object-detection><caffe>
|
2015-03-10T07:23:39.263
| null | 2,631,051 |
Not able to build caffe in android
|
<p>I'm trying to create an android app that can identify the object in an image and give its name as the result. I know the caffe library can be used for this, but I get an error when I run ./build.py.</p>
<p>command :</p>
<pre></pre>
<p>Error :</p>
<pre></pre>
|
[
{
"AnswerId": "30062929",
"CreationDate": "2015-05-05T20:44:41.370",
"ParentId": null,
"OwnerUserId": "4476156",
"Title": null,
"Body": "<p>Make sure that the clone is recursive to include the dependencies:</p>\n\n<blockquote>\n <p>*** caffe-android-lib Dependency:<br>\n Boost-for-Android<br>\n protobuf<br>\n Eigen</p>\n</blockquote>\n\n<p>E.g.:</p>\n\n<blockquote>\n <p>git clone --recursive <a href=\"https://github.com/sh1r0/caffe-android-lib.git\" rel=\"nofollow\">https://github.com/sh1r0/caffe-android-lib.git</a><br>\n cd caffe-android-lib<br>\n ./build.py $(NDK_PATH)</p>\n</blockquote>\n\n<p>Or have you tried to install the dependencies (especially protobuf) from source? </p>\n\n<p>Given the dependencies are correctly installed, then you will have a successful caffe-android build:</p>\n\n<p>E.g.:</p>\n\n<blockquote>\n <p><a href=\"https://gist.github.com/melvincabatuan/6b5e37444b77326ae7b3\" rel=\"nofollow\">https://gist.github.com/melvincabatuan/6b5e37444b77326ae7b3</a><br>\n ...updated 10980 targets...<br>\n Done!<br>\n ...<br>\n [armeabi-v7a] Install : libcaffe_jni.so => libs/armeabi-v7a/libcaffe_jni.so</p>\n</blockquote>\n"
}
] |
28,985,551 | 1 |
<python><ubuntu><numpy><cuda><caffe>
|
2015-03-11T11:33:24.063
| null | 3,786,150 |
Caffe installation in ubuntu 14.04
|
<p>I'm following <a href="https://github.com/BVLC/caffe/wiki/Ubuntu-14.04-ec2-instance" rel="nofollow">https://github.com/BVLC/caffe/wiki/Ubuntu-14.04-ec2-instance</a> for installing caffe on my machine.
But when I run the command, all I get is these errors:</p>
<pre></pre>
<p>I'm new to this and can't really figure out which package is missing.</p>
|
[
{
"AnswerId": "29021849",
"CreationDate": "2015-03-12T22:54:47.347",
"ParentId": null,
"OwnerUserId": "4665062",
"Title": null,
"Body": "<p>(This worked for an unmerged longjon caffe branch.)</p>\n\n<p>In\ncaffe/include/caffe/util/math_functions.hpp</p>\n\n<p>try changing</p>\n\n<pre><code>using std::signbit;\nDEFINE_CAFFE_CPU_UNARY_FUNC(sgnbit, y[i] = signbit(x[i])); \n</code></pre>\n\n<p>to</p>\n\n<pre><code>// using std::signbit;\nDEFINE_CAFFE_CPU_UNARY_FUNC(sgnbit, y[i] = std::signbit(x[i]));\n</code></pre>\n"
}
] |
29,023,614 | 1 |
<python><glibc><libc><theano>
|
2015-03-13T01:55:32.070
| 29,046,172 | 1,245,262 |
Problems with a local installation of libc
|
<p>I'm trying to run a Theano implementation of alexNet on some machines at work. When I first tried to run it I got the following error:</p>
<pre></pre>
<p>Normally, what I would do in this situation is use my package manager to update libc, but this is not an option for work/administrative reasons. So, what I did instead was to install a local version of libc and to change LD_LIBRARY_PATH in my .bashrc file to point to it. Now, however, I get the following error:</p>
<pre></pre>
<p>Oddly enough, I get a similar 'relocation error' when I just try to 'ls'.</p>
<p>Does anyone know how I can install a version of libc that will only be used by my Python interpreter, but not by everything else?</p>
<p>Note: The libstdc++ I'm using is a locally installed version. I installed it for the same reason that I'm trying to install a local version of libc. </p>
<p>...</p>
<p>OK, I've progressed a little further, but am still stuck. </p>
<p>If I return to my old LD_LIBRARY_PATH, in order to avoid errors with commands like 'ls', I can verify that the new (local) libc does, in fact, use the old (system) ld-linux-x86-64.so.2</p>
<pre></pre>
<p>So, I thought I would change LD_LIBRARY_PATH to look for the new (local) ld-linux-x86-64.so.2</p>
<pre></pre>
<p>So, my libc should see the new ld-linux-x86-64.so.2. I can verify that this so has the symbol _dl_starting_up:</p>
<pre></pre>
<p>But, when I try to run the alexNet implementation using LD_PRELOAD to guarantee the use of the right libc, I still get the same error:</p>
<pre></pre>
<p>Why is the new (local) ld-linux not overriding the old (system) ld-linux? Shouldn't my setting of LD_LIBRARY_PATH have taken care of that?</p>
<p>If I try to force the issue, by preloading ld-linux, I get a segmentation fault:</p>
<pre></pre>
<p>So, now I'm stuck. I don't know</p>
<ol>
<li><p>Why my setting of LD_LIBRARY_PATH didn't force my new (local) libc to use the new (local) ld-linux.</p></li>
<li><p>What I should do about the segmentation fault when I force libc to use the new (local) ld-linux.</p></li>
</ol>
|
[
{
"AnswerId": "29046172",
"CreationDate": "2015-03-14T06:35:31.273",
"ParentId": null,
"OwnerUserId": "50617",
"Title": null,
"Body": "<blockquote>\n <p>So, what I did instead was to install a local version of libc and to change LD_LIBRARY_PATH in my .bashrc file to point to it</p>\n</blockquote>\n\n<p>See <a href=\"https://stackoverflow.com/a/8658468/50617\">this answer</a> for why that can't work.</p>\n\n<blockquote>\n <p>Why my setting of LD_LIBRARY_PATH didn't force my new (local) libc to use the new (local) ld-linux.</p>\n</blockquote>\n\n<p>As above answer explains, <code>ld-linux</code> is <em>baked</em> into your executable at link time and can't be changed by <code>LD_LIBRARY_PATH</code>.</p>\n\n<blockquote>\n <p>What I should do about the segmentation fault when I force libc to use the new (local) ld-linux.</p>\n</blockquote>\n\n<p><code>LD_PRELOAD</code>ing <code>ld-linux</code> can <em>never</em> work: <code>LD_PRELOAD</code> is interpreted <em>by</em> <code>ld-linux</code>, so you are effectively forcing two separate <code>ld-linux</code>es into your process, and that greatly confuses both of them.</p>\n\n<p>So, what can you do?</p>\n\n<p>It is in fact possible to install two separate versions of glibc on a single host, we do that every day. Instructions <a href=\"https://stackoverflow.com/a/851229/50617\">here</a>.</p>\n"
}
] |
29,035,181 | 1 |
<python><list><function><matrix><theano>
|
2015-03-13T14:44:31.567
| null | 1,622,610 |
theano list of matrices as function argument
|
<p>Is it possible to define a function in theano that takes a list of matrices (scalars, vectors) as an input argument?</p>
<p>Here is a simple example:</p>
<pre></pre>
<p>What happens if list 'a' has not two but thousands of elements with different shapes?
The only thing that comes to my mind is to concatenate these elements into one large matrix and then proceed with that.
Isn't there a nicer solution?</p>
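<p>To make the concatenation idea concrete, here is a hedged sketch (toy shapes; it assumes all matrices share the same number of columns) of stacking the list once and slicing symbolically:</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

# Toy data: matrices with the same column count, different row counts
mats = [np.random.randn(n, 3) for n in (2, 5, 4)]
big = np.concatenate(mats, axis=0)                  # one large matrix
offsets = np.cumsum([0] + [m.shape[0] for m in mats])

X = T.dmatrix('X')
start, stop = T.lscalars('start', 'stop')
row_sum = X[start:stop].sum()                       # operate on one original matrix
f = theano.function([X, start, stop], row_sum)

print(f(big, offsets[1], offsets[2]))               # sum of the second matrix
</code></pre>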
|
[
{
"AnswerId": "29036741",
"CreationDate": "2015-03-13T15:57:48.457",
"ParentId": null,
"OwnerUserId": "4652942",
"Title": null,
"Body": "<p>Maybe you could try something like this:</p>\n\n<pre><code>def my_func(*args): \n for list in args:\n # stuff here\n\n# ln being your lists.\nmy_func(l1, l2, l3...)\n</code></pre>\n\n<p>What happens there is that when you use *args in a function definition it means you can pass any number of positional arguments.</p>\n"
}
] |
29,044,706 | 2 |
<neural-network><gpu><theano><conv-neural-network>
|
2015-03-14T02:13:50.243
| null | 3,568,055 |
How to speed up GPU mode convolutional neural network with theano?
|
<p>I'm using theano to implement a convolutional neural network. My CPU RAM is 32G and my GPU RAM is 2G, but the data is also very big -- almost 5G of training data.</p>
<p>When the program is running, the computer seems frozen; every operation is really slow and sometimes doesn't respond at all. And the CPU mode seems to be at least 2x faster than GPU mode.</p>
<p>Is there any way to speed up the GPU convolutional neural network?</p>
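<p>One quick check worth doing first: the old GPU backend only supported float32, and float64 graphs silently fell back to the CPU, which can make "GPU mode" slower than the CPU. A minimal sketch of the sanity check:</p>
<pre><code>import theano

# Sanity checks: the device should be the GPU and floatX should be
# 'float32'; float64 ops ran on the CPU with the old CUDA backend.
print(theano.config.device)
print(theano.config.floatX)
</code></pre>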
|
[
{
"AnswerId": "29529863",
"CreationDate": "2015-04-09T04:36:45.057",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>Make sure to use Theano 0.7 with cudnn, this speed up convolution heavily:</p>\n\n<p><a href=\"http://deeplearning.net/software/theano/library/sandbox/cuda/dnn.html\" rel=\"nofollow\">http://deeplearning.net/software/theano/library/sandbox/cuda/dnn.html</a></p>\n"
},
{
"AnswerId": "37107540",
"CreationDate": "2016-05-09T03:58:42.723",
"ParentId": null,
"OwnerUserId": "4326150",
"Title": null,
"Body": "<p>In order to use GPU accelleration first thing you need to install CUDA.\nOn the level of Theano configuration(Theano flags/TheanoRC) there are few ways you can speed-up your model with GPU:</p>\n\n<ol>\n<li>Specify usage of GPU \"device = gpu\"</li>\n<li>Enable Cuda memory allocation (CnMem) \"cnmem = 0.75\"</li>\n<li>Enable CUDNN optimization \"optimizer = cudnn\"</li>\n</ol>\n\n<p>You can read more about Theano config <a href=\"http://deeplearning.net/software/theano/library/config.html#envvar-THEANO_FLAGS\" rel=\"nofollow\">here</a> </p>\n"
}
] |
29,055,715 | 1 |
<python><ubuntu-12.04><pydot><caffe>
|
2015-03-14T23:59:34.450
| 29,173,396 | 1,726,140 |
Drawing network in Caffe causes pydot to throw End of Line errors
|
<p>So I just pulled the latest revision of Caffe from the master branch, and went through all the initialization steps. As a quick test, I was trying to run the script provided, in order to visualize the MNIST Autoencoder example network.
On executing the following command:</p>
<p></p>
<p>Pydot complained, and threw the following error:</p>
<pre></pre>
<p>I see many more messages like the ones shown above, and my error log was getting too big, so I didn't post the entire log.
<a href="https://stackoverflow.com/questions/24442094/pydot-not-playing-well-with-line-breaks">This post</a>, seems to be seeing the same error as me, so I tried to replicate their solution, and changed all the strings in the method in to raw strings. But that didn't seem to work. </p>
<p>Any suggestions on how I can sort this issue?</p>
<p>Thanks!! :)</p>
|
[
{
"AnswerId": "29173396",
"CreationDate": "2015-03-20T18:25:59.227",
"ParentId": null,
"OwnerUserId": "4695219",
"Title": null,
"Body": "<p>I think the key is in the <em>determine_node_label_by_layertype</em> function. This is a block of code that should look something like this (or at least it does in my current version of the repository):</p>\n\n<pre><code>def determine_node_label_by_layertype(layer, layertype, rankdir):\n\"\"\"Define node label based on layer type\n\"\"\"\n\n if rankdir in ('TB', 'BT'):\n # If graph orientation is vertical, horizontal space is free and\n # vertical space is not; separate words with spaces\n separator = ' '\n else:\n # If graph orientation is horizontal, vertical space is free and\n # horizontal space is not; separate words with newlines\n separator = '\\n'\n</code></pre>\n\n<p>Replace the <code>separater = '\\n'</code> with <code>separater = r\"\\n\"</code> and it seemed to work for me.</p>\n"
}
] |
29,102,165 | 2 |
<neural-network><theano><deep-learning><lstm>
|
2015-03-17T14:45:29.613
| null | 1,870,498 |
How to perform multi-label learning with LSTM using theano?
|
<p>I have some text data with multiple labels for each document. I want to train an LSTM network using Theano for this dataset. I came across <a href="http://deeplearning.net/tutorial/lstm.html" rel="noreferrer">http://deeplearning.net/tutorial/lstm.html</a> but it only facilitates a binary classification task. If anyone has any suggestions on which method to proceed with, that would be great. I just need an initial feasible direction I can work on.</p>
<p>thanks,
Amit</p>
|
[
{
"AnswerId": "34081214",
"CreationDate": "2015-12-04T05:02:32.193",
"ParentId": null,
"OwnerUserId": "395857",
"Title": null,
"Body": "<p>1) Change the last layer of the model. I.e.</p>\n\n<pre><code>pred = tensor.nnet.softmax(tensor.dot(proj, tparams['U']) + tparams['b'])\n</code></pre>\n\n<p>should be replaced by some other layer, e.g. sigmoid:</p>\n\n<pre><code>pred = tensor.nnet.sigmoid(tensor.dot(proj, tparams['U']) + tparams['b'])\n</code></pre>\n\n<p>2) The cost should also be changed. </p>\n\n<p>I.e.</p>\n\n<pre><code>cost = -tensor.log(pred[tensor.arange(n_samples), y] + off).mean()\n</code></pre>\n\n<p>should be replaced by some other cost, e.g. cross-entropy:</p>\n\n<pre><code>one = np.float32(1.0)\npred = T.clip(pred, 0.0001, 0.9999) # don't piss off the log\ncost = -T.sum(y * T.log(pred) + (one - y) * T.log(one - pred), axis=1) # Sum over all labels\ncost = T.mean(cost, axis=0) # Compute mean over samples\n</code></pre>\n\n<p>3) In the function <code>build_model(tparams, options)</code>, you should replace:</p>\n\n<pre><code>y = tensor.vector('y', dtype='int64')\n</code></pre>\n\n<p>by </p>\n\n<pre><code>y = tensor.matrix('y', dtype='int64') # Each row of y is one sample's label e.g. [1 0 0 1 0]. sklearn.preprocessing.MultiLabelBinarizer() may be handy.\n</code></pre>\n\n<p>4) Change <code>pred_error()</code> so that it supports multilabel (e.g. using some metrics like accuracy or F1 score from scikit-learn).</p>\n"
},
{
"AnswerId": "29529916",
"CreationDate": "2015-04-09T04:41:04.307",
"ParentId": null,
"OwnerUserId": "949321",
"Title": null,
"Body": "<p>You can change the last layer of the model. It would have a vector of target where each element is 0 or 1, depending if you have the target or not.</p>\n"
}
] |
29,109,991 | 1 |
<caffe>
|
2015-03-17T21:24:02.900
| null | 3,268,282 |
Issues compiling caffe in RHEL-6.5 with Tesla K80
|
<p>When I try to compile caffe on my RHEL machine I get the errors below.
I followed the instructions from this link: <a href="http://caffe.berkeleyvision.org/installation.html" rel="nofollow">http://caffe.berkeleyvision.org/installation.html</a> but it does not seem to be working.
Any help? Thanks</p>
<pre></pre>
|
[
{
"AnswerId": "29206662",
"CreationDate": "2015-03-23T09:19:54.873",
"ParentId": null,
"OwnerUserId": "4702363",
"Title": null,
"Body": "<p>Proper opencv not installed. Even though you yum'ed opencv, it could be too low version. Check /usr/local/include/opencv2 if there's core/core.hpp as the warning says.\nDownload from <a href=\"http://opencv.org/downloads.html\" rel=\"nofollow\">http://opencv.org/downloads.html</a> something like 2.4.10 and install it. </p>\n"
}
] |
29,124,840 | 4 |
<python><deep-learning><caffe>
|
2015-03-18T14:32:37.767
| 31,391,076 | 1,452,257 |
Prediction in Caffe - Exception: Input blob arguments do not match net inputs
|
<p>I'm using Caffe for classifying non-image data using a quite simple CNN structure. I've had no problems training my network on my HDF5-data with dimensions n x 1 x 156 x 12. However, I'm having difficulties classifying new data.</p>
<p>How do I do a simple forward pass without any preprocessing? My data has been normalized and has the correct dimensions for Caffe (it's already been used to train the net). Below is my code and the CNN structure.</p>
<p><strong>EDIT:</strong> I've isolated the problem to the function '_Net_forward' in pycaffe.py and found that the issue arises as the self.input dict is empty. Can anyone explain why that is? The set is supposed to be equal to the set coming from the new test data:</p>
<pre></pre>
<p>My code has changed a bit as I now use the IO methods for converting the data into datum (see below). In that way I've filled the kwargs variable with the correct data.</p>
<p>Even small hints would be greatly appreciated!</p>
<pre></pre>
<p><strong>CNN Prototext</strong></p>
<pre></pre>
|
[
{
"AnswerId": "31391076",
"CreationDate": "2015-07-13T18:44:38.853",
"ParentId": null,
"OwnerUserId": "395857",
"Title": null,
"Body": "<p>Here is <a href=\"https://groups.google.com/forum/#!topic/caffe-users/aojN_bmbg74\">the answer from Evan Shelhamer I got on the Caffe Google Groups</a>:</p>\n\n<blockquote>\n <p><code>self._inputs</code> is indeed for the manual or \"deploy\" inputs as defined\n by the input fields in a prototxt. To run a net with data layers in\n through pycaffe, just call <code>net.forward()</code> without arguments. No need\n to change the definition of your train or test nets.</p>\n \n <p>See for instance code cell [10] of the <a href=\"http://nbviewer.ipython.org/github/BVLC/caffe/blob/tutorial/examples/01-learning-lenet.ipynb\">Python LeNet example</a>.</p>\n</blockquote>\n\n<p>In fact I think it's clearer in the <a href=\"https://github.com/BVLC/caffe/blob/master/examples/00-classification.ipynb\">Instant Recognition with Caffe tutorial</a>, cell 6:</p>\n\n<pre><code># Feed in the image (with some preprocessing) and classify with a forward pass.\nnet.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image(caffe_root + 'examples/images/cat.jpg'))\nout = net.forward()\nprint(\"Predicted class is #{}.\".format(out['prob'].argmax()))\n</code></pre>\n\n<p>In other words, to generate the predicted outputs as well as their probabilities using pycaffe, once you have trained your model, you have to first feed the data layer with your input, then perform a forward pass with <code>net.forward()</code>.</p>\n\n<hr>\n\n<p>Alternatively, as pointed out in other answers, you can use a deploy prototxt that is similar to the one you use to define the trained network but removing the input and output layers, and add the following at the beginning (obviously adapting according to your input dimension):</p>\n\n<pre><code>name: \"your_net\"\ninput: \"data\"\ninput_dim: 1\ninput_dim: 1\ninput_dim: 1\ninput_dim: 250\n</code></pre>\n\n<p>That's what they use in the <a href=\"https://github.com/BVLC/caffe/blob/master/examples/cifar10/cifar10_quick.prototxt\">CIFAR10 tutorial</a>.</p>\n\n<p><sup>(pycaffe really ought to be better documented…)</sup></p>\n"
},
{
"AnswerId": "31360213",
"CreationDate": "2015-07-11T18:04:23.697",
"ParentId": null,
"OwnerUserId": "2434201",
"Title": null,
"Body": "<p>I have exactly the same problem. This is what fixed it. </p>\n\n<p>First, take the same prototext file as you used to train, remove the two data layers.</p>\n\n<p>Then add the block as Mark's above</p>\n\n<pre><code>name: \"Name_of_your_net\"\ninput: \"data\"\ninput_dim: 64 \ninput_dim: 1\ninput_dim: 28\ninput_dim: 28\n</code></pre>\n\n<p>where my input_dim are for mnist, change them to your dim. </p>\n\n<p>Everything works.</p>\n"
},
{
"AnswerId": "29223315",
"CreationDate": "2015-03-24T01:12:48.940",
"ParentId": null,
"OwnerUserId": "2484466",
"Title": null,
"Body": "<p>Only due to my own experimental experience, it's not a very good idea to specify train and test net in one file using {PHASE} clause. I got many weird errors when I used net file like that, but when I used older version of net files which contain two files separately, train and test, it worked. However I was using caffe version in Nov 2014, perhaps there's some bug or compatible issues there.</p>\n\n<p>Well, when the model is used for prediction, shouldn't there be a deploy file specifying the net structure? If you look at ImageNet you should find imagenet_deploy.prototxt there. Although deploy file is similar to train/test file, I heard it's a bit different due to some fillers. I don't know if it's the problem, but any discussion is welcome, I need to learn new caffe schema if there exist too</p>\n"
},
{
"AnswerId": "29423913",
"CreationDate": "2015-04-02T23:03:17.247",
"ParentId": null,
"OwnerUserId": "723090",
"Title": null,
"Body": "<pre><code>Even small hints would be greatly appreciated!\n</code></pre>\n\n<p>I am stuck too so not much help, sorry. Might want to skip to the end.</p>\n\n<p><code>net.inputs</code> is a @property function which supposedly generated the names of the input layer(s).</p>\n\n<pre><code>@property\ndef _Net_inputs(self):\n return [list(self.blobs.keys())[i] for i in self._inputs]\n</code></pre>\n\n<p>Where <code>list(self.blobs.keys())</code> for you would be</p>\n\n<pre><code>['data', 'feature_conv', 'conv1', 'pool1', 'conv2', 'fc1', 'accuracy', 'loss']\n</code></pre>\n\n<p>Since <code>inputs</code> has to match <code>kwargs.keys() = ['data']</code> we can conclude that <code>net._inputs</code> should have been <code>[0]</code>. Somehow.</p>\n\n<p>Since <code>_inputs</code> isn't used anywhere else in <code>pycaffe.py</code> I have a look at <code>_caffe.cpp</code>. Around line 222 it says </p>\n\n<pre><code>.add_property(\"_inputs\", p::make_function(&Net<Dtype>::input_blob_indices,\n bp::return_value_policy<bp::copy_const_reference>()))\n</code></pre>\n\n<p>So <code>_inputs</code> are the <code>input_blob_indices</code> and it makes sense that these should be <code>[0]</code> for your network.</p>\n\n<p><code>input_blob_indices</code> in turn is simply a function that returns <code>net_input_blob_indices_</code> in <code>include/caffe/net.hpp</code></p>\n\n<pre><code>inline const vector<int>& input_blob_indices() const { return net_input_blob_indices_; }\n</code></pre>\n\n<p>...which is only used in <code>src/caffe/net.cpp</code>, but I can't find it being defined or assigned anywhere.</p>\n\n<p>I have tried with <code>type: Data</code> and <code>type: MemoryData</code> but that doesn't make a difference. What does work is using</p>\n\n<pre><code>input: \"data\"\ninput_dim: 1\ninput_dim: 3\ninput_dim: 227\ninput_dim: 227\n</code></pre>\n\n<p>...instead of a layer. In that case <code>net._inputs = [0]</code> and <code>net.inputs = ['data']</code> (actually <code>net._inputs</code> is a <code>caffe._caffe.IntVec object</code> but <code>list(net._inputs) = [0]</code>).</p>\n\n<p><strong>TLDR</strong>: It is starting to look a lot like a bug so I submitted it: <a href=\"https://github.com/BVLC/caffe/issues/2246\" rel=\"nofollow\">https://github.com/BVLC/caffe/issues/2246</a></p>\n\n<p>P.s. it seems like you are converting ndarray to datum and then back again. Does this have a purpose?</p>\n"
}
] |
29,154,372 | 0 |
<python-2.7><theano>
|
2015-03-19T20:29:10.687
| null | 3,508,639 |
Find minimum N elements in theano
|
<p>I've got a theano function which computes euclidean distances for 2 matrices— () and (). The result is an matrix of pairwise distances of each vector (or row) in from each vector (or row) in .</p>
<pre></pre>
<p>Let's say I change the above function to accept a single vector, an array of vectors, and the number of smallest distances. What I want is a theano function that will find the N smallest distances, similar to below:</p>
<pre></pre>
<p>I'd like to avoid explicitly sorting all the distances, since the number that I want will generally be much much smaller than the total number of distances.</p>
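<p>For reference, since Theano has no argpartition-style op (as far as I know), one hedged workaround is a full <code>argsort</code> followed by a slice; it does more work than strictly needed, but it is simple and stays symbolic:</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

dists = T.dvector('dists')
n = T.iscalar('n')
idx = T.argsort(dists)[:n]   # indices of the n smallest distances
smallest = dists[idx]
f = theano.function([dists, n], [idx, smallest])

print(f(np.array([4.0, 0.5, 2.0, 1.0]), 2))  # -> [1, 3] and [0.5, 1.0]
</code></pre>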
|
[] |
29,158,732 | 2 |
<deep-learning><caffe>
|
2015-03-20T03:08:22.057
| 29,208,039 | 639,973 |
functional concept of caffe
|
<p>I'm a little lost as to the concept of caffe.</p>
<p>Is it for unsupervised feature extraction, for example, by feeding in a lot of images without labels?</p>
<p>Or is it a classifier when the inputs are set of values for certain fixed feature dimension?</p>
|
[
{
"AnswerId": "29346744",
"CreationDate": "2015-03-30T12:52:07.073",
"ParentId": null,
"OwnerUserId": "4634250",
"Title": null,
"Body": "<p>Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors.</p>\n\n<p>I'd advice to have look in this link, where some useful doc., codes <a href=\"http://caffe.berkeleyvision.org/\" rel=\"nofollow\">Caffe Algor.</a>and examples are found: </p>\n"
},
{
"AnswerId": "29208039",
"CreationDate": "2015-03-23T10:33:49.883",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p>Caffe is a supervised learning algorithm which extracts the features on a fixed MxN dimensional image. The labels of these images are to be passed through during the training phase. Special care is to be taken to select the training input such that objects of two classes may not be present in the same image.</p>\n"
}
] |
29,164,552 | 1 |
<theano><autoencoder>
|
2015-03-20T10:37:19.930
| null | 4,082,092 |
Implementing a saturating autoencoder in theano
|
<p>I am trying to implement an autoencoder, using the regularization method described in this paper: <a href="http://yann.lecun.com/exdb/publis/pdf/goroshin-lecun-iclr-13.pdf" rel="nofollow">"Saturating Auto-Encoders", Goroshin et al., 2013</a></p>
<p>Essentially, this tries to minimize the difference between the output of the hidden layer, and the flat portions of the non-linear function being used to compute the hidden layer output.</p>
<p>Assuming we are using a step function as the nonlinearity, with the step being at 0.5, a simple implementation might be:</p>
<pre></pre>
<p>Then, the regularization cost can be simply:</p>
<pre></pre>
<p>I am trying to implement this functionality in Theano. Have started off with the <a href="http://deeplearning.net/tutorial/dA.html" rel="nofollow">denoising autoencoder code</a> available at the Theano website. Have made some basic modifications to it:</p>
<pre></pre>
<p>The above loss functions puts an L1 penalty on the hidden layer output, which should (hopefully) drive most of them to 0. In place of this simple L1 penalty, I want to use the saturating penalty as given above.</p>
<p>Any idea how to do this? Where do I compute the y_prime? How to do it symbolically?</p>
<p>I am a newbie to Theano, and still catching up with the symbolic computation part.</p>
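<p>For what it's worth, one reading of the paper's penalty for a unit step at 0.5 is the distance of each activation to the nearest flat portion (0 or 1); a minimal symbolic sketch of that interpretation (the exact form here is my assumption):</p>
<pre><code>import theano.tensor as T

y = T.dmatrix('y')  # hidden activations, assumed to lie in [0, 1]
# Distance to the nearest flat region of the step nonlinearity
sat_penalty = T.minimum(abs(y), abs(y - 1.0)).sum()
</code></pre>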
|
[
{
"AnswerId": "29964326",
"CreationDate": "2015-04-30T09:44:46.570",
"ParentId": null,
"OwnerUserId": "1563927",
"Title": null,
"Body": "<p>The nonlinearities in the paper are applied during coding, i.e. during calculation of the hidden values. Therefore, given your code example, they should be applied inside the <code>get_hidden_values()</code> function (NOT in the <code>get_cost_updates()</code> function). They should be the last piece of processing that <code>get_hidden_values()</code> does before returning.</p>\n\n<p>Also, don't use <code>numpy.abs</code> in your symbolic expression because that demands that numpy does the calculation. Instead you want Theano to do it, so just use <code>abs</code> and I think it should work as needed.</p>\n"
}
] |
29,164,812 | 1 |
<lua><machine-learning><cluster-analysis><torch>
|
2015-03-20T10:51:35.927
| null | 249,001 |
Clustering in Torch
|
<p>I am trying to learn the <a href="http://torch.ch/" rel="nofollow">Torch</a> library for machine learning.</p>
<p>I know that the focus of Torch is neural networks, but just for the sake of it I was trying to run kmeans on it. If nothing else, Torch implements fast contiguous storage which should be analogous to numpy arrays, and the <a href="https://github.com/torch/torch7/wiki/Cheatsheet" rel="nofollow">Torch cheatsheet</a> cites the <a href="https://github.com/koraykv/unsup" rel="nofollow">unsup</a> library for unsupervised learning, so why not?</p>
<p>I already have <a href="https://github.com/andreaferretti/kmeans" rel="nofollow">a benchmark</a> that I use for K-means implementations. Even though all the implementations there intentionally use an unoptimized algorithm (the README explains why), LuaJIT is able to cluster 100000 points in 611 ms. An optimized (or shall I say, not intentionally slowed down) implementation in Nim (not in the repository) runs in 68 ms, so I was expecting something in-between.</p>
<p>Unfortunately, things are much worse, so I suspect I am doing something awfully wrong. What I have written is</p>
<pre></pre>
<p>and the running time is around 6 seconds!</p>
<p>Can anyone check if I have done something wrong in using Torch/unsup?</p>
<p>If anyone wants to try it, the file is in the above repository.</p>
|
[
{
"AnswerId": "29182271",
"CreationDate": "2015-03-21T11:38:54.360",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<blockquote>\n <p>Can anyone check if I have done something wrong in using Torch/unsup?</p>\n</blockquote>\n\n<p>Everything sounds correct (note: using <code>local</code> variables is recommended):</p>\n\n<ul>\n<li><code>data</code> is a 2-dimensional table and you use the corresponding <a href=\"https://github.com/torch/torch7/blob/3585ab5/doc/tensor.md#torchtensortable\" rel=\"nofollow\">Torch constructor</a>,</li>\n<li><code>points</code> is a 2-dimensional tensor with nb. rows = nb. of points and nb. cols = points dimension (2 here). This is what <code>unsup.kmeans</code> <a href=\"https://github.com/koraykv/unsup/blob/605f777/kmeans.lua#L4\" rel=\"nofollow\">expects as input</a>.</li>\n</ul>\n\n<p>What you can do is change the batch size (4th argument). It may impact the performance. You can also use the verbose mode that will output the average time per iteration:</p>\n\n<pre><code>-- batch size = 5000, no callback, verbose mode\ncentroids, counts = unsup.kmeans(points, 10, 15, 5000, nil, true)\n</code></pre>\n"
}
] |
29,173,761 | 1 |
<matlab><neural-network><caffe>
|
2015-03-20T18:46:24.210
| 29,207,862 | 2,191,652 |
How to extract image features using caffe matlab wrapper?
|
<p>Does anyone use the MATLAB wrapper for the Caffe framework? Is there a way to extract a 4096-dimensional feature vector from an image?
I was already following</p>
<p><a href="https://github.com/BVLC/caffe/issues/432" rel="nofollow">https://github.com/BVLC/caffe/issues/432</a></p>
<p>and also tried to remove the last lines in imagenet_deploy.prototxt to remove layers as suggested in another forum on github.</p>
<p>But still, when I run "matcaffe_demo(im, 1)", I only get a 1000-dimensional vector of scores (for the ImageNet classes).
Any help would be appreciated.
Kind regards</p>
|
[
{
"AnswerId": "29207862",
"CreationDate": "2015-03-23T10:25:08.457",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p>It seems that you might not be calling the correct prototxt file. If the last layer defined in the prototxt has the top blob of 4096 dimension, there is no way for the output to be 1000 dimension. </p>\n\n<p>To be sure, try creating a bug in the prototxt file and see whether the program crashes. If it doesn't then the program indeed is reading some other prototxt file.</p>\n"
}
] |
29,191,808 | 2 |
<caffe><gflags><glog>
|
2015-03-22T06:55:52.843
| null | 1,282,043 |
libgflags bad value error for caffe
|
<p>I've linked all the required libraries, and the caffe config ran smoothly. But when I make the library I get this error:</p>
<p>/usr/bin/ld: /usr/local/lib/libgflags.a(gflags.cc.o): relocation R_X86_64_32S against `std::basic_string<char, std::char_traits<char>, std::allocator<char> >::_Rep::_S_empty_rep_storage' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libgflags.a: could not read symbols: Bad value</p>
<p>I found a 'workaround' for this problem at the libgflags and glog troubleshooting websites:
<a href="https://code.google.com/p/google-glog/issues/detail?id=201" rel="nofollow">https://code.google.com/p/google-glog/issues/detail?id=201</a></p>
<p>But I tried them and they don't seem to work. Am I missing something? Maybe I haven't uncommented a line in my original Makefile.config file? (I am installing caffe on my laptop with no CUDA or parallel computing for now.)</p>
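<p>For reference, the workaround described in those threads amounts to rebuilding gflags with position-independent code, roughly like this (my understanding of the steps; paths are illustrative):</p>
<pre><code># rebuild gflags as a static library compiled with -fPIC
cd gflags-master
mkdir build && cd build
export CXXFLAGS="-fPIC" && cmake .. && make VERBOSE=1
sudo make install
</code></pre>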
|
[
{
"AnswerId": "29205434",
"CreationDate": "2015-03-23T07:56:24.920",
"ParentId": null,
"OwnerUserId": "773226",
"Title": null,
"Body": "<p>Try recompiling the gflags library with -fPIC compiler flag.</p>\n\n<p>Did the caffe work using gflags shared library instead of using the static one?</p>\n"
},
{
"AnswerId": "42661505",
"CreationDate": "2017-03-08T01:44:02.743",
"ParentId": null,
"OwnerUserId": "2509148",
"Title": null,
"Body": "<p>Try to choose 'BUILD SHARED LIBS' option when using Cmake</p>\n"
}
] |
29,206,224 | 0 |
<theano><pymc>
|
2015-03-23T06:56:08.620
| null | 4,701,788 |
How do I use a FreeRV as a theano function input in pymc3
|
<p>I am trying to use pymc3 to model my system: stochastic variables, of unknown distribution, which are related to some observed data by a function. I have written the function using theano; however, I am not sure how to input the stochastics into my theano code. This is what I have tried:</p>
<pre></pre>
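<p>In reduced form the pattern is this (all names and distributions here are illustrative):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T
import pymc3 as pm

# compiled forward model f: parameters -> predicted data
z = T.dvector('z')
f = theano.function([z], T.exp(z).cumsum())

observed = np.array([1.0, 2.5, 4.1])

with pm.Model() as model:
    z_rv = pm.Flat('z_rv', shape=3)  # stochastics of unknown distribution
    z_hat = f(z_rv)                  # passing the FreeRVs into the function
    pm.Normal('obs', mu=z_hat, sd=1.0, observed=observed)
</code></pre>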
<p>This returns an error at the z_hat line:</p>
<blockquote>
<p>ValueError: ('Bad input argument to theano function with name
"scan_pm_v.py:60" at index 1(0-based)', 'setting an array element
with a sequence.')</p>
</blockquote>
<p>I can set it to be a numpy object, which then gives the error</p>
<blockquote>
<p>AttributeError: ('Bad input argument to theano function with name
"scan_pm_v.py:60" at index 1(0-based)', "'float' object has no
attribute 'dtype'")</p>
</blockquote>
<p>at the same line. I've also tried making the input a theano tensor by adding the line:</p>
<pre></pre>
<p>Which then returns the error:</p>
<blockquote>
<p>TypeError: ('Bad input argument to theano function with name
"scan_pm_v.py:60" at index 1(0-based)', 'Expected an array-like
object, but found a Variable: maybe you are trying to call a function
on a (possibly shared) variable instead of a numeric array?')</p>
</blockquote>
|
[] |
29,209,935 | 1 |
<import><runtime><theano>
|
2015-03-23T12:13:26.507
| null | 4,690,152 |
How much time "import theano" takes to run?
|
<p>I am taking my first baby steps with Theano on Windows for deep learning experiments, and I'm surprised how long it takes <em>just to load</em> the library.</p>
<p>Here is the little test program:</p>
<pre></pre>
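<p>Essentially it just times the import, along these lines:</p>
<pre><code>import time

t0 = time.time()
import theano
print("import theano took %.1f seconds" % (time.time() - t0))
</code></pre>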
<p>results in my PyCharm console:</p>
<pre></pre>
<p>:-o</p>
<p>Is this normal behaviour? Should I reinstall everything?</p>
<p>Here is my configuration:</p>
<p>Windows 7. Python 2.7.8 in the "Anaconda" package. CUDA 6.5. GPU: Nvidia Quadro K2000M.</p>
<p>Here is the .theanorc file:</p>
<pre></pre>
|
[
{
"AnswerId": "30508871",
"CreationDate": "2015-05-28T14:03:35.823",
"ParentId": null,
"OwnerUserId": "127480",
"Title": null,
"Body": "<p>It generally takes a long time to <code>import theano</code> when configured to use a GPU. On my machine it takes 0.567 seconds when running on CPU and 13.3 seconds when running on GPU.</p>\n\n<p>This may be one reason why to develop on CPU initially and switch to GPU once you've got it running right. The GPU overhead, including the increased Theano startup time, needs to be considered in determining whether running on a GPU is actually worthwhile.</p>\n"
}
] |
29,213,108 | 1 |
<speech><caffe>
|
2015-03-23T14:40:35.877
| null | 1,010,037 |
CNN-based 1-D signal classification using Caffe
|
<p>I am looking for an easy and straightforward example of 1-D signal classification (such as a speech signal) based on CNNs, using Caffe.</p>
<p>From the <a href="http://caffe.berkeleyvision.org/" rel="nofollow">Caffe</a> website, it is possible to follow some examples and tutorials, which are image classification tasks. Instead, I am looking for an example and tutorial on 1-D signals.</p>
<p>Your answers are really appreciated. </p>
|
[
{
"AnswerId": "31135832",
"CreationDate": "2015-06-30T10:30:22.607",
"ParentId": null,
"OwnerUserId": "809993",
"Title": null,
"Body": "<p>Conceptually there's no meaningful difference between working with 1D data vs 2D data. You'd need a database where instead of 2D images you'll have 1D \"images\" of shape (channels: 1, height: 1, width: d), and make sure all kernels make use of <em>kernel_w</em> and <em>kernel_h</em> instead of <em>kernel_size</em> (which sets the kernel to a square shape).</p>\n\n<p>If you're looking for an example architecture you can use, there's this article you can follow for training a CNN on raw waveform speech data: <a href=\"http://www.cs.huji.ac.il/~ydidh/waveform.pdf\" rel=\"nofollow\">Speech Acoustic Modeling from Raw Multichannel Waveforms</a>.</p>\n\n<p>There's also <a href=\"https://github.com/BVLC/caffe/issues/1570\" rel=\"nofollow\">an open issue on Caffe's Github page</a> requesting an example for the speech domain, with more links to potential implementations you can look at.</p>\n"
}
] |
29,225,213 | 1 |
<python><theano>
|
2015-03-24T04:47:35.737
| null | 4,705,902 |
In theano, making the matrix of slices from a vector
|
<p>I want to do the same thing as in <a href="https://stackoverflow.com/questions/22159774/get-matrix-of-vectors-from-a-vector">get matrix of vectors from a vector</a>, but in Theano.</p>
<p>Maybe it can be done with scan(), but I don't know how scan() can be applied to this problem.</p>
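<p>In NumPy terms, the result I am after looks like this (illustrative):</p>
<pre><code>import numpy as np

x = np.arange(6)                   # [0 1 2 3 4 5]
idx = np.array([[0, 3], [1, 4]])   # one (start, stop) pair per row
slices = [x[i:j] for i, j in idx]  # [array([0, 1, 2]), array([1, 2, 3])]
</code></pre>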
<p>Following is the code for context.</p>
<pre></pre>
|
[
{
"AnswerId": "37475228",
"CreationDate": "2016-05-27T05:04:32.137",
"ParentId": null,
"OwnerUserId": "4705902",
"Title": null,
"Body": "<pre><code>import theano\nimport theano.tensor as T\n\nself.x = T.vector('x')\nself.i = T.imatrix('i') \n#indices tuple list. ex)[[0,3],[1,4]] means two slices (from 0 to 3 and from 1 to 4)\n\nresults, updates = theano.scan(lambda v:self.x[v[0]:v[1]], sequences=self.i)\nmake_slices = theano.function(inputs=[self.x,self.i], outputs=[results]) \n\nself.slices_list = make_slices(x,i)\n#'x' and 'i' are theano shared variables\n</code></pre>\n"
}
] |
29,248,639 | 1 |
<lua><neural-network><torch>
|
2015-03-25T06:12:18.017
| 29,257,879 | 3,468,673 |
torch7 neural network training error
|
<p>I'm trying to implement a neural network example in torch7. My data is stored in a text file in this form [19 cols x 10000 rows]:</p>
<pre></pre>
<p>With labels in the last column [100 labels].</p>
<p>With this code:</p>
<pre></pre>
<p>I got this error message:</p>
<pre></pre>
<p>Can you please help me ?</p>
<p>Thank you.</p>
|
[
{
"AnswerId": "29257879",
"CreationDate": "2015-03-25T14:01:55.420",
"ParentId": null,
"OwnerUserId": "1688185",
"Title": null,
"Body": "<blockquote>\n <p>I got this error message [...] Can you please help me?</p>\n</blockquote>\n\n<p>Within your dataset <code>input</code> and <code>output</code> should be <code>Tensor</code>-s (here <code>input</code> is a plain Lua table which is why you obtain this error, i.e there is no <code>dim</code> method).</p>\n\n<p>To simplify data loading I recommend you to use a <a href=\"http://luarocks.org/search?q=csv\" rel=\"nofollow\">csv parser</a>, e.g you can use <a href=\"https://github.com/willkurt/csv2tensor\" rel=\"nofollow\">csv2tensor</a> to load your data into a <code>Tensor</code>.</p>\n\n<p>First make sure to add a header (as first line) to your file like:</p>\n\n<pre><code>x001,x002,x003,x004,x005,x006,x007,x008,x009,x010,x011,x012,x013,x014,x015,x016,x017,x018,label\n</code></pre>\n\n<p>Then load your data as follow:</p>\n\n<pre><code>local csv2tensor = require 'csv2tensor'\n\nlocal inputs = csv2tensor.load(\"data.csv\", {exclude={\"label\"}})\nlocal labels = csv2tensor.load(\"data.csv\", {include={\"label\"}})\n\nlocal dataset = {}\n\nfor i=1,inputs:size(1) do\n dataset[i] = {inputs[i], torch.Tensor{labels[i]}}\nend\n\ndataset.size = function(self)\n return inputs:size(1)\nend\n</code></pre>\n\n<p>And use this dataset for training:</p>\n\n<pre><code>-- ...\ntrainer:train(dataset)\n</code></pre>\n"
}
] |
29,255,940 | 1 |
<python><theano>
|
2015-03-25T12:40:06.850
| 29,256,541 | 4,695,297 |
Updating a variable that has been casted with theano.tensor.cast()
|
<p>I am trying to update a theano variable in a function, simplified like this:</p>
<pre></pre>
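<p>A sketch of the pattern, with illustrative names:</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

a_float = theano.shared(np.zeros(10, dtype=theano.config.floatX))
a_variable = T.cast(a_float, 'int32')

# the update pair is rejected here, since a_variable is not a shared variable
f = theano.function([], [], updates=[(a_variable, a_variable + 1)])
</code></pre>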
<p>My problem is that I get the error</p>
<pre></pre>
<p>The way I get this variable is through the following (mostly copied from the deeplearning.net tutorials; the variable is initialized similarly):</p>
<pre></pre>
<p>which prints</p>
<pre></pre>
<p>that is, the variable is no longer "shared", which explains the error.
This makes sense, as I guess the variable is now simply a cast view of the original shared floats. But how can I efficiently update a variable that has been cast?</p>
|
[
{
"AnswerId": "29256541",
"CreationDate": "2015-03-25T13:06:26.567",
"ParentId": null,
"OwnerUserId": "4695297",
"Title": null,
"Body": "<p>I solved this myself, and the answer was of course the obvious one.</p>\n\n<p>Instead of overriding the <code>a_variable</code> variable with the casted version, I kept the uncasted version:</p>\n\n<pre><code>a_variable_casted = T.cast(a_variable, 'int32')\n</code></pre>\n\n<p>Updates are now done on <code>a_variable</code>, while <code>a_variable_casted</code> is used to perform the computations <code>a_variable</code> was used for earlier.</p>\n\n<p>There might obviously be a more elegant way to do this, in which case I'd love to hear it!</p>\n"
}
] |
29,274,823 | 1 |
<android><android-ndk><caffe>
|
2015-03-26T09:22:48.320
| null | 1,485,254 |
Android caffe built demo shows error
|
<p>Being new to <a href="http://caffe.berkeleyvision.org/" rel="nofollow">Android NDK Caffe</a>, I would like to use the built version in my Android project. I tried to run <a href="https://github.com/sh1r0/caffe-android-demo" rel="nofollow">this built sample demo</a>, but while running, it showed the following:</p>
<pre></pre>
<p>(the app crashed)</p>
|
[
{
"AnswerId": "37325904",
"CreationDate": "2016-05-19T14:13:14.090",
"ParentId": null,
"OwnerUserId": "2995941",
"Title": null,
"Body": "<p>I can see that the sigsev signal is thrown through android AsyncTask.\nThe problem could come from this function.</p>\n\n<pre><code>caffeMobile.predictImage(strings[0])[0]; //line 160 of MainActivity\n</code></pre>\n\n<p>This signal comes from JNI and it is very difficult to know where is the problem unless you can debug natively (through ndk) the app. The caffe-sample is not configured to debug on native method.</p>\n\n<p>Try this issues to manage the error: </p>\n\n<blockquote>\n <p>Ensure that your image path in this string[0] arrays are not empty. and exists.</p>\n \n <p>Ensure that the other caffeMobile functions are able to exec without\n problems, for example:</p>\n</blockquote>\n\n<pre><code> caffeMobile = new CaffeMobile();\n caffeMobile.setNumThreads(4);\n caffeMobile.loadModel(\"/sdcard/caffe_mobile/bvlc_reference_caffenet/deploy.prototxt\", \"/sdcard/caffe_mobile/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel\");\n</code></pre>\n\n<p>If you are able to execute the other functions, probably your image path is not correct, check. </p>\n\n<p>If you are not able to execute loadModel or setNumThreads function, probably the apk is not loading libjni.so library correctly , or the jni bridge is not able to locate jni functions.</p>\n"
}
] |
29,286,838 | 3 |
<cuda><ubuntu-14.04><deep-learning><caffe>
|
2015-03-26T19:04:52.660
| null | 4,717,939 |
Caffe Installation Issue on Ubuntu 14.04
|
<p>I successfully installed caffe on my dual-boot laptop (GTX 860M, Windows 7 + Ubuntu 14.04.2). All the tests passed successfully. When I <strong>restarted</strong>, however, Ubuntu got stuck on the opening screen (the one with the Ubuntu logo and <strong>five red dots</strong>). I don't know what to do with it.</p>
<p>Has anyone run into the same issue before? I reckon something is wrong with the graphics card driver at boot. I installed the newest CUDA 7 Toolkit with the NVIDIA drivers built in. Since all tests passed before I restarted, it seems that the driver would work once successfully booted.</p>
<p>The stuck screen looks like this: <a href="http://i.stack.imgur.com/pRtEF.jpg" rel="nofollow">http://i.stack.imgur.com/pRtEF.jpg</a></p>
|
[
{
"AnswerId": "30028621",
"CreationDate": "2015-05-04T11:09:11.810",
"ParentId": null,
"OwnerUserId": "4841248",
"Title": null,
"Body": "<p>I had a similar issue when trying to install Caffe on my system. The steps below worked for me, but it has at least one known issue (documented below).</p>\n\n<p>I'm not sure what precisely caused this problem, but it surely has something to do with the Nvidia Driver and Cuda Toolkit installation and is <strong>not caused by Caffe</strong>.</p>\n\n<p>After completing the steps below, I've been able to successfully install Caffe on my system with the following tutorials and guides:</p>\n\n<ul>\n<li><a href=\"http://caffe.berkeleyvision.org/install_apt.html\" rel=\"nofollow\">Official Install Guide</a></li>\n<li><a href=\"https://github.com/BVLC/caffe/wiki/Ubuntu-14.04-VirtualBox-VM\" rel=\"nofollow\">Github Install Guide</a></li>\n</ul>\n\n<hr>\n\n<h2>Update</h2>\n\n<p>Recently, I had the exact same problem trying to <strong>make Cuda 7.5 work on Ubuntu 14.04;</strong> this approach also solved that problem. Specs:</p>\n\n<ul>\n<li>CPU: Intel Core i7-4700MQ (4x 2.40 GHz with Hyperthreading)</li>\n<li>GPU: NVidia GT 940M</li>\n<li>RAM: 8 GB</li>\n<li>HDD: 52.7 GB (of which 6.7 GB used after installation)</li>\n</ul>\n\n<hr>\n\n<h1>INSTALL NVIDIA DRIVER AND CUDA ON UBUNTU 14.04</h1>\n\n<p>Source: <a href=\"http://ubuntuforums.org/showthread.php?t=2246526\" rel=\"nofollow\">ubuntuforums.org/showthread.php?t=2246526</a></p>\n\n<p><strong>!! Known Issues !!</strong></p>\n\n<ul>\n<li>After the system has been suspended (or hibernated, not confirmed), all applications using the Nvidia Driver and Cuda 6.5 Toolkit will freeze. When this happens, the command <code>sudo shutdown -r now</code> will print the reboot message but nothing will happen.</li>\n</ul>\n\n<p><strong>Executed and tested on a fresh 64-bit Ubuntu 14.04 install with the following hardware specifications:</strong></p>\n\n<ul>\n<li>CPU: Intel Core i5-2410m (2x 2.30 GHz with Hyperthreading)</li>\n<li>GPU: NVidia GT 540M</li>\n<li>RAM: 6 GB</li>\n<li>HDD: 52.7 GB (of which 8.6 GB used after installation)</li>\n</ul>\n\n<p><strong>The following command was executed before installation:</strong></p>\n\n<pre><code>sudo apt-get -y build-essential vim git llvm clang\n</code></pre>\n\n<p><strong>The following steps resulted in a stable system with the latest Nvidia Driver and Cuda 6.5 Toolkit installed:</strong></p>\n\n<ol>\n<li>Remove all traces of previous/legacy Nvidia Drivers and Cuda Toolkits or perform a fresh Ubuntu 14.04 install.</li>\n<li>Download the latest Nvidia Driver .run file for Ubuntu 14.04 and your system specifications to the <code>~/Downloads</code> directory.\ne.g.: <code>NVIDIA-Linux-x86_64-346.35.run</code></li>\n<li>Download the latest Cuda 6.5 Toolkit .run file for Ubuntu 14.04 and your system specifications to the <code>~/Downloads</code> directory.\ne.g.: <code>cuda_6.5.14_linux_64.run</code></li>\n<li><p>Blacklist the 'nouveau' Driver by appending the following lines to <code>/etc/modprobe.d/blacklist.conf</code> (nouveau is a free open-source driver for Nvidia cards, it is the default for Ubuntu 14.04):</p>\n\n<p><code>blacklist nouveau</code><br>\n<code>options nouveau modeset=0</code></p></li>\n<li><p>Reboot the system, do <strong>NOT</strong> log in but drop to the terminal with CTRL+ALT+F1</p></li>\n<li><p>Kill lightdm (replace 'lightdm' with your own Display Manager if you have changed it, lightdm is the default for Ubuntu 14.04):</p>\n\n<p><code>sudo service lightdm stop</code></p></li>\n</ol>\n\n<p><em><strong>The next step is critical, make sure to check twice before 
continuing!</strong></em></p>\n\n<ol start=\"7\">\n<li><p>Run the Nvidia Driver installer with the <code>--no-opengl-files</code> option (the option prevents OpenGL files from being overwritten; without this option, Unity would not function properly and the screen would freeze after login):</p>\n\n<p><code>sudo chmod +x ~/Downloads/NVIDIA-Linux-x68_64-346.35.run</code><br>\n<code>sudo ~/Downloads/NVIDIA-Linux-x68_64-346.35.run --no-opengl-files</code></p></li>\n<li><p>Accept the EULA and acknowledge all further warnings but <strong>deny</strong> to install anything extra.</p></li>\n<li><p>Reboot and login to the desktop, verify with the 'Additional Drivers' (System Settings > Software & Updates > Additional Drivers) utility that the manually installed driver is in use.</p></li>\n<li><p>Open a terminal and install the Cuda 6.5 Toolkit:</p>\n\n<p><code>sudo chmod +x ~/Downloads/cuda_6.5.14_linux_64.run</code><br>\n<code>sudo ~/Downloads/cuda_6.5.14_linux_64.run</code></p></li>\n<li><p>Accept the EULA, do <strong>NOT</strong> install the driver, install the Toolkit and the Examples (if you want to), leave all default directories in place.</p></li>\n<li><p>Add the Cuda 6.5 Toolkit environment variables by appending the following lines to <code>~/.bashrc</code>: </p>\n\n<p><code># For 32-bit systems, append these:</code><br>\n<code>export PATH=$PATH:/usr/local/cuda-6.5/bin</code><br>\n<code>export LD_LIBRARY_PATH=/usr/local/cuda-6.5/lib</code> </p>\n\n<p><code># For 64-bit systems, append these:</code><br>\n<code>export PATH=$PATH:/usr/local/cuda-6.5/bin</code><br>\n<code>export LD_LIBRARY_PATH=/usr/local/cuda-6.5/lib64</code> </p></li>\n</ol>\n\n<p>The Nvidia Driver and Cuda 6.5 Toolkit should now be correctly installed.</p>\n\n<p><strong>Optional:</strong> confirm your Nvidia Driver and Cuda 6.5 Toolkit installation.</p>\n\n<ol start=\"13\">\n<li><p>Confirm the Nvidia Driver installation by running the following command:</p>\n\n<p><code>nvidia-smi</code> </p></li>\n<li><p>Confirm the Cuda Compiler installation by running the following command:</p>\n\n<p><code>nvcc -V</code> </p></li>\n<li><p>Confirm everything works by building and running the optionally installed Cuda Examples: (build-essential is required to use 'make')</p>\n\n<p><code>sudo apt-get install -y build-essential</code><br>\n<code>cd ~/NVIDIA_CUDA-6.5_SAMPLES/1_Utilities/deviceQuery</code><br>\n<code>make</code><br>\n<code>./deviceQuery</code><br>\n<code>cd ~/NVIDIA_CUDA-6.5_SAMPLES/1_Utilities/bandwidthTest</code><br>\n<code>make</code><br>\n<code>./bandwidthTest</code> </p></li>\n</ol>\n"
},
{
"AnswerId": "29286892",
"CreationDate": "2015-03-26T19:07:54.530",
"ParentId": null,
"OwnerUserId": "987599",
"Title": null,
"Body": "<p>This problem is not related to caffe.</p>\n\n<p>The problem is that the nVidia driver that is installed from the ubuntu software center does not support your card.</p>\n\n<p>Uninstall any nvidia package (<code>sudo apt-get purge nvidia-*</code>) and install the latest driver version from the nvidia website.</p>\n"
},
{
"AnswerId": "35447115",
"CreationDate": "2016-02-17T03:04:58.767",
"ParentId": null,
"OwnerUserId": "5938204",
"Title": null,
"Body": "<p>I recommend you to change the cuda 7.5 ubuntu 15.04 version. I try it on the ubuntu 14.04, it solves this problem. And when I install cuda 7.5 ubuntu 14.04 version on ubuntu 14.04 I countered the exactly problem.</p>\n"
}
] |
29,328,031 | 1 |
<python><c++><machine-learning><deep-learning><caffe>
|
2015-03-29T10:30:47.060
| 30,236,484 | 3,966,933 |
Issues while interfacing caffe with c++ or python
|
<p>What I have read from the tutorials is that you create your data, then write the model using protobuf, and then you write the solver file. Finally you train the model and get your generated file. All this is done through the command line. Now there are two questions.</p>
<p>1) Suppose I have the generated model; now how do I load a new image (not in the test folder) and perform a forward pass? Should it be done through the command line or from some language (C++, Python)?</p>
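<p>For example, is something along these lines the intended way? (I am guessing at the Python interface and the file names here:)</p>
<pre><code>import caffe

net = caffe.Classifier('deploy.prototxt', 'myModel.caffemodel')
im = caffe.io.load_image('new_image.jpg')
scores = net.predict([im])  # forward pass -> class scores
</code></pre>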
<p>2) I guess the above was one way of doing it. What is the best way to train the classifier (command-line training or through code), and how do you use the generated model file (after training) in your code?</p>
<p>I want to interface caffe with my code, but I am not able to find a short step-by-step tutorial on any dataset, say MNIST. The model doesn't need to be as complicated as LeNet; a simple fully connected layer will also do. Can anyone tell me how to write simple code using C++ or Python and train any dataset from scratch?</p>
<p>A sample C++/python code for training a classifier and using it to predict new data using caffe would also be appreciated.</p>
|
[
{
"AnswerId": "30236484",
"CreationDate": "2015-05-14T11:41:42.417",
"ParentId": null,
"OwnerUserId": "1714410",
"Title": null,
"Body": "<p>Training is best done using the command line. See <a href=\"http://caffe.berkeleyvision.org/gathered/examples/mnist.html\" rel=\"nofollow\">this tutorial</a>.</p>\n\n<p>Once you trained a model and you have a <code>myModel.caffemodel</code> file (a binary file storing the wieghts of the different layers) and a <code>deploy.prototxt</code> file (a text file describing your net), you can use python interface to classify images.</p>\n\n<p>You can run python script <a href=\"https://github.com/BVLC/caffe/blob/master/python/classify.py\" rel=\"nofollow\"><code>classify.py</code></a> to classify image(s) from command line. This script wraps around <a href=\"https://github.com/BVLC/caffe/blob/master/python/caffe/classifier.py\" rel=\"nofollow\"><code>classifier.py</code></a> - a python object that holds a trained net and allows you to perform forward passes in python.</p>\n"
}
] |
29,334,961 | 1 |
<lua><neural-network><deep-learning><torch>
|
2015-03-29T21:19:08.813
| null | 1,082,019 |
Torch, error in StochasticGradient training
|
<p>I'm trying to create an autoencoder neural network in Torch, but I get an error when I try to run the training phase.</p>
<p>Here's my input data in data.csv:</p>
<pre></pre>
<p>Here's my code</p>
<pre></pre>
<p>And here's my error:</p>
<pre></pre>
<p>Do you have any idea why I'm getting these errors?</p>
|
[
{
"AnswerId": "29335566",
"CreationDate": "2015-03-29T22:16:44.063",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<pre><code>dataset[i] = {dnaseIsignals[i-1], dnaseIsignals[i-1]} \n</code></pre>\n\n<p>This is wrong. What you want to give as input to the neural network is a torch.Tensor, but in this case, you are initializing dataset's elements to be tables. Convert your input to a tensor, and you should be good.</p>\n"
}
] |
29,338,016 | 1 |
<python-2.7><ubuntu-12.04><theano>
|
2015-03-30T03:35:06.437
| null | 1,620,746 |
import theano gets Illegal instruction
|
<p>After installing the bleeding-edge version, and even after uninstalling Theano, I'm still getting "Illegal instruction" from "import theano". I'm on Ubuntu 12.04 (Precise).</p>
<pre></pre>
|
[
{
"AnswerId": "31951181",
"CreationDate": "2015-08-11T20:17:25.967",
"ParentId": null,
"OwnerUserId": "1115577",
"Title": null,
"Body": "<p>This can happen when you move from one computer to another with the same <code>.local</code> directory. The following (from <a href=\"https://groups.google.com/forum/#!topic/theano-users/T0521GHRC1w\" rel=\"noreferrer\">here</a>) worked for me:</p>\n\n<p>First delete <code>~/.theano</code> which stores some theano compiled files. Then reinstall theano via <code>pip uninstall theano; pip install --user theano</code>. It also fixes the gensim install for some reason (which shows the same error upon importing). Perhaps gensim imports theano when it can?</p>\n"
}
] |
29,354,357 | 5 |
<api><http><lua><torch>
|
2015-03-30T19:14:08.427
| 29,355,228 | 563,762 |
HTTP Server Using Lua/ Torch7
|
<p>I'm starting to learn Torch7 to get into the machine learning / deep learning field, and I'm finding it fascinating (and very complicated, haha). My main concern, however, is whether I can turn this learning into an application: mainly, can I turn my Torch7 Lua scripts into a server that an app can use to perform machine learning calculations? And if it's possible, how?</p>
<p>Thank you</p>
|
[
{
"AnswerId": "35467931",
"CreationDate": "2016-02-17T21:17:53.940",
"ParentId": null,
"OwnerUserId": "3932212",
"Title": null,
"Body": "<p>You can use <a href=\"https://github.com/benglard/waffle\" rel=\"noreferrer\">waffle</a>. Here's the hello world example on it's page:</p>\n\n<pre><code>local app = require('waffle')\n\napp.get('/', function(req, res)\n res.send('Hello World!')\nend)\n\napp.listen()\n</code></pre>\n\n<p>lets say your algorithm is a simple face detector. The input is an image and the output is the face detections in some json format. You can do the following:</p>\n\n<pre><code>local app = require('waffle')\nrequire 'graphicsmagick'\nrequire 'MyAlgorithm'\n\napp.post('/', function(req, res)\n local img, detections, outputJson\n img = req.form.image_file:toImage()\n detections = MyAlgorithm.detect(img:double())\n outputJson = {}\n if (detections ~= nil) then\n outputJson.faceInPicture = true\n outputJson.faceDetections = detections\n else\n outputJson.faceInPicture = false\n outputJson.faceDetections = nil\n end\n res.json(outputJson)\nend)\n\napp.listen()\n</code></pre>\n\n<p>That way your algorithm can be used as an independent service.</p>\n"
},
{
"AnswerId": "29355228",
"CreationDate": "2015-03-30T20:01:59.523",
"ParentId": null,
"OwnerUserId": "1442917",
"Title": null,
"Body": "<p>You should look at Torch as a library (even though you may be accessing it as a standalone executable). That library can be used from some Lua code that is accessible through HTTP. The Lua code may be running inside <a href=\"http://openresty.org/\" rel=\"nofollow\">OpenResty</a>, which would take care of all HTTP interactions, and you get the same performance as OpenResty can be configured to use LuaJIT.</p>\n\n<p>Another option is to use HTTP processing based on luasocket and copas libraries (for examples, <a href=\"http://keplerproject.github.io/xavante/\" rel=\"nofollow\">Xavante</a>) or use one of the options listed on <a href=\"http://lua-users.org/wiki/LuaWebserver\" rel=\"nofollow\">LuaWebserver</a> page.</p>\n"
},
{
"AnswerId": "29356295",
"CreationDate": "2015-03-30T21:06:49.887",
"ParentId": null,
"OwnerUserId": "117844",
"Title": null,
"Body": "<p>You can also use the <a href=\"https://github.com/clementfarabet/async\" rel=\"nofollow\">async</a> package that we've tested with torch.</p>\n"
},
{
"AnswerId": "30985705",
"CreationDate": "2015-06-22T16:59:16.197",
"ParentId": null,
"OwnerUserId": "2053898",
"Title": null,
"Body": "<p>Try llserver - minimalistic Lua server. Runs as single coroutine, serves dynamic content via callback function: <a href=\"https://github.com/ncp1402/llserver\" rel=\"nofollow\">https://github.com/ncp1402/llserver</a>\nYou can perform other tasks/calculations in additional coroutines.</p>\n"
},
{
"AnswerId": "38837950",
"CreationDate": "2016-08-08T20:27:42.667",
"ParentId": null,
"OwnerUserId": "213123",
"Title": null,
"Body": "<p>Both <a href=\"https://github.com/clementfarabet/async\" rel=\"nofollow\">async</a> and <a href=\"https://github.com/benglard/waffle\" rel=\"nofollow\">waffle</a> are great options. Another option is to use <a href=\"http://zeromq.org/bindings:lua\" rel=\"nofollow\">ZeroMQ</a> + <a href=\"https://github.com/djungelorm/protobuf-lua\" rel=\"nofollow\">Protocol Buffers</a>. Whatever your preferred web development environment is, you can send requests to Torch using ZeroMQ in asynchronous manner, possibly serializing messages with protocol buffers, then process each request in Torch and return the result back. </p>\n\n<p>This way I managed to get much higher throughput, than <a href=\"https://github.com/benglard/waffle\" rel=\"nofollow\">waffle</a>'s 20K fib test.</p>\n"
}
] |
29,365,370 | 3 |
<python><gradient><theano>
|
2015-03-31T09:37:56.733
| null | 1,215,787 |
How to code adagrad in python theano
|
<p>To simplify the problem: say that when a dimension (or feature) has already been updated n times, the next time I see that feature I want to set the learning rate to 1/n.</p>
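<p>A dense sketch of the schedule I mean, without the sparse handling (names are illustrative):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

n_features = 5
x = T.vector('x')
t = T.scalar('t')
w = theano.shared(np.zeros(n_features, dtype=theano.config.floatX))
counts = theano.shared(np.zeros(n_features, dtype=theano.config.floatX))

cost = (T.dot(x, w) - t) ** 2
g = T.grad(cost, w)

seen = T.neq(x, 0)                      # features present in this sample
new_counts = counts + seen
lr = T.switch(seen, 1.0 / T.maximum(new_counts, 1.0), 0.0)

train = theano.function([x, t], cost,
                        updates=[(counts, new_counts), (w, w - lr * g)])
</code></pre>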
<p>I came up with this code:</p>
<pre></pre>
<p>Theano does not give any error, but the training results sometimes give NaN. Does anybody know how to correct this, please?</p>
<p>Thank you for your help</p>
<p>PS: I suspect it is the operations in sparse space which create the problems. So I tried to replace * with theano.sparse.mul. This gave the same results as mentioned before.</p>
|
[
{
"AnswerId": "29643793",
"CreationDate": "2015-04-15T07:12:23.017",
"ParentId": null,
"OwnerUserId": "828368",
"Title": null,
"Body": "<p>Perhaps you can utilize the following <a href=\"http://deeplearning.net/tutorial/code/lstm.py\" rel=\"noreferrer\">example for implementation of <strong>adadelta</strong></a>, and use it to derive your own. Please update if you succeeded :-)</p>\n"
},
{
"AnswerId": "39567087",
"CreationDate": "2016-09-19T06:58:27.297",
"ParentId": null,
"OwnerUserId": "217802",
"Title": null,
"Body": "<p>I find <a href=\"https://github.com/Lasagne/Lasagne/blob/master/lasagne/updates.py#L385\" rel=\"nofollow\">this implementation from Lasagne</a> very concise and readable. You can use it pretty much as it is:</p>\n\n<pre><code>for param, grad in zip(params, grads):\n value = param.get_value(borrow=True)\n accu = theano.shared(np.zeros(value.shape, dtype=value.dtype),\n broadcastable=param.broadcastable)\n accu_new = accu + grad ** 2\n updates[accu] = accu_new\n updates[param] = param - (learning_rate * grad /\n T.sqrt(accu_new + epsilon))\n</code></pre>\n"
},
{
"AnswerId": "36851284",
"CreationDate": "2016-04-25T21:20:16.417",
"ParentId": null,
"OwnerUserId": "4295315",
"Title": null,
"Body": "<p>I was looking for the same thing and ended up implementing it myself in the style of the resource zuuz already pointed out. So maybe this helps anyone looking for help here.</p>\n\n<pre><code>def adagrad(lr, tparams, grads, inp, cost):\n # stores the current grads\n gshared = [theano.shared(np.zeros_like(p.get_value(),\n dtype=theano.config.floatX),\n name='%s_grad' % k)\n for k, p in tparams.iteritems()]\n grads_updates = zip(gshared, grads)\n # stores the sum of all grads squared\n hist_gshared = [theano.shared(np.zeros_like(p.get_value(),\n dtype=theano.config.floatX),\n name='%s_grad' % k)\n for k, p in tparams.iteritems()]\n rgrads_updates = [(rg, rg + T.sqr(g)) for rg, g in zip(hist_gshared, grads)]\n\n # calculate cost and store grads\n f_grad_shared = theano.function(inp, cost,\n updates=grads_updates + rgrads_updates,\n on_unused_input='ignore')\n\n # apply actual update with the initial learning rate lr\n n = 1e-6\n updates = [(p, p - (lr/(T.sqrt(rg) + n))*g)\n for p, g, rg in zip(tparams.values(), gshared, hist_gshared)]\n\n f_update = theano.function([lr], [], updates=updates, on_unused_input='ignore')\n\n return f_grad_shared, f_update\n</code></pre>\n"
}
] |
29,380,867 | 3 |
<theano>
|
2015-04-01T00:29:06.957
| null | 4,620,420 |
Call a function from scan in Theano
|
<p>I need to execute a theano function a number of times via scan in order to sum up a cost function and use it in a gradient computation. I'm familiar with the deep-learning tutorials that do this, but my data slicing and some other complications mean I need to do this a little differently.
Below is a much simplified version of what I'm trying to do...</p>
<pre></pre>
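<p>A sketch of the shape of it (with illustrative names):</p>
<pre><code>import numpy as np
import theano
import theano.tensor as T

x = theano.shared(np.arange(4, dtype=theano.config.floatX))
v = T.vector('v')
test_fn = theano.function([v], T.sum(v ** 2))  # compiled cost for one slice

def step(curr):
    return test_fn(x[curr:curr + 2])           # fails: curr is symbolic here

outs, _ = theano.scan(step, sequences=[T.arange(2)])
</code></pre>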
<p>In the scan function, the call to test_fn(curr) is giving the error:
"Expected an array-like object, but found a Variable: maybe you are trying to call a function on a (possibly shared) variable instead of a numeric array?"</p>
<p>Even if I pass in an array of values instead of putting the T.arange(2) in place, I still get the same error. Is there a reason you can't call a function from scan?</p>
<p>In general I'm wondering if there is a way to call a function like this with a series of indexes so that the output can feed into a T.grad() computation (not shown). </p>
|
[
{
"AnswerId": "29426595",
"CreationDate": "2015-04-03T05:16:09.377",
"ParentId": null,
"OwnerUserId": "2990135",
"Title": null,
"Body": "<p>Don't make two different <code>theano.functions</code>.</p>\n\n<p>A <code>theano.function</code> takes a symbolic relationship, optimizes it, and compiles it. What you are doing here is asking <code>theano.scan</code> (and thus <code>out_fn</code>) to consider a compiled function as a symbolic relationship. Whether you could <em>technically</em> get that to work I'm not sure, but it goes against the idea of Theano.</p>\n\n<p>Since I don't know what your cost function does here I can't give an exact example, but here's a quick example which does work and should be similar enough to what I <em>think</em> you're trying to do.</p>\n\n<pre><code>x = theano.shared(np.asarray([7.1,2.2,3.4], dtype = np.float32))\n\nv = T.vector(\"v\")\ndef fv(v):\n res,_ = theano.scan(lambda x: x ** 2, v)\n return T.sum(res)\n\ndef f(i):\n return fv(x[i:i+2])\n\nouts,_ = theano.scan(\n f, \n T.arange(2)\n )\n\nfn = theano.function(\n [],\n outs,\n )\n\nfn()\n</code></pre>\n"
},
{
"AnswerId": "40788133",
"CreationDate": "2016-11-24T13:49:43.883",
"ParentId": null,
"OwnerUserId": "1950580",
"Title": null,
"Body": "<h3>Solution</h3>\n\n<p>The standard way is <code>OpFromGraph</code> (as of 0.8.2)</p>\n\n<pre><code>import theano as th\nimport theano.tensor as T\n\nx = T.scalar('x')\ny = T.scalar('y')\nz = x+y\n# unlike theano.function, must use list for outputs\nop_add = th.OpFromGraph([x,y], [z])\n\ndef my_add(x_, y_):\n return op_add(x_, y_)[0]\n\nx_list = T.vector('x_li')\nx_sum = th.scan(op_add, sequences=[x_list], outputs_info=[T.constant(0.)])\nfn_sum = th.function([x_list], x_sum)\nfn([1., 2., 3., 4.]) # 10.\n</code></pre>\n\n<h3>What it does?</h3>\n\n<p><code>OpFromGraph</code> compiles a function defined from a graph, then pack it into a new Op. Just like defining functions in imperative programming languages.</p>\n\n<h3>Pros/Cons</h3>\n\n<ul>\n<li>[+] It can be handy in tricky models.</li>\n<li>[+] It saves compilation time. You can compile a commonly used part of a big model into <code>OpFromGraph</code>, then directly use it in a bigger model. The final graph will have less nodes than direct implementation.</li>\n<li>[-] It cause worse runtime performance. Calling a function has overhead, also the compiler is unable to do a global optimization due to its compiled nature.</li>\n<li>[-] It's premature and still under development. The documentation for it is incomplete. Currently does not support <code>updates</code> and <code>givens</code> as in <code>theano.function</code>.</li>\n</ul>\n\n<h3>Notes</h3>\n\n<p>In most cases, you should be defining python functions/classes to build model. Only use <code>OpFromGraph</code> if no workaround is possible or you want to save compilation time.</p>\n"
},
{
"AnswerId": "29446969",
"CreationDate": "2015-04-04T13:25:33.887",
"ParentId": null,
"OwnerUserId": "4620420",
"Title": null,
"Body": "<p>After some investigation I agree that calling a function from a function is not correct. The challenge with the code is that following the basic design of the deep-learning tutorials, the first layer of the net has a symbolic variable defined as it's input and the output is propagated up to higher layers until a final cost is computed from the top layer. The tutorials uses code something like...</p>\n\n<pre><code>class layer1(object):\n def __init__(self):\n self.x = T.matrix()\n self.output = activation(T.dot(self.x,self.W) + self.b)\n</code></pre>\n\n<p>For me the tensor variable (layer1.self.x) needs to change every time scan takes a step to have a new slice of data. The \"givens\" statement in a function does that, but since calling a compiled theano function from inside a \"scan\" doesn't work there are two other solutions I was able to find...</p>\n\n<p>1 - Rework the network so that its cost function is based on a series of function calls instead of a propagated variable. This is technically simple but requires a bit of re-coding to get things organized properly in a multi-layer network.</p>\n\n<p>2 - Use theano.clone inside of scan. That code looks something like...</p>\n\n<pre><code>def step(curr):\n y_in = y[curr]\n replaces = {tn.layer1.x : x[curr:curr+1]}\n fn = theano.clone(tn.cost(y_in), replace=replaces)\n return fn\nouts,_ = theano.scan(step, sequences=[T.arange(batch_start,batch_end)])\n</code></pre>\n\n<p>Both methods return the same results and appear execute at the same speed.</p>\n"
}
] |
29,398,109 | 2 |
<python><numpy><theano><deep-learning>
|
2015-04-01T18:26:56.050
| null | 3,140,172 |
Using Random Numbers in Theano
|
<p>I am a theano newbie. <br></p>
<p>Can someone please explain the following code?</p>
<pre><code>from theano.tensor.shared_randomstreams import RandomStreams
from theano import function
srng = RandomStreams(seed=234)
rv_u = srng.uniform((2,2))
rv_n = srng.normal((2,2))
f = function([], rv_u)
g = function([], rv_n, no_default_updates=True)
nearly_zeros = function([], rv_u + rv_u - 2 * rv_u)

state_after_v0 = rv_u.rng.get_value().get_state()
nearly_zeros()  # this affects rv_u's generator
v1 = f()
rng = rv_u.rng.get_value(borrow=True)
rng.set_state(state_after_v0)
rv_u.rng.set_value(rng, borrow=True)
v2 = f()  # v2 != v1
v3 = f()  # v3 == v1
</code></pre>
<p>Q1. How does nearly_zeros() affect rv_u's generator?<br>
Q2. Why is the following true?<br> </p>
<pre><code>v2 != v1
v3 == v1
</code></pre>
|
[
{
"AnswerId": "29405457",
"CreationDate": "2015-04-02T05:18:11.813",
"ParentId": null,
"OwnerUserId": "2990135",
"Title": null,
"Body": "<p><strong>Q1</strong></p>\n\n<p>Looks like only 1 \"value\" (ie a 2x2 matrix) is being generated by rv_u's generator. You can see this if you use <code>theano.printing.debugprint</code> to print the graph of the function. For reference this is what I got:</p>\n\n<pre><code>>>> theano.printing.debugprint(nearly_zeros)\nElemwise{Composite{((i0 + i0) - (i1 * i0))}}[(0, 0)] [@A] '' 1\n |RandomFunction{uniform}.1 [@B] '' 0\n | |<RandomStateType> [@C]\n | |TensorConstant{(2,) of 2} [@D]\n | |TensorConstant{0.0} [@E]\n | |TensorConstant{1.0} [@F]\n |TensorConstant{(1, 1) of 2.0} [@G]\nRandomFunction{uniform}.0 [@B] '' 0\n</code></pre>\n\n<p>This means the two functions (nearly_zero and f) both grab only 1 value from rv_u, hence why </p>\n\n<pre><code>v3 == v1\n</code></pre>\n\n<p><strong>Q2</strong></p>\n\n<p>Theano is largely a symbolic math package. You define relationships between symbolic variables and Theano figures out how to evaluate those relationships.</p>\n\n<p>So you could consider rv_u to be a variable which represents a single 2x2 matrix, rather than a rng which generates a new 2x2 matrix each time it's \"called.\" Given that view, Theano would only need to call the underlying rng once to get the value for the variable rv_u.</p>\n"
},
{
"AnswerId": "35075137",
"CreationDate": "2016-01-29T01:03:17.553",
"ParentId": null,
"OwnerUserId": "5170062",
"Title": null,
"Body": "<h2>Q1</h2>\n\n<p>When <code>nearly_zeros()</code> is called, it uses the current rng state of the random stream to return a new <code>rv_u</code> and increments the state just as it does for the function <code>f</code>. The easiest way to see what is happening to <code>rv_u</code> is to add it as an output to the <code>nearly_zeros()</code> function. Although <code>rv_u</code> is present three times, the value of <code>rv_u</code> is sampled from the random stream only once, hence <code>nearly_zero</code> will indeed be nearly zero (up to floating-point quantization error.)</p>\n\n<pre><code>nearly_zeros = function([], [rv_u, rv_u + rv_u - 2 * rv_u])\nf() # The return value will be different from the rv_u in the list above\n</code></pre>\n\n<p>If you do not intend for it to increment, you can specify the <code>no_default_updates=True</code> as was done for <code>g</code>.</p>\n\n<pre><code>nearly_zeros = function([], [rv_u, rv_u + rv_u - 2 * rv_u], \n no_default_updates=True)\nf() # The return value will be equal to the rv_u in the list above\n</code></pre>\n\n<h2>Q2</h2>\n\n<p>The \"why\" here was a little ambiguous, so I present two possibilities with the answers.</p>\n\n<p><h3>Q2(a)</h3>Why does using <code>nearly_zeros()</code> impact <code>rv_u</code> in this way?</p>\n\n<p><code>rv_u</code> is a uniform srng object which uses a shared variable that is incremented each time it is requested, unless specifically told otherwise (by means of the <code>no_defalut_updates</code> parameter.) This is true whether <code>f</code> or <code>nearly_zero</code> happens to be the function that requests the value of <code>rv_u</code>.</p>\n\n<p><h3>Q2(b)</h3>Why is the following true:</p>\n\n<pre><code>v2 != v1\nv3 == v1\n</code></pre>\n\n<p>This is because your code saves the state after defining <code>nearly_zeros</code> but before it is called. When the state is set, the first value that is returned is the value used by <code>nearly_zeros</code> (which is equal to <code>v2</code>.) The next value requested will be equal to the value of <code>v3</code>. The following is a nearly identical copy of your code, and it should prove illustrative:</p>\n\n<pre><code>from theano.tensor.shared_randomstreams import RandomStreams\nfrom theano import function\nsrng = RandomStreams(seed=234)\nrv_u = srng.uniform((2,2))\nrv_n = srng.normal((2,2))\nf = function([], rv_u)\ng = function([], rv_n, no_default_updates=True) #Not updating rv_n.rng\nnearly_zeros = function([], [rv_u, rv_u + rv_u - 2 * rv_u]) # Give rv_u, too\n\nstate_after_v0 = rv_u.rng.get_value().get_state()\nnearly_zeros() # this affects rv_u's generator\nv1 = f()\nrng = rv_u.rng.get_value(borrow=True)\nrng.set_state(state_after_v0)\nrv_u.rng.set_value(rng, borrow=True)\nv2 = f() # v2 != v1, but equal to rv_u used by nearly_zeros\nv3 = f() # v3 == v1\n</code></pre>\n\n<p><i>Aside: Although this thread is a bit old, hopefully someone will find these answers helpful.</i></p>\n"
}
] |