QuestionId | AnswerCount | Tags | CreationDate | AcceptedAnswerId | OwnerUserId | Title | Body | answers
---|---|---|---|---|---|---|---|---|
48,114,820 | 1 |
<python><machine-learning><deep-learning><keras>
|
2018-01-05T13:40:35.923
| null | 5,855,649 |
How to provide input to keras for multiple data features?
|
<p>I know this is a really basic question, but since I am new to Keras I am not able to find the right way to provide the dataset mentioned below as input to a Keras model.</p>
<pre></pre>
<p>and a target variable which has 41 classes in total.</p>
<p>Subject, Message and the target variable have string values and the remaining columns have numeric values. I don't know how I can pipe this dataset into a simple CNN model?</p>
<p>Thank you in advance!!!</p>
|
[
{
"AnswerId": "48115375",
"CreationDate": "2018-01-05T14:14:52.430",
"ParentId": null,
"OwnerUserId": "6494397",
"Title": null,
"Body": "<p>I guess simple ANN works for you case, any reasons for going to CNN. BTW you can give the multiple inputs to Keras as shown here <a href=\"https://github.com/naveenkambham/MachineLearningModels/blob/master/NeuralNetwork.py\" rel=\"nofollow noreferrer\">https://github.com/naveenkambham/MachineLearningModels/blob/master/NeuralNetwork.py</a> this is a ANN that accpets 5 features as input features and 2 output features. Please refer this and build the model. Lemme know if you need any clarifications. Please mark this as answer or vote if you find it helpful.</p>\n"
}
] |
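<p>A minimal sketch of the multi-input idea from the answer above, using the Keras functional API. The 41-class output comes from the question; the input sizes and all other names here are assumptions for illustration:</p>
<pre><code>from keras.models import Model
from keras.layers import Input, Dense, concatenate

# assumed shapes: 5 numeric columns plus a 100-dimensional text vectorization
numeric_in = Input(shape=(5,))
text_in = Input(shape=(100,))

merged = concatenate([numeric_in, text_in])        # join the two feature groups
hidden = Dense(64, activation='relu')(merged)
output = Dense(41, activation='softmax')(hidden)   # 41 target classes, per the question

model = Model(inputs=[numeric_in, text_in], outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit([numeric_array, text_array], one_hot_targets, epochs=10)
</code></pre>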
48,114,914 | 1 |
<tensorflow>
|
2018-01-05T13:46:58.023
| 48,123,935 | 5,666,733 |
Tensorflow C API placeholder / input variable setting
|
<p>I am trying to use the TensorFlow C API to run an implementation of LeNet that has been saved from a Keras/TF model, but I am having consistent problems with setting the input. The relevant piece of code is:</p>
<pre></pre>
<p>However, no matter how I try to build up the input tensor, I get the following error status and message:</p>
<pre></pre>
<p>Any suggestions on what I am doing wrong?</p>
|
[
{
"AnswerId": "48123935",
"CreationDate": "2018-01-06T03:17:01.770",
"ParentId": null,
"OwnerUserId": "6708503",
"Title": null,
"Body": "<p>In your call to <a href=\"https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/c/c_api.h#L1247\" rel=\"nofollow noreferrer\"><code>TF_SessionRun</code></a>, you're also providing the <code>conv2d_1_input</code> operation as a \"target\". The error message can be improved, but it's basically complaining that you're asking the session to execute a placeholder operation, which it can't - which isn't possible (see the note in the documentation for <a href=\"https://www.tensorflow.org/api_docs/python/tf/placeholder\" rel=\"nofollow noreferrer\"><code>tf.placeholder</code></a>)</p>\n\n<p>Shouldn't you be asking for a different <code>target</code> or <code>output</code> tensor from the call to <code>TF_SessionRun</code> with something like:</p>\n\n<pre><code>TF_Output out = { TF_GraphOperationByName(graph, \"<name_of_output_tensor>\"), 0 };\n\nTF_Tensor* outputvalues = NULL;\nTF_SessionRun(session, NULL,\n &inp, inputvalues, 1, // inputs\n &out, &outputvalues, 1, // outputs\n NULL, 0, // targets\n NULL, status);\n</code></pre>\n\n<p>Hope that helps.</p>\n"
}
] |
48,115,096 | 2 |
<python><tensorflow>
|
2018-01-05T13:57:54.803
| null | 9,088,766 |
Explicitly clear/reset a nested TensorFlow Graph scope
|
<p>So, I'm using a bunch of functions from OpenAI <a href="https://github.com/openai/baselines/tree/master/baselines/deepq" rel="nofollow noreferrer">baselines</a> for Reinforcement Learning. In those functions, policy nets are initialised using statements like:</p>
<pre></pre>
<p>The problem is that the pointer to the output of those networks gets returned while still inside the scope, which means that when accessing those functions from another .py file I am still inside those scopes.</p>
<p>Basically I want to run a first function that trains the net and dumps the checkpoint to disk using .
Next, I run a function that reinitializes the same tf Graph and loads its pretrained values using the checkpoint dir.</p>
<p>Right now, when I try this, I get a ValueError:
<em></em> because at the point of running the second function, I'm still in the scope defined by the first. I checked the <a href="https://github.com/openai/baselines/blob/master/baselines/deepq/build_graph.py" rel="nofollow noreferrer">code</a> from OpenAI baselines (very nested code, hard to see everything that's going on), and <strong>reuse is already set to True</strong>.</p>
<p>So I tried doing something like:</p>
<p> followed by:</p>
<p></p>
<p>after the first function call. (I don't need the session to remain active since I'm dumping everything to disk)</p>
<p>But this gives me errors because I'm still inside a nested graph scope and so I can't reset the default graph... (see eg <a href="https://github.com/tensorflow/tensorflow/issues/11121" rel="nofollow noreferrer">here</a>)</p>
<p>Alternatively I tried things like:</p>
<pre></pre>
<p>or</p>
<pre></pre>
<p>but the exit() function needs a whole bunch of args I don't know how to get... (and I can't find good <a href="https://www.tensorflow.org/api_docs/python/tf/name_scope" rel="nofollow noreferrer">documentation</a> on how to use this function).</p>
<p>My current solution is to run these functions in separate subprocesses in Python (and let the garbage collector do all the work), but this doesn't feel like a satisfactory solution.</p>
<p>Any ideas on how to deal with this? Ideally I'd need something like: <strong></strong></p>
|
[
{
"AnswerId": "48115549",
"CreationDate": "2018-01-05T14:25:40.213",
"ParentId": null,
"OwnerUserId": "712995",
"Title": null,
"Body": "<p>You can try to do your work in another default graph:</p>\n\n<pre class=\"lang-py prettyprint-override\"><code>with tf.get_default_graph().as_default():\n with tf.variable_scope('deepq', reuse=False):\n v = tf.get_variable('v', shape=[])\n print(v.name, v.graph)\n\n with tf.Graph().as_default():\n v = tf.get_variable('v', shape=[])\n print(v.name, v.graph)\n</code></pre>\n\n<p>Output:</p>\n\n<pre><code>deepq/v:0 <tensorflow.python.framework.ops.Graph object at 0x7f61adaa6390>\nv:0 <tensorflow.python.framework.ops.Graph object at 0x7f61460abbd0>\n</code></pre>\n"
},
{
"AnswerId": "48152757",
"CreationDate": "2018-01-08T14:55:45.637",
"ParentId": null,
"OwnerUserId": "9088766",
"Title": null,
"Body": "<p>Ait one solution is indeed to reset the default graph:\nI simply wrap every function call in a new default graph object like this:</p>\n\n<pre><code>with tf.Graph().as_default():\n train_policy(output_dir)\n\nwith tf.Graph().as_default():\n run_policy(output_dir)\n\n...\n</code></pre>\n\n<p>This way the default graph simply gets reinitialised empty and you can load whatever is in the checkpoint file. (Inside every function I also close the default session before returning).</p>\n"
}
] |
48,115,671 | 1 |
<python><numpy><tensorflow><batch-processing><sklearn-pandas>
|
2018-01-05T14:32:40.627
| 48,116,463 | 7,246,268 |
How to use numpy array inputs in tensorflow RNN
|
<p>I am just curious how to generate sequences, batches and/or epochs from a numpy array to feed into a TensorFlow model, a multi-layer RNN graph. The numpy array was originally generated from a pandas dataset and an sklearn split, as below.</p>
<p>From Numpy to Pandas</p>
<pre></pre>
<p>Note: Very important</p>
<pre></pre>
<p>Out[37]:
(6721, 100)</p>
<pre></pre>
<p>Out[38]:
(6721, 3)</p>
<p>Now the shape of </p>
<h1>Scaling the features to speed up the model</h1>
<pre></pre>
<p>In order to generate the configuration parameters:</p>
<pre></pre>
<p>The input parameters used for the config are actually based on the shape of the numpy array, let's say input_size = 3 for 3 inputs, and output_size = 100 derived from the outputs of the one-hot encoding, i.e. depth equals 100.</p>
<pre></pre>
<p>For the graph features</p>
<p>The TensorFlow features are as listed below.</p>
<h1>Now for the training session</h1>
<pre></pre>
<p>Lastly, for the training session:</p>
<pre></pre>
<p>Configuration parameters</p>
<pre></pre>
<p>Here is how I try to iterate over the epochs.</p>
<pre></pre>
<p>Here are my batches again based on the shape of the input array, specifically the number of features.</p>
<pre></pre>
<p>Now for my output I get the below error. I have a particular issue with logits_size=[1,3], which I don't know how it was generated. It does not relate to either of the matrices (input matrix X_train, or output matrix y_train). My question is: how do I match the logits_size to the labels_size=[100,100]?</p>
<p>Thanks in advance</p>
<pre></pre>
|
[
{
"AnswerId": "48116463",
"CreationDate": "2018-01-05T15:20:19.157",
"ParentId": null,
"OwnerUserId": "241013",
"Title": null,
"Body": "<p>I think the problem is here in this part of your code.</p>\n\n<pre><code> val = tf.transpose(val, [1, 0, 2])\n\n last = tf.gather(val, int(val.get_shape()[0]) - 1)\n</code></pre>\n\n<p>The output of the RNN is (timestep, batch_index, data) and you are transposing to (batch_index, timestep, data). Then you do gather with indices = shape[0] - 1 on axis 0 (that's the default). So you are taking the last element of batch. You probably want to specify to axis 1.</p>\n\n<p>Another way to do it, that would keep the code cleaner is:</p>\n\n<pre><code>last = val[:, -1, :]\n</code></pre>\n\n<p>I'm guessing you are only doing one time step in your test so that should explain the 1.\nI don't see any other bug right now so I would guess your input_size is 3 and when you do the matrix multiplication you get the [1, 3].</p>\n\n<p>Check that the weight has a shape like (x, 100). If your batch size is 100 fixing those two should give a result that has the right shape.</p>\n"
}
] |
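<p>Since the question's batching code is elided, here is a generic sketch of a mini-batch generator over numpy arrays; the names X_train, y_train, train_op and the placeholders are assumptions, not taken from the question:</p>
<pre><code>import numpy as np

def iterate_minibatches(X, y, batch_size, shuffle=True):
    """Yield (x_batch, y_batch) pairs drawn from numpy arrays."""
    idx = np.arange(len(X))
    if shuffle:
        np.random.shuffle(idx)
    for start in range(0, len(X) - batch_size + 1, batch_size):
        batch = idx[start:start + batch_size]
        yield X[batch], y[batch]

# one epoch of training, feeding each batch through feed_dict:
# for x_b, y_b in iterate_minibatches(X_train, y_train, batch_size=100):
#     sess.run(train_op, feed_dict={X_ph: x_b, y_ph: y_b})
</code></pre>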
48,116,484 | 2 |
<python><neural-network><keras><kernel><conv-neural-network>
|
2018-01-05T15:21:39.253
| null | 3,556,711 |
Does convolution kernel need to be designed in CNN (Convolutional Neural Networks)?
|
<p>I am new to Convolutional Neural Networks. I am reading a tutorial and testing some sample code using . To add a convolution layer, basically I just need to specify the number of kernels and the size of the kernel. </p>
<p>My question is <em>what each kernel looks like? Are they generic to all computer vision applications?</em></p>
|
[
{
"AnswerId": "48117458",
"CreationDate": "2018-01-05T16:21:21.180",
"ParentId": null,
"OwnerUserId": "3908170",
"Title": null,
"Body": "<blockquote>\n <p>My question is what each kernel looks like? </p>\n</blockquote>\n\n<p>This depends on the parameters you chose for your Convolutional Layer:</p>\n\n<ul>\n<li>It will indeed depend on the <code>kernel_size</code> parameter you mentioned, as it will determine the shape and size of your kernel. Say you pass this parameter as <code>(3,3)</code> (on a Conv2D layer naturally), you will then obtain a 3x3 Kernel Matrix. </li>\n<li><p>It will depend on your <code>kernel_initializer</code> parameter, which determines the way that MxN Kernel Matrix is going to be filled. It's default value is <code>\"glorot_uniform\"</code>, which is explained on its <a href=\"https://keras.io/initializers/\" rel=\"nofollow noreferrer\">doc page</a>:</p>\n\n<blockquote>\n <p>Glorot uniform initializer, also called Xavier uniform initializer. It draws samples from a <strong>uniform distribution within [-limit, limit]</strong> where limit is sqrt(6 / (fan_in + fan_out)) where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor.</p>\n</blockquote>\n\n<p>This is telling us the specific way it fills that kernel matrix. You may well select any other kernel initializer you desire to fit your needs. You may even build Custom Initializers, also exemplified in <a href=\"https://keras.io/initializers/\" rel=\"nofollow noreferrer\">that doc</a> page:</p>\n\n\n\n<pre><code>from keras import backend as K\n\ndef my_init(shape, dtype=None):\n #or whatever you want to customize\n return K.random_normal(shape, dtype=dtype)\n\nmodel.add(Dense(64, kernel_initializer=my_init))\n</code></pre></li>\n<li><p>Furthermore, it will depend on your <code>kernel_regularizer</code> parameter, which defines regularization functions applied to the weights of your kernel. It's default value is <code>None</code> but you can select others from the ones <a href=\"https://keras.io/regularizers/\" rel=\"nofollow noreferrer\">available</a>. You can again define your own custom initializers in a similar fashion: </p>\n\n\n\n<pre><code>def l1_reg(weight_matrix):\n #same here, fit your own needs\n return 0.01 * K.sum(K.abs(weight_matrix))\n\nmodel.add(Dense(64, input_dim=64,\n kernel_regularizer=l1_reg)\n</code></pre></li>\n</ul>\n\n<blockquote>\n <p>Are they generic to all computer vision applications?</p>\n</blockquote>\n\n<p>This I think may be a bit broad, however I would venture and say <em>yes</em>. Keras has available many kernels that were designed to specifically adapt to Deep Learning applications; it includes those ones that are most commonly used throughout the literature and well-known applications. </p>\n\n<p>The good thing is that, as illustrated before, if any of those kernels does not fit your needs you could well define your own custom initializer, or well enhance it by using regularizes. This enables you to tackle those really specific CV problems you may have. </p>\n"
},
{
"AnswerId": "48118131",
"CreationDate": "2018-01-05T17:06:33.937",
"ParentId": null,
"OwnerUserId": "349130",
"Title": null,
"Body": "<p>The actual kernel values are learned during the learning process, that's why you only need to set the number of kernels and their size.</p>\n\n<p>What might be confusing is that the learned kernel values actually mimic things like Gabor and edge detection filters. These are generic to many computer vision applications, but instead of being engineered manually, they are learned from a big classification dataset (like ImageNet).</p>\n\n<p>Also the kernel values are part of a feature hierarchy that can be used directly as features for a variety of computer vision problems. In that terms they are also generic.</p>\n"
}
] |
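<p>To make the second answer's point concrete, here is a small sketch (layer sizes are arbitrary) showing that the kernel values are ordinary learned weights you can inspect:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential([Conv2D(32, (3, 3), input_shape=(28, 28, 1))])
kernels, biases = model.layers[0].get_weights()
print(kernels.shape)  # (3, 3, 1, 32): one 3x3 kernel per input/output channel pair
# before training these are just the kernel_initializer's random draw;
# after model.fit(...) they hold the learned filters
</code></pre>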
48,116,769 | 1 |
<python><tensorflow>
|
2018-01-05T15:37:31.990
| null | 987,397 |
Boolean indexing in Tensorflow
|
<p>It seems TensorFlow does not support boolean indexing. How can I do this in TensorFlow?</p>
<pre></pre>
|
[
{
"AnswerId": "48116973",
"CreationDate": "2018-01-05T15:50:44.487",
"ParentId": null,
"OwnerUserId": "4983450",
"Title": null,
"Body": "<p>To extract elements with a boolean array, you can use <a href=\"https://www.tensorflow.org/api_docs/python/tf/boolean_mask\" rel=\"nofollow noreferrer\"><code>boolean_mask</code></a>:</p>\n\n<pre><code>import tensorflow as tf\ntf.InteractiveSession()\n\na = tf.constant([3, 4, 5, -1, 6, -1, 7, 8])\nmask = tf.equal(a, -1)\ntf.boolean_mask(a, mask).eval()\n# array([-1, -1], dtype=int32)\n</code></pre>\n\n<p>Which however does not seem to support assignment. \nIf the elements need to be updated to the same value, use <code>tf.where</code>, which can work for both constant and Variable:</p>\n\n<pre><code>a = tf.constant([3, 4, 5, -1, 6, -1, 7, 8])\nmask = tf.equal(a, -1)\ntf.where(mask, [11] * a.shape[0], a).eval()\n# array([ 3, 4, 5, 11, 6, 11, 7, 8], dtype=int32)\n</code></pre>\n\n<p>If the updated values is an customized array with different values, we can use <a href=\"https://www.tensorflow.org/api_docs/python/tf/scatter_update\" rel=\"nofollow noreferrer\"><code>tf.scatter_update</code></a>, by converting the boolean mask to indices first, in which case <code>a</code> needs to be a <strong><code>Variable</code></strong>:</p>\n\n<pre><code>a = tf.Variable([3, 4, 5, -1, 6, -1, 7, 8])\ntf.global_variables_initializer().run()\n\nmask = tf.equal(a, -1)\nindices = tf.reshape(tf.where(mask), (-1,))\ntf.scatter_update(a, indices, [11, 12]).eval()\n# array([ 3, 4, 5, 11, 6, 12, 7, 8], dtype=int32)\n</code></pre>\n"
}
] |
48,117,223 | 0 |
<tensorflow><deep-learning>
|
2018-01-05T16:07:47.593
| null | 3,747,801 |
Does `config.gpu_options.allow_growth=True` reduce performance in the long run?
|
<p>I am interested in the costs of using <code>config.gpu_options.allow_growth=True</code>, which I read about <a href="https://github.com/tensorflow/tensorflow/issues/1578" rel="nofollow noreferrer">here</a>.
I understand that there are some performance losses initially, as tensorflow allocates memory in multiple steps, but are there long-run consequences?</p>
<p>E.g. if I have a computer that only runs tensorflow with this option, will it after, say, an hour of training run slower (batches per second) than if I didn't use the option?</p>
|
[] |
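<p>For reference, the option the question asks about is set like this in the TF 1.x API (a sketch, not the question's elided code):</p>
<pre><code>import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow GPU memory allocation on demand
sess = tf.Session(config=config)        # instead of grabbing all memory up front
</code></pre>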
48,117,395 | 1 |
<tensorflow><object-detection>
|
2018-01-05T16:18:30.997
| 48,189,421 | 7,585,525 |
Why do i get double info string only when i use pretrained slim model?
|
<p>I started training a faster_rcnn_inception_v2 with the inception_v2_imagenet_2016_08_28 pretrained model from slim.</p>
<pre></pre>
<p>I get a warning about a missing parameter (gamma).
Then I get all the info doubled.</p>
<pre></pre>
<p>Why? Is there a solution?
Thanks.</p>
|
[
{
"AnswerId": "48189421",
"CreationDate": "2018-01-10T14:18:16.033",
"ParentId": null,
"OwnerUserId": "7585525",
"Title": null,
"Body": "<p>I found the solution to this problem!</p>\n\n<p>The bug is in the function <code>get_variables_available_in_checkpoint</code> in <a href=\"https://github.com/tensorflow/models/blob/master/research/object_detection/utils/variables_helper.py\" rel=\"nofollow noreferrer\">https://github.com/tensorflow/models/blob/master/research/object_detection/utils/variables_helper.py</a>:</p>\n\n<pre><code>def get_variables_available_in_checkpoint(variables, checkpoint_path):\n \"\"\"Returns the subset of variables available in the checkpoint.\n\n Inspects given checkpoint and returns the subset of variables that are\n available in it.\n\n TODO: force input and output to be a dictionary.\n\n Args:\n variables: a list or dictionary of variables to find in checkpoint.\n checkpoint_path: path to the checkpoint to restore variables from.\n\n Returns:\n A list or dictionary of variables.\n Raises:\n ValueError: if `variables` is not a list or dict.\n \"\"\"\n if isinstance(variables, list):\n variable_names_map = {variable.op.name: variable for variable in variables}\n elif isinstance(variables, dict):\n variable_names_map = variables\n else:\n raise ValueError('`variables` is expected to be a list or dict.')\n ckpt_reader = tf.train.NewCheckpointReader(checkpoint_path)\n ckpt_vars = ckpt_reader.get_variable_to_shape_map().keys()\n vars_in_ckpt = {}\n for variable_name, variable in sorted(variable_names_map.items()):\n if variable_name in ckpt_vars:\n vars_in_ckpt[variable_name] = variable\n else:\n logging.warning('Variable [%s] not available in checkpoint',\n variable_name)\n if isinstance(variables, list):\n return vars_in_ckpt.values()\n return vars_in_ckpt\n</code></pre>\n\n<p>I commented this part and the info during training are displayed only once.</p>\n\n<pre><code> # else:\n # logging.warning('Variable [%s] not available in checkpoint',\n # variable_name)\n</code></pre>\n"
}
] |
48,117,526 | 0 |
<python><tensorflow>
|
2018-01-05T16:25:33.733
| null | 345,048 |
tensorflow case error: Invalid argument: assertion failed: None of the conditions evaluated as True
|
<p>Evaluation of this code: </p>
<pre></pre>
<p>results in following error:</p>
<pre></pre>
<p>while the tf.Print statement clearly prints trans_type=2 and even shows that tf.equal evaluates to true. What's the problem here, and more importantly, how do I even debug it when the tf.Print statement gives confusing results?</p>
<p>EDIT: changed to minimal example. It seems to be related to Dataset/iterators, just calling the method works fine.</p>
|
[] |
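<p>Since the question's code is elided, here is a generic sketch of tf.case with a default branch, which avoids the "None of the conditions evaluated as True" assertion when no predicate matches:</p>
<pre><code>import tensorflow as tf

trans_type = tf.placeholder(tf.int32, shape=[])

result = tf.case(
    [(tf.equal(trans_type, 1), lambda: tf.constant(1.0)),
     (tf.equal(trans_type, 2), lambda: tf.constant(2.0))],
    default=lambda: tf.constant(-1.0))  # fallback instead of a runtime assertion

with tf.Session() as sess:
    print(sess.run(result, feed_dict={trans_type: 2}))  # prints 2.0
</code></pre>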
48,117,562 | 2 |
<python><tensorflow><machine-learning><neural-network><mnist>
|
2018-01-05T16:27:50.280
| null | 3,058,060 |
How to exclude a class from MNIST in TensorFlow?
|
<p>I am new to TensorFlow and I am following the tutorial for beginners with the MNIST data set, and I want to train the model just with the digits 0-8 (excluding class 9), so where the code had 10 I replaced it with 9. But at the training part of the code, how do I ask the to exclude class 9? And what if I want to exclude more than one class?</p>
<pre class="lang-python prettyprint-override"></pre>
|
[
{
"AnswerId": "48117981",
"CreationDate": "2018-01-05T16:56:19.123",
"ParentId": null,
"OwnerUserId": "712995",
"Title": null,
"Body": "<p>This line should do it right after you get <code>batch_xs</code> and <code>batch_ys</code>:</p>\n\n<pre><code>batch_xs, batch_ys = zip(*[(x_, y_[:-1]) for x_, y_ in zip(batch_xs, batch_ys) if np.argmax(y_) not in [9]])\n</code></pre>\n"
},
{
"AnswerId": "48117952",
"CreationDate": "2018-01-05T16:53:37.367",
"ParentId": null,
"OwnerUserId": "9175058",
"Title": null,
"Body": "<p>You should pull the training data out of the mnist data object, dropping the class you want, and then proceed. First get the dataset without class <code>9</code> in it:</p>\n\n<pre><code>Xdata_no9 = np.array([x for (x,y) in zip(mnist.train.images,mnist.train.labels) if y[9]==0])\nydata_no9 = np.array([y[0:9] for y in mnist.train.labels if y[9]==0])\n</code></pre>\n\n<p>Note that the <code>y[0:9]</code> reduces the size to <code>9</code> from <code>10</code>. That will do it, but now you need to build your own code to pull a minibatch. Here's a simple way to do so:</p>\n\n<pre><code>n = Xdata_no9.shape[0]\nbatch_size = 100\nbatch = np.floor(np.random.rand(batch_size)*n).astype(int)\nbatch_xs = Xdata_no10[batch,:]\nbatch_ys = ydata_no10[batch,:]\n</code></pre>\n\n<p>Note that you can compress this code a bit, but I have written it to be instructive. </p>\n\n<p><strong>Note of Caution</strong>: doing this (dropping the class from your training set) is better practice: if you don't want to train on part of your data, you should remove it from your dataset early on, rather than require every call to that data to remember what part of the data should be ignored. In this example it doesn't much matter as you only use it in training, but this of course would then break if you tried to evaluate performance on the whole training set (unless you remembered to ignore that class again).</p>\n"
}
] |
48,117,704 | 2 |
<c#><android><opencv><tensorflow><tensorflowsharp>
|
2018-01-05T16:36:51.147
| 48,129,480 | 8,732,201 |
How to transform Byte[](decoded as PNG or JPG) to Tensorflows Tensor
|
<p>I'm trying to use TensorflowSharp in a project in Unity.</p>
<p>The problem I'm facing is that for the transform you usually use a second graph to transform the input into a tensor.
The functions used, DecodeJpg and DecodePng, are not supported on Android, so how can you transform that input into a tensor? </p>
<pre></pre>
<p>Other solutions seem to create inaccurate results. </p>
<p>Maybe somehow with a Mat object?</p>
<p>And my EDIT:
I implemented something comparable in C# in Unity and it works partially; it is just not accurate at all. How am I going to find out the mean? And I could not find anything about the RGB order; I'm really new to this so maybe I have just overlooked it (on Tensorflow.org). Using MobileNet trained in 1.4.</p>
<pre></pre>
|
[
{
"AnswerId": "48489474",
"CreationDate": "2018-01-28T17:45:36.247",
"ParentId": null,
"OwnerUserId": "2279177",
"Title": null,
"Body": "<p>You probably didn't crop and scale your image before putting it into @sladomic function.</p>\n\n<p>I managed to hack together a <a href=\"https://github.com/Syn-McJ/TFClassify-Unity\" rel=\"nofollow noreferrer\">sample of using TensorflowSharp in Unity</a> for object classification. It works with model from official Tensorflow Android example, but also with my self-trained MobileNet model. All you need is to replace the model and set your mean and std, which in my case were all equal to 224.</p>\n"
},
{
"AnswerId": "48129480",
"CreationDate": "2018-01-06T16:41:55.373",
"ParentId": null,
"OwnerUserId": "4132383",
"Title": null,
"Body": "<p>Instead of feeding the byte array and then use DecodeJpeg, you could feed the actual float array, which you can get like this:</p>\n\n<p><a href=\"https://github.com/tensorflow/tensorflow/blob/3f4662e7ca8724f760db4a5ea6e241c99e66e588/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowImageClassifier.java#L134\" rel=\"nofollow noreferrer\">https://github.com/tensorflow/tensorflow/blob/3f4662e7ca8724f760db4a5ea6e241c99e66e588/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowImageClassifier.java#L134</a></p>\n\n<pre><code>float[] floatValues = new float[inputSize * inputSize * 3];\nint[] intValues = new int[inputSize * inputSize];\n\nbitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());\nfor (int i = 0; i < intValues.length; ++i) {\n final int val = intValues[i];\n floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - imageMean) / imageStd;\n floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - imageMean) / imageStd;\n floatValues[i * 3 + 2] = ((val & 0xFF) - imageMean) / imageStd;\n}\n\nTensor<Float> input = Tensors.create(floatValues);\n</code></pre>\n\n<p>In order to use \"Tensors.create()\" you need to have at least Tensorflow version 1.4.</p>\n"
}
] |
48,117,854 | 3 |
<ios><tensorflow><coreml>
|
2018-01-05T16:47:07.070
| null | 9,126,047 |
How to use a retrained "tensorflow for poets" graph on iOS?
|
<p>With "tensorflow for poets", I retrained the inceptionv3 graph. Now I want to use tfcoreml converter to convert the graph to an iOS coreML model.</p>
<p>But tf_coreml_converter.py stops with "NotImplementedError: Unsupported Ops of type: PlaceholderWithDefault".</p>
<p>I already tried "optimize_for_inference" and "strip_unused", but I can't get rid of this unsupported op "PlaceholderWithDefault".</p>
<p>Any idea what steps are needed after training in tensorflow-for-poets, to convert a "tensorflow-for-poets" graph (inceptionv3) to an iOS coreML model?</p>
|
[
{
"AnswerId": "50723386",
"CreationDate": "2018-06-06T14:51:21.590",
"ParentId": null,
"OwnerUserId": "6207061",
"Title": null,
"Body": "<pre><code> import tfcoreml as tf_converter\n tf_converter.convert(tf_model_path = '/Users/username/path/tf_files/retrained_graph.pb',\n mlmodel_path = 'MyModel.mlmodel',\n output_feature_names = ['final_result:0'],\n input_name_shape_dict = {'input:0':[1,224,224,3]},\n image_input_names = ['input:0'],\n class_labels = '/Users/username/path/tf_files/retrained_labels.txt',\n image_scale=2/255.0,\n red_bias=-1,\n green_bias=-1,\n blue_bias=-1\n )\n</code></pre>\n\n<p>Using tfcoreml, I found success with these settings.</p>\n"
},
{
"AnswerId": "48127686",
"CreationDate": "2018-01-06T13:13:16.933",
"ParentId": null,
"OwnerUserId": "7501629",
"Title": null,
"Body": "<p>Whoever created this graph used <code>tf.placeholder_with_default()</code> to define the placeholder (a placeholder in TF is used for the inputs to the neural network). Since tf-coreml does not support the PlaceholderWithDefault op, you cannot use this graph.</p>\n\n<p>Possible solutions:</p>\n\n<ul>\n<li>Define the placeholders using <code>tf.placeholder()</code> instead. The problem is that you'll need to retrain the graph from scratch since Tensorflow for Poets uses a pretrained graph and you can no longer use that.</li>\n<li>Hack the graph to replace the PlaceholderWithDefault op with Placeholder.</li>\n<li>Hack tf-coreml to use a Placeholder op whenever it encounters a PlaceholderWithDefault op. This is probably the quickest solution.</li>\n</ul>\n\n<p><strong>Update:</strong> From the code, it looks like a recent update to tf-coreml now simply skips the PlaceholderWithDefault layer. It should no longer give an error message. So if you use the latest version of tf-coreml (not using pip but by checking out the master branch of the GitHub repo) then you should no longer get this error.</p>\n"
},
{
"AnswerId": "48135762",
"CreationDate": "2018-01-07T09:03:59.327",
"ParentId": null,
"OwnerUserId": "9126047",
"Title": null,
"Body": "<p>I succedded in removing the PlaceholderWithDefault op from the retrained tensorflow for poets graph with this steps:</p>\n\n<ol>\n<li><p>Optimize graph for interference:</p>\n\n<p><code>python -m tensorflow.python.tools.optimize_for_inference \\\n--input retrained_graph.pb \\\n--output graph_optimized.pb \\\n--input_names=Mul\\\n--output_names=final_result</code></p></li>\n<li><p>Remove PlaceholderWithDefault op with transform_graph tool:</p>\n\n<p><code>bazel build tensorflow/tools/graph_transforms:transform_graph\nbazel-bin/tensorflow/tools/graph_transforms/transform_graph \\\n--in_graph=graph_optimized.pb \\\n--out_graph=graph_optimized_stripped.pb \\\n--inputs='Mul' \\\n--outputs='final_result' \\\n--transforms='remove_nodes(op=PlaceholderWithDefault)'</code></p></li>\n</ol>\n\n<p>Afterwards I could convert it to coreML. But as Matthijs already pointed out, the latest version of tfcoreml from git hub does it automatically.</p>\n"
}
] |
48,118,111 | 4 |
<python><python-3.x><keras>
|
2018-01-05T17:05:16.863
| null | 8,952,956 |
Get loss values for each training instance - Keras
|
<p>I want to get the loss values as the model trains on each instance. </p>
<pre class="lang-python prettyprint-override"></pre>
<p>For example, the above code returns the loss values for each epoch, not for each mini-batch or instance. </p>
<p>what is the best way to do this? Any suggestions?</p>
|
[
{
"AnswerId": "48118514",
"CreationDate": "2018-01-05T17:31:55.760",
"ParentId": null,
"OwnerUserId": "1587118",
"Title": null,
"Body": "<p>After combining resources from <a href=\"https://github.com/keras-team/keras/issues/2850\" rel=\"nofollow noreferrer\">here</a> and <a href=\"https://keunwoochoi.wordpress.com/2016/07/16/keras-callbacks/\" rel=\"nofollow noreferrer\">here</a> I came up with the following code. Maybe it will help you. The idea is that you can override the <code>Callbacks</code> class from keras and then use the <code>on_batch_end</code> method to check the loss value from the <code>logs</code> that keras will supply automatically to that method. </p>\n\n<p>Here is a working code of an NN with that particular function built in. Maybe you can start from here - </p>\n\n\n\n<pre class=\"lang-python prettyprint-override\"><code>import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport os\nimport matplotlib.pyplot as plt\nimport time\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split\n\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.callbacks import Callback\n\n# fix random seed for reproducibility\nseed = 155\nnp.random.seed(seed)\n\n# load pima indians dataset\n\n# download directly from website\ndataset = pd.read_csv(\"https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data\", \n header=None).values\nX_train, X_test, Y_train, Y_test = train_test_split(dataset[:,0:8], dataset[:,8], test_size=0.25, random_state=87)\nclass NBatchLogger(Callback):\n def __init__(self,display=100):\n '''\n display: Number of batches to wait before outputting loss\n '''\n self.seen = 0\n self.display = display\n\n def on_batch_end(self,batch,logs={}):\n self.seen += logs.get('size', 0)\n if self.seen % self.display == 0:\n print('\\n{0}/{1} - Batch Loss: {2}'.format(self.seen,self.params['samples'],\n logs.get('loss')))\n\n\nout_batch = NBatchLogger(display=1000)\nnp.random.seed(seed)\nmy_first_nn = Sequential() # create model\nmy_first_nn.add(Dense(5, input_dim=8, activation='relu')) # hidden layer\nmy_first_nn.add(Dense(1, activation='sigmoid')) # output layer\nmy_first_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmy_first_nn_fitted = my_first_nn.fit(X_train, Y_train, epochs=1000, verbose=0, batch_size=128,\n callbacks=[out_batch], initial_epoch=0)\n</code></pre>\n\n<p>Please let me know if you wanted to have something like this.</p>\n"
},
{
"AnswerId": "52230255",
"CreationDate": "2018-09-07T22:20:52.163",
"ParentId": null,
"OwnerUserId": "10332794",
"Title": null,
"Body": "<p>There is exactly what you are looking for at the end of this official keras documentation page <a href=\"https://keras.io/callbacks/#callback\" rel=\"nofollow noreferrer\">https://keras.io/callbacks/#callback</a></p>\n\n<p>Here is the code to create a custom callback\n</p>\n\n<pre><code>class LossHistory(keras.callbacks.Callback):\n def on_train_begin(self, logs={}):\n self.losses = []\n\n def on_batch_end(self, batch, logs={}):\n self.losses.append(logs.get('loss'))\n\nmodel = Sequential()\nmodel.add(Dense(10, input_dim=784, kernel_initializer='uniform'))\nmodel.add(Activation('softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n\nhistory = LossHistory()\nmodel.fit(x_train, y_train, batch_size=128, epochs=20, verbose=0, callbacks=[history])\n\nprint(history.losses)\n# outputs\n'''\n[0.66047596406559383, 0.3547245744908703, ..., 0.25953155204159617, 0.25901699725311789]\n'''\n</code></pre>\n"
},
{
"AnswerId": "55119295",
"CreationDate": "2019-03-12T10:30:39.020",
"ParentId": null,
"OwnerUserId": "3778658",
"Title": null,
"Body": "<p>One solution is to calculate the loss function between train expectations and predictions from train input. In the case of loss = mean_squared_error and three dimensional outputs (i.e. image width x height x channels): </p>\n\n<pre><code>model.fit(train_in,train_out,...)\n\npred = model.predict(train_in)\nloss = np.add.reduce(np.square(test_out-pred),axis=(1,2,3)) # this computes the total squared error for each sample\nloss = loss / ( pred.shape[1]*pred.shape[2]*pred.shape[3]) # this computes the mean over the sample entry \n\nnp.savetxt(\"loss.txt\",loss) # This line saves the data to file\n</code></pre>\n"
},
{
"AnswerId": "48118275",
"CreationDate": "2018-01-05T17:15:18.883",
"ParentId": null,
"OwnerUserId": "3846213",
"Title": null,
"Body": "<p>If you want to get loss values for each batch, you might want to use call <code>model.train_on_batch</code> inside a generator. It's hard to provide a complete example without knowing your dataset, but you will have to break your dataset into batches and feed them one by one</p>\n\n<pre><code>def make_batches(...):\n ...\n\nbatches = make_batches(...)\nbatch_losses = [model.train_on_batch(x, y) for x, y in batches]\n</code></pre>\n\n<p>It's a bit more complicated with single instances. You can, of course, train on 1-sized batches, though it will most likely thrash your optimiser (by maximising gradient variance) and significantly degrade performance. Besides, since loss functions are evaluated outside of Python's domain, there is no direct way to hijack the computation without tinkering with C/C++ and CUDA sources. Even then, the backend itself evaluates the loss batch-wise (benefitting from highly vectorised matrix-operations), therefore you will severely degrade performance by forcing it to evaluate loss on each instance. Simply put, hacking the backend will only (probably) help you reduce GPU memory transfers (as compared to training on 1-sized batches from the Python interface). If you really want to get per-instance scores, I would recommend you to train on batches and evaluate on instances (this way you will avoid issues with high variance and reduce expensive gradient computations, since gradients are only estimated during training):</p>\n\n<pre><code>def make_batches(batchsize, x, y):\n ...\n\n\nbatchsize = n\nbatches = make_batches(n, ...)\nbatch_instances = [make_batches(1, x, y) for x, y in batches]\nlosses = [\n (model.train_on_batch(x, y), [model.test_on_batch(*inst) for inst in instances]) \n for batch, instances in zip(batches, batch_instances)\n]\n</code></pre>\n"
}
] |
48,118,640 | 1 |
<python-3.x><pandas><tensorflow><object-detection><object-detection-api>
|
2018-01-05T17:39:51.940
| 48,128,974 | 4,663,377 |
creating TFrecord file for multiple objects
|
<p>I am following the raccoon detection tutorial from GitHub (<a href="https://github.com/datitran/raccoon_dataset" rel="nofollow noreferrer">https://github.com/datitran/raccoon_dataset</a>) to detect animals using the Google object detection API. For this, I need to generate a tfrecord file, which is already generated here on lines 29 to 34 (<a href="https://github.com/datitran/raccoon_dataset/blob/master/generate_tfrecord.py" rel="nofollow noreferrer">https://github.com/datitran/raccoon_dataset/blob/master/generate_tfrecord.py</a>).</p>
<p>But he has written the code for only one animal (raccoon), lines 29 to 34. I have multiple animals, like a raccoon, praying mantis, hermit crab etc. How do I modify this tfrecord file for multiple animals? One way I found is making changes to lines 29 to 34 in the generate_tfrecord file as follows:</p>
<pre></pre>
<p>Is this approach correct, including multiple ifs in the same file, or do I need to generate multiple tfrecord files to train multiple objects?</p>
|
[
{
"AnswerId": "48128974",
"CreationDate": "2018-01-06T15:45:09.333",
"ParentId": null,
"OwnerUserId": "4663377",
"Title": null,
"Body": "<pre><code>def class_text_to_int(row_label):\n if row_label == 'raccoon':\n return 1\n elif row_label == 'prayingmantis':\n return 2\n else:\n None\n</code></pre>\n\n<p>Making the above change in generatetfrecord is a correct approach to train multiple custom images simultaniously. Also, in object detection.pbtxt file, make the following changes: </p>\n\n<pre><code>item{\n id:1\n name:'racoon'\n }\nitem{\n id:2\n name: 'prayingmantis'\n }\n</code></pre>\n\n<p>And retrain the model from the first step.</p>\n"
}
] |
48,118,747 | 1 |
<java><python><tensorflow>
|
2018-01-05T17:47:31.657
| 48,123,888 | 9,178,367 |
Why does a Java Tensorflow session seem to reset state when a Python Tensorflow session does not?
|
<p>I am trying to build and evaluate TensorFlow Graphs via the 1.4 Java API, on Linux. I have noticed that the Java API seems to reset the value of operation output tensors each time a call to Session.run() is made. This behavior does not seem to match what happens in Python. My eventual question (see below for details) is how to avoid this apparent behavior?</p>
<h1>Python Example</h1>
<p>By way of example here is Python code (also using the 1.4 API) that increments the value in a Scalar Tensor.</p>
<pre></pre>
<p>Notice that as expected, evaluating x gives its current value, and using the session to run the xUpdateOp causes x to get larger by 1.</p>
<h1>Java Example</h1>
<p>This is my attempt to use Java to build a Tensorflow graph that increments a scalar Tensor. Initialization is different in the Java API because it lacks some of Python's convenience methods.</p>
<pre></pre>
<p>The output of the above code snippet:</p>
<pre></pre>
<p>But I expected it to be 4.0 because I called run() on xUpdateOp 4 times. Even if I am off by one, 1.0 is not what I expected.</p>
<h1>Question</h1>
<p>What do I need to do with this Java example to get the same behavior as the Python example? How do I get the xUpdateOp to use the value of x calculated in a previous call to run()?</p>
<h2>What I have already tried</h2>
<p>I have already tried to use the feed() function to feed in an x value</p>
<pre></pre>
<p>Result</p>
<pre></pre>
<p>I have also tried to call run() without an addTarget or a fetch(), thinking that the addTarget or fetch() is what is causing the state to be reset. Perhaps once a session understands what to run, it can run it several times.</p>
<pre></pre>
<p>Result</p>
<pre></pre>
<h2>Somewhat related questions</h2>
<p><a href="https://stackoverflow.com/questions/42813989/how-to-create-initialize-a-variable-with-tensorflow-1-0-java-api">How to create/initialize a Variable with Tensorflow 1.0 Java API</a></p>
<p><a href="https://stackoverflow.com/questions/46565238/java-tensorflow-reset-default-graph">java tensorflow reset_default_graph</a></p>
<p><a href="https://stackoverflow.com/questions/43605690/java-train-loaded-tensorflow-model">Java - train loaded tensorflow model</a></p>
<p>Thanks in advance for your time!</p>
|
[
{
"AnswerId": "48123888",
"CreationDate": "2018-01-06T03:04:51.760",
"ParentId": null,
"OwnerUserId": "6708503",
"Title": null,
"Body": "<p>In your sample, <code>xUpdateOp</code> has <code>x</code> as its input, and <code>x</code> is the output of the operation that assigns <code>zero</code> to the variable. Thus, every time <code>xUpdateOp</code> is run, it is first assigning zero to the variable.</p>\n\n<p>A slight tweak to your code will result in 4.0:</p>\n\n<pre><code># Changed addInput(x) to addInput(xVar)\nOperation xUpdateOp =\n g.opBuilder(\"AssignAdd\", \"x_get_x_plus_step\").addInput(xVar).addInput(step).build();\n\ntry (Session s = new Session(g)) {\n # Initialize the variable once\n s.runner().addTarget(x.op()).run();\n s.runner().addTarget(xUpdateOp).run();\n s.runner().addTarget(xUpdateOp).run();\n s.runner().addTarget(xUpdateOp).run();\n\n try (Tensor<Float> result =\n s.runner().fetch(xUpdateOp.name(), 0).run().get(0).expect(Float.class)) {\n System.out.println(result.floatValue());\n } \n}\n</code></pre>\n\n<p>Drawing a parallel with the Python code: The Java code snippet above is more like the Python code in the question. While the Java code in the question is more like the following in Python:</p>\n\n<pre><code>import tensorflow as tf\n\nzero = tf.constant(0.0)\nstep = tf.constant(1.0)\nxVar = tf.Variable(initial_value=zero, name=\"x\")\nx = tf.assign(xVar, zero)\nxUpdateOp = tf.assign_add(x, step)\n</code></pre>\n\n<p>So <code>tf.assign_add(x, step)</code> vs <code>tf.assign_add(xVar, step)</code> would make all the difference. In the former, the <code>AssignAdd</code> operation applies on the output of the <code>Assign</code> operation.</p>\n\n<p>Hope that helps.</p>\n"
}
] |
48,119,449 | 1 |
<python><tensorflow>
|
2018-01-05T18:38:59.277
| 48,119,933 | 5,714,432 |
How to get a tensorflow variable under certain namescope?
|
<p>Suppose we want to fetch the value of a tensorflow variable; we can just run it under a session.</p>
<p>Suppose </p>
<p>Then its value can be fetched using </p>
<p>But if there are two variables with same name but different name scope, how do I fetch the value of individual variables?</p>
<pre></pre>
<p>Then how do I get the values of under and under respectively?
If I do , I am getting the value under the name_scope (the recent one).</p>
|
[
{
"AnswerId": "48119933",
"CreationDate": "2018-01-05T19:18:13.690",
"ParentId": null,
"OwnerUserId": "6689249",
"Title": null,
"Body": "<p>You can checkout names of vars and get them by scope/names:</p>\n\n<pre><code>with tf.variable_scope(\"x\"):\n a = tf.get_variable('a', initializer=1)\n\nwith tf.variable_scope(\"y\"):\n a = tf.get_variable('a', initializer=2)\n\nwith tf.Session() as s:\n s.run(tf.global_variables_initializer())\n [print(var.op.name) for var in tf.global_variables()]\n res = s.run(['x/a:0', 'y/a:0'])\n print(res)\n</code></pre>\n\n<p>returns:</p>\n\n<pre><code>x/a\ny/a\n[1, 2]\n</code></pre>\n"
}
] |
48,119,473 | 1 |
<python><numpy><tensorflow>
|
2018-01-05T18:40:28.717
| 48,121,074 | 1,692,060 |
Gram-Schmidt orthogonalization in pure Tensorflow: performance for iterative solution is much slower than numpy
|
<p>I want to do Gram-Schmidt orthogonalization in pure Tensorflow, to fix big matrices which start to deviate slightly from orthogonality (doing it on the graph within a larger computation, without breaking it). The solutions I've seen, <a href="https://github.com/EigenPro/EigenPro-tensorflow/blob/master/utils.py" rel="nofollow noreferrer">like the one there</a>, are used "externally" (doing multiple inside). </p>
<p>So I wrote a simple and I think very inefficient implementation myself:</p>
<pre></pre>
<p>But when I compare it with the same iterative external code, it is 3 times slower (on a GPU!!!), though it has a bit better precision:</p>
<pre></pre>
<p>(UPD 4: I had a small mistake in my example, but it didn't change timings at all, as is a lightweight function):</p>
<p>Minimal example:</p>
<pre></pre>
<p>Is there a way to speed it up? I couldn't figure out how to do it for G-S which requires appending to the basis (so no parallelization can help).</p>
<p>UPD: I have achieved a 2x difference by optimizing :</p>
<pre></pre>
<p>EDIT2:</p>
<p>Just for fun, I tried to fully mimic the numpy solution, and got extremely long working code:</p>
<pre></pre>
<p>(which seems to overfill GPU memory as well):</p>
<pre></pre>
<p>UPD3: My GPU is a GTX 1050; it usually gives a 5-7x speedup in comparison to my CPU. So the result is very strange to me. </p>
<p>UPD5: OK, I found that the GPU is almost not used for this code, while training a neural network with manually written backpropagation, which uses a lot of 's and other matrix arithmetic, fully exploits it. Why is that?</p>
<hr>
<p>UPD 6:</p>
<p>Following the given suggestion I have measured the time in a new way:</p>
<pre></pre>
<p>Now I can see 4x speedup:</p>
<pre></pre>
|
[
{
"AnswerId": "48121074",
"CreationDate": "2018-01-05T20:50:01.533",
"ParentId": null,
"OwnerUserId": "8858032",
"Title": null,
"Body": "<p>TensorFlow appears slow because your benchmark is measuring both the time that it construct the graph and the time it takes to execute it; a fairer comparison between TensorFlow and NumPy would exclude graph construction from the benchmark. In particular, your benchmark should probably look something like this:</p>\n\n<pre><code>print(\"tensorflow version:\")\n# This line constructs the graph but does not execute it.\northogonalized = ort_discrepancy(tf_gram_schmidt(tf_nearly_orthogonal))\n\nstart = time.time()\ntf_result = sess.run(orthogonalized)\nend = time.time()\n</code></pre>\n"
}
] |
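<p>For readers without the question's elided code, the iterative NumPy baseline being compared against might look something like this sketch (classical Gram-Schmidt over the rows):</p>
<pre><code>import numpy as np

def np_gram_schmidt(vectors):
    """Orthonormalize the rows of `vectors`, one row at a time."""
    basis = []
    for v in vectors:
        # subtract the projections onto the basis built so far
        w = v - np.sum([np.dot(v, b) * b for b in basis], axis=0) if basis else v
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)
</code></pre>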
48,119,488 | 1 |
<python><tensorflow>
|
2018-01-05T18:41:53.073
| 48,141,720 | 2,886,575 |
tensorflow running one batch at a time
|
<p>I am loading in my data via a shuffle_batch input pipeline. However, when I go to do training, I would like to train for a bit, then do some things in Python, then continue training. However, I'm not sure how to get control back from the reader and the filename queue. It just keeps reading and reading...</p>
<p><strong>EDIT:</strong> I realize that this is the "old way" to import data. However, I do not immediately see a way to remedy this with the "new way" <a href="https://www.tensorflow.org/versions/master/api_docs/python/tf/data/FixedLengthRecordDataset" rel="nofollow noreferrer">https://www.tensorflow.org/versions/master/api_docs/python/tf/data/FixedLengthRecordDataset</a></p>
<p>How do I feed just 50 cifar records through my training pipeline and then recover control in my jupyter notebook?</p>
|
[
{
"AnswerId": "48141720",
"CreationDate": "2018-01-07T21:28:46.057",
"ParentId": null,
"OwnerUserId": "8676953",
"Title": null,
"Body": "<p>Based on you using <a href=\"https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/\" rel=\"nofollow noreferrer\">https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/</a> - the actual ltraining happens when it is <a href=\"https://github.com/tensorflow/models/blob/0785f0c037806584b000fe7b2b2ae8888980f3ed/tutorials/image/cifar10/cifar10_train.py#L115\" rel=\"nofollow noreferrer\">executing the train_op</a>. You should be able to put your logic there. e.g.:</p>\n\n<pre><code>while not mon_sess.should_stop():\n mon_sess.run(train_op)\n if mon_sess.run(global_step) % 10 == 0:\n # do something special\n</code></pre>\n\n<p>Otherwise it also supports a <a href=\"https://github.com/tensorflow/models/blob/0785f0c037806584b000fe7b2b2ae8888980f3ed/tutorials/image/cifar10/cifar10_train.py#L51\" rel=\"nofollow noreferrer\">max_steps</a> parameter which would probably be similar to you trying to limit the input. But wouldn't be so useful if you then want to continue training.</p>\n"
}
] |
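<p>A sketch of the "new way" with tf.data, which hands control back to the notebook after every sess.run; the toy in-memory dataset stands in for the CIFAR reader, so the shapes and names are assumptions:</p>
<pre><code>import tensorflow as tf

# stand-in for tf.data.FixedLengthRecordDataset(filenames, record_bytes)
dataset = tf.data.Dataset.from_tensor_slices(tf.random_uniform([500, 32]))
dataset = dataset.shuffle(500).batch(10)

iterator = dataset.make_initializable_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer)
    for _ in range(5):                # 5 batches of 10 = 50 records, then stop
        batch = sess.run(next_batch)  # control returns to the notebook here
        # a training step using `batch` would go here
</code></pre>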
48,119,520 | 1 |
<tensorflow>
|
2018-01-05T18:44:19.970
| 48,120,785 | 3,924,118 |
Use cases of the argument feed_dict of the method Session.run
|
<p>In <a href="https://www.tensorflow.org/api_docs/python/tf/Session#run" rel="nofollow noreferrer">TensorFlow's documentation</a> regarding the <code>feed_dict</code> argument of the <code>run</code> method, we have</p>
<blockquote>
<p>The optional <code>feed_dict</code> argument allows the caller to override the value of tensors in the graph.</p>
</blockquote>
<p>or</p>
<blockquote>
<p><code>feed_dict</code>: A dictionary that maps <em>graph elements</em> to values (described above).</p>
</blockquote>
<p>Which graph elements? All of them?</p>
<p>I understood I can use <code>feed_dict</code> to feed placeholders, but is there any other use case? If not, why not explicitly emphasize the fact that <code>feed_dict</code> is used only to feed placeholders? If yes, which ones? I would appreciate examples.</p>
|
[
{
"AnswerId": "48120785",
"CreationDate": "2018-01-05T20:24:30.303",
"ParentId": null,
"OwnerUserId": "2130551",
"Title": null,
"Body": "<p><code>feed_dict</code> can be used to feed <em>any</em> tensors in the graph. In practice it is convenient to make tensors that have to be fed, <code>Placeholder</code> nodes, since an error will be thrown if they aren't fed. But, say you are debugging a graph, you can feed add fetch any intermediate tensors in the graph.</p>\n\n<p>Here is an example:</p>\n\n<pre><code>import tensorflow as tf\n\nwith tf.Session() as sess:\n a = tf.constant(1, name=\"a\")\n b = tf.constant(2, name=\"b\")\n c = tf.add(a, b, name=\"c\")\n\n # prints 3\n print(sess.run(c)) \n\n # prints 4 since we have fed a new value for a, for just this run.\n print(sess.run(c, feed_dict={a:2})) \n</code></pre>\n\n<p>Hope that helps!</p>\n"
}
] |
48,119,910 | 2 |
<python><neural-network><deep-learning><pytorch>
|
2018-01-05T19:16:07.330
| null | 2,191,652 |
Pytorch: randomly subsample a vector
|
<p>I need to randomly subsample a vector in pytorch.
The equivalent in Matlab would be something like </p>
<pre></pre>
<p>Are there similar functions for pytorch?</p>
<p>I'm trying to randomly subsample my prediction and target vector for computing the loss.</p>
|
[
{
"AnswerId": "57013154",
"CreationDate": "2019-07-12T19:39:37.200",
"ParentId": null,
"OwnerUserId": "3990607",
"Title": null,
"Body": "<p>Or you could simply do:</p>\n\n<pre><code>sample_size = 5\na = torch.randn(10)\nb = torch.randperm(10)\na_sample = a[b[0:sample_size]]\n</code></pre>\n\n<p>that is to sample without replacement like in your question.</p>\n\n<p>Or if you want to sample with replacement:</p>\n\n<pre><code>sample_size = 5\na = torch.randn(10)\nb = torch.randint(0, 10, size=(sample_size,))\na_sample = a[b]\n</code></pre>\n"
},
{
"AnswerId": "48120035",
"CreationDate": "2018-01-05T19:26:02.277",
"ParentId": null,
"OwnerUserId": "2191652",
"Title": null,
"Body": "<p>I think I already found something useful:</p>\n\n<pre><code>sample_size = 5\na = torch.randn(10)\nb = torch.randperm(10)\na = a.index_select(0,b)\na = a[0:sample_size] \n</code></pre>\n"
}
] |
48,121,231 | 0 |
<tensorflow><gpu><cluster-computing><distributed>
|
2018-01-05T21:02:28.690
| null | 2,815,551 |
Distributed evaluation of Inception in tensorflow
|
<p>I know <a href="https://www.tensorflow.org/deploy/distributed" rel="nofollow noreferrer">Distributed Tensorflow</a> allows one to train on a cluster in a distributed setting. Is there a chance to do the evaluation on a cluster as well? Has anyone accomplished that? Sharing your code here would be nice for all readers as well.</p>
|
[] |
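<p>As a starting point, the distributed-TensorFlow primitives from the linked guide look like this sketch; the host names, task index and device placement are assumptions:</p>
<pre><code>import tensorflow as tf

cluster = tf.train.ClusterSpec({"worker": ["host0:2222", "host1:2222"]})
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# pin the ops that evaluate one shard of the data to one worker
with tf.device("/job:worker/task:0"):
    logits = tf.constant([0.1, 0.9])  # stand-in for the Inception forward pass

with tf.Session(server.target) as sess:
    print(sess.run(logits))
</code></pre>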
48,121,702 | 1 |
<tensorflow><quantization>
|
2018-01-05T21:43:50.440
| null | null |
Float ops found in quantized TensorFlow MobileNet model
|
<p><a href="https://i.stack.imgur.com/jualG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jualG.png" alt="loat ops found in quantized TensorFlow MobileNet model "></a></p>
<p>As you can see in the screenshot of a quantized MobileNet model implemented in TensorFlow, there are still some float operations. The quantization is done in TensorFlow via the graph_transform tools. </p>
<p>The red ellipse in the image has its description in the right-hand-size text box. The "depthwise" is a "DepthwiseConv2dNative" operation that expects "DT_FLOAT" inputs.</p>
<p>Although the lower Relu6 performs an 8-bit quantized operation, the result has to go through "(Relu6)", which is a "Dequantize" op, in order to produce "DT_FLOAT" inputs for the depthwise convolution. </p>
<p>Why are depthwise conv operations left out by the TF graph_transform tools? Thank you.</p>
|
[
{
"AnswerId": "48154816",
"CreationDate": "2018-01-08T17:02:55.890",
"ParentId": null,
"OwnerUserId": "5708323",
"Title": null,
"Body": "<p>Unfortunately there isn't a quantized version of depthwise conv in standard TensorFlow, so it falls back to the float implementation with conversions before and after. For a full eight-bit implementation of MobileNet, you'll need to look at TensorFlow Lite, which you can learn more about here:</p>\n\n<p><a href=\"https://www.tensorflow.org/mobile/tflite/\" rel=\"nofollow noreferrer\">https://www.tensorflow.org/mobile/tflite/</a></p>\n"
}
] |
48,121,763 | 1 |
<python><scikit-learn><deep-learning><keras><lstm>
|
2018-01-05T21:50:38.597
| 48,128,506 | 8,893,169 |
python RNN LSTM error
|
<p>This is a recurrent neural network LSTM model meant to predict the future values of forex market movement.</p>
<p>The data set shape is (1713, 50); the first column is the Date time index and the others are numeric values,
but right after printing the Training data and Validation data shapes the error starts.</p>
<p>When I tried to implement this code:</p>
<pre></pre>
<p>I got this error:</p>
<p>Using TensorFlow backend.</p>
<pre>Training data: (891, 50)
Validation data: (178, 822, 50)

Traceback (most recent call last):
  File "E:/Tutorial/new.py", line 31, in &lt;module&gt;
    train_samples, train_nx, train_ny = train.shape
ValueError: not enough values to unpack (expected 3, got 2)</pre>
|
[
{
"AnswerId": "48128506",
"CreationDate": "2018-01-06T14:48:39.197",
"ParentId": null,
"OwnerUserId": "4132383",
"Title": null,
"Body": "<p>There's an error in this line:</p>\n\n<pre><code>train = data_processed[:, int(val_split), :]\n</code></pre>\n\n<p>It should be:</p>\n\n<pre><code>train = data_processed[:int(val_split), :, :]\nval = data_processed[int(val_split):, :, :]\n</code></pre>\n"
}
] |
48,121,895 | 4 |
<python><tensorflow><machine-learning><neural-network><conv-neural-network>
|
2018-01-05T22:01:25.683
| null | 4,400,414 |
Poor accuracy in Tensorflow CNN implementation
|
<p>I'm trying to implement a 5-layer deep convolutional neural network in Tensorflow, with 3 convolutional layers followed by 2 fully connected layers. My current implementation is below. </p>
<pre class="lang-py prettyprint-override"></pre>
<p>For some unknown reason, the model doesn't seem to improve its accuracy above 10%. I've been banging my head against the wall trying to figure out why. I'm using a softmax loss cost function (as described <a href="http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1SoftmaxWithLossLayer.html#details" rel="nofollow noreferrer">here</a>) and momentum optimiser. The dataset used is the <a href="http://benchmark.ini.rub.de/?section=gtsrb&subsection=news" rel="nofollow noreferrer">GTSRB dataset</a>.</p>
<p>While I can add various deep learning features (such as adaptive learning rates etc) to improve the accuracy, I am suspicious as to why the basic CNN model is performing so poorly. </p>
<p>Is there anything obvious that could explain why it's not learning as expected? Alternatively, is there anything I could try to help diagnose the problem? </p>
<p>Any help would be much appreciated! </p>
|
[
{
"AnswerId": "48126837",
"CreationDate": "2018-01-06T11:25:03.803",
"ParentId": null,
"OwnerUserId": "9175058",
"Title": null,
"Body": "<p>There are a few points that should help:</p>\n\n<ol>\n<li>As mentioned in another answer, the loss function is incorrect; use <code>tf.nn.softmax_cross_entropy_with_logits</code>.</li>\n<li>It is good practice, particularly when getting started with deep learning / tensorflow, to start with a simpler model. You haven't told us how many classes you have, but let's assume you have 10 classes. Just about <em>any</em> simple model should do better than 10%, so this indicates something fundamentally wrong. The wrong thing is to elaborate your model further; the right thing is to simplify to logistic regression (which is just a single matrix multiply and then a softmax layer) and check performance. That way you can separate the network architecture from the optimization and loss function (partially anyway). Then build complexity from there.</li>\n<li>Your data: you haven't described the data, and as much as we love the power of neural networks (we do!), understanding and thoughtfully preprocessing the data matters. For example, the famous SVHN dataset (google street view house numbers) is often found to be much easier to classify when some preprocessing is done on the color channels. If you read the fine print of many computer vision papers, there is similar data preprocessing. Perhaps this is not the case here, but simplifying your network to understand the data better (item above) should help.</li>\n<li>Finally, this isn't likely causing your issue, but why are you using <code>tf.pad</code> as you are? You might find things easier to use <code>padding=SAME</code> instead of <code>padding=VALID</code>, making those <code>tf.pad</code> calls unnecessary.</li>\n<li>After all that, use <code>tensorboard</code> to help analyze performance and how you might improve things. It's worth the trouble of learning it: <a href=\"https://www.tensorflow.org/get_started/summaries_and_tensorboard\" rel=\"nofollow noreferrer\">https://www.tensorflow.org/get_started/summaries_and_tensorboard</a> .</li>\n</ol>\n"
},
{
"AnswerId": "48584578",
"CreationDate": "2018-02-02T14:16:18.653",
"ParentId": null,
"OwnerUserId": "2220465",
"Title": null,
"Body": "<p>I think that your model is a little bit simple.<br>\nWhen I tried your model with more parameters like below, \ntest accuracy was 86%.</p>\n\n<blockquote>\n <p>W_conv2 = weight_variable([5, 5, 32, 64]) # feature maps 32=>64<br>\n b_conv2 = bias_variable([64])<br>\n W_conv3 = weight_variable([5, 5, 64, 128]) # feature maps 64=>128<br>\n b_conv3 = bias_variable([128])<br>\n W_fc1 = weight_variable([4*4*128,2048]) # feature maps 64=>2048<br>\n b_fc1 = bias_variable([2048]) </p>\n</blockquote>\n\n<p>This design of conv layers is inspired by VGG-16 network. In VGG-16 network, the number of feature maps are doubled through every stack of conv layers. \nThe number of feature maps depend on task, but I think this design principle is useful for the traffic sign recognition task. </p>\n\n<p>If you are interested in my experiment, please refer to my github repo.\n<a href=\"https://github.com/satojkovic/DeepTrafficSign/tree/sof_test\" rel=\"nofollow noreferrer\">https://github.com/satojkovic/DeepTrafficSign/tree/sof_test</a></p>\n"
},
{
"AnswerId": "48585105",
"CreationDate": "2018-02-02T14:45:48.240",
"ParentId": null,
"OwnerUserId": "7856948",
"Title": null,
"Body": "<p>It's preferable to use :</p>\n\n<pre><code>with tf.variable_scope('Conv_1'):\n W_conv1 = weight_variable([3,3, FLAGS.img_channels, 32])\n W_conv1_2 = weight_variable([3,3, 32, 32])\n</code></pre>\n\n<p>rather than :</p>\n\n<pre><code>with tf.variable_scope('Conv_1'):\n W_conv1 = weight_variable([5, 5, FLAGS.img_channels, 32])\n</code></pre>\n\n<p>your network loses less finite information.</p>\n\n<p>Prefer more orthodox parameters like </p>\n\n<pre><code>output = tf.nn.max_pool(input, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID', name=identifier)\n</code></pre>\n\n<p>over :</p>\n\n<pre><code>output = tf.nn.max_pool(x, ksize=[1, 3, 3, 1],\n strides=[1, 2, 2, 1], padding='VALID', name='pooling2')\n</code></pre>\n\n<p>This will also prevent you from padding with a constant.\nSide Note : I think you should pad with something different from zero, i would assume that it adds noise...\nand last tip, i think your learning rate is way too high, start with something more like 1e-3,1e-4</p>\n\n<p>Use AdamOptimizer, it works wonders... It basically has the second order of magnitude while viewing the error space which gives it an advantage over basic MomentumOptimizer.</p>\n\n<p>Good Luck to you</p>\n"
},
{
"AnswerId": "48122801",
"CreationDate": "2018-01-05T23:39:56.540",
"ParentId": null,
"OwnerUserId": "712995",
"Title": null,
"Body": "<blockquote>\n <p>I'm using a softmax loss cost function and momentum optimiser.</p>\n</blockquote>\n\n<p>I believe at least one of the problems is with the loss. This expression <strong>is not</strong> the cross-entropy loss:</p>\n\n<pre><code># WRONG!\ntf.reduce_mean(tf.negative(tf.log(tf.reduce_sum(tf.multiply(y_conv,y_),1)))\n</code></pre>\n\n<p>Take a look at the correct formula in <a href=\"https://stackoverflow.com/q/46291253/712995\">this question</a>. Anyway, you should simply use <a href=\"https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits\" rel=\"nofollow noreferrer\"><code>tf.nn.softmax_cross_entropy_with_logits</code></a> (and drop the softmax from <code>y_conv</code>, as the loss function applies softmax itself).</p>\n\n<p>PS. CNN architecture looks ok to me, should get to 60%-70% with right hyper-parameters.</p>\n"
}
] |
48,122,072 | 0 |
<tensorflow><multi-gpu>
|
2018-01-05T22:17:37.940
| null | 1,779,853 |
How to profile distributed TensorFlow?
|
<p>I have a simple distributed setup in TF (each numbered step below roughly corresponds to one <code>run</code> call):</p>
<ol>
<li>The parent makes changes to variables on its device via a session whose target is the sole parent task.</li>
<li>Several child processes copy the parent variables to local variables sitting on their own devices.</li>
<li>Each child process then does a bunch of calculations with its own local variables on its own TF device.</li>
</ol>
<p>The children all have their own master sessions, connected to their personal child task targets.</p>
<p>My setup is either a single GPU or multiple (2) GPUs that are sharded among the parent and children. There are many more children than total GPUs (the memory and compute density are small enough that a GPU cannot be fully utilized by a single child task).</p>
<p>Even with many children, I still find that GPU utilization is low in both the single- and multiple-GPU cases. I suspect the culprit is the variable-copy operation. These are local GPUs, so it should be possible to execute the copy via DMA, a vanilla GPU copy (if on the same GPU), or NCCL, but I worry that by default TF will transfer the data over local gRPC sockets.</p>
<p>I have several questions:</p>
<ol>
<li>Is my understanding of the data transfers correct?</li>
<li>The TF <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/profiler" rel="nofollow noreferrer">profiler</a> seems directed at single-process workflows, measuring when ops get executed, etc., within a single session. How can I combine logs from multiple remote sessions? Will tracing capture the gRPC latency? (See the tracing sketch after this list.)</li>
<li>If I were to run all ops on the parent session, issuing every copy for each child index, would this avoid the gRPC call?</li>
</ol>
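<p>For question 2, the only per-session tracing I know of is the Chrome-timeline route; below is a self-contained sketch of it (whether its step_stats include the gRPC transfer time for remote sessions is exactly what I am unsure about):</p>
<pre><code>import tensorflow as tf
from tensorflow.python.client import timeline

a = tf.random_normal([1000, 1000])
op = tf.matmul(a, a)

with tf.Session() as sess:
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
    sess.run(op, options=run_options, run_metadata=run_metadata)
    # dump a Chrome-trace JSON for this one run call
    tl = timeline.Timeline(run_metadata.step_stats)
    with open('timeline.json', 'w') as f:
        f.write(tl.generate_chrome_trace_format())
</code></pre>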
|
[] |
48,123,840 | 1 |
<python><tensorflow><python-3.6><cx-freeze>
|
2018-01-06T02:52:56.753
| 49,086,879 | 9,179,817 |
cx_Freeze not finding some TensorFlow imports
|
<p>I recently wrote a library (in Python 3.6) and built a GUI for it using tkinter on Windows 10. The GUI is now finished, and I'm trying to freeze it using cx_Freeze. </p>
<p>The setup script runs perfectly fine (or at least I couldn't spot any error message or warning) and I can get my executable out of it. The problem is, when I run it, I get the following error message:</p>
<pre></pre>
<p>The reason why TensorFlow is mentioned here is that my library uses TensorFlow, and of course, so does my GUI. What the entire error message says is that when I import tensorflow, it pulls in a submodule whose <code>__init__</code> then attempts the import that causes the error. </p>
<p>I found the file that causes the error, and when I run the same import in IDLE, it works perfectly fine. </p>
<p>Before reaching that point, I encountered several similar problems (the cx_Freeze build displayed no warning or error, but the .exe raised errors), but I could so far fix them all by myself, mostly by adding the offending modules to the list of includes in the setup script. I tried to do the same for this TensorFlow file, but it didn't work. I also tried including TensorFlow as a package in the setup script, or directly importing all of it in my main script, without success.</p>
<p>My setup script is the following (there might be some unnecessary includes in it, since I tried lots of things):</p>
<pre></pre>
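<p>Since the exact script is long, here is a sketch of its general shape (all names and paths are placeholders, not my real values):</p>
<pre><code>from cx_Freeze import setup, Executable

# placeholder options; my real includes/packages lists are longer
build_exe_options = {
    "packages": ["numpy", "tensorflow"],
    "includes": ["tkinter"],
}

setup(name="my_gui",
      version="0.1",
      description="GUI for my library",
      options={"build_exe": build_exe_options},
      executables=[Executable("gui_main.py")])
</code></pre>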
<p>I tried rebuilding TensorFlow and its dependencies from scratch, but it didn't solve anything either. </p>
<p>Thanks in advance!</p>
|
[
{
"AnswerId": "49086879",
"CreationDate": "2018-03-03T17:11:36.267",
"ParentId": null,
"OwnerUserId": "5651606",
"Title": null,
"Body": "<p>I was able to resolve this problem by creating a blank <code>__init__.py</code> file in <code>\\path\\to\\python\\Lib\\site-packages\\tensorflow\\core\\profiler</code>. I am running python 3.5.2 and TensorFlow 1.5.0 so this solution may be specific to my installations. </p>\n"
}
] |
48,123,877 | 1 |
<python><tensorflow><binary><type-conversion><integer>
|
2018-01-06T03:02:00.637
| 48,125,687 | 2,187,510 |
Print an integer tensor in binary
|
<p>I have a tensor of type <code>int32</code>. I would like to use <code>tf.Print</code>, but I need the result to be printed in binary. Is this even possible? It is for debugging.</p>
<p>Example:</p>
<pre></pre>
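<p>Concretely, the behaviour I want is this (a sketch):</p>
<pre><code>import tensorflow as tf

x = tf.constant(5)  # dtype int32
# tf.Print(x, [x]) logs 5; I would like it to log 101 (the binary form) instead
</code></pre>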
|
[
{
"AnswerId": "48125687",
"CreationDate": "2018-01-06T08:43:27.067",
"ParentId": null,
"OwnerUserId": "712995",
"Title": null,
"Body": "<p>You can use <a href=\"https://www.tensorflow.org/api_docs/python/tf/py_function\" rel=\"nofollow noreferrer\"><code>tf.py_function</code></a>:</p>\n\n\n\n<pre class=\"lang-py prettyprint-override\"><code>x = tf.placeholder(tf.int32)\nbin_op = tf.py_function(lambda dec: bin(int(dec))[2:], [x], tf.string)\n\nbin_op.eval(feed_dict={x: 5}) # '101'\n</code></pre>\n\n<p>But note that <code>tf.py_function</code> creates a node in the graph. So if you want to print many tensors, you <em>can</em> wrap them with <code>tf.py_function</code> before <code>tf.Print</code>, but doing this in a loop may cause bloating.</p>\n"
}
] |
48,124,761 | 0 |
<keras><theano><conv-neural-network><backpropagation>
|
2018-01-06T06:00:46.007
| null | null |
Learning method in keras?
|
<p>Could someone explain what methods Keras (Theano backend) uses when training (the <code>fit</code> function) convolutional neural networks, if the activation function is ReLU? It seems the backpropagation method cannot be used, because the ReLU activation function is not differentiable at 0.</p>
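<p>To illustrate what I mean about the point at 0, here is a quick TensorFlow check (a sketch; I am on the Theano backend, and conventions may vary by framework):</p>
<pre><code>import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
grad = tf.gradients(tf.nn.relu(x), x)[0]
with tf.Session() as sess:
    print(sess.run(grad))  # [0. 0. 1.]: a one-sided derivative is used at 0
</code></pre>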
|
[] |
48,124,900 | 0 |
<machine-learning><computer-vision><deep-learning><keras>
|
2018-01-06T06:32:06.670
| null | 9,180,292 |
Why does Keras model.predict() result in different probabilities based on the size of testing data?
|
<p>I'm relatively new to Keras and image classification in general and I'm running into an issue that I can't seem to find much information on. </p>
<p>So the gist of it is that I've written a slightly modified version of the resnet50 architecture and am testing it on my own training dataset of 5000 images. The training and testing data have been split 85% training and 15% testing. The resulting training accuracy is 94.26% and the testing accuracy is 89.02% after 50 epochs. I wanted to verify that model.evaluate was working properly, so I ran model.predict on the testing data and it produced matching predictions. However, when predicting on just the first 3 images in the test data, it produces different probability results than before.</p>
<p>For example, when predicting on the full test data, I end up getting these predictions for the first 3 images:</p>
<pre></pre>
<p>So, as you can see, the very first image, for example, would be of class 2 with a 99% predicted probability.</p>
<p>However, when I run the prediction on only the first 3 images, I end up getting different predictions for those same 3 images:</p>
<pre></pre>
<p>So, as you can see, these probabilities are very different. The first image is still predicted as class 2, but in other cases the model predicts different classes that are flat-out wrong, yet are correct when predicting on the entirety of the testing data. Similarly, if I predict on a slice of a different size, the results change again.</p>
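<p>To make the comparison concrete, this is the pattern I mean (a sketch; <code>model</code> and <code>x_test</code> stand for my trained model and test set):</p>
<pre><code>preds_full = model.predict(x_test)      # predict on the entire test set
preds_head = model.predict(x_test[:3])  # predict on only the first 3 images

# I would expect these to match row for row, but they do not:
print(preds_full[:3])
print(preds_head)
</code></pre>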
<p>I'm very confused about why this is happening. If anyone has any insight or has encountered something similar, I would really appreciate it if you could tell me what's going on.</p>
<p>I apologize if this isn't much information to go on, please let me know if there's any further info you need.</p>
|
[] |
48,125,185 | 0 |
<python-3.x><tensorflow><machine-learning><computer-vision><keras>
|
2018-01-06T07:24:13.627
| null | 9,155,924 |
Memory used up for loading data alone in Keras program
|
<p>My code trains VGG16 on custom data with two classes: diseased and not diseased.
I have around 3400 images, and the problem occurs while loading the dataset into memory: the process uses 99% of RAM and then gets stuck. I am using Spyder; when I followed another example with a smaller dataset, it worked fine. My question is as follows: can anyone suggest an efficient method to run this without loading all the images into memory? As it stands, it eventually leads to the blue screen of death.
P.S.: my system is capable of running deep learning code.</p>
<pre></pre>
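<p>For reference, the kind of batch-streaming approach I am hoping exists might look like this (a sketch with hypothetical paths, assuming one sub-folder per class; <code>model</code> stands for my VGG16 network):</p>
<pre><code>from keras.preprocessing.image import ImageDataGenerator

# streams batches from disk instead of loading all 3400 images into RAM
gen = ImageDataGenerator(rescale=1. / 255)
train_flow = gen.flow_from_directory('data/train',  # hypothetical path
                                     target_size=(224, 224),
                                     batch_size=16,
                                     class_mode='binary')
model.fit_generator(train_flow,
                    steps_per_epoch=train_flow.samples // 16,
                    epochs=10)
</code></pre>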
|
[] |
48,125,424 | 1 |
<tensorflow><dynamic><version>
|
2018-01-06T08:02:32.773
| 48,125,595 | 7,194,271 |
nightly installed TF is the new 1.5 TF?
|
<p>Today I heard that there is a new TensorFlow version, 1.5, which has good support for dynamic graphs.</p>
<p>I also found that there is a new nightly install method.</p>
<p>So does this nightly install method install the new TF version?</p>
<pre></pre>
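<p>The nightly method I mean is presumably <code>pip install tf-nightly</code>; after installing, the version can be checked like this:</p>
<pre><code>import tensorflow as tf
print(tf.__version__)  # prints something like 1.6.0-dev20180105 for a nightly build
</code></pre>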
|
[
{
"AnswerId": "48125595",
"CreationDate": "2018-01-06T08:28:38.327",
"ParentId": null,
"OwnerUserId": "732003",
"Title": null,
"Body": "<p>Currently TF 1.5.0rc0 is out. You can install release candidates of latest versions by this command:</p>\n\n<pre><code>pip install tensorflow --pre\n</code></pre>\n\n<p>When you check its version, you will see something like this:</p>\n\n<pre><code>> import tensorflow as tf\n> tf.__version__\n'1.5.0-rc0'\n</code></pre>\n\n<p>The installation you have is the \"nightly\" which is built every night from the master branch. The version string you see, <code>1.6.0-dev20180105</code> means next release up is <code>1.6</code> and it was built on <code>2018-01-05</code>.</p>\n"
}
] |
48,126,106 | 0 |
<caffe><pytorch><nvidia-digits>
|
2018-01-06T09:44:31.243
| null | 6,930,972 |
Why the same configuration network in caffe and pytorch behaves so differently?
|
<p>The PyTorch code is <a href="https://github.com/chengyangfu/pytorch-vgg-cifar10" rel="nofollow noreferrer">here</a>; I only used the <strong>vgg19</strong> architecture.</p>
<p>In order to preprocess the CIFAR-10 dataset the same way in Caffe and PyTorch, I removed all the transforms in <a href="https://github.com/chengyangfu/pytorch-vgg-cifar10/blob/master/main.py#L91" rel="nofollow noreferrer">main.py</a> except <code>ToTensor</code>. I found that the CIFAR-10 data's range is [0,1] in PyTorch but [0,255] in Caffe, so I scale by 1/255 below. Any extra preprocessing is disabled to keep things simple.</p>
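<p>Concretely, the PyTorch side is reduced to this (a sketch of the transform change, not my full training script):</p>
<pre><code>import torchvision
import torchvision.transforms as transforms

# only ToTensor is kept: it converts HWC uint8 images in [0, 255]
# to CHW float tensors in [0, 1]
transform = transforms.Compose([transforms.ToTensor()])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
</code></pre>
<p>On the Caffe side, the matching rescale is a scale of 1/255 in the data layer's <code>transform_param</code>.</p>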
<p>here is my caffe net definition prototxt:</p>
<pre></pre>
<p>Here is my solver.prototxt</p>
<pre></pre>
<p>The fact is that PyTorch trains the model very quickly: the accuracy after one epoch is about 20%. In Caffe, however, the loss always stays around 2.3 (about -log(0.1), the random-guess loss). I suspected this was due to the different weight initialization, so I changed Caffe's <a href="https://github.com/BVLC/caffe/blob/master/include/caffe/filler.hpp#L160" rel="nofollow noreferrer">filler</a> from xavier to 'efficient backprop' (i.e. U(-std, std) with std = 1/sqrt(fan_in)), but it didn't work.</p>
<p>Now, the only difference is the bias initialization method: in PyTorch it uses the weight's fan_in, but in Caffe I think it uses the output_num as fan_in (because its shape is [1,N], where N is the number of output neurons, and in filler.hpp it uses blob.count() / blob.num() as fan_in for xavier).</p>
<p>Can anyone help me? I thought that if all the configuration were the same, the training process would be almost the same too, but this broke my assumption.</p>
|
[] |
48,126,134 | 1 |
<tensorflow>
|
2018-01-06T09:47:54.797
| null | 2,815,551 |
Proper usage of tf.reset_default_graph() to destruct a graph
|
<p>Looking at the definition of <a href="https://www.tensorflow.org/api_docs/python/tf/reset_default_graph" rel="nofollow noreferrer">tf.reset_default_graph()</a>, it seems to me that it does not reset the values of tensors (e.g., weights) inside a graph when I use it in a loop:</p>
<pre></pre>
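<p>A minimal reconstruction of the kind of loop I mean (not my exact code):</p>
<pre><code>import tensorflow as tf

for i in range(3):
    tf.reset_default_graph()
    w = tf.get_variable('w', shape=[2],
                        initializer=tf.random_normal_initializer())
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(w))
</code></pre>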
<p>How can I completely destroy the tf.Graph inside a nested loop?</p>
<p>P.S.: I tried other variants as well, without success. </p>
|
[
{
"AnswerId": "55659908",
"CreationDate": "2019-04-12T21:53:28.230",
"ParentId": null,
"OwnerUserId": "9217178",
"Title": null,
"Body": "<p>@de1 's comment was right. I think that you confound the session and the graph. </p>\n\n<ol>\n<li>When you draw operations like <code>tf.add</code>, <code>tf.matmul</code>, <code>tf.nn.conv2d</code> in the graph, you only put nodes in the graph.</li>\n<li>When you do <code>tf.reset_default_graph()</code> you clean these nodes in the default graph. You can draw several graphs by <code>graph1 = tf.Graph()</code> <code>graph2 = tf.Graph()</code>. You can choose which graph to launch session by doing <code>with graph2.as_default()</code></li>\n<li>Variables like weights and bias only exist in session. That's why you always do <code>tf.global_variables_initializer()</code> inside a session.</li>\n</ol>\n\n<p><strong>To answer your question:</strong>\nYou can try:</p>\n\n<pre><code>for i in range(X):\n graphX = tf.Graph() # every time you overwrite it\n with graphX.as_default():\n with tf.Session(graph=graphX) as sess: #make sure which graph use\n ...\n</code></pre>\n\n<p>Hope that it helps</p>\n"
}
] |
48,126,690 | 3 |
<tensorflow><tensorflow-datasets>
|
2018-01-06T11:04:39.110
| null | 298,209 |
How to make tf.data.Dataset return all of the elements in one call?
|
<p>Is there an easy way to get the entire set of elements in a <code>tf.data.Dataset</code>? I.e., I want to set the batch size of the Dataset to be the size of my dataset without specifically passing it the number of elements. This would be useful for a validation dataset, where I want to measure accuracy on the entire dataset in one go. I'm surprised there isn't a method to get the size of a <code>tf.data.Dataset</code>.</p>
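<p>To make it concrete, here is a sketch of the intent; the hard-coded 5 is exactly what I would like to avoid:</p>
<pre><code>import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices(tf.range(5))
# desired: batch over the WHOLE dataset without knowing its size up front
iterator = dataset.batch(5).make_one_shot_iterator()
with tf.Session() as sess:
    print(sess.run(iterator.get_next()))  # [0 1 2 3 4]
</code></pre>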
|
[
{
"AnswerId": "53454114",
"CreationDate": "2018-11-24T00:12:18.257",
"ParentId": null,
"OwnerUserId": "1757224",
"Title": null,
"Body": "<p><code>tf.data</code> API creates a tensor called <code>'tensors/component'</code> with the appropriate prefix/suffix if applicable). after you create the instance. You can evaluate the tensor by name and use it as a batch size.</p>\n\n<pre><code>#Ignore the warnings\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nimport pandas as pd\nimport tensorflow as tf\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = (8,7)\n%matplotlib inline\n\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\")\n\nXtrain = mnist.train.images[mnist.train.labels < 2]\nytrain = mnist.train.labels[mnist.train.labels < 2]\n\nprint(Xtrain.shape)\n#(11623, 784)\nprint(ytrain.shape)\n#(11623,) \n\n#Data parameters\nnum_inputs = 28\nnum_classes = 2\nnum_steps=28\n\n# create the training dataset\nXtrain = tf.data.Dataset.from_tensor_slices(Xtrain).map(lambda x: tf.reshape(x,(num_steps, num_inputs)))\n# apply a one-hot transformation to each label for use in the neural network\nytrain = tf.data.Dataset.from_tensor_slices(ytrain).map(lambda z: tf.one_hot(z, num_classes))\n# zip the x and y training data together and batch and Prefetch data for faster consumption\ntrain_dataset = tf.data.Dataset.zip((Xtrain, ytrain)).batch(128).prefetch(128)\n\niterator = tf.data.Iterator.from_structure(train_dataset.output_types,train_dataset.output_shapes)\nX, y = iterator.get_next()\n\ntraining_init_op = iterator.make_initializer(train_dataset)\n\ndef get_tensors(graph=tf.get_default_graph()):\n return [t for op in graph.get_operations() for t in op.values()]\n\nget_tensors()\n#<tf.Tensor 'tensors_1/component_0:0' shape=(11623,) dtype=uint8>,\n#<tf.Tensor 'batch_size:0' shape=() dtype=int64>,\n#<tf.Tensor 'drop_remainder:0' shape=() dtype=bool>,\n#<tf.Tensor 'buffer_size:0' shape=() dtype=int64>,\n#<tf.Tensor 'IteratorV2:0' shape=() dtype=resource>,\n#<tf.Tensor 'IteratorToStringHandle:0' shape=() dtype=string>,\n#<tf.Tensor 'IteratorGetNext:0' shape=(?, 28, 28) dtype=float32>,\n#<tf.Tensor 'IteratorGetNext:1' shape=(?, 2) dtype=float32>,\n#<tf.Tensor 'TensorSliceDataset:0' shape=() dtype=variant>,\n#<tf.Tensor 'MapDataset:0' shape=() dtype=variant>,\n#<tf.Tensor 'TensorSliceDataset_1:0' shape=() dtype=variant>,\n#<tf.Tensor 'MapDataset_1:0' shape=() dtype=variant>,\n#<tf.Tensor 'ZipDataset:0' shape=() dtype=variant>,\n#<tf.Tensor 'BatchDatasetV2:0' shape=() dtype=variant>,\n#<tf.Tensor 'PrefetchDataset:0' shape=() dtype=variant>]\n\nsess = tf.InteractiveSession()\nprint('Size of Xtrain: %d' % tf.get_default_graph().get_tensor_by_name('tensors/component_0:0').eval().shape[0])\n#Size of Xtrain: 11623\n</code></pre>\n"
},
{
"AnswerId": "50704517",
"CreationDate": "2018-06-05T16:04:40.117",
"ParentId": null,
"OwnerUserId": "298209",
"Title": null,
"Body": "<p>Not sure if this still works in latest versions of TensorFlow but if this is absolutely needed a hacky solution is to create a batch that's bigger than the dataset size. You don't need to know how big the dataset is, just request a batch size that's larger.</p>\n"
},
{
"AnswerId": "48127601",
"CreationDate": "2018-01-06T13:00:47.580",
"ParentId": null,
"OwnerUserId": "9175058",
"Title": null,
"Body": "<p>In short, there is not a good way to get the size/length; <code>tf.data.Dataset</code> is built for pipelines of data, so has an iterator structure (in my understanding and according to my read of <a href=\"https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/python/data/ops/dataset_ops.py\" rel=\"nofollow noreferrer\">the Dataset ops code</a>. From the <a href=\"https://www.tensorflow.org/versions/master/programmers_guide/datasets\" rel=\"nofollow noreferrer\">programmer's guide</a>: </p>\n\n<blockquote>\n <p>A <code>tf.data.Iterator</code> provides the main way to extract elements from a dataset. The operation returned by <code>Iterator.get_next()</code> yields the next element of a Dataset when executed, and typically acts as the interface between input pipeline code and your model.</p>\n</blockquote>\n\n<p>And, by their nature, iterators do not have a convenient notion of size/length; see here: <a href=\"https://stackoverflow.com/questions/3345785/getting-number-of-elements-in-an-iterator-in-python\">Getting number of elements in an iterator in Python</a></p>\n\n<p>More generally though, why does this problem arise? If you are calling <code>batch</code>, you are also getting a <code>tf.data.Dataset</code>, so whatever you are running on a batch you should be able to run on the whole dataset; it will iterate through all the elements and calculate validation accuracy. Put differently, I don't think you actually need the size/length to do what you want to do.</p>\n"
}
] |
48,126,996 | 2 |
<machine-learning><keras><neural-network><conv-neural-network><autoencoder>
|
2018-01-06T11:42:28.533
| null | 2,699,919 |
Keras autoencoder negative loss and val_loss with data in range [-1 1]
|
<p>I am trying to adapt the Keras autoencoder example to my data. I have the following network:</p>
<pre class="lang-python prettyprint-override"></pre>
<p>And when I run it on MNIST data, which is normalized to [0,1], everything works fine; but with my data, which is in the range [-1,1], I only see negative losses and 0.0000 accuracy while training. If I do data = np.abs(data), training starts and seems to go well, but taking abs() of the data makes no sense: it amounts to training on faked data.</p>
<p>The data I'm feeding the network are the I/Q channels of a signal: the 1st channel is the real part and the 2nd the imaginary part. Both are normalized to [-1, 1] and often contain very small values, e.g. 5e-12. I have shaped them into a (28,28,2) input.</p>
<p>I have also added Dense layers in the middle of the autoencoder, as I wish to make class predictions (fitted automatically) once the autoencoder finishes training. Did I do this correctly, or does this break the network?</p>
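<p>For what it's worth, a rescaling that keeps the sign information (unlike abs()) would be an affine map from [-1, 1] to [0, 1]; a sketch with illustrative shapes:</p>
<pre><code>import numpy as np

# illustrative stand-in for my IQ data in [-1, 1]
data = np.random.uniform(-1.0, 1.0, size=(1000, 28, 28, 2)).astype('float32')
data01 = (data + 1.0) / 2.0  # now in [0, 1], compatible with a sigmoid output
</code></pre>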
|
[
{
"AnswerId": "48128748",
"CreationDate": "2018-01-06T15:21:16.897",
"ParentId": null,
"OwnerUserId": "4132383",
"Title": null,
"Body": "<p>You are mixing between binary ('sigmoid') and categorical ('softmax' and 'categorical_crossentropy'). Change the following:</p>\n\n<ol>\n<li>Remove the dense layers in between and feed 'encoded' instead of 'encoded3' to the decoder</li>\n<li>Change the autoencoder loss to 'binary_crossentropy'</li>\n</ol>\n\n<p>Alternatively if you really want to try the dense layers in between, just use them without an activation function (None)</p>\n"
},
{
"AnswerId": "48129277",
"CreationDate": "2018-01-06T16:17:56.957",
"ParentId": null,
"OwnerUserId": "4685471",
"Title": null,
"Body": "<p>There are several issues with your question, including your understanding of autoencoders and their usage. I strongly suggest at least going through the Keras blog post <a href=\"https://blog.keras.io/building-autoencoders-in-keras.html\" rel=\"nofollow noreferrer\">Building Autoencoders in Keras</a> (if you do have gone through it, arguably you have to do it again, this time more thoroughly).</p>\n\n<p>A few general points, most of which are included in the above linked post:</p>\n\n<ol>\n<li>Autoencoders are <strong>not</strong> used for classification, hence it makes no sense to ask for a metric such as accuracy. Similarly, since the fitting objective is the reconstruction of their input, categorical cross entropy is not the correct loss function to use (try binary cross entropy instead).</li>\n<li>The very existence of the intermediate dense layers you use is puzzling, and even more puzzling is the choice of a <code>sigmoid</code> layer followed by a <code>softmax</code> one; the same holds for the <code>sigmoid</code> choice in your final, <code>decoded</code> layer. Both these activation functions are normally used for classification purposes at final layers, so again refer to point (1) above.</li>\n<li>I strongly suggest you start with a model demonstrated in the blog post linked above, and, if necessary, <em>incrementally</em> modify it to fit your purpose, as I am not sure what you have built here can even qualify as an autoencoder in the first place.</li>\n</ol>\n"
}
] |