QuestionId | AnswerCount | Tags | CreationDate | AcceptedAnswerId | Title | Body | Label
---|---|---|---|---|---|---|---|
58,329,742 | 1 | <numpy><pytorch> | 2019-10-10T19:17:13.743 | null | Pytorch copying inexact value of numpy floating point number | <p>I'm converting a floating point number (or numpy array) to a PyTorch tensor, and it seems to copy an inexact value into the tensor. The error appears at the 8th significant digit and beyond. This is significant (no pun intended) for my work, as I deal with chaotic dynamics, which are very sensitive to slight changes in the initial conditions.</p>
<p>I'm already using <code>torch.set_printoptions(precision=16)</code> to print 16 significant digits.</p>
<pre><code>np_x = state
print(np_x)
x = torch.tensor(np_x,requires_grad=True,dtype=torch.float32)
print(x.data[0])
</code></pre>
<p>and the output is:</p>
<pre><code>0.7575408585008059
tensor(0.7575408816337585)
</code></pre>
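<p>For reference, a minimal sketch contrasting the 32-bit cast with <code>float64</code> (the array value here is just the number printed above):</p>
<pre><code>import numpy as np
import torch

np_x = np.array([0.7575408585008059])          # hypothetical stand-in for `state`
x32 = torch.tensor(np_x, dtype=torch.float32)  # ~7 significant digits survive
x64 = torch.tensor(np_x, dtype=torch.float64)  # matches numpy's default float64
print(x32)  # tensor([0.7575408816...]) -- differs from the 8th digit on
print(x64)  # tensor([0.7575408585008059], dtype=torch.float64)
</code></pre>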
<p>It would be helpful to know what is going wrong, or how it could be resolved.</p>
| D |
58,334,740 | 1 | <python><gpu><pytorch> | 2019-10-11T05:26:11.533 | null | How to fix the error `Process finished with exit code -1073741819 (0xC0000005)` | <p>My problem is that when I run an FC network, the code works well on both CPU and GPU. But when it comes to a CNN, I can only train it on CPU; it raises an error when I try to train it on GPU.</p>
<p>Like this:</p>
<blockquote>
<p>Process finished with exit code -1073741819 (0xC0000005)</p>
</blockquote>
<p>I found that the error is raised when the code reaches <code>loss.backward()</code>. It happens when I use the first line below instead of the second.</p>
<pre><code>device = torch.device("cuda:0")
device = torch.device("cuda:0" if opt.cuda else "cpu")
</code></pre>
<p>My environment is Python 3.6.9, Windows 10, Torch 1.2.0, Cuda 9.2.</p>
| D |
58,349,416 | 0 | <jupyter-notebook><pytorch> | 2019-10-11T23:09:45.260 | null | Jupyter Notebook/PyTorch, module 'torch' has no attribute 'BoolTensor' | <p>I am working in PyTorch and I need to use a BoolTensor, which is available according to the documentation <a href="https://pytorch.org/docs/stable/tensors.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/tensors.html</a></p>
<p>However, when I try to initialize the BoolTensor I get the error: "AttributeError: module 'torch' has no attribute 'BoolTensor'"</p>
<p>All other tensors work normally, like LongTensor or DoubleTensor.</p>
<p>I have no idea what could cause this particular error; it seems pretty strange to me that only one type of tensor doesn't work...</p>
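<p>For what it's worth, a quick check I would try (on the assumption that <code>BoolTensor</code> only exists from PyTorch 1.2 onwards):</p>
<pre><code>import torch

print(torch.__version__)            # BoolTensor was added in 1.2.0
if hasattr(torch, 'BoolTensor'):
    t = torch.BoolTensor([True, False])
else:
    t = torch.ByteTensor([1, 0])    # pre-1.2 stand-in for boolean masks
print(t)
</code></pre>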
| D |
58,360,256 | 0 | <tensorflow> | 2019-10-13T03:03:18.310 | null | how to slice and assign in tensorflow2.0 | <p>I find slice-and-assign operations annoying in TF 2.0. Suppose <code>array</code> is a 2D array. In numpy, I can do</p>
<pre class="lang-py prettyprint-override"><code>array[:, [1, 3]] = a_num - array[:, [3, 1]]
</code></pre>
<p>But how can I achieve these ops in TF 2.0? I want to know.</p>
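<p>For illustration, one workaround sketch (untested here; the values are made up) goes through <code>tf.tensor_scatter_nd_update</code> on the transpose:</p>
<pre><code>import tensorflow as tf

array = tf.constant([[1., 2., 3., 4.],
                     [5., 6., 7., 8.]])
a_num = 10.0

new_cols = a_num - tf.gather(array, [3, 1], axis=1)   # values for columns 1 and 3
# scatter updates leading-axis slices, so work on the transpose (rows = columns)
updated = tf.transpose(tf.tensor_scatter_nd_update(
    tf.transpose(array), indices=[[1], [3]], updates=tf.transpose(new_cols)))
print(updated.numpy())   # [[1. 6. 3. 8.] [5. 2. 7. 4.]]
</code></pre>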
| D |
58,373,906 | 0 | <augmented-reality><arcore><tensorflow-lite> | 2019-10-14T09:32:08.730 | null | using custom object detection model with ARcore | <p>I have developed a custom hand detection model using the TensorFlow object detection API and converted it to tf-lite. Now I want to render some watches on top of it using ARCore or any other augmented reality SDK. Where do I need to start?</p>
| B |
58,376,614 | 0 | <python><file><keras><model> | 2019-10-14T12:16:47.317 | null | How can I convert hdf5 file or model.tflite file to pkl file? | <p>I have trained a CNN and stored the model in two formats, tflite and hdf5. How can I convert either of these files to a pkl file?</p>
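<p>For the hdf5 case, a minimal sketch of one interpretation (pickling the weight arrays rather than the whole model object; the file names are hypothetical):</p>
<pre><code>import pickle
from tensorflow import keras

model = keras.models.load_model('model.h5')   # hypothetical file name
with open('model_weights.pkl', 'wb') as f:
    pickle.dump(model.get_weights(), f)       # a list of numpy arrays
</code></pre>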
| D |
58,379,207 | 1 | <python><tensorflow><pycharm> | 2019-10-14T14:48:27.330 | null | PyCharm reports import errors with TensorFlow 2 | <p>Since I upgraded to TensorFlow 2, PyCharm is displaying import warnings and errors for many TensorFlow modules and classes. For example, if I use the <a href="https://www.tensorflow.org/tutorials/quickstart/advanced" rel="nofollow noreferrer">quickstart example</a> I get:</p>
<p><a href="https://i.stack.imgur.com/coKvB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/coKvB.png" alt="enter image description here"></a></p>
<p>The code runs fine, so these imports are valid, but PyCharm does not think that they are.</p>
<p>Does anyone know how to resolve this? I didn't experience import issues with TensorFlow 1.14.</p>
| D |
58,383,938 | 1 | <tensorflow><tensorflow.js> | 2019-10-14T20:54:21.767 | 58458317 | Error: Size(XX) must match the product of shape x,x,x,x | <p>This is a newbie question, but any help will be appreciated.</p>
<p>I'm having a problem with a 3D tensor in TensorFlow.JS (node), with the following code:</p>
<pre><code>const tf = require('@tensorflow/tfjs-node');
(async ()=>{
let list = [
{
xs: [
[
[ 0.7910133603149169, 0.7923634491520086, 0.79166712455722, 0.7928027625311359, 0.4426631841175303, 0.018719529693542337 ],
[ 0.7890709817505044, 0.7943561081665688, 0.7915865358198619, 0.7905450669351226, 0.4413258183256521, 0.04449784810703526 ],
[ 0.7940229392692819, 0.7924745639669473, 0.7881395357356101, 0.7880208892359736, 0.40902353356570315, 0.14643954229459097 ],
[ 0.801474878324385, 0.8003822349633881, 0.7969969705961001, 0.7939094034872144, 0.40227041242732126, 0.03893523221469505 ],
[ 0.8022503526561848, 0.8011600386679555, 0.7974621873981194, 0.8011488339557422, 0.43008361179994464, 0.11210020422004835 ],
],
[
[ 0.8034111510684465, 0.7985390234525179, 0.7949321830852709, 0.7943788081438548, 0.5739870761673189, 0.13358267460835263 ],
[ 0.805714476773561, 0.8072996569653942, 0.8040745782073486, 0.8035592212810225, 0.5899031300445114, 0.03229758335964042 ],
[ 0.8103322733081704, 0.8114317495511435, 0.8073606480159334, 0.8057140734135828, 0.5842202187553198, 0.01986941729798157 ],
[ 0.815132106874313, 0.8122641403791668, 0.8104353115275772, 0.8103395749739932, 0.5838313552472632, 0.03332674037143093 ],
[ 0.8118480102237944, 0.8166500561770489, 0.8128943005604122, 0.8147644523703373, 0.601619389872815, 0.04807286626501376 ],
]
],
ys: 1
}
];
const ds = tf.data.generator(async () => {
let index = 0;
return {
next: async () => {
if(index >= list.length) return { done : true };
let doc = list[index];
index++;
return {
value: {
xs : doc.xs,
ys : doc.ys
},
done: false
};
}
};
}).batch(1);
let model = tf.sequential();
model.add(tf.layers.dense({units: 60, activation: 'relu', inputShape: [2, 5, 6]}));
model.compile({
optimizer: tf.train.adam(),
loss: 'sparseCategoricalCrossentropy',
metrics: ['accuracy']
});
await model.fitDataset(ds, {epochs: 1});
return true;
})().then(console.log).catch(console.error);
</code></pre>
<p>This code generates the following error:</p>
<pre><code>Error: Size(60) must match the product of shape 1,2,5,60
at Object.inferFromImplicitShape
</code></pre>
<p>I don't understand why the layer is changing the last value of the <code>inputShape</code> from <code>6</code> to <code>60</code> (which is the expected output units for this layer).</p>
<p>Just to confirm: as far as I know, the <code>units</code> should be the product of <code>batchSize * x * y * z</code>, in the example case: <code>1 * 2 * 5 * 6 = 60</code></p>
<p>Thank you!</p>
<p>Software specification:</p>
<ul>
<li>tfjs-node: v1.2.11</li>
<li>Node JS: v11.2.0</li>
<li>OS: Ubuntu 18.04.2</li>
</ul>
| D |
58,384,510 | 0 | <keras><deep-learning><gradient><loss-function><derivative> | 2019-10-14T21:53:59.423 | null | How to use the input gradients as variables within a custom loss function in Keras? | <p>I am using the input gradient as feature importance, and I want to compare the feature importance of a train datapoint with the human-annotated feature importance. I would like to make this comparison differentiable such that it can be learned through backpropagation. For that, I am writing a custom loss function that, in addition to the regular loss (e.g. m.s.e. on the prediction vs true labels), also checks whether the input gradient is correct (e.g. m.s.e. of the input gradient vs the human-annotated feature importance).</p>
<p>With the following code I am able to get the input gradient:</p>
<pre><code>from keras import backend as K
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense
def normalize(x):
    # utility function to normalize a tensor by its L2 norm
    return x / (K.sqrt(K.mean(K.square(x))) + 1e-5)
# Amount of training samples
N = 1000
input_dim = 10
# Generate training set make the 1st and 2nd feature same as the target feature
X = np.random.standard_normal(size=(N, input_dim))
y = np.random.randint(low=0, high=2, size=(N, 1))
X[:, 1] = y[:, 0]
X[:, 2] = y[:, 0]
# Create simple model
inputs = Input(shape=(input_dim,))
x = Dense(10, name="dense1")(inputs)
output = Dense(1, activation='sigmoid')(x)
model = Model(input=[inputs], output=output)
# Compile and fit model
model.compile(optimizer='adam', loss="mse", metrics=['accuracy'])
model.fit([X], y, epochs=100, batch_size=64)
# Get function to get input gradients
gradients = K.gradients(model.output, model.input)[0]
gradient_function = K.function([model.input], [normalize(gradients)])
# Get input gradient values of the training-set
grads_val = gradient_function([X])[0]
print(grads_val[:2])
</code></pre>
<p>This prints the following (you can see that the 1st and the 2nd features have the highest importance):</p>
<pre><code>[[ 1.2629046e-02 2.2765596e+00 2.1479919e+00 2.1558853e-02
4.5277486e-03 2.9851785e-03 9.5279224e-04 -1.0903150e-02
-1.2230731e-02 2.1960819e-02]
[ 1.1318034e-02 2.0402350e+00 1.9250139e+00 1.9320872e-02
4.0577268e-03 2.6752844e-03 8.5390132e-04 -9.7713526e-03
-1.0961102e-02 1.9681118e-02]]
</code></pre>
<p>How can I write a custom loss function in which the input gradients are differentiable?
I started with the following loss function.</p>
<pre><code>from keras.losses import mean_squared_error

def custom_loss():
    # human-annotated feature importance;
    # let's say it says to only look at the second feature
    human_feature_importance = []
    for i in range(N):
        human_feature_importance.append([0, 0, 1, 0, 0, 0, 0, 0, 0, 0])

    def loss(y_true, y_pred):
        # Get regular loss
        regular_loss_value = mean_squared_error(y_true, y_pred)
        # Somehow get the input gradient of each training sample as a tensor.
        # It should be differentiable w.r.t. all of the weights
        gradients = ??
        feature_importance_loss_value = mean_squared_error(gradients, human_feature_importance)
        # Combine both losses
        return regular_loss_value + feature_importance_loss_value

    return loss
</code></pre>
<p>I also found an implementation in tensorflow to make the input gradient differentiable: <a href="https://github.com/dtak/rrr/blob/master/rrr/tensorflow_perceptron.py#L18" rel="nofollow noreferrer">https://github.com/dtak/rrr/blob/master/rrr/tensorflow_perceptron.py#L18</a></p>
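<p>For completeness, my current (unverified) sketch of how the gap might be filled in Keras closes over the model's input/output tensors, and assumes a single importance vector broadcast over the batch instead of one per sample:</p>
<pre><code>def make_loss(model):
    # assumed: one shared importance vector, broadcast over the batch
    hfi = K.constant([0, 0, 1, 0, 0, 0, 0, 0, 0, 0], dtype='float32')
    def loss(y_true, y_pred):
        regular_loss_value = mean_squared_error(y_true, y_pred)
        # gradient of the summed output w.r.t. the input, shape (batch, input_dim)
        grads = K.gradients(K.sum(model.output), model.input)[0]
        return regular_loss_value + K.mean(K.square(grads - hfi), axis=-1)
    return loss
</code></pre>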
| D |
58,408,616 | 1 | <conv-neural-network><pytorch><autoencoder><max-pooling> | 2019-10-16T08:10:49.840 | 58408952 | Indices out of range for MaxUnpool2d | <p>I am trying to understand unpooling in Pytorch because I want to build a convolutional auto-encoder. </p>
<p>I have the following code </p>
<pre><code>from torch.autograd import Variable
data = Variable(torch.rand(1, 73, 480))
pool_t = nn.MaxPool2d(2, 2, return_indices=True)
unpool_t = nn.MaxUnpool2d(2, 2)
out, indices1 = pool_t(data)
out = unpool_t(out, indices1)
</code></pre>
<p>But I am constantly getting this error on the last line (unpooling). </p>
<pre><code>IndexError: tuple index out of range
</code></pre>
<p>Although the data is simulated in this example, the input has to be of that shape because of the preprocessing that has to be done. </p>
<p>I am fairly new to convolutional networks, but I have even tried using a ReLU and convolutional 2D layer before the pooling however, the indices always seem to be incorrect when unpooling for this shape. </p>
| D |
58,425,760 | 0 | <tensorflow><pytorch><lstm><recurrent-neural-network> | 2019-10-17T05:53:34.880 | null | RNN (LSTM) training switch between time series or images | <p>Regarding RNN training:
we feed the network point by point from the same time series (or image, or something else).</p>
<p>When we “switch from one time series to another”, what should be done, or how will the network “know” that the time series is different now?</p>
<p>What comes to my mind is to reset the hidden state, or maybe to introduce some kind of symbol which "means" a separation between inputs.</p>
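<p>For concreteness, the reset idea would look roughly like this in PyTorch (a sketch; the two random series stand in for real data):</p>
<pre><code>import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=1, hidden_size=16)
dataset = [torch.rand(20, 1, 1), torch.rand(30, 1, 1)]   # two separate series
for series in dataset:
    hidden = None                      # None restarts the hidden state from zeros
    out, hidden = lstm(series, hidden)
</code></pre>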
<p>Are there any other or better options?</p>
| D |
58,441,824 | 0 | <tensorflow><arm64><cudnn> | 2019-10-17T23:08:38.150 | null | Tensorflow: find out the CUDA/CuDNN version that a pre-built tensorflow wheel build against | <p>I already have tensorflow-gpu 1.14 installed; however, I installed it through a pre-built wheel, so I have no idea which CUDA/CuDNN version it was originally built against. How can I find this out?</p>
<p>The reason I ask this question is that in my case I want to install a particular Tensorflow wheel first and install the required CUDA/CuDNN version second to support my Tensorflow. If the runtime CUDA/CuDNN library does not match the CUDA/CuDNN library that Tensorflow was built against, one may get the error "Could not find 'cudart64_80.dll'", or "mismatch of cudnn library, built in 7.1.5, runtime is 7.0.5".</p>
<p>In particular, in my case, my error message is "tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcudnn.so.7'; dlerror: libcudnn.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda-9.0/lib64:"</p>
<p>This error is weird because libcudnn.so.7 is actually installed and properly linked, see</p>
<pre><code>$ sudo ldconfig -p | grep libcudnn.so.7
libcudnn.so.7 (libc6,AArch64) => /usr/local/cuda/lib64/libcudnn.so.7
libcudnn.so.7 (libc6,AArch64) => /usr/lib/aarch64-linux-gnu/libcudnn.so.7
$ ls -alh /usr/local/cuda/lib64/libcudnn.so.7
lrwxrwxrwx 1 root root 44 Oct 17 19:17 /usr/local/cuda/lib64/libcudnn.so.7 -> /usr/lib/aarch64-linux-gnu/libcudnn.so.7.1.4
$ ls -alh /usr/lib/aarch64-linux-gnu/libcudnn.so.7
lrwxrwxrwx 1 root root 44 Oct 17 17:48 /usr/lib/aarch64-linux-gnu/libcudnn.so.7 -> /usr/lib/aarch64-linux-gnu/libcudnn.so.7.1.4
</code></pre>
<p>My debugging process:
(1) I tried to locate the tensorflow package on my system with</p>
<pre><code>$ python -c "import tensorflow as tf; print(tf.__file__)"
***/lib/python3.5/site-packages/tensorflow/__init__.py
</code></pre>
<p>(2) I then cd'd to that directory and tried to find out the hard-coded build information:</p>
<pre><code>$ cat python/platform/build_info.py
is_cuda_build = True
cuda_version_number = '9.0'
cudnn_version_number = '7.1'
</code></pre>
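<p>The same information can also be read programmatically from the installed wheel (using the module shown by the <code>cat</code> above):</p>
<pre><code>from tensorflow.python.platform import build_info

print(build_info.cuda_version_number)    # e.g. '9.0'
print(build_info.cudnn_version_number)   # e.g. '7.1'
</code></pre>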
<p>However, if this is true, I still cannot understand why the error happens.</p>
<p>Notes:<br>
1. Referring to the original Tensorflow-to-CUDA/CuDNN binding table does not work, because my tensorflow was installed through a pre-built wheel.<br>
2. Checking the runtime CUDA/CuDNN does not answer the question; I am asking about the library at build time.<br>
3. Re-installing Tensorflow based on the runtime CUDA/CuDNN is not a preferred solution.</p>
| D |
58,453,793 | 2 | <python><tensorflow><keras><tf.keras><keras-2> | 2019-10-18T15:20:38.200 | 58467266 | The clear_session() method of keras.backend does not clean up the fitting data | <p>I am working on a comparison of the fitting accuracy results for different levels of data quality. "Good data" is data without any NA in the feature values. "Bad data" is data with NA in the feature values; such data should be fixed by some value correction, e.g. replacing the NA with zero or the feature mean value.</p>
<p>In my code, I am trying to perform multiple fitting procedures.</p>
<p>Review the simplified code:</p>
<pre class="lang-py prettyprint-override"><code>from keras import backend as K
...
xTrainGood = ... # the good version of the xTrain data
xTrainBad = ... # the bad version of the xTrain data
...
model = Sequential()
model.add(...)
...
historyGood = model.fit(..., xTrainGood, ...) # fitting the model with
# the original data without
# NA, zeroes, or the feature mean values
</code></pre>
<p>Review the fitting accuracy plot, based on <code>historyGood</code> data:</p>
<p><a href="https://i.stack.imgur.com/RFsrR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RFsrR.png" alt="enter image description here"></a></p>
<p>After that, the code resets the stored model and re-trains the model with the "bad" data:</p>
<pre class="lang-py prettyprint-override"><code>K.clear_session()
historyBad = model.fit(..., xTrainBad, ...)
</code></pre>
<p>Review the fitting process results, based on <code>historyBad</code> data:</p>
<p><a href="https://i.stack.imgur.com/PlHUX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PlHUX.png" alt="enter image description here"></a></p>
<p>As one can notice, the initial accuracy is <code>> 0.7</code>, which means the model "remembers" the previous fitting.</p>
<p>For comparison, these are the standalone fitting results on the "bad" data:</p>
<p><a href="https://i.stack.imgur.com/D3ZHY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D3ZHY.png" alt="enter image description here"></a></p>
<p>How to reset the model to the "initial" state?</p>
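<p>For reference, the workaround I am considering as a baseline is to snapshot the freshly initialized weights and restore them between fits, instead of relying on <code>clear_session()</code> (a standalone sketch with illustrative data):</p>
<pre><code>import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_good = np.random.rand(100, 4)
y = np.random.randint(0, 2, (100, 1))
x_bad = x_good.copy()
x_bad[::3] = 0.0                          # stand-in for NA-corrected data

model = Sequential([Dense(8, activation='relu', input_shape=(4,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

init_weights = model.get_weights()        # snapshot before any training
history_good = model.fit(x_good, y, epochs=5, verbose=0)
model.set_weights(init_weights)           # restore the untrained state
history_bad = model.fit(x_bad, y, epochs=5, verbose=0)
</code></pre>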
| D |
58,454,889 | 1 | <tensorflow> | 2019-10-18T16:32:30.620 | null | can't read one array csv file correctly with tf.decode_csv() | <p>I have hundreds of csv files, each of which contains one array, which is the input array for my network.
I tried to use <code>tf.data.TextLineDataset</code> to generate a dataset of csv file names and use <code>dataset.map()</code> to read them. However, I am very confused about how to use <code>tf.decode_csv()</code>.</p>
<p>Say the csv content is</p>
<pre><code>1 2 3 4 5 ... 100
</code></pre>
<p>Then using:</p>
<pre><code>dataset = tf.data.TextLineDataset(['a.csv','b.csv', ...])
dataset.map(proc)
</code></pre>
<p>Where:</p>
<pre><code>def proc(csv):
    array = tf.decode_csv(csv, [0.0 for i in range(1000)])
    return array
</code></pre>
<p>I get:</p>
<pre><code># array = (tensor, shape=1) * 100
</code></pre>
<p>If use:</p>
<pre><code>def proc(csv):
    array = tf.decode_csv(csv, [[0.0]])
    return array
</code></pre>
<p>I get:</p>
<pre><code># array = (tensor, shape=1)
</code></pre>
<p>This means only one value of the array is read.</p>
<p>How should the array be read into <code>(tensor, shape=100)</code>?</p>
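<p>For illustration, a variant that might yield one tensor of shape 100 gives <code>tf.decode_csv</code> one default per column and then stacks the fields (a sketch assuming 100 comma-separated columns; add <code>field_delim=' '</code> if the files are space-separated):</p>
<pre><code>def proc(csv_line):
    fields = tf.decode_csv(csv_line, record_defaults=[[0.0]] * 100)
    return tf.stack(fields)   # -> (tensor, shape=100)
</code></pre>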
| D |
58,468,446 | 0 | <tensorflow><keras><deep-learning> | 2019-10-19T22:13:37.947 | null | Failed to convert object of type <class 'tuple'> to Tensor. when implementing a custom layer | <p>I am trying to apply ArcFace for face detection using Keras and I was trying to use <a href="https://github.com/4uiiurz1/keras-arcface/blob/master/metrics.py" rel="nofollow noreferrer">this</a> implementation for ArcFace.</p>
<p>I was trying to use the functional API to define a function, as below (where model is a separate model),</p>
<p>but I am getting</p>
<p>Failed to convert object of type <class 'tuple'> to Tensor. Contents: (Dimension(512), 2300). Consider casting elements to a supported type.</p>
<p>at the output step, and I really don't know what I should do.</p>
<pre><code>input = keras.Input(shape=(32, 32, 3))
label=keras.Input(shape=(2300,))
x=model(input)
output=ArcFace(2300)([x, label])
</code></pre>
| D |
58,471,381 | 2 | <r><rstudio> | 2019-10-20T08:52:45.867 | null | Can't install psych package | <p>I am new to RStudio and I am trying to install the <code>psych</code> package, but I get the following error:</p>
<blockquote>
<p>Error: package or namespace load failed for ‘psych’ in loadNamespace(j
<- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]): there
is no package called ‘mnormt’</p>
</blockquote>
<p>So then I tried to install the <code>mnormt</code> package and I get the following message:</p>
<blockquote>
<p>package ‘mnormt’ successfully unpacked and MD5 sums checked Warning in
install.packages : cannot remove prior installation of package
‘mnormt’ Warning in install.packages : restored ‘mnormt’</p>
</blockquote>
<p>But when I then try to install the <code>psych</code> package it keeps saying that the <code>mnormt</code> package was not installed.</p>
<p>I would appreciate some help.</p>
| No |
58,473,232 | 0 | <windows><keras><anaconda><python-3.5> | 2019-10-20T12:59:59.027 | null | After upgrading to Python 3.5.6, Keras does not seem to work and it throws no errors about it | <p>I am using Anaconda 4.4.0 and I had an Anaconda version of Python installed as a conda environment. It was a version of the 3.5 branch (I think it was 3.5.2). Recently I installed Pytorch for Windows, which led to the upgrading of a few packages, including Python itself, which was upgraded to 3.5.6. Since then some of my files will not run. What is worse is that they do not even display an error messages.</p>
<p>I checked the console and the IDLE window, but no error messages were thrown. So I tried the hard way and placed print() statements throughout my code to see where execution fails. I quickly figured out that the issue is most probably caused by Keras. Whenever I even import that module, the IDLE shell restarts without a warning. I only get the "Using TensorFlow backend." line and then it stops running. I also checked that importing Keras leads to the restarting of the shell not only when running a file, but also when running directly from a Python shell.</p>
<p>I tried upgrading Keras to version 2.3.1, but to no avail. The issue persists. I also tried running from Python version 3.6.8, which is the "Base" of my Anaconda installation and not an installed environment. Any ideas as to who might be the culprit here?</p>
<p>I would like to avoid resorting to "radical" solutions, such as deleting everything and reinstalling or installing the newest version of Python, where I will have to reinstall every single package afresh, at least not until I have ruled out all other possible solutions.</p>
| D |
58,485,035 | 0 | <mysql><sql> | 2019-10-21T11:09:05.027 | null | value out of range double in mysql | <p>I want to add a double in mysql database table,<br>
I have set the the double to DOUBLE(5,2) and the value I'm trying to add is 157.82 but I get this error message :</p>
<blockquote>
<p>Data truncation: Out of range value for column 'vægt_kg' at row 1</p>
</blockquote>
<p>I have already tried all the other data types and multiple inputs, and I have tried DOUBLE(5;2), which I read would help, but it wouldn't accept the ";". I have also tried DOUBLE(50,2) and it still said out of range.</p>
<pre><code>CREATE TABLE IF NOT EXISTS `patient`.`patientInfo` (
`cpr` INT(10) NOT NULL,
`forNavn` VARCHAR(21) NULL DEFAULT NULL,
`efterNavn` VARCHAR(25) NULL DEFAULT NULL,
`højde_cm` INT(3) NULL DEFAULT NULL,
`vægt_kg` DOUBLE(5,2) NULL DEFAULT NULL,
`beskrivelse` TEXT NULL DEFAULT NULL,
`adresse` VARCHAR(50) NULL DEFAULT NULL,
`telefon` INT(8) NULL DEFAULT NULL,
PRIMARY KEY (`cpr`))
ENGINE = InnoDB
DEFAULT CHARACTER SET = utf8mb4
COLLATE = utf8mb4_0900_ai_ci;
</code></pre>
<p>query:</p>
<pre><code>INSERT INTO patientInfo (cpr, forNavn, efterNavn, højde_cm, vægt_kg, beskrivelse, adresse, telefon)
VALUES ('1234567890','mille','san','175','157.82','hej','vejgade 23', '34343434');
</code></pre>
| No |
58,488,465 | 1 | <python><django><python-3.x><oop> | 2019-10-21T14:28:43.270 | 58489564 | Diamond problem in Python: Call method from all parent classes | <p><strong>PLEASE READ THE QUESTION FIRST BEFORE MARKING IT AS DUPLICATE!</strong> <br><br>
This question is asked by many, but almost all were given the same solution, which I am already applying. <br>
So I have classes <code>TestCase</code>, <code>B</code>, <code>C</code>, <code>D</code> & <code>E</code>. <br>
Classes <code>C</code> & <code>D</code> inherit class <code>B</code>, & class <code>E</code> inherits both <code>C</code> & <code>D</code>. Class <code>B</code> inherits <code>TestCase</code>. <br></p>
<p>When I run my code, class <code>E</code> only runs with methods from class <code>C</code>, ignoring <code>D</code> all along.<br>
My classes go like this:</p>
<pre><code>class GenericAuthClass(TestCase):
    def setUp(self):
        """Create a dummy user for the purpose of testing."""
        # set some objects and variables

    def _get_authenticated_api_client(self):
        pass


class TokenAuthTests(GenericAuthClass):
    def _get_authenticated_api_client(self):
        """Get token received on login."""
        super()._get_authenticated_api_client()
        # make object
        return object


class BasicAuthTests(GenericAuthClass):
    def _get_authenticated_api_client(self):
        """Get token received on login."""
        super()._get_authenticated_api_client()
        # make object
        return object


class ClientTestCase(BasicAuthTests, TokenAuthTests):
    def dothis(self):
        return self._get_authenticated_api_client()
<ol>
<li>How can I call a method (with the same name) in <code>C</code> and <code>D</code> from <code>E</code>, like the diamond problem in <code>C++</code>? As of now, when I call the method using <code>self.method()</code> from <code>E</code>, it only calls that method from <code>C</code> and ignores the same method from <code>D</code>, while I think it should call both methods. Note that the method doesn't exist in class <code>E</code>, and my code is working right now without errors, but only calling the method from <code>C</code> (see the sketch after this list).</li>
</ol>
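<p>For what it's worth, here is a minimal standalone sketch of the cooperative pattern I believe should apply (toy class names):</p>
<pre><code>class B:
    def m(self):
        print('B')

class C(B):
    def m(self):
        super().m()   # follows the MRO of the instance, not of C
        print('C')

class D(B):
    def m(self):
        super().m()
        print('D')

class E(C, D):
    pass

E().m()   # prints B, D, C -- both C and D run exactly once
</code></pre>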
<blockquote>
<p>This seems mainly like a <code>Python</code> question, but I am tagging <code>django</code> too, as the <code>TestCase</code> class might have something to do with it.<br><br></p>
</blockquote>
| No |
58,499,529 | 1 | <python><tensorflow> | 2019-10-22T07:56:41.063 | null | When Using Cuda: TypeError: ('Keyword argument not understood:', 'activation') | <p>I just installed CUDA with Anaconda. When trying to run the same model that worked before the installation, I get the error message on the first added LSTM layer: TypeError: ('Keyword argument not understood:', 'activation').</p>
<p>My code looks like this:</p>
<pre><code>from tensorflow.keras.layers import Dense, Activation, Embedding, LSTM, Dropout, CuDNNLSTM
from tensorflow.keras.models import Sequential
import tensorflow as tf
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import pylab as pl
import seaborn as sns
model = Sequential()
model.add(CuDNNLSTM(128, input_shape=(800,1), activation='tanh', return_sequences=True))
model.add(Dropout(0.2))
model.add(CuDNNLSTM(128, activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(32, activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
</code></pre>
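<p>For comparison, here is the same stack without the <code>activation</code> keyword, on the assumption that <code>CuDNNLSTM</code> hard-codes tanh and does not accept that argument:</p>
<pre><code>model = Sequential()
model.add(CuDNNLSTM(128, input_shape=(800, 1), return_sequences=True))
model.add(Dropout(0.2))
model.add(CuDNNLSTM(128))
model.add(Dropout(0.2))
model.add(Dense(32, activation='tanh'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
</code></pre>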
| D |
58,502,660 | 0 | <tensorflow><yocto> | 2019-10-22T10:54:54.497 | null | Yocto tensorflow 2.0 | <p>With the availability of Tensorflow 2.0, I am looking for a Yocto recipe to build it, including its Python package.
There are instructions on building from source for Linux/MacOS systems, but is there an available Tensorflow 2.0 Yocto recipe?</p>
| D |
58,505,663 | 0 | <r><keras><loss-function><sequential> | 2019-10-22T13:41:15.637 | null | Custom loss function returning nan in sequential Keras model in R | <p>I am trying to write a custom loss function for a sequential Keras model in R. The code for compilation looks like this:</p>
<pre><code>AE_model %>% compile(
  loss = custom_metric("name", metric_fn = function(y_true, y_pred) {
    diff = y_true - y_pred
    sqrt(abs(diff))
  }),
  optimizer = optimizer_adam()
)
</code></pre>
<p>But the loss returned during training is nan.</p>
<p>However, when I tried <code>sqrt(0)</code>, R gave me 0 back.</p>
<p><strong>EDIT</strong></p>
<p>I have tried SGD instead of the Adam optimizer; in the third epoch I again get nan. So this must be some other problem, not the optimizer.</p>
| D |
58,509,402 | 0 | <pytorch><onnx> | 2019-10-22T17:20:58.470 | null | Pytorch to ONNX export function fails and causes legacy function error | <p>I am trying to convert the pytorch model in <a href="https://github.com/samirsen/small-object-detection" rel="nofollow noreferrer">this</a> link to onnx model using the code below :</p>
<pre><code>device=t.device('cuda:0' if t.cuda.is_available() else 'cpu')
print(device)
faster_rcnn = FasterRCNNVGG16()
trainer = FasterRCNNTrainer(faster_rcnn).cuda()
#trainer = FasterRCNNTrainer(faster_rcnn).to(device)
trainer.load('./checkpoints/model.pth')
dummy_input = t.randn(1, 3, 300, 300, device = 'cuda')
#dummy_input = dummy_input.to(device)
t.onnx.export(faster_rcnn, dummy_input, "model.onnx", verbose = True)
</code></pre>
<p>But I get the following error (sorry for the blockquote below; Stack Overflow wouldn't let the whole trace be in code format and wouldn't let the question be posted otherwise):</p>
<blockquote>
<pre><code>Traceback (most recent call last):
small_object_detection_master_samirsen\onnxtest.py", line 44, in <module>
    t.onnx.export(faster_rcnn, dummy_input, "fasterrcnn_10120119_06025842847785781.onnx", verbose = True)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\onnx\__init__.py", line 132, in export
    strip_doc_string, dynamic_axes)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\onnx\utils.py", line 64, in export
    example_outputs=example_outputs, strip_doc_string=strip_doc_string, dynamic_axes=dynamic_axes)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\onnx\utils.py", line 329, in _export
    _retain_param_name, do_constant_folding)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\onnx\utils.py", line 213, in _model_to_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\onnx\utils.py", line 171, in _trace_and_get_graph_from_model
    trace, torch_out = torch.jit.get_trace_graph(model, args, _force_outplace=True)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\jit\__init__.py", line 256, in get_trace_graph
    return LegacyTracedModule(f, _force_outplace, return_inputs)(*args, **kwargs)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\jit\__init__.py", line 323, in forward
    out = self.inner(*trace_inputs)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 545, in __call__
    result = self._slow_forward(*input, **kwargs)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 531, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "D:\smallobject2\export test s\small_object_detection_master_samirsen\model\faster_rcnn.py", line 133, in forward
    h, rois, roi_indices)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 545, in __call__
    result = self._slow_forward(*input, **kwargs)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 531, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "D:\smallobject2\export test s\small_object_detection_master_samirsen\model\faster_rcnn_vgg16.py", line 142, in forward
    pool = self.roi(x, indices_and_rois)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 545, in __call__
    result = self._slow_forward(*input, **kwargs)
  File "C:\Users\HP\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\module.py", line 531, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "D:\smallobject2\export test s\small_object_detection_master_samirsen\model\roi_module.py", line 85, in forward
    return self.RoI(x, rois)
RuntimeError: Attempted to trace RoI, but tracing of legacy functions is not supported
</code></pre>
</blockquote>
| D |
58,511,598 | 1 | <python><gpu><pytorch><cpu><conda> | 2019-10-22T20:08:43.910 | null | Can both the GPU and CPU versions of PyTorch be installed in the same Conda environment? | <p>The <a href="https://pytorch.org/get-started/locally/" rel="nofollow noreferrer">PyTorch installation web page</a> shows how to install the GPU and CPU versions of PyTorch:</p>
<pre><code>conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
</code></pre>
<p>and</p>
<pre><code>conda install pytorch torchvision cpuonly -c pytorch
</code></pre>
<p>Can both version be installed in the same <code>Conda</code> environment?</p>
<p>In case you ask why this would be needed: it's because I would like a single <code>Conda</code> environment which I can use on computers that have a GPU and those that don't.</p>
| D |
58,541,057 | 0 | <tensorflow><tensorflow-datasets> | 2019-10-24T12:06:42.627 | null | Pass parameters to tf.py_function | <p>I am creating a tf.data.Dataset, and I have several preprocessing functions that I need to pass parameters to. Is it possible to pass parameters to functions via tf.py_function()?</p>
<p>The only way I can see to do it is to put my preprocessing functions inside a class, so that I can pass parameters in via self.</p>
<p>E.g.:</p>
<pre><code>class My_Dataset():
    def __init__(self, shape):
        self.shape = shape

    def resize(self, image):
        # Note: resize is just a dummy example; my actual preprocessing
        # functions are more general and take several params
        return cv2.resize(image.numpy(), self.shape)

    def get_map_func(self, image, label):
        [image,] = tf.py_function(self.resize, [image], [tf.float32])
        image.set_shape(self.shape)
        return image, label

    def create_dataset(self, images_paths, labels):
        ds = ...
        ds = ds.map(self.get_map_func)
        return ds

my_dataset = My_Dataset( (512, 512, 3) )
ds = my_dataset.create_dataset(...)
</code></pre>
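<p>An alternative shape I have been considering is binding the parameters with <code>functools.partial</code> (or a lambda) instead of a class; a sketch with illustrative names:</p>
<pre><code>import functools
import cv2
import tensorflow as tf

def resize(image, shape):
    return cv2.resize(image.numpy(), shape)

def map_func(image, label, shape=(512, 512)):
    [image] = tf.py_function(functools.partial(resize, shape=shape), [image], [tf.float32])
    image.set_shape(shape + (3,))
    return image, label

# ds = ds.map(map_func)
# or bind other values per dataset: ds.map(lambda i, l: map_func(i, l, shape=(256, 256)))
</code></pre>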
<p>But is there a better way? I am always really cautious about passing classes to multiprocess functions. As I understand it, they get pickled to the process, so if the class gets too big it always seems to cause me issues.</p>
<p><strong>Edit:</strong> Adding Second question..</p>
<p>In the example above, does any instance of the my_dataset object actually exist in the final ds? For example, images_paths is a list of millions of image paths, tens of MB big. If I passed images_paths and labels into the class at <code>__init__</code> and assigned them to self, would there then be some massive object in ds that needs to get passed around between processes?</p>
| D |
58,555,825 | 1 | <python><tensorflow><anaconda> | 2019-10-25T09:30:28.507 | 58556122 | Support tensorflow v1.x and v2.0 on same PC | <p>Code with tensorflow v1.x is not compatible with tensorflow v2.0. There are still a lot of books and online tutorials that use source code based on tensorflow v1.x. If I upgrade to v2.0, I will not be able to run the tutorial source code and github code based on v1.x.</p>
<p>Is it possible to have both v1.x and v2.0 supported on the same machine?</p>
<p>I am using python v3.7 anaconda distribution.</p>
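<p>(For context, the closest single-install option I know of is the compat shim that ships with 2.0, sketched below, but that is not the same as having both versions installed:)</p>
<pre><code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()   # lets most v1.x tutorial code run under a 2.0 install
</code></pre>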
| D |
58,576,018 | 1 | <python><tensorflow><jupyter-notebook><placeholder><quantile-regression> | 2019-10-27T01:39:27.597 | 58576177 | ValueError: None values not supported when using a placeholder variable in a multiplication operation | <p>I am using the following code inside a network graph in tensorflow:</p>
<pre><code>self.num_quantiles = tf.placeholder(dtype=tf.int32)
num_samples = self.rnn.get_shape().as_list()[0]
quantiles_shape = [self.num_quantiles * num_samples, 1]
self.quantiles = tf.random_uniform(quantiles_shape, minval=0, maxval=1, dtype=tf.float32)
</code></pre>
<p>However, I get an error because of the multiplication: "None values not supported."</p>
<p>Can anyone please tell me how to perform the multiplication properly using my placeholder?</p>
<pre><code> ---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-31-4f68d3e68617> in <module>
2 tf.reset_default_graph()
3 #We define the primary and target q-networks
----> 4 mainQN = Qnetwork('main')
5 targetQN = Qnetwork('target')
6
<ipython-input-27-5b3a3af3cf09> in __init__(self, myScope)
136 #batch_size = state_net.get_shape().as_list()[0]
137 num_samples = self.rnn.get_shape().as_list()[0] #batch_size
--> 138 quantiles_shape = [self.num_quantiles * num_samples, 1]
139 self.quantiles = tf.random_uniform(quantiles_shape, minval=0, maxval=1, dtype=tf.float32)
140
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py in binary_op_wrapper(x, y)
813 elif not isinstance(y, sparse_tensor.SparseTensor):
814 try:
--> 815 y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
816 except TypeError:
817 # If the RHS is not a tensor, it might be a tensor aware object
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in convert_to_tensor(value, dtype, name, preferred_dtype)
1037 ValueError: If the `value` is a tensor not of given `dtype` in graph mode.
1038 """
-> 1039 return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
1040
1041
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape, allow_broadcast)
452 else:
453 if values is None:
--> 454 raise ValueError("None values not supported.")
455 # if dtype is provided, forces numpy array to be the type
456 # provided if possible.
ValueError: None values not supported.
</code></pre>
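<p>For reference, here is the variant I suspect avoids the <code>None</code>: using the dynamic <code>tf.shape</code> instead of the static shape, whose batch dimension is <code>None</code> (a sketch):</p>
<pre><code>num_samples = tf.shape(self.rnn)[0]   # dynamic batch size, a scalar tensor
quantiles_shape = [self.num_quantiles * num_samples, 1]
self.quantiles = tf.random_uniform(quantiles_shape, minval=0, maxval=1, dtype=tf.float32)
</code></pre>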
| D |
58,583,175 | 1 | <python><tensorflow> | 2019-10-27T20:40:38.900 | 58673412 | Create ground truth in TF loss function | <p>I'm trying to build my own model for 3D object detection. My net consists of 2 convolutional networks, and the output is of shape (128,64,8). I'm using the DenseBox object detection approach, and therefore my ground truth should look like this, for example: <a href="https://i.stack.imgur.com/t7Mdv.png" rel="nofollow noreferrer">ground truth for image with one object</a>. That's the first channel; there are 7 more. It's a lot of data to feed to a tensorflow model, so I decided to feed as labels just a few points (the corners of the bounding box), which are enough to recover the center of that circle. I then intended to draw that circle onto some 2D array (I also need to apply a GaussianBlur to that array) and compare it in the loss function.</p>
<p>Does someone know how to achieve this?</p>
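<p>For illustration, the kind of target I have in mind could be rasterized offline like this (a numpy sketch with made-up numbers):</p>
<pre><code>import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 128, 64
x0, y0, x1, y1 = 20, 10, 50, 30                 # hypothetical box corners
heatmap = np.zeros((H, W), dtype=np.float32)
heatmap[(y0 + y1) // 2, (x0 + x1) // 2] = 1.0   # mark the box center
heatmap = gaussian_filter(heatmap, sigma=3)     # soft circle around the center
</code></pre>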
| D |
58,607,510 | 2 | <tensorflow><variables><session> | 2019-10-29T12:28:48.310 | 58608645 | different results for the same code with tf.control_dependencies | <p>I run a piece of code twice and get two different results</p>
<p>The code:</p>
<pre><code>import tensorflow as tf

x = tf.placeholder(tf.int32, shape=[], name='x')
y = tf.Variable(2, dtype=tf.int32)

assign_op = tf.assign(y, y + 1)
out = x * y

with tf.control_dependencies([assign_op]):
    out_ = out + 2

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(3):
        print('output:', sess.run(out_, feed_dict={x: 1}))
</code></pre>
<p>First output:</p>
<pre class="lang-none prettyprint-override"><code>output: 4
output: 5
output: 6
</code></pre>
<p>Second output:</p>
<pre class="lang-none prettyprint-override"><code>output: 4
output: 6
output: 6
</code></pre>
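<p>For what it's worth, I would expect the variant below to be deterministic, since the read of <code>y</code> is then created inside the dependency block and so ordered after the assign (a sketch, untested):</p>
<pre><code>with tf.control_dependencies([assign_op]):
    out_ = x * y.read_value() + 2   # the read is ordered after assign_op
</code></pre>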
<p>Can someone please explain why this happens?</p>
| D |
58,615,923 | 1 | <pytorch><softmax> | 2019-10-29T21:53:44.563 | 58616251 | PyTorch Softmax Output Doesn't Sum to 1 | <p>Cross posting <a href="https://discuss.pytorch.org/t/torch-nn-functionals-softmax-does-not-sum-to-1/59516" rel="nofollow noreferrer">my question from the PyTorch forum</a>:</p>
<p>I started receiving negative KL divergences between a target Dirichlet distribution and my model’s output Dirichlet distribution. Someone online suggested that this might indicate that the parameters of the Dirichlet distribution don’t sum to 1. I thought this was ridiculous since the output of the model is passed through</p>
<p><code>output = F.softmax(self.weights(x), dim=1)</code></p>
<p>But after looking into it more closely, I found that <code>torch.all(torch.sum(output, dim=1) == 1.)</code> returns False! Looking at the problematic row, I see that it is <code>tensor([0.0085, 0.9052, 0.0863], grad_fn=<SelectBackward>)</code>. But <code>torch.sum(output[5]) == 1.</code> produces <code>tensor(False)</code>.</p>
<p>What am I misusing about softmax such that output probabilities do not sum to 1?</p>
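<p>(The sums are extremely close to 1, so possibly only the exact comparison is at fault; a tolerance-based check would look like this sketch:)</p>
<pre><code>sums = torch.sum(output, dim=1)
print(torch.allclose(sums, torch.ones_like(sums)))  # float comparison with tolerance
</code></pre>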
<p>This is PyTorch version 1.2.0+cpu. Full model is copied below:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

def assert_no_nan_no_inf(x):
    assert not torch.isnan(x).any()
    assert not torch.isinf(x).any()

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Linear(
            in_features=2,
            out_features=3)

    def forward(self, x):
        output = F.softmax(self.weights(x), dim=1)
        assert torch.all(torch.sum(output, dim=1) == 1.)
        assert_no_nan_no_inf(x)
        return output
</code></pre>
| D |
58,616,994 | 1 | <python><tensorflow><machine-learning> | 2019-10-30T00:01:07.133 | 58618891 | How can I change the way TensorFlow performs addition? | <p>I have designed a new method to do additions that returns "approximately" correct results (i.e. it outputs a rough estimate of the actual sum). I want to quantify the effect of approximating addition this way on the accuracy of neural network inference.</p>
<p>I found a TensorFlow implementation of a neural network whose accuracy I want to assess (<a href="https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet" rel="nofollow noreferrer">MobileNet</a>). I want to run inference on various examples using this network; <em>however</em>, I want this network to perform any of its necessary additions <em>using my approximate addition method</em>.</p>
<p>In other words, anytime the network tries to perform an addition, I want the addition to be done using my approximate add operation instead.</p>
<p>I found <a href="https://www.tensorflow.org/guide/create_op" rel="nofollow noreferrer">documentation</a> that describes how to create your own TensorFlow operation. I can use this to implement my approximate addition operation.</p>
<p><strong>What's the easiest way for me to "convert" all additions in the existing MobileNet implementation to use my approximate addition instead?</strong></p>
<p>It's not as simple as copying and pasting over all instances of <code>tf.add</code>, unfortunately. Additions are used all over the place, from ReLU operations to <code>conv2d</code> layers, and I need to make sure that all additions used in the inference are done using my adder instead.</p>
| D |
58,618,358 | 2 | <python> | 2019-10-30T03:31:07.730 | 58618425 | How to apply minmax scaler according to different dataframe | <p>I have a dataframe as below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'category': ['fruits','fruits','fruits','fruits','fruits','vegetables','vegetables','vegetables','vegetables','vegetables'],
'product' : ['apple','orange','durian','coconut','grape','cabbage','carrot','spinach','grass','potato'],
'sales' : [10,20,30,40,100,10,30,50,60,100]
})
df.head(15)
</code></pre>
<p>Current method: normalize according to a single category in df, manually:</p>
<pre><code>from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
df_fruits = df[df['category'] == "fruits"]
df_fruits['sales'] = scaler.fit_transform(df_fruits[['sales']])
df_fruits.head()
df_fruits.to_csv('minmax/output/category-{}-minmax.csv'.format('XX'))
</code></pre>
<p>Questions: <br>
- how to loop through all the categories in df accordingly <br>
- then how to export the csv files accordingly, with the category name in each (see the sketch below)</p>
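<p>For concreteness, the shape of loop I am imagining (an untested sketch):</p>
<pre><code>for name, group in df.groupby('category'):
    group = group.copy()
    group['sales'] = scaler.fit_transform(group[['sales']])
    group.to_csv('minmax/output/category-{}-minmax.csv'.format(name), index=False)
</code></pre>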
<p>thanks a lot</p>
| No |
58,625,848 | 0 | <function><assembly><mips> | 2019-10-30T12:54:49.073 | null | MIPS Assembly language Power Program | <p>im learning mips and i dont understand what to put inside the (complete the codes) section and my teacher confuses the hell outta everyone so its a struggle trying to understand therefore any help is welcomed</p>
<pre><code># Power program
# ----------------------------------
# Data Segment
.data
var_x: .word 3
var_y: .word 5
result: .word 0
# ----------------------------------
# Text/Code Segment
.text
.globl main
main:
lw $a0, var_x # load word var_x in memory to $a0
lw $a1, var_y # load word var_y in memory to $a1
jal power # call the function power
sw $v0, result # save word from $v0 to result in memory
# complete other codes here to print result in console window
# -----
# Done, exit program.
li $v0, 10 # call code for exit
syscall # system call
.end main
# ----------------------------------
# power
# arguments: $a0 = x
# $a1 = y
# return: $v0 = x^y
# ----------------------------------
.globl power
power:
add $t0,$zero,$zero # initialize $t0 = 0, $t0 is used to record how many times we do the operations of multiplication
addi $v0,$v0,1 # set initial value of $v0 = 1
power_loop: beq $t0, $a1, exit_L
mul $v0,$v0,$a0 # multiply $v0 and $a0 into $v0
addi $t0,$t0,1 # update the value of $t0
j power_loop
exit_L: jr $ra
.end power
</code></pre>
| No |
58,633,177 | 2 | <python><tensorflow><machine-learning> | 2019-10-30T20:28:03.733 | null | Why there's a big jump (up) of the loss curve during the training? | <p>I've been training exactly the same model (with exactly the same training dataset) twice, but the results are very different, and I am confused about the behavior of the loss curves.</p>
<p>The loss curve of the 1st experiment (<strong>red curve</strong>) suddenly jumps up near the end of the first epoch, and then slowly, steadily decreases.</p>
<p><a href="https://i.stack.imgur.com/8Ln9V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Ln9V.png" alt="enter image description here"></a></p>
<p>However, the loss curve of the 2nd experiment (<strong>blue curve</strong>) didn't jump up anywhere, and always steadily decreased to convergence. The loss after 20 epochs is much lower than in the 1st experiment, and I got very good quality output.</p>
<p><a href="https://i.stack.imgur.com/ogS93.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ogS93.png" alt="enter image description here"></a></p>
<p>I don't know what caused that big jump the first time. Both experiments used the same model and the same training dataset.</p>
<p>Description of the model:
My project is sparse-view CT image reconstruction. My goal is to reconstruct the sparse-view image using an iterative method + a CNN inside each iteration. This is very similar to the LEARN algorithm proposed by Chen.</p>
<p>The process contains 30 iterations, and at each iteration I use a CNN to better train the regularization term.</p>
<p>Since I have 30 iterations and 3+ CNN layers in each iteration (I've been trying architectures of different complexity), I understand there will be a large number of parameters and layers.</p>
<p>So far, the "big jump" has happened quite often with all the CNN architectures I've been testing.</p>
<p>The training data consists of 3600 512*512 sparse-view CT images, and the test data consists of 360 sparse-view CT images. </p>
<p>The batch size is 1, and epoch = 20. </p>
<p><strong>UPDATE:</strong>
Thank you all for the suggestions. After reading the answers, I started thinking about gradient exploding/vanishing issues. So I changed ReLU to ELU, changed the weight initialization from Xavier to He, and added gradient clipping. The results turned out great. I ran the standard model (the same model as mentioned above) five more times, and the loss now decreases steadily in all of them. For the other models with different CNN architectures, their losses also decrease and no major spikes happened.</p>
<p>The code already shuffles the training dataset at the beginning of every epoch. What I'm planning to do next is adding batch normalization and trying max_norm regularization.</p>
| D |
58,636,190 | 0 | <tensorflow><machine-learning><distance> | 2019-10-31T02:33:12.223 | null | TensorFlow 1: Many Distance Calculation | <p>I have two sets of n-dimensional points. These sets are arranged in two tensors, call them A and B. I want to iterate over the points in A and assign each to the closest point in B. However, I want to do this without replacement: as soon as a point in B is assigned to a point in A, it is not available to be assigned to another point in A.</p>
<p>I want to implement this in TensorFlow. Right now, the best I have is a distance matrix (code below). However, <code>tf.reduce_max(distances, axis=1)</code> will not give me what I want, because it will not use all the points in both A and B (assuming A has more points than B), as described in the paragraph above.</p>
<pre><code>distances = tf.sqrt(tf.reduce_sum((A[:,np.newaxis] - B)**2, axis=2))
</code></pre>
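<p>A related but different technique I am aware of is optimal one-to-one assignment (the Hungarian algorithm) via SciPy, run on the evaluated matrix; a sketch, noting that it leaves the TensorFlow graph:</p>
<pre><code>from scipy.optimize import linear_sum_assignment

distances_np = sess.run(distances)               # assumes a session is available
row_ind, col_ind = linear_sum_assignment(distances_np)
# point row_ind[i] in A is matched to col_ind[i] in B; each B point is used at most once
</code></pre>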
| D |
58,662,911 | 0 | <editor><web-component><monaco-editor> | 2019-11-01T16:48:17.083 | null | Inserting WebComponents as content into Monaco Editor | <p>Is there a way to insert a WebComponent into the Monaco Editor? I'd like to be able to drop an HTML WebComponent onto the editor and the WebComponent would be interactive. For example, I could drop a timestamp component onto the editor that looks like [00:00:00.000] but the hours, minutes, seconds etc are interactive, i.e. can use up and down arrows. Behind the scenes, I know Monaco is a hidden textarea and I'd therefore like to expose something like an innerText value from the component so that Monaco just receives the text. That's one direction - on reload I'd like it to detect the timestamp values and render out Webcomponents preset with the values.</p>
<p>Fingers crossed! If this is not possible: I have seen that Monaco has IContentWidgets, and I wonder if this is something that could be used?</p>
| No |
58,676,503 | 1 | <android><android-intent><textview> | 2019-11-03T00:37:35.743 | null | App crashes while setting text in textView | <p>XML code:</p>
<pre><code><LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical" android:layout_width="match_parent"
    android:layout_height="192dp"
    android:background="@color/colorPrimaryDark"
    android:theme="@style/ThemeOverlay.AppCompat.Dark"
    android:gravity="bottom"
    android:id="@+id/nav_header">
    <TextView
        android:id="@+id/user_Name"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:paddingLeft="20dp"
        android:layout_marginBottom="8dp"
        android:textSize="20dp"
        android:textColor="@android:color/white"
        android:textAppearance="@style/TextAppearance.AppCompat.Body1"/>
</LinearLayout>
</code></pre>
<p>Java code:</p>
<pre><code>@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_home);
btn2 = findViewById(R.id.button);
TextView textView = findViewById(R.id.user_Name);
textView.setText("My text");
}
</code></pre>
<p>The app crashes when I use this code to set the text; I get a NullPointerException.</p>
| No |
58,677,717 | 0 | <tensorflow> | 2019-11-03T05:45:05.437 | null | No module named 'tensorflow_core.estimator' | <p>Hello, I believe you can help with my problem. I am using Windows 10, 64-bit. I run the code below in Spyder, but I get the error "ModuleNotFoundError: No module named 'tensorflow_core.estimator'". I uninstalled tensorflow, tensorboard and tensorflow-estimator and then installed them again, but the error did not disappear. How can I solve it?</p>
<p>Code:</p>
<pre><code>import time
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import dataset
</code></pre>
| D |
58,681,024 | 0 | <python-3.x><pytorch><multilingual><bert-language-model> | 2019-11-03T14:16:02.743 | null | How to fix 'RuntimeError: the size of tensor a must match the tensor b' in Bert-multilingual in pytorch? | <p>Good morning, everyone.
I am a beginner in the world of machine learning, and I am trying to use BERT with PyTorch, more particularly <code>BertForMaskedLM</code>. I use the <code>bert-base-multilingual</code> model. However, I get the following error message when I want to predict tokens. Could someone help me?</p>
<blockquote>
<p>The size of tensor a (16) must match the size of tensor b (14) at non-singleton dimension 1</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
text = "[CLS] Salut je suis Eva et j'aime bien les pommes. [SEP]"
tokenized_text = tokenizer.tokenize(text)
masked_index = 11
tokenized_text[masked_index] = '[MASK]'
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0,0,0,0,0,0,0,0,0,0,0,0,0,0]
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
model = BertForMaskedLM.from_pretrained('bert-base-multilingual-cased')
model.eval()

with torch.no_grad():
    predictions = model(tokens_tensor, segments_tensors)
</code></pre>
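<p>One thing I have since wondered about (an assumption on my part): multilingual BPE may split the sentence into more pieces than I counted by hand, so deriving the segment ids from the tokenized length might avoid the mismatch:</p>
<pre><code>segments_ids = [0] * len(tokenized_text)   # one segment id per actual token
segments_tensors = torch.tensor([segments_ids])
</code></pre>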
| D |
58,688,791 | 0 | <c#><smpp> | 2019-11-04T07:19:49.900 | null | I am getting an error when trying to start an Smpp client using the JamaaTech Smpp library | <p>When I try to start an SMPP client, I get the following exception at:</p>
<pre><code>JamaaTech.Smpp.Net.Lib.Networking.TcpIpSession.RaiseSessionClosedEvent(JamaaTech.Smpp.Net.Lib.Networking.SessionCloseReason, System.Exception)
at JamaaTech.Smpp.Net.Lib.Networking.TcpIpSession.Receive(Byte[], Int32, Int32)
</code></pre>
<p>I am using the JamaaTech SMPP library in C#.</p>
<p>It was connecting properly, but now I am getting that error.</p>
<pre><code>var client = new SMPPClient.SMPPClient(s);
var smppCl = client.GetSMPPClient();
connections.GetOrAdd(s.SMSCID.ToString(), smppCl);
await Task.Run(() => smppCl.Start());
</code></pre>
| No |
58,717,324 | 0 | <python-3.x><tensorflow><keras><protocol-buffers><emgucv> | 2019-11-05T18:23:51.850 | null | How to create a PB from network containing batch normalization | <p>I want to create a PB file from a Keras model and serve it with EmguCV (or at least OpenCV; EmguCV is preferred) using <code>DnnInvoke.readnetfromTensorflow</code>.
I create the network using this code:</p>
<pre><code>from keras import backend as K
from keras.callbacks import *
from keras.layers import *
from keras.models import *
from keras.utils import *
from keras.optimizers import Adadelta, RMSprop, Adam, SGD
from keras.callbacks import ModelCheckpoint
from keras.callbacks import TensorBoard
from config import *
def ctc_lambda_func(args):
    iy_pred, ilabels, iinput_length, ilabel_length = args
    # the 2 is critical here since the first couple outputs of the RNN
    # tend to be garbage:
    iy_pred = iy_pred[:, 2:, :]  # no such influence
    return K.ctc_batch_cost(ilabels, iy_pred, iinput_length, ilabel_length)

def CRNN_model(is_training=True):
    inputShape = Input((width, height, 1), name='input')  # based on Tensorflow backend
    conv_1 = Conv2D(64, (3, 3), activation='relu', padding='same')(inputShape)
    conv_2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv_1)
    #batchnorm_2 = BatchNormalization()(conv_2)
    pool_2 = MaxPooling2D(pool_size=(2, 2))(conv_2)
    conv_3 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool_2)
    conv_4 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv_3)
    #batchnorm_4 = BatchNormalization()(conv_4)
    pool_4 = MaxPooling2D(pool_size=(2, 2))(conv_4)
    conv_5 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool_4)
    conv_6 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv_5)
    pool_5 = MaxPool2D(pool_size=(2, 2))(conv_6)
    #batchnorm_6 = BatchNormalization()(conv_6)
    #bn_shape = batchnorm_6.get_shape()
    #print(bn_shape)
    #x_reshape = Reshape(target_shape=(int(bn_shape[1]), int(bn_shape[2] * bn_shape[3])))(batchnorm_6)
    #drop_reshape = Dropout(0.25, name='d1')(x_reshape)
    fl_1 = Flatten()(pool_5)
    fc_1 = Dense(256, activation='relu')(fl_1)
    #print(x_reshape.get_shape())
    #print(fc_1.get_shape())
    bi_LSTM_1 = Bidirectional(LSTM(256, return_sequences=True, kernel_initializer='he_normal'), merge_mode='sum')(fc_1)
    bi_LSTM_2 = Bidirectional(LSTM(128, return_sequences=True, kernel_initializer='he_normal'), merge_mode='concat')(bi_LSTM_1)
    #drop_rnn = Dropout(0.3, name='d2')(bi_LSTM_2)
    fc_2 = Dense(label_classes, kernel_initializer='he_normal', activation='softmax')(bi_LSTM_2)
    base_model = Model(inputs=[inputShape], outputs=fc_2)
    labels = Input(name='the_labels', shape=[label_len], dtype='float32')
    input_length = Input(name='input_length', shape=[1], dtype='int64')
    label_length = Input(name='label_length', shape=[1], dtype='int64')
    loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([fc_2, labels, input_length, label_length])
    if is_training:
        return Model(inputs=[inputShape, labels, input_length, label_length], outputs=[loss_out]), base_model
    else:
        return base_model
</code></pre>
<p>and I use the code below to create the .pb file:</p>
<pre><code>import tensorflow as tf

mfname = './models/weights.01-0.080-0.007.hdf5'  # FIXME

tf.keras.backend.set_learning_phase(0)
sess = tf.keras.backend.get_session()
sess.as_default()
model = tf.keras.models.load_model(mfname, compile=False)
constant_graph = tf.graph_util.convert_variables_to_constants(
    sess,
    sess.graph.as_graph_def(),
    [out.op.name for out in model.outputs],
)
tf.train.write_graph(constant_graph, '', mfname[:-4] + '_graph.pb', as_text=False)
</code></pre>
<p>but when I call <code>DnnInvoke.ReadNetFromTensorflow</code> it shows this error:</p>
<blockquote>
<p>Emgu.CV.Util.CvException: 'OpenCV: Input layer not found:
dense_1/Tensordot/free'</p>
</blockquote>
<p>How can I solve this problem?</p>
| D |
58,720,791 | 0 | <python><tensorflow><tensorflow2.0> | 2019-11-05T23:05:58.013 | null | Is Masking useful for a custom loss and gradient descent? | <p>There is a layer type named <code>Masking</code> in TF2 that claims it can mask temporal data. I did some tests and found that subsequent layers can still add values at the positions that were masked out in the first place. For example:</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Masking
from tensorflow.keras.models import Sequential

def compute_valid_entries(x, mask):
    n = np.sum(mask)  # was np.reduce_sum, which does not exist in numpy
    print(n)
    for i in range(len(mask.shape)):
        if mask.shape[i] == 1:
            n *= x.shape[i]
    return n

x = np.reshape(np.arange(100), (2, 5, 10)).astype(np.float32)
mask = np.ones(10).reshape((2, 5))
mask[1] = 0
mask = mask[..., None]
x_masked = x * mask

model = Sequential([Masking(), Dense(2, bias_initializer=tf.constant_initializer(1.))])
print(model(x_masked))
# prints a tensor of shape (2, 5, 2) with no zeros
</code></pre>
<p>If that's the case, what's the point of adding <code>Masking</code>? How is the <code>Masking</code> layer beneficial? How should I use it?</p>
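<p>For context, here is a minimal sketch of the setting where a mask actually takes effect, assuming TF2's Keras API: the mask is only consumed by mask-aware layers such as recurrent or attention layers, while <code>Dense</code> computes on every position regardless, which would explain the non-zero outputs above.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import LSTM, Masking
from tensorflow.keras.models import Sequential

x = np.reshape(np.arange(100), (2, 5, 10)).astype(np.float32)
x[1] = 0.  # every timestep of the second sequence is all-zero

model = Sequential([
    Masking(mask_value=0.),          # flags all-zero timesteps as "skip"
    LSTM(4, return_sequences=True),  # recurrent layers consume the mask
])
print(model(x))
</code></pre>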
| D |
58,733,721 | 0 | <python><deep-learning><pytorch> | 2019-11-06T15:38:04.770 | null | Using the full PyTorch Transformer Module | <p>I tried asking this question on the PyTorch forums but didn't get any response, so I am hoping someone here can help me. Additionally, if anyone has a good example of using the transformer module, please share it, as the documentation only shows a simple linear decoder. For the transformer I'm aware that we generally feed in the actual target sequence. So my first question concerns my setup: prior to the transformer I have a standard linear layer to transform my time-series sequence, along with positional encodings. According to the transformer module's documentation, the src and tgt sequences need to have the same dimension.</p>
<pre><code>import torch
from torch.nn.modules.transformer import Transformer

class TransformerTimeSeries(torch.nn.Module):
    def __init__(self, n_time_series, d_model=128):
        super().__init__()  # was super()._init__(), a typo
        self.dense_shape = torch.nn.Linear(n_time_series, d_model)
        self.pe = SimplePositionalEncoding(d_model)
        self.transformer = Transformer(d_model, nhead=8)
</code></pre>
<p>So I was wondering: can I simply do something like this, or will this somehow leak information about the target? I'm still not actually sure how <code>loss.backward()</code> works, so I'm not sure if this will cause problems.</p>
<pre><code>def forward(self, x, t):
    x = self.dense_shape(x)
    x = self.pe(x)
    t = self.dense_shape(t)
    t = self.pe(t)
    x = self.transformer(x, t)
    return x
</code></pre>
<p>Secondly, does the target sequence need any sort of offset? So for instance if I have the time series [0,1,2,3,4,5,6,7] and I want to feed in [0,1,2,3] to predict [4,5,6,7] (tgt)? Would I simply feed it in like that or is it more complicated? Typically BERT and those models have [CLS] and [SEP] tokens to denote the beginning and end of sentences however, for time series I assume I don't need a separator time step.</p>
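<p>For what it's worth, here is a hedged sketch of the convention I have seen for the offset (not necessarily the only valid one): the decoder input is the target shifted right by one step, with a causal mask so that position i cannot attend to later positions.</p>
<pre><code>import torch

seq = torch.arange(8, dtype=torch.float32)
src     = seq[0:4].reshape(4, 1, 1)  # (S, N, E) before the linear projection
tgt_in  = seq[3:7].reshape(4, 1, 1)  # decoder input, shifted right by one
tgt_out = seq[4:8].reshape(4, 1, 1)  # what the loss compares against

# Causal mask: position i may not attend to positions j > i.
sz = tgt_in.size(0)
tgt_mask = torch.triu(torch.full((sz, sz), float('-inf')), diagonal=1)
# passed as: self.transformer(x, t, tgt_mask=tgt_mask)
</code></pre>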
| D |
58,734,939 | 1 | <python><tensorflow> | 2019-11-06T16:48:00.520 | null | Full installation of tensorflow (all modules)? | <p>I have this repository: <a href="https://github.com/layog/Accurate-Binary-Convolution-Network" rel="nofollow noreferrer">https://github.com/layog/Accurate-Binary-Convolution-Network</a>. As requirements.txt says, it requires tensorflow==1.4.1. So I am using miniconda (on Ubuntu 18.04) and, for the love of God, I can't get it to run (it errors out at the line below):</p>
<pre><code>from tensorflow.examples.tutorials.mnist import input_data
</code></pre>
<p>This gives me an ImportError saying it can't find tensorflow.examples. I have diagnosed that a few modules are missing after I installed tensorflow (I have tried all of the ways below):</p>
<pre><code>pip install tensorflow==1.4.1
conda install -c conda-forge tensorflow==1.4.1
#And various wheel packages available on the internet for 1.4.1
pip install tensorflow-1.4.0rc1-cp36-cp36m-manylinux1_x86_64.whl
</code></pre>
<p>Question is, if I want all the modules which are present in the git repo source as my installed copy, do I have to COMPLETELY build tensorflow from source ? If yes, can you mention the flag I should use? Are there any wheel packages available that have all modules present in them ?
A link would save me tonnes of effort! </p>
<p><strong>NOTE</strong>: Even if I manually import the examples directory, it says tensorflow.contrib is missing, and if I locally import that too, another ImportError pops up. There has to be an easier way, I am sure of it.</p>
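<p>For reference, the snippet below is a hedged workaround rather than a confirmed fix: <code>tensorflow.examples</code> ships with the repo's tutorials rather than with every wheel, but the MNIST data itself is still reachable through the bundled Keras API (assuming the installed build includes <code>tf.keras</code>):</p>
<pre><code>import tensorflow as tf

# Pull in the same dataset without tensorflow.examples:
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape)  # (60000, 28, 28)
</code></pre>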
| D |
58,740,068 | 1 | <python><tensorflow><module> | 2019-11-06T23:54:01.773 | null | How do I import packages and modules in Python without any errors? | <p>I am trying to import tensorflow, matplotlib, numpy, and keras. When I try to do this I get an error saying that the module does not exist.</p>
<p>I have tried installing the latest version of everything as well as reinstalling them. </p>
<pre><code>import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
data = keras.datasets.fashion_mnist  # was keras.dataset_fashion_mnist
(train_images, train_labels), (test_images, test_labels) = data.load_data()  # was data.data_load()
print(train_labels[1])
</code></pre>
<p>I expect the output to be a number between 0 and 9.</p>
<p>There might be an error such as:</p>
<pre><code>C:\Users\admin\Desktop>python "tensorflow.py"
Traceback (most recent call last):
File "tensorflow.py", line 2, in <module>
import matplotlib.pyplot as plt
ModuleNotFoundError: No module named 'matplotlib'
</code></pre>
<p>Note that this is not the only module having issues; the same thing happens with numpy and tensorflow.</p>
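<p>One hedged sanity check worth adding here (the install path in the comment is hypothetical): confirm which interpreter actually runs the script, since packages installed into a different one will not be found.</p>
<pre><code>import sys

print(sys.executable)  # the Python binary executing this script
# Install into *that* interpreter from a shell, e.g.:
#   /path/printed/above -m pip install matplotlib numpy tensorflow
</code></pre>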
| D |
58,740,129 | 1 | <android><android-sqlite> | 2019-11-07T00:01:38.917 | 58740424 | Should I retrieve records from a database in the onResume() call? | <p>The home page of my app displays a list of records. Another page allows adding a new record. After creating a new record, on hitting the save button, when the app goes back to the home page, I want the latest record to also be displayed. </p>
<p>Is there a better way of retrieving all the records rather than fetching them all from the SQLite database in a call inside onResume()?</p>
| No |
58,755,778 | 0 | <python><inheritance><super> | 2019-11-07T19:35:25.203 | null | Setting a parent variable to a child type while preserving data | <p>(Python) Hello! At one point in this network server code, I need to set a variable called select_key.data, which is a ConnectionData object, equal to server, which is a ServerDetails object (a type that inherits from ConnectionData, as shown below).</p>
<p>ConnectionData</p>
<pre><code>class ConnectionData(object):
    def __init__(self):
        self.read_buffer = ""
        self.write_buffer = ""
</code></pre>
<p>ServerDetails</p>
<pre><code>class ServerDetails(ConnectionData):
    def __init__(self):
        super(ServerDetails, self).__init__()
        self.servername = None
        self.hopcount = None
        self.info = None
        self.first_link = None
</code></pre>
<p>Whenever I simply try <code>select_key.data = server</code> I get "can't set attribute". What is the best way to go about this?
Thank you!!</p>
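<p>In case it matters, a hedged sketch of the situation <em>if</em> select_key is a <code>selectors.SelectorKey</code> (a namedtuple, hence read-only, which would produce exactly this error); the supported route there is to re-register the data via <code>modify()</code>:</p>
<pre><code>import selectors

def swap_key_data(sel, key, server):
    # SelectorKey is immutable, so key.data = server raises AttributeError.
    # Re-registering with the new data object returns a fresh key instead:
    return sel.modify(key.fileobj, key.events, data=server)
</code></pre>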
| No |
58,760,303 | 1 | <c++><string><vector><parentheses> | 2019-11-08T04:18:48.183 | null | Write a function that adds parentheses to the beginning and end to make all parentheses match and return it | <p>Given a string of parentheses, such as <code>(((())((()</code>, write a function that adds parentheses to the beginning and end to make all parentheses match and return it.</p>
<p>I'm trying to figure out how to output this. </p>
<p>Input: <code>)(()(</code></p>
<p>Output: <code>()(()())</code></p>
<p>I've tried using <code>cout << paranFix(s)</code> but it does not give me the output desired above.</p>
<p>It has to be the same as above. Any help is much appreciated.</p>
<pre><code>#include <iostream>
#include <string>
#include <vector>
using namespace std;
string paranFix(string input) {
    string output;
    vector<string> strVector;

    for (unsigned int a = 0; a < input.size(); ++a) {
        if (input[a] == ')') {
            if (strVector.empty()) {
                output += "(";
            }
            else {
                strVector.pop_back();
            }
        }
        else if (input[a] == '(') {
            strVector.push_back(")");
        }
        output += input[a];
    }

    while (!strVector.empty()) {
        output += strVector.back();
        strVector.pop_back();
    }

    return output;
}

int main() {
    string s = "(((())((()"; // Given String
    cout << "INPUT: ";       // Need to output --> "INPUT: )(()( "
    cout << "OUTPUT: ";      // Need to output --> "OUTPUT: ()(()()) "
    cout << paranFix(s);     // This outputs: (((())((())))), which is incorrect
    return 0;
}
</code></pre>
<p>This is what the program should output with the given string of parentheses <code>(((())((()</code>.</p>
<pre><code>Input: )(()(
Output: ()(()())
</code></pre>
| No |
58,772,503 | 0 | <python><nlp><pytorch><softmax> | 2019-11-08T19:11:23.623 | null | Output probabilities of next word given inputted word and target word in an RNN using Pytorch | <p>I am running a recurrent network using Pytorch 1.3.0, where the input are words from a small probabilistic grammar (one-hot encoding), and the target is one word ahead from the input. I have succeeded in running the model, but I am wondering if there is a way to return the probability distributions of possible next words given the input word. My end goal is to measure how much the output from the network deviates from this probability distribution. </p>
<p>I’ve seen other examples use the softmax function to get a tensor of probabilities, but only in the context of word/character generation. So I used the same function, but inserted it within the forward function instead. And then I used <code>print()</code> because I couldn’t get the function to return the values without an error (and also because I have no idea what I’m doing).</p>
<p>For simplicity, I am only running the model through one sentence: “the cat runs”</p>
<p>The one hot encoding is as follows, along with the target sequence “cat runs .” (period included in encoding):</p>
<pre class="lang-py prettyprint-override"><code>input_seq = np.array([[[0., 0., 1., 0.],
                       [0., 1., 0., 0.],
                       [0., 0., 0., 1.]]], dtype=np.float32)
target_seq = [[1, 3, 0]]
</code></pre>
<p>And then using Pytorch, here is my code for the model:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn

device = torch.device("cpu")
input_seq = torch.from_numpy(input_seq)
target_seq = torch.Tensor(target_seq)

class Model(nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers):
        super(Model, self).__init__()
        self.hidden_dim = hidden_dim
        self.n_layers = n_layers
        self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, x):
        batch_size = x.size(0)
        hidden = self.init_hidden(batch_size)
        out, hidden = self.rnn(x, hidden)
        out = out.contiguous().view(-1, self.hidden_dim)
        out = self.fc(out)
        prob = nn.functional.softmax(out[-1], dim=0).data  # here is where I included the code
        print(prob)
        return out, hidden

    def init_hidden(self, batch_size):
        hidden = torch.zeros(self.n_layers, batch_size, self.hidden_dim).to(device)
        return hidden
</code></pre>
<p>And here is where I instantiate the model:</p>
<pre class="lang-py prettyprint-override"><code>model = Model(input_size=dict_size, output_size=dict_size, hidden_dim=12, n_layers=1)
model.to(device)
n_epochs = 5 # epochs set to 5 for simplicity
lr=0.01
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
</code></pre>
<p>And lastly the training:</p>
<pre class="lang-py prettyprint-override"><code>for epoch in range(1, n_epochs + 1):
    optimizer.zero_grad()
    input_seq = input_seq.to(device)
    output, hidden = model(input_seq)
    loss = criterion(output, target_seq.view(-1).long())
    loss.backward()
    optimizer.step()

    print('Epoch: {}/{}.............'.format(epoch, n_epochs), end=' ')
    print("Loss: {:.4f}".format(loss.item()))
</code></pre>
<p>When I run the code, it prints the probability for each word as a tensor, but only at the end of each epoch instead of after each word inputted. </p>
<p>So my two questions are:</p>
<p><b>1)</b> Is there any way the probabilities can be outputted after each word in the sequence? So in this example, when the model processes "cat" it returns probabilities for all possible words. Or is that something that can’t be done?</p>
<p><b>2)</b> And then if yes to question 1, is there a way I can return the probabilities not with the <code>print()</code> function, but as a variable I can access? (So basically returning the variable without causing errors)</p>
<p>This is my first time asking a question on stack overflow, and also my first real experience with Pytorch, so I apologize in advance for any confusion (and my lack of knowledge in this area)!</p>
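<p>For concreteness, here is a hedged sketch of the kind of change I have in mind (I am not sure it is the right approach): keep the logits 3-D until after the softmax, so there is one distribution per input word, and return it instead of printing. The training loop would then unpack three values: <code>output, hidden, probs = model(input_seq)</code>.</p>
<pre><code>import torch.nn.functional as F

def forward(self, x):
    batch_size = x.size(0)
    hidden = self.init_hidden(batch_size)
    out, hidden = self.rnn(x, hidden)
    out = self.fc(out)                  # (batch, seq_len, vocab) logits
    probs = F.softmax(out, dim=-1)      # a distribution after *every* word
    out = out.contiguous().view(-1, out.size(-1))
    return out, hidden, probs.detach()  # detached: for inspection only
</code></pre>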
| D |
58,781,515 | 1 | <python><artificial-intelligence><pytorch><lstm><data-science> | 2019-11-09T16:57:19.750 | 58909193 | LSTM implementation / overfitting | <p>I am having a problem with an implementation of LSTM. I am not sure whether I have the right implementation or whether this is just an overfitting problem. I am doing essay grading using an LSTM, scoring text with scores from 0 to 10 (or other ranges). I am using the <a href="https://www.kaggle.com/c/asap-aes" rel="nofollow noreferrer">ASAP kaggle competition data</a> as one of the training datasets.</p>
<p>However, the main goal is to achieve good performance on a private dataset with around 500 samples. The 500 samples include the validation and training sets. I have previously done some experiments and got the model to work, but after fiddling with something, the model doesn't fit anymore. The model does not improve at all. I have also re-implemented the code in a cleaner manner with much more object-oriented code and still can't reproduce my previous result.</p>
<p>However, I am getting the model to fit my data, just with tremendous overfitting. I am not sure if this is an implementation problem of some sort or just overfitting, but I cannot get the model to work. The maximum I can get is 0.35 kappa using an LSTM on ASAP essay set 1. For some bizarre reason, I can get a single-layer fully connected model to reach 0.75 kappa. I think this is an implementation problem but I am not sure.</p>
<p>Here is my old code:</p>
<h2>train.py</h2>
<pre class="lang-py prettyprint-override"><code>import gensim
import numpy as np
import pandas as pd
import torch
from sklearn.metrics import cohen_kappa_score
from torch import nn
import torch.utils.data as data_utils
from torch.optim import Adam

from dataset import AESDataset
from network import Network
from optimizer import Ranger
from qwk import quadratic_weighted_kappa, kappa

batch_size = 32
device = "cuda:0"
torch.manual_seed(1000)

# Load data from csv
file_name = "data/data_new.csv"
data = pd.read_csv(file_name)
arr = data.to_numpy()
text = arr[:, :2]
text = [str(line[0]) + str(line[1]) for line in text]
text = [gensim.utils.simple_preprocess(line) for line in text]
score = arr[:, 2]
score = [sco * 6 for sco in score]
score = np.asarray(score, dtype=int)

train_dataset = AESDataset(text_arr=text[:400], scores=score[:400])
test_dataset = AESDataset(text_arr=text[400:], scores=score[400:])

score = torch.tensor(score).view(-1, 1).long().to(device)

train_loader = data_utils.DataLoader(train_dataset, shuffle=True, batch_size=batch_size, drop_last=True)
test_loader = data_utils.DataLoader(test_dataset, shuffle=True, batch_size=batch_size, drop_last=True)

out_class = 61
epochs = 1000

model = Network(out_class).to(device)
model.load_state_dict(torch.load("model/best_model"))
y_onehot = torch.FloatTensor(batch_size, out_class).to(device)
optimizer = Adam(model.parameters())
criti = torch.nn.CrossEntropyLoss()
# model, optimizer = amp.initialize(model, optimizer, opt_level="O2")

step = 0
for i in range(epochs):
    # Testing
    if i % 1 == 0:
        total_loss = 0
        total_kappa = 0
        total_batches = 0
        model.eval()
        for (text, score) in test_loader:
            out = model(text)
            out_score = torch.argmax(out, 1)
            y_onehot.zero_()
            y_onehot.scatter_(1, score, 1)
            kappa_l = cohen_kappa_score(score.view(batch_size).tolist(), out_score.view(batch_size).tolist())
            score = score.view(-1)
            loss = criti(out, score.view(-1))
            total_loss += loss
            total_kappa += kappa_l
            total_batches += 1
        print(f"Epoch {i} Testing kappa {total_kappa/total_batches} loss {total_loss/total_batches}")
        with open(f"model/epoch_{i}", "wb") as f:
            torch.save(model.state_dict(), f)
        model.train()
    # Training
    for (text, score) in train_loader:
        optimizer.zero_grad()
        step += 1
        out = model(text)
        out_score = torch.argmax(out, 1)
        y_onehot.zero_()
        y_onehot.scatter_(1, score, 1)
        kappa_l = cohen_kappa_score(score.view(batch_size).tolist(), out_score.view(batch_size).tolist())
        loss = criti(out, score.view(-1))
        print(f"Epoch {i} step {step} kappa {kappa_l} loss {loss}")
        loss.backward()
        optimizer.step()
</code></pre>
<h2>dataset.py</h2>
<pre class="lang-py prettyprint-override"><code>import gensim
import torch
import numpy as np

class AESDataset(torch.utils.data.Dataset):
    def __init__(self, text_arr, scores):
        self.data = text_arr
        self.scores = scores
        # note: this must be a loaded model, not a bare string;
        # presumably gensim.models.Word2Vec.load("w2vec_model_all")
        self.w2v_model = gensim.models.Word2Vec.load("w2vec_model_all")
        self.max_len = 500

    def __getitem__(self, item):
        vector = []
        essay = self.data[item]
        pad_vec = [1 for i in range(300)]
        for i in range(self.max_len - len(essay)):
            vector.append(pad_vec)
        for word in essay:
            word_vec = pad_vec
            try:
                word_vec = self.w2v_model[word]
            except:
                # print(f"Skipping word as word {word} not in dictionary")
                word_vec = pad_vec
            vector.append(word_vec)
        # print(len(vector))
        vector = np.stack(vector)
        tensor = torch.tensor(vector[:self.max_len]).float().to("cuda")
        score = self.scores[item]
        score = torch.tensor(score).long().to("cuda").view(1)
        return tensor, score

    def __len__(self):
        return len(self.scores)
</code></pre>
<h2>network.py</h2>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import torch.nn.functional as F

class Network(nn.Module):
    def __init__(self, output_size):
        super(Network, self).__init__()
        self.lstm = nn.LSTM(300, 500, 1, batch_first=True)
        self.dropout = nn.Dropout(p=0.5)
        # self.l2 = nn.L2
        self.linear = nn.Linear(500, output_size)

    def forward(self, x):
        x, _ = self.lstm(x)
        x = x[:, -1, :]
        x = self.dropout(x)
        x = self.linear(x)
        return x
</code></pre>
<p>My new code: <a href="https://github.com/Clement-Hui/EssayGrading" rel="nofollow noreferrer">https://github.com/Clement-Hui/EssayGrading</a></p>
| D |
58,788,165 | 0 | <python><tensorflow><autocomplete><pycharm><tensorflow2.0> | 2019-11-10T11:23:39.273 | null | Any chance to get full autocompletion for Tensorflow 2.0 in PyCharm? | <p>I have upgraded to tensorflow-gpu==2.0 and now I have very limited autocompletion in PyCharm (e.g. can't view a method signature). There seems to be some lazy loading mechanism that I'm not familiar with. Is there a way to have a full autocompletion working as in older TF versions?</p>
| D |
58,834,986 | 1 | <python-3.x><dictionary> | 2019-11-13T10:26:05.280 | 58835447 | picking up one key from multiple keys having same set of values | <p>I have a dictionary with the following pattern:</p>
<pre><code>input_dict_data = {
    'how to access outlook on open network': {'intent': 'access_email_from home'},
    'how to access evpn': {'intent': 'access_email_from home'},
    'how to access ess': {'intent': 'access_email'},
    'how to access mobile': {'intent': 'access'},
}
</code></pre>
<p>and a result list like:</p>
<pre><code>result = ['how to access outlook on open network','how to access evpn','how to access ess','how to access mobile']
</code></pre>
<p>I want to filter the result list such that if the intent is the same for two results, I retain the first value and delete the other.</p>
<p>I have created a function to filter, but I can't work out how to retain only one value per intent field:</p>
<pre><code>def intent_matching(result_list):
    result_none = [i for i in result_list if input_dict_data[i]['intent'] is None]
    result_intent = [i for i in result_list if input_dict_data[i]['intent'] is not None]
    result = [i for i in result_intent
              if not any(input_dict_data[i]['intent'] == input_dict_data[item]['intent']
                         for item in result_intent if i != item)]
    result = [*result_none, *result]
    return result
</code></pre>
<p>My final result is:</p>
<pre><code>['how to access ess', 'how to access mobile']
</code></pre>
<p>but I should get a result like:</p>
<pre><code>['how to access outlook on open network','how to access ess', 'how to access mobile']
</code></pre>
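<p>For illustration, here is a hedged sketch of the order-preserving filter I am after (first occurrence of each intent wins, and None intents are always kept):</p>
<pre><code>def intent_matching_first_wins(result_list):
    seen = set()
    filtered = []
    for item in result_list:
        intent = input_dict_data[item]['intent']
        if intent is None or intent not in seen:
            filtered.append(item)  # keep the first item per intent
        if intent is not None:
            seen.add(intent)
    return filtered

print(intent_matching_first_wins(result))
# ['how to access outlook on open network', 'how to access ess', 'how to access mobile']
</code></pre>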
| No |
58,844,602 | 1 | <python><json><dictionary> | 2019-11-13T19:52:19.313 | 58844691 | How do I handle a KeyError exception in python without exiting the dictionary? | <p>Basically I have some JSON data that I want to put in a MySQL db and to do this I'm trying to get the contents of a dictionary in a cursor.execute method. My code is as follows:</p>
<pre><code>for p in d['aircraft']:
    with connection.cursor() as cursor:
        print(p['hex'])
        sql = "INSERT INTO `aircraft` (`hex`, `squawk`, `flight`, `lat`, `lon`, `nucp`, `seen_pos`, " \
              "`altitude`, `vert_rate`, `track`, `speed`, `messages`, `seen`, `rssi`) " \
              "VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s )"
        cursor.execute(sql, (p['hex'], p['squawk'], p['flight'], p['lat'], p['lon'], p['nucp'], p['seen_pos'], p['altitude'], p['vert_rate'], p['track'], p['speed'], p['messages'], p['seen'], p['rssi']))
        print('entered')
        connection.commit()
</code></pre>
<p>The issue is that any value in the dictionary can be null at any time, and I need to find out how to handle this. I've tried putting the code in a try/except block and passing whenever a KeyError exception is raised, but that means a record is completely skipped when it has a null value. I've also tried writing a load of if blocks to append a string with the value of the dictionary key, but this was pretty useless.</p>
<p>I need to find a way to put a dictionary in my db even if it contains null values.</p>
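<p>A hedged sketch of one direction I am considering: <code>dict.get()</code> yields <code>None</code> for missing keys, and MySQL drivers map <code>None</code> to SQL <code>NULL</code>, so no record gets skipped:</p>
<pre><code>columns = ['hex', 'squawk', 'flight', 'lat', 'lon', 'nucp', 'seen_pos',
           'altitude', 'vert_rate', 'track', 'speed', 'messages', 'seen', 'rssi']

for p in d['aircraft']:
    values = tuple(p.get(col) for col in columns)  # missing key -> None -> NULL
    with connection.cursor() as cursor:
        sql = ("INSERT INTO `aircraft` (`" + "`, `".join(columns) + "`) "
               "VALUES (" + ", ".join(["%s"] * len(columns)) + ")")
        cursor.execute(sql, values)
connection.commit()
</code></pre>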
| No |
58,853,067 | 1 | <python><tensorflow> | 2019-11-14T09:12:11.460 | 58856371 | Generate a zero matrix from a vector with None Shape | <p>I am given a tensor of shape <code>[None,1,None]</code> (the first None being the batch size; a concrete instance would be <code>[1,1,28]</code>) and I want to generate a <code>tf.zeros_like</code> matrix of shape <code>[None,1,None,None]</code> where the last two Nones are the same (so in the example <code>[1,1,28,28]</code>).</p>
<p>Let's say d is the tensor of shape <code>[None,1,None]</code>.
What I tried was:</p>
<pre><code>z = tf.zeros_like(tf.broadcast_to(d, tf.concat([d.shape, d.shape[-1:]], axis=0)), tf.int32)
</code></pre>
<p>The idea being: I concat the shape of d with the last dimension of d in order to keep the sizes correct, then broadcast d to that shape and use the result of the broadcast to generate <code>zeros_like</code>. I understand that plain <code>tf.zeros</code> does not work in this case, since it does not accept <code>None</code> shapes.
This approach fails though, resulting in this error:
<code>Tensors in list passed to 'values' of 'ConcatV2' Op have types [<NOT CONVERTIBLE TO TENSOR>, <NOT CONVERTIBLE TO TENSOR>] that don't all match.</code></p>
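<p>For reference, a hedged alternative I have been looking at (not certain it covers my case): <code>tf.shape</code> returns the runtime shape, so the <code>None</code> dimensions are concrete by the time the zeros are built:</p>
<pre><code>import tensorflow as tf

def zeros_like_extended(d):
    dyn = tf.shape(d)                            # e.g. [1, 1, 28] at runtime
    target = tf.concat([dyn, dyn[-1:]], axis=0)  # -> [1, 1, 28, 28]
    return tf.zeros(target, dtype=tf.int32)
</code></pre>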
| D |
58,853,632 | 0 | <keras><silhouette> | 2019-11-14T09:42:01.780 | null | How to write the Silhouette score as a custom Keras loss function? | <p>I am trying to write a custom Keras loss function using sklearn's implementation of the silhouette score.<br>
When running the network in debugger mode, in "training_utils" I get ndim=None instead of ndim=1.<br>
I'd like to know whether what I want to do is realistic and can be done, or whether I should start writing the silhouette score in TensorFlow instead of using sklearn's numpy implementation plus py_function.</p>
<p>The loss <strong>code</strong>:</p>
<pre><code>def make_loss():  # outer factory restored; the trailing "return loss_fun" implies one was here
    def loss_fun(y_true, y_pred):
        sil_loss = tf.py_function(
            sklearn.metrics.silhouette_score,
            [y_pred[:, :-1], y_pred[:, -1]],
            tf.float32,
            name='silhouetteScore'
        )
        return tf.expand_dims(sil_loss, axis=1)
    return loss_fun
</code></pre>
<p>Appreciate your feedback and input! </p>
| D |
58,868,527 | 3 | <sql><postgresql><query-performance><timescaledb> | 2019-11-15T00:21:10.167 | null | Optimizing MIN / MAX queries on time-series data | <p>I have several big time-series tables having a lot of Nulls (each table may have up to 300 columns), for example:</p>
<p><strong>Time-series table</strong></p>
<pre><code>time | a | b | c | d
--------------------+---------+----------+---------+---------
2016-05-15 00:08:22 | | | |
2016-05-15 13:50:56 | | | 26.8301 |
2016-05-15 01:41:58 | | | |
2016-05-15 00:01:37 | | | |
2016-05-15 01:45:18 | | | |
2016-05-15 13:45:32 | | | 26.9688 |
2016-05-15 00:01:48 | | | |
2016-05-15 13:47:56 | | | | 27.1269
2016-05-15 00:01:22 | | | |
2016-05-15 13:35:36 | 26.7441 | 29.8398 | | 26.9981
2016-05-15 00:08:53 | | | |
2016-05-15 00:08:30 | | | |
2016-05-15 13:14:59 | | | |
2016-05-15 13:33:36 | 27.4277 | 29.7695 | |
2016-05-15 13:36:36 | 27.4688 | 29.6836 | |
2016-05-15 13:37:36 | 27.1016 | 29.8516 | |
</code></pre>
<p>I want to optimize queries for searching first and last values in every column, i.e.:</p>
<pre><code>select MIN(time), MAX(time) from TS where a is not null
</code></pre>
<p><em>(Those queries can run for several minutes)</em></p>
<p>I plan to create a metadata table holding column names and pointing to the first and last timestamp:</p>
<p><strong>Metadata table</strong></p>
<pre><code>col_name | first_time | last_time
---------+---------------------+--------------------
a        | 2016-05-15 13:33:36 | 2016-05-15 13:37:36
b        | 2016-05-15 13:33:36 | 2016-05-15 13:37:36
c        | 2016-05-15 13:45:32 | 2016-05-15 13:50:56
d        | 2016-05-15 13:35:36 | 2016-05-15 13:47:56
</code></pre>
<p>This way no Null search will occur during the query and I will just access the value in the first and last timestamps.</p>
<p>But I want to avoid the need to update the metadata table on every time-series data modification. Instead I want to create a generic trigger function which will update the <code>first_time</code> and <code>last_time</code> columns of the metadata table on every Insert, Update or Delete to the time-series table. The trigger function should compare the existing timestamps in the metadata table against the inserted / deleted rows.</p>
<p>Any idea if it's possible to create a generic trigger function that does not hard-code the exact column names of the time-series table?</p>
<p>Thanks</p>
| No |
58,869,994 | 0 | <caffe><google-colaboratory><pycaffe><caffe2> | 2019-11-15T03:38:16.727 | null | I can NOT find train_val.prototxt in colaboratory after I install caffe in colaboratory | <p>I use the following command to install caffe on </p>
<p><a href="https://colab.research.google.com/" rel="nofollow noreferrer">https://colab.research.google.com/</a></p>
<pre><code>%%time
!apt install -y caffe-cuda
!apt install -y caffe-cpu
</code></pre>
<p>then I use:</p>
<p><code>!find / -name "*train_val.prototxt*"</code>,</p>
<p>but it finds nothing.</p>
<p>Where am I going wrong?</p>
<p>I just want to find the prototxt for AlexNet</p>
<p>Thanks for your help.</p>
| D |
58,883,695 | 0 | <python><tensorflow><keras> | 2019-11-15T19:44:58.837 | null | Loss and accuracy frozen after first epoch | <p>My loss and accuracy are frozen at 0 and 1 respectively. Why would this be? The first epoch has a normal looking loss and accuracy, but after that, I have no clue what is happening.</p>
<pre><code>import tensorflow as tf
import numpy as np
import glob

data = "*data\\*training\\*\\*"
filelist = glob.glob(str(data))
classes = ('classa', 'classb')

def generator(file):
    f = tf.io.read_file(file)
    f = tf.image.decode_jpeg(f)
    f = tf.image.resize(tf.image.rgb_to_grayscale(f), size=[28, 28])
    f = tf.squeeze(f, [2])
    l = file.split('\\')
    l = l[2]
    l_label = classes.index(l)
    r1 = tf.constant(l_label)
    return f, r1

array1 = []
array2 = []

def todataset():
    for file in filelist:
        push_to_array1, push_to_array2 = generator(file)
        array1.append(push_to_array1)
        array2.append(push_to_array2)
    dataseta = tf.data.Dataset.from_tensors(array1)
    datasetb = tf.data.Dataset.from_tensors(array2)
    return tf.data.Dataset.zip((dataseta, datasetb))

dataset = todataset().shuffle(len(filelist)).repeat()

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])

model.fit(dataset, epochs=10, steps_per_epoch=10)
</code></pre>
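<p>For comparison, here is a hedged sketch of the dataset built so that each file is its own element (assuming that is what <code>fit</code> should iterate over), since <code>from_tensors</code> wraps the whole list as a single element:</p>
<pre><code># from_tensor_slices yields one (image, label) pair per file:
dataset = (tf.data.Dataset.from_tensor_slices((array1, array2))
           .shuffle(len(filelist))
           .batch(32)
           .repeat())
</code></pre>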
| D |
58,886,475 | 0 | <c> | 2019-11-16T00:46:01.053 | null | How to strcpy from char pointer to array of char pointer? | <pre><code>char *suffer ="Dallas";
char *buffer[10];
strcpy(buffer[0], suffer) // warning, passing argument 1 of strcpy from incompatible pointer type
</code></pre>
<p>I receive a segmentation fault.</p>
| No |
58,905,480 | 1 | <r><keras><neural-network><dropout> | 2019-11-17T21:42:18.870 | 58971701 | R Keras: apply dropout regularization both on the input and the hidden layer | <p>I'm learning Keras in R and want to apply dropout regularization both on the input layer, as it's very big (20,000 variables), and on the intermediate layer (100 neurons). I'm deploying Keras for regression. From the official <a href="https://keras.rstudio.com/articles/tutorial_overfit_underfit.html" rel="nofollow noreferrer">documentation</a> I've arrived at the model build below:</p>
<pre><code>model <- keras_model_sequential() %>%
layer_dense(units = 100, activation = "relu", input_shape = 20000) %>%
layer_dropout(0.6) %>%
layer_dense(units = 1)
</code></pre>
<p>How should I adapt the code to achieve what I intend?</p>
| D |
58,907,306 | 0 | <python><keras><neural-network><deep-learning><data-science> | 2019-11-18T02:38:21.540 | null | I can't seem to understand how this model is built | <p>Could I please know what the meaning of this model is, as well as its internal parameters,
such as Conv2D (filters, kernel_size, activation),
MaxPooling2D (pool_size),
Flatten(), and Dense()?</p>
<pre><code>cnn_model = Sequential([
    Conv2D(filters=32, kernel_size=3, activation='relu', input_shape=im_shape),
    MaxPooling2D(pool_size=2),
    Dropout(0.2),
    Flatten(),
    Dense(32, activation='relu'),
    Dense(10, activation='softmax')
])
</code></pre>
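<p>For anyone else puzzling over the arguments, here is the same model with each one annotated; the comments are my reading of the standard Keras semantics, not anything special to this network:</p>
<pre><code>cnn_model = Sequential([
    # 32 filters, each a 3x3 kernel slid across the image; ReLU keeps positives.
    Conv2D(filters=32, kernel_size=3, activation='relu', input_shape=im_shape),
    # Max of every 2x2 patch: halves height/width, keeps strong activations.
    MaxPooling2D(pool_size=2),
    # Randomly zeroes 20% of activations during training (regularization).
    Dropout(0.2),
    # Collapses the (height, width, channels) feature maps into one vector.
    Flatten(),
    # Fully connected layer with 32 units.
    Dense(32, activation='relu'),
    # One score per class; softmax turns scores into probabilities summing to 1.
    Dense(10, activation='softmax')
])
</code></pre>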
| D |
58,911,984 | 0 | <tensorflow> | 2019-11-18T09:59:49.743 | null | Read higher dimensional numpy ndarrays into TFRecords | <p>I am trying to read a numpy-ndarray with dimension greater than 2 into TFRecords, in order to avoid a bottleneck in Tensorflow (using Keras). </p>
<pre><code>import tensorflow as tf
import numpy as np

def _dtype_feature_list(ndarray):
    return lambda array: tf.train.FeatureList(float_list=tf.train.FloatList(value=array))

def _dtype_feature(ndarray):
    return lambda array: tf.train.Feature(float_list=tf.train.FloatList(value=array))

X = np.ones((10, 5, 3), dtype=np.float32)  ### test data
Y = np.ones((10,), dtype=np.float32)

dtype_feature_ = _dtype_feature(Y)
dtype_feature_list_ = _dtype_feature_list(X)

writer = tf.compat.v1.python_io.TFRecordWriter("train.tfrecords")
for k in range(10):
    Y_ = Y[k]
    X_ = X[k, :, :]
    d_feature = {}
    d_feature['Y'] = dtype_feature_(Y_)
    d_feature['X'] = dtype_feature_list_(X_)
    features__tf = tf.train.Features(feature=d_feature)
    example = tf.train.Example(features=features__tf)
    serialized = example.SerializeToString()
    writer.write(serialized)
</code></pre>
<p>It is not quite clear to me how to specify the "tf.train.FeatureList" call correctly (the first helper in the code above). Is "tf.train.FeatureList" effective in this context? Do I have to use another method?</p>
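<p>For reference, a hedged sketch of the direction I suspect is intended: a <code>SequenceExample</code>, where the context holds per-example scalars and the feature list holds one <code>Feature</code> per timestep of the (5, 3) array:</p>
<pre><code>def serialize_sequence_example(X_, Y_):
    # Per-example scalar(s) go into the context:
    context = tf.train.Features(feature={
        'Y': tf.train.Feature(float_list=tf.train.FloatList(value=[Y_]))
    })
    # One Feature (here: 3 floats) per timestep of the sequence:
    steps = [tf.train.Feature(float_list=tf.train.FloatList(value=row))
             for row in X_]
    feature_lists = tf.train.FeatureLists(feature_list={
        'X': tf.train.FeatureList(feature=steps)
    })
    return tf.train.SequenceExample(
        context=context, feature_lists=feature_lists).SerializeToString()
</code></pre>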
| D |
58,924,277 | 2 | <tensorflow><machine-learning><keras><neural-network><deep-learning> | 2019-11-18T23:10:39.667 | 58942977 | MLP output of first layer is zero after one epoch | <p>I've been running into an issue lately trying to train a simple MLP.</p>
<p>I'm basically trying to get a network to map the XYZ position and RPY orientation of the end-effector of a robot arm (6-dimensional input) to the angle of every joint of the robot arm to reach that position (6-dimensional output), so this is a regression problem.</p>
<p>I've generated a dataset using the angles to compute the current position, and generated datasets with 5k, 500k and 500M sets of values.</p>
<p>My issue is the MLP I'm using doesn't learn anything at all. Using Tensorboard (I'm using Keras), I've realized that the output of my very first layer is always zero (see image 1), no matter what I try.</p>
<p>Basically, my input is a shape (6,) vector and the output is also a shape (6,) vector.</p>
<p>Here is what I've tried so far, without success:</p>
<ul>
<li>I've tried MLPs with 2 layers of size 12, 24; 2 layers of size 48, 48; 4 layers of size 12, 24, 24, 48.</li>
<li>Adam, SGD, RMSprop optimizers</li>
<li>Learning rates ranging from 0.15 to 0.001, with and without decay</li>
<li>Both Mean Squared Error (MSE) and Mean Absolute Error (MAE) as the loss function</li>
<li>Normalizing the input data, and not normalizing it (the first 3 values are between -3 and +3, the last 3 are between -pi and pi)</li>
<li>Batch sizes of 1, 10, 32</li>
<li>Tested the MLP of all 3 datasets of 5k values, 500k values and 5M values.</li>
<li>Tested with number of epoches ranging from 10 to 1000</li>
<li>Tested multiple initializers for the bias and kernel.</li>
<li>Tested both the Sequential model and the Keras functional API (to make sure the issue wasn't how I called the model)</li>
<li>All 3 of sigmoid, relu and tanh activation functions for the hidden layers (the last layer is a linear activation because its a regression)</li>
</ul>
<p>Additionally, I've tried the very same MLP architecture on the basic Boston housing price regression dataset by Keras, and the net was definitely learning something, which leads me to believe that there may be some kind of issue with my data. However, I'm at a complete loss as to what it may be as the system in its current state does not learn anything at all, the loss function just stalls starting on the 1st epoch.</p>
<p>Any help or lead would be appreciated, and I will gladly provide code or data if needed!</p>
<p>Thank you</p>
<p><strong>EDIT:</strong>
Here's a link to 5k samples of the data I'm using. Columns B-G are the output (angles used to generate the position/orientation) and columns H-M are the input (XYZ position and RPY orientation). <a href="https://drive.google.com/file/d/18tQJBQg95ISpxF9T3v156JAWRBJYzeiG/view" rel="nofollow noreferrer">https://drive.google.com/file/d/18tQJBQg95ISpxF9T3v156JAWRBJYzeiG/view</a></p>
<p>Also, here's a snippet of the code I'm using:</p>
<pre><code>df = pd.read_csv('kinova_jaco_data_5k.csv', names=['state0', 'state1', 'state2',
                                                   'state3', 'state4', 'state5',
                                                   'pose0', 'pose1', 'pose2',
                                                   'pose3', 'pose4', 'pose5'])

states = np.asarray(
    [df.state0.to_numpy(), df.state1.to_numpy(), df.state2.to_numpy(), df.state3.to_numpy(),
     df.state4.to_numpy(), df.state5.to_numpy()]).transpose()
poses = np.asarray(
    [df.pose0.to_numpy(), df.pose1.to_numpy(), df.pose2.to_numpy(), df.pose3.to_numpy(),
     df.pose4.to_numpy(), df.pose5.to_numpy()]).transpose()

x_train_temp, x_test, y_train_temp, y_test = train_test_split(poses, states, test_size=0.2)
x_train, x_val, y_train, y_val = train_test_split(x_train_temp, y_train_temp, test_size=0.2)

mean = x_train.mean(axis=0)
x_train -= mean
std = x_train.std(axis=0)
x_train /= std

x_test -= mean
x_test /= std

x_val -= mean
x_val /= std

n_epochs = 100
n_hidden_layers = 2
n_units = [48, 48]

inputs = Input(shape=(6,), dtype='float32', name='input')
x = Dense(units=n_units[0], activation=relu, name='dense1')(inputs)
for i in range(1, n_hidden_layers):
    x = Dense(units=n_units[i], activation=activation, name='dense' + str(i + 1))(x)
out = Dense(units=6, activation='linear', name='output_layer')(x)

model = Model(inputs=inputs, outputs=out)

optimizer = SGD(lr=0.1, momentum=0.4)
model.compile(optimizer=optimizer, loss='mse', metrics=['mse', 'mae'])

history = model.fit(x_train,
                    y_train,
                    epochs=n_epochs,
                    verbose=1,
                    validation_data=(x_test, y_test),
                    batch_size=32)
</code></pre>
<p><strong>Edit 2</strong>
I've tested the architecture with a random dataset where the input was a (6,) vector where input[i] is a random number and the output was a (6,) vector with output[i] = input[i]² and the network didn't learn anything. I've also tested a random dataset where the input was a random number and the output was a linear function of the input, and the loss converged to 0 pretty quickly. In short, it seems the simple architecture is unable to map a non-linear function.</p>
<p><a href="https://i.stack.imgur.com/UHFcM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UHFcM.png" alt="image 1"></a></p>
| D |
58,924,805 | 1 | <javascript><html> | 2019-11-19T00:10:58.477 | null | Why is drop event being called for multiple objects? | <p>When this code is used for a Chrome extension in a content script, both draggable elements are created and are individually resizable; however, when dragged and dropped, they both end up in the same position.</p>
<p>Using log statements I determined that the drag_start event is only called for the element which is clicked on when dragged, but the drop event is always called for both elements. I'm reasonably new to JS, so I would be happy with any sort of help/advice.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>function drag_start(event) {
  var style = window.getComputedStyle(event.target, null);
  event.dataTransfer.setData("text/plain",
    (parseInt(style.getPropertyValue("left"), 10) - event.clientX) + ',' +
    (parseInt(style.getPropertyValue("top"), 10) - event.clientY) + ',' +
    event.target.getAttribute('data-item'));
}

function drag_over(event) {
  event.preventDefault();
  return false;
}

function drop(elem, event) {
  var offset = event.dataTransfer.getData("text/plain").split(',');
  elem.style.left = (event.clientX + parseInt(offset[0], 10)) + 'px';
  elem.style.top = (event.clientY + parseInt(offset[1], 10)) + 'px';
  event.preventDefault();
  return false;
}

class Note {
  constructor(x, y, sx, sy, doc) {
    this.x = x;
    this.y = y;
    this.doc = doc;
    this.div = doc.createElement("div");
    this.note_text = doc.createElement("textarea");
    this.note_text.setAttribute("type", "text");
    this.note_text.style.display = "inline-block";
    this.note_text.style.webkitBoxSizing = "border-box";
    this.note_text.style.boxSizing = "border-box";
    this.note_text.style.width = "100%";
    this.note_text.style.height = "100%";
    this.note_text.style.resize = "none";
    this.div.appendChild(this.note_text);
    this.div.draggable = true;
    this.div.style.resize = "both";
    this.div.style.overflow = "auto";
    this.div.className = "dragme";
    this.div.style.zIndex = "99999";
    this.div.style.position = "absolute";
    this.div.style.overflowX = "hidden";
    this.div.style.overflowY = "hidden";
    this.div.style.left = "0";
    this.div.style.top = "0";
    this.div.style.width = "200px";
    this.div.style.background = "rgba(100, 255, 255, 1)";
    this.div.style.border = "2px solid rgba(0, 0, 0, 1)";
    this.div.style.borderRadius = "4px";
    this.div.style.padding = "8px";
    this.doc.body.append(this.div);
    this.div.addEventListener('dragstart', drag_start, false);
    doc.body.addEventListener('dragover', drag_over, false);
    doc.body.addEventListener('drop', (event) => drop(this.div, event), false);
  }
}

const note1 = new Note(0, 0, 0, 0, document);
const note2 = new Note(0, 0, 0, 0, document);</code></pre>
</div>
</div>
</p>
| No |
58,931,446 | 1 | <python><c++><pytorch> | 2019-11-19T10:10:02.487 | 58934501 | Pass Python object to C and back again | <p>I'm trying to get to grips with using embedded Python from a C++ application. Specifically I want my C++ to launch some PyTorch code.</p>
<p>I am making some initialization function in Python to perform the device (CPU or GPU) discovery and would like to pass this back to the C++ code. The C++ will call another Python function for inference which is when the C++ will pass the device to Python.</p>
<pre><code>pFunc_init = PyObject_GetAttrString(pModule, "torch_init");
if (pFunc_init && PyCallable_Check(pFunc_init)) {
    pValue_device = PyObject_CallObject(pFunc_init, pArgs);
    if (pArgs != NULL)
        Py_DECREF(pArgs);
    if (pValue_device != NULL) {
        pFunc_infer = PyObject_GetAttrString(pModule, "torch_infer");
        if (pFunc_infer && PyCallable_Check(pFunc_infer)) {
            //
            // TODO put object pValue_device into pArgs_device.
            //
            pValue_infer = PyObject_CallObject(pFunc_infer, pArgs_device);
            if (pValue_infer != NULL) {
                printf("Result pValue_infer: %ld\n", PyLong_AsLong(pValue_infer));
                Py_DECREF(pValue_infer);
            }
        }
        Py_DECREF(pValue_device);
    }
    else {
        Py_DECREF(pFunc_init);
        Py_DECREF(pModule);
        PyErr_Print();
        fprintf(stderr, "Call failed\n");
        return 1;
    }
}
</code></pre>
<p>The TODO marks where I would like to put this code. With simple Python objects I think I know what I need, but how do I deal with this custom Python object?</p>
| D |
58,942,875 | 0 | <python><kivy><kivy-language> | 2019-11-19T21:12:44.113 | null | Kivy update data from other class when click a button | <p>In my app, I have a storage (a JSON file), and when I load my app, I have a list that is created from this storage.<br>
I also have a button in this app, and when I click on it, I would like to add something to my storage and then update my list.<br>
The problem is that the list and the button are two separate classes, and when I try to call <code>list.updateData()</code> in my <code>on_release()</code> (from the button class), I have to create another instance of my list, and the update is not made inside my application.</p>
<p>Here is the code of my entire app :<br>
<strong>planificateur.py</strong></p>
<pre><code>from kivy.app import App
from kivy.lang import Builder
from kivy.uix.recycleview import RecycleView
from kivy.uix.recycleview.views import RecycleDataViewBehavior
from kivy.uix.label import Label
from kivy.properties import BooleanProperty
from kivy.uix.recycleboxlayout import RecycleBoxLayout
from kivy.uix.behaviors import FocusBehavior
from kivy.uix.recycleview.layout import LayoutSelectionBehavior
from kivy.vector import Vector
from kivy.uix.behaviors import ButtonBehavior
from kivy.uix.widget import Widget
from kivy.uix.tabbedpanel import TabbedPanel
import random
from kivy.storage.jsonstore import JsonStore


class SelectableRecycleBoxLayout(FocusBehavior, LayoutSelectionBehavior,
                                 RecycleBoxLayout):
    ''' Adds selection and focus behaviour to the view. '''


class SelectableLabel(RecycleDataViewBehavior, Label):
    ''' Add selection support to the Label '''
    index = None
    selected = BooleanProperty(False)
    selectable = BooleanProperty(True)
    name = ''
    priority = 1
    isActive = True

    def refresh_view_attrs(self, rv, index, data):
        ''' Catch and handle the view changes '''
        self.index = index
        return super(SelectableLabel, self).refresh_view_attrs(
            rv, index, data)

    def on_touch_down(self, touch):
        ''' Add selection on touch down '''
        if super(SelectableLabel, self).on_touch_down(touch):
            return True
        if self.collide_point(*touch.pos) and self.selectable:
            return self.parent.select_with_touch(self.index, touch)

    def apply_selection(self, rv, index, is_selected):
        ''' Respond to the selection of items in the view. '''
        self.selected = is_selected
        # if is_selected:
        #     print("selection changed to {0}".format(rv.data[index]))
        # else:
        #     print("selection removed for {0}".format(rv.data[index]))

    def writeInStore(self, name):
        store = JsonStore(name)
        store.put(self.name, priority=self.priority, isActive=self.isActive, task="task")


class RV(RecycleView):
    def __init__(self, **kwargs):
        super(RV, self).__init__(**kwargs)
        self.updateData()

    def my_callback(self, dt):
        self.updateData()

    def updateData(self):  # I would like to use this method, on the current instance of self, when I click on the button.
        store = JsonStore('test.json')
        self.data = [{'text': str(task[0]) + " " + str(task[1].get('priority'))}
                     for task in store.find(task="task")]
        self.refresh_from_data()

    def addTask(self, task):
        self.taskList.append(task)
        self.updateData()

    def getOneRandom(self):
        pass


class AddTaskButton(ButtonBehavior, Widget):
    def __init__(self, **kwargs):
        super(AddTaskButton, self).__init__(**kwargs)

    def collide_point(self, x, y):
        return Vector(x, y).distance(self.center) <= self.width / 2

    def on_release(self):
        task = SelectableLabel()
        task.name = "test22"
        task.priority = 1
        task.isActive = False
        task.writeInStore('test.json')
        rv = RV()  # Here I have to call for another instance of RV, so modifications are not made inside the running app
        rv.updateData()


class Main(TabbedPanel):
    pass


class PlanificateurApp(App):
    def build(self):
        return Main()


if __name__ == '__main__':
    PlanificateurApp().run()
</code></pre>
<p>and <strong>planificateur.kv</strong> </p>
<pre><code><AddTaskButton>:
    size: (min(self.width, self.height), min(self.width, self.height))  # force circle
    canvas.before:
        Color:
            rgb: 0.75, 0.75, 0.75
        Ellipse:
            pos: self.pos
            size: self.size

<SelectableLabel>:
    canvas.before:
        Color:
            rgba: (.0, 0.9, .1, .3) if self.selected else (0, 0, 0, 1)
        Rectangle:
            pos: self.pos
            size: self.size

<RV>:
    viewclass: 'SelectableLabel'
    SelectableRecycleBoxLayout:
        default_size: None, dp(56)
        default_size_hint: 1, None
        size_hint_y: None
        height: self.minimum_height
        orientation: 'vertical'
        multiselect: False
        touch_multiselect: True

<Main>:
    do_default_tab: False
    TabbedPanelItem:
        text: 'first tab'
        Label:
            text: "app.todayTask"
    TabbedPanelItem:
        text: 'tab2'
        FloatLayout:
            RV:
                id: rv
            AddTaskButton:
                id: tb
                size_hint: .1, .1
                pos_hint: {'right': 1}
</code></pre>
<p><code>SelectableRecycleBoxLayout</code> and <code>SelectableLabel</code> are from the Kivy tutorial about the selectable list of items.<br>
I've put comments in <code>RV.updateData()</code> and in <code>AddTaskButton.on_release()</code> to explain.</p>
<p>Do I have to create a singleton from my RV class? (I'm not sure if that's possible in Python?) Or is there a way to retrieve the currently running instance of RV in order to call my function on it?
Or is my logic wrong? I tried to create a function in the Main class that retrieves an instance of a class from an id in the kv file, but it didn't do anything.<br>
Can somebody help me?</p>
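<p>For completeness, here is a hedged sketch of what I mean by "retrieve the current instance" (assuming the running app's root is Main, so the kv id above should be reachable through it):</p>
<pre><code>from kivy.app import App

class AddTaskButton(ButtonBehavior, Widget):
    def on_release(self):
        task = SelectableLabel()
        task.name = "test22"
        task.priority = 1
        task.isActive = False
        task.writeInStore('test.json')
        # Reach the RV already on screen through its kv id instead of
        # constructing a new one:
        App.get_running_app().root.ids.rv.updateData()
</code></pre>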
| No |
58,949,491 | 1 | <api><wso2><microservices><api-gateway><wso2-mgw> | 2019-11-20T08:09:36.503 | null | How to compose multiple microservices with WSO2 API MicroGateway | <p>The new <strong>WSO2 API MicroGateway</strong> 3.0 lists <strong>support for composing multiple microservices</strong> as a new feature. I cannot find an <strong>example</strong> of how to do that.
We are trying a use case with just this type of processing:
an API that queries a back-end database using OData and, if the record is not found, queries another (non-OData) API.
In both cases the result must be transformed (reformatted).</p>
| No |
58,949,687 | 0 | <swift><tensorflow> | 2019-11-20T08:22:52.793 | null | softmaxCrossEntropy in S4TF | <p>I'm experimenting with Swift for TensorFlow to do some image segmentation. Let's first look at some code:</p>
<pre><code>let (loss, grad) = model.valueWithGradient { (model: UNet) -> Tensor<Float> in
    let logits = model(batch.images)
    print(logits.shape)
    print(batch.corners.shape)
    return softmaxCrossEntropy(logits: logits, probabilities: batch.corners)
}
</code></pre>
<p>Each <code>batch</code> contains some tensors of images of tickets and some tensors of images of the corners, hence the two references <code>batch.images</code> and <code>batch.corners</code>.</p>
<p>You will see that I also print their shapes, which both come out to be [32, 324, 324, 4]:</p>
<p>32 being the batch size, 324x324 being the size of the images, and 4 channels for each image.
The goal is to extract the positions of the corners in the images.</p>
<p>I want to use <a href="https://www.tensorflow.org/swift/api_docs/Functions#softmaxcrossentropylogits:probabilities:" rel="nofollow noreferrer">softmaxCrossEntropy</a> as the loss function, but it gives me the following error:</p>
<pre><code>Fatal error: logits and labels must be either 2-dimensional, or broadcasted to be 2-dimensional: file /swift-base/tensorflow-swift-apis/Sources/TensorFlow/Bindings/EagerExecution.swift, line 300
Current stack trace:
0 libswiftCore.so 0x00007f51ab60b940 swift_reportError + 50
1 libswiftCore.so 0x00007f51ab67ccf0 _swift_stdlib_reportFatalErrorInFile + 115
2 libswiftCore.so 0x00007f51ab5a6b48 <unavailable> + 3722056
3 libswiftCore.so 0x00007f51ab5a6cd7 <unavailable> + 3722455
4 libswiftCore.so 0x00007f51ab3794e8 <unavailable> + 1438952
5 libswiftCore.so 0x00007f51ab57a5ce <unavailable> + 3540430
6 libswiftCore.so 0x00007f51ab378c09 <unavailable> + 1436681
7 libswiftTensorFlow.so 0x00007f51a79b5f50 <unavailable> + 2899792
8 libswiftTensorFlow.so 0x00007f51a7809d10 checkOk(_:file:line:) + 434
9 libswiftTensorFlow.so 0x00007f51a7810ce0 TFE_Op.evaluateUnsafe() + 506
10 libswiftTensorFlow.so 0x00007f51a7811550 TFE_Op.execute<A, B>(_:_:) + 323
11 libswiftTensorFlow.so 0x00007f51a781a0c2 <unavailable> + 1212610
12 libswiftTensorFlow.so 0x00007f51a792fee0 static Raw.softmaxCrossEntropyWithLogits<A>(features:labels:) + 821
13 libswiftTensorFlow.so 0x00007f51a7a6f5b0 _vjpSoftmaxCrossEntropyHelper<A>(logits:probabilities:) + 84
14 libswiftTensorFlow.so 0x00007f51a7a6f6b0 AD__$s10TensorFlow25softmaxCrossEntropyHelper6logits13probabilitiesAA0A0VyxGAG_AGtAA0aB13FloatingPointRzlF__vjp_src_0_wrt_0 + 9
15 libswiftTensorFlow.so 0x00007f51a7ac9e10 AD__$s10TensorFlow19softmaxCrossEntropy6logits13probabilities9reductionAA0A0VyxGAH_A3HXFtAA0aB13FloatingPointRzlF__vjp_src_0_wrt_0 + 444
16 libswiftTensorFlow.so 0x00007f51a7b48f64 <unavailable> + 4550500
17 libswiftTensorFlow.so 0x00007f51a7ac9a60 AD__$s10TensorFlow19softmaxCrossEntropy6logits13probabilitiesAA0A0VyxGAG_AGtAA0aB13FloatingPointRzlF__vjp_src_0_wrt_0 + 616
Current stack trace:
frame #14: 0x00007f516f9a3c45 $__lldb_expr162`AD__$s15__lldb_expr_16110TensorFlow0C0VySfG02__a1_B4_1354UNetVcfU___vjp_src_0_wrt_0(model=<unavailable>, batch=<unavailable>) at <Cell 25>:17
frame #21: 0x00007f516f99b941 $__lldb_expr162`main at <Cell 25>:13:34
</code></pre>
<p>I understand that the inputs need to be 2-dimensional, but I don't know how to handle that. I can't help but notice that <a href="https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits" rel="nofollow noreferrer">the Python version of the same function</a> has an <code>axis</code> parameter, and I wonder what I would do in S4TF to reduce along a specific axis.</p>
| D |
58,975,671 | 0 | <python><odoo> | 2019-11-21T13:00:38.097 | null | problem with TransientModel and AbstractModel | <p>I am trying to create a new view for a general ledger report. From my research I realized that I have to use a TransientModel and an AbstractModel for this to work. So I structured the wizard models and created the AbstractModel in the report, but when executing I get the following error:</p>
<p>File "/mnt/c/odoo/odoo/models.py", line 4727, in ensure_one</p>
<pre><code>raise ValueError("Expected singleton: %s" % self)
</code></pre>
<p><strong>ValueError: Expected singleton: proyecto_rc.account(1, 2)</strong></p>
<p>That is why I have come here, to see if you can help me find and resolve the problem. Below I add the code, hoping it is understandable.</p>
<p><strong>wizard py</strong></p>
<pre class="lang-py prettyprint-override"><code>
class Book(models.TransientModel):
    _name = 'proyecto_rc.book'

    start_date = fields.Date(string="Start date", required=True)
    end_date = fields.Date(string="End date", required=True)

    @api.multi
    def action_report(self):
        """Method that calls the logic that generates the report"""
        datas = {'ids': self.env.context.get('active_ids', [])}
        res = self.read(['start_date', 'end_date'])
        res = res and res[0] or {}
        datas['form'] = res
        domain = []
        if self.end_date:  # was "date_date"/"state_date"; presumably the end_date field
            domain = [('create_date', '<', self.end_date)]
        fields = ['title', 'total_debit_count', 'total_credit_count']
        lmayor_data = self.env['proyecto_rc.account'].search_read(domain, fields)
        datas['lmayor_data'] = lmayor_data
        return self.env['report'].get_action([], 'proyecto_rc.report_bookmajor', data=datas)
</code></pre>
<p><strong>reports py</strong></p>
<pre class="lang-py prettyprint-override"><code>class report_bookmajor(models.AbstractModel):
    _name = 'report.proyecto_rc.bookmajor'

    @api.model
    def render_html(self, docids, data=None):
        data = data if data is not None else {}
        bookmajor = self.env['proyecto_rc.account'].browse(data.get('ids', data.get('active_ids')))
        docargs = {
            'doc_ids': data.get('ids', data.get('active_ids')),
            'doc_model': 'proyecto_rc.account',
            'docs': bookmajor,
            'data': dict(data),
        }
        return self.env['report'].render('proyecto_rc.bookmajor_template', docargs)
</code></pre>
<p>Reviewing these two models, what could the problem be? Do I have to add the account model in the method of the TransientModel and call it there, or is it not necessary to add it as a Many2one?</p>
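<p>For reference, a hedged sketch of what I understand the singleton complaint to mean (field names taken from the wizard above): <code>browse</code> with several ids yields a multi-record recordset, and reading a plain field on it raises "Expected singleton", so record-level access would need a loop:</p>
<pre><code># bookmajor here is proyecto_rc.account(1, 2), i.e. two records at once
for account in bookmajor:
    print(account.title, account.total_debit_count, account.total_credit_count)
</code></pre>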
| No |
58,997,711 | 0 | <tensorflow><keras> | 2019-11-22T15:55:18.907 | null | Keras: Why can't I load weights from file for same model except without dropout? | <p>I have two models that are the same except one has a dropout layer removed. If I save the weights from the dropout model (with <code>model.save_weights()</code>) and then try to load them into the non-dropout model, I get an error about "checkpoint references" or something. What does this mean, and how can I get around it (without leaving dropout in at rate 0.0)?</p>
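<p>For reference, a hedged sketch of the workaround I have been considering (the model names are placeholders for my two models): <code>by_name=True</code> matches layers by their names in an HDF5 file, and Dropout itself holds no weights, so the remaining layers should line up as long as their names match.</p>
<pre><code># model_with_dropout / model_without_dropout are hypothetical names
model_with_dropout.save_weights('weights.h5')
model_without_dropout.load_weights('weights.h5', by_name=True)
</code></pre>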
| D |
59,013,295 | 0 | <python-3.x><tensorflow> | 2019-11-23T23:36:17.253 | null | Prediction results from tensorflow's model.predict_generator for 2 classes | <p>I'm trying to learn how to predict results from a model, but there seems to be something basic I'm getting confused about. There are two classes I'm working with: 0:'cats', 1:'dogs'. When predicting labels for 6 images I get a 6-by-1 array, and it seems like the prediction values measure how close the images get to the 'dog' label. I was expecting to get a 6-by-2 array where each image is compared to both classes, just like the <a href="https://www.tensorflow.org/tutorials/keras/classification" rel="nofollow noreferrer">TensorFlow basic image classification for clothes</a> tutorial. I also see it may be redundant to generate two prediction values when there are only 2 classes, so is my understanding correct?</p>
<p>The code I'm following is the TensorFlow tutorial for image classification:
<a href="https://www.tensorflow.org/tutorials/images/classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/images/classification</a></p>
<p>To see how the model (called "model_new") predicts labels of 6 images (1 cat followed by 5 dogs), I have added the following code with 6 images in the directory ('cat_dog_testing') and then under another sub-directory ('cat_dog_testing'):</p>
<pre><code># Preparing the testing dataset
test_dir = os.path.join(os.getcwd(), 'cat_dog_testing')

test_image_generator = ImageDataGenerator(rescale=1./255)  # scaling pixel values to be from 0 to 1

test_generator = test_image_generator.flow_from_directory(batch_size=6,
                                                          directory=test_dir,
                                                          shuffle=False,
                                                          target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                          class_mode=None)

STEP_SIZE_TEST = test_generator.n // test_generator.batch_size  # step size would be 6/6 = 1
test_generator.reset()
pred = model_new.predict_generator(test_generator, steps=STEP_SIZE_TEST, verbose=1)
</code></pre>
<p>For the results:</p>
<pre><code>pred.shape  # comes out to be (6, 1)
pred        # array([[0.26228264],
            #        [0.66503084],
            #        [0.98268926],
            #        [0.97690296],
            #        [0.8235555 ],
            #        [0.7541907 ]], dtype=float32)

# Visualize test images
# This function will plot images in the form of a grid with 1 row and 6 columns
# where images are placed in each column.
def plotImages(images_arr):
    fig, axes = plt.subplots(1, 6, figsize=(20, 20))
    axes = axes.flatten()
    for img, ax in zip(images_arr, axes):
        ax.imshow(img)
        ax.axis('off')
    plt.tight_layout()
    plt.show()

sample_test_images = next(test_generator)
sample_test_images.shape  # comes out to be (6, 150, 150, 3)
plotImages(sample_test_images[:6])
</code></pre>
<p><a href="https://i.stack.imgur.com/rn4vK.png" rel="nofollow noreferrer">Here's the order of images plotted and I'm assuming they correspond to the order of predictions</a></p>
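<p>In case it clarifies what I'm after, this is the workaround I'm considering, assuming the single output value really is the probability of the 'dogs' class:</p>
<pre><code>import numpy as np

# Assuming each row of pred is P('dogs'), the 6 x 2 array I was expecting
# can be rebuilt by pairing each value with its complement:
# column 0 = P('cats'), column 1 = P('dogs').
pred_two_cols = np.hstack([1.0 - pred, pred])  # shape (6, 2)
</code></pre>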
| D |
59,030,519 | 0 | <tensorflow><neural-network><deep-learning><nlp><nlg> | 2019-11-25T11:15:30.343 | null | How to get automatic inferences from graphs | <p>A bit new to this and getting minimal help from googling.</p>
<p>I want to work on generating automatic inferences from a graph. To start with, is there any course or blog that can guide me to achieve this? In detail: I have a graph, say a basic line graph, and if a user hovers over it, it should show one or two points/inferences about the graph. All the graphs are basic ones, and the inferences can be very basic. Can NLP be applied to achieve this?</p>
<p>Thanks in advance.</p>
| D |
59,078,738 | 1 | <tensorflow><google-colaboratory><tensorflow2.0> | 2019-11-27T21:33:45.477 | 59084298 | How to improve model to prevent overfitting for very simple image classification | <p>First of all: I'm a beginner with TensorFlow (version 2). I'm learning a lot by reading. However, I don't seem to find an answer to the following problem.</p>
<p>I'm trying to build a model for classifying images into three labels.
As you can see in the graphs below, my training accuracy is quite OK, but the validation accuracy is way too low.</p>
<h2><a href="https://i.stack.imgur.com/mxAGV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mxAGV.png" alt="Training & Validation Accuracy"></a></h2>
<p>As I understand, this is probably an 'overfitting' problem.</p>
<p>Maybe I'll first explain what I'm trying to do:</p>
<p>I want to use images as input. As output I'd want to receive zero or more labels (classifiers) that belong to those images.
I was expecting this to be an easy task, since the input images are simple (only two colors, and only 0, 1, 2 or 3 possible 'labels').
Here are a few examples of the images. They are a representation of a walked track (green) on a field (bounded by blue polygon):</p>
<p><a href="https://i.stack.imgur.com/rVaPK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rVaPK.png" alt="example input images"></a></p>
<p>Possible labels are:</p>
<ol>
<li>cross: (first 2 images): you can see clearly that the green lines are forming one or more 'crosses'</li>
<li>zig-zag: (third image): not exactly sure if this is the correct term in English, but I guess you get the picture ;-)</li>
<li>rows: the green lines are mostly parallel lines (no zigzag, nor cross)</li>
<li>none of above (don't know if this needs to be a label)</li>
</ol>
<p>I'm using following model:</p>
<pre><code>batch_size = 128
epochs = 30
IMG_HEIGHT = 150
IMG_WIDTH = 150
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu',
           input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
MaxPooling2D(),
Dropout(0.2),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Dropout(0.2),
Flatten(),
Dense(512, activation='relu'),
    Dense(3, activation='sigmoid')  # 3 output units, matching the summary below
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
Model: "sequential_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_15 (Conv2D) (None, 150, 150, 16) 448
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 75, 75, 16) 0
_________________________________________________________________
dropout_10 (Dropout) (None, 75, 75, 16) 0
_________________________________________________________________
conv2d_16 (Conv2D) (None, 75, 75, 32) 4640
_________________________________________________________________
max_pooling2d_16 (MaxPooling (None, 37, 37, 32) 0
_________________________________________________________________
conv2d_17 (Conv2D) (None, 37, 37, 64) 18496
_________________________________________________________________
max_pooling2d_17 (MaxPooling (None, 18, 18, 64) 0
_________________________________________________________________
dropout_11 (Dropout) (None, 18, 18, 64) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 20736) 0
_________________________________________________________________
dense_10 (Dense) (None, 512) 10617344
_________________________________________________________________
dense_11 (Dense) (None, 3) 1539
=================================================================
Total params: 10,642,467
Trainable params: 10,642,467
Non-trainable params: 0
</code></pre>
<p>I'm using 3360 images as the training dataset and 496 as the validation dataset.
Those are already 'augmented', so the sets already contain rotated and mirrored versions of other existing images.</p>
<p>Maybe it is worth mentioning that the dataset is unbalanced: 80% of the images contain the label 'cross', while the other 20% is covered by 'zig-zag' and 'rows'.</p>
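<p>One thing I have started experimenting with for the imbalance is class weighting; a rough sketch (the index-to-label mapping 0=cross, 1=zig-zag, 2=rows and the generator names are assumptions on my part):</p>
<pre><code># Weight the loss roughly inversely to class frequency (~80/10/10 split).
class_weight = {0: 1.0, 1: 4.0, 2: 4.0}

model.fit(train_data_gen,
          epochs=epochs,
          validation_data=val_data_gen,
          class_weight=class_weight)
</code></pre>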
<p>Can anybody guide me in the right direction on how I can improve my model?</p>
| D |
59,084,328 | 0 | <react-native><tensorflow><automl> | 2019-11-28T08:14:36.613 | null | Can't get local JSON data on React native TensorflowJS | <p>I use <code>tensorflowjs autoML</code> and there is an issue with this lib.
I won't explain it here; too many unnecessary details.</p>
<p>The point is, it tries to fetch JSON data.</p>
<p>This works;</p>
<pre><code>const modelJson = 'http://192.168.0.18:8081/src/assets/model.json';
</code></pre>
<p>This works;</p>
<pre><code>const modelJson = 'http://www.somewhere/api/json';
</code></pre>
<p>This does not work;</p>
<pre><code>const modelJson = '../src/assets/model.json';
</code></pre>
<p>Error is;</p>
<pre><code>TypeError: Network request failed
</code></pre>
<p>I want to keep the JSON on the device, but how can I use it once the app is in production? Maybe I can use the <code>'http://192.168.0.18:8081/src/assets/model.json'</code> version, but am I going to have to get the user's IP, or what can I do about that? Thank you very much!</p>
| D |
59,086,077 | 0 | <spring> | 2019-11-28T09:56:20.363 | null | @Value Spring add emptyMap as default Value | <p>I have the below code in Spring:</p>
<pre><code>private Map deviceAttributes = new HashMap<>();
</code></pre>
<p>Now on static analysis I get an error saying that I should use @Autowired, @Value, @Resource or @Inject for setting the default value.
Could someone please let me know how I should use @Value, or whether any of the others above can be used? Thanks in advance; I have been struggling with this for quite some time now.</p>
| No |
59,089,380 | 0 | <tensorflow><keras> | 2019-11-28T12:53:31.460 | null | Tensorflow predict throws Error while reading resource variable dense_1_9/kernel from Container | <p>I am trying to load a Keras model in Flask to be called from a web service.</p>
<pre><code> #__init__
...
self.graph = tf.get_default_graph()
def predict(self, description, taxonomy):
# set_session(self.session)
with self.graph.as_default() as graph:
with tf.Session() as sess:
set_session(sess)
x_1 = self.tfidf1.transform(description).toarray()
x_2 = self.tfidf2.transform(taxonomy).toarray()
results = self.model.predict(x=[x_1, x_2], verbose=1)
res = self.le.inverse_transform(np.argmax(results, axis=1))
prob = np.amax(results, axis=1)
return res, prob
</code></pre>
<p>but it keeps throwing </p>
<pre><code>tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable dense_1_9/kernel from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_1_9/kernel)
[[{{node dense_1_9/MatMul/ReadVariableOp}}]]
</code></pre>
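<p>For reference, this is the variant I am about to try, based on my reading that the model has to be used in the same session it was loaded in. It is only a sketch assuming TF 1.x-style Keras; <code>Predictor</code> and <code>model_path</code> are names I made up:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.backend import set_session
from tensorflow.keras.models import load_model

class Predictor:
    def __init__(self, model_path):
        # Create one session up front, make it current, and load the model
        # into it, so its variables live in this session.
        self.session = tf.Session()
        set_session(self.session)
        self.model = load_model(model_path)
        self.graph = tf.get_default_graph()

    def predict(self, x_1, x_2):
        # Reuse the same graph and session for every request.
        with self.graph.as_default():
            set_session(self.session)
            return self.model.predict(x=[x_1, x_2], verbose=1)
</code></pre>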
| D |
59,091,544 | 2 | <python><list><dictionary><pytorch><iterable> | 2019-11-28T14:56:37.357 | 59098149 | In python3: strange behaviour of list(iterables) | <p>I have a specific question regarding the behaviour of iterables in python. My iterable is a custom built Dataset class in pytorch:</p>
<pre><code>import torch
from torch.utils.data import Dataset
class datasetTest(Dataset):
def __init__(self, X):
self.X = X
def __len__(self):
return len(self.X)
def __getitem__(self, x):
print('***********')
print('getitem x = ', x)
print('###########')
y = self.X[x]
print('getitem y = ', y)
return y
</code></pre>
<p>The weird behaviour now comes about when I initialize a specific instance of that datasetTest class. Depending on what data structure I pass as the argument X, it behaves differently when I call list(datasetTestInstance). In particular, when passing a torch.tensor as the argument there is no problem; however, when passing a dict as the argument it will throw a KeyError. The reason for this is that list(iterable) does not just call i=0, ..., len(iterable)-1, it calls i=0, ..., len(iterable). That is, it will iterate up to and including the index equal to the length of the iterable. Obviously, this index is not defined in any Python data structure, as the last element always has the index len(datastructure)-1 and not len(datastructure). If X is a torch.tensor or a list, no error will be raised, even though I think there should be an error. It will still call getitem even for the (non-existent) element with index len(datasetTestInstance), but it will not compute y=self.X[len(datasetTestInstance)]. Does anyone know if pytorch handles this somehow gracefully internally?</p>
<p>When passing a dict as data it will throw an error in the last iteration, when x=len(datasetTestInstance). This is actually the expected behaviour I guess. But why does this only happen for a dict and not for a list or torch.tensor?</p>
<pre><code>if __name__ == "__main__":
a = datasetTest(torch.randn(5,2))
print(len(a))
print('++++++++++++')
for i in range(len(a)):
print(i)
print(a[i])
print('++++++++++++')
print(list(a))
print('++++++++++++')
b = datasetTest({0: 12, 1:35, 2:99, 3:27, 4:33})
print(len(b))
print('++++++++++++')
for i in range(len(b)):
print(i)
print(b[i])
print('++++++++++++')
print(list(b))
</code></pre>
<p>You could try out that snippet of code if you want to understand better what I have observed.</p>
<p>My questions are:</p>
<p>1.) Why does list(iterable) iterate until (and including) len(iterable)? A for loop doesn't do that.</p>
<p>2.) In the case of a torch.tensor or a list passed as data X: Why does it not throw an error even when calling the getitem method for the index len(datasetTestInstance), which should actually be out of range since it is not defined as an index in the tensor/list? Or, in other words, when having reached the index len(datasetTestInstance) and then going into the <strong>getitem</strong> method, what happens exactly? It obviously doesn't make the call 'y = self.X[x]' anymore (otherwise there would be an IndexError), but it DOES enter the getitem method, which I can see as it prints the index x from within the getitem method. So what happens in that method? And why does it behave differently depending on whether it gets a torch.tensor/list or a dict?</p>
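<p>Update, from my digging so far: I suspect it is the old <code>__getitem__</code>-based iteration protocol, which list() falls back to when an object defines <code>__getitem__</code> but not <code>__iter__</code>. A minimal sketch of what I mean (the class name is made up):</p>
<pre><code>class Demo:
    """No __iter__ here, so list() falls back to calling __getitem__
    with 0, 1, 2, ... until an IndexError is raised."""
    def __init__(self, data):
        self.data = data

    def __getitem__(self, i):
        print('asked for index', i)
        return self.data[i]

list(Demo([10, 20, 30]))        # asks for 0, 1, 2, 3; IndexError at 3 stops it
# list(Demo({0: 'a', 1: 'b'}))  # asks for 0, 1, 2; KeyError at 2 propagates
</code></pre>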
| D |
59,107,933 | 1 | <python><keras><callback> | 2019-11-29T15:58:29.197 | null | How to manipulate input tensor epochwise in keras | <p>I'm new to Keras and I have a project written in Keras which I want to modify slightly. My idea is to add random noise to random samples of the input tensor each epoch. So in every epoch, random indices of the input data will be corrupted with the noise.</p>
<p>If I inject noise into the features before feeding them to Keras <code>model.fit()</code>, the noise will be added to the same samples of the input tensor and will stay the same during the whole training. But I want to change the random samples after each epoch.</p>
<p>Therefore, I tried to use Callbacks:</p>
<pre><code>from keras.callbacks import Callback

class Noisify(Callback):
    def __init__(self, mixture_rate=0.2):
        self.mixture_rate = mixture_rate

    def on_epoch_begin(self, epoch, logs={}):
        # get input tensor
        '''
        mix randomly noisy and clean features here
        '''
        # set input tensor
</code></pre>
<p>But I couldn't find a way to get and set the input tensor, in the way that <code>model.get_layer</code> or <code>get_weights</code> work. How can I do it?</p>
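<p>To make my intent concrete, here is a sketch of the behaviour I am after, written as a <code>keras.utils.Sequence</code> instead of a <code>Callback</code> (the class name, the Gaussian noise, and the batch handling are all my own assumptions):</p>
<pre><code>import numpy as np
from keras.utils import Sequence

class NoisySequence(Sequence):
    """Feeds batches to fit(), corrupting a fresh random subset each epoch."""
    def __init__(self, x, y, batch_size=32, mixture_rate=0.2, noise_std=0.1):
        self.x, self.y = x, y
        self.batch_size = batch_size
        self.mixture_rate = mixture_rate
        self.noise_std = noise_std
        self.on_epoch_end()  # draw the first epoch's noisy indices

    def __len__(self):
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        xb = self.x[sl].copy()
        mask = self.noisy[sl]
        xb[mask] += np.random.normal(0.0, self.noise_std, xb[mask].shape)
        return xb, self.y[sl]

    def on_epoch_end(self):
        # Keras calls this after every epoch: re-draw which samples get noise.
        self.noisy = np.random.rand(len(self.x)) < self.mixture_rate

# model.fit_generator(NoisySequence(x_train, y_train), epochs=10)
</code></pre>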
| D |
5,146,597 | 4 | <c#><ocr><text-recognition> | 2011-02-28T19:26:47.150 | null | Which library to use to extract text from images? | <p>I am writing a program that, when given an image of a low-level math problem (e.g. 98*13), should be able to output the answer. The numbers would be black, and the background white. <em>Not</em> a captcha, just an image of a math problem.</p>
<p>The math problems would only have two numbers and one operator, and that operator would only be +, -, *, or /.</p>
<p>Obviously, I know how to do the calculating ;) I'm just not sure how to go about getting the text from the image.</p>
<p>A free library would be ideal... although if I have to write the code myself I could probably manage.</p>
| No |
5,197,636 | 3 | <xslt> | 2011-03-04T18:16:42.173 | 5198651 | XSLT transformation falsely replaces characters | <p>I've the following <em>XSLT</em> code:</p>
<pre><code><xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:template match="POL">
<sql:SQLXML>
<sql:Execute as="Test" into="Test">
<sql:SQL>
select trans_type, trans_datetime, replace(convert(varchar, trans_datetime, 114), ':', '_') as trans_time, application_data from Acord_Transaction where transaction_id=
<xsl:value-of select="TRANSACTIONID" />
</sql:SQL>
</sql:Execute>
<xsl:if test="string-length(APPLICATIONDATA/parameters/noteid) &gt; 0">
<sql:Execute as="newnote" into="newnotes">
<sql:SQL>
select * from notes where note_id=
<xsl:value-of select="APPLICATIONDATA/parameters/noteid" />
AND added_date='
<xsl:value-of select="APPLICATIONDATA/parameters/addeddate" />
'
</sql:SQL>
</sql:Execute>
</xsl:if>
</xsl:template>
</xsl:stylesheet>
</code></pre>
<p><strong>Problem:</strong></p>
<p><code>APPLICATIONDATA</code> is a string field that is initialized from the database and contains XML code. After the <code>sql:execute</code> completes, the output <code><</code> and <code>></code> are replaced by <code>&lt</code> and <code>&gt</code>.</p>
<p>I need a template that will be applied after <code>sql:execute</code>, so that the result of the execution becomes <em>valid</em> XML code. Then I can run the XPath from <code>xsl:if</code> on it.</p>
| No |
5,231,147 | 1 | <php> | 2011-03-08T10:22:35.883 | 5231202 | enter selected checkbox values into database separated by commas | <p>I have a PHP form with text boxes, radio buttons and checkboxes. I have written the code for inserting the entered data into the database. My data is getting entered except for the checkbox values. I want to enter all the selected checkbox values within a single column, separated by commas. Can anyone correct my code? I am not entering the code here since half of it gets hidden; please go to the following link to view my code.</p>
<p><a href="http://pastebin.com/DTR9LvtZ" rel="nofollow">http://pastebin.com/DTR9LvtZ</a></p>
| No |
5,407,652 | 4 | <asp.net-mvc><asp.net-mvc-3> | 2011-03-23T15:29:36.817 | 5408735 | ASP.NET Remote Validation only on blur? | <p>I'm using the remote validation in MVC 3, but it seems to fire any time that I type something, if it's the second time that field's been active. The problem is that I have an autocomplete box, so they might click on a result to populate the field, which MVC views as "leaving" it.</p>
<p>Even apart from the autcomplete thing, I don't want it to attempt to validate when they're halfway through writing. Is there a way that I can say "only run validation n milliseconds after they are finished typing" or "only run validation on blur?"</p>
| No |
5,411,283 | 1 | <ruby-on-rails><ruby-on-rails-3><mongodb><mongoid> | 2011-03-23T20:34:14.403 | 5411334 | Rails 3: how to use active record and mongoid at the same time | <p>I read a lot that folks recommend using NoSQL together with SQL datastores. For example, have some reporting, audit-trailing or log information in MySQL and some threaded hierarchical data in MongoDB.</p>
<p>Is it possible to hook up rails with active record on mysql as well as mongoid?</p>
<p>Out of the box it seems not to work... Any hints?
Or is this not a recommended approach?</p>
| No |
5,434,028 | 2 | <sql><database><oracle><function> | 2011-03-25T14:47:46.327 | 5434466 | Oracle Count with Function | <p>I have a <code>SQL</code> like this.</p>
<pre><code>SELECT A.HESAP_NO, A.TEKLIF_NO1 || '/' || A.TEKLIF_NO2 AS TEKLIF,
       MV_K(A.TEKLIF_NO1, A.TEKLIF_NO2, A.DATE) AS KV
FROM S_TEKLIF A
</code></pre>
<p>When I try to calculate <code>MV_K(A.TEKLIF_NO1,A.TEKLIF_NO2, A.DATE)/COUNT(@TEKLIF)</code>, it doesn't work.</p>
<p>I use Oracle.</p>
<p>How can I divide like that in <code>Oracle</code>?</p>
<p>Here is my FULL aspx code:</p>
<pre><code><%@ Page Language="C#" AutoEventWireup="true" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<%@ Import Namespace="System" %>
<%@ Import Namespace="System.Configuration" %>
<%@ Import Namespace="System.IO" %>
<%@ Import Namespace="System.Text" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<%@ Import Namespace="System.Web.UI.WebControls" %>
<%@ Import Namespace="System.Collections.Generic" %>
<%@ Import Namespace="System.Linq" %>
<%@ Import Namespace="System.Web" %>
<%@ Import Namespace="System.Web.UI" %>
<%@ Import Namespace="System.Collections" %>
<%@ Import Namespace="System.Data.OracleClient" %>
<%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="asp" %>
<script runat="server">
protected void Page_Load(object sender, EventArgs e)
{
Calculate.Visible = false;
}
protected void Calculate_Click(object sender, EventArgs e)
{
Calculate.Visible = true;
double sumMV = 0;
foreach (GridViewRow gvr in GridView1.Rows)
{
CheckBox cb = (CheckBox)gvr.FindControl("NameCheckBoxField2");
if (cb.Checked == true)
{
double amount = Convert.ToDouble(gvr.Cells[9].Text);
sumMV += amount;
}
}
GridView1.FooterRow.Cells[9].Text = String.Format("{0:n}", sumMV);
double sumRISK = 0;
foreach (GridViewRow gvr in GridView1.Rows)
{
CheckBox cb = (CheckBox)gvr.FindControl("NameCheckBoxField1");
if (cb.Checked == true)
{
double amount = Convert.ToDouble(gvr.Cells[7].Text);
sumRISK += amount;
}
}
GridView1.FooterRow.Cells[7].Text = String.Format("{0:n}", sumRISK);
double sumKV = 0;
foreach (GridViewRow gvr in GridView1.Rows)
{
CheckBox cb = (CheckBox)gvr.FindControl("NameCheckBoxField3");
if (cb.Checked == true)
{
double amount = Convert.ToDouble(gvr.Cells[11].Text);
if (amount != -1)
{
sumKV += amount;
}
}
}
GridView1.FooterRow.Cells[11].Text = String.Format("{0:n}", sumKV);
}
protected void SendToGridview_Click(object sender, EventArgs e)
{
DateTime dt_stb;
Calculate.Visible = true;
string strQuery = string.Empty;
string ConnectionString = ConfigurationManager.ConnectionStrings["ora"].ConnectionString;
OracleConnection myConnection = new OracleConnection(ConnectionString);
string txtBoxText1 = ((TextBox)Page.FindControl("TextBox1")).Text;
if (txtBoxText1 != "")
{
strQuery = @"SELECT A.HESAP_NO, A.TEKLIF_NO1 || '/' || A.TEKLIF_NO2 AS TEKLIF, A.MUS_K_ISIM AS MUSTERI,
B.MARKA, C.SASI_NO, C.SASI_DURUM, D.TAS_MAR, NVL(RISK_SASI(A.TEKLIF_NO1, A.TEKLIF_NO2, C.URUN_SIRA_NO, C.SIRA_NO, :S_TARIH_B),0) AS RISK,
NVL(MV_SASI(A.TEKLIF_NO1, A.TEKLIF_NO2, C.SIRA_NO, C.URUN_SIRA_NO, :S_TARIH_B),0) AS MV,
MV_K(A.TEKLIF_NO1,A.TEKLIF_NO2, :S_TARIH_B)/COUNT(*) OVER() AS KV
FROM S_TEKLIF A, S_URUN B, S_URUN_DETAY C, KOC_KTMAR_PR D
WHERE A.TEKLIF_NO1 || A.TEKLIF_NO2 = B.TEKLIF_NO1 || B.TEKLIF_NO2
AND A.TEKLIF_NO1 || A.TEKLIF_NO2 = C.TEKLIF_NO1 || C.TEKLIF_NO2
AND B.SIRA_NO = C.URUN_SIRA_NO
AND C.SASI_DURUM IN ('A','R')
AND B.DISTRIBUTOR = D.DIST_KOD
AND B.MARKA = D.MARKA_KOD
AND B.URUN_KOD = D.TAS_KOD ";
}
string param = "";
foreach (ListItem l in CheckBoxList1.Items)
{
if (l.Selected)
{
param += string.Format("'{0}'", l.Value);
param += ",";
}
}
try
{
param = param.Remove(param.Length - 1);
strQuery = strQuery + " AND A.HESAP_NO IN (" + param + ")";
OracleCommand myCommand = new OracleCommand(strQuery, myConnection);
myCommand.CommandType = System.Data.CommandType.Text;
myCommand.Connection = myConnection;
myCommand.CommandText = strQuery;
dt_stb = DateTime.Parse(txtBoxText1);
myCommand.Parameters.AddWithValue(":S_TARIH_B", dt_stb);
myConnection.Open();
OracleDataReader dr = myCommand.ExecuteReader(System.Data.CommandBehavior.CloseConnection);
GridView1.DataSource = dr;
GridView1.DataBind();
GridView1.Visible = true;
myConnection.Close();
}
catch
{
ScriptManager.RegisterClientScriptBlock(this, this.GetType(), " ", "alert('Choose at least one customer!')", true);
Calculate.Visible = false;
GridView1.Visible = false;
TextBox1.Text = string.Empty;
}
double sumMV = 0;
foreach (GridViewRow gvr in GridView1.Rows)
{
CheckBox cb = (CheckBox)gvr.FindControl("NameCheckBoxField2");
if (cb.Checked == true)
{
double amountMV = Convert.ToDouble(gvr.Cells[9].Text);
sumMV += amountMV;
}
}
GridView1.FooterRow.Cells[9].Text = String.Format("{0:n}", sumMV);
double sumRISK = 0;
foreach (GridViewRow gvr in GridView1.Rows)
{
CheckBox cb = (CheckBox)gvr.FindControl("NameCheckBoxField1");
if (cb.Checked == true)
{
double amountBV = Convert.ToDouble(gvr.Cells[7].Text);
sumRISK += amountBV;
}
}
GridView1.FooterRow.Cells[7].Text = String.Format("{0:n}", sumRISK);
double sumKV = 0;
foreach (GridViewRow gvr in GridView1.Rows)
{
CheckBox cb = (CheckBox)gvr.FindControl("NameCheckBoxField3");
if (cb.Checked == true)
{
double amountKV = Convert.ToDouble(gvr.Cells[11].Text);
if (amountKV != -1)
{
sumKV += amountKV;
}
}
}
GridView1.FooterRow.Cells[11].Text = String.Format("{0:n}", sumKV);
}
</script>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
<style type="text/css">
#form1
{
height: 729px;
width: 1083px;
}
.style1
{
width: 265px;
}
</style>
</head>
<body>
<form id="form1" runat="server">
<br />
<img src="../images/Scania_Logo.gif" style="height: 49px; width: 193px" />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<asp:Label ID="Label1" runat="server" Font-Bold="True" Font-Size="X-Large"
ForeColor="Blue" Height="40px" Text="BV &amp; RISK SIMULATOR"
Width="329px" style="text-align: center"></asp:Label>
&nbsp;
<asp:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server">
</asp:ToolkitScriptManager>
<br />
<div style="OVERFLOW-Y:scroll; WIDTH:362px; HEIGHT:177px">
<br />
<table border="5" bordercolor=blue style="height: 116px; width: 343px">
<tr>
<td class="style1">
<asp:CheckBoxList ID="CheckBoxList1" runat="server"
DataSourceID="ChechkBoxDataSource" DataTextField="MUS_K_ISIM"
DataValueField="HESAP_NO" Font-Size="12pt">
</asp:CheckBoxList>
</td>
</tr>
</table>
</div>
<div style="width: 331px">
<br />
<asp:Textbox ID="TextBox1" runat="server" Font-Size="X-Small" Height="13px" Font-Names="Verdana" Width="75px" ></asp:Textbox>
<asp:CalendarExtender Format="dd/MM/yyyy" ID="TextBox1_CalendarExtender" runat="server"
TargetControlID="TextBox1">
</asp:CalendarExtender>
<asp:Image ID="ImageButton3" runat="server" ImageUrl="~/images/SmallCalendar.gif"/>
<br />
</div>
<asp:SqlDataSource ID="ChechkBoxDataSource" runat="server"
ConnectionString="<%$ ConnectionStrings:ora %>"
ProviderName="<%$ ConnectionStrings:ora.ProviderName %>"
SelectCommand="SELECT DISTINCT(A.HESAP_NO),A.MUS_K_ISIM
FROM S_TEKLIF A
ORDER BY A.MUS_K_ISIM">
</asp:SqlDataSource>
<br />
<asp:Button ID="SendToGridview" runat="server" Text="Calculate" Width="73px"
onclick="SendToGridview_Click" />
<br />
<br />
<asp:GridView ID="GridView1" runat="server"
Width="16px" CellPadding="4"
GridLines="None" Height="16px" ForeColor="#333333"
AutoGenerateColumns="False" DataKeyNames="RISK,MV" BorderColor="White"
BorderStyle="Ridge" ShowFooter="True" >
<AlternatingRowStyle BackColor="White" />
<Columns>
<asp:BoundField HeaderText="HESAP" DataField="HESAP_NO" />
<asp:BoundField HeaderText="TEKLIF" DataField="TEKLIF" >
<ItemStyle Wrap="False" />
</asp:BoundField>
<asp:BoundField HeaderText="MUSTERI" DataField="MUSTERI" >
<ItemStyle Wrap="False" />
</asp:BoundField>
<asp:BoundField HeaderText="MARKA" DataField="MARKA" />
<asp:BoundField HeaderText="SASI" DataField="SASI_NO" >
<ItemStyle Wrap="False" />
</asp:BoundField>
<asp:BoundField HeaderText="DURUM" DataField="SASI_DURUM" />
<asp:BoundField HeaderText="TASIT MARKA" DataField="TAS_MAR" >
<ItemStyle Wrap="False" />
</asp:BoundField>
<asp:BoundField HeaderText="BV" DataField="RISK" DataFormatString="{0:n2}"/>
<asp:templatefield headertext="">
<itemtemplate>
<asp:CheckBox DataField="NameCheckBoxField1" ID="NameCheckBoxField1" Checked="True" runat="server"></asp:CheckBox>
</itemtemplate>
</asp:templatefield>
<asp:BoundField HeaderText="MV" DataField="MV" DataFormatString="{0:n2}"/>
<asp:templatefield headertext="">
<itemtemplate>
<asp:CheckBox DataField="NameCheckBoxField2" ID="NameCheckBoxField2" Checked="True" runat="server"></asp:CheckBox>
</itemtemplate>
</asp:templatefield>
<asp:BoundField HeaderText="KV" DataField="KV" DataFormatString="{0:n2}"/>
<asp:templatefield headertext="">
<itemtemplate>
<asp:CheckBox DataField="NameCheckBoxField3" ID="NameCheckBoxField3" Checked="True" runat="server"></asp:CheckBox>
</itemtemplate>
</asp:templatefield>
</Columns>
<EditRowStyle BackColor="#2461BF" />
<FooterStyle BackColor="#507CD1" ForeColor="White" Font-Bold="True" />
<HeaderStyle BackColor="#507CD1" Font-Bold="True" ForeColor="White" />
<PagerStyle BackColor="#2461BF" ForeColor="White" HorizontalAlign="Center" />
<RowStyle BackColor="#EFF3FB" />
<SelectedRowStyle BackColor="#D1DDF1" Font-Bold="True" ForeColor="#333333" />
<sortedascendingcellstyle backcolor="#F4F4FD" />
<sortedascendingheaderstyle backcolor="#5A4C9D" />
<sorteddescendingcellstyle backcolor="#D8D8F0" />
<sorteddescendingheaderstyle backcolor="#3E3277" />
<SortedAscendingCellStyle BackColor="#F5F7FB"></SortedAscendingCellStyle>
<SortedAscendingHeaderStyle BackColor="#6D95E1"></SortedAscendingHeaderStyle>
<SortedDescendingCellStyle BackColor="#E9EBEF"></SortedDescendingCellStyle>
<SortedDescendingHeaderStyle BackColor="#4870BE"></SortedDescendingHeaderStyle>
</asp:GridView>
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
ConnectionString="<%$ ConnectionStrings:ora %>"
ProviderName="<%$ ConnectionStrings:ora.ProviderName %>"
SelectCommand=" SELECT A.HESAP_NO, A.TEKLIF_NO1 || '/' || A.TEKLIF_NO2 AS TEKLIF, A.MUS_K_ISIM ,
B.MARKA, C.SASI_NO, C.SASI_DURUM, D.TAS_MAR, NVL(RISK_SASI(A.TEKLIF_NO1, A.TEKLIF_NO2, C.URUN_SIRA_NO, C.SIRA_NO, :S_TARIH_B),0) AS RISK,
NVL(MV_SASI(A.TEKLIF_NO1, A.TEKLIF_NO2, C.URUN_SIRA_NO, C.SIRA_NO, :S_TARIH_B),0) AS MV,
MV_K(A.TEKLIF_NO1,A.TEKLIF_NO2, :S_TARIH_B)/COUNT(*) OVER() AS KV, 'NameCheckBoxField1' = 0x1, 'NameCheckBoxField2' = 0x1, 'NameCheckBoxField3' = 0x1
FROM S_TEKLIF A, S_URUN B, S_URUN_DETAY C, KOC_KTMAR_PR D
WHERE A.TEKLIF_NO1 || A.TEKLIF_NO2 = B.TEKLIF_NO1 || B.TEKLIF_NO2
AND A.TEKLIF_NO1 || A.TEKLIF_NO2 = C.TEKLIF_NO1 || C.TEKLIF_NO2
AND B.SIRA_NO = C.URUN_SIRA_NO
AND C.SASI_DURUM IN ('A','R')
AND B.DISTRIBUTOR = D.DIST_KOD
AND B.MARKA = D.MARKA_KOD
AND B.URUN_KOD = D.TAS_KOD ">
</asp:SqlDataSource>
<br />
<asp:Button ID="Calculate" runat="server" onclick="Calculate_Click"
Text="Calculate" />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
<br />
<br />
</form>
</body>
</html>
</code></pre>
| No |
5,549,171 | 3 | <firefox><hyperlink> | 2011-04-05T08:29:47.727 | 5549220 | How do I see my hyper link preview at the bottom of firefox? | <p>I used to be able to see where a hyperlink was pointing to at the bottom of Firefox. I've got Firefox 4 and, annoyingly, there is no bar now.</p>
<p>This is related to programming because I need to see where hyperlinks go.</p>
| No |
5,583,581 | 4 | <php> | 2011-04-07T15:27:02.257 | 5583640 | How do I overcome the undefined offset 0 notices in PHP | <p>How do I overcome the undefined offset 0 notices in PHP?</p>
<p>I have some statements in PHP which contain some arrays that are causing these notices. In order to fix this I am using empty() in PHP.</p>
<pre><code>$val= array(
'codes' => $this->arr1['BAC'],
'lines' => $this->arr1['array1'][0]['RACE'] // this is the statement causing undefined offset 0
);
if(!empty( $this->arr1['array1'])){
$val= array(
'codes' => $this->arr1['BAC'],
'lines' => $this->arr1['array1'][0]['RACE'] // this is the statement causing undefined offset 0
);
}
</code></pre>
<p>Would the above be recommended? But my question is: if I put them in the if(), can $val be accessed outside the if()?</p>
| No |
5,612,394 | 2 | <c#><.net> | 2011-04-10T14:25:04.103 | 5612422 | Are there any concurrent queue types in a .NET 3rd party library? | <p>If I can't use .NET 4, are there any alternatives?</p>
| No |
5,683,977 | 1 | <java><gwt> | 2011-04-16T01:58:20.693 | 5684148 | How to edit a multi-value field with GWT Editor framework? | <p>I am trying to figure out how to use GWT Editor when my model has a field that is a Set, List, etc. </p>
<p>I have this entity proxy:</p>
<pre><code>public interface MyModel {
void setSomeCollection(Set<String> c);
Set<String> getSomeCollection();
}
</code></pre>
<p>Here is my multiselect field. I am extending ListBox so that I can change some of its behavior later.</p>
<pre><code>public class DualListBox extends ListBox implements LeafValueEditor<Set<String>> {
public DualListBox() {
super(true);
}
@Override
public void setValue(Set<String> values) {
if (values == null) {
return;
}
for (String value : values) {
for (int i = 0; i < getItemCount(); i++) {
if (getValue(i).equals(value)) {
setItemSelected(i, true);
} else {
setItemSelected(i, false);
}
}
}
}
@Override
public Set<String> getValue() {
Set<String> values = new HashSet<String>();
for (int i = 0; i < getItemCount(); i++) {
if (isItemSelected(i)) {
values.add(getValue(i));
}
}
// Debug shows that the set of values is populated correctly..
return values;
}
}
</code></pre>
<p>Basically I just can't figure out how to get fields with a Set (I have tried List as well) to work with GWT's Editor framework. Debugging so far shows that the values are coming out of the editor correctly.</p>
<p>I have looked at ListEditor, but that looks like it's used to edit a list of more complex object types, not a single field with multiple possible values. Am I implementing the wrong editor type? Is GWT Editor not able to handle fields that are collections yet?</p>
| No |
5,771,086 | 1 | <jquery><xml><json><rss> | 2011-04-24T14:31:17.063 | 9513903 | Parsing mixed result from google feed api | <p>Can someone give me an example of how to parse a mixed feed result from the Google API (e.g. XML + JSON)? I'm trying to get the enclosure element of the feed, but it seems that the JSON result doesn't return it! Thanks!</p>
| No |
5,847,902 | 2 | <c><linker><yacc><lex> | 2011-05-01T10:49:03.713 | null | Error while calling yyin and yyparse from main.c | <p>Hi, I receive the following error:</p>
<pre><code>main.c: undefined reference to yyin
main.c: undefined reference to yyparse
</code></pre>
<p>This is what I am doing: I have a lex file <code>a.l</code> and a yacc file <code>b.y</code>.</p>
<p>main.c is :</p>
<pre><code>#include <iostream.h>
#include <stdio.h>
#include <stdlib.h>
extern int yyparse();
extern FILE *yyin;
FILE *outFile_p;
main(int argc,char *argv[])
{
if(argc<3)
{
printf("Please specify the input file & output file\n");
exit(0);
}
FILE *fp=fopen(argv[1],"r");
if(!fp)
{
printf("couldn't open file for reading\n");
exit(0);
}
outFile_p=fopen(argv[2],"w");
if(!outFile_p)
{
printf("couldn't open temp for writting\n");
exit(0);
}
yyin=fp;
yyparse();
fclose(fp);
fclose(outFile_p);
}
</code></pre>
<p>Before this I have to link:<br>
<code>x.h</code> and <code>x.c</code>.</p>
<p>I write following commands on terminal :</p>
<pre><code>lex a.l
yacc -v -d b.y
gcc -o x.o -c x.c
gcc -o main.o -c main.c
gcc -o myexecutable main.o x.o
</code></pre>
<p>That's when I get this error. What am I doing wrong?</p>
| No |
5,930,656 | 4 | <javascript><collections><backbone.js> | 2011-05-08T22:01:36.160 | 25928291 | Setting attributes on a collection - backbone js | <p>Collections in backbone js don't allow you to <code>set</code> attributes, but I often find that there is a need to store some meta-information about a collection. Where is the best place to set that information?</p>
| No |
5,993,787 | 2 | <sql><ms-access><between> | 2011-05-13T14:57:21.353 | 5993838 | Between Operator in Access | <p>I am working on an Access database. I am trying to write a query to let the user select the start and end dates. Here is the query in design view under Criteria:</p>
<p>Between [Enter Date From(mm/dd/yyyy)] And [Enter Date to(mm/dd/yyyy)]</p>
<p>After 04/01/2011 and 05/12/2011 were entered, however, the result in the Access database is not the same as in the below query.</p>
<pre><code>SELECT SOMETHING
FROM MY TABLE
WHERE SOLD_DATE >= '04/01/2011'
AND SOLD_DATE <'05/13/2011'
</code></pre>
<p>Access:16,564 records</p>
<p>MS query: 16,573 records</p>
<p>I guess it's the time assumed to be 12 AM. How can I get the query to include the end date all the way to 11:59 PM?</p>
<p>Thanks much!</p>
| No |
6,049,459 | 1 | <xml><xslt><xpath> | 2011-05-18T18:37:19.603 | null | How to get XPath node using XSLT | <p>I need to get the XPath node using XSLT. I need to check if a particular node exists within a block of XML. The only way I know how to do it is using the XPath node. Let me know if you know of another way to check if a certain node exists within a block of XML using XSLT.</p>
| No |
6,049,842 | 1 | <nservicebus> | 2011-05-18T19:11:56.717 | 6050651 | Nservicebus hello world in process hosting! | <p>I am trying out nservicebus as a solution instead of using WCF MSMQ binding.
I have tried to get an in-process hello world working by modifying the full duplex sample. I have got it to the point that I am sending messages from the client and the server receives them (guessing by received messages being printed on the server side), but the request message handler is somehow not registered and is not being called on receipt, i.e. I'm not hitting breakpoints that I set.</p>
<p>I think I am supposed to register a handler when configuring the server using:</p>
<pre><code>//initialise nservice bus
Bus = NServiceBus.Configure.With()
.Log4Net()
.DefaultBuilder()
.XmlSerializer()
.MsmqTransport()
.IsTransactional(false)
.PurgeOnStartup(false)
.UnicastBus()
.ImpersonateSender(false)
.CreateBus()
.Start();
</code></pre>
<p>Sorry, it could be a very silly question, but I just want to get started and the samples are light on in-process hosting examples.</p>
<p>Any pointers or links to examples would be great.</p>
<p>BR
Niladri</p>
| No |
6,120,079 | 2 | <sybase> | 2011-05-25T05:52:06.597 | null | Sybase - Default Constraints | <p>I need to drop all the default column level constraints on the table, in Sybase.</p>
<p>I don't have any idea how to do it; I had tried to disable the constraints with the following:</p>
<pre><code>ALTER TABLE Employee NOCHECK CONSTRAINT ALL
</code></pre>
<p>The above does not even work; it gives an error as below:</p>
<pre><code>Error (156) Incorrect syntax near the keyword 'CONSTRAINT'
</code></pre>
<p>Also, I have tried a custom stored proc using the sys tables, but that is NOT compliant with Sybase syntax; it works on SQL Server, as below:</p>
<pre><code>declare @sql varchar(1024)
declare curs cursor for
select 'ALTER TABLE '+tab.name+' DROP CONSTRAINT '+cons.name
from sysobjects cons,sysobjects tab
where cons.type in ('D')
and cons.parent_object_id=tab.object_id and tab.type='U'
order by cons.type
open curs
fetch next from curs into @sql
while (@@fetch_status = 0)
begin
exec(@sql)
fetch next from curs into @sql
end
close curs
deallocate curs
</code></pre>
<p>Can someone please solve this riddle?</p>
| No |
6,136,054 | 6 | <android><cursor><android-edittext><gravity> | 2011-05-26T09:00:19.147 | 6292423 | Android: Edittext with gravity center not working on device | <p>In Android I'm using a single-line EditText with gravity set to center. In the emulator this works great: both the hint and a blinking cursor show up. When testing on a device (Xperia X10) neither the hint text nor the blinking cursor shows up. The blinking cursor only shows if I enter some text into the EditText.</p>
<p>This is my EditText; can anyone see if something is missing?</p>
<pre><code><LinearLayout
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:gravity="center"
android:layout_gravity="center"
>
<EditText
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:inputType="numberDecimal"
android:ellipsize="end"
android:maxLines="1"
android:gravity="center"
android:layout_gravity="center"
android:hint="Some hint text"
></EditText>
</LinearLayout>
</code></pre>
<p>What I want:</p>
<p><img src="https://i.stack.imgur.com/Qu715.png" alt="This is what I want"></p>
<p>What I get with the above code (empty edittext with no cursor):</p>
<p><img src="https://i.stack.imgur.com/k7VNE.png" alt="The edittext doesn't show the hint text and have no blinking cursor"></p>
| No |
6,246,108 | 6 | <php><sql> | 2011-06-05T21:52:46.063 | 6246145 | PHP include depending on account | <p>I'm having an issue with the code I am working on. I am trying to get an include loaded depending on the status of the user (if they paid and if they have an invalid email). The NULL value is being pulled from the database; however, it only sends to entermail.php.</p>
<p>Here is my code; does anyone see what's wrong?</p>
<pre><code> function is_premium() {
$premium_query = mysql_query("SELECT 'authLevel' FROM 'users' WHERE 'fbID' ='".$userId."'");
$premium = mysql_query($premium_query);
if ($premium=='1') {
return true;
} else {
return false;
}
}
function valid_email() {
$validemail_query = mysql_query("SELECT 'parentEmailOne' FROM 'users' WHERE 'fbID' ='".$userId."'");
$validemail = mysql_query($validemail_query);
if ($validemail != 'NULL') {
return true;
} else {
return false;
}
}
if (!empty($session) && is_premium() && valid_email()) {
include 'indexPremium';
} else if (!empty($session) && valid_email()) {
include 'entermail.php';
} else if (!empty($session)) {
include 'indexLoggedIn.php';
}else{
include 'indexNotLogged.php';
}
</code></pre>
| No |
6,390,643 | 1 | <java> | 2011-06-17T19:11:14.537 | null | Checking if code was altered | <p>I want to call a few costly update methods whenever my code changes. I hit Ctrl-S in Eclipse; this triggers a file save and a hot code replacement. My program checks to see that the file was saved, spends about 5 seconds crunching numbers, and then updates the screen.</p>
<p>I'm using this thing, which I call a few times per second:</p>
<pre><code>public static long lastSourceUpdate=0;
private static boolean wasUpdated() {
File source = new File("/home/user/workspace/package/AClass.java");
long t = source.lastModified();
if (t>lastSourceUpdate+2000) { // wait for hcr
lastSourceUpdate=t;
return true;
}
return false;
}
</code></pre>
<p>There are problems with this approach:</p>
<ol>
<li>Checking the file is unreliable, since compilation and hot code replace can finish a few seconds after the file changes. That's why there's a 2000ms delay above. Though the method returns true, the code I just altered isn't updated - or worse yet, Eclipse updates it halfway through the number-crunching, and the result is hopelessly scrambled.</li>
<li>Checking files is a hack in any case, it should check classes. The disk probably doesn't need to get involved.</li>
<li>It only checks one class, but I sometimes want to check a whole package, or failing that, any changes to the project at all. When a file changes, the package directory's lastModified is not changed. A recursive scan of the folders/packages would work, but isn't very elegant if the package is huge.</li>
<li>Looks ugly.</li>
</ol>
<p>So, what is the best way to check for when code changes? Perhaps reflection? A serialVersionUID check? It's not like classes themselves have a compilationDate field - or do they? Is there some secret value that Eclipse updates? Is there a file that Eclipse changes with every save?</p>
<p>Thanks for checking this out.</p>
| No |
6,398,477 | 3 | <ruby><ruby-on-rails-3><irb> | 2011-06-18T19:45:43.420 | 6398503 | Rails -- is IRB necessary? | <p>I am following Michael Hartl's RoR tutorial and there are multiple places where he uses IRB, often to add users to the database. When I use <code>rails console</code> to open IRB and then create a User in the database everything works fine, but if I try to do the same thing by running the same line of code from a file like <code>test.rb</code> in the directory of my application it doesn't work because it says it can't find the User model. Is there any way I can run these lines of code (i.e. for putting a user into a database) from a .rb file rather than from IRB?</p>
| No |
6,409,857 | 1 | <c++><icu> | 2011-06-20T10:31:54.770 | null | How to get use CalendarAstronomer from ICU | <p>I would like to use the <a href="http://codesearch.google.com/#U1WZsV5wHb4/i18n/astro.h&q=CalendarAstronomer&type=cs" rel="nofollow">CalendarAstronomer</a> class from <a href="http://site.icu-project.org/" rel="nofollow">ICU</a> to calculate the sunset/sunrise values for a given location. </p>
<p>The API is good and clean, but the necessary file astro.h is not installed, neither when using apt-get nor when building ICU myself. What is wrong here? Are there any special components I could not find out about that are needed for the CalendarAstronomer class?</p>
<p>BTW: The main reason is the pretty liberal license of ICU. I found several code samples calculating sunset/sunrise values, but the licenses are often not clear. So here is an alternative question: Are there other libraries/code samples using a liberal license (Apache, BSD) for calculating the sunset/sunrise in C++?</p>
| No |