H: Can I install Tensorflow in Anaconda without using Keras? If I can, what is the difference between using Keras with Tensorflow and only Tensorflow? AI: Can I install Tensorflow in Anaconda without using Keras? Absolutely. If I can, what is the difference between using Keras with Tensorflow and only Tensorflow? While Tensorflow is a super powerful numerical computation and optimization library with lots of features for building neural networks, it is a bit tedious and nontrivial to use, especially for beginners. This is where Keras comes in very handy. In short, Keras is an abstraction layer on top of Tensorflow. It allows you to quickly and easily define a neural network, even a complex one, in a few lines of code. Also note that Keras runs not only on top of TensorFlow but also on top of Microsoft Cognitive Toolkit, Theano, or MXNet. Look at this post where the blogger compares defining a typical neural network directly in Tensorflow versus in Keras. You can see that with Tensorflow it takes a minimum of 17 lines of code, whereas Keras reduces it to 10.
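For illustration, here is a minimal Keras sketch (my own, not taken from the linked post, with arbitrary layer sizes) of the kind of concise model definition the answer refers to:

# A tiny Keras model: defining, compiling and (commented) training in a handful of lines.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))   # hidden layer
model.add(Dense(10, activation='softmax'))               # output layer
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=10)                 # training is a single call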
H: Which parameters are hyper parameters in a linear regression? Can the number of features used in a linear regression be regarded as a hyperparameter? Perhaps the choice of features? AI: I like the way Wikipedia generally defines it: In machine learning, a hyperparameter is a parameter whose value is set before the learning process begins. By contrast, the values of other parameters are derived via training. On top of what Wikipedia says I would add: a hyperparameter is a parameter that concerns the numerical optimization problem at hand. The hyperparameter won't appear in the machine learning model you build at the end. Simply put, it controls the process of fitting your model. For example, in many machine learning algorithms we have the learning rate in gradient descent (which needs to be set before the learning process begins, as Wikipedia defines it); it is a value that controls how fast gradient descent takes its next step during the optimization. Similarly, in linear regression a hyperparameter is, for instance, the learning rate. If it is a regularized regression like LASSO or Ridge, the regularization term is a hyperparameter as well. Number of features: I would not regard the "number of features" as a hyperparameter. Ask yourself: is it a value you set before the optimization, or something you can simply adjust during the model optimization? How would you set the number of features beforehand? To me the "number of features" is part of feature selection, i.e. feature engineering that happens before you run your optimization. Think of image preprocessing before building a deep neural network: whatever image preprocessing is done is never considered a hyperparameter; it is rather a feature engineering step that comes before feeding the data to your model.
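As a small illustration (my own sketch, not part of the original answer), here the regularization strength of a Ridge regression is a hyperparameter fixed before fitting, while the coefficients are parameters learned from the data:

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X.dot(np.array([1.5, -2.0, 0.5])) + 0.1 * rng.randn(100)

model = Ridge(alpha=1.0)    # alpha is a hyperparameter: chosen before training begins
model.fit(X, y)             # coef_ and intercept_ are parameters: derived via training
print(model.coef_, model.intercept_)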
H: Which machine learning model should I learn for this problem? I'm working in python. Would like to practice some machine learning, and I've always been curious about an analog to the problem below... A collection of 3 letters are drawn randomly from the 26 letters of the alphabet. None, any, or all of the letters can be discarded and replaced with an equal number of not-yet-drawn letters. This discarding and replacing happens 3 times (or more generally n times). After all the replacements have taken place a point value is awarded to the resulting 3-letter-word. Some words have high point values, and others less so. I'm interested in the optimum replacement strategy. For example, say only the words "FOX" and "THE" reward any value. All other 3 letter combos are worthless. I want the machine to learn the correct replacement strategy while holding "FOW" with one replacement remaining. In this simple case, the strategy is to replace the W only, and attempt to draw the X. This strategy is superior to replacing all 3 letters in an attempt to make "THE," since drawing "THE" only gets points 1/('count of remaining letters' choose 3) times, whereas replacing the W gets points 1/('count of remaining letters'); the instance where an X is drawn. Can anyone point me in the right direction? AI: You should use a Markov reward model to model your problem. All the possible words are the different states of your chain. The replacement process corresponds to the transitions of your Markov chain. After defining all the properties of your chain (states, transitions, rewards, ...), you can train your model and get the best strategy for each current word.
H: How to determine the number of forward and backward passes in deep learning (CNN)? Is there a way to determine the number of forward and backward passes in the training of a neural network using python? AI: Forward and backward passes over the whole dataset are called epochs. The number of epochs is a parameter of the training procedure that cannot be estimated a priori; it depends on how low you want your training/validation errors to be. One approach is to train until your validation error is small enough, as long as the number of epochs stays below a threshold. Another approach is early stopping: when the validation error reaches a minimum, stop training. For a broader explanation of early stopping, see this blog. To sum up: the number of epochs is particular to every network, dataset and optimization technique, and you should set it so that the loss is low enough but the model does not overfit (early stopping exists precisely to avoid overfitting).
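A minimal sketch (my own, assuming a compiled Keras model and training/validation arrays already exist) of the early-stopping approach described above:

from keras.callbacks import EarlyStopping

# Stop once the validation loss has not improved for 5 consecutive epochs.
early_stop = EarlyStopping(monitor='val_loss', patience=5)

# model.fit(X_train, y_train,
#           validation_data=(X_val, y_val),
#           epochs=1000,              # upper bound; training usually stops much earlier
#           callbacks=[early_stop])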
H: problems during training a MLP type of network I trained a neural network model, an MLP type of network, where the first several layers are 1-D convolutions for processing sequence-type input. However, the training process looks as follows, where the orange line represents the validation loss and the blue line represents the training loss. The validation loss is large compared to the training loss, and the training loss also stops decreasing after the first several iterations. Is there any generic guidance to improve the performance? I have about 1 million training traces, and the number of parameters of the network is about 140K. AI: When the training loss is lower than the validation loss, the model is said to overfit the training data, i.e. it has learned so much from the training data that it only adjusts well to it and can't generalize to new data. This phenomenon is referred to as the variance of the model. The bias of the model is the difference between the training loss and the loss you've previously selected as the minimum reachable, or desired, loss. However, this analysis is usually done over other well-known metrics, such as precision and recall. You first calculate these metrics on your training data and then on the evaluation data, and perform the analysis with the same considerations. In order to reduce the variance/overfitting, there are common techniques: Increase the training data by adding more instances, if available. If no more instances are available, perform data augmentation to enlarge the training dataset. Use regularization, for example dropout (see the sketch below). Shorten the network: if there are lots of layers, the network may be learning features that are too specific to the training data. As mentioned before, I'd perform the analysis using other metrics rather than the loss.
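A minimal sketch (my own, with arbitrary layer sizes) of the dropout suggestion, in Keras:

from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(128, activation='relu', input_dim=64))
model.add(Dropout(0.5))                      # randomly zero 50% of activations during training
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')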
H: Should one hot vectors be scaled with numerical attributes In the case of having a combination of categorical and numerical attributes, I usually convert the categorical attributes to one hot vectors. My question is: do I leave those vectors as is and scale the numerical attributes through standardization/normalization, or should I scale the one hot vectors along with the numerical attributes? AI: Once converted to numerical form, models don't respond differently to one-hot-encoded columns than they do to any other numerical data. So there is a clear precedent to normalise the {0,1} values if you are normalising the other columns for any reason. The effect of doing so will depend on the model class and the type of normalisation you apply, but I have noticed some (small) improvements when scaling one-hot-encoded categorical data to mean 0, std 1 when training neural networks. It may also make a difference for model classes based on distance metrics. Unfortunately, like most choices of this kind, you often have to try both approaches and take the one with the best metric.
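A small sketch (my own, with made-up column names) of scaling the one-hot columns together with the numerical ones, so every input ends up with mean 0 and std 1:

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({'color': ['red', 'blue', 'red', 'green'],
                   'size':  [10.0, 25.0, 7.5, 40.0]})

encoded = pd.get_dummies(df, columns=['color'])    # one-hot encode the categorical column
scaled = StandardScaler().fit_transform(encoded)   # scale one-hot and numeric columns alike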
H: Using packages such as sklearn vs building ML algorithm from scratch I have been using different machine learning algorithms throughout various projects at university, and attended some inspirational lectures where industrial companies show and present how they use machine learning, data mining, etc. in their work. I myself mostly use Python, and have previously used libraries such as sklearn. My problem is that I have a huge difficulty understanding the role of built-in algorithms vs making them completely from scratch with pure coding and math - i.e. using theoretical machine learning tools to actually do the work yourself. I understand doing everything yourself can be constrained by time/money/resources. Also, sometimes it doesn't make sense to reinvent stuff that has been vastly optimized by others before you. I keep feeling that using sklearn's built-in random forest classifier or using xgboost in python is kind of cheating. I am only preparing the data, cleaning it to get the right formats, maybe doing some feature engineering and initial plots and statistical analysis. The problem is that when all that is done, we simply feed the data to this pre-made algorithm and it does everything behind the scenes, and just spits out predictions. I feel that I am not doing anything, and not using all the knowledge I have learned in the data exploration analysis. Neither am I using any of the patterns that I found in the data. Still I hear from big companies that they use xgboost and sklearn - and I can see them actively being used in Kaggle competitions. Almost every website I find only provides examples using these built-in libraries, and does not go through any deeper math or statistics at all. I really enjoy working with machine learning - but I have this strong feeling that I'm completely missing the "professional" approach of doing things. I know there are a lot of books on theoretical machine learning - but still almost everyone online seems to just use pre-made algorithms. I have been struggling with this understanding for about a year now. The validity of these pre-made algorithms in serious industrial/business/academic use is still not clear to me. EDIT: To be more specific, my question is: how are these libraries/tools viewed in a professional/industrial/academic context in comparison to actually building a model yourself? Are they just a "quick and easy" alternative way to start learning machine learning and data mining for students and amateurs, or are they in fact more powerful (than I at least know) and should not be seen as an alternative, but as a viable solution for professionals? The motivation for my single question above can be elaborated by explaining the questions I ask myself - the very questions that began this confusion for me. Is it cheating to use these models? In which situations would you use a pre-built library, and when should you avoid it? How do I merge (or use) the knowledge gained from the scientific data analysis I did before modelling with these pre-built classifiers? AI: It depends entirely on your goal. Student Phase When you're learning about machine learning algorithms, I think it is a really good idea to implement toy examples. I find this process helps with finding what you understood well and what you did not understand as well as you thought. It's in doing this work that you'll find a deeper understanding of how the algorithm really works and of the different internal choices you have to make.
Professional Phase When you have a project to deliver, you don't necessarily want to rewrite a random forest implementation from scratch. Even if you could build one in a reasonable amount of time, there's value in having something like sklearn that is well vetted and robust enough to handle edge cases that you wouldn't even consider. That's the advantage of using pre-built libraries. I-need-more Phase Eventually, you get to the point where you understand the math and know how to use the packages well, and you realize that there's a feature lacking. That's when you rip open a framework like xgboost or sklearn and modify existing code, or even create your own implementation. The reason you would do this is that the methods are cutting edge, so there just isn't anything out there, or the implementation of the framework is actually a handicap in production (as I tend to find with sklearn). The issue you seem to be facing is lack of accountability for your output. If all you do is clean data, push it through a model, get good results, and make a chart showing those results, I would say you are forgetting the "science" part in data science. The challenging part isn't using the model; it is knowing what moves your model and the potential hurdles your model may face in the real world. I've seen this play out often in my career, where a junior member will make an awesome model on the training and test datasets, and suddenly production comes along and performance tanks. Why would that happen? Well, because test and training were similar (and often from the same source), but the junior member failed to question the lack of variety in the data source and to ask whether the real world behaved the same way. What I am trying to say is: being a data scientist, a small part of the job is cleaning data, running models and making pretty pictures. The real challenge is asking the why question. Why does this work? Why does the model perform poorly? Why does the model perform well? Why is this feature important?
H: Why can't I choose my hyper-parameter in the training set? Say I've divided the data into 3 parts: training, validation and test. I know, for example, that in neural networks the number of hidden layers is a hyper-parameter. Why can't I train numerous NN architectures on the training set and then test their accuracy on the test set, thereby allowing me to choose one final model? What is the purpose of the validation set in this instance? AI: I wouldn't say you cannot tune the hyper-parameters on the training dataset, but the purpose of doing so is different than on the validation set. In general, what is intended with a ML algorithm is to optimally classify or perform regression given some training data. Once the model is trained, it will be used with new data to obtain predictions, thus: We need some training data. With this dataset, the model will look for the optimal weights and biases that minimize the selected loss function. We need some upper bound on our performance metric: for example, we can decide that our model has to perform like a human, who has shown an average metric of 95% for some specific task. The upper bound can also be the best score on a certain benchmark, etc. We start training the model on the training data and we evaluate its metric, let's take the accuracy. If the accuracy is too low, we can tune the hyper-parameters until the accuracy increases on the training data (no evaluation dataset is used here). The difference between our model's accuracy and the upper bound is taken as the bias of the model (this bias is not to be confused with the biases of the neurons). So, the accuracy on the training data gives us the bias of our model. Once we have decided our bias is reasonable (optimally 0), the question arises of how the model will perform when it is fed with data that has not been used for training (the real application). The difference between the accuracy on this unseen data and the accuracy on the training data is the variance of the model. This unseen data is the validation set, and it gives us an idea of how the model generalizes to new data. If the variance is high, then the model generalizes poorly to new data. Then, we perform hyper-parameter tuning and evaluate the accuracy on the validation data until the variance is low enough, trying not to worsen the bias (bias-variance trade-off). Example: desired accuracy = 95%. Training accuracy = 93%. Validation accuracy = 82% -> Bias = 2%, Variance = 11% Summarizing: Tuning on the training data = decreasing bias. Tuning on the validation data = decreasing variance. What's more unclear is the role of the test dataset. I usually use it when I have decided that the model will no longer be changed and I need the final metric that describes the model. The test set can also be used to get an idea of how the model performs on data it was not intended to work with. In general, the test set gives the power of the model for the inference task.
H: do the results change if you remove duplicate rows and you sum their weights? Assume that we have a dataset with the respondents on each row (N respondents) and their respective characteristics as columns (C characteristics). Each respondent also has a weight. In case of a high number of respondents, is it a good idea to remove the duplicate respondents and sum their weights? Will this lead to different results? So my initial data would look like this

> dt
   id weight v1 v2
1:  1     10  2  4
2:  2     11  2  4
3:  3     12  2  4
4:  4     13  3  5
5:  5     14  3  5
6:  6     15  3  5

And since respondents 1,2,3 are the same, and respondents 4,5,6 are the same, I would end up with this

> dt
   id weight v1 v2
1:  1     33  2  4
2:  2     42  3  5

AI: With weighted linear regression, it is exactly the same, as the loss function is a sum over instances of each weight multiplied by that instance's squared prediction error. This works, of course, for other methods with loss functions, such as logistic regression and neural networks. This is due to the fact that the loss function is linear with respect to the weights. Since it saves memory, it is definitely recommendable. With other methods, you should check whether the criterion for choosing the parameters or the method itself is linear with respect to the weights. If not, you should not do it (to me it doesn't make much sense for a method not to be linear with respect to the weights, but there might be a case where this happens).
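A quick numeric check of this claim (my own sketch; the target column and the two extra rows are made up so that the regression has a unique solution): fitting on duplicated rows with individual weights gives the same coefficients as fitting on deduplicated rows with summed weights.

import numpy as np
from sklearn.linear_model import LinearRegression

# Duplicated respondents, each with their own weight (as in the first table)
X_dup = np.array([[2, 4], [2, 4], [2, 4], [3, 5], [3, 5], [3, 5], [1, 7], [6, 2]])
y_dup = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 0.0])
w_dup = np.array([10, 11, 12, 13, 14, 15, 5, 5])

# Deduplicated respondents with summed weights (as in the second table)
X_agg = np.array([[2, 4], [3, 5], [1, 7], [6, 2]])
y_agg = np.array([1.0, 2.0, 3.0, 0.0])
w_agg = np.array([33, 42, 5, 5])

a = LinearRegression().fit(X_dup, y_dup, sample_weight=w_dup)
b = LinearRegression().fit(X_agg, y_agg, sample_weight=w_agg)
print(np.allclose(a.coef_, b.coef_), np.allclose(a.intercept_, b.intercept_))  # True True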
H: Setting class-weights in a batch where a certain class is not present I'm handling a highly imbalanced dataset, thus I'm weighting the loss function in order to penalize the misclassification of the minority classes. I set the weights in each batch as follows: w[i] = num_total_instances / num_instances_of_class[i] Where 'i' goes from 0 to N-1 (for N classes), 'num_total_instances' is the total number of instances in the batch, and 'num_instances_of_class[i]' is the number of instances of class 'i' in the batch. The problem is, as I'm doing it on each batch, it may occur that in a certain batch there is no instance of class 'j', thus w[j] = inf. What weight should I set for class 'j'? At the moment, I'm setting this w[j] to 1E6. I know that there are other approaches to fight against imbalanced datasets, and I also know that I can pre-compute the weights for the whole dataset and use them (fixed) across all batches, but I'd like a specific answer to this question, insofar as it makes sense. Of course, other suggestions for the calculation of the weights are also welcome. AI: I don't think that the weight you use matters, whether you set it to $10^6$ or $10^8$. This is because, as examples of that class are not in the batch, the loss function for that batch will not have any of those examples contributing, so that weight won't appear. If something does not appear in the loss function, it is not used at all. For that reason, it doesn't really matter what weight you use.
H: Lightweight binary image classifier I want to build a fast binary classifier that decides if an image belongs to a given class (e.g. if it is a picture of a person). I want to do this by training a network on the RGB values of pixels at a predetermined set of coordinates (e.g. 4 points, one near each corner of the image), and I want to achieve at least 75% accuracy. How many points and what architecture should I use? Or, if this is a very bad method, what is another way to build a classifier that makes training and classification as fast as possible, maybe at the cost of lowering accuracy? AI: If I wanted to build a classifier that takes a short time to train, I would rather simplify the classifier than the data. In this case, I would rather train a logistic regression model (probably with Ridge or Lasso regularization) than a very complex architecture fed with only a small fraction of the data I have. If I had to simplify the data, I would take the average of the three channels, thus having a black and white image as input. I would not simplify it further (I am very confident that 4 points won't give decent accuracy). I don't really know your circumstances, but Lasso in logistic regression is a method that might allow you to select some pixels. As the coefficients obtained by Lasso are sparse (see this answer), Lasso will select the pixels that are relevant to the prediction of the class. You can then train a bigger network on those pixels, which allows you to capture nonlinearities and more complexity.
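A runnable toy sketch (my own, with synthetic stand-in data rather than real images) of the L1-penalized logistic regression idea for picking out informative pixels:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_gray = rng.rand(200, 64 * 64)              # stand-in for 200 flattened grayscale images
y = (X_gray[:, 1000] > 0.5).astype(int)      # toy labels driven by a single pixel

clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)   # smaller C -> sparser weights
clf.fit(X_gray, y)
selected_pixels = np.flatnonzero(clf.coef_[0])   # pixels Lasso considers relevant
print(len(selected_pixels), "pixels selected")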
H: CV hyperparameter in sklearn.model_selection.cross_validate I've got a problem with understanding the cv parameter in cross_validate. Could you check if I understand it correctly? I'm running ML algorithms on a big set of data (train 37M rows), therefore I would like to run a big validation procedure to choose the best model. Using ShuffleSplit, I want to build 100 different ways of splitting the data in a random way: cv_split = model_selection.ShuffleSplit(n_splits = 100, test_size = .1, train_size = .9, random_state = 0) Then I want to use it as the cv hyperparameter in cross_validate: cv_results = model_selection.cross_validate(model, X, Y, cv = cv_split) Does it mean that my train set (X & Y) is divided into 100 random samples (each then divided into train (90% of the sample) and test (10% of the sample)), and during cross-validation a model is built for each sample separately (fitted on 90% of that particular sample and tested on the remaining 10% of it), and the mean prediction of those 100 models is the result? Also, if I am using Shuffle, does it mean that a particular row can be in multiple samples while others will not be in any of them? In other words, the 37M set is divided: First sample 370k XY1, 90% * 370k = 333k rows as XY1_1 (train), 37k as XY1_2 (test); model fitted with .fit(X1_1, Y1_1), prediction built with .predict(X1_2) and validated against Y1_2. Second sample 370k, 333k rows as XY2_1 and 37k rows as XY2_2; model fitted with .fit(X2_1, Y2_1), prediction built with .predict(X2_2) and validated against Y2_2, etc. I am not sure if the second explanation is clearer, but this is how I structure it in my head. I also read the scikit-learn guide: Cross-validation: evaluating estimator performance, but I am still not sure. AI: I'll answer this first: if I am using Shuffle, does it mean that a particular row can be in multiple samples while others will not be in any of them If I've understood the question correctly, then yes, that is possible. See below: import numpy as np from sklearn.model_selection import ShuffleSplit X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]]) y = np.array([1, 2, 1, 2]) rs = ShuffleSplit(n_splits=3, test_size=.25, random_state=8) rs.get_n_splits(X) for train_index, test_index in rs.split(X): print("TRAIN:", train_index, "TEST:", test_index) Output: TRAIN: [1 0 3] TEST: [2] TRAIN: [0 3 1] TEST: [2] TRAIN: [1 3 0] TEST: [2] Row 2 doesn't appear in any of the training sets. Next, to answer your first question: the mean prediction of those 100 models is the result Not quite. From the documentation (which you linked to): Returns: scores : dict of float arrays of shape=(n_splits,) Array of scores of the estimator for each run of the cross validation. Let's see this in action, on the above example: from sklearn import linear_model from sklearn.model_selection import cross_validate lasso = linear_model.Lasso() cross_validate(lasso, X, y, cv = rs)['test_score'] returns Out[36]: array([ 0., 0., 0.]) So you see, it returns an array with the score on each cross-validation fold.
H: Model-agnostic variable importance metric I use genetic / evolutionary algorithms in python's TPOT package to find the overall best model (GBM, RF, SVM, elastic net, etc) and its tuning parameters. Now I need a way to measure each variable's contribution to the chosen model's predictive performance. How can I do this in a model-agnostic way? My current approach is to retrain the best model architecture after holding out each of the variables. For example, if my variables are [a,b,c] I'll retrain on [a,b], [a,c], and [b,c]. I define the removed variable associated with the worst performing model as the most important variable, and I define the variable's predictive contribution as the decrease in predictive performance. I measure every variable's predictive contribution this way. Is there anything obviously wrong with this approach? Is there a better approach? I'm familiar with variable importance in decision trees, and p-values in linear models, but I need a model-agnostic approach. AI: Have you looked at the permutation importance approach in the eli5 package? The idea is that instead of retraining the model without the feature, which is computationally expensive, they replace each feature in turn with random noise in the test set. To get random noise that is drawn from the same distribution as the original feature, they just randomly shuffle that feature. Note that, as with feature importance in decision trees, this measure is biased against categorical variables with low cardinality.
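A small sketch (my own re-implementation of the idea, not the eli5 code) of permutation importance: shuffle one feature at a time in the held-out data and measure how much the score drops:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

rng = np.random.RandomState(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])                        # destroy the information in feature j
    drop = baseline - model.score(X_perm, y_test)    # importance = drop in test accuracy
    print("feature", j, "importance", round(drop, 3))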
H: Very Deep Convolutional Networks for Text Classification: Clarifying skip connections Question RE this research paper if anyone has experience with CNN's, pooling & skip connections: https://arxiv.org/pdf/1606.01781.pdf In figure 1, the input to the first convolutional block has shape (batch_size, 64, s) The output from block 1 must be (batch_size, 64, s) The output from block 2 must be (batch_size, 64, s) However the output from the pooling step has shape (batch_size, 128, s/2). How can a pooling step increase the number of parameters on axis 1 from 64 to 128?! My guess is that the input to the pooling layer is actually a concatenation of 2 of the previous layer's outputs. In this case it would have input shape (batch_size, 128, s). However, the paper does not appear to clearly specify what outputs are concatenated... Can anyone clarify how this is the case? AI: As you mentioned, the paper doesn't clarify. However, my guess is that this is not due to concatenating 2 previous layers (I don't really see a specific reason to do this here) but because of concatenating the ResNet shortcut. Generally, all conv layers have a number of filters, thus determining the output size (num_filters, size) regardless of the input. On the other hand, MaxPooling does keep the input num_filters (though in this case reducing the size). In the paper, note that the num_filters is doubled at the output of every conv layer except the one that does not keep the ResNet shortcut (the last 512-filter conv layer). So my guess is that they are concatenating the output of the conv layer and the shortcut, which would explain the output size. Hope this helps!
H: How to understand backpropagation using derivative Earlier I was learning about gradient descent, and now I understand it. Now, I have a problem with the backpropagation algorithm. I know the idea - minimize the error in a multilayer neural network using the chain rule. However, I don't understand the role of the derivative of the sigmoid function. This derivative is described in the algorithm. What is the point of it? Can you explain this step by step using simple language? AI: This is a continuation of this answer. The first part of that answer covered gradient descent. In short, this algorithm finds a set of parameters which results in a minimum of a function. From the description of the algorithm you can see where the derivative of the function is needed. How this applies to a neural network Let's first consider a single-neuron network. This is called the perceptron. We will use the sigmoid activation function. It has inputs $x$ and an associated output $\hat{y} = \frac{1}{1+e^{-(w^Tx+b)}}$. The model is parametrized by the weights $w$ and the biases. These are the model parameters we need to tune. So this single neuron takes a vector $x$ and, using the weight vector and biases, will output some value $\hat{y}$. If the weights are randomly chosen then the output will also be random. This is of no use to us. We need to determine a way of calculating a measure of correctness or wrongness in order to know how we should change our weights. I will use a measure of how wrong we are and call this the cost $C = \frac{1}{2N} \sum_{i=1}^{N}(\hat{y}_i - y_i)^2$. This takes the squared difference between the predicted output $\hat{y}$ and the ground truth $y$. We take the sum of this wrongness over all the instances that we try to predict, hence the summation, and we divide by $N$ to normalize the error. The additional division by 2 is simply to make the derivative easier to compute, as you will soon see. So we want to minimize the cost function using gradient descent. Thus we need to take the derivative of the cost function with respect to the parameters we can tune, these being the weights $w$. We must thus compute $\frac{\partial C}{\partial w}$. Using calculus we can apply the chain rule in order to split our derivative as $\frac{\partial C}{\partial w} = \frac{\partial C}{\partial \hat{y}}\frac{\partial \hat{y}}{\partial w}$ Our gradient descent calculation thus becomes $w^{new} = w^{old} - \nu \frac{\partial C}{\partial \hat{y}}\frac{\partial \hat{y}}{\partial w}$ where $\nu$ is the learning rate. Let's explicitly spell out these derivatives: $\frac{\partial C}{\partial \hat{y}} = \hat{y} - y$, and the second factor involves the derivative of the activation function we are using, the sigmoid: $\frac{\partial \hat{y}}{\partial w} = \frac{1}{1+e^{-(w^Tx + b)}} \left(1 - \frac{1}{1+e^{-(w^Tx + b)}}\right) x = \hat{y}(1-\hat{y})\,x$. This is exactly where the derivative of the sigmoid enters: it tells the update how sensitive the neuron's output is to a change in its weights.
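A tiny numpy sketch (my own) of the update rule above for a single sigmoid neuron, showing where the sigmoid derivative enters:

import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(100, 3)                           # 100 examples, 3 features
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)    # toy binary targets

w, b, lr = np.zeros(3), 0.0, 0.5
for _ in range(1000):
    y_hat = 1.0 / (1.0 + np.exp(-(X.dot(w) + b)))   # sigmoid forward pass
    dC_dyhat = (y_hat - y) / len(y)                 # derivative of the cost w.r.t. y_hat
    dyhat_dz = y_hat * (1.0 - y_hat)                # derivative of the sigmoid
    delta = dC_dyhat * dyhat_dz
    w -= lr * X.T.dot(delta)                        # chain rule: dC/dw = X^T delta
    b -= lr * delta.sum()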
H: Bubbleplot with seaborn Is there some way to create a bubble plot with seaborn? I already know how to do it with Matplotlib, as in this tutorial. # libraries import matplotlib.pyplot as plt import numpy as np # create data x = np.random.rand(40) y = np.random.rand(40) z = np.random.rand(40) # use the scatter function plt.scatter(x, y, s=z*1000, alpha=0.5) plt.show() I would like to replicate this with Seaborn. I am able to plot, but cannot control the size of the markers. import seaborn as sns import numpy as np import pandas as pd %matplotlib inline data=pd.DataFrame({"x":x, "y":y,"z":z}) sns.lmplot(x="x", y="y",data=data, fit_reg=False) AI: This is a duplicate of this question on Stack Overflow. Anyhow, just to recap quickly, you can do it with: scatter_kws={"s": 10} And in your case it is simply: sns.lmplot(x="x", y="y",data=data, fit_reg=False,scatter_kws={"s": z*1000})
H: Keras/Theano custom loss calculation - working with tensors I'm struggling to write some tensor manipulation code for a custom loss function I'm using in Keras. Basically, I'm trying to modify a binary_crossentropy loss by adding a weight that is calculated from a particular feature. First thing I do is pass in my extra feature data to the custom loss by appending it to the y_true like this: y_trainModded = numpy.append(y_train, MyExtraData, axis=1) Which is then passed to the fit function like: model.fit(X_train, y_trainModded, epochs=2500, .....) Then extracted to make it usable like this: def myCustomLoss(data, y_pred): y_true = data[:,:2] MyExtraData = data[:,2] ... ... So far, that all works fine. However, I'm struggling with a section where I want to only select the MyExtraData where I predicted '1'. Intuitively, this would simply be something like: ExtraDataWherePredicted1 = MyExtraData[y_pred > 0] However, we're dealing with tensors, not numpy arrays. I tried casting to numpy arrays using eval(), but that didn't work. I also tried various approaches using keras.backend operations such as: WherePredicted1 = K.greater( y_pred,0) ExtraDataWherePredicted1 = tf.boolean_mask(MyExtraData, WherePredicted1) Which I could then use to weight my loss such as: return K.mean(K.binary_crossentropy(y_pred,y_true), axis=-1)-(K.mean(ExtraDataWherePredicted1)) But anything I try throws out various errors...I just can't figure out how to calculate ExtraDataWherePredicted1. I'm also finding it super hard to debug the loss function because I can't print() anything inside it, so it's very hard to double check whether the arrays/tensors are what I expect them to be. Any help would be appreciated! AI: I think I might have finally just solved this myself. 1) I changed my Keras backend to use TensorFlow instead of Theano, so that I could use: tf.boolean_mask This command was not available under the Theano backend and was thus giving me errors. 2) I had to change my code slightly to work with the correct dimensions. It now reads: WherePredicted1 = K.greater( y_pred[:,1],0.5) ExtraDataWherePredicted1 = tf.boolean_mask(MyExtraData, WherePredicted1) Still finding it hard to debug/test a custom loss function, but it's producing feasible values that might well be correct.
H: XOR problem with neural network, cost function I am having a problem understanding the cost function in a neural network. I have read many books and blog posts, but all of them describe that the point in neural networks is to minimize a cost function (like the sum of squared errors): I tried to look at code for solving a problem with a multi-layer neural network and backpropagation. My question is: where in the code can I find the cost function? How can I plot the error surface? import numpy as np X_XOR = np.array([[0,0,1], [0,1,1], [1,0,1],[1,1,1]]) y_truth = np.array([[0],[1],[1],[0]]) def sigmoid(x): return 1 / (1 + np.exp(-x)) def sigmoid_der(output): return output * (1 - output) np.random.seed(1) syn_0 = 2*np.random.random((3,4)) - 1 syn_1 = 2*np.random.random((4,1)) - 1 for i in range(60000): layer_1 = sigmoid(X_XOR.dot(syn_0)) layer_2 = sigmoid(layer_1.dot(syn_1)) error = 0.5 * ((layer_2 - y_truth) ** 2) layer_2_delta = error * sigmoid_der(layer_2) layer_1_error = layer_2_delta.dot(syn_1.T) layer_1_delta = layer_1_error * sigmoid_der(layer_1) syn_1 -= layer_1.T.dot(layer_2_delta) syn_0 -= X_XOR.T.dot(layer_1_delta) if i % 10000 == 1: print(layer_2) print(layer_2) AI: The cost function can be found in the delta rule, meaning the way you calculate your deltas. This delta is nothing more than the derivative of your error function with respect to the weights: $\frac{\partial E}{\partial w_{ij}}$. So, if you are just interested in where the cost is encoded, this is the answer you are looking for. If you, on the other hand, want to know why this formula works, I suggest you read the derivation on Wikipedia. The maths behind it is quite uncomplicated: you only compute the derivative in each layer and propagate this derivative back through the layers. This is, by the way, how backpropagation got its name.
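To make the cost visible and see how it evolves, here is a condensed, self-contained variant of the loop (my own sketch; note that the delta here uses the derivative of the cost, layer_2 - y, times the sigmoid derivative, rather than the squared error itself, in line with the answer). It records the mean squared error each iteration - a learning curve rather than the full error surface:

import numpy as np
import matplotlib.pyplot as plt

X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
y = np.array([[0],[1],[1],[0]])

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

np.random.seed(1)
syn_0 = 2*np.random.random((3,4)) - 1
syn_1 = 2*np.random.random((4,1)) - 1

costs = []
for i in range(60000):
    layer_1 = sigmoid(X.dot(syn_0))
    layer_2 = sigmoid(layer_1.dot(syn_1))
    costs.append(np.mean(0.5 * (layer_2 - y) ** 2))     # the cost function, made explicit
    layer_2_delta = (layer_2 - y) * layer_2 * (1 - layer_2)
    layer_1_delta = layer_2_delta.dot(syn_1.T) * layer_1 * (1 - layer_1)
    syn_1 -= layer_1.T.dot(layer_2_delta)
    syn_0 -= X.T.dot(layer_1_delta)

plt.plot(costs)
plt.xlabel('iteration')
plt.ylabel('mean squared error cost')
plt.show()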
H: plot show relationship between independent variable and dependent variable (binary) My dependent variable is binary. Most of my independent variables are not. I am at the exploratory stage right now.

Y X1 X2
0 23 0
1 29 1
0 15 1
1 40 0
1 25 1
0 22 1

This is just a portion of my data. I was thinking of a scatter plot to find out the relationship between Y, X1 and X2. What other plots can I do to see the relationship more clearly? AI: First we will load this data into a Pandas DataFrame import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame(data = {'Y': [0,1,0,1,1,0], 'X1':[23,29,15,40,25,22], 'X2':[0,1,1,0,1,1]}) You want to see which variables best describe your output $Y$. First step: plot your features against your output to see how they are distributed. plt.figure(figsize=(14,5)) plt.subplot(1,2,1) plt.scatter(df['X1'], df['Y']) plt.ylabel('Feature Y') plt.xlabel('Feature X1') plt.subplot(1,2,2) plt.scatter(df['X2'], df['Y']) plt.ylabel('Feature Y') plt.xlabel('Feature X2') plt.show() Then you can split your data based on the output and see how separable the feature distributions are. This will give you information regarding the importance of each feature for building a classifier. plt.figure(figsize=(14,5)) plt.subplot(1,2,1) plt.hist(df['X1'][df['Y'] == 0], bins=3, alpha = 0.7, label = 'Y = 0') plt.hist(df['X1'][df['Y'] == 1], bins=3, alpha = 0.7, label = 'Y = 1') plt.ylabel('Distribution') plt.xlabel('Feature X1') plt.legend() plt.subplot(1,2,2) plt.hist(df['X2'][df['Y'] == 0], bins=3, alpha = 0.7, label = 'Y = 0') plt.hist(df['X2'][df['Y'] == 1], bins=3, alpha = 0.7, label = 'Y = 1') plt.ylabel('Distribution') plt.xlabel('Feature X2') plt.legend() plt.show() From this point we can see that feature X2 alone is useless in classifying Y. However, when we consider the effect of both X1 and X2 on the output, we can see that X2 can help us make a better prediction of Y. We can see a linear separator would do the trick for this data. plt.scatter(df['X1'], df['X2'], c = df['Y'], cmap = 'autumn') plt.ylabel('Feature X2') plt.xlabel('Feature X1') plt.colorbar() plt.show()
H: Knn and euclidean distance I'm studying the kNN classification algorithm. Why can the Euclidean distance be considered a nice measure of affinity between examples? In one dimension (1 attribute) this seems correct, but if I add dimensions, can the Euclidean distance still be considered a good measure of affinity? Why? AI: You can think of examples as vectors in $\mathbb{R}^p$, where $p$ is the number of features. Two examples will be very similar if the distance between them is close to $0$ (in the extreme case, if two examples are equal their Euclidean distance is $0$). One way to measure the distance is using Euclidean distance, but other distances can be used, such as cosine distance or $L^p$ metrics. In fact, if $p$ is very high, then Euclidean distance is not a good measure, as it tends to make the distances too uniform (see this paper). Edit: when $p$ (the number of dimensions) is very high, see this magnificent answer on the issues that a very high $p$ may cause.
H: How to estimate the variance of regressors in scikit-learn? Every classifier in scikit-learn has a method predict_proba(x) that predicts class probabilities for x. How to do the same thing for regressors? The only regressor for which I know how to estimate the variance of the predictions is Gaussian process regression, for which I can do the following: y_pred, sigma = gp.predict(x, return_std=True) In one dimension, I can even plot how confident the Gaussian process regressor is about its prediction at different data points. How to estimate the variance of predictions for other regressors? For example, for kernel ridge regression, multi-layer perceptrons, ensemble regressors? AI: I believe it is the probabilistic nature of a model that allows you to get the variance of predictions, or more generally the uncertainty of predictions, like the Gaussian process you mentioned. This is not simply available in standard regressors. I think you should be looking at probabilistic regressors like BayesianRidge if you would like to estimate the uncertainty of your model. An implementation is also available in scikit-learn; also see this nice python package based on PyMC3, or PyMC3 itself directly, for instance. In the latter there are examples, like the one for Bayesian regression in a Jupyter Notebook, with a good explanation. In principle, Bayesian models do not return a single estimate for the model parameters, but a distribution that makes it possible to make inferences about new observations as well as to examine our uncertainty in the model. You may find this post useful. Note: by adding a normal prior on the weights, as is done in Bayesian regression, one turns the least-squares problem into L2-regularized regression under the hood as well (see the full mathematical derivation here). Updated Answer: I totally forgot the classical yet simple and powerful bootstrap sampling method to calculate confidence intervals for machine learning algorithms. A textbook definition says: Bootstrapping is a nonparametric approach to statistical inference that substitutes computation for more traditional distributional assumptions and asymptotic results. A number of advantages: The bootstrap is quite general, although there are some cases in which it fails. Because it does not require distributional assumptions (such as normally distributed errors), the bootstrap can provide more accurate inferences when the data are not well behaved or when the sample size is small. It is possible to apply the bootstrap to statistics with sampling distributions that are difficult to derive, even asymptotically. It is relatively simple to apply the bootstrap to complex data-collection plans (such as stratified and clustered samples). Reference: Fox, John. Applied Regression Analysis and Generalized Linear Models. Sage Publications, 2015. Please note you do not need a model with a probabilistic nature. See this post, or this answer or this one.
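A minimal sketch (my own, on synthetic data) of the bootstrap idea for an otherwise non-probabilistic regressor such as kernel ridge: refit on resampled training sets and use the spread of the predictions as an uncertainty estimate.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X).ravel() + 0.2 * rng.randn(100)
X_new = np.linspace(-3, 3, 50).reshape(-1, 1)

preds = []
for _ in range(200):                                # 200 bootstrap resamples
    idx = rng.randint(0, len(X), len(X))            # sample rows with replacement
    model = KernelRidge(kernel='rbf', alpha=0.1).fit(X[idx], y[idx])
    preds.append(model.predict(X_new))

preds = np.array(preds)
mean, std = preds.mean(axis=0), preds.std(axis=0)          # point estimate and its spread
lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)   # rough 95% interval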
H: How do we predict what is in an image using unsupervised deep neural networks? From my understanding of unsupervised DNNs for image classification: The input layer is a 4,096-dimensional vector (for 64 x 64 images). The hidden layers represent much lower-level "features" as identified by backpropagation. As the model is generative, the output layer is also a 64 x 64 image. Therefore, how can we make a prediction that a new unseen image contains a specific image class (e.g. cat) if we lack labelled data? AI: After a lot of reading, I think I now understand. We really need to build 2 models. Model 1 Unsupervised Lots of unlabelled images Used to 'learn features' (i.e. better than those we have designed manually through years of research, e.g. edge detection, colour features, etc.). Model 2 Supervised Few labelled images Use model 1 as the 'feature extractor', i.e. pass a training image through model 1 and use the output layer as the feature vector. Use the same approach for test images, e.g. use model 1 to extract features, then use the second model to output label predictions.
H: Adding new variable to model Let's say I already have a logistic regression model (or other) with N explanatory variables that is 70% accurate. Now, if there are other variables available, how would I test whether the new variables would improve my accuracy without building a new model? AI: I do not think you can estimate the effect of a variable without adding it to the model. This is because the effect of a variable on the model's discriminatory power depends on (1) the strength of association between the outcome variable and the new variable, and (2) whether the new variable is collinear with some of the old variables. You could in principle estimate both the strength of association and the collinearity, but it probably is bad practice and would result in overfitting. Also, in general I think it is best not to use accuracy to evaluate a logistic regression (see ref) but rather a proper scoring rule like the Brier score. Further, when comparing two nested models (i.e. where one model contains a subset of the variables of the other model), I believe that best practice is to compare the AIC or BIC, or perform a likelihood ratio test.
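A short sketch (my own, on synthetic data, using statsmodels) of comparing two nested logistic models with AIC and a likelihood-ratio test, as suggested above:

import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.RandomState(0)
X_old = rng.randn(500, 3)                     # existing explanatory variables
x_new = rng.randn(500)                        # candidate new variable
y = (X_old[:, 0] + 0.5 * x_new + rng.randn(500) > 0).astype(int)

m_small = sm.Logit(y, sm.add_constant(X_old)).fit(disp=0)
m_big = sm.Logit(y, sm.add_constant(np.column_stack([X_old, x_new]))).fit(disp=0)

print("AIC without / with the new variable:", m_small.aic, m_big.aic)
lr_stat = 2 * (m_big.llf - m_small.llf)       # likelihood-ratio statistic
print("LR test p-value:", stats.chi2.sf(lr_stat, 1))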
H: Sklearn PCA with zero components example I'm simply trying to repeat a benchmark from the sklearn docs. The unclear part is: n_components = np.arange(0, n_features, 5). They are applying a PCA transform with 0 components! Can somebody please explain what the mathematical meaning of this transform is? AI: Think of it this way: a PCA "transform" with $k$ components essentially approximates your $n$-dimensional data points by projecting them onto a $k$-dimensional linear subspace, trying not to lose too much data variance along the way. More precisely, what you are doing is representing your original points $y \in R^n$ as: $$y \approx \mu + Vx$$ where $x \in R^k$ are the lower-dimensional "new coordinates", $\mu \in R^n$ is the mean of your data and $V\in R^{n \times k}$ is the matrix of principal component vectors. The "new coordinates" $x$ tell you how many steps you need to make along the $k$ principal components in order to reach the best possible linear approximation to $y$ by starting your trip from $\mu$. Now, if $k=0$ the model becomes: $$ y \approx \mu. $$ In other words, you are modeling all of your data as a single, fixed center point. Of course, you do not need to store any "new coordinates" here (because you do not need to move away from the mean), hence it does not make much sense as a "transform", but it is, nonetheless, a proper probabilistic model (a maximum-likelihood fit for a Gaussian distribution of errors, to be precise). In particular, you can speak about the log-likelihood of data under this model (which is, up to an affine transform, equal to the sum of squared errors here, but is not as trivial as you might think in the general case) and we can compare various models, choosing the one with the best likelihood. This is exactly what is done in the example from the docs you mention in the question.
H: Why don't convolutional computer vision networks use horizontally-symmetric filters? If, for example, I have a neural network for classifying dog breeds, and I feed it an image of some dog, inherently it shouldn't matter whether I feed it the original image or the image mirrored horizontally. I'd like to implement this symmetry in the network, and by my understanding of CNNs that means I'll generally need all the filters to be horizontally symmetric, and then the network would both be more robust and would take almost 40% less time to train assuming filter size = 5. But, by my research, modern networks don't use filters like that. Well... why? What disadvantages does this architecture have, such that I can't find a single mention of this idea? AI: I'd like to implement this symmetry in the network, and by my understanding of CNNs that means I'll generally need all the filters to be horizontally symmetric, and then the network would both be more robust and would take almost 40% less time to train assuming filter size = 5. If two mirrored images are in the same class - e.g. they both show a dog or a cat - that is not the same as having all the components in the image (lines, textures, shapes) respond well to symmetric filters. In general this is not the case. Even for symmetric-looking shapes such as faces, it is only true at a certain scale and specific pose. What disadvantages does this architecture have, such that I can't find a single mention of this idea? It will work poorly, because the lines, textures and shapes being detected as components of the image are rarely horizontally symmetrical. CNNs cannot easily take advantage of mirroring or rotational invariance - where the class or detection does not vary under mirror or rotation transformations. One thing you can do to improve generalisation for transformations is data augmentation, i.e. performing the non-class-changing transformations of your training images, perhaps randomly on demand during the training process. This does work, but has the opposite impact to what you were hoping for in terms of efficiency. The recently-published CapsNet by Geoffrey Hinton's group may be able to take advantage of more types of transformation invariance. However, this is still in the early stages of research, and it is not clear whether it offers a practical advantage in your case. There are implementations in various frameworks such as Keras that you could try if you are interested.
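A minimal sketch (my own, assuming x_train, y_train and a compiled Keras model already exist) of the data-augmentation suggestion using random horizontal flips:

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(horizontal_flip=True)   # random left-right mirroring

# Assuming x_train, y_train and a compiled `model` already exist:
# model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
#                     steps_per_epoch=len(x_train) // 32,
#                     epochs=10)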
H: Grouping company information I have 3 different datasets with company information; in all of them I have the company name, but it is not perfect. For example: Dataset A: Company name: Facebook Dataset B: Company name: Facebook, Inc Dataset C: Company name: facebook Some other signals like the company URL exist, but in terms of name matching I am wondering if text similarity is a good approach for this grouping problem. AI: It seems like you first want to get the datasets together using a shared key, which in this case is the company name. That does not sound like a modelling problem but a data wrangling problem. Assuming that each dataset is in csv, get them into pandas dataframes: df_a = pd.read_csv('data_set_a.csv') Then, assuming you have the datasets in df_a, df_b, df_c and each has a column 'name': df_a.name = df_a.name.str.lower() df_a.name = [i.split()[0].strip(',.') for i in df_a.name] (the strip removes trailing punctuation, so 'Facebook, Inc' becomes 'facebook' rather than 'facebook,'). Now you will have a dataset where all three are 'facebook'. Then the three dataframes can be merged into one: df = pd.merge(df_a, df_b, left_on='name', right_on='name') df = pd.merge(df, df_c, left_on='name', right_on='name') And now you have the three different datasets merged and ready for analysis.
H: Can AlphaGo Zero adapt to opponents' skills/profile? I read the AlphaGo Zero paper and I didn't find anything about it in there. But I would like to know if AlphaGo Zero can adapt to the way the opponent plays (opponent profile) or something like this. Thanks!! AI: But I would like to know if AlphaGo Zero can adapt to the way the opponent plays (opponent profile) or something like this. That is not included in the algorithm as written, where the "profile" of the opponent is effectively AlphaGo Zero itself (learned through self play). It is not clear whether adapting play style to a given opponent would offer any advantage. It would be difficult to assess because AlphaGo Zero is such a strong player that it will win a large percentage of games against human players as-is. Seeking and measuring any improvement, except versus earlier versions of itself, would be quite hard. However, there are likely a few places in the code where the learned play style of an opponent could in theory allow AlphaGo Zero to be more efficient. The most obvious is in the "rollout" policy (I'm not 100% sure if they use the same term), where the algorithm simulates and samples different possible trajectories through the game in order to predict likely outcomes. The current rollout policy in AlphaGo is learned through self play. But it is just a neural network that predicts the probability of making plays given a board state. It could easily be adjusted in a supervised learning fashion, based on sampled plays from an opponent. If it could be learned accurately, then it should make searches more efficient and accurate - the impossible but ideal situation being that it predicted the opponent's moves exactly and thus could quickly find the ultimate counter to their actions. In fact the original AlphaGo rollout policy did model human play in this way. It was based on a large database of human master-level moves, not a single player. The DeepMind team did suggest in their paper that this gave better results at the time than a self-play policy - they tried both and the human database was better. Since then, AlphaGo Zero has surpassed the performance of the original AlphaGo without the database of human moves.
H: Using machine learning to optimize parameter scores I have a dataset containing fraudulent and non-fraudulent data. The system in place is a rule-based engine with over 20 rules. If the total score is above a certain threshold, the payment is classified as fraudulent. What would be an effective way of using machine learning to optimize the scores assigned to the different rules (fraud rules)? Thanks AI: This is a classification problem. You have 20 features (i.e., rules) and the output is binary (i.e., fraud or no fraud). It is not clear what the rule features are. It sounds like they might be binary (e.g., pass or fail). Do you have training data on which items are actually fraud and which are not? If so, you can train a classification model. For example, you could perform logistic regression to predict the probability of fraud with your rules as inputs. The coefficients assigned to each rule could be interpreted as the scores or weights for each rule. If the predicted probability of fraud is above some probability threshold, you would classify the payment as fraud. If you do not have training data you can attempt to cluster similar rule results into two clusters. EDIT: If you can define some loss function (e.g., accuracy of predictions) then you can set up an optimization problem to find the best coefficients (i.e., those that minimize the error metric). This is just an optimization problem and suitable solvers are available depending on your coding language. However, your method of weighted rules will likely not perform as well as established classification methods.
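A toy sketch (my own, with entirely synthetic rule outcomes, weights and labels) of the logistic-regression approach suggested above: fit on binary rule outcomes and read the learned coefficients as optimized rule scores.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
rules = rng.randint(0, 2, size=(5000, 20))                      # 20 binary rule outcomes per payment
true_w = rng.uniform(-1, 3, size=20)                            # hidden "true" rule weights (made up)
fraud = (rules.dot(true_w) + rng.randn(5000) > 10).astype(int)  # labelled fraud / not fraud

clf = LogisticRegression()
clf.fit(rules, fraud)
scores = clf.coef_[0]                                           # one learned score per rule
flagged = clf.predict_proba(rules)[:, 1] > 0.5                  # classify as fraud above a threshold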
H: Moving from macbook (without GPU) to linux system with Titan V, only getting a 4x speedup, what am I doing wrong? I was prototyping a network architecture out on the macbook, and after finding something I was somewhat happy with, I wanted to test it out on a big data set on a system with a Titan V as the macbook was very slow for the bigger dataset (12 hours). I was expecting at least a 50x speed up per epoch over the CPU, if not more. Why might the speedup be only 4x (3 hours)? Err it turned out to be some very tricky driver issues, and by just cleaning out my vm and starting over, was able to fix it. This line of code in particular was invaluable - some of the other checks on "is tensorflow using the gpu" I found hovering around the web were not adequate to solving this: import tensorflow as tf with tf.device('/gpu:0'): a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a') b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b') c = tf.matmul(a, b) with tf.Session() as sess: print (sess.run(c)) AI: # Copyright 2015 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """A script for testing that TensorFlow is installed correctly on Windows. The script will attempt to verify your TensorFlow installation, and print suggestions for how to fix your installation. """ import ctypes import imp import sys def main(): try: import tensorflow as tf print("TensorFlow successfully installed.") if tf.test.is_built_with_cuda(): print("The installed version of TensorFlow includes GPU support.") else: print("The installed version of TensorFlow does not include GPU support.") sys.exit(0) except ImportError: print("ERROR: Failed to import the TensorFlow module.") candidate_explanation = False python_version = sys.version_info.major, sys.version_info.minor print("\n- Python version is %d.%d." % python_version) if not (python_version == (3, 5) or python_version == (3, 6)): candidate_explanation = True print("- The official distribution of TensorFlow for Windows requires " "Python version 3.5 or 3.6.") try: _, pathname, _ = imp.find_module("tensorflow") print("\n- TensorFlow is installed at: %s" % pathname) except ImportError: candidate_explanation = False print(""" - No module named TensorFlow is installed in this Python environment. You may install it using the command `pip install tensorflow`.""") try: msvcp140 = ctypes.WinDLL("msvcp140.dll") except OSError: candidate_explanation = True print(""" - Could not load 'msvcp140.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. You may install this DLL by downloading Microsoft Visual C++ 2015 Redistributable Update 3 from this URL: https://www.microsoft.com/en-us/download/details.aspx?id=53587""") try: cudart64_80 = ctypes.WinDLL("cudart64_80.dll") except OSError: candidate_explanation = True print(""" - Could not load 'cudart64_80.dll'. 
The GPU version of TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Download and install CUDA 8.0 from this URL: https://developer.nvidia.com/cuda-toolkit""") try: nvcuda = ctypes.WinDLL("nvcuda.dll") except OSError: candidate_explanation = True print(""" - Could not load 'nvcuda.dll'. The GPU version of TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Typically it is installed in 'C:\Windows\System32'. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed.""") cudnn5_found = False try: cudnn5 = ctypes.WinDLL("cudnn64_5.dll") cudnn5_found = True except OSError: candidate_explanation = True print(""" - Could not load 'cudnn64_5.dll'. The GPU version of TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable. Note that installing cuDNN is a separate step from installing CUDA, and it is often found in a different directory from the CUDA DLLs. You may install the necessary DLL by downloading cuDNN 5.1 from this URL: https://developer.nvidia.com/cudnn""") cudnn6_found = False try: cudnn = ctypes.WinDLL("cudnn64_6.dll") cudnn6_found = True except OSError: candidate_explanation = True if not cudnn5_found or not cudnn6_found: print() if not cudnn5_found and not cudnn6_found: print("- Could not find cuDNN.") elif not cudnn5_found: print("- Could not find cuDNN 5.1.") else: print("- Could not find cuDNN 6.") print(""" The GPU version of TensorFlow requires that the correct cuDNN DLL be installed in a directory that is named in your %PATH% environment variable. Note that installing cuDNN is a separate step from installing CUDA, and it is often found in a different directory from the CUDA DLLs. The correct version of cuDNN depends on your version of TensorFlow: * TensorFlow 1.2.1 or earlier requires cuDNN 5.1. ('cudnn64_5.dll') * TensorFlow 1.3 or later requires cuDNN 6. ('cudnn64_6.dll') You may install the necessary DLL by downloading cuDNN from this URL: https://developer.nvidia.com/cudnn""") if not candidate_explanation: print(""" - All required DLLs appear to be present. Please open an issue on the TensorFlow GitHub page: https://github.com/tensorflow/tensorflow/issues""") sys.exit(-1) if __name__ == "__main__": main() Copy paste, run this code. Will tell you if you have tensorflow GPU support. For example, on my MacBook Pro it says TensorFlow successfully installed. The installed version of TensorFlow does not include GPU support.
H: How do I interpret this correlation? Does this mean that as long as the student has a good GPA and a good GRE score he will get admitted to a college, even though his alma mater's prestige is low? Are there any additional things I can interpret from the matrix below? AI: No. From this correlation matrix you cannot draw the conclusion that a student with a good GPA and a good GRE score will get admitted to a college even if his alma mater's prestige is low. The reason is that correlation is a measure of association between single pairs of variables. The conclusion you draw above, on the contrary, is based on a combination of three different variables plus the outcome variable. If you want to estimate the probability that a student will be admitted to college based on her GPA, GRE and prestige, the right way is to fit a logistic regression model. Here's an example in R (provided that admit is a binary variable, with admit=1 indicating that the student is admitted and admit=0 that she is not admitted):
model <- glm(admit ~ ., family = binomial(link = 'logit'), data = data)
With this fitted model you can then compute the probability that a student is admitted given her particular combination of GPA, GRE and prestige.
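If you prefer Python, here is a minimal scikit-learn sketch of the same idea (assuming the data sits in a pandas DataFrame df with columns gre, gpa, prestige and admit):
from sklearn.linear_model import LogisticRegression

X = df[['gre', 'gpa', 'prestige']]
y = df['admit']

model = LogisticRegression()
model.fit(X, y)

# predict_proba returns [P(admit=0), P(admit=1)] for each student
print(model.predict_proba(X[:5]))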
H: Simple prediction with Keras I want to make simple predictions with Keras and I'm not really sure if I am doing it right. My data looks like this:
col1,col2
1.68,237537
1.69,240104
1.70,244885
1.71,246196
1.72,246527
1.73,254588
1.74,255112
1.75,259035
1.76,267229
1.77,267314
1.78,268931
1.79,273497
1.80,273900
1.81,277132
1.82,278066
Now, I want to predict col2 by col1 and this is how I'm doing it:
df = pandas.read_csv('data.csv', usecols=[0, 1], header=None)
X = df.iloc[:, :-1].values.astype(np.float64)
y = df.iloc[:, -1:].values.astype(np.float64)
scalarX, scalarY = MinMaxScaler(), MinMaxScaler()
scalarX.fit(X)
scalarY.fit(y.reshape(len(y),1))
X = scalarX.transform(X)
y = scalarY.transform(y.reshape(len(y),1))
model = Sequential()
model.add(Dense(4, input_dim=1, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(x=X, y=y, epochs=3, verbose=1)
for num in range(1, 21):
    Xnew = np.array([[float(Decimal('2.{}'.format(num)))]])
    ynew = model.predict(Xnew)
    print("X=%s, Predicted=%s" % (Xnew[0], ynew[0]))
AI: What you are trying to do here is forecast the future values of a time series. This is a predictive problem and the future values will depend on a number of latent factors. I will assume all we have access to is historical data from the series, as your question indicates. If you want to predict a future value for the time series, you should not only use the current value as an input, but rather a chunk of the historical data. Since you have 18,000,000 instances, which is a lot, you can make your network quite complex in order to capture some latent trends hidden inside your data which can help predict the future value. To predict a value at time $t$ we will use the $k$ previous values. This hyper-parameter needs to be tuned carefully. Restructure the data We will structure the data such that the features $X$ are the $k$ previous time measurements, and the output target $Y$ is the current time measurement, the one that is being estimated by the model.
k = 3
X, Y = [], []
# each feature row holds k consecutive past values, the target is the value that follows
for i in range(len(col2) - k):
    X.append(col2[i:i+k])
    Y.append(col2[i+k])
X = np.asarray(X)
Y = np.asarray(Y)
Split your data
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.33)
Using the data in a Keras model This is a simple Keras model which should work as a first iteration step. However, due to the small amount of data you provided us I cannot get any meaningful results after training.
import keras
from keras.models import Sequential
from keras.layers import Dense

x_train = x_train.reshape(len(x_train), k)
x_test = x_test.reshape(len(x_test), k)
input_shape = (k,)

model = Sequential()
model.add(Dense(32, activation='tanh', input_shape=input_shape))
model.add(Dense(32, activation='tanh'))
model.add(Dense(1, activation='linear'))

# mean squared error is the natural loss for a regression target
model.compile(loss=keras.losses.mean_squared_error,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['mae'])
model.summary()

epochs = 10
batch_size = 128

# Fit the model weights.
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
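Once the model above is trained, forecasting the next point is just a forward pass on the last k observations (a sketch reusing the variable names from the answer):
# take the k most recent measurements and predict the value that follows them
last_window = np.asarray(col2[-k:], dtype=np.float64).reshape(1, k)
next_value = model.predict(last_window)
print(next_value)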
H: TypeError: float() argument must be a string or a number, not 'function' I am trying to clean the data, but I don't know how to remove a function from a column in a data frame. At row number 473 it shows that column N contains a function. How should it be filtered out? AI: A generic way to convert a pandas column to a numeric dtype:
df['col_name'] = pd.to_numeric(df['col_name'], errors='coerce')
This will replace all values that couldn't be converted to numeric values with NaNs (Not a Number). PS: you may want to analyze how and why you have a reference to a lambda function in your cells. Fixing the source of that problem would be a more appropriate solution.
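For example, a hypothetical column 'N' could be cleaned like this (the rows that could not be converted are dropped, but you can inspect them first):
import pandas as pd

df['N'] = pd.to_numeric(df['N'], errors='coerce')
bad_rows = df[df['N'].isnull()]   # the rows that held the function reference
df = df.dropna(subset=['N'])      # remove them once you have checked them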
H: train_test_split : stratify can not be recognized? I'm trying to set stratify = True, but somehow jupyter notebook says 'name 'y' is not defined' X_train, X_test, y_train, y_test = train_test_split( loan.drop('Loan_Status', axis=1), loan['Loan_Status'], test_size=0.2, random_state=0, stratify=y) Can anyone tell me what is the proper way to do it? I'm using Scikit-learn v0.19.1 and have tried to set stratify = True / y / 2 but none of them worked. AI: You need to pass an array containing the class-labels (or whatever the criterion for stratifying is) as an argument to stratify. In your case, the answer is probably loan['Loan_Status'].values.
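Putting the answer together, the corrected call would look roughly like this:
X_train, X_test, y_train, y_test = train_test_split(
    loan.drop('Loan_Status', axis=1),
    loan['Loan_Status'],
    test_size=0.2,
    random_state=0,
    stratify=loan['Loan_Status'].values)  # stratify on the class labels themselves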
H: How to scale prediction back after preprocessing So I'm a newbie to machine learning and am currently using the iris data set. I ran through a quick online tutorial about predicting stock prices and thought I'd try and do the iris one myself. The issue I'm having is that I'm using preprocessing to scale the data to train my classifier. However when I make a prediction, the answer is also scaled. When I comment out all the preprocessing, I get accurate results. Is there a way to scale the prediction back? The outputs are rounded to 0, 1 or 2 with each number representing one of three species. You can see my code below: import pandas as pd import numpy as np from sklearn import preprocessing, model_selection from sklearn.linear_model import LinearRegression df = pd.read_csv("iris.csv") # setosa - 0 # versicolor - 1 # virginica - 2 df = df.replace("setosa", 0) df = df.replace("versicolor", 1) df = df.replace("virginica", 2) X = np.array(df.drop(['species'], 1)) y = np.array(df['species']) # Scale features # X = preprocessing.scale(X) X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2) clf = LinearRegression(n_jobs=1) # Linear regression clf clf.fit(X_train, y_train) confidence = clf.score(X_test, y_test) print("Confidence: " + confidence) # Inputs sepal_length = float(input("Enter sepal length: ")) sepal_width = float(input("Enter sepal width: ")) petal_length = float(input("Enter petal length: ")) petal_width = float(input("Enter petal width: ")) # Create panda data frame with inputted data index = [0] d = {'sepal_length': sepal_length, 'sepal_width': sepal_width, 'petal_length': petal_length, 'petal_width': petal_width} predict_df = pd.DataFrame(data=d, index=index) # Create np array of features predict_X = np.array(predict_df) # Need to scale new X feature values # predict_X = preprocessing.scale(predict_X, axis=1) # Make a prediction against prediction features prediction = clf.predict(predict_X) print(predict_X, prediction) rounded_prediction = int(round(prediction[0])) if rounded_prediction == 0: print("== Predicted as Setosa ==") elif rounded_prediction == 1: print("== Predicted as Versicolor ==") elif rounded_prediction == 2: print("== Predicted as Virginica ==") else: print("== Unable to make a prediction ==") Here is an example of my output with preprocessing enabled. I'll be using one of the lines from the CSV as an example (6.4 sepal length, 3.2 sepal width, 4.5 petal length and 1.5 petal width) which should equal the versicolor species (1): Confidence: 0.9449475378336242 Enter sepal length: 6.4 Enter sepal width: 3.2 Enter petal length: 4.5 Enter petal width: 1.5 [[ 1.39427847 -0.39039797 0.33462683 -1.33850733]] [0.41069281] == Predicted as Setosa == Now with preprocessing commented out: Confidence: 0.9132522144785978 Enter sepal length: 6.4 Enter sepal width: 3.2 Enter petal length: 4.5 Enter petal width: 1.5 [[6.4 3.2 4.5 1.5]] [1.29119283] == Predicted as Versicolor == It seems I'm either doing the preprocessing wrong, or there's an extra step that I've missed out. I'm sorry if I get some of the terminology wrong and thanks in advance for answering. AI: I think your methodology is correct, but this line: # Scale features # X = preprocessing.scale(X) should be changed to: # Scale features # X = preprocessing.scale(X, axis = 1) As the default for scale is to set axis to 0 (I wonder why!). If the problem persists comment it and I will edit. Edit Although your methodology is not wrong, it is more suitable to use sklearn StandardScaler. 
See the documentation of this class. Usually, it is better to fit the scaler with the training data and transform the test data according to that fit.
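A minimal sketch of that fit-on-train / transform-everything-else pattern, reusing the variable names from the question:
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)   # learn mean/std from the training data only
X_test_scaled = scaler.transform(X_test)         # reuse the same statistics

clf.fit(X_train_scaled, y_train)
confidence = clf.score(X_test_scaled, y_test)

# scale the user-entered measurements with the *same* fitted scaler before predicting
prediction = clf.predict(scaler.transform(predict_X))
The key point is that the scaler is fitted once on the training data and then only applied (never re-fitted) to the test data and new inputs.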
H: MLPRegressor Output Range I am using Scikit's MLPRegressor for a timeseries prediction task. My data is scaled between 0 and 1 using the MinMaxScaler and my model is initialized using the following parameters:
MLPRegressor(solver='lbfgs', hidden_layer_sizes=50, max_iter=10000, shuffle=False, random_state=9876, activation='relu')
I am expecting output between 0 and 1 but getting values outside the bound (both negative values as well as > 1). Non-normalized data has the same problem, I get predictions out of range! Any idea where I could be wrong? UPDATE: Based on the answers below I played a bit with modifying the output activation layers and got some interesting results that I thought worth sharing. There are three scenarios, hope the captions convey the message clearly: Legend: Black solid line = Training Epoch Red solid line = Test Epoch Cyan dashed line = Network prediction over the entire data set Output when the network is trained using the 'relu' activation layer but out_activation_ set to 'logistic' Output when the network is trained using the 'relu' activation layer and out_activation_ set explicitly to 'relu' Output when the network is trained using the 'relu' activation layer and out_activation_ is left alone AI: The default output activation of the Scikit-Learn MLPRegressor is 'identity', which does nothing to the values it receives. As was mentioned by @David Masip in his answer, changing the final activation would give you the bounded output you are after. Doing so in frameworks such as PyTorch, Keras and TensorFlow is fairly straightforward. Doing it in your code with the MLPRegressor means using an object attribute that isn't a standard parameter, namely out_activation_. Here are the built-in options that I can see in the documentation: activation : {‘identity’, ‘logistic’, ‘tanh’, ‘relu’}, default ‘relu’ Activation function for the hidden layer. ‘identity’, no-op activation, useful to implement linear bottleneck, returns f(x) = x ‘logistic’, the logistic sigmoid function, returns f(x) = 1 / (1 + exp(-x)). ‘tanh’, the hyperbolic tan function, returns f(x) = tanh(x). ‘relu’, the rectified linear unit function, returns f(x) = max(0, x) Setting its value to 'logistic' gives you the property you would like, values between 0 and 1. EDIT After comments and update from OP: in their case, using logistic (sigmoid) as the final activation negatively affected results. So perhaps it is worth trying out all possible activation functions to investigate which activation best suits the model and data. One further remark: at least within the context of deep learning, it is common practice not to use an activation at the final output of a neural network - for some thoughts around that discussion, see this thread. That being said, below is a simple working example of a model that doesn't set it, and one that does. I use random numbers to make it work, but the take-away is that the predicted values for the altered model are always within the range from 0 to 1. Try changing the random seed and re-running the script.
import pandas as pd import numpy as np from sklearn.neural_network import MLPRegressor # To see an example where output falls outside of the range of y np.random.seed(1) # Create the default NN as you did nn = MLPRegressor( solver='lbfgs', hidden_layer_sizes=50, max_iter=10000, shuffle=False, random_state=9876, activation='relu') # Generate some fake data num_train_samples = 50 num_test_samples = 50 num_vars = 2 X = np.random.random((num_train_samples, num_vars)) * \ 100 # random numbers between 0 and 100 y = np.random.uniform(0, 1, (num_train_samples, 1)) # uniform numbers between 0 and 1 X_test = np.random.random((num_test_samples, num_vars)) * 100 y_test = np.random.uniform(0, 1, (num_test_samples, 1)) # Fit the network nn.fit(X, y) print('*** Before scaling the output via final activation:\n') # Now see that the output activation is (by default) simply linear i.e. 'identity' print('Output activation by default: {}'.format(nn.out_activation_)) predictions = nn.predict(X_test) print('Prediction mean: {:.2f}'.format(predictions.mean())) print('Prediction max: {:.2f}'.format(predictions.max())) print('Prediction min: {:.2f}'.format(predictions.min())) print('\n*** After scaling the output via final activation:\n') # Need to recreate the NN nn_logistic = MLPRegressor( solver='lbfgs', hidden_layer_sizes=50, max_iter=10000, shuffle=False, random_state=9876, activation='relu') # Fit the new network nn_logistic.fit(X, y) # --------------- # # Crucial step! # # --------------- # # before making predictions = alter the attribute: "output_activation_" nn_logistic.out_activation_ = 'logistic' print('New output activation: {}'.format(nn_logistic.out_activation_)) new_predictions = nn_logistic.predict(X_test) print('Prediction mean: {:.2f}'.format(new_predictions.mean())) print('Prediction max: {:.2f}'.format(new_predictions.max())) print('Prediction min: {:.2f}'.format(new_predictions.min())) Tested using Python 3.5.2.
H: Clustering a labeled data set I have a large labeled dataset with 29 classes. Is it possible to use a clustering algorithm (like k-means) on this dataset, or is it not possible since clustering algorithms are unsupervised? AI: You can do many things: Forget about the labels: just use the features that are not labels and cluster along those features using the k-means algorithm (or another); a sketch of this option is given below. Forget about the features: this is the simplest (and least informative) way of clustering. Cluster the data into 29 clusters according to the labels that they have. If you want fewer clusters, you can compute the centroids of the classes and use them to join clusters of different labels. Use everything: create a categorical variable referring to the class that every example belongs to. Then, with this new variable and all the features, perform a classical clustering algorithm. The way to proceed depends on whether you want to use the labels or not, and how much importance you want them to have.
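For the first option, a minimal scikit-learn sketch (assuming the data is in a pandas DataFrame df with the class stored in a hypothetical 'label' column):
from sklearn.cluster import KMeans

X = df.drop('label', axis=1)                       # ignore the labels, keep only the features
kmeans = KMeans(n_clusters=29, random_state=0).fit(X)
df['cluster'] = kmeans.labels_

# afterwards you can compare the clusters with the known classes, e.g.
# pd.crosstab(df['label'], df['cluster'])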
H: parquet format: advice on log content I'm using a Python script to log the IO of a grid job. The log is formatted like this:
timestamp;fullpath;event;size
1526994189.49;/tmp/folder/;IN_ISDIR;6
1526994189.49;/tmp/folder2/File;IN_ACCESS;36
Those files are millions of lines long. I'm using Spark to generate graphs and detect anomalies in job IO. But before doing that I need to add the job ID and the job name as columns, making the file:
timestamp;fullpath;event;size;jobid;jobname
1526994189.49;/tmp/folder/;IN_ISDIR;6;123456;afakejobname
1526994189.49;/tmp/folder2/File;IN_ACCESS;36;123456;afakejobname
The thing is I'm new to big data technologies and I would like to know, when using the parquet format, whether it is better to store both jobname and jobid, or - knowing that I have only 15 distinct jobname/jobid pairs in a given log - whether it is better to store only the jobid and add the jobname on the fly with SparkSQL by joining against a very small jobname;jobid table. AI: Your second solution sounds great, and to use it in the most effective way, look at broadcast variables (broadcast joins) in Spark. That way you're using the features available in Spark to optimize it. Reference: https://jaceklaskowski.gitbooks.io/mastering-apache-spark/content/spark-broadcast.html
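A minimal PySpark sketch of that approach (logs_df and jobs_df are assumed names for the parsed log and the tiny jobid-to-jobname table):
from pyspark.sql.functions import broadcast

# jobs_df has only ~15 rows, so it is broadcast to every executor
enriched = logs_df.join(broadcast(jobs_df), on='jobid', how='left')
enriched.write.parquet('/path/to/output')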
H: GANs and grayscale imagery colorization I am currently studying colorization of grayscale satellite imagery as part of my Master's internship. After looking for various machine learning techniques, I quickly decided to go for deep learning, as the colorization process can be completely automated and gives good results. I trained a wide variety of models and architectures (CNNs, Pix2Pix, DCGANs and WGANs with UNet or residual architectures, etc.) and settled with convolutional cGANs (condition: grayscal image ; output : colored image), but end up facing the same problem every time. Indeed, when training my network, the output is quite correct but always gives me grayish roofs. I think it has something to do with the distribution of pixels values, as roofs are usually orange or black in my geographic area. Many articles state the fact that GANs are prone to mode collapse. In this case, there are two modes (orange and black) which probably results in a bad local equilibrium. I tried many different techniques in order to improve training and/or avoid mode collapse : L1 loss regularization for the generator Label smoothing Noise taken from a normal distribution and concatenated to every layer in the generator/discriminator Dropout for some layers in the generator Gradient penalty Learning procedure inspired by DRAGANs and BEGANs So far, gradient penalty has given me the best results, despite still having these grayish roofs. I know there are other ways of improving the training procedure (e.g. batch discrimination), but I do not have the pre-requisites in mathematics or computer science nor the time for implementing such techniques. I was wondering if someone had some ideas for getting better results, or if someone also tried colorizing satellite imagery and ended up having such issues. Thanks for reading this long message. Have a nice day ! AI: Colorfulness, specifically managing overly gray output, was touched on in the pix2pix paper, down in Section 4.2. It might be worthwhile to try out their formulation of a conditional GAN, as their results indicate that their objective function was useful in generating a more diverse set of colors.
H: Precision and Recall if not binary I have to calculate precision and recall for a university project to measure the quality of the classification output (with sklearn). Say this would be my results:
y_true = [0, 1, 2, 1, 1]
y_pred = [0, 2, 1, 2, 1]
confusion matrix:
[1 0 0]
[0 1 2]
[0 1 0]
I have read about it and the definition makes sense for me in a binary setting, but with 3 labels I find it hard to interpret precision/recall. If I use sklearn.metrics.precision/recall_score it gives me 0.4 for both (average = micro). Now for the precision this makes somewhat sense because 2 out of 5 are correctly classified. But I am having problems interpreting the 0.4 result for recall. AI: from sklearn.metrics import recall_score If you then print recall_score.__doc__ (or directly read the docs here) you'll see that recall is The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives If you go down to where they define micro, it says 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives Here, the total number of true positives is $2$ (the sum of the terms on the diagonal, also known as the trace), while the total number of false negatives is $3$ (the off-diagonal terms: each misclassified instance counts as a false negative for its true class). As $2/(2+3)=.4$, the recall (using the micro argument for average) is indeed $.4$. Note that, using micro, precision and recall are the same, because the same three misclassified instances also count as the false positives of the predicted classes. The following, in fact, prints nothing:
from numpy import random
from sklearn.metrics import recall_score, precision_score

for i in range(100):
    y_pred = random.randint(0, 3, 5)
    y_true = random.randint(0, 3, 5)
    if recall_score(y_true, y_pred, average='micro') != precision_score(y_true, y_pred, average='micro'):
        print(i)
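Equivalently, you can read the micro-averaged value straight off the confusion matrix:
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 2, 1, 1]
y_pred = [0, 2, 1, 2, 1]
cm = confusion_matrix(y_true, y_pred)

micro = np.trace(cm) / cm.sum()   # correct predictions divided by all predictions
print(micro)                      # 0.4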
H: Finding orthogonal input patterns associated with logistic function output I've been given this problem but cannot seem to get an analytical solution. I've tried satisfying the logistic function with several vectors but have difficulty finding ones which are also orthogonal. The problem begins with me being given a neural unit weight vector, $$w =\begin{bmatrix} 1 \\ \frac{1}{4} \\ \frac{1}{9} \\ \end{bmatrix}$$ The neuron output u is related to its input pattern $v \in \mathbb{R}^3$ by $$u = f(w^Tv + 1)$$ where $$f(x) = (1 + e^{-x})^{-1}$$ To find a pair of orthogonal vectors, v1 and v2, for which the unit output $u_{i} = 0.5, i=1,2.$ So far I know that $w^TV = 0$ for the function output to be 0.5. AI: First note that $f(x)=\tfrac{1}{2}$ exactly when $x=0$, so strictly speaking the condition for $u_i=0.5$ with the bias term is $w^Tv_i+1=0$, i.e. $w^Tv_i=-1$, not $w^Tv_i=0$. If, as in your last line, the condition you are working with is $w^Tv=0$ (no bias, or the bias absorbed elsewhere), then all you have to do is complete $w$ to an orthogonal basis of $\mathbb{R}^3$: since $w^Tv_i=0$ and $v_1$, $v_2$ are orthogonal to each other, the two other vectors of that basis are exactly $v_1$ and $v_2$. You can build them using Gram-Schmidt orthogonalization. Edit: The initial vectors for the Gram-Schmidt process can be any pair of vectors independent of $w$, for instance $(1, 0, 0)$ and $(0, 1, 0)$.
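A quick numerical sketch of the construction (under the same assumption that the condition reduces to $w^Tv=0$): two cross products give a pair of vectors orthogonal to $w$ and to each other.
import numpy as np

w = np.array([1.0, 1.0 / 4.0, 1.0 / 9.0])

v1 = np.cross(w, np.array([1.0, 0.0, 0.0]))  # any vector independent of w works here
v2 = np.cross(w, v1)

# all three dot products should be (numerically) zero
print(np.dot(w, v1), np.dot(w, v2), np.dot(v1, v2))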
H: Supervised Learning could be biased if we use obsolete data What if the data that we could use for training is obsolete? For instance, if I train my model with computer sales reports from the 20th century and try to predict current trends, a disaster, right? Another good example is the one that profiles criminals. It will use the historical "mistakes" as fact and could wrongly suspect innocent people (based on race or ethnicity). How can I avoid this kind of situation? AI: One approach to using knowledge from the example you gave above is to use that information as a prior for your current model. While it is unlikely that the exact trends observed a while ago will predict future trends, some general observations/correlations will likely still be relevant for the present day. Concerning your example of criminal profiling: making predictive models in such cases can be highly biased and is controversial. I invite you to read the Outlook section XVIII.D of this ML review on the social implications of machine learning and the references mentioned.
H: Predicting which apps users may be interested in I am building a mobile app that can predict what apps users may be interested in downloading from the play store, based on what apps the user has already installed on their device and how much time they have spent on these apps. Also, there is the option to scroll through the top apps in the play store and you can "favourite" any apps that you find interesting/want to download. Based on this data, I would like to make predictions and notify users of any new popular apps that are released in the future, however since the predictions will be user specific, I am unsure if the dataset will be too small for k-means clustering (Group the apps into genres and find most popular genres). 35 is the average number of apps installed on user's smartphones, and if you include any "favourited apps" the total dataset could be around 50. Perhaps there is another more suitable technique I could apply to make accurate predictions but I am unsure where to go at the moment. AI: Broadly speaking, what you want is a recommender system. If you consider only the user-app association data (app installs, likes, etc.), and no user or app metadata, then you should look at collaborative filtering. More specifically, a simpler technique you can consider is item-similarity based recommendation (users who install app A, also install app B). A slightly more complex method involves factorization of the partial user-app association matrix and then inferring missing points in the matrix. This paper describes one such factorization technique used in the Netflix Prize movie recommendation challenge.
H: How to set class-weight for imbalanced classes in KerasClassifier while it is used inside the GridSearchCV? Could you please let me know how to set class-weight for imbalanced classes in KerasClassifier while it is used inside the GridSearchCV? # Use scikit-learn to grid search the batch size and epochs from collections import Counter from sklearn.model_selection import train_test_split,StratifiedKFold,learning_curve,validation_curve,GridSearchCV from sklearn.datasets import make_classification from sklearn.preprocessing import StandardScaler import numpy as np from sklearn.model_selection import GridSearchCV from keras.models import Sequential from keras.layers import Dense from keras.wrappers.scikit_learn import KerasClassifier from sklearn.metrics import classification_report import pandas as pd from sklearn.pipeline import Pipeline # Function to create model, required for KerasClassifier def create_model(): # create model model = Sequential() model.add(Dense(12, input_dim=20, activation='relu')) model.add(Dense(1, activation='sigmoid')) # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) return model # fix random seed for reproducibility seed = 7 np.random.seed(seed) # load dataset X, y = make_classification(n_classes=2, class_sep=2,weights=[0.95, 0.05], n_informative=3, n_redundant=2, flip_y=0, n_features=20, n_clusters_per_class=1, n_samples=1000, random_state=10) print('Original dataset shape {}'.format(Counter(y))) ln = X.shape X_train, X_test, y_train, y_test = train_test_split(X, y,random_state=0) st=StandardScaler() # create model model = KerasClassifier(build_fn=create_model, verbose=0) pipeline = Pipeline(steps=[('scaler', st), ('clf', model )]) # define the grid search parameters batch_size = [20, 40, 60, 80, 100] epochs = [ 50, 100] param_grid = dict(clf__batch_size=batch_size, clf__epochs=epochs) cv = StratifiedKFold(n_splits=5, random_state=42) grid = GridSearchCV(estimator=pipeline, param_grid=param_grid,cv=cv,scoring="f1") grid_result = grid.fit(X_train, y_train) # summarize results print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) # Predictions ypred = grid_result.predict(X_train) print(classification_report(y_train, ypred)) print('######################') ypred2 = grid_result.predict(X_test) print(classification_report(y_test, ypred2)) AI: grid_result = grid.fit(X_train, y_train, clf__class_weight={0:0.95, 1:0.05}) FYI, per the docs fit_params should no longer be passed to the GridSearchCV constructor as a dict, but should be passed directly to fit as above. http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
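If you would rather derive the weights from the class frequencies than hard-code them, scikit-learn can compute balanced weights for you (a sketch; double-check the argument order against your sklearn version):
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

classes = np.unique(y_train)
weights = compute_class_weight('balanced', classes, y_train)
class_weight = dict(zip(classes, weights))   # e.g. roughly {0: 0.53, 1: 10.0} for a 95/5 split

grid_result = grid.fit(X_train, y_train, clf__class_weight=class_weight)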
H: Could someone explain to me how back-prop is done for the generator in a GAN? I'm not very familiar with neural networks, however, I though I understood the concept of back propagation as starting from the error in the output layer. Say, we have 3 neurons in the output layer and their respective values end up being: [1 0.5 0.3] And we wished to obtain values [0 1 0] So we can compute a vector of errors between the two: [-1, +0.5, -0.3] (Not necessarily with the - operation, but you get the point) And back propagate from there. However, in a the generator of a GAN, it seems to me that the output layer has a bunch of neurons (representative to whatever size the entity we want to generate is) but the error is based only on a % of images the discriminator classified wrongfully. So how exactly do we do back-prop for the generator ? The only human readable examples of GAN I've found use frameworks and libraries like autoGrad that kind of obfuscate the problem for me :/ Is there a simple way to explain how this backprop can be done ? (Like, in code, not with an "intuitive example" or "simple 4 variable equation where each of the 4 variable packs 50 years of mathematics behind it") AI: There is really nothing special about the backpropagation algorithm in a generative adversarial network (GAN). It is the same as that of a convolutional neural network (CNN), as CNNs are usually what the generator and discriminator of the GAN are made of. I will assume the MNIST toy example for the explanation and I will provide code to get a GAN working below. The GAN A GAN is composed of a discriminator and a generator. Each of these are held constant while training the other. We will thus alternate between training the discriminator and the generator. This is done separately. Training the discriminator is much easier so lets look at that, then we will look at training the generator as you asked in your question. Training the discriminator The discriminator has two output nodes and is used to distinguish real instances from artificial instances. To train the discriminator we will generate $m$ instances using forward passes from the generator, these are the artificial instances, their label will be $y = 0$. To generate these we simply pass a noise vector as the input to the model. We will also use $m$ instances from the real data these will have labels $y = 1$. We can then see that the discriminator is trained exactly the same way as a basic classification CNN with 2 output nodes. I will describe the training process of a CNN below. Training the generator When we train the generator we will keep the discriminator fixed, this is necessary as to not saturate our parameters and make the discriminator too strong to ever beat. So what we essentially have is a CNN (the generator) connected to another CNN (the discriminator). The connecting nodes between these two models is going to be output which will generate the desired images once trained. Do note that the generator wants the instances it will produce in this case to be classified by the discriminator as being from the real distribution, so they will have labels $y = 1$. All together this is just a CNN and the backpropagation will be computed in the exact same way. First we pass through a noise vector, it goes through the generator, some random image gets produced at its output, that then goes through the discriminator and gets classified as $artificial$. 
But, we are expecting the discriminator to be fooled in this case, so this is an error, it should have been labeled as $real$. We then use backpropagation to get the error contribution of each model parameter. We will then use gradient descent to update all the parameters associated with the generator. Backpropagation This is a method used to compute the contribution of each parameter on the error term. We then use gradient descent to update these parameters such that the next pass through should result in a lower loss rate. Picking the right loss function is essential for this process. For classification tasks, as is the case with a GAN, we typically choose binary cross entropy as defined by $L = - ylog(\hat{y}) - (1-y)log(1-\hat{y})$ and over $N$ instances as is typically the case for stochastic gradient descent the loss function is $L(w) = - \frac{1}{N} \sum_{n = 1}^N \big[ y_n log(\hat{y}_n) + (1-y_n)log(1-\hat{y}_n) \big] $ where $y$ is the true label and $\hat{y}$ is the predicted label. Backpropagation in deep neural networks Take a look at this answer here which describes the process of using backpropagation and gradient descent to train a single neuron perceptron, and then a multi-layered network. The only difference is we are using the binary entropy loss function here which has a different derivative with respect to $\hat{y}$. This becomes $\frac{\partial L}{\partial \hat{y}} = - \frac{1}{N} \sum_{n = 1}^N \Big[ \frac{y}{\hat{y}} - \frac{1-y}{1-\hat{y}} \Big]$ You will then backpropagate this loss through the network by using the chain rule with the activation functions you selected at each layer. Backpropagation in a CNN Please refer to this answer for extensive details and a derivation of backpropagation for a CNN. The process is very similar to that for a deep neural network. However, the CNN uses the cross-correlation function at each layer, so you need to backpropagate the loss function through the derivative of this function. That question asks about the inner working of a CNN with two outputs very much like our discriminator. Finally, in order to train the generator, just imagine the same process with more convolutional layers ahead of it. When we have the associated gradient $\nabla L(w)$ for each parameters, only apply the gradient descent algorithm to the parameters that are a part of the generator model. I am sure that through these derivations you will see that the number of intermediate nodes between layers does not affect the backpropagation algorithm. It remains the same process. You should also be convinced that for a non-human the intermediate nodes that are the generated image from the GAN, are no different from any intermediate nodes in the model. We simply train these intermediate nodes in such a way that we can perceive some meaning, such as generating instances of the MNIST dataset.
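To make the loss and its gradient concrete, here is a small numpy sketch of binary cross entropy over a batch (an illustration of the formulas above, not the GAN code itself):
import numpy as np

def bce_loss(y, y_hat, eps=1e-12):
    # average binary cross entropy over the batch
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def bce_grad(y, y_hat, eps=1e-12):
    # dL/dy_hat, the quantity that gets backpropagated through the discriminator
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -(y / y_hat - (1 - y) / (1 - y_hat)) / len(y)

y = np.array([1.0, 1.0, 1.0])        # the generator wants its fakes labelled "real"
y_hat = np.array([0.3, 0.8, 0.1])    # what the discriminator actually outputs
print(bce_loss(y, y_hat), bce_grad(y, y_hat))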
H: Is this an over-fitting case? I'm a new programmer and this is my first ever neural network for a real world application. Here is the deal: I'm using a top-less pre-trained VGG-16 with some dense layers on top of it (for an image classification problem), but no matter what hyper parameters I change I always get plots similar to these. So my question is, at what loss (val_loss) can we consider the model trained? And is there a problem if the images that I feed the VGG-16 are RGB/255? And could someone explain how a high validation loss (even if validation accuracy is high) impacts the model? Thanks for taking time to answer me, and sorry if the questions seem stupid but I didn't find answers when searching elsewhere. AI: Your model is indeed overfitting. There can be a lot of reasons why it could be happening: how many images are you training with? Are you using regularization, data augmentation with random crops or flips, or dropout to prevent overfitting? So my question is, at what loss (val_loss) can we consider the model trained? We can generally stop training when the validation loss doesn't decrease anymore. This is generally done by defining a variable, e.g. n = 5, and checking whether the validation loss has decreased after running n epochs. If you're solving a well-known problem (MNIST, Fashion-MNIST, CIFAR-10, etc.), you can check the best score/loss achievable by the model and tweak your model (add or remove layers, prevent overfitting, etc.) to reach that score. And is there a problem if the images that I feed the VGG-16 are RGB/255? You should preprocess the images for faster training and also subtract the mean RGB pixel values of the images used to train the pretrained network (if the pretrained net was trained on ImageNet data, subtract the ImageNet dataset's mean RGB pixel values from your input images). Here you can read about Keras' preprocess_input method. And could someone explain how a high validation loss (even if validation accuracy is high) impacts the model? Validation loss measures how generalizable your model is to unseen data. If your training loss and validation loss don't improve during training, then you are underfitting. If your training loss is low but your validation loss is stagnant or increasing, then you're overfitting, which means that your model is learning unwanted noise or patterns which help it memorize the training data but are not generalizable to an unseen validation set. Here is a link explaining data preprocessing, augmentation, and transfer learning using a pretrained net.
H: How can I infer "no target" in a target classification problem based on deep learning? Let's take the MNIST dataset (my application is different) with a lot of noise; I am going to train a deep NN to classify the letters. What's the right way to infer that there is possibly no letter, or a letter not included in the training set? Do I have to add the class "not a letter" and try to pick or simulate the part of the dataset that has no letters but noise, or can I do it without adding the extra label? AI: What you are looking for is anomaly detection, or novelty detection. This cannot always be solved by labeling some images as "no letter", since all kinds of images having no letters may not be available for labeling, or may be too costly to exhaustively label. This link has some directions you can look at for a start.
H: Difference between mathematical and Tensorflow implementation of Softmax Crossentropy with logit Softmax cross entropy with logits can be defined as: $a_i = \frac{e^{z_i}}{\sum_{\forall j} e^{z_j}}$ $l={\sum_{\forall i}}y_ilog(a_i)$ Where $l$ is the actual loss. But when you look deep inside into C++ Tensorflow implementation of SoftmaxCrossEntropyWithLogits operation, the exact formula which they use is descibed as: $l={\sum_{\forall j}}y_j ((z_j-max(z))-log({\sum_{\forall i}}e^{z_i-max(z)}))$ The part: $z-max(z)$ - is perfectly understood - it is just normalization which helps to avoid under/overflow. BUT: Where is the actual Softmax in their implementation? Why from each $z_j$ they subtract $log({\sum_{\forall i}}e^{z_i-max(z)})$ before multiply it by $y_j$? Note: One may argue that the code I provide is just Tensorflow's implementation of CrossEntropyWithLogits operation, but the actual SoftmaxCrossEntropyWithLogits operation - additionaly checks only dimentions and do not perform any more computation. AI: As I understand it, the softmax function for $z_i$ is given by $a_i$. Then just taking the loss you've defined you get back exactly the formula that is implemented. The way it is written down however is, as you mentioned, to avoid underflow/overflow. For instance, suppose you want to compute the following: $A=\log(\sum_{i=1}^{4}\exp(z_i))$, with $z_i=(-1000.5,-2000.5,-3000.5,-4000.5)$ Clearly, if you just type in the formula directly, you will get an underflow error. Instead if you isolate the main contribution in the exponential by taking the $\max(z_i)$, the same formula can be written as: $A=\max_i(z_i)+\log(\sum_{i=1}^{4}\exp(z_i-\max_i(z_i)))$ The difference now is that the expression is "numerically stable" and we see that $A\approx -1000.5$. Thus, let's make the softmax numerically stable: \begin{align} \log(a_i)&=z_i-\log(\sum_j e^{z_j})\\ &=z_i-\max_j(z_j)-\log(\sum_je^{z_j-\max_j(z_j)}) \end{align} which is the expression that is implemented for the loss (just multiply by $y_i$ and sum over $i$).
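A small numpy sketch that mirrors the TensorFlow formula, with the max-subtraction made explicit (and the conventional minus sign in front of the loss):
import numpy as np

def softmax_cross_entropy(z, y):
    # z: logits, y: one-hot labels
    z_shift = z - np.max(z)                               # subtract max(z) for stability
    log_softmax = z_shift - np.log(np.sum(np.exp(z_shift)))
    return -np.sum(y * log_softmax)

z = np.array([2.0, 1.0, -1.0])
y = np.array([1.0, 0.0, 0.0])
print(softmax_cross_entropy(z, y))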
H: Alternatives to imputation of missing values? So I'm quickly learning that dealing with missing values for feature(s) in some of your observations is a part of every day life in data. I get the gist of imputation, when/how it's appropriate and when it's not, and I'll read up on it in the near future. But, what about this: Suppose you have predictors $X_1, \dots, X_p$, and you want to model, say, a binary response $Y$ via, say, logistic regression. Instead of just imputing values to missing predictor values, couldn't you have a separate model built for every possible subset of the predictors, and apply that model for prediction when those predictors happen to be present? Each of those models would naturally be trained on just the data for which those predictors (and also others) are present. This seems to me a more reasonable approach than just making up values, but I have no theoretical justification for this. I do realize that this involves building $2^p$ different models, with $2^p$ different model matrices, etc., but for a moderate $p$ and $n$ it could be feasible, and especially if only a few of your features tend to be missing more often than others. Is this ever done? And if so, is there a standard way to implement this in R? In the case of logistic regression, you can specify to R's glm function how you want it to handle NA values, but your only options seem to be either tossing out observations altogether, or some kind of imputation scheme. Thoughts? AI: These are my thoughts: This seems to me a more reasonable approach than just making up values, but I have no theoretical justification for this. Actually, there is not much theoretical justification for classical imputation. I think your method makes sense when just very few of the predictors have missing values. For instance, if there is just one predictor that has missing values, you can build a model to imput those missing values. However, as you said, things scale very badly. Moreover, apart from computational issues, if there are lots of missing values in every predictor your data is not very good, and therefore any predictive model won't be very good anyway. This is done sometimes, but I don't think there's any library in R that implements it, you'll have to code it yourself (I don't think it is difficult). If you do it and your final model is a logistic regression, I don't recommend to fit the missing values with another linear model, as you will suffer from a collinearity problem. One other thing that is typically done is that, for every predictor with missing values, create a binary predictor that is $0$ if the other predictor's value is missing and $1$ otherwise.
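A lightweight variant of that last idea in pandas (hypothetical column names), adding a presence indicator next to a simple fallback imputation:
import numpy as np
import pandas as pd

df = pd.DataFrame({'x1': [1.0, np.nan, 3.0], 'x2': [0.5, 0.7, np.nan]})

for col in ['x1', 'x2']:
    df[col + '_present'] = df[col].notnull().astype(int)  # 1 if observed, 0 if missing
    df[col] = df[col].fillna(df[col].median())            # simple imputation as a fallback

print(df)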
H: What's wrong with my deep NN of two hidden layers? batch_size = 128 size_1 = 1024 size_2 = 256 size_3 = 128 beta = 0.001 graph = tf.Graph() with graph.as_default(): tf_train_dataset = tf.placeholder( tf.float32,shape=(batch_size,image_size*image_size)) tf_train_labels = tf.placeholder( tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Weights and Biases g_W1 = tf.Variable( tf.truncated_normal([image_size*image_size,size_1])) g_B1 = tf.Variable( tf.zeros([size_1])) g_W2 = tf.Variable( tf.truncated_normal([size_1,size_2])) g_B2 = tf.Variable( tf.zeros([size_2])) g_W3 = tf.Variable( tf.truncated_normal([size_2,num_labels])) g_B3 = tf.Variable( tf.zeros([num_labels])) # g_W4 = tf.Variable( # tf.truncated_normal([size_3,num_labels])) # g_B4 = tf.Variable( # tf.zeros([num_labels])) L1 = tf.nn.relu( tf.matmul(tf_train_dataset,g_W1) + g_B1) L2 = tf.nn.relu( tf.matmul(L1,g_W2) + g_B2) # L3 = tf.nn.relu( # tf.matmul(L2,g_W3) + g_B3) dr_prob = tf.placeholder("float") ##add dropout here #L1 = tf.nn.dropout(tf.nn.relu( # tf.matmul(tf_train_dataset,g_W1) + g_B1), 1.0) #L2 = tf.nn.dropout(tf.nn.relu( # tf.matmul(L1,g_W2) + g_B2), 1.0) #L3 = tf.nn.dropout(tf.nn.relu( # tf.matmul(L2,g_W3) + g_B3), 1.0) logits = tf.matmul(L2, g_W3) + g_B3 loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))+\ beta*tf.nn.l2_loss(g_W1) +\ beta*tf.nn.l2_loss(g_W2)+\ beta*tf.nn.l2_loss(g_W3) # beta*tf.nn.l2_loss(g_W4) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(logits) L1_pred = tf.nn.relu(tf.matmul(tf_valid_dataset, g_W1) + g_B1) L2_pred = tf.nn.relu(tf.matmul(L1_pred, g_W2) + g_B2) # L3_pred = tf.nn.relu(tf.matmul(L2_pred, g_W3) + g_B3) valid_prediction = tf.nn.softmax(tf.matmul(L2_pred, g_W3) + g_B3) L1_test = tf.nn.relu(tf.matmul(tf_test_dataset, g_W1) + g_B1) L2_test = tf.nn.relu(tf.matmul(L1_test, g_W2) + g_B2) # L3_test = tf.nn.relu(tf.matmul(L2_test, g_W3) + g_B3) test_prediction = tf.nn.softmax(tf.matmul(L2_test, g_W3) + g_B3) num_steps = 3001 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. 
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, dr_prob : 0.5} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)) Now it's 2 days trying to know what's wrong with my solution, I hope somebody can spot it, the purpose is to train a simple deep NN of two hidden NN, I have checked other's solutions and I still don't get what's wrong with my code (it's 4th problem 3rd assignment of Udacity deep learning online course) I am getting the following output.. Initialized Minibatch loss at step 0: 3983.812256 Minibatch accuracy: 8.6% Validation accuracy: 10.0% Minibatch loss at step 500: nan Minibatch accuracy: 9.4% Validation accuracy: 10.0% Minibatch loss at step 1000: nan Minibatch accuracy: 8.6% Validation accuracy: 10.0% Minibatch loss at step 1500: nan Minibatch accuracy: 11.7% Validation accuracy: 10.0% Minibatch loss at step 2000: nan Minibatch accuracy: 6.2% Validation accuracy: 10.0% Minibatch loss at step 2500: nan Minibatch accuracy: 10.2% Validation accuracy: 10.0% Minibatch loss at step 3000: nan Minibatch accuracy: 7.8% Validation accuracy: 10.0% Test accuracy: 10.0% AI: You didn't tell in your question what you tried when debugging, but I'll try to answer. Short answer: It looks to me you got to choose a lower learning rate since your loss is exploding after the first iteration. Explanation: You are using a standard Stochastic Gradient Descent to perform optimization. Therefore, it's an non-adaptive learning rate algorithms, which means if that latter is poorly chosen, loss can explode if the learning rate is too high. That's why when I'm running in such optimization issues with a new neural network, what I like to do is to set a very low learning rate to ensure convergence at first. Also you could use an adaptive optimizer such as AdaGrad or Adam who have both tensorflow implementations. I hope that'll solve your issue.
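Concretely, that means swapping the optimizer line in your graph for something like this (a hedged tweak, assuming TF 1.x as in your code):
# a much smaller step for plain SGD ...
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
# ... or an adaptive optimizer such as Adam
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)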
H: Handling binned feature I am new to the field of data science and trying to figure out ways to handle data quality issues before performing any modeling. I am working on a house rental price data set. In this data there is a feature called Total Squarefeet. The issue I am facing is that out of 12000 records, 200 have a range, e.g. 1200 - 1800, or 850 - 855, and these ranges also have random differences between them. The rest are simple numbers. Is there a way to correctly handle this kind of data? Can anyone help me or guide me to a place where I can learn techniques to handle such data? Thanks in advance. AI: The column you are using is probably of string data type. First filter the rows that contain "-" into another dataframe. On this new dataframe, perform a string split, create two different columns and convert them to numbers. Now create another column holding the average of the two columns (or whatever aggregate you prefer). Rename the columns, drop the unnecessary ones and merge the result back into the original dataframe.
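A sketch of that recipe in pandas, assuming every entry is either a plain number or a "low - high" range and the column is called total_sqft (a hypothetical name):
import pandas as pd

def to_sqft(value):
    # "1200 - 1800" -> 1500.0, "850" -> 850.0
    parts = str(value).split('-')
    nums = [float(p) for p in parts]
    return sum(nums) / len(nums)

df['total_sqft_clean'] = df['total_sqft'].apply(to_sqft)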
H: Dropout on inputs instead of on outputs = DropConnect? Is dropping out parts of the input vector better than dropping out parts of the output vector? The latter literally makes the neuron invisible to any further layers. On the contrary, ignoring pieces of the input means some of the further neurons will still be able to see this neuron. Is masking the input pretty much the same as masking the weights (aka DropConnect), and does it thus give higher quality regularization? AI: It depends on the type of input pattern, but if I had to make a decision, I would suggest not to. There are different reasons for that. First of all, you are damaging your input signal. In information-theoretic terms, the signal to noise ratio would become too small, and you would be left with a signal that is far from your real signal. Moreover, it is also not good to add dropout in the convolutional layers, because they are feature extractors and those features are significant for classification problems. If you drop them, you lose more information than usual. Consider also that the input to the network is already resized to a smaller shape than its original one; for instance, the input shape of typical CNNs is 224 * 224 while the original shape may be ten times bigger or more in each direction. You may have seen that the AlexNet authors used a data-augmentation technique that changes the colors of the inputs with different distributions. The point there is that they did not change the locality of the input signal. Moreover, the signal to noise ratio did not become too small, because they did not set input features to zero; they just changed them slightly. Finally, the last layer should not employ dropout, because the output has to satisfy specific constraints (for a classifier, it must sum to one). Dense layers, due to having a large number of weights and consequently a large number of activations, are good places to apply dropout.
H: Normalising data with multiple methods When training a neural network, I appreciate that data normalisation helps training. However, is it a good idea to normalise the data in multiple ways. For instance, is it a good idea to apply z-score normalisation on min-max normalised data? That is if the input data is already normalised to [0, 1], is it a good idea to train on the z-scores of that? AI: I've not seen any paper about that but based on what I've faced till now, normalizing data intuitively is just for assigning same importance to different features which their raw values do not have a same range. Take a look at here. Also, you can take a look at here that professor says that you just need to employ a technique and it's not really important which technique. Also, take a look at here.
H: How to predict class label from class probability given by predict_generator for testdata? While using Keras' flow_from_directory method to train my model on a multi-class image classification problem, the predict_generator function gives the class probabilities. So, my query is how to get the corresponding class-labels for those class probabilities? AI: You just take the class with the maximum probability. This can be done using numpy argmax function.
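A sketch with the usual Keras generator workflow (model, train_generator and test_generator are assumed names; class_indices is the mapping flow_from_directory builds for you):
import numpy as np

probs = model.predict_generator(test_generator, steps=len(test_generator))
pred_idx = np.argmax(probs, axis=1)

# invert the {class_name: index} mapping and translate indices back to labels
idx_to_class = {v: k for k, v in train_generator.class_indices.items()}
pred_labels = [idx_to_class[i] for i in pred_idx]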
H: Resizing images for training with Mobilenets I have a script to download images, but the images are of different resolutions, so I have written a script to shrink them. I have two options:
size=(224,224)
with cv2
cv2.resize(img,size,interpolation=cv2.INTER_AREA)
with PIL
img.thumbnail(size,Image.ANTIALIAS)
After saving them I see cv2 doesn't maintain the original ratio whereas PIL maintains the ratio. My questions: Is maintaining the aspect ratio important or not? If yes, is (224,224) a good choice or should I set it to a higher resolution? Sorry if the question is naive, I am new to image processing. AI: Maintaining aspect ratio is important or not. Yes, it's really important in most cases. As you can read from here, Why does aspect ratio matter? It's all to do with the relationship of the main subject to the sides of the frame, and the amount of empty space you end up with around the subject. An awareness of the characteristics of the aspect ratio of your particular camera can help you compose better images. It also helps you recognise when cropping to a different aspect ratio will improve the composition of your image. In deep learning tasks, it depends on how you want to feed data to your network. It's better to train your network on data that looks like what it will see in practice; consequently, if you are going to face data with a standard aspect ratio, you have to keep it during training. If yes (224,224) is a good choice or should I set it to higher resolution. Depending on your task, 224 may be good or not. For typical classification networks it is an acceptable size, like the inputs of AlexNet or VGG and similar customary nets. For localization tasks, not really; for instance, the input height and width of YOLO are larger than that.
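If you want a fixed 224x224 input while still keeping the aspect ratio, a common trick is to resize and pad ("letterboxing"); a sketch with OpenCV, assuming a 3-channel image:
import cv2
import numpy as np

def letterbox(img, size=224):
    h, w = img.shape[:2]
    scale = size / float(max(h, w))                 # shrink the longer side to `size`
    resized = cv2.resize(img, (int(w * scale), int(h * scale)),
                         interpolation=cv2.INTER_AREA)
    canvas = np.zeros((size, size, 3), dtype=resized.dtype)
    top = (size - resized.shape[0]) // 2
    left = (size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas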
H: Really bad value of Val loss I am using GTZAN dataset to make a CNN and classify by musical genres. I'm getting very good results except Val. loss (See Image) I am processing the audio files using Librosa, obtaining the spectogram and then using the power_to_db function. This is my CNN Model: class CNNModel(object): def __init__(self, config, X): self.filters = 32 # number of convolutional filters to use self.pool_size = (2, 2) # size of pooling area for max pooling self.kernel_size = (3, 3) # convolution kernel size self.nb_layers = 4 self.input_shape = (128, 625, 1) # cambiar por x.shape def build_model(self, nb_classes): model = Sequential() model.add( Conv2D( self.filters, self.kernel_size, padding ='same', input_shape = self.input_shape)) model.add(BatchNormalization(axis=1)) model.add(Activation('relu')) model.add( Conv2D( self.filters, self.kernel_size)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size = self.pool_size)) model.add(Dropout(0.25)) model.add( Conv2D( self.filters + 32, self.kernel_size, padding ='same')) model.add(Activation('relu')) model.add( Conv2D( self.filters + 32, self.kernel_size, padding ='same')) model.add(MaxPooling2D(pool_size = self.pool_size)) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(512)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(nb_classes)) model.add(Activation("softmax")) #mirar return model I leave the link of my github in case you want to see the whole code. Every song is (128, 625) Shape, I used MinMaxScale to Scale the data. This is my loss function and my optimizer loss = losses.categorical_crossentropy, optimizer = optimizers.SGD(lr=0.001, momentum=0, decay=1e-5, nesterov=True) I have read about overfitting and that seems to be the cause but I do not know how to solve it at the code level. Update 1: With Dropout(0.9) I get this results: Thank you AI: You can test with higher values of Dropout (0.5,0.7,0.9) and/or try L1/L2 Regularization to combat overfitting : Keras Regularizers. Update: You can play with a combination of l1/l2 regularization and dropout for your convolutional and FC layers. Start with low values of lambda (0.001) and increase it thereon. A common practice is to have dropout only in the FC layers, see if it helps your problem. Also, from the looks of your loss curve, your model probably hasn't converged.You can train it until your validation loss starts going up/becomes stable.
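For the "stop training when validation loss stops improving" part, Keras has a ready-made callback, and L2 can be attached per layer; a sketch (layer sizes and variable names follow your model, the data arrays are placeholders):
from keras import regularizers
from keras.callbacks import EarlyStopping
from keras.layers import Activation, Dense, Dropout

# e.g. the dense block with L2 on its weights plus dropout
model.add(Dense(512, kernel_regularizer=regularizers.l2(0.001)))
model.add(Activation('relu'))
model.add(Dropout(0.5))

early_stop = EarlyStopping(monitor='val_loss', patience=5)
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[early_stop])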
H: Reinforcement Learning in 2018, best tips and tricks? Putting aside things applicable to neural networks such as dropout, L2 regularization and new optimizers - what are the cool things I should be adding to my Reinforcement Learning algorithm (Q-Learning and SARSA) to make it stronger, in 2018? So far, I know these: Experience Replay (gather experiences, then train at once on some of them) Slowly making our network approach the Q-values of a Target Network. The target network is only cloned sometimes. If our function approximator is an RNN and our memory bank stores sequences of several timesteps, after selecting a few random sequences from the memory bank, only backpropagate over the second half of each sequence, without going back to its start AI: Have you looked at Rainbow? It combines many of the recent improvements to value-based deep RL. Apart from structural changes, other improvements come from Reward Shaping, Curiosity-driven Learning, RL with Unsupervised Auxiliary Tasks and RL with external memory (e.g. Neural Episodic Control, Model-Free Episodic Control). It's a pity that you leave Policy Gradients out of your question, they are lots of fun :) Happy reading!
H: Web page data extraction using machine learning I would like to extract some specific information from web pages. The web pages contain person profiles, and I want to extract information such as name, email and research interest areas. The structure of each page is different from the others. How can I extract such information using machine learning? What kind of method and features can I use? Or can I use NLP for such a task? AI: If I understand your question properly, this seems to be a scraping problem which you can do using BeautifulSoup in Python.
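As a concrete starting point before any learning is involved, a minimal scraping sketch (the URL and selectors are hypothetical; every profile layout will need its own rules, which is exactly where a learned extractor can help):
import requests
from bs4 import BeautifulSoup

html = requests.get('https://example.org/profile/jane-doe').text
soup = BeautifulSoup(html, 'html.parser')

name_tag = soup.find('h1')                      # often holds the person's name
emails = [a['href'].replace('mailto:', '')
          for a in soup.find_all('a', href=True)
          if a['href'].startswith('mailto:')]
print(name_tag.get_text(strip=True) if name_tag else None, emails)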
H: Difficulty in choosing Hyperparameters for my CNN My task is to estimate a person's age based on a face image of that person. To that end I'm using a CNN and at first stage I was based on the following article: DeepExpectation which uses a VGG16 architecture to predict a person apparent age (the age that other people would vote). I'm using ResNet 50 architecture (and I'm using this implementation of it: ResNet Tensorflow in tensorflow). The dataset I use for learning is taken from the same article above and can be downloaded from here: WIKI-IMDB dataset. It is composed of 523,051 face images with tagged ages but I found that most of them are garbage (doesn't have real age as the label or has more than one face in it or no face at all). after filtering this dataset (by throwing images that have more than one face or no face at all or the label that represents the age doesn't make sense) I'm left with approximately 150k images in my dataset. in the process of making this dataset I centered all faces in the image and cropped it to be 224*224*3. To increase the size of my dataset I flipped horizontally each image in my dataset to get a total of 300k images in my dataset. I then split the dataset to 240k train, 30k validation and 30k test images. I'm loading a pre-trained model (trained on ImageNet) and I hoped to get the results they got in the article above (something close to 3.2 years with MAE as the evaluation metric). I also should mention that I'm using cross-entropy as the loss and an l2 loss for regularization and I have 101 classes as the logits (ages 0-100). The batch size I'm using is 64 (The maximum my GPU can handle with). I tried several combinations of hyperparameters but until now the best MAE I got on the validation set was 5.8 years. Examples of loss graphs I got and their hyperparameters: weight decay = 5e-4, momentum optimizer with momentum=0.9, first learning rate = 0.01 and reducing it by a factor of 10 each 2 epochs until reaching to 1e-6, the above LR is the LR assinged to the last FC layer. for the layer before it I used 0.5 of that LR. for the middle layers I used 1e-2 of that LR and for the first layer I used 1e-3 of that LR. I got the following graphs: (orange = train , blue = validation) loss: LR: MAE: example #2: same hyperparameters as in the example above only different ratios between first layers and last FC layer. in this example I used 0.5 of the LR in the last FC layer for the layer before it and 0.1 of the LR in the last FC layer for the other layers. I got the following graphs: (orange = train , blue = validation) loss: LR: MAE: As you can see from the last 2 examples, I'm reaching to ~6 pretty fast but it seems as the optimization on the train loss gets stuck and also that the validation loss doesn't approach the train loss from some point. Do you have any suggestions? Also, I'm having some thoughts on whether it is good to apply some of the following: Initalize weights from the pre-trained network until some layer Initialize all weights from the pre-trained network beside the batch-norm Freeze some of the layers after loading them and train only the last layers Apply different LR to different layers (as I did in the examples above) Should I use Momentum optimizer or maybe Adam Optimizer If I'm using Momentum optimizer, what should be the learning rate schedule (learning rate decay)? If I'm using Adam optimizer, does it make sense to use different learning rates to different layers? What should be the weight decay hyperparameter? 
Any help would be much appreciated. I didn't know which Stack Exchange site is more appropriate for this question, so I also posted it on Cross Validated: Difficulty in choosing Hyperparameters for my CNN
AI: I am going to comment on your ideas point by point:
1. This is generally recommended. There is no proven reason not to do it and many reasons to do it; you should initialize as many weights as possible. Typically it is done up to the last layer.
2. Initialize everything that you can from the pre-trained network.
3. I would freeze the first layers during the first epochs, and unfreeze them after that. If possible, I would rather use differential learning rates, as this post shows.
4. Totally, as shown above.
5. Adam is more widely used; I would use that.
6. Cosine scheduling with restarts has been shown to be very effective, but any learning rate schedule with restarts should behave similarly.
7. It does make sense.
8. I wouldn't worry about weight decay until you are overfitting, and you are not overfitting yet. If you overfit, I would recommend dropout instead of weight decay.
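This is not the asker's TensorFlow setup, but as an illustrative Keras sketch of the freeze-then-unfreeze idea from points 2 and 3 (the learning rates and the two-phase split are placeholders to tune):

from keras.applications.resnet50 import ResNet50
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras.optimizers import Adam

base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = GlobalAveragePooling2D()(base.output)
out = Dense(101, activation='softmax')(x)   # 101 age classes, as in the question
model = Model(inputs=base.input, outputs=out)

# Phase 1: train only the new head, keeping the pre-trained body frozen
for layer in base.layers:
    layer.trainable = False
model.compile(optimizer=Adam(lr=1e-3), loss='categorical_crossentropy')
# model.fit(...) for a few epochs

# Phase 2: unfreeze everything and fine-tune with a smaller learning rate
for layer in base.layers:
    layer.trainable = True
model.compile(optimizer=Adam(lr=1e-4), loss='categorical_crossentropy')
# model.fit(...) again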
H: Trouble with accuracy of multiclass perceptron I have built a multiclass perceptron, but it has low accuracy (around 80%). I think I'm missing something. One possibility is that I should add a bias, but I'm not sure how to incorporate that. The task is, given 2 dimensions, predict the class, which is between 0 and 8. I could use some pointers as to where this code is going wrong. We are not given classes for test data, but rather need to populate it and it is checked elsewhere. def dot_product(weights_list, training_data): return [sum([a * b for a, b in zip(training_data, i)]) for i in weights_list] def predict(row, weights_list, n_classes): # get dimensions * weight activations = dot_product(weights_list, row[:-1]) # get index of argmax predicted_label = activations.index(max(activations)) return predicted_label def train_weights(train, n_epoch, n_classes): # prepopulate just weights weights_list = [[0.0 for x in range(len(train[0])-1)] for x in range(n_classes)] for epoch in range(n_epoch): for row in train: actual_class = int(row[-1]) # argmax from predicting each class prediction = int(predict(row, weights_list, n_classes)) # if incorrect: # lower score of wrong answer by this row's values # raise score of correct answer by this row's values if actual_class != prediction: weights_list[prediction] = [a - b for a, b in zip(weights_list[prediction], row[:-1])] weights_list[actual_class] = [a + b for a, b in zip(weights_list[actual_class], row[:-1])] return weights_list train = [[83.0, -14.0, 6.0], [77.0, 15.0, 6.0], [93.0, 35.0, 3.0], [86.0, -8.0, 6.0], [-51.0, -79.0, 1.0], [62.0, -73.0, 1.0]] test = [[36.0, 27.0, -1], [6.0, 99.0, -1], [-3.0, 16.0, -1], [-40.0, -61.0, -1], [70.0, 67.0, -1], [86.0, -14.0, -1], [-92.0, 67.0, -1]] n_classes = 9 n_epoch = 10000 weights_list = train_weights(train, n_epoch, n_classes) for row in test: prediction = predict(row, weights_list, n_classes) row[2] = prediction AI: Firstly, in all the linear separator algorithms such as linear regression, logistic regression and the perceptron, adding the bias is as simple as adding a feature column consisting of all 1's. Then the third weight that will be trained will act as the bias $b$. I have some working code for a multi-class perceptron First let's generate some artificial data with 2 features. Each distribution of points is going to be Gaussian with a given mean and variance described as a list of lists for each dimension under the variable params. def gen_data(params, n): dims = len(params[0]) num_classes = len(params) x = np.zeros((n*num_classes, dims)) y = np.zeros((n*num_classes,)) for ix, i in enumerate(range(num_classes)): inst = np.random.randn(n, dims) for dim in range(dims): x[ix*n:(ix+1)*n,dim] = np.random.normal(params[ix][dim][0], params[ix][dim][1], n) y[ix*n:(ix+1)*n] = ix return x, y params = [[[ 5,1], [ 5,1]], [[ 0,1], [ 0,1]], [[2, 1], [ 2,1]], [[-2, 1], [ 2,1]]] n = 300 x, y = gen_data(params, 300) plt.scatter(x[:,0], x[:,1]) plt.show() Alright so now we have 4 distributions with different labels. Let's split the data for sanctity's sake. 
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33) Let's train the weights using the training data def get_weights(x, y, n_epochs, verbose = 0): # Append a ones column to the feature for the bias data = np.ones((x.shape[0], x.shape[1]+1)) data[:, 0:x.shape[1]] = x # Set the targets as integers for comparison targets = y.astype(int) # Initialize the weights as a matrix # number of classes by number of features weights = np.ones((len(set(y)), x.shape[1]+1)) for epoch in range(n_epochs): for i, target in zip(data, targets): temp = np.dot(i, weights.T) pred = np.argmax(temp) # If wrongly predicted update prediction if pred != target: weights[target, :] = weights[target, :] + i weights[pred, :] = weights[pred, :] - i if verbose == 1: print('Iteration: ', epoch) print(weights) print('---------------------------------------------') return weights weights = get_weights(x_train, y_train, n_epochs = 30, verbose = 1) This converges to approximately this [[ 23.62752045 16.03867499 -111. ] [ -3.96545848 -8.66924406 47. ] [ -0.94290763 -1.84413793 33. ] [ -14.71915434 -1.52529301 35. ]] We get an accuracy calculated using the score def predict(x, weights): data = np.ones(( x.shape[0], x.shape[1]+1 )) data[:, 0:x.shape[1]] = x predictions = np.argmax(np.dot(data, weights.T), axis = 1) return predictions def score(x, y, weights): pred = predict(x, weights) return sum(pred == y_test)/len(pred) score(x_test, y_test, weights) 0.8686868686868687 We can check our results using a confusion matrix. For the training set from sklearn.metrics import confusion_matrix predictions = predict(x_train, weights) plt.imshow(confusion_matrix(y_train, predictions)) plt.show() And the testing set predictions = predict(x_test, weights) plt.imshow(confusion_matrix(y_test, predictions)) plt.show() So we see that in fact our algorithm is performing quite well. We can then plot our points to see how it is classifying them. I will plot the training points as small circles, and the testing points as larger ones. The dark points are those which are misclassified colors = ['y', 'r', 'b', 'g', 'k'] # Predict training set predictions = predict(x_train, weights) for i, t, p in zip(x_train, y_train, predictions): if t == p: plt.scatter(i[0], i[1], c=colors[int(t)], alpha = 0.2, s=20) else: plt.scatter(i[0], i[1], c=colors[int(t)], alpha = 1) # Predict test set predictions = predict(x_test, weights) for i, t, p in zip(x_test, y_test, predictions): if t == p: plt.scatter(i[0], i[1], c=colors[int(t)], alpha = 0.2) else: plt.scatter(i[0], i[1], c=colors[int(t)], alpha = 1) # Plot the linear separators x1 = np.linspace(np.min(x[:,0]),np.max(x[:,1]),2) x2 = np.zeros((weights.shape[0], 2)) for ix_w, weight in enumerate(weights): x2 = 1 * ( - weight[2] - weight[0]*x1) / weight[1] plt.plot(x1, x2, c = colors[ix_w]) plt.xlabel('Feature 1') plt.ylabel('Feature 2') plt.xlim([np.min(x[:,0]), np.max(x[:,0])]) plt.ylim([np.min(x[:,1]), np.max(x[:,1])]) plt.show() This code generalizes to the binary classification task as well params = [[[ 5,1], [ 5,1]], [[ 0,1], [ 0,1]]] Stop training on convergence If you want to stop the algorithm based on convergence you can use a stop criteria. For example you can stop training once every weight in your matrix changes by less than a very small number. The very small number we usually choose is machine epsilon 2.220446049250313e-16, which is essentially zero. Sometimes this requirement is too stringent so it can be replaced by any number of significant values. 
Change the get_weights code to include the break criterion as follows:

from copy import deepcopy

def get_weights(x, y, n_epochs, verbose = 0):
    # Append a ones column to the features for the bias
    data = np.ones((x.shape[0], x.shape[1]+1))
    data[:, 0:x.shape[1]] = x
    # Set the targets as integers for comparison
    targets = y.astype(int)
    # Initialize the weights as a matrix:
    # number of classes by number of features
    weights = np.zeros((len(set(y)), x.shape[1]+1))
    past_weights = np.zeros((len(set(y)), x.shape[1]+1))

    for epoch in range(n_epochs):
        for i, target in zip(data, targets):
            temp = np.dot(i, weights.T)
            pred = np.argmax(temp)
            # If wrongly predicted, update the weights
            if pred != target:
                weights[target, :] = weights[target, :] + i
                weights[pred, :] = weights[pred, :] - i

        # Stop once no weight changed by more than machine epsilon over the epoch
        if np.all(np.abs(weights - past_weights) < np.finfo(float).eps):
            break
        past_weights = deepcopy(weights)

        if verbose == 1:
            print('Iteration: ', epoch)
            print(weights)
            print('---------------------------------------------')

    return weights
H: What is the difference between bootstrapping and cross-validation? I used to apply K-fold cross-validation for robust evaluation of my machine learning models. But I'm aware of the existence of the bootstrapping method for this purpose as well. However, I cannot see the main difference between them in terms of performance estimation. As far as I see, bootstrapping is also producing a certain number of random training+testing subsets (albeit in a different way) so what is the point, advantage for using this method over CV? The only thing I could figure out that in case of bootstrapping one could artificially produce virtually arbitrary number of such subsets while for CV the number of instances is a kind of limit for this. But this aspect seems to be a very little nuisance. AI: Both cross validation and bootstrapping are resampling methods. bootstrap resamples with replacement (and usually produces new "surrogate" data sets with the same number of cases as the original data set). Due to the drawing with replacement, a bootstrapped data set may contain multiple instances of the same original cases, and may completely omit other original cases. cross validation resamples without replacement and thus produces surrogate data sets that are smaller than the original. These data sets are produced in a systematic way so that after a pre-specified number $k$ of surrogate data sets, each of the $n$ original cases has been left out exactly once. This is called k-fold cross validation or leave-x-out cross validation with $x = \frac{n}{k}$, e.g. leave-one-out cross validation omits 1 case for each surrogate set, i.e. $k = n$. As the name cross validation suggests, its primary purpose is measuring (generalization) performance of a model. On contrast, bootstrapping is primarily used to establish empirical distribution functions for a widespread range of statistics (widespread as in ranging from, say, the variation of the mean to the variation of models in bagged ensemble models). The leave-one-out analogue of the bootstrap procedure is called jackknifing (and is actually older than bootstrapping). The bootstrap analogue to cross validation estimates of generalization error is called out-of-bootstrap estimate (because the test cases are those that were left out of the bootstrap resampled training set). [cross validation vs. out-of-bootstrap validation] However, I cannot see the main difference between them in terms of performance estimation. That intuition is correct: in practice there's often not much of a difference between iterated $k$-fold cross validation and out-of-bootstrap. With a similar total number of evaluated surrogate models, total error [of the model prediction error measurement] has been found to be similar, although oob typically has more bias and less variance than the corresponding CV estimates. There are a number of attempts to reduce oob bias (.632-bootstrap, .632+-bootstrap) but whether they will actually improve the situation depends on the situation at hand. Literature: Kohavi, R.: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection, Mellish, C. S. (ed.) Artificial Intelligence Proceedings 14$^th$ International Joint Conference, 20 -- 25. August 1995, Montréal, Québec, Canada, Morgan Kaufmann, USA, , 1137 - 1145 (1995). Kim, J.-H. Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap , Computational Statistics & Data Analysis , 53, 3735 - 3745 (2009). 
DOI: 10.1016/j.csda.2009.04.009
Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G.: Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005).
"The only thing I could figure out is that in case of bootstrapping one could artificially produce a virtually arbitrary number of such subsets while for CV the number of instances is a kind of limit for this."
Yes, there are fewer combinations possible for CV than for bootstrapping. But the limit for CV is probably higher than you are aware of. For a data set with $n$ cases and $k$-fold cross validation, you have
CV: $\binom{n}{n/k}$ combinations without replacement (for $k < n$ these are far more than the $k$ possibilities that are usually evaluated) vs.
bootstrap/oob: $\binom{2n - 1}{n}$ combinations with replacement (which are again far more than the, say, 100 or 1000 surrogate models that are typically evaluated).
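A small illustrative sketch of how the two resampling schemes generate surrogate train/test splits, using scikit-learn's KFold and plain numpy (the sizes and the 100 bootstrap repetitions are arbitrary):

import numpy as np
from sklearn.model_selection import KFold

n = 20
X = np.arange(n)

# k-fold CV: each case is left out exactly once across the k folds
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    pass  # fit on X[train_idx], evaluate on X[test_idx]

# out-of-bootstrap: resample with replacement, test on the cases that were left out
rng = np.random.RandomState(0)
for _ in range(100):
    boot_idx = rng.choice(n, size=n, replace=True)
    oob_idx = np.setdiff1d(np.arange(n), boot_idx)
    # fit on X[boot_idx], evaluate on X[oob_idx]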
H: How to normalize a boolean feature for neural nets?
I have a feature that is boolean and I would like to feed it to a neural net as one of the inputs. I think in theory the best is to encode it as false -> 0 and true -> 1, because a 0 input will deactivate the weights of a neuron. Is this correct?
AI: It is not clear what you mean by "deactivating", but if you mean that the output of the neuron would be zero, that is not correct, because of the bias term (also known as the intercept). Furthermore, we usually normalise features that are on very different scales. Your boolean values do not have a large range, so you do not need to scale them. To be more precise, you might need to, depending on the range of the other features (for example, if they vary by less than, say, 1e-5 across samples), but most of the time booleans do not need to be scaled.
H: Time series feature extraction from raw sensor data for classification?
I have tabular raw data from sensors with an associated label, and I want to extract time series features like mean, max, min and std from all the sensor data and form another table, or export it to a CSV file, so that I can do a classification task on that data. Data table
AI: For clarification: mean, max, min and std are not "time series features"; they are data features in general. Assuming that you want to do it in Python, you should take a look at the pandas.DataFrame class. Once you initialize a DataFrame object with your tabular data, you can call its methods DataFrame.min(), DataFrame.max(), DataFrame.mean(), DataFrame.std() for your purpose. You can insert all these calculated characteristics into a new DataFrame and thereafter call DataFrame.to_csv() to export them to a CSV file.
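A minimal sketch of that workflow (the column names are made up for illustration):

import pandas as pd

# Hypothetical raw sensor table: one column per sensor plus a label column
df = pd.DataFrame({
    'sensor_1': [0.1, 0.3, 0.2, 0.5],
    'sensor_2': [1.0, 0.9, 1.2, 1.1],
    'label':    ['A', 'A', 'B', 'B'],
})

# Aggregate per label (or per recording window) and export
features = df.groupby('label').agg(['mean', 'max', 'min', 'std'])
features.to_csv('sensor_features.csv')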
H: Deep learning with Tensorflow: training with big data sets Goal I am trying to build a neural network that recognizes multiple label within a given image. I started with a database composed of 1800 images (each image is an array of shape (204,204,3). I trained my model and concluded that data used wasn't enough in order to build a good model ( with respect to chosen metric). So i decided to apply data augmentation technique in order to get more images. I managed to get 25396 images ( all of them are of shape (204,204,3)). I stored all of them in arrays . I obtained (X,Y) where X are the training examples (is an array of shape (25396,204,204,3)) and Y are the labels ( an array of shape (25396,39) : the number 39 refers to the possible labels in a given image). Issues My data (X,Y) weights approximately arround 26 giga bytes. I successfully managed to use them . However, when i try to do manipulation (like permutations) I encounter memory Error in python. Exemple 1. I started jupyter and successfully imported my data (X,Y) x=np.load('x.npy') y=np.load('y.npy') output: x is an np.array of shape (25396,204,204,3) and y is an np.array of shape (25396,39). 2. I divide my dataSet in train and test by using sklearn built in function train_test_split X_train, X_valid, Y_train, Y_valid= `train_test_split(x_train,y_train_augmented,test_size=0.3, random_state=42)` output -------------testing size of different elements et toplogie: -------------x size: (25396, 204, 204, 3) -------------y size: (25396, 39) -------------X_train size: (17777, 204, 204, 3) -------------X_valid size: (7619, 204, 204, 3) -------------Y_train size: (17777, 39) -------------Y_valid size: (7619, 39) 3. I am creating a list composed of random batches extracted from (X,Y) and then iterate over the batches in order to complete the learning process for a given epoch :'this opperation is done in each epoch of the training part. Here is the function used in order to create the list of random batches: def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0): """ Creates a list of random minibatches from (X, Y) Arguments: X -- input data, of shape (input size, number of examples) Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples) mini_batch_size -- size of the mini-batches, integer Returns: mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y) """ np.random.seed(seed) m = X.shape[0] mini_batches = [] # Step 1: Shuffle (X, Y) permutation = list(np.random.permutation(m)) shuffled_X = X[permutation,:] shuffled_Y = Y[permutation,:] # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case. num_complete_minibatches = floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning for k in range(0, num_complete_minibatches): mini_batch_X = shuffled_X[k * mini_batch_size : (k + 1) * mini_batch_size, :] mini_batch_Y = shuffled_Y[k * mini_batch_size : (k + 1) * mini_batch_size, :] mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) ''' mini_batches.append((X[permutation,:][k * mini_batch_size : (k + 1) * mini_batch_size, :], Y[permutation,:][k * mini_batch_size : (k + 1) * mini_batch_size, :])) ''' # Handling the end case (last mini-batch < mini_batch_size) if m % mini_batch_size != 0: ### START CODE HERE ### (approx. 
2 lines) mini_batch_X = shuffled_X[ num_complete_minibatches * mini_batch_size:, :] mini_batch_Y = shuffled_Y[ num_complete_minibatches * mini_batch_size:, :] ### END CODE HERE ### mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) ''' mini_batches.append((X[permutation,:][ num_complete_minibatches * mini_batch_size:, :], Y[permutation,:][ num_complete_minibatches * mini_batch_size:, :])) ''' shuffled_X=None shuffled_Y=None return mini_batches 4. I am creating a loop (of 4 iterations) and i am testing the random_mini_batch function in each iteration. At the end of each iteration I am assigning None values to the list of mini_batches in order to liberate memory and redo the random_mini_batch_function in the next iteration .So these line of codes works fine and I ve got no memory issues: minibatch_size=32 seed=2 for i in range(4): seed=seed+1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) minibatches=None minibatches_valid=create_mini_batches(X_valid, Y_valid, minibatch_size) print(i) minibatches_valid=None 5. If I add iteration over the different batches! then I am getting a memory issue. In other words, if a run this code i get an error: minibatch_size=32 seed=2 for i in range(4): seed=seed+1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) #added code: iteration over mini_batches for minibatch in minibatches: print('batch training number ') #end of added code minibatches=None minibatches_valid=create_mini_batches(X_valid, Y_valid, minibatch_size) print(i) minibatches_valid=None MemoryError Traceback (most recent call last) <ipython-input-13-9c1942cdf0bc> in <module>() 3 for i in range(4): 4 seed=seed+1 ----> 5 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) 6 7 for minibatch in minibatches: <ipython-input-3-2056fee14def> in random_mini_batches(X, Y, mini_batch_size, seed) 23 ---> 24 shuffled_X = X[permutation,:] 25 shuffled_Y = Y[permutation,:] 26 MemoryError: Does any one knows what's the issue with np.arrays ? And why does the simple fact of adding an loop (iterating over the list of batches) result in a memory error. Questions 1.Is it a good idea to load the whole dataset and then proceed to training? ( I need to create random batches in each epoch, so I don't see how to do so if the data is not preloaded ? You take random mini-batches from preloaded data, right?) 2. Are there any possible solutions guys? AI: Does any one knows what's the issue with np.arrays ? shuffled_X = X[permutation,:] makes copies, so it will allocate new array each time you do a permutation and blow up your memory. If you don't have problems with storing whole dataset in memory you should be fine if you create batches just by using random indices, not shuffling the entire data matrix (np.random.choice is your friend). Is it a good idea to load the whole dataset and then proceed to training? If your data fits into memory, then yes. You might want to try to learn what to do when that's not the case though - I personally find Keras - stuff from keras.preprocessing.image useful for that (at least for loading images).
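A minimal sketch of the index-based batching suggested above (array shapes reduced for illustration); only the selected batch is copied, never the whole array:

import numpy as np

X_train = np.random.random((1000, 32, 32, 3)).astype(np.float32)   # stand-in for the real data
Y_train = np.random.random((1000, 39)).astype(np.float32)

batch_size = 64
n_batches = X_train.shape[0] // batch_size

for _ in range(n_batches):
    idx = np.random.choice(X_train.shape[0], size=batch_size, replace=False)
    x_batch, y_batch = X_train[idx], Y_train[idx]
    # feed x_batch, y_batch into the training step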
H: How can I plot line plots based on an input python dataframe? I need help to create a plot using 3 different columns from a dataframe. my dataframe looks like this: index CMPGN_NM COST_SUM SUMRY_DT 2 GSA_SMB_SMB_Generic_BMM 8985 2018-05-17 3 GSA_SMB_SMB_Generic_BMM 7456 2018-05-18 4 GSA_SMB_SMB_Generic_BMM 5761 2018-05-19 10 GSA_SMB_SMB_Generic_BMM 4251 2018-05-20 5 GSA_SMB_SMB_Generic_BMM 10521 2018-05-21 6 GSA_SMB_SMB_Generic_BMM 10216 2018-05-22 7 GSA_SMB_SMB_Generic_BMM 11023 2018-05-23 9 GSA_SMB_SMB_Generic_BMM 11242 2018-05-24 8 GSA_SMB_SMB_Generic_BMM 8817 2018-05-25 1 GSA_SMB_SMB_Generic_BMM 6937 2018-05-26 0 GSA_SMB_SMB_Generic_BMM 4581 2018-05-27 I would like the output to look like the graph as below AI: Here's a solution: I've created a sample dataframe with some arbitrary values. Here it is: import pandas as pd import numpy as np from datetime import datetime test = pd.read_csv('/home/sagar/Desktop/test.csv') # Convert your date from 'str' to 'datetime' format test['SUMRY_DT'] = test['SUMRY_DT'].map(lambda x: datetime.strptime(x, '%Y-%m-%d')) # Set it as your dataframe index test.set_index('SUMRY_DT', inplace=True) test CMPGN_NM COST_SUM SUMRY_DT 2018-05-17 GSA_SMB_SMB_Generic_BMM 8985 2018-05-18 GSA_SMB_SMB_Generic_BMM 7456 2018-05-19 GSA_SMB_SMB_Generic_BMM 5761 2018-05-20 GSA_SMB_SMB_Generic_BMM 4251 2018-05-21 GSA_SMB_SMB_Generic_BMM 10521 2018-05-22 GSA_SMB_SMB_Generic_BMM 10216 2018-05-23 GSA_SMB_SMB_Generic_BMM 11023 2018-05-24 GSA_SMB_SMB_Spark 11242 2018-05-25 GSA_SMB_SMB_Generic_BMM 8817 2018-05-26 GSA_SMB_SMB_Generic_BMM 6937 2018-05-27 GSA_SMB_SMB_Generic_BMM 4581 2018-05-10 GSA_SMB_SMB_Spark 7089 2018-05-13 GSA_SMB_SMB_Spark 2121 2018-05-11 GSA_SMB_SMB_Spark 234 2018-05-12 GSA_SMB_SMB_Spark 11077 # Plot your data test.groupby('CMPGN_NM')['COST_SUM'].plot(legend=True) With the actual data, your chart would resemble the picture you have provided. Hope this helps.
H: In which epoch should I stop the training to avoid overfitting
I'm working on an age estimation project, trying to classify a given face into a predefined age range. For that purpose I'm training a deep NN using the Keras library. The accuracy for the training and the validation sets is shown in the graph below: As you can see, the validation accuracy keeps rising, with smaller steps than the training accuracy. Should I stop training at epoch 280, in which the training and the validation accuracy have the same value, or should I proceed with the training as long as the validation accuracy is rising, even though the training accuracy is also reaching overfitted values (e.g. 93%)?
AI: As long as your validation accuracy increases, you should keep training. I would stop when the validation accuracy starts decreasing (this is known as early stopping). The general advice is to always keep the model that performs best on your validation set. Although it is true that your model overfits a little after epoch 280, it is not necessarily a bad thing provided that your validation accuracy is high. In general, most machine learning models will have higher training accuracy than validation accuracy, but this doesn't have to be bad. In the general case, you expect your accuracy to behave in the following way. In your case, you're before the early-stopping epoch, so even if your training set accuracy is higher than your validation set accuracy, it is not necessarily an issue.
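Since you are using Keras, this can be automated with the EarlyStopping and ModelCheckpoint callbacks (a small sketch; the patience value and file name are just examples):

from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # stop when validation accuracy has not improved for 20 epochs
    EarlyStopping(monitor='val_acc', patience=20),
    # always keep the weights of the best model seen so far
    ModelCheckpoint('best_model.h5', monitor='val_acc', save_best_only=True),
]
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=500, callbacks=callbacks)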
H: QGtkStyle could not resolve GTK I have installed Orange 3 in Ubuntu 18.04 using Anaconda. It runs just fine, but the menus appear as blank. I obtain the following error when I execute it: QGtkStyle could not resolve GTK. Make sure you have installed the proper libraries. I have been trying to sort it for days without success. Any idea of how to fix this? AI: A quick fix has been published at https://stackoverflow.com/a/50583925/3866828 It works adding export QT_STYLE_OVERRIDE=gtk2 at the end of the ~/.bashrc file. Then it's a matter of executing source ~/.bashrc to see the content of the menus the next time that Orange 3 is loaded.
H: Export pandas to dictionary by combining multiple row values I have a pandas dataframe df that looks like this name value1 value2 A 123 1 B 345 5 C 712 4 B 768 2 A 318 9 C 178 6 A 321 3 I want to convert this into a dictionary with name as a key and list of dictionaries (value1 key and value2 value) for all values that are in name So, the output would look like this { 'A': [{'123':1}, {'318':9}, {'321':3}], 'B': [{'345':5}, {'768':2}], 'C': [{'712':4}, {'178':6}] } So, far I have managed to get a dictionary with name as key and list of only one of the values as a list by doing df.set_index('name').transpose().to_dict(orient='list') How do I get my desired output? Is there a way to aggregate all the values for the same name column and get them in the form I want? AI: Does this do what you want it to? from pandas import DataFrame df = DataFrame([['A', 123, 1], ['B', 345, 5], ['C', 712, 4], ['B', 768, 2], ['A', 318, 9], ['C', 178, 6], ['A', 321, 3]], columns=['name', 'value1', 'value2']) d = {} for i in df['name'].unique(): d[i] = [{df['value1'][j]: df['value2'][j]} for j in df[df['name']==i].index] This returns Out[89]: {'A': [{123: 1}, {318: 9}, {321: 3}], 'B': [{345: 5}, {768: 2}], 'C': [{712: 4}, {178: 6}]}
H: How to check if audio samples have only noise or are silent? I have a wav file I want to split into frames in order to feed it into a machine learning model. The problem is that the audio has silence with some noise at some points. My problem is that I do not want to include frames with no sound (or only noise) in my dataset. One solution I believe is to use a model for speech recognition or something similar to do classification and see if a frame includes only silence or noise. However, I am searching for a solution that will not rely on machine learning but mostly on signal processing techniques or some other pre-processing method. So, how could I exclude these frames with only silence or noise? AI: First you should apply speech enhancement algorithms to remove noise from speech. Then, you should use VAD (Voice Activity Detector) to remove silence from speech.
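If you want a pure signal-processing baseline, a short-term energy threshold is a common choice: frames whose RMS energy is far below the loudest frames are treated as silence or background noise and dropped. A rough sketch (the file name and the -40 dB threshold are placeholders to tune):

import numpy as np
import librosa

y, sr = librosa.load('speech.wav', sr=None)   # placeholder file name
frame_len = 2048
n_frames = len(y) // frame_len

frames = y[:n_frames * frame_len].reshape(n_frames, frame_len)
rms = np.sqrt(np.mean(frames ** 2, axis=1))
rms_db = 20 * np.log10(rms / (rms.max() + 1e-10) + 1e-10)

keep = rms_db > -40    # keep frames within 40 dB of the loudest frame (tune this)
voiced_frames = frames[keep]

For something more robust than a fixed threshold, dedicated VAD implementations such as the py-webrtcvad package are available.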
H: Isolation forest: how to deal with identical values? I am trying to develop my own implementation of isolation forest algorithm. However I don't know how to deal with points that have the same value for a given feature. To better understand the problem, consider this example: in my dataset I have the following data: (1, 2), (3,5), (3,4) We may suppose that after one iteration, we get the following split: root / \ / \ / \ (1,2) (3,4), (3,5) Now, how am I supposed to deal with the right branch? If (due to randomness) we decide to split according to the second feature, we should not have any problem because they are different (4 and 5). However what about if we decide to split according to the first feature (which is 3 in both cases)? Should I repeat the random feature selection until I can split the remaining data? AI: No need to use all the features. That is the cool part of having an ensemble. If one tree who is using a small set of data with some set of features is unable to use the feature, some other tree will use it. If the point is a true outlier, the score that is calculated later will offer the due reflection.
H: How to use Scikit-learn's affinity propagation clustering with my own datasets? I am trying to cluster my datasets using affinity propagation. I followed this and this links to grasp the basics of affinity propagation clustering. The sample code available at sklearn is as follows: from sklearn.cluster import AffinityPropagation from sklearn import metrics from sklearn.datasets.samples_generator import make_blobs # Generate sample data centers = [[1, 1], [-1, -1], [1, -1]] X, labels_true = make_blobs(n_samples=300, centers=centers, cluster_std=0.5, random_state=0) # Compute Affinity Propagation af = AffinityPropagation(preference=-50).fit(X) cluster_centers_indices = af.cluster_centers_indices_ labels = af.labels_ While running it, it works as stated on the website. However, I could not understand it completely. I want to modify this code to use with my datasets. My datasets consist of values from different sensors on a 2-D surface. I want to cluster the values with similar sensors readings at various points on the 2-D surface. How can I do it? Thank you. AI: Clustering algorithms usually assume that you have positions of objects, and want to find dense groups of observations. If I understand you correctly, you have a 2d grid of sensor readings, and you want to segment them into regions. That is a slightly different problem. If you'd just put your sensor readings into a clustering, then the clusters will not be spatially coherent: clustering assumes there is no particular order to the points. So you'll need to look into segmentation. A naive way would be to use (sensor.x, sensor.y, sensor.value) tuples. Including the sensor positions will cause the results to be somewhat spatially coherent. But that makes it very sensitive to scaling, and there is no "correct" way of scaling this. There is a trade-off between spatial coherence and measurement coherence.
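A sketch of the naive (sensor.x, sensor.y, sensor.value) approach mentioned above, including the scaling step that controls the spatial-vs-measurement trade-off (the grid and the re-weighting factor are illustrative):

import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.preprocessing import StandardScaler

# Hypothetical 2-D grid of sensor readings
xx, yy = np.meshgrid(np.arange(10), np.arange(10))
values = np.random.rand(10, 10)

features = np.column_stack([xx.ravel(), yy.ravel(), values.ravel()])
features = StandardScaler().fit_transform(features)
features[:, 2] *= 1.0   # re-weight the measurement column to tune spatial coherence

labels = AffinityPropagation().fit_predict(features)
print(labels.reshape(10, 10))

Note that affinity propagation builds a full similarity matrix, so it becomes expensive for large grids; the damping and preference parameters also strongly affect how many clusters you get.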
H: Getting same result for all predictions in cnn This is my first time training a model in cnn and predicting results but I am getting same value for images I have input. Here is my code import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.optimizers import Adam from keras.layers.normalization import BatchNormalization from keras.utils import np_utils from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D, GlobalAveragePooling2D from keras.layers.advanced_activations import LeakyReLU from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Convolution2D from keras.layers import MaxPooling2D from keras.layers import Flatten from keras.layers import Dense classifier=Sequential() (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape(X_train.shape[0], 28, 28, 1) X_test = X_test.reshape(X_test.shape[0], 28, 28, 1) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train/=255 X_test/=255 number_of_classes = 10 Y_train = np_utils.to_categorical(y_train, number_of_classes) Y_test = np_utils.to_categorical(y_test, number_of_classes) classifier.add(Convolution2D(32,3,3,input_shape= (28,28,1),activation='relu')) classifier.add(BatchNormalization(axis=-1)) classifier.add(MaxPooling2D(pool_size=(2,2))) classifier.add(Convolution2D(32,3,3,activation='relu')) classifier.add(BatchNormalization(axis=-1)) classifier.add(MaxPooling2D(pool_size=(2,2))) classifier.add(Flatten()) classifier.add(Dense(output_dim=256,activation='relu')) classifier.add(BatchNormalization()) classifier.add(Dense(output_dim=10,activation='softmax')) classifier.compile(optimizer='adam',loss='categorical_crossentropy',metrics= ['accuracy']) gen = ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3, height_shift_range=0.08, zoom_range=0.08) test_gen = ImageDataGenerator() train_generator = gen.flow(X_train, Y_train, batch_size=64) test_generator = test_gen.flow(X_test, Y_test, batch_size=64) classifier.fit_generator(train_generator, steps_per_epoch=60000, epochs=1, validation_data=test_generator, validation_steps=10000) import cv2 image = cv2.imread("pitrain.png") gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) # grayscale ret,thresh = cv2.threshold(gray,150,255,cv2.THRESH_BINARY_INV) #threshold kernel = cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3)) dilated = cv2.dilate(thresh,kernel,iterations = 13) # dilate im2,contours, hierarchy = cv2.findContours(dilated,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE) # get contours # for each contour found, draw a rectangle around it on original image for contour in contours: # get rectangle bounding contour [x,y,w,h] = cv2.boundingRect(contour) # discard areas that are too large if h>300 and w>300: continue # discard areas that are too small if h<40 or w<40: continue # draw rectangle around contour on original image cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,255),2) image.shape image=image[:,:,:1] newimg = cv2.resize(image,(28,28)) img.shape img = np.reshape(newimg,[1,28,28,1]) cv2.imshow("screen",img) classifier.predict(img) The output I am getting is array of zeros with 1 at third position. 
This is where I copied the contour part from https://www.quora.com/How-do-I-extract-a-particular-object-from-images-using-OpenCV Epoch is equal to 1 because I only wanted to test my model and still I got accuracy of above 99% AI: Ok so I changed your model to simplify the training for example's sake. I will go through the example in detail below. When you feed a value from a different distribution to your model you are always at risk of misclassification. For example, your model was trained using handwritten numbers, thus it is not surprising for the model to missclassify numbers that are typewritten. Of course a deeper more complex model could do this. Getting the data The MNIST data is of size 28 by 28. We will convert these values to floating point values and then normalize them to the range 0 to 1. We will also determine that we have 10 output classes. from __future__ import print_function import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.callbacks import ModelCheckpoint from keras.models import model_from_json from keras import backend as K from keras.datasets import mnist import numpy as np (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. # The known number of output classes. num_classes = 10 # Input image dimensions img_rows, img_cols = 28, 28 Next we need to reshape the data such that it matches with the Tensorflow framework which we will be using under the hood of Keras. This requires that the instances be the first dimension and it also requires a channels dimension as the last one. thus for the MNIST data we need to have $(6000, 28, 28, 1)$. # Channels go last for TensorFlow backend x_train_reshaped = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1) x_test_reshaped = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) Then we will bin the outputs into one-hot-encoded vectors # Convert class vectors to binary class matrices. This uses 1 hot encoding. y_train_binary = keras.utils.to_categorical(y_train, num_classes) y_test_binary = keras.utils.to_categorical(y_test, num_classes) Let's build our simple model, you can add more layers to this to make it more robust model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) Let's train our model epochs = 4 batch_size = 128 # Fit the model weights. model.fit(x_train_reshaped, y_train_binary, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test_reshaped, y_test_binary)) This should yield about 98.75% on the validation set. This is pretty good and should be enough to test some new data using this set. Validating results using MNIST Let's pass through a few random values from the MNIST dataset to see if they get rightfully classified by our model. Note how I set the dimensionality of the data. 
The instance dimension must exist even if we only have a single instance to predict, as well as the channel dimension.

ix = 0
plt.imshow(x_test_reshaped[ix,:,:,0], 'gray')
plt.show()

temp = np.zeros((1,28,28,1))
temp[0,:,:,0] = x_test_reshaped[ix,:,:,0]
model.predict_classes(temp)

array([7], dtype=int64)

Ok so that worked! Let's try another one, ix = 100:

array([6], dtype=int64)

That worked as well!
Validating the model on novel data
The image you use has to match stylistically with the ones in the MNIST data set. Ideally they should be of the same distribution. The image you provided luckily gets correctly classified, but it should be noted that this may not be the case for other numbers that are typewritten. If you want your algorithm to detect these, then you should really train with typewritten numbers as well as handwritten numbers.

from skimage.transform import resize

def rgb2gray(rgb):
    return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])

im = plt.imread("7 type.jpg")
im = rgb2gray(im)
im = resize(im, (28, 28))
plt.imshow(im[:,:], 'gray')
plt.show()

temp = np.zeros((1,28,28,1))
temp[0,:,:,0] = im
model.predict_classes(temp)

array([7], dtype=int64)

Validating on hand drawn numbers
More realistic would be to use hand-drawn numbers, since that is what we used to train. I just drew this one using Paint.

array([8], dtype=int64)
array(2, dtype=int64)
H: Model's loss weights
I have a model with two output layers, an age and a gender prediction layer. I want to assign different weight values to each output layer's loss. I have the following line of code to do so:
model.compile(loss=[losses.mean_squared_error, losses.categorical_crossentropy], optimizer='sgd', loss_weights=[1, 10])
My question is: what is the effect of loss weights on the performance of a model? How can I configure the loss weights so that the model performs better on age prediction?
AI: The loss weights affect how backpropagation from each of these outputs updates the shared intermediate nodes of the network. If the output losses were weighted equally, gradient descent on the intermediate nodes would trade off the error of each output equally. By weighting one output's loss more heavily, gradient descent will favour a set of parameters that performs better on that output.
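Concretely, Keras minimizes the weighted sum of the per-output losses, total_loss = w1 * age_loss + w2 * gender_loss, so giving the age output the larger weight pushes the optimizer toward it. Reusing the compile call from the question (the exact values are just an example to tune):

model.compile(loss=[losses.mean_squared_error, losses.categorical_crossentropy],
              optimizer='sgd',
              loss_weights=[10, 1])   # weight the age (MSE) loss more heavily than gender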
H: Naive Bayes for SA in Scikit Learn - how does it work Okay so i scrape data from the web on movie reviews. I also have already got my own 'dictionary' or 'lexicon' with words and their labels (1-poor, 2-ok, 3-good, 4-very good, 5-excellent). SO the input are paragraphs of movie reviews and i use Scikit Learn Naive Bayes to evaluate the sentiment of each comment , which would be a paragraph. I would like to know how it works under the hood. I ASSUME it uses a bag of words concept. So , I am describing my assumption about how Scikit Learn Naive Bayes works for SA. How close am i to the reality ? I am neither a data scientist nor a statistician, but this is a summary of what i THINK happens in Naive Bayes algorithms for Sentiment Analysis, in Scikit Learn. Training Phase Lets assume i am using labels like 1,2,3,4,5 for each paragraph in the training set. Each paragraph is one unit for training as well as evaluation. STEPS :- 1) Drop unwanted words like THE, BUT, AND and so on 2) Read the first word say 'BEACH', pick it's label from it's parent paragraph, say '5'. So attach 5 to BEACH and put it back in the bag. 3) So add up the number of times each word matched a given label. Same word 'BEACH' could occur with multiple labels across the input paragraphs. So keep count of each label for the given word. So maybe ['BEACH' - label 1 - 10], ['BEACH' - label 2 - 8] and so on. 4) After above step for all words, sum up the probability of getting a certain label for a given word. So "BEACH" may have 1/6 probability of label 1, 2/6 probability of 2, 1/6 probability of 3, 1/6 probability of 4 and 1/6 probability of 5. So these 5 probabilities for 'BEACH" are put back in the bag of words. Remove the Word-label-count entities from Step 3 , which were used to obtain these probabilities. So now each word can have a maximum of 5 probabilities in the bag. Evaluating Sentiment Analysis - New Data 1) Now when we feed the real data, when it hits a word, it checks if it is found in its bag. If not it omits it 2) Say it finds BEACH in one comment. It checks highest probability for 'BEACH' is 2/6 for label 2. So it picks label 2 for the word BEACH. Similarly it picks corresponding labels with highest probability for all the words in bag matching the input. 3) Now it sums up all the labels for all words for the paragraph whose SA needs to be computed. Let us say they add up to 20. It has to convert this 20 to somewhere between 0 and 1. So it uses a logarithmic conversion which converts 20 to somewhere between 0 and 1. Let us say 0.75. 4) So 0.75 is the weight for this comment. Is this how it works ? Thanks for any inputs. AI: Collecting your data From the comments you state that you wish to classify comments into a label (1-poor, 2-fair, 3-ok, 4-good, 5-very good). Thus you will be training a model that maps a set of words (the paragraph) to a number which represents the rating. I will assume that you also have labels for some of the comments which we will call your training set. I do not have access to your data but I have a similar dummy example which uses comments and tries to classify them as being a review of a hotel or a car rental. I think this example would suit this purpose nicely. Pre-process the data There's many techniques we can use for this purpose. For example some popula rones are bag-of-words, tf-idf, and n-grams. These techniques will take a comment and vectorize it into a set of features. 
The features which we will use are learned from the training set and will then be the same for new comments coming in. We will use bag-of-words for this example.
Bag-of-words
This method has a dictionary of words which we identify from our training set. We take all the unique words after stemming. We then count the number of times that each word shows up in the comment; that is the vector that represents a given instance. For example, if our dictionary looks like
['car', 'book', 'test', 'wow']
and the comment was
Wow, this car is AMAZING!!!!!
the resulting vector would be
[1, 0, 0, 1]
We will repeat this vectorization process for each comment. We will then be left with a matrix where the rows represent each comment and the columns represent the frequency at which these words appear in the comment. We will call this matrix $X$; it is our dataset. Each of these instances will also have a label; we will call that vector $Y$. We want to map each row in $X$ to its label $Y$.
Assume this is the data you have for 40 instances in a training dataset for the cars/hotel comments, with its corresponding label on the right. Label 0 is for cars and label 1 is for hotels. The columns of the matrix $X$ are
['car', 'passenger', 'seat', 'drive', 'power', 'highway', 'purchase', 'hotel', 'room', 'night', 'staff', 'water', 'location']
Split the data
We will split the data in order to test the accuracy of our model

from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3)

Apply Naive Bayes

from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf = clf.fit(X_train, y_train)
clf.score(X_test, y_test)

This gives a result of 1.0
Perfect classification!
How does it work?
Gaussian Naive Bayes assumes that each feature is described by a Gaussian distribution (you can pick other distributions which may be better suited for your data, such as the multinomial). It then assumes that the features are entirely independent of one another given the class. Thus we can say that
$P(Y=0|X) \propto P(Y=0)\,P(X_0|Y=0) \cdots P(X_n|Y=0)$
$P(Y=1|X) \propto P(Y=1)\,P(X_0|Y=1) \cdots P(X_n|Y=1)$
for your $n$ features. The one that is the largest is going to be our label. If $P(Y=1|X)$ is larger, then we will say that the comment is a hotel review. So to do this we need to train the parameters of our probability distributions $P(X_i|Y)$. To calculate this term we need to know the parameters of our distribution; in our case we are using a Gaussian distribution, thus we need the mean and the variance, $\mathcal{N}(\mu, \sigma^2)$. We will build a dictionary of values for each label that contains the mean and variance for each feature. This is the training stage!
Homemade Naive Bayes

import csv
import random
import math
import pandas as pd
import numpy as np

class NaiveBayes(object):
    def __init__(self):
        self.groupClass = None
        self.stats = None

    def calculateGaussian(self, x, mean, std):
        # Avoid division by zero for features with zero variance within a class
        std[std == 0] = 0.00001
        exponent = np.exp(-1 * (np.power(x - mean, 2) / (2 * np.power(std, 2))))
        return (1 / (np.sqrt(2 * math.pi) * std)) * exponent

    def predict(self, x):
        probs = np.ones((len(x), len(self.stats)))
        for ix, instance in enumerate(x):
            for label_ix, label in enumerate(self.stats):
                probs[ix, int(label)] = probs[ix, int(label)] * \
                    np.prod(self.calculateGaussian(instance, self.stats[label][0], self.stats[label][1]))
        return np.argmax(probs, 1)

    def score(self, x, y):
        pred = self.predict(x)
        return np.sum(1 - np.abs(y - pred)) / len(x)

    def train(self, x, y):
        self.splitClasses(x, y)
        self.getStats()

    def splitClasses(self, x, y):
        # Group the training instances by their class label
        groupClass = {}
        for instance, label in zip(x, y):
            if label not in groupClass:
                groupClass.update({label: [instance]})
            else:
                groupClass[label].append(instance)
        self.groupClass = groupClass

    def getStats(self):
        # Per-class mean and standard deviation of every feature
        stats = {}
        for label in self.groupClass:
            mean = np.mean(np.asarray(self.groupClass[label]), 0)
            std = np.std(np.asarray(self.groupClass[label]), 0)
            stats.update({label: [mean, std]})
        self.stats = stats

clf = NaiveBayes()
clf.train(X_train, y_train)
clf.score(X_test, y_test)

Once again we get an accuracy of 1.0 on the testing set!
H: Accuracy reduces drastically when using TruncatedSVD with hashingvector I have around 0.8 million product description with categories. There are around 280 categories. I want to train a model with given dataset so that in future I can predict Category for the given product description. Since the dataset was large I was not able to able TF-IDF on that data it throws MemoryError. I found that Hashingvector was desirable when dealing with large data. But when Hashingvector was applied I found it produced data with 1048576 features. It took around 1 hour to train and SGD model and produced 78% accuracy. Code: import pandas as pd from sklearn.feature_extraction.text import HashingVectorizer from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.decomposition import TruncatedSVD from sklearn.linear_model import SGDClassifier from sklearn.calibration import CalibratedClassifierCV data_file = "Product_description.txt" #Reading the input/ dataset data = pd.read_csv(data_file, header = 0, delimiter= "\t", quoting = 3, encoding = "utf8") data = data.dropna() train_data, test_data, train_label, test_label = train_test_split(data.Product_description, data.Category, test_size=0.3, random_state=100, stratify=data.Category) sgd_model = SGDClassifier(loss='hinge', n_iter=20, class_weight="balanced", n_jobs=-1, random_state=42, alpha=1e-06, verbose=1) vectorizer = HashingVectorizer(ngram_range=(1,3)) data_features = vectorizer.fit_transform(train_data.Product_description) sgd_model.fit(data_features, train_label) test_data_feature = vectorizer.transform(test_data.Product_Combined_Cleansed) Output_predict = sgd_model.predict(test_data_feature) print(accuracy_score(test_label, Output_predict)) Output: Accuracy 77.01% Since the dimension was high I thought reducing the dimension would increase the accuracy and reduce the training time. I used TrancatedSVD to reduce the dimension but this drastically reduced the prediction accuracy but reduced the training time to 10 minutes. Code2: from sklearn.decomposition import TruncatedSVD clf = TruncatedSVD(100) clf.fit(data_features) Output: Accuracy 14% Edit: When I tried TruncatedSVD with 1000 as limit it throws memory error so only I opted to use 100 as the limit. It is stated that reducing n_features on HashingVector will cause collisions there can be collisions: distinct tokens can be mapped to the same feature index. However, in practice, this is rarely an issue if n_features is large enough (e.g. 2 ** 18 for text classification problems) in scikit site. I got optimum accuracy when I used ngram between 1 to 3 so only used that. AI: Some possible directions: when you're unsure how your BoW model works check different n-gram ranges (did you even check how it works with only unigrams?) TruncatedSVD by default only iterates 5 times (n_iter), so you could increase that It's not at all surprising to get such drastic change of classification quality for such low dimensionality. Some questions you could ask are why 100 dimensions? Does TSVD work 1000 dimensions or something like that? What is the reconstruction error (how does TSVD's explained_variance_ratio_ look like)? HashingVectorizer also has parameter that can control number of features, namely n_features.
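To diagnose how much information the 100 components actually retain (as suggested above), you can inspect the explained variance; a rough sketch, assuming data_features from the question:

from sklearn.decomposition import TruncatedSVD

svd = TruncatedSVD(n_components=100, n_iter=10, random_state=42)
reduced = svd.fit_transform(data_features)
print(svd.explained_variance_ratio_.sum())   # fraction of variance kept by 100 components

If this number is very low, 100 dimensions are throwing away most of the signal, which would explain the drop in accuracy.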
H: Predictive modeling on time series: How far should I look back? I am building a model for classification on a dataset which is collected by recording a system's behaviour through 2 years period of time. The model is going to be used in the same system in real time. Right now I am using the whole dataset (2 years) to build my classifier but I doubt it might not be the right approach. Since I am trying to model the behaviour of a system in real time, it is possible that older data points in the dataset have become irrelevant or uninformative compared to system's current environment (distribution of the system's input changing drastically through time for example). My question is how can I determine which parts of the dataset to use for training, taking the last 1.5 years instead of all 2 years for example. Is there a statistical way to help me decide that a specific time period is not helping or possibly hurting the model to correctly classify more recent data points? AI: Hold out the most recent block of data (maybe something like a month) to use as your validation data. Try multiple models and different ways of training those models. For example, training only on the most recent couple of months vs the whole time period. Take a small number of your top performers and test their performance on the validation set and see what does better. There’s nothing special about this answer. Anytime you want to know is X better than Y, testing performance on a held out validation set is almost always the way to go.
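A rough sketch of that validation setup with pandas, assuming df is your data with a 'timestamp' column (the names and window lengths are illustrative): hold out the most recent month, then compare models trained on different look-back windows.

import pandas as pd

df = df.sort_values('timestamp')
cutoff = df['timestamp'].max() - pd.Timedelta(days=30)

valid = df[df['timestamp'] > cutoff]          # most recent month, held out
train_all = df[df['timestamp'] <= cutoff]     # full 2-year history
train_recent = train_all[train_all['timestamp'] > cutoff - pd.Timedelta(days=365)]  # last year only

# train candidate models on train_all vs train_recent and compare their scores on valid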
H: GAN to generate a custom image does not work I have been training a GAN in the cloud for some time now. I use Google's free credit. My laptop with a CPU doesn't seem to be up to the task. The image I want to generate is this. Even though the number of epochs is about 15000 I don't get anything close to the original. This is the main code. I don't claim to fully understand the deep layers. It took a few days to even write this code. The rest of the code is boilerplate to train. There is no compilation error and I look at the images using TensorBoard. The output from the generator is (1024,1024) images. Should this be the same as my original which is a (299,299) images. Should I calculate using formulas how each layer transforms the image to understand it better ? How do I fix this ? I have mixed and matched API's just to create a working example assuming that doesn't create any problem. X = tf.placeholder(tf.float32, shape=[None, 299, 299, 1], name='X') Z = tf.placeholder(dtype=tf.float32, shape=(None, 100), name='Z') is_training = tf.placeholder(dtype=tf.bool,name='is_training') keep_prob = tf.placeholder(dtype=tf.float32, name='keep_prob') keep_prob_value = 0.6 def generator(z,reuse=False, keep_prob=keep_prob_value,is_training=is_training): with tf.variable_scope('generator',reuse=reuse): linear = tf.layers.dense(z, 1024 * 8 * 8) linear = tf.contrib.layers.batch_norm(linear, is_training=is_training,decay=0.88) conv = tf.reshape(linear, (-1, 128, 128, 8)) out = tf.layers.conv2d_transpose(conv, 64,kernel_size=4,strides=2, padding='SAME') out = tf.layers.dropout(out, keep_prob) out = tf.contrib.layers.batch_norm(out, is_training=is_training,decay=0.88) out = tf.nn.leaky_relu(out) out = tf.layers.conv2d_transpose(out, 32,kernel_size=4,strides=2, padding='SAME') out = tf.layers.dropout(out, keep_prob) out = tf.contrib.layers.batch_norm(out, is_training=is_training,decay=0.88) out = tf.layers.conv2d_transpose(out, 1,kernel_size=4,strides=2, padding='SAME') out = tf.layers.dropout(out, keep_prob) out = tf.contrib.layers.batch_norm(out, is_training=is_training,decay=0.88) print( out.get_shape()) out = tf.nn.leaky_relu(out) tf.nn.tanh(out) return out def discriminator(x,reuse=False, keep_prob=keep_prob_value): with tf.variable_scope('discriminator',reuse=reuse): out = tf.layers.conv2d(x, filters=32, kernel_size=[3, 3], padding='SAME') out = tf.layers.dropout(out, keep_prob) out = tf.nn.leaky_relu(out) out = tf.layers.max_pooling2d(out, pool_size=[2, 2],padding='SAME', strides=2) out = tf.layers.conv2d(out, filters=64, kernel_size=[3, 3], padding='SAME') out = tf.layers.dropout(out, keep_prob) out = tf.nn.leaky_relu(out) out = tf.layers.max_pooling2d(out, pool_size=[2, 2],padding='SAME', strides=2) out = tf.layers.dense(out, units=256, activation=tf.nn.leaky_relu) out = tf.layers.dense(out, units=1, activation=tf.nn.sigmoid) return out GeneratedImage = generator(Z) DxL = discriminator(X) DgL = discriminator(GeneratedImage, reuse=True) AI: Generating text as an image is extremely difficult and I have never seen a GAN applied in the image space to generate pages of text. The reason this is so hard is because of the way in which text is perceived by humans and the way a GAN works. Humans read arbitrary symbols which are sequenced from left to right along the same line and combined into rows. Moreover, these symbols are combined into groups which represent words. This is extremely complex. The symbols must be intelligible, the words must be real ones as invented by humans. 
Lastly, the combination of words into sentences needs to be logical and follow the rules of human language. And even FURTHER, the sequence of sentences must be coherent to transmit a message. A GAN operating in image space will try to learn the distribution of the training set in a pixel-wise manner, since that is what your inputs are. The distribution of the pixels will not effectively be able to group characters together in a logical manner, the words will not be real, and the sentences will all be nonsense. You will most likely end up with blurry lines of random-looking symbols, kind of like a zebra print. Another problem is the amount of data you have. Even if this problem were solvable with a GAN, you would need tens of thousands of instances to train a GAN effectively. What I suggest: read the texts that you are training with and use this data to train an LSTM, which has been shown to be very effective for generating language. But be warned: even with the best LSTMs you rarely get text that can fool humans into thinking it's real. The LSTM will provide you with your text. Then you can train a GAN to generate the characters of the alphabet and you can use this generator to print out the text that the LSTM generates.
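As an illustration, a minimal character-level LSTM text generator in Keras could look like the sketch below (the corpus file, sequence length and layer sizes are arbitrary assumptions, not a tuned setup):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

text = open("training_text.txt").read()              # hypothetical corpus of real text
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 40
X, y = [], []
for i in range(len(text) - seq_len):
    X.append([char_to_idx[c] for c in text[i:i + seq_len]])
    y.append(char_to_idx[text[i + seq_len]])

X = np.eye(len(chars))[np.array(X)]                  # one-hot: (samples, seq_len, vocab)
y = np.eye(len(chars))[np.array(y)]

model = Sequential()
model.add(LSTM(128, input_shape=(seq_len, len(chars))))
model.add(Dense(len(chars), activation="softmax"))   # predict the next character
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=20)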
H: Dimensions For Matrix Multiplication Can anyone explain why the following code produces input_t with a shape of (32,) instead of (,32), given the fact that inputs has a shape (100, 32)? Shouldn't input_t produce a vector with 32 attributes/columns? import numpy as np timesteps = 100 input_features = 32 output_features = 64 inputs = np.random.random((timesteps, input_features)) state_t = np.zeros((output_features,)) W = np.random.random((output_features, input_features)) U = np.random.random((output_features, output_features)) b = np.random.random((output_features,)) successive_outputs = [ ] for input_t in inputs: output_t = np.tanh(np.dot(W, input_t) + np.dot(U, state_t) + b) successive_outputs.append(output_t) state_t = output_t AI: Imagine the matrix inputs as a 2D table. You have 100 rows and 32 columns. Then the for loop acts as an iterator which will return values along the first dimensional axis. This dimension thus disappears and returns the remaining dimensions. When there is a single dimension the default in Python is $(n,)$. A matrix $(100, 32)$ iterates through $(32,)$ A matrix $(100,28,28)$ iterated through $(28,28)$ A matrix $(100,2,2,2)$ iterated through $(2,2,2)$
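To see this concretely, a quick shape check (just a sketch that prints only the first iteration):

import numpy as np

inputs = np.random.random((100, 32))
print(inputs.shape)                  # (100, 32)

for input_t in inputs:
    print(input_t.shape)             # (32,) -- one row of the matrix
    break

W = np.random.random((64, 32))
print(np.dot(W, input_t).shape)      # (64,), so the recurrence above works out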
H: Determine useful features for machine learning model I am working with a dataset with hundreds of features. I wish to create a simple machine learning model using 7-10 features from the original dataset. My question is this: What quantitative metrics can I use to determine that a feature will be useful to the learning model? I have been comparing the distribution of the target mean over the feature groups, to the target mean of the dataset as a whole. For example, take a binary feature X and a binary target. Let's say the target has a mean of 0.10 when taken over the entire dataset. To analyze the feature X, I take the target mean for each group within feature X. mean (X=0) = 0.07 mean (X=1) = 1.15 In this way, I can observe the effect of a feature on the target. I know that there must be some stronger metrics which people use to determine the strength of a feature. In school I used p tests to determine the statistical significance of a variable. Is there an analog in DS/ML? AI: I suggest taking a look at this page for some more ideas: Feature Selection That being said a couple of ideas that come to mind quickly, is to: use a tree based method (like Random Forest) and look at your feature importances. Scikit Learn has a handy class for doing just that see the link above. Use some sort of regularization/penalty like L1 or L2 regularization. That will force non-useful features to have parameters close to zero. Recursively remove variables and see what the resulting output is and cross-validate. Again sklearn has a method for this. Generally, these methods will be "expensive" as you are fitting multiple models to get you where you need to go.
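As a rough sketch of the first and third ideas with scikit-learn (the file name, the target column and the choice of 7 features are placeholders):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

df = pd.read_csv("data.csv")                      # hypothetical dataset
X, y = df.drop(columns=["target"]), df["target"]

# 1) Tree-based feature importances
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)
importances = pd.Series(rf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))

# 2) Recursive feature elimination down to 7 features
rfe = RFE(estimator=rf, n_features_to_select=7)
rfe.fit(X, y)
print(X.columns[rfe.support_])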
H: which deep learning text classifier is good for health data I have a data set like this:
postID   Sentence                                       drugYesOrNo
1        He went out with his friends
2        He behaved nicely while talking with me
3        He stopped using drugs after a while           1
4        He did not meet any friend during last week
1        He slowly cut usage of drugs                   1
2        He smiled like he is good
3        He did not seem happy with his situation
As you see there are two features. The first feature is the sentence and the second feature shows whether this sentence is a sign that the patient has stopped his drugs or not. The first column shows which paragraph each sentence belongs to. For example, here sentences 1-4 are one paragraph which we have split to see which sentence exactly shows the stopping of drugs; sentence 3 in the first paragraph shows this. In the second case, sentences 1-3 are part of a paragraph; here sentence one shows that this person stopped using drugs (which is not good, the person should continue). My goal is to apply a deep learning text classifier to my text data and build a model, so that when I receive A NEW PARAGRAPH I will be able to predict whether the person has stopped his drugs or not. First question: for this case study, which deep learning text classifier may work best? Secondly, as you see, we have cut the paragraph into a series of sentences, but in reality we will give a paragraph to test the model. In your opinion, what would be the best approach to deal with this? The thing that came to my mind is that while testing, when receiving a paragraph, we again split the paragraph into sentences and give those sentences to the model, but I am not sure it is a good approach. We have 900 of these sentences; again, I'm not sure that with this much data it will be good to apply a deep learning classifier to it. I would appreciate your point of view :) Update after reading comments: I asked a couple of people to make such a dataset for me, i.e. looking at a paragraph, splitting it, then saying which sentence has that meaning (stopping drugs or not). What if I did not ask them to explicitly say which sentence has that meaning and just point out which paragraph has that meaning (stopping drugs or not)? Do you think labeling exactly which sentence has that meaning has been a better idea than labeling which paragraph has that meaning? I hope I am clear enough :) AI: Yes, you should split the paragraph into sentences and give those sentences to the model. Your deep structure should be like this: In the first layer, you must put a word embedding layer to represent a sentence as a sequence of vectors. In the second layer, you must put an LSTM to be able to model your sequence of vectors as a single vector. Now, you can add successive layers with linear, relu or sigmoid activation functions to make your model deeper. In the last layer, you must use a sigmoid activation function to do binary classification.
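A minimal Keras sketch of that architecture (vocabulary size, sequence length and layer sizes are assumptions; the sentences would first have to be tokenized and padded to max_len):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

vocab_size = 5000   # assumed size of the word index
max_len = 30        # assumed maximum sentence length in tokens

model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=100, input_length=max_len))
model.add(LSTM(64))
model.add(Dense(32, activation="relu"))
model.add(Dense(1, activation="sigmoid"))   # 1 = sentence indicates stopping drugs
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# model.fit(X_train, y_train, ...) where X_train has shape (n_sentences, max_len)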
H: Storing engineered features in a database I have some data in raw csv files which I would like to store in a MySQL database. The problem is that there is constant feature engineering being done on this dataset, so coming up with one schema to fit all the needs is not possible. The approach I thought of was to have one main table where the original data is held, and for each new feature that is created, a new table is created. Then, the user can join the original table with the other tables which include the features they want and use it for their own purposes. With the above approach, I'm worried about having too many joins when a user needs numerous features. Please advise on alternative approaches to this problem. Thanks in advance! AI: Depending on the technical level of your users, the frequency of updates, the complexity of the transformations, the need to share these features among the users, etc., would a custom VIEW for each user be a feasible solution? Alternatively, would you consider some ETL tools where you can create/modify calculated columns, customizing the data pipeline as needed?
H: Least Squares Regression $Ax=b$ when $A$ is fixed and $b$ is varied The typical setting for least squares regression (or an over-determined linear system) $Ax=b$ is to solve for $x$ given $A$ and $b$. In other words, $A$ and $b$ are fixed when we solve the problem. My question is: is there any application in which $b$ is changed by users (i.e., $b$ is interpreted as a query) while $A$ is fixed? AI: In least squares regression, the problem that you solve is: $X^T X \beta = X^T y$ where $\beta$ are the parameters, $X$ the regressors and $y$ the predicted variable. If you want, in your notation, $A$ to be constant and $b$ to change, you need to have constant regressors and change the predicted variable $y$. I don't think there are many applications in which the regressors stay fixed while the predicted variable changes, but if you want to find one it will probably be in online learning.
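If $A$ really is fixed while $b$ varies, one practical consequence (an illustrative numpy sketch, not something claimed above) is that the expensive part of the normal equations can be precomputed once and reused for every new $b$:

import numpy as np

np.random.seed(0)
A = np.random.randn(100, 5)            # fixed, over-determined system
pinv_A = np.linalg.pinv(A)             # precompute once: (A^T A)^{-1} A^T

for _ in range(3):                     # each new "query" b reuses the precomputed matrix
    b = np.random.randn(100)
    x = pinv_A @ b                     # least squares solution for this b
    print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))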
H: Which convolution should I use? Conv2d or Conv1d I have a dataset which contains dlib landmark points of faces. I'm using keras to train a model. The dataset shape is (length_of_dataset,68,2). I know that I have two options. The first is using conv1d with input_shape = (68,2). The second is using conv2d with input_shape = (1,68,2). My question is which one is better? And why? AI: When using Conv2D, the input_shape does not have to be (1,68,2). The number of samples does not have anything to do with the convolution; one sample is given to the layer at a time anyway. What changes is the number of spatial dimensions of your input that are convolved: With Conv1D, only one dimension is used, so the convolution operates on the first axis (size 68). With Conv2D, two dimensions are used, so the convolution operates on the two axes defining the data (size (68,2)). Therefore you have to carefully choose the filter size. For instance, if you choose a Conv2D with a filter size of (4,2), it will produce the same results as a Conv1D with size (4), as it will operate fully on the second axis of the data. Finally, there is no single answer as to which is the best method. Generally, Conv2D works well on images and Conv1D on text. Given the size of your second data dimension, Conv2D does not seem to make a lot of sense, hence Conv1D should work well.
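For reference, a minimal Keras sketch of the Conv1D option on (68, 2) landmark data (the filter counts, kernel size and output layer are arbitrary assumptions):

from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

model = Sequential()
model.add(Conv1D(filters=32, kernel_size=4, activation="relu", input_shape=(68, 2)))
model.add(Conv1D(filters=64, kernel_size=4, activation="relu"))
model.add(Flatten())
model.add(Dense(1, activation="sigmoid"))   # placeholder output for a binary task
model.compile(loss="binary_crossentropy", optimizer="adam")
model.summary()
# X_train would have shape (length_of_dataset, 68, 2)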
H: Implement Sliding Window Algorithm to Detect Spikes How do I implement sliding window algorithm with a window size of 10 and visualize the data iteratively to see spikes/possible outliers in the dataframe, using python? Please help a beginner. AI: welcome to DS-SE and to Data Science in general! :) Your problem can be solved really easily in Python. Please, take a look at Pandas DataFrame class to represent your data, it makes it really convenient because of all the pre-built methods that includes. One of its methods is pandas.DataFrame.rolling() (see here) that does exactly what you asked for: rolling (sliding) window calculations, please take a look! Hope it helps!
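For example, a small sketch with a window of 10 (the file name, column name and the 3-standard-deviation threshold are arbitrary choices):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")                      # hypothetical file with a 'value' column
roll = df["value"].rolling(window=10)

df["roll_mean"] = roll.mean()
df["roll_std"] = roll.std()

# Flag points that deviate more than 3 rolling standard deviations from the rolling mean
df["spike"] = (df["value"] - df["roll_mean"]).abs() > 3 * df["roll_std"]

ax = df["value"].plot(figsize=(12, 4))
df.loc[df["spike"], "value"].plot(ax=ax, style="ro")  # mark potential spikes in red
plt.show()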
H: How to learn Machine Learning I want to get into machine learning. I've been in information security for the last 10 years so i have an IT background. Where is the best place to start: Can anyone recommend a good book? And also a platform i can use to practice (preferably free) Also if there are any online courses someone could recommend that would be great. I looked into AWS's offering of machine learning but that is not included in the free tier. Any help/advice would be much appreciated. Thank You. AI: Online Course: Andrew Ng, Machine Learning Course from Coursera. Book: Tom Mitchell, Machine Learning, McGraw-Hill, 1997.
H: Making bigram features from a particular dataset I have a folder which has a number of files with a format like this: madvise write write write write read read madvise ioctl ioctl getuid epoll_pwait read recvfrom sendto getuid epoll_pwait. That is, each file is a set of words which repeat. This is what all the files look like. Now I have created a feature vector table using unigrams, that is, each word becomes a feature and each file becomes a row where I put the frequency of that word occurring in the respective columns. Now I want to create a similar FVT using bigrams. I was wondering how to do that in this case. AI: Bigrams are better used with sentences. In your case, as I understand it, the files contain a list of words. Therefore using bigrams in your project might not yield the expected results. However, if you are still willing to do that, this is how you calculate bigrams: Take the list of words and count the frequencies of adjacent words. Ex: (madvise, write) - 1 (write, write) - 3 (write, read) - 1 (read, read) - 1 (read, madvise) - 1 . . . . (sendto, epoll_pwait) - 1
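A small sketch of counting bigram frequencies per file in Python (the folder path is a placeholder):

import os
from collections import Counter
import pandas as pd

folder = "traces/"                          # hypothetical folder containing the files
bigram_counts = {}

for fname in os.listdir(folder):
    with open(os.path.join(folder, fname)) as f:
        words = f.read().split()
    # pair each word with the next one and count the pairs
    bigram_counts[fname] = Counter(zip(words, words[1:]))

# Feature vector table: one row per file, one column per bigram, zeros where absent
fvt = pd.DataFrame(bigram_counts).T.fillna(0)
print(fvt.head())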
H: Tools to explore various datasets TL;DR - Do you know of good automated tools to explore a dataset? Long version: I have a few different datasets to work with from various areas of business. I wonder if there are good software packages/scripts that could automatically round up the following answers for me. Things I would be looking for:
- datatype of a column? dimension or measure?
- histogram or count of values in a column
- correlation between measures
- unique key auto-find
- import into a strongly typed data structure
Obviously I'm looking for something more than basic summary(cars). Something that goes through each column and spits out a detailed analysis report. AI: There is a very cool, actively developed Python package called pandas-profiling that is exactly what you want. With a simple pandas_profiling.ProfileReport(df) it returns a lot of important statistical information about your data. The official documentation says: For each column the following statistics - if relevant for the column type - are presented in an interactive HTML report:
- Essentials: type, unique values, missing values
- Quantile statistics like minimum value, Q1, median, Q3, maximum, range, interquartile range
- Descriptive statistics like mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness
- Most frequent values
- Histogram
- Correlations highlighting of highly correlated variables, Spearman and Pearson matrices
Look at their demo here. I have personally used it a few times and it is quite nice, BUT if the dataset is large (mostly in terms of the number of variables) it takes a long time to produce the statistics. I think there should be an option to return these statistics for only a subset of features (columns), in case you do not want them for all features when the data is very high dimensional!
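A minimal usage sketch (the CSV path is a placeholder, and the exact call for saving the report may differ slightly between package versions):

import pandas as pd
import pandas_profiling

df = pd.read_csv("my_dataset.csv")             # hypothetical dataset
profile = pandas_profiling.ProfileReport(df)
profile.to_file("report.html")                 # writes the interactive HTML report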
H: How to select the learned model using $k$-fold cross validation? Let us consider a case where $1000$ data points are given, i.e., the data set $U=\{x_1, \ldots, x_{1000}\}.$ When we want to use the $k$-fold validation scheme, we first divide the data set into $k$ groups. Without loss of generality, the parameter $k$ is assumed to be $10$. Hence, we have $S_1=\{x_1, \ldots, x_{100}\}$, $S_2=\{x_{101}, \ldots, x_{200}\}$, $\ldots$, $S_{10}=\{x_{901}, \ldots, x_{1000}\}$. I can obtain models, $f_k$, by learning on the data set $U \setminus S_k$ for $k=1, 2, \ldots, 10$. I can obtain error rates, $r_k$, by testing $f_k$ on the data set $S_k$. Hence, I can obtain the overall error rate, $r$, by averaging the $r_k$'s, i.e., $\sum_{k=1}^{10}r_k/10$. I understand $k$-fold cross validation up to this point. Most materials I've seen just report the error rate averaged over the $k$ scenarios in $k$-fold cross validation. However, they do not say anything about the $f_k$'s. In this case, which model do I have to use among all the $f_k$'s? AI: $k$-fold is just performed to obtain a measure of your accuracy, as using the training accuracy is generally an overly optimistic measure of the accuracy. If you want to deploy a final model, what is recommended is to train one last model with all the data. In fact, when you compare two models, $f$ and $g$, what you do is obtain two error rates $r_f$ and $r_g$ by cross-validation, and you keep the model with the lowest error rate. After that, if the model with the lowest error rate is $f$, you retrain $f$ with all your data. To sum up, $k$-fold cross-validation is a method to give measures of the performance of the model; if you want to obtain the best model, just train it with all your data.
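A short scikit-learn sketch of this workflow (the dataset and the two candidate models are just placeholders):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# 10-fold CV only measures performance; none of the 10 fold models is kept
scores = {name: cross_val_score(m, X, y, cv=10).mean() for name, m in candidates.items()}
best_name = max(scores, key=scores.get)
print(scores)

# The model you actually deploy is retrained on ALL the data
final_model = candidates[best_name].fit(X, y)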
H: Why does my neural network not predict decimal values in range [-1,1], when it is able to predict integer values? I am trying to perform a very simple experiment: predict the input number. The concept is the same as an auto-encoder, but with just one layer, which can handle the task of encoding and decoding. Also, I wanted to observe how the network learns with different training examples. Initially, I took 10000 training samples of integers, in range [1, 10000]. So, for example, if I pass 767676 as one of the test samples it should predict the output as 767676.0. This test passed very well. The activation function used here was 'softplus', which keeps the values in range [0, inf). The network -
train_d = numpy.array([i for i in range(10000)], dtype=np.int)
model = Sequential()
model.add(Dense(1, input_shape=(1, ), activation='softplus'))
model.fit([train_d], train_d, epochs=5000, batch_size=256, shuffle=True)
Now I gave it 10000 training samples of decimal values in the range [-1, 1]. E.g. for 0.3456 the expected output would be 0.3456, but instead it's giving me 0.5721. The network -
train_d = numpy.array([round(random.uniform(-1, 1), 4) for i in range(10000)], dtype=np.float)
model = Sequential()
model.add(Dense(1, input_shape=(1,), activation='softsign'))
model.fit([train_d], train_d, epochs=5000, batch_size=256, shuffle=True)
AI: First of all, during the 10000 integer test, did you use all the integers from 0 to 9999 in training? If yes, then you have fully covered the whole input range. This means that while testing, you actually feed the network with data that are identical to the training data, therefore the accuracy is very high. What is the result of the network if you train it with 10000 randomly sampled integers in the range of 10,000 and then test it with ALL integers in that range? This test will reveal if your encoder generalizes well enough. Also keep in mind that the smaller the number, the more difficult it is for the network to train due to vanishing gradient. Therefore, decimal numbers can saturate the learning process and lead to lower accuracy. Try changing the batch size (use smaller batches) and see if you decrease your output error.
H: Can you provide examples of business application of vector autoregressive model? Vector Autoregressive models are exploited at Economics faculties all around the world. They are just another statistical model that solves problem of forecasting, although in a deeply complexity-uncovering manner. Yet to my surprise, there is no evidence it has been used outside pure economics domain, namely, to solve business problems like we all - Data Scientists - do. Can you share either your experience with application of VAR to solve business problem, a scenario in which it could hypothetically be used or state why it cannot be easily applied in business environment (if this is an objective, mathematically/empirically grounded reason, I am not looking for opinions on why this method has slow adoption)? The question can apply as well to VECM models of course. AI: Caveat: I have a doctorate in economics and that is why I knew how, and where, and when to apply this type of model. Sure, I used a vecm model last year to figure out how many credit cards get compromised per month given how much activity we see. (I work for a financial institution). We were looking for a long-run relationship and a short-run relationship. From this model, a survival analysis, and a couple of other techniques we were able to determine the optimal time to set the expiration date on a card. I applied this technique because I was afraid I was getting the wrong answer, namely because the number of transactions and number of fraud transactions were cointegrated.(All of the models I tried gave similar results which was comforting). Given that we are talking about fraud strategies I don't feel comfortable giving more details than this, hopefully, that gives you a flavor of what I was trying to accomplish. Basically, I was trying to do a robustness check. It isn't easily applied in a business context because this model isn't intuitive, it isn't easily explained, and it takes great skill to interpret it. The "mathematical" answer probably has something to go like this, ask an average data scientist what the impulse-response function for their model looks like and you will get a blank stare. Ask them what the FEVD looks like for their model, and you will get a blank stare. Data scientists tend to be trained in non-linear models (read machine learning), not cointegrated time-series models. It isn't that they aren't capable, they've just never been trained to think in those terms. I know technically not math, and technically that is an opinion but it is probably the truth. Also doing the math, I can probably get a better fit out of a non-linear model like an LSTM network(?). So if I only care about pure prediction, it wouldn't be worth my time. Where VAR and VECM shine is that although complicated, they are essentially linear, and therefore, fairly interpretable. So perhaps what you are looking for is a business setting that requires causally valid, interpretable, multivariate time-series models. If you look hard enough, I'm sure you can come up with a couple of interelated KPIs being tracked in your business where it would make sense. But in my experience, if you need a fast dirty answer, forgive me econometrics teachers everywhere, you can just plug the differenced series into a linear regression. (Warning not best practices, but it does alright).
H: How can I evaluate a data mining model? I will evaluate classification models I made, namely logistic regression and decision trees. 1. What standards should I use for comparison? 2. Suppose the model selection standard is ASE. One model has high ASE on the training data and low ASE on the test data, and the other has low ASE on the training data and high ASE on the test data. If you had to select a model, which one would you choose? AI:
- Accuracy (for classification problems)
- Precision
- Recall
- F1 Score
- AUC-ROC, particularly for imbalanced datasets
Good performance on the training set and bad performance on the test set is due to overfitting. So you should try to find ways to tackle overfitting, such as regularization of parameters, parameter tuning using cross-validation, etc.
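All of these metrics are available in scikit-learn; a self-contained sketch on synthetic data (the dataset and model are placeholders):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]     # scores needed for AUC-ROC

print("accuracy ", accuracy_score(y_test, y_pred))
print("precision", precision_score(y_test, y_pred))
print("recall   ", recall_score(y_test, y_pred))
print("F1       ", f1_score(y_test, y_pred))
print("AUC-ROC  ", roc_auc_score(y_test, y_prob))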
H: Confusion in understanding learning rate in xgboost (numerical example) I fail to understand how the learning rate is used in XGBoost. Can anyone explain using a numerical example? AI: Each iteration is supposed to provide an improvement to the training loss. Such improvement is multiplied by the learning rate in order to perform smaller updates. Smaller updates make the model overfit the data more slowly, but require more iterations for training. For instance, doing 5 iterations at a learning rate of 0.1 would require roughly 500 iterations at a learning rate of 0.001, which might be obnoxious for large datasets. Typically, we use a learning rate of 0.05 or lower for training, while a learning rate of 0.10 or larger is used for tinkering with the hyperparameters...
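To make this concrete with a small made-up example (the numbers are purely illustrative): suppose the current ensemble predicts 0.7 for a sample whose true value is 1.0, so the residual is 0.3, and the next tree outputs 0.3 for this sample. With learning rate eta = 1.0 the updated prediction would jump straight to 0.7 + 1.0 * 0.3 = 1.0; with eta = 0.1 it only moves to 0.7 + 0.1 * 0.3 = 0.73. Each boosting round therefore takes a small, shrunken step toward the target, which is why a smaller learning rate needs more rounds (more trees) to reach a comparable training loss, but tends to overfit more slowly.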
H: Regression equation for ordinal data I'm doing research where a part of the collected data is of Ordinal type. I will implement ANN with Logistic Regression function in the Activation function. What I have learnt from documents of other websites as well as an answer in https://datascience.stackexchange.com/, the target value is of ordinal type while the independent variables are of ratio or interval type. But my independent data is of Ordinal type and the target data will be label (say Like or Unlike). How should I build a function for ANN if I'm not wrong in my understanding? AI: I think that what is influenced by the type of your independent variables is not the structure of your neural network but the encoding of your data. If your independent variables are ordinal, then you can map them to integer numbers or other ordered sets (and maybe normalize them to make them between 0 and 1). If your target variable is a label, it is a classification problem. Therefore, the neurons in the last layer in your neural network should have an activation function optimized for classification problems, like sigmoid. You also need a specific loss function, like binary crossentropy. There was a case study, however, that showed that for ordinal classification (when your target labels are ordinal), mean squared error works well. You can take any activation functions for all hidden layers in your network as long as they are non-linear. The choice of a particular activation function will influence the rate of convergence of your learning algorithm but I don't see how a particular activation function in hidden layers may be preferable for a certain type of input variables, like if they are ordinal or not.
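For what it's worth, a minimal Keras sketch of such a setup (the ordinal encoding, the toy data and the layer sizes are assumptions, not prescriptions):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Suppose each independent variable is ordinal, e.g. "low" < "medium" < "high"
mapping = {"low": 0.0, "medium": 0.5, "high": 1.0}     # ordered categories scaled to [0, 1]
raw = [("low", "high"), ("medium", "medium"), ("high", "low")]
X = np.array([[mapping[a], mapping[b]] for a, b in raw])
y = np.array([1, 0, 0])                                # 1 = Like, 0 = Unlike

model = Sequential()
model.add(Dense(8, activation="relu", input_shape=(2,)))
model.add(Dense(1, activation="sigmoid"))              # binary target: Like / Unlike
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=10, verbose=0)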
H: Why is it important to have a sufficient number of instances in your dataset for each stratum? As per figure 1, most of the median-income values are clustered around \$20,000-\$50,000, but some median incomes go far beyond \$60,000. I didn't understand the explanation behind why housing['median_income'] has to be divided by 1.5:
housing['income_cat'] = np.ceil(housing['median_income'] / 1.5)
Explanation - It is important to have a sufficient number of instances in your dataset for each stratum, or else the estimate of the stratum's importance may be biased. Can someone help me understand the explanation, and also why it is 1.5 in particular? As per the explanation, why will the estimate of a stratum's importance be biased when there are not adequate instances for each category? AI: Let's take an example where you are trying to evaluate the average income of a company. Assume the company has 100 employees in total, they belong to two strata (70 management and 30 technical executives), and management executives on average are paid more than technical executives. You are the surveyor: due to resource constraints, you can only survey 20 individuals chosen at random. Case 1: you do random sampling from the 100 employees and end up with 10 management and 10 technical executives. You will get an estimate for average income that is biased towards technical executives. In other words, you are giving more importance/representation to the income of technical executives than their actual representation in the population. Similarly, in another attempt: Case 2: you do random sampling from the 100 employees and end up with 17 management and 3 technical executives. You will get an estimate for average income that is biased towards management executives. Both of the above sampled cases have bias and are not representative of the population. Therefore, when we have the strata identified before sampling, stratified sampling should be done, e.g. randomly sample 14 out of 70 management executives and 6 out of 30 technical executives. In your given example of the housing and income data, the income scale has been shrunk by a factor of 1.5 so as to create fewer income categories for the same bin width. Any other factor > 1 could also be used to ensure each stratum has a considerable number of instances.
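A small pandas sketch of how the shrinking changes the strata (the file path follows the book's example; capping the categories at 5 is one common way to merge the sparse high-income strata and is shown here only as an assumption):

import numpy as np
import pandas as pd

housing = pd.read_csv("housing.csv")                              # hypothetical path to the dataset

print(np.ceil(housing["median_income"]).value_counts())           # many strata, some very small
housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)   # fewer, larger strata
housing["income_cat"] = housing["income_cat"].where(housing["income_cat"] < 5, 5.0)  # merge the tail
print(housing["income_cat"].value_counts())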
H: What is the exact definition of VC dimension? I'm studying machine learning from Andrew Ng's Stanford lectures and just came across the theory of VC dimension. According to the lectures and what I understood, the definition of VC dimension can be given as: if you can find a set of $n$ points that can be shattered by the classifier (i.e. it classifies all possible $2^n$ labelings correctly) and you cannot find any set of $n+1$ points that can be shattered (i.e. for any set of $n+1$ points there is at least one labeling such that the classifier cannot separate all points correctly), then the VC dimension is $n$. Also, the professor took an example and explained it nicely, which is: let $H=\{\text{set of linear classifiers in 2 dimensions}\}$. Then any 3 points can be classified by $H$ correctly with a separating hyperplane, as shown in the following figure. And that's why the VC dimension of $H$ is 3, because for any 4 points in the 2D plane, a linear classifier cannot shatter all the labelings of the points. For example, for this set of points, no separating hyperplane can be drawn to classify this set. So the VC dimension is 3. I get the idea up to here. But what if we have the following type of pattern? Or the pattern where three points coincide with each other? Here also we cannot draw a separating hyperplane between the 3 points. But still this pattern is not considered in the definition of the VC dimension. Why? The same point is also discussed in the lectures I'm watching (here, at 16:24), but the professor does not mention the exact reason behind this. Any intuitive example or explanation will be appreciated. Thanks AI: The definition of VC dimension is: if there exists a set of n points that can be shattered by the classifier and there is no set of n+1 points that can be shattered by the classifier, then the VC dimension of the classifier is n. The definition does not say: if any set of n points can be shattered by the classifier... If a classifier's VC dimension is 3, it does not have to shatter all possible arrangements of 3 points. If, among all arrangements of 3 points, you can find at least one arrangement that can be shattered by the classifier, and you cannot find any 4 points that can be shattered, then the VC dimension is 3.
H: Computer Vision: Handling dataset(3D data or scan) with different timesteps I'm planning on training a CNN on CT scans for classification. The problem is CT scans are taken slice by slice, and in a typical scan, there could be more than 200 slices. The number of slices in a scan isn't uniform and depend on the scanning machine and age of the person(for whom the scan is taken). 1)How should I make the number of slices uniform for feeding to a deep learning network? This sorta problem is handled in NLP by padding a chosen vector( or something similar) to sentences which have lengths less than the predefined length and truncating sentences which have lengths greater than the preset length. 2)Can a similar approach be used to make slices(timesteps) uniform or is there a better way? AI: I would use an LSTM-RNN to encode sequences of arbitrary length (length meaning the number of slices) into fixed-size output. It should be very easy to implement it in Keras :) Then feed the sequences in your CNN. Check sequence-to-sequence encoder-decoder scheme, like this one. The same concept is applied with state-of-the-art algorithms that require fixed size inputs, like this one.
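One possible Keras sketch, under the assumption that each slice has already been reduced to a fixed-length feature vector (for example by a 2D CNN applied per slice), with zero-padding plus a Masking layer to handle the varying number of slices per scan:

import numpy as np
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense

n_features = 256      # assumed length of the per-slice feature vector
max_slices = 250      # pad/truncate every scan to this many slices

# scans: a list of arrays, each of shape (n_slices_i, n_features) -- hypothetical data
scans = [np.random.rand(np.random.randint(150, 230), n_features) for _ in range(8)]
labels = np.random.randint(0, 2, size=len(scans))

X = pad_sequences(scans, maxlen=max_slices, dtype="float32", padding="post")  # (8, 250, 256)

model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(max_slices, n_features)))  # ignore the padding
model.add(LSTM(64))                                                       # fixed-size encoding
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam")
model.fit(X, labels, epochs=2, verbose=0)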
H: ML model to transform words I am building a model that takes a correct word as input. The output is a possible word written by a human (it contains some errors). My training dataset looks like this:
input - output
hello - helo
hello - heelo
hello - hellou
between - betwen
between - beetween
between - beetwen
between - bettwen
between - bitween
etc. During preprocessing I add a measure of the distortion of a word. Then I hard-code letters as numbers. My current model uses a CNN. The number of input neurons is the same as the length of the longest word in the training dataset, and the number of output neurons is the same as the length of the longest word in the training dataset. This model doesn't work as I expected; the word on the output does not look like what I expect, e.g. input - output: house - gjrtdd. Question: How can I build/improve a model for this task? Is a CNN a good idea? What other methods can I use for this task? AI: Try a totally different approach, using Generative Adversarial Networks. For this purpose you need:
- A Generator
- A Discriminator
See the scheme (credit O'Reilly): The "Real Images" block in the scheme should be your training dataset (or ground truth). The Generator should generate the distorted words and the Discriminator should verify if the word is "adequately" distorted, based on a criterion of your choice, which can be any similarity measure between known words (database) and the generated one. Both the Generator and the Discriminator get trained on-the-go during the training phase, and in the end you will have two trained networks, of which the Generator would be very useful for your purpose. Helpful source: https://deeplearning4j.org/generative-adversarial-network
H: Chi-squared for continuous variables I am using chi-squared to determine feature importance as I select features to train a supervised ML model. I create a contingency table for the feature/target, and feed this contingency table into the scipy.stats.chi2_contingency function. This function returns the chi-squared statistic and the p-value. I have achieved reasonable results with boolean variables, but I am suspicious of the results for categorical variables with more than 2 categories. Specifically, I am fairly sure that one continuous feature, age, is correlated with the target, to some level of significance. From plotting histograms and KDEs, I know that the probability distribution of the feature for (target = 0) is quite different from the probability distribution for (target = 1). However, when I bin the age feature into 2-7 bins, the chi-squared test yields a p-value of ~1e-39. Is there anything that I am missing with regard to the chi-squared test and categorical variables? Does this test only work for monotonic relationships? AI: It sounds like the chi-squared test confirms your suspicions about age being correlated with the response. As far as I am aware, the null hypothesis for the chi-squared test is "there is no relationship" between the two variables. The test statistic is calculated by comparing the observed cell counts with the counts expected under that null hypothesis, so the test should work for most kinds of relationships, not just monotonic ones. A word of warning - the test is sensitive to data imbalances.
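As an illustration of binning a continuous feature and running the test (synthetic data; the number of bins is arbitrary):

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.RandomState(0)
n = 2000
target = rng.binomial(1, 0.1, size=n)                                      # imbalanced binary target
age = np.where(target == 1, rng.normal(55, 8, n), rng.normal(40, 12, n))   # distributions differ by class

age_binned = pd.cut(pd.Series(age), bins=5)      # discretize the continuous feature
table = pd.crosstab(age_binned, target)          # contingency table

chi2, p, dof, expected = chi2_contingency(table)
print("chi2 =", chi2, "p-value =", p)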
H: Need help in understanding a small example Pardon me, I agree the title of the question is not clear. I would like to understand the steps below, which are picked from the textbook "Hands-On Machine Learning".
>>> housing['income_cat'].value_counts()
3.0    7236
2.0    6581
4.0    3639
5.0    2362
1.0     822
If I am not wrong, the above step is to get the counts for each class. For example, for class '3' there are 7236 instances. Likewise, for class '2' there are 6581 instances.
>>> housing['income_cat'].value_counts() / len(housing)
3.0    0.350581
2.0    0.318847
4.0    0.176308
5.0    0.114438
1.0    0.039826
Next, I am not clear what the intention behind the above step is. By doing the above step, what am I supposed to learn? And:
>>> from sklearn.model_selection import StratifiedShuffleSplit
>>> split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
>>> for train_index, test_index in split.split(housing, housing["income_cat"]):
        strat_train_set = housing.loc[train_index]
        strat_test_set = housing.loc[test_index]
>>> strat_test_set['income_cat'].value_counts() / len(strat_test_set)
3.0    0.350533
2.0    0.318798
4.0    0.176357
5.0    0.114583
1.0    0.039729
Name: income_cat, dtype: float64
How come the results of strat_test_set['income_cat'].value_counts() / len(strat_test_set) are almost the same as the results of housing['income_cat'].value_counts() / len(housing)? AI: Regarding your first question, by dividing by the length, you get the percentage of each category, and you can see that one category has a very low percentage. If you use the regular train_test_split, the proportion of the categories will be different in each set, as the split is made randomly, and this can introduce a bias. Imagine categories with a very low number of observations: you could even have categories missing entirely in either of the sets, which will cause trouble for your model. The use of stratified sampling allows you to have the same proportion for the categories. That is why you see the same results: it is the goal of this sampling.
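A quick way to see the difference described in the answer (a sketch; housing and income_cat as in the book's example):

from sklearn.model_selection import train_test_split

# Purely random split: category proportions in the test set can drift from the full dataset
rand_train, rand_test = train_test_split(housing, test_size=0.2, random_state=42)
print(rand_test["income_cat"].value_counts() / len(rand_test))

# Stratified split: proportions in the test set match the full dataset almost exactly
strat_train, strat_test = train_test_split(
    housing, test_size=0.2, random_state=42, stratify=housing["income_cat"])
print(strat_test["income_cat"].value_counts() / len(strat_test))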
H: Does matrix shape for training/testing sets have to be in a particular order? I've noticed that in the Andrew Ng Deep Learning course that for image analysis he always has X_train matrices in the shape of [height, width, 3, num_inputs], or, if flattened, [height X width X 3, num_inputs]. He also has his y_train as [1, num_inputs]. To me, it is more intuitive to flip these so that X_train is [num_inputs, height X width X 3] and y_train is [num_inputs, 1]. Is there any motivating reason or justification that it has to be the way he does it or is it just preference? Is this a standard or does it vary? AI: It depends on the deep learning framework that you use, and you have to use the shape that the functions of the framework use. I think it is different in Tensorflow and Pytorch. The recommendation is to check before doing anything in the documentation of the framework.
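A tiny illustration of the two conventions (purely a sketch with random data):

import numpy as np

num_inputs, h, w = 10, 64, 64
images = np.random.rand(num_inputs, h, w, 3)

# Course convention: features along rows, examples along columns
X_course = images.reshape(num_inputs, -1).T      # shape (h*w*3, num_inputs)
y_course = np.random.randint(0, 2, (1, num_inputs))

# The flipped convention is just the transpose, shape (num_inputs, h*w*3);
# the math works either way as long as weight shapes and products stay consistent.
X_flipped = X_course.T
print(X_course.shape, X_flipped.shape, y_course.shape)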
H: How to set class weights for imbalanced classes in Tensorflow? Do we have an equivalence for the following question, but with tensorflow : How to set class weights for imbalanced classes in Keras? ?? AI: There's a function that does it automatically: tf.contrib.losses.softmax_cross_entropy(logits, onehot_labels, weight=weight)
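One common pattern in TF 1.x is to gather a per-example weight from a class-weight vector and pass it to the loss function (a sketch, assuming the logits and integer labels already exist in your graph; the weight values are placeholders):

import tensorflow as tf

num_classes = 3
class_weights = tf.constant([1.0, 5.0, 10.0])              # assumed weights, larger for rarer classes

labels = tf.placeholder(tf.int32, [None])                  # integer class labels
logits = tf.placeholder(tf.float32, [None, num_classes])   # stand-in for the network output

example_weights = tf.gather(class_weights, labels)         # one weight per example
loss = tf.losses.sparse_softmax_cross_entropy(
    labels=labels, logits=logits, weights=example_weights)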