H: How to do feature analysis: pandas groupby().mean()
I'm analyzing the Titanic data from Kaggle with a guiding book. In the book, the relationship between the Pclass and Survived columns is analyzed like below.
import pandas as pd
import matplotlib.pyplot as plt

train_set = pd.read_csv('train.csv')
fig = plt.figure(figsize=(12,4))
ax1 = fig.add_subplot(121)
PclassPlot = train_set['Survived'].groupby(train_set['Pclass']).mean()
ax1.bar(x=PclassPlot.index, height=PclassPlot.values)
Why do you need to use mean data?
I thought I didn't need to calculate the mean to see the relationship between the Pclass and Survived data.
Please give me tips for analyzing data.
Thank you for your help.
AI: Means are calculated just to get an idea of the relationship. For a more definite analysis, you can use simple statistical tests of the relationship between two variables. The test to apply depends on the types of the variables (a minimal code example follows the list below):
If both are numeric: Correlation (Pearson or Spearman)
If both are grouping (nominal) variables: Chi-square test
Fisher's exact test can also be used if there are only 2 groups in each variable.
If one is grouping and other numeric:
if only 2 groups: Student t-test or Mann-Whitney U test
if more than 2 groups: ANOVA or Kruskal–Wallis test
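As an illustration of the chi-square case for the Titanic example above, here is a minimal sketch with pandas and scipy (assuming the same train.csv as in the question):
import pandas as pd
from scipy.stats import chi2_contingency

train_set = pd.read_csv('train.csv')
# Both Pclass and Survived are nominal, so a chi-square test of independence applies
contingency = pd.crosstab(train_set['Pclass'], train_set['Survived'])
chi2, p_value, dof, expected = chi2_contingency(contingency)
print(p_value)  # a small p-value suggests Pclass and Survived are not independent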
The means of the different groups will give you only partial information about the relationship. However, it may be sufficient to know whether the feature is likely to be important for prediction. As a method of feature selection, if the groups' means do not differ significantly, the feature may be dropped from the analysis. |
H: Is it necessary to tune the step size, when using Adam?
The Adam optimizer has four main hyperparameters. For example, looking at the Keras interface, we have:
keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
The first hyperparameter is called step size or learning rate. In theory, an adaptive optimization method should automatically modify the learning rate during optimization. Thus, I would expect lr to be a not very influential hyperparameter, which I can remove from the list of hyperparameters I have to tune on the validation set, thus saving some time.
Does this hold true in practice? I.e., is it true that, at least for a class of problems (say, image recognition), lr doesn't affect the optimization a lot, and thus we can just leave it at the default value of 0.001? Or is it still as extremely influential, as it is for SGD with momentum?
EDIT: to clear a misunderstanding brought up in an answer, learning rate and step size are synonymous. See the algorithm definition in the Adam paper: $\alpha$, whose default value is 0.001, is clearly named the step size. I reckon it's probably not a great name (the actual size of the step in the parameter space depends on the accumulated first- and second-order moments, as well as on the gradient, of course) but unfortunately such misleading terminology is the norm in optimization (at least in Deep Learning papers).
AI: The lr (learning rate) parameter is the MOST influential parameter for Adam (or any other optimization algorithm). It does not mean step size... as that is not controlled by Adam but by the neural network and how you feed batches into it.
So yes, lr is very, very important, and it is the learning rate, not the step size.
UPDATE
Actually as mentioned by OP, the learning rate and step size are the same thing (my bad, I confused it with the batch size).
Generally, my recommendation is to try with learning rates of 0.0001, 0.001, 0.01 and 0.1.
The rule of thumb is that a smaller learning rate provides a much more stable learning curve... but a much slower one, while a larger learning rate will converge quicker... but it may become unstable. So you will have to do a bit of exploration. I normally use 0.001 or 0.0001, but that depends a lot on the problem you are dealing with.
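As a rough illustration of such an exploration, here is a sketch of a small learning-rate sweep in Keras; the two-layer model and the synthetic data below are placeholders, not from the question:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# Synthetic stand-in data; replace with your own dataset
x_train = np.random.rand(1000, 20)
y_train = (x_train.sum(axis=1) > 10).astype(int)

histories = {}
for lr in [0.0001, 0.001, 0.01, 0.1]:
    model = Sequential([Dense(32, activation='relu', input_shape=(20,)),
                        Dense(1, activation='sigmoid')])
    model.compile(optimizer=Adam(lr=lr), loss='binary_crossentropy', metrics=['accuracy'])
    histories[lr] = model.fit(x_train, y_train, validation_split=0.2, epochs=20, verbose=0)

# Compare the final validation losses (or the full curves) to pick a learning rate
for lr, h in histories.items():
    print(lr, h.history['val_loss'][-1])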
Also, compared with SGD, Adam includes momentum, essentially taking past behaviour into account (the further away such behaviour is in the past, the less relevant it is), which is controlled by the beta params; I do not recommend modifying them, as normally it is not required. So, although Adam is a more stable learning method (and normally a quicker one) than SGD, the learning rate is still the most fundamental parameter to play with. |
H: Connect a dense layer to a LSTM architecture
I am trying to implement an LSTM structure in plain numpy for didactic reasons. I clearly understand how to feed in the data, but not how to produce the output.
Suppose I give as inputs a tensor of dimension (n, b, d) where:
• n is the length of the sequence
• b is the batch size (timestamps in my case)
• d the number of features for each example
Each example (row) in the dataset is labelled 0 or 1. However, when I feed the data to the LSTM, I obtain as a result the hidden state h_out, which has the same dimension as the hidden size of the network. How can I obtain just a number that can be compared to my labels and properly backpropagated?
I read that some people implement another dense layer on top of the LSTM, but it is not clear to me which dimensions such a layer and its weight matrix should have.
AI: What you are getting as the output is the internal LSTM hidden state.
In order to get a value comparable to your labels, add a dense layer on top of it. The output dimension of the dense layer should be the number of labels you want as a result (a minimal numpy sketch follows the two cases below).
If it's 0 and 1, a single output neuron together with a sigmoid will work.
If there are 5 label classes, then the output dimension of the dense layer should also be 5. |
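A minimal numpy sketch of such a dense layer for the binary case (a single output with a sigmoid), assuming h_out has shape (b, hidden_size) as in the question; the sizes here are only for illustration:
import numpy as np

hidden_size, b = 32, 16                   # assumed sizes for illustration
h_out = np.random.randn(b, hidden_size)   # stand-in for the last LSTM hidden state

# Dense layer: weight matrix of shape (hidden_size, 1) and bias of shape (1,)
W_dense = np.random.randn(hidden_size, 1) * 0.01
b_dense = np.zeros(1)

logits = h_out @ W_dense + b_dense        # shape (b, 1)
y_pred = 1.0 / (1.0 + np.exp(-logits))    # sigmoid -> one probability per example
# y_pred can now be compared to the 0/1 labels with binary cross-entropy and backpropagated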
H: tensorflow simple regression nan after >29 observations
I have the code below. If the number of data points is changed to any number above 30 (for example 40), then I get nan for the values of final_slope and final_intercept.
why?
For 25 examples it runs fine. I am using a CPU version of TensorFlow on my Windows machine.
The number of data points can be changed by changing the number n on the fourth line:
n= 40
import numpy as np
import tensorflow as tf

n = 40
x_data = np.linspace(0,10,n) + np.random.uniform(-1.5,1.5,n)
y_label = np.linspace(0,10,n) + np.random.uniform(-1.5,1.5,n)

import matplotlib.pyplot as plt
#%matplotlib inline
plt.plot(x_data,y_label,'*')

m = tf.Variable(0.39)
b = tf.Variable(0.2)

error = 0
for x,y in zip(x_data,y_label):
    y_hat = m*x + b  # Our predicted value
    error += (y-y_hat)**2  # The cost we want to minimize (we'll need to use an optimization function for the minimization!)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train = optimizer.minimize(error)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    epochs = 1000
    for i in range(epochs):
        sess.run(train)
    # Fetch Back Results
    final_slope, final_intercept = sess.run([m,b])

print(final_slope, final_intercept)
AI: The way you define the 'error' function is the problem.
You sum the squared error over all the data points, so the magnitude of the loss (and of its gradient) grows with n. With the fixed learning rate of 0.001 this is fine for about 25 points, but once n is large enough the gradient descent steps overshoot, the error blows up towards infinity and the variables end up as nan (you can print the error value at each epoch to check that).
The error should instead be computed as a mean, which keeps the gradient magnitude independent of the number of points, for example (computing y_hat over the whole vector at once):
y_hat = m * x_data + b
error = tf.reduce_mean((y_hat - y_label)**2)
By the way, you can check the error value at each epoch.
Choose the hyperparameters of the model so that the error value decreases over the epochs.
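For example, a minimal way to monitor it (a sketch that reuses the init, epochs, train and error names from the corrected code above):
with tf.Session() as sess:
    sess.run(init)
    for i in range(epochs):
        _, current_error = sess.run([train, error])
        if i % 100 == 0:
            print(i, current_error)  # should decrease; if it blows up, lower the learning rate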
Here is the error curve over the epoch span. |
H: Why do I not get 100% Accuracy with KNN with $K=1$
I am playing with KNN on the Iris Dataset
I expected to get 100% accuracy with $K=1$ since every point should predict itself based on the Voronoi volume around it created by the KNN algorithm.
However using Scikit Learn I do not find this result. Here is my code.
import pandas as pd
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.datasets import load_iris
iris = load_iris()
X = pd.DataFrame(iris['data'])
X.columns = ['sepalLength','sepalWidth','petalLength','petalWidth']
featureX = 'sepalLength'
featureY = 'sepalWidth'
X_2 = X[[featureX,featureY]]
y = iris['target']
y = (np.array(y)==2).astype(np.int)
knn = KNeighborsClassifier(n_neighbors=1, p=2)
knn.fit(X_2, y)
y_pred = knn.predict(X_2)
cm = confusion_matrix(y, y_pred)
print(cm)
I get the following output for the confusion matrix. I see 11 out of 150 samples are incorrectly classified.
[[95 5]
[ 6 44]]
Why is it not perfect ?
AI: In your training set (X_2, y), there are some samples with the same input features X_2 but different labels y. For example, the 73rd and 147th samples, which are labelled as class 0 and class 1 respectively, have the same input values [6.3, 2.5]. There are more samples like this in the dataset. Therefore, you cannot construct a perfect classifier for such data. A quick way to check for such conflicts is sketched below. |
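A minimal sketch of that check, reusing X_2, y, featureX and featureY from the question's code:
import pandas as pd

# Group identical (sepalLength, sepalWidth) pairs and count how many distinct labels each one carries
check = X_2.copy()
check['label'] = y
conflicts = check.groupby([featureX, featureY])['label'].nunique()
print(conflicts[conflicts > 1])   # these feature values appear with more than one label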
H: How can I get my trained model ready for production
I successfully trained my model using the sklearn's multiple linear regression. This is the code I used:
import pandas as pd
dataset = pd.read_csv('C:\\mylocation\\myfile.csv')
dataset2 = pd.get_dummies(dataset)
y = dataset.iloc[:, 31:32].values
dataset2.pop('Target')
X = dataset2.iloc[:, :180].values
#Split the dataset
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.2, random_state = 0)
#Feature Scaling
#from sklearn.preprocessing import StandardScaler
#sc_X = StandardScaler()
#X_train = sc_X.fit_transform(X_train)
#X_test = sc_X.transform(X_test)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
#Predicting the Test set results
y_pred = regressor.predict(X_test)
According to the validation results, y_pred is a reasonable predictor. Now I would like to take this model into production, and I am wondering what the reasonable steps are to apply this model to the whole dataset I have stored, and to future datasets if needed.
AI: You should first cross-validate your pipeline, making sure that you get a homogeneous y_pred result.
Then you can retrain a model with the same parameters on your full dataset. Pickle the model as well as any preprocessing tools, and reuse them to predict on new data; a minimal sketch follows. |
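A minimal sketch of that last step with the standard-library pickle module (the file name and new_data are placeholders, not from the question):
import pickle

# Retrain on the full dataset with the same parameters, then persist the fitted estimator
regressor.fit(X, y)
with open('regressor.pkl', 'wb') as f:
    pickle.dump(regressor, f)

# Later, in production (new_data must be preprocessed exactly like the training data,
# e.g. the same pd.get_dummies columns):
with open('regressor.pkl', 'rb') as f:
    model = pickle.load(f)
predictions = model.predict(new_data)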
H: Is there any implementation of Extended Isolation Forest algorithm in R/Python?
I am using the isofor package for the regular Isolation Forest, but I came across an article about the Extended Isolation Forest and need your advice on which package has this function implemented in R/Python.
AI: There is a package on GitHub called "Extended Isolation Forest for Anomaly Detection"; I used it a couple of months ago and it seemed to work. How accurate or how buggy it is, I'm not sure, but if anything seems off you can check the source code for errors in the implementation of the paper Extended Isolation Forest by Hariri et al. |
H: How can I use a class variable with many possible values in logistic regression?
I am attempting to build a logistic regression model that determines the probability of an outcome based on a set of independent variables.
For context, the data is based on a project in which sales representatives and branch managers for a builders merchant were given price recommendations for their customers' deals, and were given the option of saying 'Yes' or 'No' to these price recommendations. The Yes or No answer is my dependent variable; I need to determine which variables can predict with the highest probability whether the respondent will say 'no' to a price recommendation.
Most of the independent variables work fine in this model, except for one which I am currently unable to test; individual differences between the respondents.
My hypothesis is that the propensity to say 'no' will be stronger in some individual respondents than others, be it for psychological or geographical reasons, and that these individual differences will be a stronger determiner for the dependent variable than any other class variable.
There are about 800 respondents, so simply shoving them in as an independent variable does not produce the desired results.
Is there a method of doing this in logistic regression? Should I use another analysis technique for this?
I am using Statistical Analysis Software (SAS) to carry out the logistic regression.
AI: This may not be the answer you are looking for, but I think this is a telling part of your challenge
My hypothesis is that the propensity to say 'no' will be stronger in some individual respondents than others, be it for psychological or geographical reasons, and that these individual differences will be a stronger determiner for the dependent variable than any other class variable.
Unless I misunderstood you, it seems you want to have an input parameter that indicates how likely a respondent is to just reflexively respond with 'No'. The problem, as I see it, is that you don't know the answer to this. You have no way to measure it directly. No?
If you have more data about the respondents, maybe you can back into this. Huge assumption here, but if you could get a count of the number of recommendations made vs the number of times they have said 'No' you could come up with some factor:
$$f_{(No)} = {{Number\ of\ "No"\ Responses}\over{Number\ of\ Recommendations}}$$
This may get you what you are looking for, but it assumes you can tie the data back to the respondent and that the respondent is, in fact, the Branch Manager or Sales Representative, not the customer. The problem with this approach is that it assumes that all the recommendations were equally competitive.
HTH |
H: What are your thoughts on SKLearn's dismissal of GPUs for machine learning?
SKLearn has this broad claim in its FAQs:
Outside of neural networks, GPUs don’t play a large role in machine learning today, and much larger gains in speed can often be achieved by a careful choice of algorithms.
Anyone care to add color for or against this claim?
AI: GPUs don't fit naturally into all machine learning algorithms. A natural contender is one that involves a myriad of matrix multiplications, which makes sense since graphics processors were designed for exactly that. However, for an algorithm like a Random Forest this may not be so important. There is also a cost to transfer the data to the GPU: an algorithm that is O(n) gains little from a GPU, because it already takes O(n) time to communicate the data. There are a few other issues that GPUs present, dealing with RAM and threading, each of which often renders making a GPU variant of a project more of a hassle than it's worth.
Furthermore, adding GPU support to the sklearn framework adds a hardware dependency and complexity that seems needless for shallow algorithms. Odds are, if you need access to your GPU, you are dealing with a neural network, which has its own unique architectural challenges. I think it makes far more sense to separate deep learning into its own module (look at how huge the tensorflow/pytorch/etc. projects are) than to force sklearn to add hardware dependencies for marginal computational gains. |
H: What does it mean when someone says "Most of the data science algorithms are optimization problems"
I was trying to understand the Gradient Descent algorithm from this article and the author says
Most of the data science algorithms are optimization problems
I come from software engineering background trying to get a basic understanding of data science. Can someone help me explain what this means?
AI: Most algorithms try to minimize some objective function.
For example, in linear regression, given $(x_i, y_i)$, we try to find $\hat{y}_i = \alpha_0 + \sum_{j=1}^d \alpha_j x_{i,j}$, and we want it to be close to $y_i$. We try to minimize the mean squared error in our estimation.
That is our objective function is
$$\min_\alpha \frac1n \sum_{i=1}^n \left(y_i-\alpha_0- \sum_{j=1}^d \alpha_j x_{i,j}\right)^2.$$
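As a concrete (hypothetical) illustration, here is this same objective minimized with plain gradient descent in numpy on a toy one-feature dataset:
import numpy as np

# Toy data: y = 3x + 2 plus noise
rng = np.random.RandomState(0)
x = rng.rand(100, 1)
y = 3 * x[:, 0] + 2 + 0.1 * rng.randn(100)

alpha = np.zeros(2)   # [intercept, slope]
lr = 0.5
for _ in range(1000):
    y_hat = alpha[0] + alpha[1] * x[:, 0]
    error = y_hat - y
    grad = 2 * np.array([error.mean(), (error * x[:, 0]).mean()])  # gradient of the mean squared error
    alpha -= lr * grad            # step in the direction that decreases the objective
print(alpha)                      # approaches [2, 3]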
We might have a model with unknown parameters and use a Maximum Likelihood Estimator to fit it; in that case, we maximize the likelihood function. Again, we get another optimization problem.
In general, in supervised classification we are trying to minimize the error, which is typically formulated as the minimization of a loss function. In SVM, we are trying to find a boundary that maximizes the margin between the two classes; again, we are trying to optimize an objective function. |
H: How to represent an image as state in a Q-table
I'm trying to do Q-learning with the Atari games using the gym python's package.
I want to use the image as the state of my algorithm, but I came up with a doubt: Is the state all the possibilities for all the pixels of the image?
I've seen many Q-learning implementations, and in the Q-table they always store all the possible states as rows, and the actions as columns (for example, https://www.learndatasci.com/tutorials/reinforcement-q-learning-scratch-python-openai-gym/)
Without doing maths, all the possible states for, let's say, a 200x200 grayscale image are insane. How can I handle this problem without doing neural networks?
I guess there must be an approach for this scenario, but I haven't found any good example.
Thank you!!
AI: The Atari games are some of the best examples in which you would need a function approximator (e.g. Neural Network) in order to solve them. This is because the state space explodes and thus you would need something that will generalize quite well in unseen states.
I am not quite sure what you mean by
I want to use the image as the state of my algorithm, but I came up with a doubt: Is the state all the possibilities for all the pixels of the image?
The state at a specific timestep represents the state of the game and will be the input to your algorithm. You do not need all the possible combinations of the pixels in that image (most of these combinations won't make any sense and the agent will never encounter them); for a 200x200 grayscale image that would be $256^{40000}$ possible states. What you need instead is for your agent to encounter enough game states to enable it to generalize well to unseen states.
If you were trying to solve Atari games with a lookup table, then you would have to store every single image that your agent encounters at every single time step in a table (more precisely, a reference to that image which will correspond to a $Q$ value). In other words, you are trying to store all the states that your agent might encounter in the game (impossible in Atari games - unless you make some very crude assumptions about the nature of the states). |
H: Conjugated gradient method. What is an A-matrix in case of neural networks
I am reading about conjugate gradient methods to understand how exactly they work. I understand that a pair of vectors $u$ and $v$ are conjugate with respect to $A$ if $u^TAv=0$. I also read that $A$ is a symmetric, positive definite matrix.
I am trying to find out how that is related to training a neural network by minimizing the mean-squared-error function with the CG method. What will the $A$ matrix be in that case? How is the $A$ matrix connected to the weights of the neural network? And is it still symmetric and positive definite? I read the thread What is conjugate gradient descent? and the resources linked there, but I still can't figure it out.
I'm sure I'm missing something simple, but could you give me a bit of explanation?
Thank you,
Max
AI: The traditional (nonlinear) conjugate gradient descent is an increment on gradient descent that takes a search direction that is conjugate to the previous descent directions rather than simply following the gradient. There is no explicit $A$ matrix in that case: in the quadratic setting $A$ is the Hessian of the objective, and for a neural network loss its role is played locally by the Hessian of the error function, which is never formed explicitly and is not guaranteed to be positive definite away from a minimum.
There are different update rules (you can check some in my old optimization toolbox at https://github.com/mbrucher/scikit-optimization/blob/master/scikits/optimization/step/conjugate_gradient_step.py). If I remember properly, Fletcher-Reeves (FR) combined with the strong Wolfe-Powell line search rule gives one of the best results. The issue is that it requires more computation, which is why line search is never used in neural network optimization. |
H: Output of classifier.predict Tensorflow extract probabiltity
When I do a prediction with my DNN clasifier I get a dictionary like this.
{'probabilities': array([9.9912649e-01, 8.7345875e-04, 8.5633601e-12], dtype=float32), 'logits': array([ 12.641698, 5.599522, -12.840958], dtype=float32), 'classes': array(['0'], dtype=object), 'class_ids': array([0])}
Can someone explain me the values of probability and logits? Why the three values ?
The docs just states
Evaluated values of predictions tensors.
And do not refer (the docs) to a struct/explanation of the output
Thanks!
AI: Under the probabilities key you will find the probabilities of every label - there are three values because your classifier has three classes. TensorFlow just chooses the one with the highest probability as the predicted class. So, in order to get the probability of the predicted outcome, you need to do something like this:
results = classifier.predict(input_fn = lambda: mem_input_fn())
for r in results:
    idx = r["classes"][0]  # idx is the predicted label
    print(idx, r["probabilities"][int(idx)]) |
H: How to create column for my csv file in python
I have a CSV file having these values (without column):-
I:30n
J:0n
J:0n
U:1000n
C:0n
I:12n
I:10n
I:10n
I:10n
I:10n
I want to add column names for these row values.
All I records (i.e. I:30n, I:12n, etc.) should go in one column, likewise all J records (i.e. J:0n, J:0n) in another column, and so on for the other prefixes, using Python.
Can anyone help?
AI: You can use pandas for this; your file format isn't exactly a comma-separated values file, but you can still use the pandas read_csv() method.
Suppose your file name is test_file
import pandas as pd
df = pd.read_csv('test_file', sep=':', header=None)
>>> df
0 1
0 I 30n
1 J 0n
2 J 0n
3 U 1000n
4 C 0n
5 I 12n
6 I 10n
7 I 10n
8 I 10n
9 I 10n
Then you can use the pivot() function:
>>> df.pivot(columns=0)
\ 1
0 C I J U
0 NaN 30n NaN NaN
1 NaN NaN 0n NaN
2 NaN NaN 0n NaN
3 NaN NaN NaN 1000n
4 0n NaN NaN NaN
5 NaN 12n NaN NaN
6 NaN 10n NaN NaN
7 NaN 10n NaN NaN
8 NaN 10n NaN NaN
9 NaN 10n NaN NaN
If your intention is to write it back to a file you can use the to_csv() method.
# this row eliminates the level headers of the columns at level 0
>>> df.columns=df.columns.get_level_values(1)
>>> df
0 C I J U
0 30n
1 0n
2 0n
3 1000n
4 0n
5 12n
6 10n
7 10n
8 10n
9 10n
>>> df.to_csv('new_test_file', index=False)
OR
If you wish to make it less sparse, you can first turn it into a dict and then back to DataFrame:
>>> _dict = df.groupby(0)[1].apply(list).to_dict()
>>> _dict
{'C': ['0n'], 'I': ['30n', '12n', '10n', '10n', '10n', '10n'], 'J': ['0n', '0n'], 'U': ['1000n']}
>>> pd.DataFrame.from_dict(_dict, orient='index')
0 1 2 3 4 5
C 0n None None None None None
I 30n 12n 10n 10n 10n 10n
J 0n 0n None None None None
U 1000n None None None None None
>>> pd.DataFrame.from_dict(_dict, orient='index').T
C I J U
0 0n 30n 0n 1000n
1 None 12n 0n None
2 None 10n None None
3 None 10n None None
4 None 10n None None
5 None 10n None None
pd.Series.to_dict()
pd.DataFrame.from_dict()
pd.DataFrame.T |
H: Who wrote the formula for gini importance/sklearn's feature importance score?
I've been looking for a paper where the Gini importance was first proposed, but I am not sure if this is actually how it came to be.
Here's the formula I am familiar with and am looking to find in a paper:
$$\frac{N_s}{N_t} \left(i - \frac{N_{s_r}}{N_s}\, i_r - \frac{N_{s_l}}{N_s}\, i_l \right)$$
where
$N_s$ = number of samples at a particular node
$N_t$ = number of total samples
$i$ = Impurity
$*_r$ = measure of the right child node
$*_l$ = measure of the left child node
If I understand properly, this is the formula sklearn's random forests also use in model.feature_importances_
Tags (since I don't have the rep to create new ones yet): Feature Importance Score, Gini Importance
AI: There is a paper that covers "The origins of the Gini index". Gini index was detailed by Leo Breiman et al. in "Classification and regression trees" book in 1984. Leo Breiman also wrote a seminal paper on Random Forests in 2001 which includes the notion of feature importance. |
H: How to create a new column based on two other columns in Pandas?
I am searching for a way to create a new column in my data. I have tried using iterrows() but found it extremely time consuming on my dataset containing 40 lakh (4 million) rows. So here is what I want.
Consider I have 2 columns:
EventID and TeamID. I want to find the number of unique TeamID values under each EventID as a new column. In other words, I want the number of teams participating in each event as a new column.
AI: You can try something like this to get a new dataframe that has pairs of (EventID, TeamCount):
event_id_team_count = data.groupby('EventID').agg({'TeamID': lambda x: x.nunique()})
event_id_team_count.rename(columns={"TeamID": "TeamCount"}, inplace=True)
If you want to have this new column in the original dataframe, all you need to do is join the original dataframe with the one you have just created (a more compact alternative using transform is sketched after the code):
data = data.join(other=event_id_team_count, on="EventID") |
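A more compact alternative that achieves the same thing in one step (assuming the same data dataframe):
# transform('nunique') broadcasts the per-event team count back onto every row
data['TeamCount'] = data.groupby('EventID')['TeamID'].transform('nunique')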
H: Scaling values for LSTM
I have the following time series data set
Each row is a unique Item, and each column shows the amount purchased per day. There are a total of 33 columns.
I'm taking the first 32 columns (leaving out the last column, which will be my target) as my training set, and the last 32 columns (leaving out the first column) as my testing set
X_train = dataset[:, :-1] # taking all columns except the last column
y_train = dataset[:, -1:] # setting the last column to be the target
X_test = dataset[:, 1:] # taking all columns expect the first column
I'm going to feed X_train and y_train in my LSTM model, and use the model to perform prediction on X_test.
Now, I wish to performing minmax scaling on dataset before performing training, but I have some questions:
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(-1, 1))
scaled_dataset = scaler.fit_transform(dataset)
This has obvious data leakages, because the scaler is fitted with the test values.
I thought of creating 2 scalers, one for the training set, and one for the target
scaler_x = MinMaxScaler(feature_range=(-1, 1))
scaler_y = MinMaxScaler(feature_range=(-1, 1))
scaled_all_data = scaler_x.fit_transform(dataset[:, :-1])
scaled_y = scaler_y.fit_transform(dataset[:, -1:])
I'm not sure if that's the right approach. I've already tried searching for answers, but their situation is not quite like mine, or the questions are unanswered yet.
Any advice on how I should perform value scaling?
AI: Why not scale the values as the very first step of your pipeline? Import your values, fit the scaler, and then do your splits for training/test/etc. To avoid the leakage you describe, fit the scaler only on the training portion and then apply the already-fitted scaler to the test data; a separate scaler for the target (as you propose) is fine, since you can invert it after prediction. A minimal sketch follows. |
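A minimal sketch of scaling without leaking test information (fit on the training arrays only, then reuse the fitted scalers), reusing X_train, y_train and X_test from the question; this is only one reasonable setup, not the only one:
from sklearn.preprocessing import MinMaxScaler

scaler_x = MinMaxScaler(feature_range=(-1, 1))
scaler_y = MinMaxScaler(feature_range=(-1, 1))

X_train_scaled = scaler_x.fit_transform(X_train)   # fit only on the training inputs
y_train_scaled = scaler_y.fit_transform(y_train)
X_test_scaled = scaler_x.transform(X_test)         # reuse the fitted scaler, no refitting

# After predicting, map the outputs back to the original scale:
# y_pred = scaler_y.inverse_transform(model.predict(X_test_scaled))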
H: how to evaluate feature quality for decision tree model
Most of the tutorials assume that the features are known before generating the model and give no way to select 'good' features and to discard 'bad' ones.
The naive method is to test the model with new features and see how the new results change compared to the previous model but it can be complex to interpret when the tree is complex.
Is there an academic way to select good features and to discard bad ones?
(resources appreciated)
AI: The main reasons for seeking efficient feature selection are faster training of the machine learning algorithm, reduced model complexity, easier interpretation, and improved model accuracy.
Look for Filter Methods , Wrapper Methods and Embedded Methods to learn more about your issue.
Filter methods are generally used as a preprocessing step. The selection of features is independent of any machine learning algorithms. Instead, features are selected on the basis of their scores in various statistical tests for their correlation with the outcome variable. Here you have to look for Linear discriminant analysis, Pearson’s Correlation, Chi-Square.
Some common examples of wrapper methods are :
Forward Selection: an iterative method in which we start with no features in the model. In each iteration, we keep adding the feature which best improves our model, until adding a new variable no longer improves the performance of the model.
Backward Elimination: here, we start with all the features and remove the least significant feature at each iteration, which improves the performance of the model. We repeat this until no improvement is observed on removal of features.
Recursive Feature elimination: It is a greedy optimization algorithm which aims to find the best performing feature subset. It repeatedly creates models and keeps aside the best or the worst performing feature at each iteration. It constructs the next model with the left features until all the features are exhausted. It then ranks the features based on the order of their elimination.
Embedded methods combine the qualities of filter and wrapper methods. They are implemented by algorithms that have their own built-in feature selection methods.
Some of the most popular examples of these methods are LASSO and RIDGE regression which have inbuilt penalization functions to reduce overfitting.
Another example of embedded methods that could fit your case is regularized trees.
Follow the link below for some of these algorithms in sklearn:
sklearn - Feature Selection
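For instance, a minimal recursive feature elimination sketch with a decision tree; the synthetic data below is only a stand-in for your own feature matrix and labels:
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import RFE

# Synthetic stand-in data: replace with your own X and y
X, y = make_classification(n_samples=500, n_features=25, n_informative=5, random_state=0)

estimator = DecisionTreeClassifier(max_depth=5, random_state=0)
selector = RFE(estimator, n_features_to_select=10)   # keep the 10 strongest features
selector.fit(X, y)
print(selector.support_)    # boolean mask of the selected features
print(selector.ranking_)    # 1 = selected; larger numbers were eliminated earlier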
I hope this could help you to start. |
H: What is an intuitive explanation for the log loss cost function?
I would really appreciate it if someone could explain the log loss cost function and its use in measuring a classification model's performance.
I have read a few articles, but most of them concentrate on the mathematics and not on an intuitive explanation; a basic implementation using Python with a small dataset would also be really helpful, so I can understand it better.
It would really help many looking for the same here. Thanks.
AI: Ok, so here is how it works. Say you want to classify animals, and you have cats, dogs and birds. This means that your model will output 3 units in the form of a vector (call it a list if you prefer).
Each element of the list represents an animal, so for example
Position 0 represents how likely the input is to be a cat
Position 1 represents how likely the input is to be a dog
Position 2 represents how likely the input is to be a bird.
Now imagine you get an input that is a bird, in a happy world, your algorithm should output a vector like
[0, 0, 1]
That is, the input has 0% chance of being a cat, a 0% chance of being a dog and a 100% chance of being a bird.
In reality, this is not so simple, and most likely your output would be something like this
[0.15, 0.1, 0.75]
This means 15% chance of being a cat, 10% chance of being a dog and 75% chance of being a bird. Now, notice that this means your algorithm will still consider the input as a bird, so in terms of classification, sure the output would be correct... but it would not be as correct as it had predicted 100% chance of a bird.
So, the intuition is that the logloss measures how far away you are from perfection, where perfection would be identifying the correct label with a 100% chance and the incorrect labels with a 0% chance.
Final word of advice: Do NOT be afraid of math, you will really need to get the grasp of it at some point, do not let the sum terms intimidate you, after all they just represent loops in programming.
UPDATE
Let's dive into the math, specially to demystify it.
For a problem with $K$ classes, the log loss formula is given by
$$ LogLoss = - \frac{1}{n} \sum\limits_{i=1}^n \sum\limits_{k=1}^K y_{i,k} \cdot log(\hat{y}_{i,k}) $$
(for the binary case $K=2$ this reduces to the perhaps more familiar form $-\frac{1}{n}\sum_i [y_i \cdot log(\hat{y}_i) + (1-y_i) \cdot log(1-\hat{y}_i)]$)
Where
$n$ represents the number of examples, in our case I will use 2 examples.
$y_{i,k}$ is 1 if class $k$ is the correct answer for example $i$, and 0 otherwise
$\hat{y}_{i,k}$ represents our predicted probability of class $k$ for example $i$
So we have two examples (the correct answers, denoted by $y_i$), which are
[0, 1, 0] # Example 1: This means the correct answer is dog
[1, 0, 0] # Example 2: This means the correct answer is cat
Now, let's go for our predictions, remember that the predictions are denoted by $\hat{y}$ let's say they are
[0.1, 0.6, 0.3]
[0.85, 0.05, 0.1]
And let's apply the scary formula here. First notice that $$\sum\limits_{i=1}^n$$ just means summing over the examples from $i=1$ to $n$; in our case $n$ is the number of examples, and we have two of those. The inner sum over $k$ only keeps the term of the correct class, because $y_{i,k}$ is zero for every other class.
For $i=1$ (the correct class is dog)
term1 = 1 * log(0.6) = log(0.6)
For $i=2$ (the correct class is cat)
term2 = 1 * log(0.85) = log(0.85)
And finally we have
log_loss = (-1/2) * (term1 + term2)
Using sklearn log_loss, the answer is 0.3366
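A minimal check of that number:
from sklearn.metrics import log_loss

y_true = [[0, 1, 0],
          [1, 0, 0]]
y_pred = [[0.10, 0.60, 0.30],
          [0.85, 0.05, 0.10]]
print(log_loss(y_true, y_pred))   # ~0.3366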
Now, do not get too lost in the math here, just notice that it is not THAT hard, and also do understand that the loss function here tells you essentially how wrong you are, or if you prefer, it measures the "distance" from perfection. I strongly recommend you code the log loss yourself (numpy is normally a good option to do so :) |
H: Difference between 1x1 Convolution and TimeDistributed(Dense())
Are these lines of code equivalent in Keras? From a few runs, they seem to be, and also, intuitively, since the channels dimension of my data is 1, my understanding is that a fully connected layer acts like a convolutional layer. Is one better computationally (they have the same number of parameters)?
Conv1D(filters=1, kernel_size=1, strides=1, padding='causal')
TimeDistributed(Dense(1))
For example, I have a regression problem, where I have (batches,64) points coming in and (batches,64) points coming out of the model.
My model is:
model = Sequential()
model.add(Reshape([64, 1], input_shape=(64,)))
model.add(BatchNormalization())
model.add(Conv1D(filters=4, kernel_size=8, strides=1, padding='causal', activation = 'relu'))
model.add(BatchNormalization())
model.add(Conv1D(filters=1, kernel_size=1, strides=1, padding='causal'))
model.add(Flatten())
model.compile(loss=loss, optimizer=optimizer)
My thinking for the layers is:
reshape dimensions to 3D.
variance scale.
conv layer, activation.
variance scale.
final layer: my question is whether is matters if its a TimeDistributed(Dense) or Conv in my situation.
reshape back to a 2D waveform.
Thank you in advance!
AI: The first thing is that TimeDistributed should not be necessary in your case.
It helps when you have 2D data consisting of timesteps (e.g. in recurrent setups), and here you only have one feature dimension: (64, 1).
Secondly, to answer the question: yes, your intuition is right. A Conv1D with kernel size 1 and one filter is equivalent to a Dense of size 1 applied at each timestep. |
H: How to identify potential customers who are ready to convert into paid?
I have data on the actions performed in my tool, and I would like to predict the customers who are ready to convert from the free/trial to the paid category.
My data looks like the following:
dummy<-data.frame(license=sample(c("Free","Trail","Paid"),10000,replace = T,prob = c(0.6,0.35,0.05)),
plan_type=sample(1:5,10000,replace=T),
action1=sample(0:100,10000,replace = T),
action2=sample(0:1000,10000,replace = T),
action3=sample(0:10,10000,replace = T),
num_days_in_product=sample(0:500,10000,replace = T))
head(dummy)
license plan_type action1 action2 action3 num_days_in_product
1 Paid 1 100 71 5 285
2 Free 5 75 438 1 2
3 Free 1 5 555 7 389
4 Free 3 4 105 0 150
5 Free 1 16 348 7 423
6 Free 5 15 866 8 270
> table(dummy$license)
Free Paid Trail
6016 516 3468
> prop.table(table(dummy$license))
Free Paid Trail
0.6016 0.0516 0.3468
Let me know if any extra information is needed.
AI: Two items come to mind:
You will gain the most benefit from this site when you propose a model and let other people comment on and modify the model. As it stands, you are merely proposing what your data looks like. Do you have a model that you've tried? What data science steps have you already taken with your project? If you post that, this site will be of much better help to you
You should be aware that what you posted makes for a good initial dataset but that's all it is, your initial set. If I was in your position, I would be working towards developing a meta dataset that contains a much larger number of factors. When did the person join? What was the time in between actions? Which country is their IP address from? How many times have they logged into the tool? Etc, etc, etc. When you start to think of your data in this manner, and give yourself a much larger number of factors to start with, then you are taking steps that are much more likely to result in a reliable predictive model. |
H: Value of features is zero in Decision tree Classifier
I used CountVectorizer and TfidfVectorizer separately to vectorize text (100K reviews) and passed the vectorized data to a DecisionTreeClassifier. Upon using the feature_importances_ attribute of the DecisionTreeClassifier, the feature importance values for all my features are just 0.0. But with the same dataset, I'm able to find feature importances for naive Bayes and logistic regression, by using the feature_log_prob_ attribute for naive Bayes and the coef_ attribute for logistic regression.
Other things I tried:
1. I tried changing ngram_range in countvectorizer
2. I tried limiting/not limiting min_df and max_feature parameters passed in countvectorizers
But couldn't make it work.
Any help is appreciated
Code:
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

positivereviews = df[df.Score == 1]
negativereviews = df[df.Score == 0]

countvect = CountVectorizer(stop_words='english')
positivebow = countvect.fit(positivereviews.CleanedText[0:100000])
pos_xtrain = positivebow.transform(positivereviews.CleanedText[100000:200000])
pos_y = positivereviews.Score[100000:200000]

clf = DecisionTreeClassifier(max_depth=3, min_samples_split=2)
clf.fit(pos_xtrain, pos_y)

def show_most_informative_features(vectorizer, clf, n=20):
    feature_names = vectorizer.get_feature_names()
    coefs_with_fns = sorted(zip(clf.feature_importances_, feature_names))
    top = coefs_with_fns[:-(n + 1):-1]
    for (coef_1, fn_1) in top:
        print("\t%.4f\t%-15s" % (coef_1, fn_1))

Counter(clf.feature_importances_)
# output: Counter({0.0: 63514})
1) positivereviews is the dataframe which has the features CleanedText and Score,
where CleanedText is preprocessed text and Score is always 1 in this set, since I split positive and negative reviews using Score
2) I also searched online for this problem, but couldn't find any instance of this issue
EDIT1: could it be that, since we are dealing with categorical features in this problem, I am overfitting badly and hence do not see any value for any features?
Thanks in advance!!
AI: I'm pretty sure that your feature importances are 0 because your classifier isn't doing any classifying. From the code, it looks like you're training only on positive examples, and giving the fit function a label vector that consists entirely of 1s. The classifier has no information; the decision rule is just "when given an example, predict 1".
There's no way to measure which features are most strongly associated with the label because they're all equally associated - there's only one label, so there's no way to associate the features with anything else.
Is there a reason you're not using the negative examples? It seems like you have the dataframe available.
When you ran naive bayes and logistic regression, did you also give those models only the positive examples? |
H: Architecture for linear regression with variable input where each input is n-sized one-hot encoded
I am relatively new to deep learning (got some experience with CNNs in PyTorch), and I am not sure how to tackle the following idea. I want to parse a sentence, e.g. I like trees., one-hot encode the parse output of each word, and feed that into a ML system. The output of each sentence is a floating-point number. As an example, the sentence I like trees. could be pre-processed and encoded as fixed-size feature vectors per token:
[[0 1 0 0 1] [1 0 0 0 0] [1 0 1 1 0] [0 0 0 0 1]]
or flattened
[0 1 0 0 1 1 0 0 0 0 1 0 1 1 0 0 0 0 0 1]
However, the length of sentences can differ of course. From what I know there are some fixes to this. Using padding (when shorter than your defined cut-off) or cutting off the length to that cut-off (when longer). Another solution I often see mentioned is the use of an RNN/LSTM but I have no experience with them (only the basic theoretical notion).
The expected output (label) of this sentence could be something like 50.2378.
My question, then, is which architecture is best suited for this task? RNNs are popular in NLP so I am leaning towards them, but I am not sure whether a regression task fits well (or how it fits) with the architecture of an RNN. I have looked for RNNs and regression but I can only find use cases involving time series, not NLP or one-hot encoded features.
In essence, what I have is a dataset of sentences that are preprocessed to get some features per token. Let's assume for brevity sake that these features are syntactic, e.g. the word's POS tag and its dependency information (e.g. subj, obj, and so on). These features would then get one-hot encoded (I assume) to get an easy-to-use dataset. The input, thus, are sentences encoded in such a way that the information of tokens is shown, such as the flattened example above.
The output for every sentence is some floating-point number, representing some value that has been calculated beforehand; in theory these will indicate the sentence's equivalence with its translation, but that is not at all important for the system. The important part is that every input is a sentence mapped to a number.
Some dummy data. The actual sentence, it's vector representation, and the output.
I like trees. [0 1 0 0 1 1 0 0 0 0 1 0 1 1 0 0 0 0 0 1] 50.2378
What is that thing? [1 1 0 1 0 0 0 0 1 1 1 0 0 0 0 1 1 0 1 0 0 1 1 0 1] 20.1237
Who are you? [0 1 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 1 0 1] 1.6589
The cookies smell good today. [0 0 1 0 1 1 0 1 1 0 1 1 1 0 1 1 1 0 1 0 1 0 1 1 1 0 0 0 0 1] 18.6295
I do! [0 1 0 0 1 1 0 1 1 0 0 0 1 0 1] 24.5489
The goal, then, is that an unseen given sentence can be pre-processed and one-hot encoded and given as input, and that an output (floating-point number) is predicted based on the trained model/function.
AI: The output for every sentence is some floating-point number, representing some value that has been calculated beforehand (...) the important part is that every input is a sentence mapped to a number.
In essence this is a regression problem, i.e. given some input predict the (most likely) numeric output. However, you could also see this as a classification problem if you are not interested in real numbers but in some range (= class), or a probability for each class.
RNNs are popular in NLP so I am leaning towards them,
RNNs are well suited for sequence processing. The way you present the problem does not seem like there is much sequence processing involved - while you can get this to work with RNNs, I propose to try other less computationally involved approaches first.
I have looked for RNNs and regression but I can only find use cases involving time series, not NLP or one-hot encoded features.
Unless there is some specific requirement to use RNNs or, by extension, LSTMs, I would try random forests, e.g. the RandomForestRegressor from scikit-learn. This should work reasonably well using the one-hot encoded sentences (as outlined in the question) as X and the pre-calculated number as Y. Should you decide to instead make this a classification problem, you can easily switch to the RandomForestClassifier from the same package.
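A minimal sketch of that approach; the randomly generated "sentences" below are only placeholders for your fixed-length (padded) one-hot vectors and the pre-computed numbers:
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder data: 1000 "sentences" padded to 30 one-hot positions, one float target each
X_padded = np.random.randint(0, 2, size=(1000, 30))
y = np.random.uniform(0, 60, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X_padded, y, random_state=0)
regressor = RandomForestRegressor(n_estimators=100, random_state=0)
regressor.fit(X_train, y_train)
print(regressor.score(X_test, y_test))   # R^2 on held-out sentences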
an unseen given sentence can be pre-processed and one-hot encoded and given as input, and that an output (floating-point number) is predicted
I suggest to use a Pipeline for both training and prediction, then serve this Pipeline to your application. |
H: How does a Bayes regularization works?
I'm trying to get a grasp of the Bayesian regularization algorithm.
List of symbols first:
$F$ - objective function
$\gamma$ - regularization parameter
$M$ - number od neural network weights
$N$ - number of data tuples
$e$ - modeling error
$w$ - net weight
$D$ - data set input-output pair
$H$ - Hess matrix of objective function computed based on Jacoby's matrix
I get that minimizing the objective function:
$$
F = \gamma\sum\limits _{j=1} ^Mw_j^2 + (1-\gamma)\sum\limits_{i=1}^Ne_i^2
$$
is equivalent to maximizing the likelihood $P(w|D,\gamma)$, which can be computed according to Bayes' theorem:
$$
P(w|D,\gamma) = \frac{P(D|w,\gamma)P(w|\gamma)}{P(D|\gamma)}
$$
The likelihood $P(w|\gamma)$ is assumed to be Gaussian and can be computed as:
$$
P(w|\gamma)=\left(\frac{\gamma}{2N}\right)^{\frac{M}{2}}\cdot e^{-\frac{\gamma}{2}w^Tw}
$$
Likelihood $P(D|\gamma)$ can be computed too as:
$$
P(D|\gamma)=\left(\frac{\pi}{\gamma}\right)^{-\frac{N}{2}}\left(\frac{\pi}{1-\gamma}\right)^{-\frac{M}{2}}\frac{(2\pi)^\frac{M}{2}\, e^{-F(w)}}{\sqrt{|H|}}
$$
And now my questions are: how is $P(D|w,\gamma)$ computed? Or are there any special assumptions about it? And should I minimize or maximize $P(w|D,\gamma)$ to minimize the objective function $F$?
I base on those articles:
Article1
Article2
If anyone is able to explain those thing to me, I'd be most grateful.
Thank you,
Max
AI: I have found the answer to my question here.
If anyone needs it in the future: all the likelihoods above are assumed to be Gaussian distributions, and $P(\gamma)$ is assumed to be uniform. The article shows exactly how they are defined, and the further computations are described in detail in the article linked above. |
H: Mean Absolute Error in Random Forest Regression
I am new to the whole ML scene and am trying to resolve the Allstate Kaggle challenge to get a better feeling for the Random Forest Regression technique.
The challenge is evaluated based on the MAE for each row.
I've run sklearn's RandomForestRegressor on my validation set, using the criterion='mae' parameter. To my understanding, this will run the forest algorithm calculating the MAE instead of the MSE for each node split.
After that I've used this: metrics.mean_absolute_error(Y_valid, m.predict(X_valid)) in order to calculate the MAE over the validation data.
What I would like to know is if the logic I'm following is sound. Am I making a fundamental mistake or missing something here? Should I have used the default MSE based Regressor and then calculate the MAE of each row using the mean_absolute_error function?
AI: Let me clarify few fundamental things:
In sklearn, the RandomForestRegressor criterion is:
The function to measure the quality of a split
It's a performance measure (by default, MSE) which helps the algorithm to decide on a rule for an optimum split on a node in a tree.
Kaggle is giving you a metric, i.e. MAE (again a performance/ quality measure) but to evaluate the performance of your ML model, once finalized.
To come back to your question: while both MAE and MSE are performance measures, they are used at two different stages of the modeling process and need not be the same. So, while it makes sense to evaluate your final model on MAE, as you will be judged on it, you can choose either MAE or MSE as the criterion (i.e. for the RandomForest splits) depending on performance at the validation stage.
That being said, keep in mind that you might want to evaluate the validation errors (i.e. for finalizing a model) on the same metric (MAE in this case), to keep the error measure consistent with the test set evaluation. |
H: What algorithms can be used to derive matching rules between known matches in datasets?
Let's say I have two datasets with different column names except for a unique ID key.
Table 1 CSV
first_name,middle_name,last_name,uno,id
John,D,Smith,1,1
John,C,Smith,1,2
John,B,Doe,1,3
Suzy,C,Q,1,4
Table 2 CSV
fname,mname,lname,one,id
John,D,Smith,1,1
John,C,Smith,1,2
John,B,Doe,1,3
Suzy,C,Q,1,4
John D Smith is user ID #1 and is in both tables.
Is there a pre-built algorithm, package or tool that can do the following.
Join across tables where id is the same
For known matches, try to identify what rules could have been used to match the two records together.
Test hypothesis, like "fname and first_name are the same, is that enough to produce the target 'id' variable? Let me check other data. No. What about fname + lname?
Test if assertions hold true against other known matches.
End output would be
table 1 (first_name, middle_name, lastname) are the best join against
table 2 (fname, mname, lname)
AI: What you're trying to do is called "entity resolution" or "record linkage" (you can do a more thorough search in Google). A typical approach is to treat it as classification, using as rows each combination of entries from one table matched against the other, with the label being whether they are a match or not, and any features that you can think of, such as edit distance between the two, number of characters in common, whether they start and end with the same letter, largest common subsequence, number of characters in each, etc.
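A minimal sketch of building such pairwise training rows, using a subset of the sample data from the question and the standard-library difflib for a simple string-similarity feature:
import difflib
import pandas as pd

df1 = pd.DataFrame({'first_name': ['John', 'John', 'John', 'Suzy'],
                    'last_name':  ['Smith', 'Smith', 'Doe', 'Q'],
                    'id': [1, 2, 3, 4]})
df2 = pd.DataFrame({'fname': ['John', 'John', 'John', 'Suzy'],
                    'lname': ['Smith', 'Smith', 'Doe', 'Q'],
                    'id': [1, 2, 3, 4]})

rows = []
for _, a in df1.iterrows():
    for _, b in df2.iterrows():          # every candidate pair (fine for small tables)
        rows.append({
            'first_sim': difflib.SequenceMatcher(None, a['first_name'], b['fname']).ratio(),
            'last_sim': difflib.SequenceMatcher(None, a['last_name'], b['lname']).ratio(),
            'is_match': int(a['id'] == b['id'])   # label: the shared ID gives the known matches
        })
pairs = pd.DataFrame(rows)
# 'pairs' can now be fed to any classifier (e.g. sklearn's LogisticRegression) with is_match as the label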
(a more computationally efficient approach is to treat it as one-class classification). |
H: What is the error rate for random guessing?
I am studying for my Machine Learning exam. In sample exam questions, there is a specific one that I could not understand:
And the question is: "What is the error rate for random guessing?"
I could not understand the question, can anyone explain it to me?
AI: Random guessing of the value of which column? For example, the error of randomly guessing the gender of each row is: the probability that it is male but we guess female, plus vice versa, which is $\frac{7}{10}\times\frac{1}{2} + \frac{3}{10}\times \frac{1}{2} = \frac{1}{2}$, meaning the error of randomly guessing the value of Gender is $50\%$.
You can compute it for the other columns likewise, using the possible values of each column. For example, for Hair Length the error would be $\frac{2}{3}$ or $66.66\%$. |
H: Why normalization kills my accuracy
I have a binary sound classifier. I have a feature set extracted from audio, with a size of 48. I have a model (multi-layer neural network) that has around 90% accuracy on the test and validation sets (without normalization or standardization).
I see that the feature values are mostly around [-10, +10], but there are certain features with a mean of 4000. Seeing such disproportionate values across features, I thought some feature scaling might improve things. So, using scikit-learn tools, I tried the following:
- Simply removing the means from features
- Normalizer
- Min max scaler
- Robust Scaler
And all of the above ended up dropping my accuracy to ~50%! (100% recall, 50% precision)
So how is this possible? And what is the correct way to normalize my data?
AI: There could be a skewed power envelope or non-stationary data. As a result, off-the-shelf feature scaling could attenuate the signal.
There are feature scaling techniques that tend to work better for audio signals, examples include: RMS level (Root Mean Square Level), Cepstral Mean Subtraction (CMS), RelAtive SpecTrAl (RASTA), kernel filtering, short time gaussianization, stochastic matching, and feature warping.
You should make sure you understand your raw data and the assumptions of each feature scaling technique before application. Accuracy-driven machine learning might lead to the wrong conclusions. |
H: Reinforcement learning for continuous state and action space
Problem
My goal is to apply Reinforcement Learning to predict the next state of an object under a known force in a 3D environment (the approach would be reduced to supervised learning, off-line learning).
Details of my approach
The current state is the vector representing the position of the object in the environment (3 dimensions), and the velocity of the object (3 dimensions). The starting position is randomly initialized in the environment, as well as the starting velocity.
The action is the vector representing the movement from state t to state t+1.
The reward is just the Euclidean distance between the predicted next state, and the real next state (I already have the target position).
What have I done so far?
I have been looking for many methods to do this. Deep Deterministic Policy Gradients works for a continuous action space, but in my case I also have a continuous state space. If you are interested in this approach, here's the original paper written at DeepMind:
http://proceedings.mlr.press/v32/silver14.pdf
The Actor-Critic approach should work, but it is usually (or always) applied to discrete and low-dimensional state space.
Q-Learning and Deep-Q Learning cannot handle high dimensional state space, so my configuration would not work even if discretizing the state space.
Inverse Reinforcement Learning (an instance of Imitation learning, with Behavioral Cloning and Direct Policy Learning) approximates a reward function when finding the reward function is more complicated than finding the policy function. Interesting approach, but I haven't seen any implementation, and in my case the reward function is pretty straightforward.
Is there a methodology to deal with my configuration that I haven't explored?
AI: As you say, policy gradient methods work better than value-based methods (like DQN) with continuous action spaces.
Your problem seems very similar to the MountainCarContinuous environment. If you check the leaderboard, you can see how the top score uses A2C. You can find a clear implementation of the same algorithm (using the same environment) in the following notebook: Continuous MountainCar Actor Critic Solution.ipynb.
Here you can also find a collection of Policy Gradient Algorithms. For example, PPO has proven to be very successful in environments like the one that you describe.
TL;DR Look for Policy gradient methods. |
H: Are there real world applications where deep fully connected networks are better suited than ConvNets
I would like to give some brief background for my question to avoid answers that explain the difference between fully connected nets and ConvNets.
I completed the first 3 courses in the deep learning specialization from deeplearning.ai (Coursera). Simultaneously I've worked through the first two homework assignments from Stanford's CS231n course for the Spring of 2017, as well as Michael Nielsen's book on deep learning.
I believe I have a reasonable understanding of the architecture of both types of networks, and how to write them in Python/TensorFlow. I also implemented my own very simple fully connected feedforward net in C, where I was able to get 98% on MNIST.
I've also done some reading about this question in terms of which type of data is better suited for a ConvNet, and the answer I've come up with is similar to:
If the order of the input features matters to their meaning, then a ConvNet will work best, but if you replace the position of one column with another and the meaning is still intact, then a ConvNet will not work.
This means that spreadsheet data (if I've understood correctly) is not suited for a ConvNet.
Unfortunately I don't know any deep learning practitioners to ask this question: are there fully connected deep neural networks in the wild that are currently in use? The best example I can think of is financial data. If that's true, what are some others?
AI: Yes, there are plenty of use cases where a fully connected network is more appropriate.
A ConvNet works because the weights are shared in its kernels. It makes a lot of intuitive sense to share the weights in an image recognition task, because a flower at the top left of a picture has the same local pattern as a flower at the bottom right of the picture. What we want is to learn patterns that are local.
However, this logic no longer applies when you consider a problem such as the famous iris classification problem. The idea of "local" simply doesn't exist for such a dataset: each of the features (i.e. petal width, petal length, etc.) is a "standalone" concept. In those cases, it simply doesn't make sense to use a ConvNet.
BTW, you can use a ConvNet on financial data as well, as financial problems usually involve a time dimension, where the concept of "local" does exist. |
H: RL - Weighthing negative rewards
Let's consider that I give an agent a reward of -1 (minimum reward) every time it performs an action which leads to the premature end of the episode (i.e., the agent dies). Besides, I also give a negative reward in the [-1, 0) interval when the agent performs an action that I want to avoid to repeat too much. However, these actions do not terminate an episode.
Is it possible that the agent will learn to take the "-1" action given that it is possible to collect many negative rewards which in the end will sum less than -1? This is, the agent will prefer to commit suicide rather than experience many negative rewards.
P.S.: the agent also receives positive rewards in the (0, 1] interval, which tend to be of higher magnitude than the negative rewards, but the agent receives them less often while it is exploring.
AI: Yes, it is possible. In my case, with a specific reward function, my agent (a snake) preferred to commit suicide instead of trying to reach the target, because of the "living penalty" (the agent receives a penalty each step, to speed up the exploitation phase over exploration). Check these blog posts; they go through all of this in detail and they're so far the best ones I have found:
https://mpatacchiola.github.io/blog/2017/01/29/dissecting-reinforcement-learning-3.html
Also, since the author has a background in Cognitive Neuroscience, each algorithm is explained from a neuroscience point of view. |
H: What are Machine learning model characteristics?
I received this question in a machine-learning-related interview. Here is the question:
What questions would you ask to learn about machine learning model characteristics?
This is what I think:
I did a bit of research on the internet and found this resource, but I am still not very clear on how ML model characteristics relate to trade-offs between different algorithms in supervised learning settings. What I understood, and how I framed my answer, is:
First, for simplicity, I assumed that this model is used for some supervised learning task (classification/regression); then:
I would first try to find out which learning algorithm was used to create this model, because this will give me a clue about the created model and help me talk about different issues within it, such as:
How complex or simple is your model (feature engineering!)?
What are your training and test errors (model performance)?
Both will help me to talk about the bias vs variance trade-off.
Furthermore, I could talk about different learning algorithm trade-offs in supervised settings.
Is the model based on identifying correlations, i.e., can the output variable be expressed as a linear/non-linear combination of the features
(targeting linear regression, logistic regression, SVM or neural network algorithms), or
is it a decision-tree-based algorithm?
I am not confident that my answer is complete or correct (since I am not able to understand precisely what the characteristics of a model are), so I need this community's help and feedback! Please feel free to add your suggestions.
AI: I'm a little torn on helping on this question because I think that you're being given good advice above about modifying your question and using this site in a better way. But at the same time, I hate when questions are closed so quickly on here because the people with those votes just do a terrible job with that privilege (a privilege that I have but rarely use because nothing should be closed here). So, I'm going to choose to help here but please use the feedback you're being given when posting here in the future.
When I interview most data scientists, I am looking for understanding of concepts and rationale. With this particular question, I don't think they are looking for deep detail; a smart scientist starts by getting a high view into the project. So I think that with this question, they want to see how you walk through the analysis. I would reply with the following, roughly in this order:
What is the business case the algorithm is trying to solve?
Is this algorithm predictive or is it doing categorizations?
How many factors are in the complete dataset? How many factors are actually used?
Is it a neural network or does it use "traditional approaches" like regression, decision trees, etc, etc?
Can you show me a confusion matrix for the results? What is the accuracy? What is the recall? What is the precision?
Can you show me an ROC curve?
I think that at this point, once you are given the information and have time to analyze it, you will be in a much better position to make statements about a particular model. Good luck! |
H: How to represent linear regression in a decision tree form
I have read that decision trees can represent any hypothesis and are thus completely expressive. So how do we represent the hypothesis of linear regression in the form of a decision tree ? I am referring to the equation
$w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_n x_n = y$
AI: The basic idea would be to divide up your feature space in small multi-dimensional intervals, and then assign to each point in a given interval the average value that your linear regression model has in that interval. This is something you can do with a tree.
This is similar in spirit to approximating (in 1D space) the function $y = x$ with a "piecewise constant", "staircase-like" function like http://mathworld.wolfram.com/NearestIntegerFunction.html: you could divide your 1D space in equal intervals (e.g. of length 1), and you'd assign to each interval the average value that the function $y = x$ has in it.
Note that such "piecewise constant" function can be defined as a tree: suppose you wanted to know the "tree" approximation of $y = x$ for $x^* = \pi = 3.14159..$, then you could do:
Is $x^* > 0$? Yes
- Is $x^* < 0.5$? No (if it was, I would have approximated $x^*$ with 0)
- - Is $x^* < 1.5$? No (if it was, I would have approximated with 1)
- - - Is $x^* < 2.5$? No (if it was...)
- - - - Is $x^* < 3.5$? Yes -> I approximate $x^*$ with $3$ (3 is the average of $y = x$ between 2.5 and 3.5).
Note that as the size of the intervals shrinks (in "tree language", as you grow your tree more and more), the better the approximation becomes. Also, rather than a decision tree, one would speak of a regression tree in this context.
To generalize this idea, just imagine that you could carry out a similar procedure for any linear function in 1D, $y = ax + b$, using a "staircase-like" function, and in N dimensions for any linear function in N-D, $y = a_0 + a_1 x_1 + \cdots + a_n x_n$.
Actually, you don't have to restrict yourself to linear functions since, as you said, trees are even more flexible (you just need to appropriately assign the values to each interval of your feature space)!
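As a rough illustration of this idea (not part of the original answer, and assuming scikit-learn is available), a regression tree of growing depth approximates the linear function $y = x$ better and better:
import numpy as np
from sklearn.tree import DecisionTreeRegressor

x = np.linspace(0, 10, 200).reshape(-1, 1)
y = x.ravel()  # the linear function y = x

for depth in (2, 5, 10):
    tree = DecisionTreeRegressor(max_depth=depth)
    tree.fit(x, y)
    y_hat = tree.predict(x)
    # the maximum approximation error shrinks as the tree grows deeper
    print(depth, np.abs(y - y_hat).max())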
H: Optimal proportion between the amount of Class = 1 and the amount of Class = 0?
I am quite new to machine learning methods, so I may not write proper technical formulas.
My question is about the optimal proportion between sample size in Class = 1 and Class = 0 in a binary classification problem. (I might use different models, but I will start with decision tree).
I am trying to develop a risk model for a population of 300.000 by using some risk factors, and each risk factor has a risk weight. So basically it is an additive (cumulative) model:
Risk_amount = Risk_factor_1 * weight_1 + Risk_factor_2 * weight_2 + Risk_factor_3 * weight_3 ...
My big question is whether a Risk_amount really belongs to Class = 1 or not.
I want my ML-model to find the most optimal weights to those risk factors.
I already know that 1000 out of 300.000 are actually risky. So the amount of target data with Class = 1 is 1000.
So the total amount of target data for the non-risky population (Class = 0) is 299.000.
The question is what is the optimal proportion between the amount of Class = 1 and the amount of Class = 0?
1000 Class = 1 vs. 1000 Class = 0
or
1000 Class = 1 vs. the whole Class = 0 as 299.000?
AI: The balance between two classes in a classification is very important, as you do not want your model to overfit for a particular class.
This is where you use metrics apart from accuracy to really evaluate how good your model truly is. In case you are not able to balance the dataset, there are multiple ways to work with imbalanced data. They are as follows:
You could use certain techniques like SMOTE to generate more samples of the under-represented class (a short sketch of this and the next option follows this list)
You must split your dataset in test and train with stratification so that you have a balance in your evaluation.
You can sub-sample the larger class to balance the two classes, and repeat this multiple times by taking random sub-samples of the larger class.
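As a hedged sketch of the first two options (assuming X and y are already numeric arrays and a recent version of imbalanced-learn is installed):
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

# stratified split keeps the 1000 / 299.000 class ratio in both train and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# oversample the minority class on the training set only, never on the test set
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)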
Thorough analysis of the result is very important to understand how to proceed towards the solution to the problem. Please look at f1 score, precision and recall apart from accuracy. Also read about micro/macro averaging of these metrics.
There are a lot of conversations on datascience stackexchange and stackoverflow on how to work with imbalanced data for classification. Here is a link : https://stackoverflow.com/questions/40568254/machine-learning-classification-on-imbalanced-data
Have fun with machine learning :) |
H: How to represent a user who hasn't churned in training data
I am building a file with sample data that has a bunch of variables:
date, customer_id, amount_spent, number_of_transactions, time_since_last_transaction etc., that I am mapping against days_to_churn.
I will train my model using Keras to map the first set of variables (the features) to days_to_churn (the target). However, there are many instances where a user is an active subscriber and this value is blank.
How would I go about incorporating this to ensure that I am not excluding active people from my churn calculations?
AI: In your setup, the only way is probably to set it to some large number, say 365. However, this will force you to discard all dates that are less than 365 days old, because you can't be certain a customer won't churn before he reaches 365 days (which is still in the future).
A better and more common way is to look at churn on a rolling window basis, e.g. will the customer churn within the next 30 days. This turns your problem into a classification problem. That way you only need to discard the most recent 30 days of data. |
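As a rough pandas sketch of such a rolling-window label (the column names churn_date and date are only illustrative, not taken from your file):
import pandas as pd

HORIZON = pd.Timedelta(days=30)

# 1 if the customer churns within 30 days of this snapshot date, else 0
df['will_churn_30d'] = (df['churn_date'].notna() &
                        (df['churn_date'] - df['date'] <= HORIZON)).astype(int)

# drop the most recent 30 days, whose label cannot be known yet
latest = df['date'].max()
df = df[df['date'] <= latest - HORIZON]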
H: Why does Feature Importance change with each iteration of a Decision Tree Classifier?
After applying PCA to reduce the number of features, I am using a DecisionTreeClassifier for an ML problem.
Additionally I want to compute the feature_importances_. However, with each iteration of the DecisionTreeClassifier, the feature_importances_ change.
The feature_importances_ values differ between iteration #1 and iteration #2 (screenshots of both are omitted here).
Why would it change? I thought the initial split was made on a feature to "produce the purest subsets (weighted by their size)". Acting on the same training set, why would that change?
Thanks in advance for any help.
AI: From sklearn.tree.DecisionTreeClassifier help:
The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data and max_features=n_features, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain a deterministic behaviour during fitting, random_state has to be fixed.
Also, you might want to have a look at my critique on feature importance. |
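To make the importances reproducible, fixing the seed is enough; a minimal sketch (X_train and y_train are assumed to come from your own pipeline):
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)
print(clf.feature_importances_)  # identical on every rerun with the same data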
H: What is a suitable Tensorflow model to classify images into foggy/not foggy?
I want to classify photos taken by multiple webcams that are operating in mountainous regions into foggy / not foggy. The photos are in various sizes and were taken under very different light conditions and in different areas.
I read about Tensorflow and its ready-to-use image recognition models (which of course would have to be re-trained for the foggy/non-foggy categories).
However, these models are trained to classify images into categories according to objects within these images. As I want to classify my images based on their overall appearance (blurry, greyish, far away objects hardly detectable, ...) I was wondering if these models are really suitable or if there is a better approach for this task. Any help is highly appreciated!
AI: From the sounds of the problem you could probably do some thing with extracting some features from the images such as how many edges they have, brightness (to get day/night), average color values. Then using a more simpler classification algorithm such as SVM, KNN, Decision Tree, Random Forest.
The TensorFlow ready-to-use models look to be very complicated models with a large number of layers, so they will take a long time to train and run. They will also be very hard to train from scratch (retraining a pretrained network could help with that, though). So I think they might be a bit overkill. Also note that those models were probably made with the ImageNet dataset in mind, which has 1000 classes, whereas you have just 2.
It's very hard to know what will work without seeing the images or being able to try it first.
I would start with simpler faster methods before trying slower more complicated methods.
So, in order: try the feature extraction plus classifier; if that is unable to learn a good relationship, then move on to a basic CNN; if that doesn't work, move on to the more complicated CNN models.
With respect to looking at the objects within the image rather than the overall appearance: for a CNN, using different pooling layers can affect this. E.g. max pooling takes the max value of a filter, so it can be largely affected by a small part of the image, whereas average pooling is better at looking at the whole image.
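A hedged sketch of the "simple features + classic classifier" idea (image_paths and labels are assumed to exist; the thresholds and feature choices are illustrative only):
import cv2
import numpy as np
from sklearn.svm import SVC

def simple_features(path):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    return [gray.mean(),   # overall brightness (day/night)
            gray.std(),    # contrast, which fog tends to flatten
            edges.mean()]  # edge density, which fog blurs away

X = np.array([simple_features(p) for p in image_paths])
clf = SVC(probability=True).fit(X, labels)  # labels: 1 = foggy, 0 = not foggy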
H: Strategies for continuously assessing and improving model performance
I am building a supervised machine learning model to generate forecasts.
So I would have historic data like this:
SKU, Month, .... other features, Actual Volume
I can use a model to generate forecasts, using the actual volume as the label.
Of course, there would be a variance between the forecast volume and the actual volume
What are the proper ways to leverage such data, without generating any data leakage, to incorporate such info to train the model to minimize the variance?
Should the data be fed back into the model with a moving average, etc., and the model retrained, or is there a better strategy to properly assess the performance of the model and learn from it?
The data will be time series data with various features such as exchange rate, salesperson, etc.
AI: I'm glad to see this question because this site gets such few questions on models that are actually in a production state.
If I was in your position, I would start to think about how I'm going to use this algorithm on a go-forward basis and start to log everything. Every new prediction that your algorithm makes is also a new data point for training.
So you need to take the time to create a feedback loop and (1) take new datapoints and put them into training, (2) re-train your algorithm, (3) deploy a new version and then (4) use that revised algorithm. You repeat this entire process on a continual basis throughout the life of your project. From the technical side, it's an exercise in reinforcement learning since your algorithm won't start from a base of zero knowledge. Also, most of the steps I described will most likely require you to write new code, most of which will be related to process and workflow, not necessarily data science or algorithmics.
Finally, as an FYI, it is said that there are multiple algorithms out there that carry out these types of cycles on a continual basis, literally updating on a minute-by-minute (or faster!) basis, always providing answers on the latest available data points. |
H: Image normalisation methods
I have found some research papers specifying explicitly the normalisation technique they used to get the results.
What difference do
IMG / 255.0
and
(IMG - mean) / STD
have on the performance of CNNs?
AI: So, before just diving into the performance of the CNNs based on the two methods, just lets start with what they both will do to the input.
The first method (IMG / 255.0, which we'll call scaling) will have the data scaled between 0 and 1. The second method ((IMG - mean) / STD, which we'll call centering) will produce inputs ranging over arbitrary numbers (both positive and negative) with 0 being the mean.
Scaling will improve the convergence speed of the model, whereas centering will not only speed up the convergence of the model, but will also deal with the gradient exploding and vanishing problem.
As mentioned in CS231n, centering is enough, since it takes care of the scaling aspect as well.
Lastly, as long as you don't have a very deep model, the two techniques won't make a lot of difference, because batch normalisation takes care of pretty much everything regarding image normalisation: all the layers receive normalised data, not just the first one.
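For concreteness, a small numpy sketch of the two options (img is assumed to be an image array):
import numpy as np

img = img.astype(np.float32)

scaled = img / 255.0                        # values in [0, 1]
centered = (img - img.mean()) / img.std()   # zero mean, unit variance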
H: What features are extracted from pre-trained model of CNN Keras?
I would like to use a pre-trained CNN model for feature extraction, but I don't know what features are extracted by it.
Please let me know!
AI: Here is a good post about how features are extracted using CNN: https://towardsdatascience.com/cnn-application-on-structured-data-automated-feature-extraction-8f2cd28d9a7e
"I don't know what features are extracted from that."
It depends on what dataset was that CNN pre-trained on. E.g. if the model was pre-trained on a dataset consisting of dogs and cats, the features should contain useful information about dogs and cats.
In practice it turned out that models pre-trained on the ImageNet dataset extract features which are a good starting point for training on other custom datasets.
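As a hedged sketch of using an ImageNet-pretrained model as a fixed feature extractor in Keras ('my_image.jpg' is just a placeholder path):
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

model = VGG16(weights='imagenet', include_top=False, pooling='avg')

img = image.load_img('my_image.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
features = model.predict(x)  # shape (1, 512): one feature vector per image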
H: Data Science Career: From Researcher To Data Scientist
I'm currently working as a Research Assistant in Computer Science, specializing in both the Human-Computer Interaction (HCI) and Health Informatics (HI) fields. As part of my role, I collect data from several clinical professionals (clinicians, physicians, and doctors) and analyze it (e.g., t-test, ANOVA and so on). Transforming the analyzed data and organizing it into several datasets, we report and use it in scientific papers that substantiate our analysis.
Recently, I started to take an interest in the Data Science field, and I'm considering developing in that direction and becoming a Data Scientist. As I mentioned, I'm already applying statistical analysis as part of my job. That raises a few questions:
How do the HCI and Data Science areas relate to each other?
How can I shift from being a Research Assistant in the HCI field into Data Science career?
What kind of studying/courses should I take?
AI: Great question! I can appreciate that you have a base of statistics to work from. Most data scientists - especially the ones that go to these data "camps" or whatever - out there do not have this and it should serve you well in your career. Here are some answers to your questions, in order:
Your background will help you because you are used to things like study design and execution and working with data. But I don't see anything specific to HCI that would help you. When you are a good data scientist, the data itself takes on a muted importance. Right now, when I work on a project, I care that the data was collected in a sustainable way, but what the data is doesn't really matter. Eventually you will reach this too - it's all just numbers and math :-)
You have the statistics side. So now you need (1) the programming skills and (2) the modeling skills. You should choose to learn a language like Python or R and develop your skills there. From there, you should start looking at the relevant libraries like Tensorflow and Keras to help you build your modeling skills. You should also pick up a textbook or some other resource that talks about what models actually do and how you can tune them, agnostic of the language you choose. Finally, you'll be ready for some research on neural networks and how those work. The measure of success here is being able to write some "traditional" models in Tensorflow (like regression, decision trees, etc) and also being able to write a neural network in Keras.
I think that your base of already working around experiments and (presumably) applying the scientific method will serve you well and already puts you ahead of most data scientists out there. Now, it's just a matter of being able to do all of that with code and, if you follow the steps above, I think you will see those connections and how you can enter the world of data science. |
H: How to get mean test scores from GridSearchCV with multiple scorers - scikit-learn
I'm trying to get mean test scores from scikit-learn's GridSearchCV with multiple scorers.
grid.cv_results_ displays lots of info. But
grid.cv_results_['mean_test_score'] keeps giving me an error.
I've checked the docs and similar questions with no luck. What am I doing wrong?
Code:
scorers = {
'r2': 'r2',
'nmsle': 'neg_mean_squared_log_error',
'nmse': 'neg_mean_squared_error'
}
params = [
{
'regressor': [GradientBoostingRegressor()], 'preprocessing': [None],
},
]
grid = GridSearchCV(pipe, params, cv=5, scoring=scorers, refit='nmse')
grid.fit(X_t, y_train)
grid.cv_results_['mean_test_score']
Error:
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-161-a021fe7b05bb> in <module>
20 print(f'Val score set score rmse: {rm}')
21
---> 22 grid.cv_results_['mean_test_score']
23
24 # for mean in means:
~/anaconda3/lib/python3.6/site-packages/sklearn/utils/deprecation.py in __getitem__(self, key)
124 warn_args, warn_kwargs = self._deprecations[key]
125 warnings.warn(*warn_args, **warn_kwargs)
--> 126 return super(DeprecationDict, self).__getitem__(key)
127
128 def get(self, key, default=None):
KeyError: 'mean_test_score'
AI: For multi-metric evaluation, the scores for all the scorers are available in the cv_results_ dict at the keys ending with that scorer's name ('_scorer_name').
so use
grid.cv_results_['mean_test_(scorer_name)']
Ex: grid.cv_results_['mean_test_r2'] |
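With the scorers dict from the question, you can list all three means in one go:
for name in scorers:  # 'r2', 'nmsle', 'nmse'
    print(name, grid.cv_results_['mean_test_%s' % name])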
H: How to pass 2 features to LSTM , one of them is one-hot-encoded with Keras?
I have a very simple LSTM model
model = Sequential()
model.add(LSTM(64, input_shape=(seq_length, X_train.shape[2]) , return_sequences=True))
model.add(Dense(y_cat_train.shape[2], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_cat_train, epochs=100, batch_size=10, verbose=2)
The input X_train has 2 features: one is categorical (values 1-4) and the other is numeric (values 1-100). There are 4 classes in the target that I one-hot-encoded with Keras's to_categorical.
Should I encode the categorical input feature as well? If I do, how can I pass it along with the other feature? (e.g. now a timestep looks like this: [1,44])
Later, I would like to do sampling, meaning I need to take the predicted y_hat<t-1> and pass it as x<t>. I will have to pass the second numeric feature (1-100) along with it. How can that be done?
EDIT : note that I do not want my numeric feature to become categorical since there is importance to the values (meaning 2<10<90 etc)
AI: (1) Yes, it is a common practice to encode the categorical feature by one-hot encoding, for example, encode [1,44] as [1,0,0,0,44], encode [2,44] as [0,1,0,0,44], etc.
(2) Same as (1), just concatenate the one-hot encoded categorical feature and the numerical feature(s). |
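A minimal sketch of (1), assuming X_cat holds the categorical values 1-4 and X_num the numeric feature, both shaped (samples, timesteps):
import numpy as np
from keras.utils import to_categorical

one_hot = to_categorical(X_cat - 1, num_classes=4)              # (samples, timesteps, 4)
X = np.concatenate([one_hot, X_num[..., np.newaxis]], axis=-1)  # (samples, timesteps, 5)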
H: WEKA Random Forest J48 Attribute Importance
I have been using WEKA to classify very long duration audio recordings. The best performing classifiers have been Random Forest and J48. The attributes used to classify the audio are acoustic indices. This process of generating these indices is quite resource intensive.
I would like to determine the importance of the various attributes. Is there a way for these classifiers to report this?
I see J48 produces a decision tree; is it safe to say that the attributes used at the root of the tree are the most important?
AI: If you must use WEKA, I would suggest looking in the Attribute Selection panel. In this panel, you can rank attributes by information gain, as well as look at which subsets of attributes perform best. To my knowledge, you can't get this information directly from the classifier in the WEKA Explorer, but it is quite easy to get in other frameworks such as scikit-learn: see this example in the docs. |
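For reference, a hedged scikit-learn equivalent (assuming your acoustic indices are in X, the labels in y, and feature_names lists the index names):
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200).fit(X, y)
for name, score in sorted(zip(feature_names, rf.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(name, score)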
H: Using neural network for "features matching" binary classification
We have a dataset of numerical features from two images and we want to check if these images match or not using only these features.
Basically we have have these columns:
fA1, fA2, ..., fA14: 14 features from image A
fB1, fB2, ..., fB14: 14 features from image B
We want to predict if image A match image B (y=1) or not (y=0). So it's a features matching problem.
The main use case is face recognition using this framework:
BERND HEISELE
So is there any neural network architecture known for this situation (we have a 7-million-sample annotated training set)?
N.B: we don't have any images, we have only numerical features.
AI: From what I understand, your dataset is of pairs of images and a binary classification of their pairing?
There are models using an architecture named Siamese Neural Networks which are used exactly for this task of determining similarity.
You can start by reading the following article: Learning to Compare Image Patches via Convolutional Neural Networks. They use a Siamese architecture to compare different image patches (similar to what you are describing).
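A rough Keras sketch of a Siamese network over the two 14-dimensional feature vectors (layer sizes are illustrative, not taken from the paper):
import keras.backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model, Sequential

def make_encoder():
    m = Sequential()
    m.add(Dense(32, activation='relu', input_shape=(14,)))
    m.add(Dense(16, activation='relu'))
    return m

encoder = make_encoder()          # a single encoder, so the weights are shared
in_a, in_b = Input(shape=(14,)), Input(shape=(14,))
emb_a, emb_b = encoder(in_a), encoder(in_b)

# element-wise absolute difference of the two embeddings
diff = Lambda(lambda t: K.abs(t[0] - t[1]))([emb_a, emb_b])
out = Dense(1, activation='sigmoid')(diff)   # 1 = match, 0 = no match

model = Model(inputs=[in_a, in_b], outputs=out)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.fit([fA, fB], y, ...)  where fA and fB are the (n_samples, 14) feature matrices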
H: How to classify images Neural Network didn't trained to Understand
Let's say I trained a convolutional neural network to identify cats, dogs and wolves. But suddenly I feed it pictures of rabbits and lions. So how can I classify those pictures as "Other"?
I tried to do this by adding a "Sigmoid" activation function and getting probabilities for each class. I thought the probabilities for each class might be spread out. I thought if I fed a rabbit picture to the NN it would output 20% cat, 44% dog and 34% wolf. That way I could clearly tell the NN is confused. But unfortunately it gives me results like 70% cat, 10% dog, 20% wolf.
Can You suggest me a Way to fix this problem ?
AI: Unfortunately, a neural network is only able to compute probabilities on labels that it has been trained to recognize. In your model, you only have three identified labels and presumably trained on a dataset that only includes those three classes. So your model is evaluating everything in terms of those three labels. If you feed in images of a car, it is going to give you the probability of the car being a cat, dog, or wolf, and the probabilities will add up to 100%.
There are several approaches to try to deal with this problem.
Increase Training Examples
As Nga Dao suggested, add another class others and add a bunch of images that are not part of the target classes with the label others. This is probably the easiest option but it may not produce much better results.
One-vs-Rest Modeling
Create a binary classifier for each class and take the class with the highest probability over a threshold. For example, when classifying a rabbit using the cat binary-classifier, presumably, the probability of a cat will be lower than not a cat. If all probabilities are below a threshold you feel is significant, then label the image as other. If you have two significant probabilities, which might happen when classifying a dog and you have dog and wolf as classes for example, take the class with the highest probability.
You may need to optimize the threshold to get the best performance out of your classifier. |
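A small sketch of the thresholding idea (probs is assumed to be the softmax output for one image; the threshold should be tuned on a validation set):
import numpy as np

classes = ['cat', 'dog', 'wolf']
threshold = 0.6

idx = int(np.argmax(probs))
label = classes[idx] if probs[idx] >= threshold else 'other'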
H: Ploting eigenvectors
I've generated two clouds of 3d points from multivariate_normal
data = np.random.multivariate_normal([2,2,2],[[1,0,0],[0,5,0],[0,0,10]],
size=500)
data = np.vstack((data, np.random.multivariate_normal([-2,-2,-2], [[1,0,0],[0,5,0],[0,0,10]], size=500)))
data = data - data.mean(axis=0)
And try to do PCA like this
covmat = np.cov(data.T)
v, W = np.linalg.eig(covmat)
And draw:
def get_vec(eig_v, eig_vec):
t = np.linspace(0, eig_v)
return np.array([np.array(v * eig_vec) for v in t])
def ang(v1, v2):
return np.rad2deg(np.arccos(np.dot(v1,v2)/np.linalg.norm(v1)/np.linalg.norm(v2)))
l1 = get_vec(v[0], W[:,0])
l2 = get_vec(v[1], W[:,1])
l3 = get_vec(v[2], W[:,2])
x = data[:,0]
y = data[:,1]
z = data[:,2]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(l1[:,0],l1[:,1],l1[:,2], c='r')
ax.plot(l2[:,0],l2[:,1],l2[:,2], c='b')
ax.plot(l3[:,0],l3[:,1],l3[:,2], c='y')
ax.scatter(x,y,z,c='g')
plt.show()
This is what I get:
It's clearly visible that the axes are not orthogonal. I've checked and they seem to be orthogonal numerically:
print(ang(W[:,0], W[:,1]))
print(ang(W[:,0], W[:,2]))
print(ang(W[:,1], W[:,2]))
90.00000000000003
89.99999999999999
90.0
Could it be that such a tiny error makes that much visual difference?
AI: The PCA eigenvectors do not look orthogonal because your figure axes are not equal.
Set all axis equal with something like this:
ax.axis('equal')
or
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.set_zlim(-5, 5)
ax.set_box_aspect((1, 1, 1))  # matplotlib >= 3.3
H: Accuracy keeps changing when changing the randomState of the classifier
I am trying to classify car sound samples using MLPClassifier from scikit-learn. I'm getting very different and confusing test results between 2 different test sets, and I am stuck:
Training is done with the first dataset of 1500 samples, split as 70/30 train/test. The second set is isolated; I use it only for the final testing: 700 samples of completely unseen data.
The first test set is stable, always around 90% test and 99% train set accuracy.
The second test is completely random; it changes when I change the randomState variable of the classifier. It can be 20% or 80% accuracy.
This is extremely frustrating. The difference between the two sets is that the first one is mainly from sounds that come with the UrbanSound data collection, while the second set is more real-world: I recorded it with an iPhone. I checked that they all have the same duration, sample rate, and bit rate.
So my question is:
if your accuracy changes randomly when you change the randomState of your classifier on a certain test set, what does this tell you about your data?
And second, what should my approach be now? Totally lost.
P.S.: My features are the first 20 MFCC coefficients, OR 60 bands of the mel spectrogram. I try different things.
I am also wondering about all these code examples and academic papers on sound classification which use the sound samples from the UrbanSound and ESC-50 sets: did they ever test their accuracy with completely random real-world sounds, recorded and processed with different tools?!
Below is when I plot these 2 different sound sets(only the positive class) with:
plt.plot(car_features_1,'.')
plt.plot(car_features_2,'.')
They look quite different to me; they are both car sounds, but one is recorded with an iPhone and the other comes from UrbanSound, etc.
This is the code for 60 features (60 from the mel spectrogram):
clf = MLPClassifier(activation='relu', solver='adam', alpha= 1, hidden_layer_sizes=(60, 60, 60), random_state=None, max_iter=2000)
clf.fit(X_train, y_train)
And this is how i extract features
mel = np.mean(librosa.feature.melspectrogram(X, sr=sample_rate).T,axis=0) #128 array
mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=20).T,axis=0) #20 array
AI: Pal, you have set alpha to 1. Alpha is an L2 regularization term; its value is normally around 0.0005 ... 0.0001. By setting it to 1 you force the optimizer to make your model's weights almost zero.
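A minimal correction of the snippet above, keeping alpha at (or near) its scikit-learn default instead of 1:
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(activation='relu', solver='adam', alpha=1e-4,
                    hidden_layer_sizes=(60, 60, 60), max_iter=2000,
                    random_state=0)
clf.fit(X_train, y_train)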
P.S.
After setting the L2 regularization term to its default value, you might get better accuracy, but if not, pay attention to the following. The ratio between the amount of data (1500 samples; how many feature vectors?) and the number of weights (roughly 20x60 + 60x60 + 60x60 + 60xN, i.e. on the order of 10,000 weights) is too small. It might well be that the model simply memorizes the training set and does not generalize for sound classification.
Here are some suggestions that might help in training a sound classifier:
Start with a simpler model. Sounds are known to be well classified with Gaussian mixture models, for instance. Unlike deep learning models, those are easier to train.
Use data augmentation to increase the amount and diversity of your training data. You can mix the sounds with some light background noises.
Try to reduce the number of weights. Three layers with 60 neurons each is too much. Usually, layers become smaller going up, something like 64->32->16. Also, convolutional layers can be very useful here, as they share the weights across their neurons. I used them quite successfully for sound classification.
scikit-learn is quite an unusual choice for training deep learning models. I would try Keras; it works perfectly with numpy arrays.
H: How to get a prediction from a trained random forest model?
I have a dataset with two columns, user posts (posts) and personality type (type). I need to predict the personality type from the posts using this dataset, so I used a random forest for prediction.
here is my code:-
df = pd.read_csv('personality_types.csv')
count_vectorizer = CountVectorizer(decode_error='ignore')
X = count_vectorizer.fit_transform(df['posts'])
y = df['type'].values
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y, test_size=0.33)
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(Xtrain, Ytrain)
Y_prediction = random_forest.predict(Xtest)
accuracy:
random_forest.score(Xtrain, Ytrain)
acc_random_forest = round(random_forest.score(Xtrain, Ytrain) * 100, 2)
print(round(acc_random_forest,2,), "%")
100%
Now I want to get a prediction for a custom text. How can I achieve that?
How can I get the personality type of a single post using this model?
AI: Use the CountVectorizer you have fitted to preprocess your custom input then feed it to your model for prediction.
custom_input = ['insert text here']
custom_input = count_vectorizer.transform(custom_input)
custom_prediction = random_forest.predict(custom_input) |
H: Uniformity of color and texture in an image
I am new to the field of deep learning and have a problem in determining whether two images have uniform color and texture. For example, I have a
Master image -
Now, with respect to this image i need to determine whether the following images have uniform texture and color distributions -
image 1 -
image 2 -
image 3 -
I need to develop an algorithm which will evaluate these 3 images against the master image. The algorithm should approve image 1 and reject image 2 because of its color, and reject image 3 because of color and texture.
My approach for the problem was directly analyzing the images for texture detection. I found that the Local Binary Patterns method was good among the texture recognition methods (but I am not sure). I used its skimage implementation with OpenCV in Python and found that the method worked.
from skimage import feature
import numpy as np
import cv2
import matplotlib.pyplot as plt
class LocalBinaryPatterns:
def __init__(self, numPoints, radius):
# store the number of points and radius
self.numPoints = numPoints
self.radius = radius
def describe(self, image, eps=1e-7):
# compute the Local Binary Pattern representation
# of the image, and then use the LBP representation
# to build the histogram of patterns
lbp = feature.local_binary_pattern(image, self.numPoints,
self.radius, method="uniform")
(hist, _) = np.histogram(lbp.ravel(),
bins=np.arange(0, self.numPoints + 3),
range=(0, self.numPoints + 2))
# normalize the histogram
hist = hist.astype("float")
hist /= (hist.sum() + eps)
# return the histogram of Local Binary Patterns
return hist
desc = LocalBinaryPatterns(24, 8)
image = cv2.imread("main.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
hist = desc.describe(gray)
plt.plot(hist,'b-')
plt.ylabel('Feature Vectors')
plt.show()
It detected the features and made a histogram of feature vectors. I plotted the histogram using matplotlib and clearly found that image 1 and image 2 texture features were almost similar to the master image. And image 3 texture features were not matching.
Then I started analyzing images for their color. I plotted the color histograms using opencv as -
import cv2
from matplotlib import pyplot as plt
def draw_image_histogram(image, channels, color='k'):
hist = cv2.calcHist([image], channels, None, [256], [0, 256])
plt.plot(hist, color=color)
plt.xlim([0, 256])
def show_color_histogram(image):
for i, col in enumerate(['b', 'g', 'r']):
draw_image_histogram(image, [i], color=col)
plt.show()
show_color_histogram(cv2.imread("test1.jpg"))
I found that the color histogram of image 1 matched the master image's, while the color histograms of images 2 and 3 did not match. In this way I figured out that image 1 was matching and images 2 and 3 were not.
But this is a pretty simple approach and I have no idea about the false positives it will produce. Moreover, I don't know whether this approach is the best one for the problem.
I also want this to be done by a single, robust algorithm like a CNN (but it should not be computationally too expensive). However, I have no experience with CNNs. So should I train a CNN with master images? Please point me in the right direction. I also came across LBCNNs; can they solve the problem? And what could other, better approaches be?
AI: this is pretty simple approach
Firstly, can you evaluate your scripts on more images to get an idea of how well it performs? If you get an acceptable classification accuracy (or e.g. a good F1 score), then there is no need to try out a CNN!
I have no idea about the false positives
Actually if you cannot evaluate your method like that, then a CNN is also not possible!
If you still want to try a deep learning method, bear in mind that you will generally need a lot of images, let's say at least 1000 in your "master image" training set - then hopefully a good percentage of images to test again (hold-out set / test set).
I don't know the approach for the problem is the best one
Your current method seems reasonable to me. There probably is no single "best" method in general, so don't be too worried.
Somebody has written An Analysis of Deep Neural Networks for Texture classification - maybe that contains some ideas to get you started :-)
Here is a short video intro to one approch that introduced the Local Binary CNN, which was originally used for image classification, but perhaps could be adjusted to your problem.
Instead of aiming for target classes, you would simply need to map input to the labels, and could even relax the focus on colour/texture - the CNN would extract what it needs to learn the mapping.
NOTE: the downside of an end-to-end system like that is that you would no longer know what your model is using as its features to make its prediction! Colour? Texture? Raw RGB values in the top left corner?
This is actually where the LBCNN could shine, because its sparse (stochastically) binary kernels are much less likely to overfit to your training data, compared to a standard CNN.
While I couldn't find the code of the authors from the video above, here is an LBCNN implementation for face detection. |
H: Numpy element wise comparison for a particular value in two arrays
If I have two arrays as shown below:
a = numpy.array([0, 0, 1, 0, 1, 1, 1, 0, 1])
b = numpy.array([1, 1, 1, 0, 0, 1, 1, 0, 0])
Is there an easy way using numpy to count the number of occurrences where elements at the same index in each of the two arrays are equal to one? In the above two arrays, the elements at positions (zero-indexed) 2, 5 and 6 are equal to 1 in both arrays. Thus I want to get a count of 3 here.
Thank you for any help that you may be able to provide.
AI: There are two ways I'll show you (there are probably a lot more using NumPy):
First method: chaining operations
You can use "masking" followed by the comparison and finally a sum operation:
We want all values in a from the indices where b is equal to 1:
part1 = a[b == 1]
Now we want all places where part1 is equal to 1
part2 = part1[part1 == 1]
now we are left with all the places where a and b are equal to 1, so we can simply sum them up:
result = part2.sum()
Method 2: built in numpy.where
This is much shorter and probably faster to compute. NumPy has a nice function that returns the indices where your criteria are met in some arrays:
condition_1 = (a == 1)
condition_2 = (b == 1)
Now we can combine the operation by saying "and" - the binary operator version: &.
part1 = numpy.where(condition_1 & condition_2)[0]  # numpy.where returns a tuple of index arrays, so take the first element
To get your desired output, we can take the length of the resulting array of indices:
result = len(part1)
Read the documentation about numpy.where to see the other things it can do for you! |
H: Machine Learning library in Python, list or numpy or pandas
Background: We are trying to build a customized ML library in Python 3 to tackle analysis we often repeat, in a general fashion. But it would not be nearly as general as sklearn. In fact, we are prepared to break some interfaces if that give us enough performance boost in return.
The basic starting point would be constructing a Learner by feeding it X and y, and predicting on new input X0:
learner = Learner(X, y)
y0 = learner.predict(X0)
One of the design decisions is what data type to use for X and y. Here are 3 choices, along with some rudimentary brainstormed advantages for each:
native Python list: X being a list of lists, y being a list. Would this have better performance for being 'closer to metal'?
numpy: X being an ndarray of (n, p), y being an ndarray of (n, ). This would benefit from the richer functionalities in numpy/scipy. This is also the data type choice of sklearn.
pandas: X being a DataFrame, y being a Series. This can utilize more Data Analysis (read: dirty work) functionalities from pandas. This way we can also refer to variables with their names instead of just integer indices. But performance would be the worst?
Please share your thinking of pros and cons for each choice from both tech and math perspectives. Thanks in advance!
PS: I thought about whether this should be a StackOverflow question, but still feel this is more Data Science.
AI: Pandas normally does a decent job of allowing dataframes to behave as numpy arrays.
My recommendation is to use numpy types, the reason is that, for consistency with pretty much what the industry is doing, you are much safer with numpy.
I love pandas, and I love dataframes, but they provide extra functionality that the model does NOT need, the same way that in general programming you would not use a String to represent a boolean (even though you could do it with a String), simply because you should use whatever data type provides the functionality you need... and nothing else.
So, numpy is the way to go. As for python lists, you do not get the mathematical operations that you get with numpy, so do not consider them. |
H: Use MinMaxScaler , label encoder, one hot encoder , keras prediction file for later prediction
I'm new to neural networks and the data science field. I have a dataset with over 90,000 rows, which includes 9 text columns and 29 number columns. After encoding with a label encoder and one-hot encoder it has over 10,000 columns. Now I would like to save those scalers, encoders and prediction files for later use. But I have no idea how to save and use them later for a single prediction. Any help is appreciated. Thank you.
AI: You could use pickle to store your encoders/scalers/etc. It is a common way of storing Python objects
from sklearn.preprocessing import LabelEncoder
import pickle
# Fit a label encoder
le = LabelEncoder()
X = le.fit_transform(X)
# Pickle the encoder for later use
with open('path/and/name', 'wb') as f:
pickle.dump(le, f)
Then when you have it stored it can be used again by loading the pickle
# Read the pickle from file
with open('path/and/name', 'rb') as f:
le = pickle.load(f)
# Use the already fitted encoder to transform new data
X = le.transform(X)
Then repeat the process for your all your preprocessing objects |
H: Are RNN or LSTM appropriate Neural Networks approaches for multivariate time-series regression?
Dear Data Science community,
For a small project, I've started working on Neural networks as a regression tool, but I am still confused about possibilities of some variants. Here's what I am aiming to do:
I have multiple input data time series $X(t)=[X_1(t), X_2(t), X_3(t),X_4(t)]$, and multiple target data time series which I want to model, $Y(t)=[Y_1(t), Y_2(t)]$. All data are available for training.
I aim to train my model/regression on an interval $[t_0,t_n]$, and then be able to apply it on a larger different interval.
I know that the relation between my $Y$ and $X$ is non-linear, but also that I need to take into account lag, or inertia. For example, $Y_1(t)$ is dependent on $X_1([t-dt_1,t])$ and $X_2([t-dt_2,t])$. All $dt_n$ are different, but I have an approximate idea of how 'far' I need to reach.
With this in mind, through some research I have been guided to focus on Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) networks. I aim to use TensorFlow/Keras to work on this. However, after some reading, I'm getting confused by those solutions. Many people present them in prediction applications, which supposedly means that those networks use data from a time interval (either $X$ or $Y$ in my case) on an interval $[t_0,t_n]$ to predict $Y(t_{n+1})$. But my objective is to use $X([t_0,t_n])$ to model $Y([t_0,t_n])$ (on the same time interval). I am getting confused with this notion of "prediction".
Therefore, are RNN and LSTM networks appropriate solutions for my multivariate time series regression/model project? Or am I already going the wrong way?
As a beginner in this field, any reference or link to resources/tutorials, or a demo, is also gladly welcome.
AI: Here is a really good source to begin multivariate time-series forecasting in Keras using LSTMs.
I aim to train my model/regression on an interval $[t_0,t_n]$ and then be able to apply it on a larger different interval.
There's no harm in this as long as you perform the right kind of multi-step forecasting. If your problem requires you to train on $[t_0, t_n]$, for some $n < 100$, produce $y_1(t_{n+1}), y_2(t_{n+1})$ as outputs, and then APPLY the model on $[t_n, t_n+100]$, there will be issues in implementation, as most ML models will require you to provide the same input shape as you did when you were training. This is where sliding across the time-series will help you. Simply put,
$$
\begin{align}
\text{Train on: }& [t_0, t_n] &\text{ Output: }& y_1(t_{n+1}), y_2(t_{n+1}) \\
\text{Predict on: }& [t_1, t_{n+1}] &\text{ Output: }& y_1(t_{n+2}), y_2(t_{n+2})\\
\text{Predict on: }& [t_2, t_{n+2}] &\text{ Output: }& y_1(t_{n+3}), y_2(t_{n+3})\\
& \vdots & & \vdots\\
\end{align}
$$
I know that the relation between my Y and X is non-linear, but also that I need to take into account lag or inertia. For example, Y1(t) is dependent on X1([t−dt1,t]) and X2([t−dt2,t]). All dtn are different, but I have an approximate idea of how 'far' I need to reach.
Well, you can model your time-series data as $X_1(t-k),\cdots,X_1(t), X_2(t-d),\cdots,X_2(t)$ as $X$ (input) and $X_1(t+1), X_2(t+1)$ as $y$ (target), where $k$ and $d$ refer to different lags for each variable. |
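A hedged numpy sketch of this sliding-window construction, assuming data has shape (T, 4) for X1..X4 and targets has shape (T, 2) for Y1, Y2:
import numpy as np

def make_windows(data, targets, n):
    X, y = [], []
    for t in range(len(data) - n):
        X.append(data[t:t + n])       # window [t, t + n)
        y.append(targets[t + n])      # value right after the window
    return np.array(X), np.array(y)   # X: (samples, n, 4), y: (samples, 2)

X, y = make_windows(data, targets, n=50)  # ready for an LSTM with input_shape=(50, 4)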
H: Is it possible to make a CS:GO Machine Learning AI?
I am not an expert on Machine Learning, Neural Networks or NEAT. In fact, I probably have no clue what I'm talking about. My question is whether you can make a learning AI that learns to play complex multiplayer games and possibly outperform humans. If it is possible, could you also recommend a language or languages to make this AI in? (I know I'll probably have to take a VACation for botting, but it's something I feel like we should try.)
AI: The answer is yes:
Example of a neural network outplaying human players in DOTA. I haven't been able to find much regarding what kind of neural network but here is what is on the OpenAI website. If you're a beginner you can learn some basic architecture and design principles in neural networks using Python and Keras (a neural network library). |
H: Machine Learning applied to database design
Is it possible to apply machine learning to database design given a data source with (pseudo) relationships?
AI: This is such an interesting question. I suppose that it is possible but you would have to answer some more questions before you can actually get help with modeling something.
Are you looking for it to learn SQL or NoSQL?
You'd have to make a distinction between something that can learn relational database design versus something that learns how to be a DBA and work in a particular language. For example, relational databases are based on theory (and relatively straightforward) but how you implement certain things in Oracle or SQL Server (as examples) will vary greatly. Or maybe you're looking for a particular type of design like data warehousing (star patterns, etc). Whichever approach you choose would have a profound effect on the type of model you are going to build.
There are some pitfalls that you would have to account for. A relationship based on text columns is acceptable design, but a relationship based on integer hashes of those same text columns is much better. How a model would account for something like this is unknown to me.
Relating to the item above, you would have to come up with some metric for the success of your model. Is it the levels of relational design that it can reach? Is it some hardware performance benchmark? Is it some level of cognition that your model can reach for extremely complex designs?
I think that once you answer these types of questions you will be in a much better position to start model development. |
H: Machine Learning in real time
I am a newbie in the ML world, but very curious and enthusiastic about it. I have gone through articles and some hands-on exercises too. I still have a silly doubt.
In sample datasets (like Iris or diabetes or breast cancer etc.) and exercises, I find the data to be well-formed and ready to be digested by ML model (algos). But, in reality, it is much more than that.
For example, Amazon's feature of suggesting (cross-selling) products. Based on my previous searches and orders, it probably would be looking into several features and predicting further items. I'm curious -
How does it work in real time?
How my product searches (they are not numbers) are fed into the ML model under-the-hood?
Please share your thoughts.
Regards,
RB
(using Python 3.6 & latest-1 versions of NumPy, Scikit and Pandas)
AI: OK, first regarding the real time part.
Machine learning (particularly supervised learning) has two parts
Training
Predicting
Training is a complex and slooooow process, you need to figure out a good model, then train it with the correct data... not simple and computationally expensive.
Predicting however is MUCH simpler, as it normally consists (in the world of neural networks) of a simple forward pass in the neural network, and a forward pass is, at its core, a bunch of matrix multiplications. So once a model has been built and trained, predicting something using the existing model is quite quick; one can also retrain the model with new data.
Now, regarding the text searches and how to deal with them: there is a very cool idea called word embeddings, which simply consists of transforming words into vectors. Imagine a 2D grid (like the x,y coordinates we studied in school); now imagine that the vertical axis (y) denotes how powerful something is: the higher the y value, the more powerful it is. And now imagine that the x axis denotes the gender of something (let's assume negative x means male and positive x means female).
And now think of the word: King. King is male (x = negative value) and powerful (y = positive value). With those two coordinates, and starting from position (0,0), you have a vector.
Now think of Queen, in this case the coordinates will be x=Positive value and y=Positive value, now you have another vector.
And now think of the word Spartacus (he was a slave in ancient rome), so that word will have x=Negative and y=Negative.
Think of this: suddenly not only can you represent the words as numbers, you can also represent them as vectors, which possibly allows you to figure out that King and Emperor are SIMILAR words.
H: Scaling label encoded values for Linear Algorithms
I have encoded categorical variables to numerical values. As we know that for feeding values to Linear Algorithms like SVM or KNN, we scale the values for columns having large variations.
I have three label-encoded columns, one of which has unique values from 1-3, another has 1-150 and another has 0, 1, -1 where -1 represents missing values. How will the MinMaxScaler affect these? Or is it not needed to scale these columns?
If not, how can I avoid these specific columns? Generally we scale the whole dataframe.
P.S. I'm using LinearSVC algorithm.
AI: Sounds like you should use a OneHotEncoder instead of a LabelEncoder since you are trying to encode non-ordinal data such as missing values. Also, one hot encoded values don't get affected by the MinMaxScaler, so that should be fine.
If MinMaxScaling makes sense is dependent on the categories. If your categories are ordinal, like [1,2,3] is [low, medium, high], then it still makes sense to use a LabelEncoder with a MinMaxScaler.
But if you have non-ordinal categorical values, like [-1,0,1] for [MISSING, DOG, CAT], then it would be better to use a OneHotEncoder instead of forcing ordinality with a LabelEncoder. Otherwise the algorithms you will use will make the assumption that the distance MISSING-DOG is longer than MISSING-CAT, which makes no sense. |
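A hedged sketch with scikit-learn's ColumnTransformer (assuming X is a pandas DataFrame; the column names are placeholders for your own):
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler

pre = ColumnTransformer([
    # one-hot encode the nominal columns (the 1-150 one and the -1/0/1 one)
    ('onehot', OneHotEncoder(handle_unknown='ignore'), ['nominal_col', 'missing_col']),
    # scale the genuinely ordinal 1-3 column
    ('scale', MinMaxScaler(), ['ordinal_col']),
])
X_prepared = pre.fit_transform(X)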
H: Given a 12x12 binary image (only black and white pixels) what is its dimensionality? And how can I define dimensionality of a data space?
Suppose I have a 12x12 grid of pixels that can only be black or white. I can't understand if the dimensionality is 2 or 3. I mean... is the dimension given by 12x12 or 12x12x2?
AI: There is some ambiguity in dimensionality you ask for. The vector space that your input lies in is $12\times12 = 144$ dimensional. So, if you're going to apply some kind of dimensionality reduction algorithm, e.g. PCA, this is the dimension. But, we sometimes tend to refer to dimension as the shape of the tensor, which is $2$ in this case. Some libraries might read the image into $12\times12\times3$ tensors, in which the third entry indicates RGB decomposition, even if the image is black and white. In this case, the tensor dimension is $3$. |
H: Difference in labelling and normalizing train/test data
I am working on a dataset comprised of almost 17000 data points. Since it's a financial dataset and the components are many different companies, I need necessarily to split it by date. Therefore, supposing I have 10 years of data, I am training over the first 8 years and testing over the remaining 2. This approach I am pretty sure is consistent with the classification problem I need to do.
I am using an LSTM network for predicting the direction of financial returns, depending on a bunch of features which are derived from companies' financial statements. Starting from the fact that I am obtaining training accuracy greater than test accuracy with almost any architecture and hyperparameter configuration, I suppose there is something wrong in the way I have manipulated the dataset.
Here come my concerns. I labelled my dataset by looking at the median returns and putting 1 if the return for a single data point (company value at a specific date) is above that median, 0 otherwise. Am I correct if I compute two different medians, so that I label the training set using its median return and, in the same way, the test set using its own median return? Should I compute the median over the entire dataset, label it and then split?
Moreover, I scaled the training data to be in a range of (0,1). Should I do the same kind of normalization with my test set? I did it, but I wasn't sure about it.
It's kind of my first application of neural networks and I need clarification about how to treat the dataset without influencing the results.
AI: Ideally, there should be no information leakage between your training and test sets. You need to scale your test data using the bounds found for the training data. And, you need to calculate the median with respect to your training data. Think as if you'll have the test samples one by one, not as a batch. Then, how would you calculate your test median, and how would you scale it? |
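A minimal sketch of doing this without leakage (X_train/X_test are assumed to be already split by date, and train_returns are the training-period returns):
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler().fit(X_train)   # bounds come from the training data only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)    # test values may fall slightly outside [0, 1]

train_median = np.median(train_returns)  # label BOTH sets against this one median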
H: Should I scale my features?
I have a dataset that looks something like this;
ID | Location | Job_title | blue_jumper | red_jumper | yellow_jumper | green_jumper | Target(purple_jumper)
-------------------------------------------------------------------------------------------------------------------
B7372 | Rome | Builder | 2 | 1 | 0 | 9 | 1
D9823 | London | Lawyer | 0 | 1 | 8 | 3 | 0
E8718 | Rome | Teacher | 3 | 0 | 2 | 3 | 1
etc etc.....
What I would like to do is to use this information to predict whether a person will have a Purple jumper (Target 1 or 0)
Things to note about this dataset I think are the following;
I have an ID that relates to the individual
I have a number of catergorical features
I have some information relating to features that are of the same type (jumpers) but differing by some aspect (colour)
These features are of the same type as the Target (e.g jumpers)
The target is binary (e.g. I am not looking to predict how many Purple jumpers a person has, just whether they have one or not)
As the Target is binary I know I could use a classification method, but I have decided to use Multiple Linear Regression. I like this algorithm because it gives me an output measure that indicates how much like a 1 or 0 my record is.
I have generated dummies for my categorical features but what I am struggling with is whether or not to scale my other features in a situation like this.
AI: It is not necessary to scale your numerical values when using linear regression. However, people still do it since it can speed up training if the algorithm uses gradient descent and it might make your coefficient and intercept terms more interpretable.
You can read more about it here: https://stats.stackexchange.com/questions/185624/feature-scaling-normalization-in-multiple-regression-analysis-with-normal-equa
On another note, there are a lot of binary classifiers that will give you a probability instead of a binary output. For instance, in sklearn many binary classifiers have a predict_proba method that does just this. I can recommend random forest. |
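For example, a minimal sketch with scikit-learn (X and y stand for your dummy-encoded features and the binary target):
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)
proba = clf.predict_proba(X)[:, 1]   # probability of owning a purple jumper, one value per person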
H: Why does averaging a sentence's worth of word vectors work?
I am working on a text classification problem using r8-train-all-terms.txt, r8-test-all-terms.txt from https://www.cs.umb.edu/~smimarog/textmining/datasets/. The goal is to predict the label using a Random Forest classifier.
Each text sentence as been vectorized using the GoogleNews word vectors. The embedding source can be found here: https://github.com/mmihaltz/word2vec-GoogleNews-vectors
In the example I am following along with there is one step that irks me - there is a step that converts my array of vectorized tokens to a single vector by taking the mean over the tokens e.g.
def transform(self, data):
    v = self.word_vectors.get_vector('king')
    self.D = v.shape[0]

    X = np.zeros((len(data), self.D))
    n = 0
    emptycount = 0
    for sentence in data:
        tokens = sentence.split()
        vecs = []
        m = 0
        for word in tokens:
            try:
                vec = self.word_vectors.get_vector(word)
                vecs.append(vec)
                m += 1
            except KeyError:
                pass
        if len(vecs) > 0:
            vecs = np.array(vecs)
            X[n] = vecs.mean(axis=0)  # take the mean of the vectors? what does it mean?
        else:
            emptycount += 1
        n += 1
    print("Number of samples with no words found: %s / %s" % (emptycount, len(data)))
    return X
I am leaving out some boilerplate but later on I run the model and the results are surprisingly good:
model = RandomForestClassifier(n_estimators = 200)
model.fit(XTrain, YTrain)
print("train score:", model.score(XTrain, YTrain))
print("test score:", model.score(XTest, YTest))
> train score: 0.9992707383773929
> test score: 0.9378711740520785
I understand that the random forest model expects one row per example, so it is unable to consume a sequence of embeddings like an RNN might. So you are required to convert to a single row (1-D array).
My question is: WHY does it work? It seems at odds to me that the averaged word vectors would be able to capture anything about the context or meaning of a sentence by merely averaging over the encodings.
Best case scenario I would expect this technique breaks down for larger blocks of text because you would tend to squash all your examples into the same neighborhood of your input space.
It would be great to get some clarification on this.
AI: It works for the same reason why the good old bag-of-words + TF-IDF works. Despite losing some word-ordering information, a text can still be classified by its typical keywords. Since texts on different topics differ a lot with respect to the vocabulary used, simply putting together the words' embeddings might work surprisingly well.
Here is a paper that shows that a simple sentence embedding method beats sophisticated supervised methods including RNNs and LSTMs. Their method is just a weighted average of the word vectors, modified a bit using PCA/SVD. Section 4.3 shows that word ordering plays a role, but not too much.
H: How can the original data be written in the space defined by these M principal components?
Suppose you apply PCA on the data $x_1,...,x_6$ and find that the data can be fully described using M principal components $u_1,...,u_M$. How can the original data be written in the space defined by these M principal components?
AI: Assuming your basis vectors are sorted by decreasing eigenvalues, a sample $x_i$ can be transformed into the new space using dot products (projections), i.e. $[u_1^Tx_i,...,u_M^Tx_i]$. Note that you should first de-mean your data.
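A small numpy sketch of that projection (assuming X is the n x d data matrix and U holds the M principal components as columns):
import numpy as np

X_centered = X - X.mean(axis=0)        # de-mean first
Z = X_centered @ U                     # shape (n, M): coordinates in the principal-component space
X_approx = Z @ U.T + X.mean(axis=0)    # map back to the original space (exact if the M components fully describe the data)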
H: How to use correlation matrix when the dataset contains multiple columns with text data?
How to use it with Amazon fine food reviews dataset?
AI: The problem is that the correlation matrix has to be done with numerical values. So what you have to do is to transform the texts into numerical vectors. There are several ways of doing this, there are libraries like gensim that can make implementation easier.
Bag-of-words
A vector of the most frequently used words across all texts is created. Then for each text sample the occurrence of each word in that sample is counted.
"John likes to watch movies Mary likes movies too"
Would give
{"John":1,"likes":2,"to":1,"watch":1,"movies":2,"Mary":1,"too":1, ... other words in corpus}
And then the values of this could be used to calculate the correlation matrix
[1,2,1,1,2,1,1, ...]
This method can also be improved by using TF-IDF.
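A minimal sketch of this idea with scikit-learn and pandas (texts stands for the list of review strings; the 1000-word limit is arbitrary):
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer   # or TfidfVectorizer

vec = CountVectorizer(max_features=1000)
X = vec.fit_transform(texts).toarray()
df = pd.DataFrame(X, columns=vec.get_feature_names_out())     # get_feature_names() on older scikit-learn versions
corr = df.corr()                                               # word-by-word correlation matrix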
Word embeddings and averaging
Word embeddings map a word to a vector that tries to encode the information in that word. These can be trained from scratch for the task at hand, but there are also pretrained word embeddings that take a word as input and output an embedding vector. To get a single vector for an entire text, the average word embedding can be used.
Doc2Vec
Doc2Vec expands on word embeddings and is specifically designed for embedding entire chunks of text.
H: How much data to use for feature selection?
Working on my master's thesis, this is a problem I'm unable to find good resources about.
I'm working with data from 18 participants, who are either active or passive. Each participant is then subjected to a 3 x 3 experiment, resulting in a total of around 676 trials per participant (around 12,168 trials in total). There are 100 data points in each trial, but they cannot be used separately from the trial (since it's an EEG epoch).
My data consists of 579 features, so I need some sort of feature selection as literature shows that most of them are irrelevant, but I want to use a bottom-up machine learning approach (to verify this).
Is there a rule of thumb/literature to use for the amount of data needed for feature selection?
AI: TLDR
Use all your data, throw a bunch of ensemble ML (probably just random forest) at it, and pick a good model. Usually, that works exceptionally well.
How much data should you use?
Ideally, you should always be separating your data into Train, Test, and Validation. Due to small dataset sizes, this isn't always possible, but it is still important to prevent overfitting. You can use all your training and testing data for feature selection and that shouldn't introduce any significant biases to your ML model. Your validation set should only be used to approximate the error once you are entirely done training your model.
Simple ML Solutions:
Use some kind of random forest or gradient boosting model on all your training/testing data. These models are designed for high performance on high-dimensional data, and by checking feature importances/permutation importances/mean decrease in accuracy you will know which features are/aren't important. It is common to use these kinds of models in the biological space where there are millions of features and only a thousand or so responses. Despite the disparity, I've still reached high accuracy in these situations.
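A minimal sketch of the random forest route (assuming scikit-learn, with X_train of shape (n_trials, 579) and y_train the active/passive label):
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

order = np.argsort(rf.feature_importances_)[::-1]   # rank features by impurity-based importance
top_features = order[:20]                           # e.g. keep the 20 most important ones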
Simple Statistical Solutions
Measure the correlation of each feature with the response and keep only those features whose absolute correlation is above a certain threshold. You can also use a few statistical tests to filter out features that are not significant. Here are a few examples:
Pearson Correlation
F-test
Variance
Lasso Regression (technically an ML algorithm)
Pitfalls of the Above
Each of the above uses some kind of assumption to figure out which features you should select for final model training. Sometimes you don't actually need to pick a subset of features ie when you use RF. Additionally, the statistical tests often miss abnormal types of correlations or miss multidimensional relationships. Feature selection is really a case-by-case decision that no-one can give you a definitive answer on without seeing the actual data you use. |
H: What is Monte Carlo dropout?
I understand how to use MC dropout from this answer, but I don't understand how MC dropout works, what its purpose is, and how it differs from normal dropout.
AI: Let's start with normal dropout, i.e. dropout only at training time. Here, dropout serves as a regularization to avoid overfitting. During test time, dropout is not applied; instead, all nodes/connections are present, but the weights are adjusted accordingly (e.g. multiplied by the keep ratio, which is 1 - dropout_ratio). Such a model during test time can be understood as an average of an ensemble of neural networks.
Notice that for normal dropout, at test time the prediction is deterministic. Without other source of randomness, given one test data point, the model will always predict the same label or value.
For Monte Carlo dropout, dropout is applied at both training and test time. At test time, the prediction is no longer deterministic, but depends on which nodes/links you randomly choose to keep. Therefore, given the same data point, your model could predict different values each time.
So, the primary goal of Monte Carlo dropout is to generate random predictions and interpret them as samples from a probabilistic distribution. In the authors' words, they call it Bayesian interpretation.
Example: suppose you trained a dog/cat image classifier with Monte Carlo dropout. If you feed a same image to the classifier again and again, the classifier may be predicting dog 70% of the times while predicting cat 30% of the time. Therefore, you can interpret the result in a probabilistic way: with 70% probability, this image shows a dog. |
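A minimal Keras sketch of this behaviour (the layer sizes and x_test are placeholders; the key point is passing training=True so dropout stays active at prediction time):
import numpy as np
from tensorflow import keras

inputs = keras.Input(shape=(784,))
x = keras.layers.Dense(128, activation='relu')(inputs)
x = keras.layers.Dropout(0.5)(x, training=True)        # dropout is applied at test time too
outputs = keras.layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs)
# ... compile and train as usual ...

# Monte Carlo at test time: repeat the stochastic forward pass and aggregate
preds = np.stack([model.predict(x_test) for _ in range(100)])
mean_pred = preds.mean(axis=0)   # averaged prediction
std_pred = preds.std(axis=0)     # spread across passes, usable as an uncertainty estimate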
H: Trying to find the correlation between inputs and output
I have tried the pandas code for trying to find out the correlation between the output and the inputs I am feeding. Here is the code:
dataframe.corrwith(dataframe['output']).plot(kind='barh',figsize=[20,10], legend=True,grid=True)
I got the following image:
I was trying to understand which columns affect the result in a positive, negative, or neutral way.
From the image I got above, I am not able to conclude what exactly it means.
Can someone please tell me whether I am going in the right direction with what I am trying to achieve? What is the meaning of the above image?
Let me know. Here is the link to the sample data set: Training.csv
AI: In a correlation framework above, the biggest driver of the output is the input which has the greatest absolute correlation value.
Correlation lies in the range [-1,1], and:
Negative correlation (correlation < 0) implies that the input and output move in opposite directions - i.e. as the input increases, the output decreases (and vice versa).
Nil correlation (correlation == 0) implies that the two variables are completely unrelated.
Positive correlation (correlation > 0) implies that the input and output move in the same direction - i.e. as the input increases, the output increases (and vice versa).
In the chart above, it looks like all-but-one of the inputs are negatively correlated with the output. This implies that as these inputs increase, the output decreases and vice versa.
A few things about your approach:
There is more than one type of correlation - Spearman (rank) and Pearson (linear) correlation are two examples. Be mindful of which you are using.
It would be helpful to rank/sort the result before plotting it. It would be easier for you to visually identify the drivers of the output if your chart was sorted.
You may also want to drop output from the chart. A variable's correlation with itself is always 1, and so this does not add any value to the graphic. |
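The last two suggestions can be combined in one line, reusing your own snippet:
corr = dataframe.corrwith(dataframe['output']).drop('output')
corr.sort_values().plot(kind='barh', figsize=[20, 10], grid=True)
Now the longest bars at either end of the chart are the inputs most strongly (negatively or positively) related to the output.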
H: Generated training set on convnet
I have a dataset with roughly 800 images that are classified in 18 classes.
The classes are spread unevenly, with some classes having 30 images and others having 5.
In order to increase my dataset, I've decided to use image augmentation, modifying each image a little and making 20 new images for every image.
I then decided to use my created images as my training set and my original images as my validation set.
Due to the unavailability of a GPU, I couldn't train it a lot, but I ended up with around a 50% success rate on the training set and 30% on the validation set.
Was the decision of only using my original dataset as validation a good one? If not, why?
AI: I don't think it is wise. Your intention to do validation on your real data is correct. But the way you have it now your model will be prevented from training on data that is from the same distribution as what you actually want to predict.
It is best to first split the images into training and validation sets, then do data augmentation on the training set. |
H: CNN to many outputs
I have a dataset with 100 columns (categorical, one-hot encoded) and 1 column with text data (simple sentences), and I want to build a neural network with around 380,000 output labels. I have no idea what I can do.
I was thinking about a CNN with Embedding Layer and then many dense layers.
Any suggestions?
AI: A shell for the type of network architecture you might be looking for could reasonably be:
{Input} -> {representation layers} -> {output}
Your question has a few implicit layers, first:
How do I take input of various types (numeric data vs text data)
How do I handle representations of the text data?
How do I handle output for 380,000 output classes?
So, let's tackle these one at a time.
How do I take input of various types (numeric data vs text data)
The specific details of this are going to depend heavily on which framework you're using, as implementing this in base PyTorch is different than in Keras or Tensorflow, for example.
In any case, you're likely going to want to define different inputs for the data types you want to handle differently:
{Numeric Input}->|-- {Numeric Representation} -|
|>{Dense Layer(s)} --> {Outputs}
{Text Input} --->|-- {Text Representation} ----|
How do I handle representations of the text data?
It sounds like you have an opinion for this, and that you're wanting to try:
{Numeric Input}->|-- {Numeric Representation} ->|
|>{DenseLayer(s)} -> {Outputs}
{Text Input} --->|-> {CNN} -> {Embedding Layer}>|
That is a totally reasonable approach. Other approaches that form appropriate representations of text data are recurrent cells, of which one very frequently used type for text data is the LSTM.
How do I handle output for 380,000 output classes?
There's a pretty easy, straightforward way to handle 380,000 classes. You simply make the output layer a softmax (logistic) layer with 380,000 cells. Easy.
>|-- {Numeric Representation} ----->|
|---> >{DenseLayer(s)} -> {Softmax, 380000}
-->|-> {CNN} --> {Embedding Layer}->|
Now, the training dataset to support an effective training of this network would need to be very, very large. Each of the 380000 classes would need to be reasonably supported by training examples in order to allow the trained network to distinguish between the classes.
Without knowing the details of the 380,000 classes, it's difficult to provide advice as to how to handle this difficulty (if it is one; you may have billions of training examples to work with). Common approaches are:
Grouping classes conceptually - for example, combining 'dog', 'cat', and 'bird' into 'Animals'
Limiting the classes to classes of interest. If there are relatively few classes that you care about, you can limit the class definitions to each of those and "other".
H: How can I know if my NN TensorFlow model is overfitted or not?
I am new to TensorFlow (Python) and I cannot judge my obtained results in terms of training and testing accuracy.
I am using the GradientDescentOptimizer with a learning rate equal to 10^(-4) and I have executed the following code:
for gg in range(1501):
    session.run(optimizer, feed_dict=train_data)
    train_accuracy = session.run(accuracy, feed_dict=train_data)
    if gg % 100 == 0:
        #print(train_accuracy)
        session.run(optimizer, feed_dict=test_data)
        test_accuracy = session.run(accuracy, feed_dict=test_data)
        print(test_accuracy)
One time I commented out the testing accuracy to print only the training accuracy, and another time I did the opposite.
I stored the results in the following table:
My problem is that I am not able to detect overfitting if it exists or not
Can you help me please!
AI: Accuracies do not signal a warning as @Simon points out, but you seem to be training your optimizer with the test data at each 100-th iteration, which makes all our conclusions invalid.
i.e. the line
session.run(optimizer, feed_dict=test_data) |
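A sketch of the same loop with that extra optimizer call removed, so the test data is only ever used for evaluation:
for gg in range(1501):
    session.run(optimizer, feed_dict=train_data)       # optimize on the training data only
    if gg % 100 == 0:
        train_accuracy = session.run(accuracy, feed_dict=train_data)
        test_accuracy = session.run(accuracy, feed_dict=test_data)   # evaluation only, no training step
        print(gg, train_accuracy, test_accuracy)
With this fixed, overfitting shows up as a growing gap between the two numbers: the training accuracy keeps improving while the test accuracy stalls or starts to drop.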
H: Can parallel computing be utilized for boosting?
Since boosting is sequential, does that mean we cannot use multi-processing or multi-threading to speed it up? If my computer has multiple CPU cores, is there anyway to utilized these extra resources in boosting?
AI: The boosting iterations themselves have to run sequentially, but the work inside each weak learner can be parallelized. For example, searching for the optimal splits of each 'weak' decision tree can be streamlined by utilizing a large number of cores.
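For example, the popular boosting libraries expose this within-iteration parallelism through an n_jobs/nthread parameter (a sketch assuming LightGBM is installed; XGBoost's n_jobs behaves the same way):
from lightgbm import LGBMClassifier

model = LGBMClassifier(n_estimators=500, n_jobs=-1)   # use all available CPU cores for split finding
model.fit(X_train, y_train)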
H: Difference between a target and a label in machine learning
If I have a supervised learning system (for example for the MNIST dataset) I have features (pixel values of MNIST data) and labels (correct digit-value).
However sometimes people use the word target (instead of label).
Are target and label interchangeable? Is label just used for classification? Target both for classification and regression?
AI: Target: final output you are trying to predict, also know as y. It can be categorical (sick vs non-sick) or continuous (price of a house).
Label: true outcome of the target. In supervised learning the target labels are known for the trainining dataset but not for the test.
Label is more common within classification problems than within regression ones. Nonetheless, they are often used interchangeably without great precision. |
H: Disadvantages of hyperparameter tuning on a random sample of dataset
I often work with very large datasets where it would be impractical to check all relevant combinations of hyperparameters when constructing a machine learning model. I'm considering randomly sampling my dataset and then performing hyperparameter tuning using the sample. Then, I would train/test the model using the full dataset with the chosen hyperparameters.
What are the disadvantages of this approach?
AI: One of the good practices is to create a split in the dataset for each tuning/ training step of your pipeline. Since you have large datasets, you should have enough data to split the original dataset into multiple subsets and still have a relevant number of rows for each step. As such, as an example, you can divide your dataset in 60% training, 20% hyperparameter tuning and 20% for the test.
It is important to avoid optimizing the hyperparameters with the same data you train on because this can lead to overfitting both tuning steps of your model to the same source of data.
Also, be careful about how you sample the original data source. When dealing with highly skewed categorical features, random sampling can lead to categories in the test set which are not observed during training, which can cause some models to break. Similarly, numerical features should have a similar distribution between the training and the test set.
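A minimal sketch of such a split with scikit-learn (the 60/20/20 proportions are just an example; stratify helps keep the target distribution similar across the subsets):
from sklearn.model_selection import train_test_split

# 60% training, 20% hyperparameter tuning, 20% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0, stratify=y)
X_tune, X_test, y_tune, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0, stratify=y_tmp)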
H: Filtering Pandas Dataframe Results without saving to a variable first
Is there a way to more efficiently filter a result on a data frame without having to explicitly save it in a variable and then filter? For instance, in the code below I would like to add something to line 3 to be able to achieve either df1 or df2.
import pandas as pd
df=pd.DataFrame({'A' : [1,2,3,4,5], 'B' : [0,0,0,0,0]})
df=df.var()
df1 = df[df!=0]
df2 = df[df > 3]
AI: If you want to filter Pandas Series "on the fly", you can use .loc[] indexer in conjunction with the callable method (for example using lambda function).
Demo:
In [8]: df.var()
Out[8]:
A 2.5
B 0.0
dtype: float64
In [9]: df.var().loc[lambda ser: ser!=0]
Out[9]:
A 2.5
dtype: float64
In [10]: df.var().loc[lambda ser: ser>3]
Out[10]: Series([], dtype: float64)
If you want to filter a DataFrame, then you can use DF.query(...) method:
In [11]: df
Out[11]:
A B
0 1 0
1 2 0
2 3 0
3 4 0
4 5 0
In [12]: df.query("A >= 3")
Out[12]:
A B
2 3 0
3 4 0
4 5 0 |
H: Loss and Regularization inference
I'm building a Matrix Factorization model for MovieLens dataset with batch-wise training. Loss function for the batch:
$$
L_{batch} = 1/|B|\sum_{(u,i)\in{B}}(r_{ui} - \mu - b_u - b_i - p_u^Tq_i)^2 + \lambda(||p_u||^2 + ||q_i||^2)
$$
$$
L_{batch} = (L_{base\_loss} + L_{reg\_loss})/|B|
$$
$$
L_{base\_loss} = \sum_{(u,i)\in{B}}(r_{ui} - \mu - b_u - b_i - p_u^Tq_i)^2
$$
$$
L_{reg\_loss} = \sum_{(u,i)\in{B}}\lambda(||p_u||^2 + ||q_i||^2)
$$
where $r_{ui}$ is the observed rating, $\mu$ is the global average rating, $b_u$ and $b_i$ are the average deviations of user $u$ and item $i$ from the global average rating respectively, $p_u$ and $q_i$ are the learned user embeddings and movie embeddings respectively, $B$ is the batch.
What should I infer from the regularization loss going up? That the model is not able to capture the underlying information using the current embedding size, or that the scale of the regularization loss is too small compared to the base loss?
AI: There is no problem with the fact that your regularization loss is going up.
The cost function of your model is a weighted sum of the regularization loss and the base loss, so during training the model tries to minimize them together, but eventually it comes to a point where it has to choose and minimize one at the expense of the other. The fact that it prefers the base loss means that your regularization hyper-parameter is not too big (eventually your goal is to reduce the base loss).
In general terms: The goal of the regularization loss is to simplify the model during training - demanding smaller weights forces the model to increase the size of just the important weights, and not all of them. So some weights converge to very small and insignificant values, effectively reducing the number of weights in the model. A smaller model means less overfitting and more generalization.
In your case: During the training of your model, it starts by reducing the regularization loss (because it is an easy task), but at some point it starts to notice crucial features which it wants to emphasize, so it begins to increase the weights of those features, which results in a higher regularization loss but a smaller total loss (a good thing: it is learning but is also under control).
H: Guidelines for vocabulary sizes for BoW
I am currently trying to get a vocabulary for BoW-vector generation out of a set of 200k scientific abstracts.
I do some basic filtering of tokens already like lowercasing, stop-word-removal, stemming, not taking tokens with size < 2, leaving tokens out that can be converted to a number and so on. But still I count more than 121k distinct tokens, which seems like a lot to me.
As I am quite new to all this I am wondering if there exist guidelines for how big such a vocabulary should be in average, maybe even depending on the originating field.
AI: I don't think there's any definitive answer for this and it will depend on your particular domain. Here's how I go about it:
The most commonly used English words number around 20,000, so I use that as a baseline
I expand this number to account for some common misspellings
Does my data contain special things like emojis? Emojis can still convey meaning, so I expand my vocabulary to include de-emoji'd text
Does my data contain specialized text like scientific and/or academic terms? I expand my baseline number based on this.
Finally, you can always check your token index to find how many words you have out-of-vocabulary. If that number seems reasonable enough to you to proceed then you move forward, otherwise you expand your baseline number a little more. |
H: What could explain a much higher F1 score in comparision to accuracy score?
I am building a binary classifier, which classifies numerical data, using Keras.
I have 6992 datapoints in my dataset. Test set is 30% of the data. And validation set is 30% of the training set.
When evaluating the model, I get these values:
recall: 0.8914240755310779
precision: 0.7006802721088435
f1_score: 0.7846260387811634
accuracy_score: 0.7035271816800843
How come the accuracy_score is about 10% lower than the F1-score?
Here is the code I'm using to evaluate the model:
print('recall: ', recall_score(Y_test, y_pred))
print('precision: ', precision_score(Y_test, y_pred))
print('f1_score: ', f1_score(Y_test, y_pred))
print('accuracy_score: ', model.score(X_test, Y_test, verbose=0))
And here is my model:
def create_model(neurons=23):
    model = Sequential()
    model.add(Dense(neurons, input_dim=37, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', precision, recall])
    return model
model = KerasClassifier(build_fn=create_model, epochs=500, batch_size=5, verbose=1)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=seed)
transformer = Normalizer().fit(X_train)
X_train = transformer.transform(X_train)
transformer = Normalizer().fit(X_test)
X_test = transformer.transform(X_test)
tensorboard = TensorBoard(log_dir="logs/{}".format(time.time()))
time_callback = TimeHistory()
es = EarlyStopping(monitor='val_acc', min_delta=0, patience=20, verbose=0, mode='auto',restore_best_weights=True)
# Fit the model
history = model.fit(X_train, Y_train, validation_split=0.3, epochs=200, batch_size=5, verbose=1, callbacks=[tensorboard, time_callback])
AI: You have imbalanced classes. Notice that your accuracy is very close to your precision, and quite dissimilar to your recall. This means that your precision (accuracy of positive predictions) is dominating the overall accuracy measure - nearly all of the cases in your data are classified as positive, so the accuracy among predicted positives is almost equivalent to the accuracy among all cases.
The F1 score is the harmonic mean of precision and recall, so it's a class-balanced accuracy measure. You have better performance on the minority class than the majority class, which is evidenced by the nearly equivalent accuracy and precision, and much higher recall. |
H: Word embeddings for Information Retrieval - Document search?
What are good ways to find, for a single sentence (query), the most similar document (text)? I asked myself whether word vectors (a weighted average over a document's words) are suitable to map a single sentence to a whole document.
AI: Doc2Vec is one possible approach. With this, the model learns to "cluster" similar texts together.
The most simplistic approach is to aggregate word vectors, but that ignores the order of words. Details on a few of the approaches:
https://towardsdatascience.com/sentence-embedding-3053db22ea77
https://medium.com/explorations-in-language-and-learning/how-to-obtain-sentence-vectors-2a6d88bd3c8b |
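As a concrete illustration, a minimal Doc2Vec sketch (assuming gensim and scikit-learn; documents is a placeholder for your list of raw texts):
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.metrics.pairwise import cosine_similarity

docs = [TaggedDocument(words=text.lower().split(), tags=[i]) for i, text in enumerate(documents)]
model = Doc2Vec(docs, vector_size=100, min_count=2, epochs=40)

query_vec = model.infer_vector("my single sentence query".lower().split())
doc_vecs = [model.infer_vector(d.words) for d in docs]
scores = cosine_similarity([query_vec], doc_vecs)[0]
best_doc = scores.argmax()     # index of the most similar document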
H: What is the difference between regplot and lmplot in seaborn?
The Seaborn library in Python suggests using either lmplot or regplot to visualise a regression between two variables. What is the difference between the two plots? The results I was able to get are slightly different, but I have no idea why!
AI: regplot() performs a simple linear regression model fit and plot. lmplot() combines regplot() and FacetGrid.
The FacetGrid class helps in visualizing the distribution of one variable as well as the relationship between multiple variables separately within subsets of your dataset using multiple panels.
lmplot() is more computationally intensive and is intended as a convenient interface to fit regression models across conditional subsets of a dataset.
You can research this comparison for yourself in the seaborn docs for regplot(), lmplot(), and FacetGrid to see which function will meet your needs. |
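A small sketch of the practical difference, using seaborn's built-in tips dataset:
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")

# regplot: a single scatter plot with a regression fit, drawn on the current axes
sns.regplot(x="total_bill", y="tip", data=tips)
plt.show()

# lmplot: the same kind of fit, but split into facets/colours by extra variables
sns.lmplot(x="total_bill", y="tip", hue="smoker", col="time", data=tips)
plt.show()
The small visual differences you noticed often come from lmplot managing its own figure (size and aspect via FacetGrid) rather than from a different regression fit.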
H: Why do a lot of people use IPython notebooks over Python files when analyzing data? Is it the same in industry?
I have seen that a lot of people write code in IPython notebooks when doing statistical analysis on data, apart from the easy visualization after each step rather than running the whole code every time in a .py file. What are the other advantages, if there are any?
Also in industry which one is used more often?
AI: iPython notebooks are great for some cases. I use them because of:
Easy in-place editing and immediate execution, very friendly for quick and experimental stuff
In-place visualization. Also, the ability to have multiple figures on the same page, compare them, re-run figures, and move the cells. Much more convenient than multiple independent OpenCV imshow windows.
They are more convenient for step-by-step iterative analysis
Since they are collection of code blocks prepended with a description and followed with the results, they are great for tutorials and guides.
You can compare the notebooks to test stands, where you can quickly assemble something working to demonstrate to the others, while Python modules are more like building blocks to be used either for demonstration or for production later. Thus, combination of both is very powerful. |
H: How to use vectors produced by TF-IDF as an input for fuzzy c-means?
I have done text processing with TF-IDF method and as an output got a list of normalized vectors [0, 1] for each document. Such as below:
Document 1
word1:1.0, word2:0.9, ..., word_n:0
Document 2
word2:1.0, word1:0.4, ..., word_n:0
...
etc
The above is basically a list of key-values where key is a term and values are TF-IDF values, where value 1 means that the term matches the document the most compared to other terms in the set.
My question is, to what form should I transform these vectors in order to properly use fuzzy c-means clustering on them? I feel like it should be 2D matrix of something, but can't figure it out.
At the very end I would like to have a trained model which on a given input could say to what documents (based on the membership values) it belongs with the highest chance.
AI: You could create a table from the TF-IDF vectors in which each feature or column represents a word, each row a document and if a word does not appear in a document use 0 as TF-IDF vector value. Then you could apply c-means clustering to this table. |
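A rough sketch of that table construction (assuming the scikit-fuzzy package for the clustering step; note that its cmeans function expects the data transposed to shape (n_features, n_documents), and the cluster count of 5 is arbitrary):
from sklearn.feature_extraction.text import TfidfVectorizer
import skfuzzy as fuzz

vec = TfidfVectorizer()
X = vec.fit_transform(documents).toarray()   # rows = documents, columns = terms, 0 where a word is absent

cntr, u, _, _, _, _, _ = fuzz.cluster.cmeans(X.T, c=5, m=2.0, error=1e-4, maxiter=300)
memberships = u.T                            # row i = membership degrees of document i in each cluster
For new documents, scikit-fuzzy also provides cmeans_predict, which returns membership degrees given the fitted cluster centres.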
H: LSTM for prediction of next location step - help with standardization
I have a few questions regarding the topic and I hope someone might have experience with any of them.
What I am trying to do is train an LSTM network, whose input is a sequence of N steps in a XYZ space (i.e 3 features over N point per sample, each point is part of some coordinate in space) and i want to predict the next point in the same XYZ space.
Note: Not all samples are from the same DB, meaning the XYZ space varies between some samples.
My questions are:
Say I want to use min-max scaling: do I scale across ALL samples at once or scale per XYZ space? Say I have 100 samples from XYZ_A and 50 from XYZ_B, do I take my min/max from the 150 points of both spaces or not?
Should I also scale my output labels? And if so, should I include them when fitting the initial scaler, or should I fit the scaler only on my training inputs and use that scaler on my outputs?
When I want to make a prediction after training, should I scale the data according to my training data?
Or, since my features are all roughly from the same domain, should I skip scaling altogether?
AI: Say I want to use min-max scaling, do I scale across ALL samples at once or scale per XYZ space? say i have 100 samples from XYZ_A and 50 from XYZ_B, do i take my min/max from 150 point of both space or not?
Even though your samples come from different databases, they represent the same physical quantity, i.e. 3D coordinates. Since the units of measurement are the same for the samples of both databases, you should apply the scaling over all 150 samples, not separately.
should I also scale my output labels? and if so should I use them in the initial scale or should I only scale on my train inputs and use that scaler on my outputs?
I wouldn't use the word labels, since you are predicting the next XYZ coordinates, therefore it is a regression problem, not classification. However, you should also apply the scaling function on those values too, since they have the same units of measurement as the input.
When I want to make a prediction after training, should I scale the data according to my training data?
Exactly, usually you fit a scaling function to your training dataset and you apply the same (already fit) function on the testing dataset. |
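Putting the three answers together in a minimal scikit-learn sketch (X_train/X_test are assumed to be arrays of shape (samples, N, 3) and y_train/y_test of shape (samples, 3)):
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(X_train.reshape(-1, 3))                     # min/max from the training samples of both databases

X_train_s = scaler.transform(X_train.reshape(-1, 3)).reshape(X_train.shape)
X_test_s = scaler.transform(X_test.reshape(-1, 3)).reshape(X_test.shape)
y_train_s = scaler.transform(y_train)                  # targets are XYZ points too, so reuse the same scaler
y_test_s = scaler.transform(y_test)
Predictions can be mapped back to real coordinates afterwards with scaler.inverse_transform.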
H: Size of Output vector from AvgW2V Vectorizer is less than Size of Input data
Hi,
I have been seeing this problem for quite some time. Whenever I try vectorizing input text data through the avgw2v vectorization technique, the size of the vectorized data is less than the size of the input data. Is there any statistical reason behind this? In my case 100K is the size of the input and it gives a 99,998-sized output.
I'm wondering what is causing this problem. Thanks in advance
Code:
listofsentences = []
for sent in x_train:
    listofsentences.append(sent.split())

training_model = Word2Vec(sentences=listofsentences, workers=-1, min_count=5)
modelwords = list(training_model.wv.vocab)

std_avgw2v_x_train = []
for everysentence in tqdm(listofsentences):
    count = 0
    sentence = np.zeros(100)
    for everyword in everysentence:
        if everyword in modelwords:
            w2v = training_model.wv[everyword]
            count += 1
            sentence += w2v
    if count != 0:
        sentence /= count
        std_avgw2v_x_train.append(sentence)
len(std_avgw2v_x_train)
>99998
len(x_train)
>100000
EDIT1: I'd like to mention that I just started learning ML; it's been 55 days since I started. Also, the same code gives out 100K output samples while I vectorize with TFIDFW2V.
I have attached the image of the same. Kindly look into it
AI: I think the issue can be one of the two :
A. You have missing values in x_train
B. One of the values in x_train has no word that is present in modelwords.
In both cases, the
if everyword in modelwords:
    w2v = training_model.wv[everyword]
    count += 1
    sentence += w2v
condition doesn't get satisfied, count stays 0, and you end up never appending that sample's vector to std_avgw2v_x_train - which is why two samples are missing.
H: What is exactly meant by neural network that can take different types of input?
There is a scientific document that implements a convolutional neural network to classify 3 different types of data, although how exactly it does this is unknown to me.
Here's the explanation of network architecture:
This section describes architecture of our neural net which is
depicted in Fig. 3.
Our network has three types of inputs: Screenshot (we use upper crop of the page with dimensions 1280 × 1280, however this net can
work with arbitrarily sized pages), TextMaps (tensor with dimensions
$128 \times 160 \times 160$) and Candidate boxes (list of box coordinates of
arbitrary length).
A screenshot is processed by three convolutional layers (the first two layers are initialized with pretrained weights from BVLC
AlexNet). TextMaps are processed with one convolutional layer with
kernel size $1 \times 1$ and thus its features capture various
combinations of words. These two layers are then concatenated and
processed by final convolutional layer.
What exactly is implied by "Our network has three types of inputs" above? Is it possible for a convolutional neural network to process different types of inputs differently?
From my understanding, neural network for Screenshot input would be created like this:
def CNN(features, labels, mode):
    input_layer = tf.reshape(features, [-1, 1280, 1280, 1])
    # Conv + ReLU
    conv_relu_1 = tf.layers.conv2d(
        inputs=input_layer,
        filters=96,
        kernel_size=[11, 11],
        padding="same",
        activation=tf.nn.relu)
    # MaxPool
    pool1 = tf.layers.max_pooling2d(inputs=conv_relu_1, pool_size=[3, 3], strides=2)
    # Conv + ReLU
    ...
So let's say this is the first neural network; should I then create another neural network for TextMaps and concatenate the results? Or does all the magic happen in a single neural network?
In short, can I create neural network that takes different types of input individually or do I use different neural networks for each of them and then group their outputs?
Thank you!
AI: In short, can I create neural network that takes different types of
input individually or do I use different neural networks for each of
them and then group their outputs?
Yes, you can. Check the Functional API of Keras, on how to define multi input/output networks. Then you can create different models for the processing of each input and fuse them together into a single multi-input model using the keras.models.Model() class.
In the following example, you can see that the main_input is processed differently than the aux_input and both are thereafter merged together to be propagated through the rest of the layers of the network. |
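Since that example isn't reproduced here, a minimal sketch of such a two-input model follows (layer sizes, names and input shapes are placeholders):
from keras.layers import Input, Dense, concatenate
from keras.models import Model

main_input = Input(shape=(100,), name='main_input')
x = Dense(64, activation='relu')(main_input)             # branch-specific processing

aux_input = Input(shape=(20,), name='aux_input')
y = Dense(8, activation='relu')(aux_input)               # processed differently

merged = concatenate([x, y])                              # fuse the two branches
out = Dense(1, activation='sigmoid', name='output')(merged)

model = Model(inputs=[main_input, aux_input], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit([main_data, aux_data], labels, epochs=10)       # feed one array per input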
H: Naive bayes, all of the elements in predict_proba output matrix are less than 0.5
I've created a MultinomialNB classifier model by which I'm trying to label some test texts:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import preprocessing
from sklearn.naive_bayes import MultinomialNB
tfv = TfidfVectorizer(strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}',
use_idf=1,smooth_idf=1,sublinear_tf=1)
# df['text'] is a long string text of words
tfv.fit(df['text'])
lbl_enc = preprocessing.LabelEncoder()
# df['which_subject'] is one of the following 7 subjects: ['Educational', 'Political', 'Sports', 'Tech', 'Social', 'Religions', 'Economics']
y = lbl_enc.fit_transform(df['which_subject'])
xtrain_tfv = tfv.transform(df['text'])
# xtest_tfv has 7 samples
xtest_tfv = tfv.transform(test_df['text'])
clf = MultinomialNB()
clf.fit(xtrain_tfv, y)
y_test_preds = clf.predict_proba(xtest_tfv)
Now y_test_preds is as follows:
0.255328 0.118111 0.129958 0.123368 0.119301 0.131098 0.122836
0.122814 0.265444 0.117637 0.13531 0.116697 0.122812 0.119286
0.131485 0.114459 0.258224 0.122414 0.118132 0.134005 0.12128
0.125075 0.131948 0.122668 0.258655 0.116518 0.119995 0.12514
0.124356 0.116987 0.121706 0.119796 0.266172 0.127231 0.123751
0.132295 0.1192 0.13366 0.119445 0.123186 0.257318 0.114895
0.126779 0.118406 0.123723 0.127393 0.122539 0.117509 0.263652
As you see, all of the elements are less than 0.5. Does this table show anything? Can I conclude that the classifier is not able to label test text?
AI: No, your classifier can label text. It doesn't do it well but it is still almost 2 times better than random (for 7 classes, random will get you ~0.15 accuracy).
Looking at the test set is not enough. You need to create the same confusion matrix for your training set.
If the results you get for the training set are similar in magnitude, then maybe your model is too simple for the task, or maybe you haven't trained it long enough.
If the results on the training set are good, then you might have a generalization problem (overfitting), which means that you need to increase the regularization during training. It might also mean that your training set comes from a different distribution than your test set.
H: NLP: What are some popular packages for phrase tokenization?
I'm trying to tokenize some sentences into phrases. For instance, given
I think you're cute and I want to know more about you
The tokens can be something like
I think you're cute
and
I want to know more about you
Similarly, given input
Today was great, but the weather could have been better.
Tokens:
Today was great
and
the weather could have been better
Can NLTK or similar packages achieve this?
Any advice appreciated.
AI: Spacy can do this. Spacy's dependency parser is based on models trained on a large corpus of text.
This parser can break a sentence into lower-level components such as words and phrases.
More details and examples:
https://spacy.io/usage/linguistic-features
Example with the first sentence from questions: https://explosion.ai/demos/displacy?text=I%20think%20you%27re%20cute%20and%20I%20want%20to%20know%20more%20about%20you&model=en_core_web_sm&cpu=0&cph=0 |
H: How does dropout work during testing in neural network?
The paragraph below is taken from the textbook Hands-On Machine Learning with Scikit-Learn & TensorFlow. I couldn't understand what the author is trying to convey. I would be really grateful if someone could provide an explanation of the paragraph, with an example. I have referred to these links: Dropout in Deep Neural Networks, https://www.coursera.org/lecture/deep-neural-network/dropout-regularization-eM33A, which somewhat align with the same objective; however, I am still seeking a good explanation. Please help me.
Suppose p = 50%, in which case during testing a neuron will be connected to twice as many input neurons as it was (on average) during training. To compensate for this fact, we need to multiply each neuron's input connection weights by 0.5 after training. If we don't, each neuron will get a total input signal roughly twice as large as what the network was trained on, and it is unlikely to perform well. More generally, we need to multiply each input connection weight by the keep probability (1-p) after training.
Alternatively, we can divide each neuron's output by keep probability during training (these alternatives are not perfectly equivalent, but they work equally well)
AI: During training, p neuron activations (usually, p=0.5, so 50%) are dropped. Doing this at the testing stage is not our goal (the goal is to achieve a better generalization). From the other hand, keeping all activations will lead to an input that is unexpected to the network, more precisely, too high (50% higher) input activations for the following layer.
Consider the neurons at the output layer. During training, each neuron usually gets activations only from two neurons from the hidden layer (while being connected to four), due to dropout. Now, imagine we finished the training and removed dropout. The activations of the output neurons will now be computed based on four values from the hidden layer. This is likely to put the output neurons in an unusual regime, so they will produce too large absolute values, being overexcited.
To avoid this, the trick is to multiply the input connections' weights of the last layer by 1-p (so, by 0.5). Alternatively, one can multiply the outputs of the hidden layer by 1-p, which is basically the same. |
H: Why BatchNormalization fails in Keras
I am trying to test a ResNet approach on the CIFAR-10 dataset with the following Python code:
# load data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.astype('float32')/256
X_test = X_test.astype('float32')/256
y_train = keras.utils.to_categorical(y_train)
y_test = keras.utils.to_categorical(y_test)
# build a model
def res_unit(x):
    x_shortcut = x
    x = Conv2D(16, (1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(16, (3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(32, (1, 1), padding='same')(x)
    x = BatchNormalization()(x)
    x = Add()([x_shortcut, x])
    x = Activation('relu')(x)
    return x
X_input = Input((X_train.shape[1:]))
X = Conv2D(32, (3, 3), padding='same')(X_input)
X = BatchNormalization()(X)
X = Activation('relu')(X)
X = res_unit(X)
X = res_unit(X)
X = res_unit(X)
X = Flatten()(X)
X = Dense(32)(X)
X = Activation('relu')(X)
X = Dense(10)(X)
X = Activation('softmax')(X)
model = Model(inputs=X_input, outputs=X)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# run model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=5)
which outputs the worst result for 10-class classification: 0.1 accuracy on train and validation.
But when I comment out all lines with BatchNormalization, everything starts to be OK: I get 0.57 accuracy after the first epoch, and it even keeps rising afterwards.
What is the problem with BatchNormalization in my case? Am I using it in a correct way?
AI: I have played around with it and managed to get a working configuration.
Necessary changes:
axis=1 for BatchNormalization. Normally, this should be used when data_format="channels_first", but it worked for me despite using the TensorFlow backend
Standardize the input:
X_train = X_train * 2.0 - 1.0; X_test = X_test * 2.0 - 1.0
Avoid batchnorm after the last convolution in res_unit.
The working solution looks like this:
# build a model
def res_unit(x, prefix=''):
    x_shortcut = x
    x = Conv2D(16, (1, 1), padding='same', name='{}/{}'.format(prefix, 'conv1'))(x)
    x = BatchNormalization(name='{}/{}/bn'.format(prefix, 'conv1'), axis=1)(x)
    x = Activation(activation='relu', name='{}/{}/relu'.format(prefix, 'conv1'))(x)
    x = Conv2D(16, (3, 3), padding='same', name='{}/{}'.format(prefix, 'conv2'))(x)
    x = BatchNormalization(name='{}/{}/bn'.format(prefix, 'conv2'), axis=1)(x)
    x = Activation(activation='relu', name='{}/{}/relu'.format(prefix, 'conv2'))(x)
    x = Conv2D(32, (1, 1), padding='same', name='{}/{}'.format(prefix, 'conv3'))(x)
    x = Add(name='{}/add'.format(prefix))([x_shortcut, x])
    x = Activation('relu', name='{}/relu'.format(prefix))(x)
    return x
def main():
    (X_train, y_train), (X_test, y_test) = load_data()
    X_train = X_train.astype('float32') / 255
    X_test = X_test.astype('float32') / 255
    X_train = X_train * 2.0 - 1.0
    X_test = X_test * 2.0 - 1.0
    y_train = keras_utils.to_categorical(y_train)
    y_test = keras_utils.to_categorical(y_test)
    print(X_train.min())
    print(X_train.max())

    X_input = Input((X_train.shape[1:]))
    X = Conv2D(32, (3, 3), padding='same', name='conv0')(X_input)
    X = BatchNormalization(name='conv0/bn', axis=1)(X)
    X = Activation('relu', name='conv0/relu')(X)
    X = res_unit(X, prefix='block1')
    X = res_unit(X, prefix='block2')
    X = res_unit(X, prefix='block3')
    X = Flatten()(X)
    X = Dense(32)(X)
    X = Activation('relu')(X)
    X = Dense(10)(X)
    X = Activation('softmax', name='softmax')(X)

    model = Model(inputs=X_input, outputs=X)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
I've got 59.07% validation accuracy with it. However, sometimes training does not converge; it feels like it is very sensitive to the random initialization.
Still don't know why making batchnorm work here is so hard... |
H: What is the difference between dynamic programming and Q-learning?
What is the difference between the DP-based algorithm and Q-learning?
AI: Both Q learning and Value Iteration (a DP technique) use similar update rules based on Bellman optimality equations:
$$v_*(s) = \text{max}_{a}\sum_{s',r} p(s',r|s,a)(r + \gamma v_*(s'))$$
$$q_*(s,a) = \sum_{s',r} p(s',r|s,a)(r + \gamma\text{max}_{a'}q_*(s',a'))$$
The main difference is that DP uses an explicit model. DP requires that you know $p(s',r|s,a)$. The update rule for DP is literally the first equation turned into an update rule:
$$v_{k+1}(s) = \text{max}_{a}\sum_{s',r} p(s',r|s,a)(r + \gamma v_{k}(s'))$$
In comparison, Q learning does not require knowing $p(s',r|s,a)$, as it is based on sampling from experience. The update rule is modified to be based on samples of observed data, which have the same values in expectation, as if you had used $p(s',r|s,a)$, but without knowing it:
$$Q_{k+1}(S_t,A_t) = Q_{k}(S_t,A_t) + \alpha(R_{t+1} + \gamma\text{max}_{a'}Q_k(S_{t+1},a') - Q_{k}(S_t,A_t))$$
This is still an important difference even when both systems are run on an internal model/simulation. DP does not need to simulate anything, it iterates over the model directly. Whilst Q learning needs to work with sampled transitions - they might be simulated, but this is not the same as iterating over all states as in DP. It can often be the case that it is easier to simulate the environment than to calculate $p(s',r|s,a)$ for the full model.
Which should you choose:
Choose Dynamic Programming when you have access to the full state transition and reward model in a simple form (i.e. you have $p(s',r|s,a)$ or equivalent), and the state space is not too large - ideally the number of states is small enough to fit in memory. However, there are ways to use DP when you have a larger state space, by modifying which states it processes. So you still can use DP on larger problems if you really want to.
Choose Q learning when you don't have a model, or when the state space is too large to iterate over in full. |
H: How to get out of local minimums on stochastic gradient descent?
I'm not programming a neural network but I'm looking at it from a non-hands-on, theoretical point of view and I'm currently wondering how to escape a local minimum and how to get to a global minimum.
If you start at a point, for instance: (red)
When you compute the gradient of the error function and step in the direction of greatest descent, you'd end up in that immediate local minimum. AFAIK, you'd get stuck there. How do neural network trainers go about this? Do they start from a new random configuration of weights at each batch and see if the cost is smaller, or is there some way of immediately getting to the global minimum?
I've heard of a method resetting the learning rate to 'pop' out of local minimums but I'm not sure how that works when the gradient is 0. I've also heard that stochastic gradient descent is more reliable than gradient descent at finding global minimums, but I don't know how using the training data in batches rather than all at once allows it to steer around local minimum in the example, which is clearly steeper than the path to the global minimum behind it.
AI: Stohastic gradient descent loss landscape vs. gradient descent loss landscape
I don't know how using the training data in batches rather than all at once allows it to steer around local minimum in the example, which is clearly steeper than the path to the global minimum behind it.
So, stochastic gradient descent is better able to avoid local minimums because the landscape of the batch loss function is different from that of the loss function over the whole dataset (the case when you calculate the losses on all data and then update the parameters). That means the gradient on the whole dataset could be 0 at some point, but at that same point, the gradient of the batch could be different (so we hope to move in a different direction than towards the local minimum).
Neural network architecture and loss landscape
In order to escape the local minimum, your neural architecture can also help. For example, see this work: Visualizing the Loss Landscape of Neural Nets. It shows that skip connections can smoothen your loss landscape and, hence, help the optimizers to find the global minimum more easily.
Local minimums vs global optimum
Finally, there are some works suggesting that the local minimums have almost the same function value as the global optimum. See this question and answer. |
H: Normalization before or after resizing
I'm training a deep learning network using images (to be exact, I'm solving a semantic segmentation problem).
What's the proper order of resizing (I need to resize images to a fixed width x height) and normalization (dividing by 255) of images in preprocessing?
Does it make more sense to do first resizing and then normalization? Or first normalization and then resizing?
AI: Resize and then normalize, that's the only pipeline that makes sense.
If you resize after normalization, depending on the resize algorithm, you may end up with values that are outside of the normalized range. |
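For example, a sketch assuming OpenCV (path, width and height are placeholders; any image library works the same way):
import cv2

img = cv2.imread(path)                                                  # uint8 image, values 0-255
img = cv2.resize(img, (width, height), interpolation=cv2.INTER_LINEAR)
img = img.astype('float32') / 255.0                                     # normalize only after resizing
For the segmentation masks themselves, prefer nearest-neighbour interpolation (cv2.INTER_NEAREST) so class labels are not blended by the resize.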
H: Oversampling before Cross-Validation, is it a problem?
I have a multi-class classification problem to solve which is highly imbalanced. Obviously I'm doing oversampling, but I'm doing cross-validation with the over-sampled dataset, as a result of which I should be having repetition of data in the train as well as validation set. I'm using lightgbm algorithm, but surprisingly there is not much difference between cross-validation score and the score on the unseen dataset.
However I just want to know whether its fine to do cross-validation after oversampling the dataset, if not why am I getting such close score on the validation set and the unseen test set?
Also, if it's not correct to do oversampling before the cross-validation, then it becomes too lengthy to split the data into validation and training and then again sample the training set; and again, during the final prediction, if you're looking to use all the data then you have to append the validation and the training data and then oversample again. Is there any shortcut method to solve the problem?
AI: Oversampling the training data may help the classifier to better predict on the originally less represented class. This does not mean that it should be applied to performance metrics, as it changes the original target distribution and thus creates bias in the results.
Imagine the problem of cancer detection, where your original dataset is unbalanced: 10% of the patients have cancer y=1 and the remaining 90% don't y=0. If you train a classifier which is prone to error on unbalanced datasets (such as an Artificial Neural Network), you may end up predicting always the majority class: y=0.
If you oversample to a new distribution, let's say 50/50, your classifier is expected to increase its performance, especially on the positive class. Nonetheless, to measure the performance on real data, which is by itself skewed, measuring on oversampled data may not be the best choice.
Thus, if you are optimizing the hyperparameters or choosing from a set of classifiers, cross-validating with oversampled data may provide you with a different perspective on the classifier's ability to predict on both classes with equal importance. Nonetheless, if you are estimating the real-life prediction capability, I would not advise you to oversample such validation data! |
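A convenient shortcut for the "oversample only inside the training folds" pattern is the imbalanced-learn pipeline (a sketch assuming the imbalanced-learn package; the sampler and scorer are just examples):
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import RandomOverSampler
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

pipe = Pipeline([
    ('oversample', RandomOverSampler(random_state=0)),
    ('model', LGBMClassifier())
])
# the sampler is applied only to the training part of each fold,
# so validation scores are computed on the original class distribution
scores = cross_val_score(pipe, X, y, cv=5, scoring='f1_macro')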
H: How to calculate Accuracy, Precision, Recall and F1 score based on predict_proba matrix?
I found this link that defines Accuracy, Precision, Recall and F1 score as:
Accuracy: the percentage of texts that were predicted with the correct tag.
Precision: the percentage of examples the classifier got right out of the total number of examples that it predicted for a given tag.
Recall: the percentage of examples the classifier predicted for a given tag out of the total number of examples it should have predicted for that given tag.
F1 Score: the harmonic mean of precision and recall.
Following this question of mine, my MultinomialNB classifier calculated the predict_proba matrix for the test set (with 14 samples) as follows:
0.192995 0.0996929 0.173688 0.136715 0.126616 0.133012 0.137282
0.174185 0.109345 0.169467 0.144389 0.115021 0.132762 0.154831
0.14172 0.190075 0.125429 0.155343 0.122939 0.149733 0.114763
0.130958 0.2304 0.108793 0.174371 0.115698 0.122529 0.117251
0.139486 0.0938475 0.236573 0.133689 0.118372 0.165151 0.112881
0.135901 0.0845106 0.262501 0.127767 0.119785 0.166609 0.102926
0.136622 0.13782 0.119651 0.320522 0.0854596 0.0996346 0.100292
0.139607 0.181654 0.112189 0.259983 0.0920986 0.106649 0.107819
0.151441 0.0929748 0.155358 0.130407 0.208591 0.151803 0.109425
0.132648 0.122881 0.130545 0.126466 0.196319 0.142594 0.148548
0.135545 0.101456 0.177762 0.118609 0.120773 0.253616 0.0922385
0.132612 0.112645 0.111808 0.102153 0.113548 0.327516 0.0997178
0.111618 0.0859541 0.106807 0.116613 0.085918 0.0873931 0.405696
0.107745 0.0936872 0.0877116 0.122336 0.0902212 0.0909265 0.407373
1. The answerer of my last question said that although the predict_proba matrix elements are all less than 0.5, they may still be useful in text labeling. But from the above definitions, I concluded that the Accuracy and Precision of the prediction are zero, since all of the predicted values are less than 0.5. Am I correct?
2. I'm not sure about the Recall and F1 score and how to calculate them.
3. How can I interpret the matrix and the model's usefulness?
Edit 1:
Using this answer I changed my predict_proba matrix above (named in the code as pred_prob), with a shape of (14,7), to an array (named y_pred) with a shape of (14,), and then used the one_hot_encode function below to convert it to a confusion matrix (named y_pred_one_hot) as follows:
y_pred = np.argmax(pred_prob, axis=1)
def one_hot_encode(actual, n_classes):
    if len(actual.shape) == 1:
        actual2 = np.zeros((actual.shape[0], n_classes))
        for i, val in enumerate(actual):
            actual2[i, val] = 1
        actual = actual2
    return actual
y_pred_one_hot = one_hot_encode(y_pred, n_classes=7)
Now y_pred_one_hot is:
1 0 0 0 0 0 0
1 0 0 0 0 0 0
0 1 0 0 0 0 0
0 1 0 0 0 0 0
0 0 1 0 0 0 0
0 0 1 0 0 0 0
0 0 0 1 0 0 0
0 0 0 1 0 0 0
0 0 0 0 1 0 0
0 0 0 0 1 0 0
0 0 0 0 0 1 0
0 0 0 0 0 1 0
0 0 0 0 0 0 1
0 0 0 0 0 0 1
Now is this y_pred_one_hot matrix, the confusion matrix?
AI: To compute performance metrics like precision, recall and F1 score you need to compare two things with each other:
the predictions of your model for your evaluation set (in what follows, I'll call them y_pred)
the true classes of your evaluation set (in what follows, y_true).
From what you write, you have obtained just the predictions of your model, and that's what you have in y_pred. You have constructed y_pred so that each of its components is equal to the class that is assigned the maximum probability by your model. All fine here!
The key ingredient that you are missing is the array of true classes (in your question, you called them "tags") associated with your evaluation examples. You absolutely need this information to understand whether the predictions of your model are correct. You should be able to construct an array y_true containing the true classes/tags of your examples from your dataset, i.e., from knowing the actual labels of your 14 test samples.
For example, if your 1st text belongs to class 3, your 2nd text belongs to class 1, your third text belongs to class 2, your y_true will be an array like
y_true = np.array([3, 1, 2, # ... the rest of components ])
Now, to compute accuracy, precision, and recall, you need to compare y_true and y_pred. If they coincide, congratulations: that means that your algorithm works perfectly on your evaluation set!
In general, though, not all the components of y_pred will coincide with y_true. To quantify agreements and discrepancies you can use metrics like accuracy, precision, etc. You can code them yourself, but the scikit-learn library comes with functions for this purpose.
For instance, you can easily compute accuracy, precision, recall, F1 score, and even the confusion matrix for your problem with the following code:
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
# print accuracy
print("Accuracy: ", accuracy_score(y_true, y_pred))
# print precision, recall, F1-score per each class/tag
print(classification_report(y_true, y_pred))
# print confusion matrix, check documentation for sorting rows/columns
print(confusion_matrix(y_true, y_pred))
P.S. Note that in what I did I didn't use your y_pred_one_hot (which is not a confusion matrix!), and that the precision is not zero (it may turn out to be, but you have to compute it using y_true)!
P.P.S. Beware of using predict_proba with Naive Bayes, as the output probabilities may not be well calibrated.
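On that last point: if you do need better-calibrated probabilities, one option (a suggestion of ours, not something the answer above prescribes; X_train, y_train and X_test are assumed placeholders for your own data) is to wrap the classifier in scikit-learn's CalibratedClassifierCV:
from sklearn.calibration import CalibratedClassifierCV
from sklearn.naive_bayes import MultinomialNB
# X_train, y_train, X_test are placeholders for your own feature matrices and labels
calibrated = CalibratedClassifierCV(MultinomialNB(), method='sigmoid', cv=3)
calibrated.fit(X_train, y_train)
calibrated_probs = calibrated.predict_proba(X_test)  # probabilities after calibration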
H: Predicting service date
Suppose I have an automated system that pays my bills, but the website where I pay them won't tell me when the next bill will be available. What is a good approach to predicting the date of the next bill? I don't see a regression model as a good fit here. I believe it's better to take a simple mean or median of the gaps between bill dates in days, and then guess the next bill date as: nextBill = lastBillDate + mean. Is that the right approach?
AI: I don't know your data, so I can't say whether your approach is good or not. But your approach does not account for weekday effects (e.g., bills that only arrive on business days), so I would apply time-series analysis methods to predict the service date.
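A minimal sketch of the mean/median-interval idea (the dates below are made up), which also lets you inspect weekday patterns before trusting the estimate:
import pandas as pd
# hypothetical bill history; replace with your own dates
bill_dates = pd.to_datetime(['2019-01-05', '2019-02-04', '2019-03-06', '2019-04-05'])
intervals = pd.Series(bill_dates).diff().dropna()   # gaps between consecutive bills
next_bill_mean = bill_dates[-1] + intervals.mean()
next_bill_median = bill_dates[-1] + intervals.median()
print(pd.Series(bill_dates).dt.day_name().value_counts())  # check for weekday effects
print(next_bill_mean, next_bill_median)
If the bill always lands on a business day, you would additionally round the estimate forward to the next weekday.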
H: Protein interaction prediction- how to input this data structure
I know the basics of machine learning and have quite a bit of experience with time-series data and data fed in a tabular format. But in the picture, the data is arranged as a graph. Is there a way to input the graph into an ML tool such as an artificial neural network (or any other)? I don't know if there is a theory for handling such a data structure. The task is to recreate the graph from the output of the ML algorithm after training: whatever input I get, the output should be the same as the input -- quite similar to an auto-associative memory. Can somebody please help?
AI: https://en.wikipedia.org/wiki/Adjacency_matrix
For such problems, you can tabulate the connections as an adjacency matrix and train a network to predict the entries of that matrix given some properties of the nodes (say, for a social graph: given properties of User1 and User2, such as zip code or school, output 1 or 0).
Some examples are :
https://www.biorxiv.org/content/biorxiv/early/2018/01/14/247577.full.pdf
http://kawahara.ca/convolutional-neural-networks-for-adjacency-matrices/
Edit : Illustration of Train_X and Train_Y
Columns A through H form Train_X and Column I is Train_Y. |
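For the auto-associative part of the question (reproducing the input graph at the output), a minimal sketch is an autoencoder that takes each node's row of the adjacency matrix as input and reconstructs it. Everything below (the random toy graph, layer sizes, epochs) is an illustrative assumption, not part of the linked papers:
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense
n_nodes = 50
adj = (np.random.rand(n_nodes, n_nodes) > 0.8).astype('float32')  # toy adjacency matrix
adj = np.maximum(adj, adj.T)                                      # make the graph undirected
inp = Input(shape=(n_nodes,))                 # one row of the adjacency matrix per sample
encoded = Dense(16, activation='relu')(inp)   # compressed node representation
decoded = Dense(n_nodes, activation='sigmoid')(encoded)
autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(adj, adj, epochs=10, batch_size=8, verbose=0)
reconstructed = (autoencoder.predict(adj) > 0.5).astype(int)      # recovered adjacency matrix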
H: predict future value in every one hour using (t+60 minutes) LSTM neural network in python
I have a CSV file with three input columns and two output columns, arranged as a time series with one observation every hour. I need to predict the next value at t+60 minutes from the previous input values (and any new input value available at that time) using a regression neural network, so I chose an LSTM network. But I don't know how to set up the time step so that the model predicts 60 minutes into the future.
Can anyone suggest how to solve this problem?
Can anyone give me an example that clears up this issue?
The predicted value will later be fed back in as the input value (g).
A subset of my CSV file:
Here is my code:
# imports needed by the code below
import math
import numpy
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense, LSTM

def create_data(data, look_back=1):
    dataX, dataY = [], []
    for i in range(len(data) - look_back - 1):
        a = data[i:(i + look_back), :]
        dataX.append(a)
        dataY.append(data[i + look_back, 2])
    return numpy.array(dataX), numpy.array(dataY)
data = pd.DataFrame(data,columns=['g','p','c'])
numpy.random.seed(7)
data = data.values
scaler = MinMaxScaler(feature_range=(0, 1))
data = scaler.fit_transform(data)
train_size = int(len(data) * 0.67)
test_size = len(data) - train_size
train, test = data[0:train_size, :], data[train_size:len(data), :]
# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_data(train, look_back)
testX, testY = create_data(test, look_back)
# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 3))
testX = numpy.reshape(testX, (testX.shape[0],look_back, 3))
model = Sequential()
model.add(LSTM(6, input_shape=(look_back,3)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(trainX, trainY, validation_split=0.33, nb_epoch=10, batch_size=30)  # nb_epoch is the Keras 1 name; use epochs= in Keras 2
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
trainPredict_extended = numpy.zeros((len(trainPredict),3))
trainPredict_extended[:,2] = trainPredict[:,0]
trainPredict = scaler.inverse_transform(trainPredict_extended)[:,2]
print(trainPredict)
testPredict_extended = numpy.zeros((len(testPredict),3))
testPredict_extended[:,2] = testPredict[:,0]
testPredict = scaler.inverse_transform(testPredict_extended)[:,2]
trainY_extended = numpy.zeros((len(trainY),3))
trainY_extended[:,2]=trainY
trainY=scaler.inverse_transform(trainY_extended)[:,2]
testY_extended = numpy.zeros((len(testY),3))
testY_extended[:,2]=testY
testY=scaler.inverse_transform(testY_extended)[:,2]
trainScore = math.sqrt(mean_squared_error(trainY, trainPredict))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY, testPredict))
print('Test Score: %.2f RMSE' % (testScore))
# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(data)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, 2] = trainPredict
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(data)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(data)-1, 2] = testPredict
AI: One option is to:
1. Break the data into constant-frequency observations (e.g., assume that g, p, c and the outputs stay the same for all time periods between two observations). With this, you will get a sample every N minutes.
2. The training data will then be a set of pairs [last M observations up to time T, observation at T + 60].
3. These train and test sets can then be fed into the network; a rough sketch of steps 1-2 follows below.
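A rough sketch of steps 1-2 (the file name, the 'time' column and the choice M=3 are assumptions; g, p, c are the columns from the question):
import numpy as np
import pandas as pd
df = pd.read_csv('data.csv', parse_dates=['time'], index_col='time')
df = df.resample('60min').ffill()         # constant 60-minute frequency, forward-fill gaps
M = 3                                     # number of past observations per sample
values = df[['g', 'p', 'c']].values
X, y = [], []
for i in range(M, len(values)):
    X.append(values[i - M:i, :])          # last M hourly observations, up to time T
    y.append(values[i, 0])                # 'g' at T + 60 minutes
X, y = np.array(X), np.array(y)           # X has shape (samples, M, 3), ready for the LSTM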
H: Classification: how to handle reviews/long english words in feature set with all other numerical features
I am currently working on a use case where the feature set contains numeric values, such as an amount, as well as a review feature which contains long-winded English text.
The English text will very likely differ between the train and test data,
e.g. 'i have seen and its good', 'nto ok', 'timepass', etc.
How do I combine the text features with the numerical data and feed them to a machine learning model?
I will not be able to use label/one-hot encoding, since these text variables are not categorical values; they vary freely.
import pandas as panda
from sklearn.feature_extraction.text import TfidfVectorizer
words = ['i hv paid','i dont like','its good','yum yum']
a = panda.DataFrame({'amount':[10,20,30,40],'word':words})
tf = TfidfVectorizer()
csr = tf.fit_transform(words)
#how do i now use my csr to feed both amount and word to my machine learning model
AI: One of the ways to address your use case could be to create 2 separate models, one model using your text data features and another one using your numerical features and combine their results using ensembling.
The other way could be to create numeric features out of your text features (e.g. tf-idf, word2vec) and combine them with your numeric features and feed them to your model.
Sample code-
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
import pandas as pd
words = ['i hv paid','i dont like','its good','yum yum']
tf = TfidfVectorizer()
vector = tf.fit_transform(words)
df = pd.DataFrame([10,20,30,40])
df2 = pd.DataFrame(vector.toarray())
pd.concat([df,df2], axis=1)
Sample output: a (4, vocab_size + 1) DataFrame with the amount column followed by the tf-idf columns.
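If the vocabulary gets large, converting the tf-idf matrix to a dense DataFrame is wasteful; a sparse variation on the same idea (our sketch, not part of the answer above) is:
from scipy.sparse import hstack, csr_matrix
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
words = ['i hv paid', 'i dont like', 'its good', 'yum yum']
amount = np.array([[10], [20], [30], [40]], dtype=float)
tf = TfidfVectorizer()
text_features = tf.fit_transform(words)           # sparse, shape (4, vocab_size)
X = hstack([csr_matrix(amount), text_features])   # sparse, shape (4, vocab_size + 1)
# X can be fed directly to estimators that accept sparse input, e.g. LogisticRegression.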
H: Reg. Pandas factorize()
-Hi Experts-
I just read about the factorize() function in pandas. Using it, I'm able to encode (enumerate) my string values as numbers. But now I can't tell which number corresponds to which string.
Ex.
df['product_name'] # Ex. A, B, C
df['product_name'] = df['product_name'].factorize()[0]
df['product_name'] # Ex. 0, 1, 2
Just an illustration, not the actual output:
A - 0
B - 1
C - 2
How can I get this mapping? Please advise.
-Curious newbie :)
AI: From the documentation:
Encode the object as an enumerated type or categorical variable.
This method is useful for obtaining a numeric representation of an
array when all that matters is identifying distinct values. factorize
is available as both a top-level function pandas.factorize(), and as a
method Series.factorize() and Index.factorize().
The examples section goes on to show that the output of the factorize method actually returns two things:
labels - referring to the new values for each of your classes
uniques - essentially the mapping back to your original labels
In your line of code:
df['product_name'] = df['product_name'].factorize()[0]
The part at the end: [0] means you are only taking the labels, throwing away the uniques that map back to your input.
If you keep both by changing the line to:
df['product_name'], mapping = df['product_name'].factorize()
You could now do the rest of your work on the factorized column and end up with a results array of factorized labels; you can then use this line to get the original values back from those labels:
mapped_back_to_product_name = mapping.take(results)
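Putting it together, a tiny self-contained illustration (a toy series, not your actual data):
import pandas as pd
s = pd.Series(['A', 'B', 'C', 'A'])
codes, uniques = s.factorize()
print(codes)                # [0 1 2 0]
print(uniques)              # Index(['A', 'B', 'C'], dtype='object')
print(uniques.take(codes))  # back to ['A', 'B', 'C', 'A']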
I suggest reading the documentation to get more information on how best to use the method :-) |
H: How to make a region of interest proposal from convolutional feature maps?
Problem
Keras does not have any direct implementation of region-of-interest (ROI) pooling. I am aware of how to perform max pooling, but I don't know how to get bounding boxes from the feature maps passed on from a convolutional layer.
Is there any way to directly implement a region proposal algorithm?
Example
Let's say there is an architecture like this:
So we have a multi-input neural network architecture that eventually leads to the ROI MaxPool layer. We have three inputs (screenshot, textmaps and candidates); leaving candidates aside, we would have the following code in Keras:
from keras.models import Model
from keras.layers import Input, Dense, Conv2D, ZeroPadding2D, MaxPooling2D, BatchNormalization, concatenate
from keras.activations import relu
from keras.initializers import RandomUniform, Constant, TruncatedNormal
# Network 1, Layer 1
screenshot = Input(shape=(1280, 1280, 3),  # assuming 3 RGB channels; 0 channels, as originally written, is not valid
dtype='float32',
name='screenshot')
# padded1 = ZeroPadding2D(padding=5, data_format=None)(screenshot)
conv1 = Conv2D(filters=96,
kernel_size=11,
strides=(4, 4),
activation=relu,
padding='same')(screenshot)
# conv1 = Conv2D(filters=96, kernel_size=11, strides=(4, 4), activation=relu, padding='same')(padded1)
pooling1 = MaxPooling2D(pool_size=(3, 3),
strides=(2, 2),
padding='same')(conv1)
normalized1 = BatchNormalization()(pooling1) # https://stats.stackexchange.com/questions/145768/importance-of-local-response-normalization-in-cnn
# Network 1, Layer 2
# padded2 = ZeroPadding2D(padding=2, data_format=None)(normalized1)
conv2 = Conv2D(filters=256,
kernel_size=5,
activation=relu,
padding='same')(normalized1)
# conv2 = Conv2D(filters=256, kernel_size=5, activation=relu, padding='same')(padded2)
normalized2 = BatchNormalization()(conv2)
# padded3 = ZeroPadding2D(padding=1, data_format=None)(normalized2)
conv3 = Conv2D(filters=384,
kernel_size=3,
activation=relu,
padding='same',
kernel_initializer=TruncatedNormal(stddev=0.01),
bias_initializer=Constant(value=0.1))(normalized2)
# conv3 = Conv2D(filters=384, kernel_size=3, activation=relu, padding='same',
# kernel_initializer=RandomUniform(stddev=0.1),
# bias_initializer=Constant(value=0.1))(padded3)
# Network 2, Layer 1
textmaps = Input(shape=(160, 160, 128),
dtype='float32',
name='textmaps')
txt_conv1 = Conv2D(filters=48,
kernel_size=1,
activation=relu,
padding='same',
kernel_initializer=TruncatedNormal(stddev=0.01),
bias_initializer=Constant(value=0.1))(textmaps)
# (Network 1 + Network 2), Layer 1
merged = concatenate([conv3, txt_conv1], axis=-1)
merged_padding = ZeroPadding2D(padding=2, data_format=None)(merged)
merged_conv = Conv2D(filters=96,
kernel_size=5,
activation=relu, padding='same',
kernel_initializer=TruncatedNormal(stddev=0.01),
bias_initializer=Constant(value=0.1))(merged_padding)
If you look at the end of the code (and architecture itself), we pass concatenated activations from two different Conv+ReLu layers and then pass it to ROI MaxPool layer.
Thank you!
AI: To implement region proposal you need two major parts:
The region proposal network that generates a set of candidate bounding boxes. It can be implemented simply as two convolutional layers to 1) predict the object presense and 2) predict offsets for the default (anchor bounding boxes)
The ROI pooling layer that provides a fixed-size feature vector for an arbitrary sized proposal.
Here is an implementation of Faster R-CNN in Keras, and here is a detailed explanation of the model and the code.
Here are implementations of the RPN, and here is an implementation of the ROI pooling; a minimal sketch of the pooling step is given below.
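If you only need the fixed-size pooling part and TensorFlow is your backend, a minimal sketch is to lean on tf.image.crop_and_resize. Note that this gives a bilinear crop-and-resize (closer in spirit to ROIAlign) rather than exact max pooling, and the function below is our illustration, not the implementation used in the linked repositories:
import tensorflow as tf
def roi_pool(feature_map, rois, pool_size=7):
    # feature_map: (1, H, W, C) tensor, e.g. the merged_conv output above
    # rois: (num_rois, 4) tensor of normalised boxes [y1, x1, y2, x2] in [0, 1]
    box_indices = tf.zeros_like(rois[:, 0], dtype=tf.int32)   # all boxes refer to image 0
    return tf.image.crop_and_resize(feature_map, rois, box_indices,
                                    crop_size=(pool_size, pool_size))
Each ROI then yields a (pool_size, pool_size, C) feature block that can be flattened and passed to the classification/regression heads.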
H: Confusion on Delta Rule and Error
I'm currently reading Mitchell's book for Machine Learning, and he just started gradient descent. There's one part that's really confusing me.
At one point, he gives this equation for the error of a perceptron over a set of training examples.
$$E(\vec{w})\equiv \frac12 \sum_{d \in D}(t_d-o_d)^2$$
$o_d$ is the actual output, $\vec{w} \cdot \vec{x}_d$, where $\vec{x}_d$ is the input vector and $\vec{w}$ is the weight vector.
$t_d$ is the target output, what we want to get.
The sum over $d \in D$ means we sum over every training example $\vec{x}_d$ in the training set.
Okay, so far so good, I understand that.
However, he then gives this example:
But that is just not true!!!! That equation for the error does NOT give us a single minimum!!!
According to his previous rule, if we're considering the error for a single weight vector and a single training vector, the equation for the error would be:
$$E(\vec{w}) = \frac{1}{2} (t_d - (w_0 x_0 + w_1 x_1))^2$$
Which has an infinite number of minima!!! One for every $(w_0, w_1)$ satisfying $w_0 x_0 + w_1 x_1 = t_d$.
I graphed it here to show you:
In that picture, $x$ and $y$ are the two components of the weight vector $\vec{w}$.
Please help! I've been confused about this for the last three hours!
Thanks
AI: You are right: the least-squares solution need not be unique, as you have illustrated.
In general, we are trying to solve for $w$ in the system $$Xw=t$$
which of course need not even be consistent (i.e., have a solution). Even when it does have a solution, uniqueness is not guaranteed if the matrix $X$ has a non-zero vector in its nullspace.
For the least square solution, we are trying to minimize
$$\frac12(Xw-t)^T(Xw-t)$$
of which the minimal is attained when $$X^T(Xw-t)=0$$
that is $$X^TXw=X^Tt$$
and it is unique only when $X^TX$ is invertible (equivalently, when $X$ has full column rank). However, note that uniqueness is not needed for gradient descent to work; see the worked single-example case below.
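To make this concrete for the single-example case in the question (our addition, in the same notation): here
$$X=\begin{pmatrix}x_0 & x_1\end{pmatrix},\qquad X^TX=\begin{pmatrix}x_0^2 & x_0x_1\\ x_0x_1 & x_1^2\end{pmatrix},$$
which has rank 1 (the second column is a multiple of the first), so $X^TXw=X^Tt$ has a whole line of solutions: precisely the trough $w_0x_0+w_1x_1=t_d$ visible in your plot. Once the training set contains inputs that span the plane, $X^TX$ becomes invertible and the minimum is unique, which is the situation Mitchell's figure depicts.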