H: Doubt regarding the number of weights in 2 layer neural network
Consider a hypothetical scenario where we have 10 units in the input layer and 5 units in the output layer.
How many weights are there in the neural network?
If this is implemented in PyTorch, the answer will be 50. But shouldn't it be 15, since there are 10 weights from the input layer and 5 weights from the output layer? Please resolve my doubt.
Please forgive my English, I am not a native speaker.
AI: All units in one layer get connected to all units in the next layer. To accomplish this, every unit in the first layer needs one weight per unit in the next layer. Therefore you get the number of weights by multiplication, not by addition as you assumed.
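For a quick sanity check, here is a minimal PyTorch sketch (assuming a single fully connected layer from 10 inputs to 5 outputs):
import torch.nn as nn

# A single fully connected layer from 10 inputs to 5 outputs.
layer = nn.Linear(10, 5)
print(layer.weight.shape)   # torch.Size([5, 10]) -> 50 weights
print(layer.weight.numel()) # 50
print(layer.bias.numel())   # 5 bias terms, counted separately from the weights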
Let's, for example, add one more layer with 3 units at the end; maybe it will be clearer then. We have layers with units [10, 5, 3]. Then we will have $W_1 = 10\times5=50$ weights between the first two layers and $W_2 = 5\times3=15$ between the last two layers, and together they give $W = W_1 + W_2 = 50 + 15 = 65$ weights. |
H: Effect of adding gaussian noise to the input layer in a CNN
I often come across Keras code that adds GaussianNoise to the input; however, it's not clear to me what advantage it offers for learning.
input_img = layers.Input(t_x.shape[1:], name = 'RGB_Input')
pp_in_layer = layers.GaussianNoise(GAUSSIAN_NOISE)(input_img)
AI: Adding noise in the input data is equivalent to adding a regularization term to the objective function.
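As a small illustration (a minimal sketch; the noise level 0.1 is arbitrary), Keras' GaussianNoise layer is only active at training time, which is consistent with viewing it purely as a regulariser:
import numpy as np
from tensorflow import keras

noise = keras.layers.GaussianNoise(0.1)
x = np.ones((1, 4), dtype="float32")
print(noise(x, training=True))   # input corrupted with zero-mean Gaussian noise
print(noise(x, training=False))  # input passed through unchanged at inference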
See Training with Noise is Equivalent to Tikhonov Regularization and Adding noise to the input of a model trained with a regularized objective. |
H: Why does test data need to be normalized on train data mean and std?
I understand why it is useful to normalize data in general (at least I think I do). You take the mean and the standard deviation of the train data and apply them to both the train and the test data.
Why is it that we can not take the mean and std of the whole dataset combined (train and test data) and then normalize around these values?
AI: The reason you split your dataset into training and test sets is to simulate real-world cases. What you actually do with train/test validation is evaluate your model on unknown data.
Imagine now that you have trained your model and you are in production, where new data keeps coming in for prediction. You might not get it in bulk, but one record at a time, such as in an API call. You don't have the mean and standard deviation of that "new" data. You only have the mean and std computed during the training process.
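In practice that means fitting the scaler on the training data only and reusing it on the test (or future) data. A minimal scikit-learn sketch (variable names are illustrative):
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # mean/std estimated from training data only
X_test_scaled = scaler.transform(X_test)        # the same mean/std applied to the test data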
To sum up, train-test validation tries to be as close as possible to the real problem. And since you won't know anything about your upcoming data, you should not use any knowledge you get from the test data. |
H: ML algorithm with fixed number of inputs and variable number of outputs
I am trying to solve the following problem:
Let's say I have a chess position:
I encode each square as one-hot encoded vector of length 13
index 0 for empty square
index 1 for white pawn
index 2 for black pawn
index 3 for white bishop
index 4 for black bishop
etc..
So, at the end I have a matrix ${M}$ with a shape 64x13, which I intend to use as an input of the model. I want to create a model able to map matrix ${M}$ to FEN representation of the position.
Given the example above, FEN representation of the position is: 1b1N4/7k/4b3/8/8/8/3P4/3K4
The length of a FEN string is not fixed. I failed to find an ML algorithm which accepts a fixed number of inputs and produces a variable number of outputs. What machine learning algorithm can I use in that case? Is an RNN appropriate for that task?
P.S. I do understand, that it is trivial to just write a function to map ${M}$ to FEN, but I do not want it, I want the model to learn it from data.
AI: RNNs (and LSTM/GRU variants) can generate variable-length output, for example when generating text.
In such problems, the RNN is designed to generate the next character (it keeps track of which characters have been generated so far). The RNN should emit an "End Of Output" character to indicate the end of the sequence.
For example : https://chunml.github.io/ChunML.github.io/project/Creating-Text-Generator-Using-Recurrent-Neural-Network/
The following method will have to be changed:
def generate_text(model, length):
    ix = [np.random.randint(VOCAB_SIZE)]
    y_char = [ix_to_char[ix[-1]]]
    X = np.zeros((1, length, VOCAB_SIZE))
    for i in range(length):
        X[0, i, :][ix[-1]] = 1
        print(ix_to_char[ix[-1]], end="")
        ix = np.argmax(model.predict(X[:, :i+1, :])[0], 1)
        y_char.append(ix_to_char[ix[-1]])
    return ('').join(y_char)
to:
stop_characters = set(['.', '?'])
..
..
        ix = np.argmax(model.predict(X[:, :i+1, :])[0], 1)
        predicted_char = ix_to_char[ix[-1]]
        if predicted_char in stop_characters:
            break
        y_char.append(predicted_char) |
H: Is shuffling training data beneficial for machine learning?
I was curious to know whether shuffling ML training data is beneficial for getting better results.
Sorry not a lot of wisdom here, but I have been reading a post from pythonprogramming.net for this topic.
I copied this function from the post and modified to just save my shuffled data to csv file.
def Randomizing():
    df2 = df.reindex(np.random.permutation(df.index))
    df2.to_csv('C:\\Users\\Machine-Learning-Electric-Data\\randomized.csv')

Randomizing()
What appears to happen is that only the index gets shuffled and all other data stays the same. I have many columns in my pandas DataFrame, and I need each row to stay intact (i.e. randomly shuffle whole rows; it's time-series data). If shuffling is beneficial, can someone give me a tip on how to randomly shuffle my data rather than just the index?
AI: Shuffling the training data is generally good practice during the initial preprocessing steps.
When you do a normal train_test_split, which by default gives you a 75% / 25% split, the split may be affected by the class order in the original data set. For example, a data set similar to the iris data set might have target labels ordered like the following:
For example: [0, 0, 0, 1, 2, 2, 2, 3, 3, 3, 3, 3]
You can see from the example above that splitting your data without shuffling might lead to very poor performance on your test-set evaluation. Said another way, you may only capture classes 0, 1, and 2 in your training data, while only class 3 is represented in your test data. Shuffling is useful specifically for classification tasks, but it can also help for other ML tasks. However, each situation is different, so the best idea is to try it both ways and see whether you get a significant improvement.
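On the practical side of the question, a minimal pandas sketch for shuffling whole rows (assuming df is your DataFrame; each row's values stay together, and reset_index just renumbers the rows):
import pandas as pd

df_shuffled = df.sample(frac=1, random_state=42).reset_index(drop=True)
df_shuffled.to_csv('randomized.csv', index=False)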
Hope this answers your question. Drop a comment if you would like any further clarification. |
H: What does a negative coefficient of determination mean for evaluating ridge regression?
Judging by the negative result being displayed from my ridge.score() I am guessing that I am doing something wrong. Maybe someone could point me in the right direction?
# Create a practice data set for exploring Ridge Regression
data_2 = np.array([[1, 2, 0], [3, 4, 1], [5, 6, 0], [1, 3, 1],
                   [3, 5, 1], [1, 7, 0], [1, 8, 1]], dtype=np.float64)
# Separate X and Y
x_2 = data_2[:, [0, 1]]
y_2 = data_2[:, 2]
# Train Test Split
x_2_train, x_2_test, y_2_train, y_2_test = train_test_split(x_2, y_2, random_state=0)
# Scale the training data
scaler_2 = StandardScaler()
scaler_2.fit(x_2_train)
x_2_transformed = scaler_2.transform(x_2_train)
# Ridge Regression
ridge_2 = Ridge().fit(x_2_transformed, y_2_train)
x_2_test_scaled = scaler_2.transform(x_2_test)
ridge_2.score(x_2_test_scaled, y_2_test)
Output is: -4.47
EDIT: From reading the scikit learn docs this value is the R$^2$ value. I guess the question is though, how do we interpret this?
AI: A negative value means you're getting a terrible fit - which makes sense if you create a test set that doesn't have the same distribution as the training set.
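As a small illustration of how the score can go negative (a minimal sketch with made-up numbers, using the definition quoted below):
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([0., 1., 0., 1.])
y_pred = np.array([1., 0., 1., 0.])        # worse than always predicting the mean (0.5)

u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares = 4.0
v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares = 1.0
print(1 - u / v)                           # -3.0
print(r2_score(y_true, y_pred))            # -3.0, same value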
From the sklearn documentation:
The coefficient $R^2$ is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a $R^2$ score of 0.0. |
H: Naive Bayes Classifier
Could someone please explain to me how and why can we go from equation $4.3$ to equation $4.4$:
$$\hat{c}= \arg\max_{c \in \mathcal{C}}P(c|d) = \arg\max_{c \in \mathcal{C}}\frac{P(d|c)P(c)}{P(d)}\tag{4.3}$$
$$\hat{c}= \arg\max_{c \in \mathcal{C}}P(c|d) = \arg\max_{c \in \mathcal{C}}P(d|c)P(c)\tag{4.4}$$
AI: We are trying to select the optimal $c$; here $d$ is fixed, and hence $P(d)$ and $\frac{1}{P(d)}$ are just positive constants.
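As a concrete (made-up) numerical example: suppose $P(d|c_1)P(c_1) = 0.06$, $P(d|c_2)P(c_2) = 0.02$ and $P(d) = 0.08$. Dividing both numerators by $P(d)$ gives posteriors $0.75$ and $0.25$; the ordering, and therefore the $\arg\max$, is exactly the same as when comparing $0.06$ and $0.02$ directly.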
Multiplying an objective function with a positive constant doesn't change the optimal solution, hence we can drop $P(d)$. |
H: Multivariate Time Series Binary Classification
I have continuous (time series) data. This data is multivariate. Each feature can be represented as time series (they are all calculated on a daily basis). Here is an example:
Days F1 F2 F3 F4 F5 Target
Day 1 10 1 0.1 100 -10 1
Day 2 20 2 0.2 200 -20 1
Day 3 30 3 0.3 300 -30 0
Day 4 40 4 0.4 400 -40 1
Day 5 50 5 0.5 500 -50 1
Day 6 60 6 0.6 600 -60 1
Day 7 70 7 0.7 700 -70 0
Day 8 80 8 0.8 800 -80 0
F1, F2, .. F5 are my features, and Target is my binary classes. If I use a window size of 3, I can convert my features into time-series data. Then, I will have [10,20,30] for feat_1, [1,2,3] for feat_2 and so on. With the window size of 3, I have 5 features* 3 window_size, a total of 15 features if written in the same vector.
The problem with this method is that putting them into the same vector might cause problems, since the value ranges of the features are very different.
Example of multivariate time series (15 features in 1 network):
[10, 20, 30, 1, 2, 3, 0.1, 0.2, 0.3, 100, 200, 300, -10, -20, -30]
[20, 30, 40, 2, 3, 4, 0.2, 0.3, 0.4, 200, 300, 400, -20, -30, -40]
....
[60, 70, 80, 6, 7, 8, 0.6, 0.7, 0.8, 600, 700, 800, -60, -70, -80]
The other option is to create separate time series networks (RNNs mostly, LSTM or CNN or their combination) for each of the features with the same target and then combine their results. In this scenario, I have 5 different networks, and all of them are univariate time series binary predictions.
Example of different networks with univariate time series data (3 features in 5 networks):
[10, 20, 30]
... This is for network 1
[60, 70, 80]
[1, 2, 3]
... This is for network 2
[6, 7, 8]
...
[-10, -20, -30]
... This is for network 5
[-60, -70, -80]
The problem with this one is that I might lose the feature correlation information even though I'm putting their results into another network.
My question is, which is the best way to use when dealing with multivariate time series problems? I want to use the first method, but value differences worry me. The second method is easier, but I worry I might lose some essential information.
AI: You can feed all features as input to an RNN/LSTM (Day #, F1, F2, ..., F5) and use the binary class as output.
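As a rough Keras sketch of that single-network setup (layer sizes are arbitrary, and the inputs are assumed to be scaled per feature so that the different value ranges don't dominate each other):
from tensorflow import keras
from tensorflow.keras import layers

window_size, n_features = 3, 5
model = keras.Sequential([
    layers.LSTM(32, input_shape=(window_size, n_features)),  # one window of all 5 features per sample
    layers.Dense(1, activation="sigmoid"),                   # binary target
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])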
This article has an example of such a network. |
H: Can Adagrad be used to optimize non-differentiable functions?
I am reading a book (TensorFlow For Dummies, Matthew Scarpino), and here it says:
Adagrad methods compute subgradients instead of gradients. A subgradient is a generalization of a gradient that applies to nondifferentiable functions. This means AdaGrad methods can optimize both differentiable and nondifferentiable functions.
But I haven’t seen such a claim anywhere else. So I want to know:
Can we really optimize non-differentiable loss functions just because we are using Adagrad?
AI: The book has a misunderstanding (but it's understandable where it came from).
If you can compute subgradients, you can use gradient descent. You don't have to use AdaGrad -- you can use any gradient method you like. Basically, you just use a subgradient in place of the gradient in the update step. See, e.g., https://en.wikipedia.org/wiki/Subgradient_method.
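A minimal sketch of plain subgradient descent on $f(x) = |x|$, which is not differentiable at 0 (the choice of subgradient at 0 is arbitrary; sign(0) = 0 is a valid one):
import numpy as np

def subgradient_abs(x):
    # sign(x) is a valid subgradient of |x| everywhere (any value in [-1, 1] works at x = 0)
    return np.sign(x)

x, lr = 5.0, 0.1
for _ in range(200):
    x -= lr * subgradient_abs(x)
print(x)  # ends up at (or very near) the minimiser x = 0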
AdaGrad is an adjustment to gradient descent that adjusts the update step. I believe AdaGrad is orthogonal to whether you use subgradients or gradients in the update step.
I can see where the misunderstanding comes from. The original paper on AdaGrad talks about subgradient methods. However, if you read the introduction carefully, you will recognize what's going on there. Subgradient methods are a broader category, that includes both ordinary gradient descent and subgradient descent. So, the paper is just trying to be as general as possible. Their method applies both to ordinary gradient descent and also to subgradient descent. |
H: Boruta Python No feature Selected
I ran Boruta with a RandomForestClassifier yesterday on my data (36 features) and got 17/36 confirmed. Now I run it again, I get 0/36, and it stops at the 9th iteration. Any idea why this is happening?
%%time
rfc = RandomForestClassifier(n_estimators=200, n_jobs=-1, class_weight= 'balanced_subsample' , max_depth=10)
boruta_selector = BorutaPy(rfc, n_estimators='auto', verbose=2, random_state=1)
boruta_selector.fit(X, y)
The current output looks like the following:
Iteration: 1 / 100
Confirmed: 0
Tentative: 36
Rejected: 0
Iteration: 2 / 100
Confirmed: 0
Tentative: 36
Rejected: 0
Iteration: 3 / 100
Confirmed: 0
Tentative: 36
Rejected: 0
Iteration: 4 / 100
Confirmed: 0
Tentative: 36
Rejected: 0
Iteration: 5 / 100
Confirmed: 0
Tentative: 36
Rejected: 0
Iteration: 6 / 100
Confirmed: 0
Tentative: 36
Rejected: 0
Iteration: 7 / 100
Confirmed: 0
Tentative: 36
Rejected: 0
Iteration: 8 / 100
Confirmed: 0
Tentative: 0
Rejected: 36
BorutaPy finished running.
Iteration: 9 / 100
Confirmed: 0
Tentative: 0
Rejected: 36
CPU times: user 1.95 s, sys: 377 ms, total: 2.33 s
Wall time: 2.79 s
AI: To reduce computation time, I had taken a sample from the main data; my mistake came from the fact that the target variable in that sample only contained one class.
H: Is a "curve" considered "linear"?
In linear regression, we are fitting a polynomial to a set of data points. In Bishop's book Pattern Recognition & Machine Learning, there are a few examples where the fit is a curve or a straight line. I am a bit confused about whether a curve is linear or not. The term linear means that the fit should be a linear function, a polynomial of degree 1, i.e. a straight line. But in many resources, examples are shown where the fit is a polynomial of degree 3, 9, etc. So, are these higher-order polynomials linear?
AI: Polynomial regression (for an nth-degree polynomial) is, in statistics, a special case of linear regression. Let's give an example with the square function:
1. y = w*x
This is linear in terms of both weight (w) and data (x).
2. y = w*(x^2) OR y = w*z ; where z = x^2
This is still linear in terms of the weight (w) and is still treated as linear regression on the transformed data (z), even though the modeled relationship between y and x is certainly non-linear.
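A minimal scikit-learn sketch of point (2) with made-up data: the relationship between y and x is cubic, but the model fitted on the transformed features is ordinary linear regression, linear in its coefficients:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(100, 1))
y = 1.5 * x[:, 0] ** 3 - 2 * x[:, 0] + rng.normal(scale=0.5, size=100)

z = PolynomialFeatures(degree=3, include_bias=False).fit_transform(x)  # columns [x, x^2, x^3]
model = LinearRegression().fit(z, y)
print(model.coef_)  # roughly [-2, 0, 1.5]: linear in the weights, cubic in x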
As you can notice above, the commonality between (1) and (2) is the linearity in the weight/coefficient of the linear regression. |
H: How can I plot/display a dataset or an image distribution?
I want to view a specific image or a dataset's distribution, and see if they are different.
Does simply writing something like :
# mydataset.shape = (50k,32,32,3)
plt.hist(mydataset.reshape(-1))
do the trick? or should I be doing something else?
For example, doing so on CIFAR-10 gave me this plot:
However, it doesn't look right; there are 50K training images, and I don't know how to interpret this, or even whether it is the correct way of doing it in the first place!
if I do :
#dataset shape is (50k, 3072)
plt.hist(mydataset.reshape(-1,32*32*3))
#and testset which is (10K,3072)
plt.hist(mytestset.reshape(-1,32*32*3))
This is what I get :
So this is very puzzling to me, and I have no idea how to go about this!
Since there are 10 bins, is it giving me the distribution of each class (CIFAR-10 has 10 classes)? If so, why do I get the same shape when I try to get the distribution of a single image?
I mean when I try to do :
#image is of shape (1,3072)
plt.hist(mytestset[0])
this is what I get :
and interestingly, the histogram for the whole test set looks like this :
#dataset shape is (10k,32,32,3)
plt.hist(mytestset.reshape(-1,32*32*3))
Why do I get ten bins for the single image as well?
what does each axis mean then?
What should I be looking for when it comes to image/dataset distribution?
Is it the raw values for the whole dataset only?
or Is it the raw values for each class? or even each image?
AI: I want to view a specific image or a dataset's distribution, and see
if they are different. Does this do the trick?
It depends what you want to understand or learn about your data.
what does each axis mean then?
In all of your plots, the x-axis ranges from 0-255, which is because in all your plots you are creating histograms of the individual pixel values of your images. A single pixel consists of a vector (tuple) of three values: (red, green, blue). Each of those three colours can take a value from 0 to 255 (usually an unsigned 8-bit integer). So by creating a histogram over your images, you are essentially counting how many times each of the possible values appears.
These plots can therefore tell you something about the average colour distribution. The peaks seem to be around 100 for each of your plots, so I guess there are a lot of mixed colours - colours where the RGB values are in the range 100 - 150. Fewer pixels are purely red, green or blue, e.g. (0, 0, 255) would be purely blue.
You can also compute/visualise the histogram of colours using the OpenCV library, which has great functions for doing exactly this kind of thing (and will run way faster than matplotlib's histogram method because OpenCV uses the C++ backend library). Have a look at this great walkthrough.
Why do I get ten bins for the single image as well?
The reason all your histograms have 10 bins, is because you are not specifying a value for the bins argument to plt.hist, so the default value is taken from the basic configurations of matplotlib, which you can see by running this:
print(plt.rcParams["hist.bins"]) # will print 10 by default
The y-axes on those two plots do make sense; the dataset with 50k images has higher counts for each pixel value compared to the 10k dataset.
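If you want more detail than the default 10 bins, here is a small sketch (assuming mydataset is a NumPy array of shape (N, 32, 32, 3) with uint8 pixel values) that plots a full 256-bin histogram per colour channel:
import matplotlib.pyplot as plt

for c, name in enumerate(['red', 'green', 'blue']):
    plt.hist(mydataset[..., c].ravel(), bins=256, range=(0, 255),
             histtype='step', color=name, label=name)
plt.xlabel('pixel value')
plt.ylabel('count')
plt.legend()
plt.show()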
What should I be looking for when it comes to image/dataset distribution?
Is it the raw values for the whole dataset only? or Is it the raw values for each class? or even each image?
Each plot is showing the distribution of raw values only, for whichever set of data you use. You used mydataset and mytestset as input, so in each case you are only seeing the distribution of those images, of course. There is no inclusion of the actual labels anywhere, so you are not breaking down the distributions into the target classes, for example. Just raw pixel values are considered.
What should I be looking for when it comes to image/dataset
distribution?
This is problem specific.
You could be looking to see that there is indeed a distribution of RGB values, as a sanity check that you don't have some really skewed set of colour images that e.g. are mostly black or white. You might compare the distributions of the training and test sets to one another, to ensure that they are similar - meaning the training set is indeed representative of the test set. If this were not the case, any model you might train on a specific task could be biased towards the training set and perform badly on the test set (it will not have seen images similar to the test set if the distributions are very different!)
For training neural networks with images, it is common to normalise the distribution of pixel values to the range [-1, +1], which helps smoother learning via smoother gradient updates. |
H: How to place multiple graphs in same coordinate system (pandas,mathplotlib) -> look pic
In short: how do I plot several graphs in the same coordinate system, rather than separated as in the picture, each in its own coordinate system within one figure?
AI: The problem is with the dtype of your y values. It is "object", and thus matplotlib thinks two objects containing the same float value are different.
Use,
y = a.astype(float).values
y1 = b.astype(float).values
y2 = c.astype(float).values |
H: Why is performance worse when my time-series data is not shuffled prior to a train/test split vs. when it is shuffled prior to the split?
We are running a RandomForest model on time-series data. The model is run in real time and is refit every time a new row is added. Since it is time-series data, we set shuffle to False while splitting into train and test datasets.
We observed that there is a drastic change in scores when shuffle is True versus when shuffle is False.
The code being used is as follows
# Set shuffle = 'True' or 'False'
df = pandas.read_csv('data.csv', index_col=0)
X = df.drop(columns=['label'])
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, shuffle=True)
count = 0
predictions = []
for idx in X_test.index.values:
    # Train the model on training data
    # print(count, X_train.shape, y_train.shape)
    rf = RandomForestRegressor(n_estimators=600, max_depth=7, random_state=12345)
    rf.fit(X_train, y_train)
    predictions.append(rf.predict(X_test.loc[X_test.index == idx]))
    # print(len(predictions))
    X_train.loc[len(X_train)] = X_test.loc[idx]
    y_train.loc[len(y_train)] = y_test.loc[idx]
    count += 1
Initially, we thought the difference was due to a covariate shift in the data, but that shouldn't have this much of an effect given the continuous refitting.
Here are screenshots for the prediction plot
With shuffle = False
With shuffle = True
AI: Without looking further into the data myself, I can surmise that something has changed recently with your data such that if you split without shuffling, some aspect of the data in your test set (which is what you most recently collected) is underrepresented in your training set. By shuffling the data, you allow those more recent samples to also be present in your training set, and thus your test set performance improves. |
H: What is GridSearchCV doing after it finishes evaluating the performance of parameter combinations that takes so long?
I'm running GridSearchCV to tune some parameters. For example:
params = {
'max_depth':[18,21]
}
gscv = GridSearchCV(
xgbc,
params,
scoring='roc_auc',
verbose=50,
cv=StratifiedKFold(n_splits=2, shuffle=True,random_state=42)
)
gscv.fit(df.drop('LAPSED', axis=1), df.LAPSED)
print('best score: ', gscv.best_score_, 'best params: ', gscv.best_params_)
All fine. Because I've specified some verbosity, it outputs some stuff about what it's doing, like this:
Fitting 2 folds for each of 2 candidates, totalling 4 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[CV] max_depth=18 ....................................................
[CV] ........... max_depth=18, score=0.9453140690301272, total= 8.2min
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 8.3min remaining: 0.0s
[CV] max_depth=18 ....................................................
[CV] ........... max_depth=18, score=0.9444119097669363, total= 7.9min
[Parallel(n_jobs=1)]: Done 2 out of 2 | elapsed: 16.3min remaining: 0.0s
[CV] max_depth=21 ....................................................
[CV] ........... max_depth=21, score=0.9454705777130412, total= 8.4min
[Parallel(n_jobs=1)]: Done 3 out of 3 | elapsed: 24.8min remaining: 0.0s
[CV] max_depth=21 ....................................................
[CV] ........... max_depth=21, score=0.9443863821843195, total= 8.3min
[Parallel(n_jobs=1)]: Done 4 out of 4 | elapsed: 33.2min remaining: 0.0s
[Parallel(n_jobs=1)]: Done 4 out of 4 | elapsed: 33.2min finished
However once it's finished running all the folds, it takes a very long time (at least as long as it takes to fit and evaluate one fold for one combination of parameters) for it to return the output to print('best score: ', gscv.best_score_, 'best params: ', gscv.best_params_), even though I could literally manually calculate that easily from the data it outputs as part of that fitting process. I presume this means that the algorithm is hung up doing something else after it finishes fitting and evaluating the different models, but I'm not sure what that might be.
n.b. actually, the thought just occurred to me that this might be time spent retraining the model on the parameters it identified as giving the best performance, so that it's available to the .predict() etc methods. I'm just checking that now by passing refit=False to prevent that from happening, and if it works I'll answer my own question.
AI: Yep, I figured it out. The answer is that by default GridSearchCV's last act is to expose the API of the estimator object you passed, so that you can directly call things like .predict() or .score() on the GridSearchCV object itself. It does this by retraining the estimator on the full dataset with the best parameters it found during cross-validation. If you want to skip this step (because, for example, you're going to go on to do more development or cross-validation afterwards), you can pass refit=False to prevent that from happening. |
H: MAE,MSE and MAPE aren't comparable?
I'm a newbie in data science. I'm working on a regression problem. I'm getting a MAPE of 2.5, an MAE of 400 and an MSE of 437000. My MAPE is quite low, so why am I getting such high MSE and MAE values? This is the link to my data
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import Normalizer
import matplotlib.pyplot as plt
def mean_absolute_percentage_error(y_true, y_pred):
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
import pandas as pd
from sklearn import preprocessing
features=pd.read_csv('selectedData.csv')
import numpy as np
from scipy import stats
print(features.shape)
features=features[(np.abs(stats.zscore(features)) < 3).all(axis=1)]
target = features['SYSLoad']
features= features.drop('SYSLoad', axis = 1)
names=list(features)
for i in names:
    x = features[[i]].values.astype(float)
    min_max_scaler = preprocessing.MinMaxScaler()
    x_scaled = min_max_scaler.fit_transform(x)
    features[i] = x_scaled
# Selecting the target variable which we want to predict and for which we are finding feature importances
import numpy as np
print(features.shape)
print(features.describe())
from sklearn.model_selection import train_test_split
train_input, test_input, train_target, test_target = train_test_split(
    features, target, test_size=0.25, random_state=42)
trans=Normalizer().fit(train_input);
train_input=Normalizer().fit_transform(train_input);
test_input=trans.fit_transform(test_input);
n=test_target.values;
test_targ=pd.DataFrame(n);
from sklearn.svm import SVR
svr_rbf = SVR(kernel='poly', C=10, epsilon=10,gamma=10)
y_rbf = svr_rbf.fit(train_input, train_target);
predicted=y_rbf.predict(test_input);
plt.figure
plt.xlim(300,500);
print('Total Days For Training', len(train_input))
print('Total Days For Testing', len(test_input))
plt.ylabel('Load(MW) Prediction 3 '); plt.xlabel('Days');
plt.plot(test_targ, '-b', label='Actual')
plt.plot(predicted, '-r', label='POLY kernel')
plt.gca().legend(('Actual','RBF'))
plt.title('SVM')
plt.show();
test_target=np.array(test_target)
print(test_target)
MAPE=mean_absolute_percentage_error(test_target,predicted);
print(MAPE);
mae=mean_absolute_error(test_target,predicted)
mse=mean_squared_error(test_target, predicted)
print(mae);
print(mse);
print(test_target);
print(predicted);
AI: I'll be honest, I haven't thoroughly checked your code. However, I can see that the range of values of your dataset is approx [0,12000]. As an engineer, I see that:
sqrt(MSE) = sqrt(437000) = 661 units.
MAE = 400 units.
MAPE = 2.5, which means the MAE can be up to roughly 0.025 * 12000 = 300 units.
All three cases show a similar magnitude of error, so I wouldn't say that "MAPE is quite low but you're getting high MSE and MAE".
Those 3 values explain the results from similar yet different perspectives. Keep in mind, if the values were all the same, there would have been no need for all 3 of those metrics to exist :) |
H: Is there any formal explanation for the sensitivity of AdaBoost to outliers?
AdaBoost is known to be sensitive to outliers & noise. However, the explanation seems to be hard to find, or nontrivial.
AI: Consider a supervised problem with target values y either being -1 or 1. Let's say we are fitting function f(x) using our AdaBoost classifier. Intuitively we want positive margins, where a margin is defined as $y_{i}f(x_{i})$. A positive margin means we have correctly classified that sample since the sign of both f and y are the same. A negative margin means we have misclassified the sample.
The loss function for Adaboost is defined as an exponential:
$L(y, f(x)) = \exp(-y f(x))$
This exponential then ends up more strongly penalizing points with a large negative margin (i.e., outliers) than other classifiers. Here is a plot of the loss function vs. the margin for several classifiers:
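To make that concrete with illustrative numbers: at a margin of $-4$, the exponential loss is $e^{4} \approx 54.6$, whereas the hinge loss used by SVMs is only $1 - (-4) = 5$ and grows just linearly, so a single badly misclassified outlier can dominate AdaBoost's objective and pull the fit towards it.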
So, to summarize: Adaboost is susceptible to overfitting outliers due to the form of its loss function.
Source: Hastie, et. al, The Elements of Statistical Learning, 2017, pp. 343-347 |
H: How does the cost function of LSTM works?
I am trying to understand how an LSTM network works, but I couldn't find any good sources that explain how its cost function works.
I mean, I know we have a sequence of inputs x<1> to x<t> and a sequence of outputs y_hat<1> to y_hat<t>, but how do we calculate the loss or cost function? Should we calculate y_label<i> - y_hat<i> for each unit? If so, where do we accumulate them, and when and how do we use them?
Is an LSTM network supervised learning, or am I confused?
AI: It is my understanding that there are no differences for loss function optimization in RNN's as compared to any other neural network models. The loss is calculated at the very end of your network output.
Say you have this network architecture for predicting a categorical variable:
Layer Type, Output Size
Input:                 (None, 10, 6)   = (batch_size, seq_len, num_features)
LSTM:                  (None, 10, 100) = (batch_size, seq_len, hidden_unit_size)
TimeDistributed Dense: (None, 10, 10)  = (batch_size, seq_len, num_categories)
Softmax Activation:    (None, 10, 10)  = (batch_size, seq_len, num_categories)
Here, you have 10 time-steps for the sequence and the final softmax output is giving a prediction for the next value of the categorical variable at every timestep (Assuming 1-hot representation of the categorical feature). The Y value for each time step for training is simply the 1-hot representation of the categorical value at t+1. The LSTM layer output gives (batch_size, 10, 100). So, you now have a 100 dimensional representation of every time-step. You pass this to the TimeDistributed Dense and then to softmax. So, at the very end, at the softmax output layer, you have a Y vector of size (batch_size, seq_len, 1) that contains the true output for all 10 time-steps, for every sample in the batch. You then would use this the same exact way as any other NN model using mini-batches to calculate loss and backprop errors to network weights. |
H: How is data science applied in investment banking?
What do data scientists do at investment banks? What tools are they using? What kind of analysis are they doing? Why are they doing it? Etc.
AI: First off, realize that in most banks there are what are known as "front office" (FO) and "back office" (BO) operations. Your scope of work is very much driven by which one of these you are at the bank.
The most obvious answer you are going to get here is around FO operations and being a "Quant" trader. But even within the quants, there are still divisions. For example, there are some quants that only do algorithm development on a theoretical level, they do not do any programming. There are also quants that do programming and very little algorithm development. Depending on the bank, you might also find that data engineers are considered quants in the sense that they work in the same department and are constantly looking for way to optimize the dataflow across all teams.
Along with the quants, you're also going to find quants that do nothing but model validation. Trust me, you have to jump through a lot of hoops before a bank will trust you with their money. That process includes a comprehensive model validation process where every part of your proposal will be taken apart and (hopefully) validated for use. Again, depending on the bank, most quants actually start as model validators, going through proposals for models. If you can't provide proper rationale in these papers, they sure as heck aren't going to let you write algorithms for them!
On the BO side, you can still work for an investment bank as a data scientist but not actually be a quant. For example, virtually every bank has some sort of fraud detection function that is staffed by data scientists. That fraud outlook may be looking both externally and internally. There's also data scientists working in compliance to make sure your bank is running legally across any number of regulations.
So, there you have it. There's tons of opportunities for you to be a data scientist within an investment bank. |
H: Does TensorFlow use OpenCV to convert an image to a NumPy array?
Does TensorFlow use OpenCV to convert an image to a NumPy array?
How is feature extraction done in TensorFlow? How is it different from an OpenCV model?
AI: TensorFlow has two ways of processing image data
Built-in Image Ops ( https://www.tensorflow.org/api_guides/python/image )
tf.keras Pre-processing ( https://keras.io/preprocessing/image/ )
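To the first question directly: TensorFlow ships its own image-decoding ops rather than relying on OpenCV. A minimal sketch (TF 2.x eager mode; the file path is hypothetical):
import tensorflow as tf

raw = tf.io.read_file("example.jpg")
img = tf.image.decode_jpeg(raw, channels=3)   # uint8 tensor of shape (H, W, 3)
arr = img.numpy()                             # plain NumPy array, no OpenCV involved
print(arr.shape, arr.dtype)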
Feature extraction happens at the pixel/channel level. Higher-level features (such as edges or contrast) are typically learnt by CNN layers.
If needed, you can feed Image Data + Features extracted by OpenCV to TensorFlow. |
H: Help to choose algorithm for computing difference between 2 texts?
I have a task to create a tool, which will be able to find articles-duplicates of a given reference article.
I know about word vectorization (tf-idf, word2vec) and RNN methods, but I cannot choose something suitable for my situation.
My requirements:
data are collected on the fly (the program parses articles from web sites, so I don't have a regular DB with a collection of texts)
there is a reference text, whose copies need to be found
copies could be copypasted, partially copypasted (by paragraphs) or paraphrased
a reference-vs-copy comparison algorithm is preferable, but not required (as opposed to reference-vs-corpus)
the algorithm shouldn't do deep semantic analysis, only some kind of word counting, word vectorization, or substring search
instead of one algorithm, I can use a set of heuristics
the algorithms can make false-positive decisions
I came up with these ideas:
download pretrained word2vec and compare means of word-vectors
Build a dictionary word->count from every text and compare it to reference dictionary
collect about 100 texts, vectorize them according to tf-idf and find closest to the reference
I would appreciate it if you could point to specific algorithms, libraries, or examples based on keyword extraction, simple substring search, or line-difference comparison, for Python or the CLI.
AI: algorithm shouldn't do deep semantic analyzis, only kind of word counting, word vectorization, substring search
For such tasks, test if built-in Elastic Search / SOLR models work well. These products have multiple models that can be tuned with configuration. With these models, you provide a document and find other documents that are "similar".
Some of the available algorithms are :
Cosine Similarity
Okapi_BM25
DFR
LM Jelinek Mercer
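If you want a quick Python baseline for the reference-vs-copy comparison before setting up Elasticsearch/Solr, here is a rough scikit-learn sketch using tf-idf and cosine similarity (the texts are placeholders):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Full text of the reference article ..."
candidates = ["parsed article 1 ...", "parsed article 2 ...", "parsed article 3 ..."]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
matrix = vectorizer.fit_transform([reference] + candidates)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()  # similarity of each candidate to the reference
for text, score in zip(candidates, scores):
    print(round(score, 3), text[:30])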
Docs :
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
https://lucene.apache.org/solr/guide/6_6/morelikethis.html |
H: How to launch a Machine Learning model?
First of all, thank you for taking the time to read my question. I have built a machine learning model with a dataset (the famous one about cancer) and I want to know how I can predict the results for new data. I think that I have to keep retraining the model (often) to have more accurate predictions, but for predicting new data, is it as simple as changing the test data (the y variable) to the new data? Thank you so much for your time; any help would be appreciated.
AI: Personally, when I started out, I used this link to make predictions in deep learning:
https://machinelearningmastery.com/how-to-make-classification-and-regression-predictions-for-deep-learning-models-in-keras/
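To illustrate the general pattern with scikit-learn (a minimal sketch assuming the classic breast-cancer dataset and a logistic regression; swap in your own model): you train once, and then prediction on new data is just a call to predict on new feature rows.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(model.score(X_test, y_test))   # accuracy on held-out data

# "New data" is just new rows with the same 30 feature columns; no labels are needed:
new_samples = X_test[:5]
print(model.predict(new_samples))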
if you're using another algorithm (not neural networks), which one? |
H: How to reshape data for LSTM training in multivariate sequence prediction
I want to build an LSTM model for customer behaviour. It's the first time for me working on a timeseries, so some concepts are not clear to me at all.
My prediction problem is multidimensional, meaning that I also want to predict several pieces of information associated with an action for each customer.
The dataset is currently shaped as a list of 2d padded arrays of one-hot encoded features (customer actions + other informations), for example:
customer_id encoded_features
0 25464205 [[0,1,0],..,[1,1,1],[1,0,1],..,[1,0,1]]
1 56456574 [[0,1,1],..,[1,0,1],[1,0,1],..,[1,1,1]]
where each element in the encoded_features entries represents a specific timestep.
My idea here is to use keras input shape
(n. customers, n. timesteps, length of features encoding)
In the example above it would be (2,#timesteps,3).
I have two main questions:
Is this whole setup right for predicting the next single customer action? I would like to simply give a new sequence of features for a certain customer and predict all features at the next timestep.
I am thinking about splitting the data (according to a certain ratio) into sequential training and test sets, in order to test the trained model on unseen feature vectors. In the example above it would be:
customer_id X_train y_train
0 25464205 [[0,1,0],..] [1,1,1]
1 56456574 [[0,1,1],..] [1,0,1]
customer_id X_test y_test
0 25464205 [[1,0,1],..] [1,0,1]
1 56456574 [[1,0,1],..] [1,1,1]
Notice that X_train and X_test will generally contain all Train/Test events, except for the last one which has to be predicted.
Is this a correct interpretation?
AI: This makes sense. It should work for the input and the first couple of layers. For the output layer, you can have a softmax if you only need to generate the next record in the sequence.
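As a rough illustration of that shape in Keras (a minimal sketch; layer sizes are arbitrary), a model that takes (n. timesteps, encoding length) windows and predicts the multi-hot feature vector of the next timestep:
from tensorflow import keras
from tensorflow.keras import layers

n_timesteps, encoding_len = 20, 3   # (timesteps per customer window, length of feature encoding)
model = keras.Sequential([
    layers.LSTM(64, input_shape=(n_timesteps, encoding_len)),
    layers.Dense(encoding_len, activation="sigmoid"),  # predicted feature vector for the next timestep
])
model.compile(loss="binary_crossentropy", optimizer="adam")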
The following Keras code has an example that:
Accepts multi-dimensional inputs (each sample is a sequence of video frames)
Predicts the next few frames of video (multi-dimensional, since each pixel is a feature)
https://github.com/keras-team/keras/blob/master/examples/conv_lstm.py |
H: TF tells me to use Keras optimizer, tells me the opposite when I change it?
I have this piece of code:
model = Sequential()
model.add(Dense(32, input_dim=9, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='relu'))
learning_rate = 0.001
(1) optimizer = tf.train.AdamOptimizer(learning_rate)
(2) optimizer = tf.keras.optimizers.Adam(lr=learning_rate, decay=1e-6)
model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['mse'])
For the optimizer, at first I had line (1). It worked, but I got the following warning when making ModelCheckpoints:
WARNING:tensorflow:TensorFlow optimizers do not make it possible to access optimizer attributes or optimizer state after instantiation. As a result, we cannot save the optimizer as part of the model save file.You will have to compile your model again after loading it. Prefer using a Keras optimizer instead (see keras.io/optimizers).
Apparently, you have to use the Keras optimizer if you want to use this. Ok, no problem, so I commented (1) out, and inserted (2).
Now it doesn't even want to compile anymore, and gives me this error instead:
ValueError: optimizer must be an instance of tf.train.Optimizer, not a class 'tensorflow.python.keras.optimizers.Adam'
This sounds a bit contradictory to me? What am I doing wrong?
AI: The error is telling you to use the actual Keras optimizer, not the tensorflow.keras class. Import Adam directly from the keras library and it runs OK:
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adam
model = Sequential()
model.add(Dense(32, input_dim=9, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='relu'))
learning_rate = 0.001
optimizer = Adam(lr=learning_rate, decay=1e-6)
model.compile(loss='mean_squared_error', optimizer=optimizer, metrics=['mse']) |
H: Gracefully removing observations with outliers in N fields
I have a function.
remove_outliers <- function(x, na.rm = TRUE, ...) {
  # find the 1st and 3rd quartiles, not including NA's
  qnt <- quantile(x, probs = c(.25, .75), na.rm = na.rm, ...)
  H <- 1.5 * IQR(x, na.rm = na.rm)
  y <- x
  y[x < (qnt[1] - H)] <- NA
  y[x > (qnt[2] + H)] <- NA
  x <- y
  # get rid of any NA's
  x[!is.na(x)]
}
Given a dataset(numbers) like this:
x
5
9
2
99
3
4
The functioning is obvious
remove_outliers(numbers)
means I now have this:
x
5
9
2
3
4
However, what if I have an ID that I want to retain, such as:
number_id numbers
12 5
23 9
34 2
45 99
56 3
67 4
How do I remove the outlier(99) with the remove_outliers function(or another, better suited function), to get this data:
number_id numbers
12 5
23 9
34 2
56 3
67 4
(note the entire observation with the outlier has been removed)
And how can I scale this solution to handle n more variables?
I can do it very ungracefully by taking out each column separately and building a new data frame with loops, but it's hardly readable and a mess to debug. Is there a more graceful way?
AI: This will achieve what you want. You can remove outliers from any column you wish, just pass that column number as an argument in the function.
id <- c(12,23,34,45,56,67)
num <- c(5,9,2,99,3,4)
prac <- data.frame(id, num)
remove_outliers <- function(x, col) {
  # find the 1st and 3rd quartiles, not including NA's
  qnt <- quantile(x[, col], probs = c(.25, .75), na.rm = TRUE)
  H <- 1.5 * IQR(x[, col])
  x[, col] <- ifelse(x[, col] < (qnt[1] - H) | x[, col] > (qnt[2] + H), NA, x[, col])
  # get rid of any NA's
  x <- x[!is.na(x[, col]), ]
  x <- assign("dataset", x, envir = .GlobalEnv)
  return(x)
}

remove_outliers(prac, 2) |
H: Gaussian Mixture Models as a classifier?
I'm learning the GMM clustering algorithm. I don't understand how it can used as a classifier. Here are my thought:
1) GMM is an unsupervised ML algorithm. At least that's how sklearn categorizes it.
2) Unsupervised methods can cluster data, but can't make predictions.
However, sklearn's user guide clearly applied GMM as a classifier to the iris dataset.
If I have to guess, maybe after clustering, each cluster is assigned to a class label based on some kind of majority voting. However, I can't find any documentation. Could someone shed more light on this process from unsupervised to supervised learning?
A related question: when using GMM as a classifier, is it common practice to simply make n_components=n_classes, instead of checking AIC, BIC, etc.?
AI: Some unsupervised models can make predictions, but not ones that necessarily match the original class labels. Once a GaussianMixture model has been fitted, it can predict which of the clusters a new example belongs to. This is exactly what the predict and predict_proba functions do in this case, and given that the number of clusters is set to 3, the number of classes, the predict function will predict a label from $\{0, 1, 2\}$.
However, this still raises the question of how does the GaussianMixture assign particular labels to the clusters? In general, it is arbitrarily chosen, but in the example from sklearn that you linked, they cheat when initialising the cluster centers:
# Since we have class labels for the training data, we can
# initialize the GMM parameters in a supervised manner.
estimator.means_init = np.array([X_train[y_train == i].mean(axis=0)
                                 for i in range(n_classes)])
The initial position for each cluster center is at the center of each class, which also has the additional consequence of correctly ordering the cluster labelling to match the original class labels. This means that the GMM predicting which cluster a new instance belongs to is equivalent to predicting which class it might belong to in this case. I believe this has been done for easy visualisation of the different covariance matrix options. |
H: Mxnet deepdog (hot dog not hot dog) example - how does the network know it is classifying a hotdog
Looking at the mxnet documentation.
It takes the pretrained squeezenet1_1 weights and sets the imagenet_hotdog_index variable to 713.
net = models.squeezenet1_1(pretrained=True, prefix='deep_dog_', ctx=contexts)
# hot dog happens to be a class in imagenet.
# we can reuse the weight for that class for better performance
# here's the index for that class for later use
imagenet_hotdog_index = 713
Then they set a 2 class output layer on top of it
deep_dog_net = models.squeezenet1_1(prefix='deep_dog_', classes=2)
deep_dog_net.collect_params().initialize(ctx=contexts)
deep_dog_net.features = net.features
print(deep_dog_net)
Where I get confused is in the classify_hotdog function: it applies softmax to the output layer and then returns the index with the highest result. This would make perfect sense if we had somehow told the network to compare against index 713, but it is called for prediction before the index variable is reused. How does the network know that the class to compare against is index 713 / hot dog? We've basically taken SqueezeNet and reduced it down to a 2-class output, but how does the network know what class to compare against? Why would it give a high probability to the second argument/class when shown a hotdog? It seems to me it shouldn't know what class it is comparing to.
out = mx.nd.SoftmaxActivation(net(image.as_in_context(contexts[0])))
print('Probabilities are: '+str(out[0].asnumpy()))
result = np.argmax(out.asnumpy())
I would have expected it to maybe use the full squeezenet output layer, and set something like
if np.argmax(out.asnumpy()) == 713:
    "Hot Dog!"
else:
    "Not hot dog!"
I've never understood this and would appreciate it if anyone could help me understand this detail.
AI: You are on the right track. But you would have to realize that 713 is only, effectively, being used as an index lookup to get the weights just for that class and "seed" your new model. From there, model becomes a binary classification model where 0 is nothotdog and 1 is hotdog. Once the weights are picked up, 713 shouldn't be found anywhere else in your model.
So the output, initially, is going to be something like [0.493, 0.507], each value corresponding to the probability that the image belongs to one of the two classes. From there, the call to argmax() looks for which probability is highest, according to the binary selection. So it would pick 0.507 and call it a hotdog, since that corresponds to the binary value of 1.
If you want to verify me on all this, go to your output code and do something like print(out.asnumpy()) and you'll see what I mean. |
H: How to most effectively utilize historical data to train churn model
Suppose we have some historical data of users activity on a website and we want to build a churn prediction model (let's say we want to predict churn in a 2 month window). The usual approach, as I understand it, is to take a slice of that historical data at time $t$ and see which users churn in the time interval $(t, t + 2 \text{ month})$, so we take some features at time $t$ and train our model.
However, that way we only use a small part of our data, feeding our model only users who were active at time $t$. But what if we want to use all the historical data? One way that comes to mind is to take a lot of slices of our data at times $t_1, t_2, t_3, \dots$ and just merge them into one dataset. However, different slices could contain a lot of the same users, even if we take these slices very far apart from each other. So our model could potentially learn that if a particular set of features occurs many times in our data set, then the user with these features is less likely to churn (e.g. if we take two time slices at $t_1$ and $t_2 > t_1$, then, if a user is present in both of these slices, he cannot have churned at time $t_1$). So it doesn't seem to be the right way to do it...
How can I extract as much information as possible from a historical data over large period of time without spoiling the model?
AI: That's correct. You will have the same user multiple times. This can introduce a small bias. I would make a few points why this is better than working with your first approach where you have less data:
First, if the same user shows up in multiple slices that means he doesn't churn or churns later than others. So you bias the model towards figuring out what high retention users look like. As you say:
So our model could potentially learn that if a particular set of features occurs many times in our data set, then the user with these features is less likely to churn
and this is actually a good thing if you have a good mix of user retention periods.
Second, you can include the time $t$ as a feature (or their signup date etc.). This will teach your model that users that haven't churned in the past are less likely to churn in the future and reduce the importance the model puts on these other patterns you mention.
More generally, you should not only have a set of static features that don't change for the same user from $t$ to $t+1$. You should also have a set of features that depends on $t$, e.g. "the number of logins in the last 7 days". That way the model will see differences between the same user at time $t$ and $t+1$. |
H: Platt's Scaling for multiclass classification?
How does one do Platt scaling for multiclass classification? For example, if the final layer of my DNN is a softmax activation with 10 classes, then how does Platt scaling work exactly?
Do I train multiple logistic regressions using one-vs-rest classification? Or is there a better way?
AI: There are a few multiclass variants of Platt scaling. The easiest approach is as you have described; simply perform one Platt scaling on each class.
However, there are more sophisticated options. A very simple one to implement is training a standard logistic regression on the logits (the values before the softmax activation is applied). This is called matrix scaling and can overfit pretty easily, so only use it if you have a large calibration set. Alternatively, a fewer-parameter version called vector scaling is relatively simple to implement, where the weight matrix inside the logistic regression is restricted to be a diagonal matrix. Finally, a very simple option that has been shown to work well for neural networks is temperature scaling, where all logits are simply scaled by a single scalar parameter.
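As an example, a minimal NumPy/SciPy sketch of temperature scaling (the simplest of these options), fitted on a held-out calibration set of logits and integer labels:
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    # negative log-likelihood of the calibration labels under temperature-scaled probabilities
    probs = softmax(logits / T)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels):
    res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded", args=(logits, labels))
    return res.x  # divide your logits by this T before the softmax at test time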
You can read more about these and their application to neural networks in Section 4.2 of "On Calibration of Modern Neural Networks" (2017) - available here |
H: Why doesn't loss go down during Neural Net training?
I am working on a Kaggle competition and have tried 2 different code approaches and have the same issue: the loss is large (18247478709991652.0000) and does not go down or is nan.
I'm not sure if there is something wrong with the code or with the data. I tried both scaled and non-scaled data and got the same results. I tried it with the full data set (3,000 examples) and an abbreviated data set.
Here is the abbreviated data.
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
dataframe = pandas.read_csv('data/tmdb/train_processed.csv')
dataframe.drop('id', axis=1, inplace=True)
Y = dataframe['revenue'].values
dataframe.drop(columns=['revenue'], inplace=True)
X = dataframe.values
def baseline_model():
    model = Sequential()
    model.add(Dense(13, input_dim=3, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
seed = 7
numpy.random.seed(seed)
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasRegressor(build_fn=baseline_model, epochs=100, batch_size=5, verbose=1)))
pipeline = Pipeline(estimators)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(pipeline, X, Y, cv=kfold)
print("Result: %.2f (%.2f) MSE" % (results.mean(), results.std()))
AI: Your loss does go down, but not significantly. This is because your target values are very large ~10e7 and the default learning rate is scaled for smaller values. The easiest way to fix this is to normalize Y.
If your intention is for your code to scale Y, then the problem is that Pipeline does not apply StandardScaler (or any transformations) to Y. You have to use sklearn.compose.TransformedTargetRegressor, or apply the transforms to Y outside of the Pipeline.
Pick 1:
Outside the pipeline:
Y = dataframe['revenue'].values
Y = StandardScaler().fit_transform(dataframe['revenue'].values.reshape(-1,1))
Inside the pipeline:
estimators.append(('mlp', KerasRegressor(build_fn=baseline_model, epochs=100, batch_size=5, verbose=1)))
from sklearn.compose import TransformedTargetRegressor
estimators.append(('mlp', TransformedTargetRegressor(
    regressor=KerasRegressor(build_fn=baseline_model, epochs=100, batch_size=5, verbose=1),
    transformer=StandardScaler()))) |
H: How do I find the relevant features out of 11,000+ possibilities?
While working on a Kaggle competition, I ended up with 11,726 columns, which are mostly "dummies" (one-hot encoding). Is this too many?
I know that we need to find out which features are relevant, but not sure how to do this.
AI: Your solution will depend on a couple of factors. One is what type of model you are using. If you are using something that automatically calculates feature importances then you could simply look at these (or take a more balanced look using permutation importances).
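For example, a rough sketch using scikit-learn's SelectFromModel (X and y stand for your training data; the "median" threshold is an arbitrary choice):
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Keep only features whose importance is above the median importance.
selector = SelectFromModel(RandomForestClassifier(n_estimators=200, n_jobs=-1),
                           threshold="median")
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)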
While you could look at feature importances, with ~11,000 possibilities this is going to be pretty difficult. The main focus should be to cut down these features into something more manageable, do you really need one hot encoding? Without knowing more about the dataset I can't provide much more advice. |
H: Micro-F1 and Macro-F1 are equal in binary classification and I don't know why
I have a binary classification problem in which, in the test set, the numbers of samples in the two classes are equal. Since we know the class counts are equal, I take the median of the probabilities output by logistic regression (the probabilities of samples belonging to class 1), map the probabilities to zero and one using that threshold, and then calculate micro-F1 and macro-F1. But they are exactly equal, and I don't know whether that is odd or not, or why it is happening. I would be grateful if you have any idea what's going on and what is wrong.
AI: The difference between macro and micro averaging for performance metrics (such as the F1-score) is that macro weighs each class equally whereas micro weights each sample equally. If the distribution of classes is symmetrical (i.e. you have an equal number of samples for each class), then macro and micro will result in the same score.
As an example for your binary classification problem, say we get a F1-score of 0.7 for class 1 and 0.5 for class 2. Using macro averaging, we'd simply average those two scores to get an overall score for your classifier of 0.6, this would be the same no matter how the samples are distributed between the two classes.
If you were using micro averaging, then it would matter what the distribution was. Say that class 1 made up 80% of your data, the formula would then be 0.7*80% + 0.5*(100%-80%) which would equal 0.66, since each sample is weighed equally and as a result the score is representative of the data imbalance. If class 1 made up 50% of your data, the formula would shift to 0.7*50% + 0.5*(100%-50%) which would be 0.6, the same as the result from macro averaging.
If your data was perfectly balanced, then macro and micro averaging will both result in the same score. If not, there's still a chance that they result in the same score depending on the exact distribution of scores (or if your estimator has the same performance for all classes involved). |
H: keras plotting loss and MSE
Can someone give me a tip on how I could incorporate MSE & loss plots? I have been following some machinelearningmastery posts to plot this but the application is classification and I am attempting regression. Also what is different in my script is I am defining the model thru calling a function, so I am curious if my script could be re-written without the function def wider_model() that defines the model.
The script below works except for what is commented out at the bottom for the plt plots. In the machinelearningmastery post, someone does ask how to do this for regression, and supposedly if you print history.history.keys() two values are returned: dict_keys(['mean_absolute_error', 'loss'])...
Any tips help, there isn't a lot of wisdom here... Thank you
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt
import math
# load dataset
dataset = pandas.read_csv("prepdSPdata.csv", index_col='Date', parse_dates=True)
# shuffle dataset
dataset = dataset.sample(frac=1.0)
# split into input (X) and output (Y) variables
X = numpy.array(dataset.drop(['Demand'],1))
Y = numpy.array(dataset['Demand'])
print(dataset.shape)
print(dataset.dtypes)
print(dataset.columns)
def wider_model():
# create model
model = Sequential()
model.add(Dense(20, input_dim=11, kernel_initializer='normal', activation='relu'))
model.add(Dense(10, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
# Compile model
model.compile(loss='mean_squared_error', optimizer='adam')
return model
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasRegressor(build_fn=wider_model, epochs=1, batch_size=5, verbose=0)))
pipeline = Pipeline(estimators)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(pipeline, X, Y, cv=kfold)
print("Wider: %.2f (%.2f) MSE" % (results.mean(), results.std()))
print("RMSE", math.sqrt(results.std()))
# list all data in history
#print(wider_model.wider_model.keys())
# summarize history for MSE
#plt.plot(history.history['acc'])
#plt.plot(history.history['val_acc'])
#plt.title('model MSE')
#plt.ylabel('MSE')
#plt.xlabel('epoch')
#plt.legend(['train', 'test'], loc='upper left')
#plt.show()
# summarize history for loss
#plt.plot(history.history['loss'])
#plt.plot(history.history['val_loss'])
#plt.title('model loss')
#plt.ylabel('loss')
#plt.xlabel('epoch')
#plt.legend(['train', 'test'], loc='upper left')
#plt.show()
AI: cross_val_score does not return the history of the training. You can use fit instead:
history = model.fit( ...
See this example.
As you mentioned, the history object holds the results of the training for each epoch.
Here is the relevant bit:
history = model.fit(X, X, epochs=500, batch_size=len(X), verbose=2)
pyplot.plot(history.history['mean_squared_error'])
pyplot.plot(history.history['mean_absolute_error'])
pyplot.plot(history.history['mean_absolute_percentage_error'])
pyplot.plot(history.history['cosine_proximity'])
pyplot.show()
I was actually working on the same example that you referenced yesterday. I think it is hard to understand because it introduces many functions and concepts: estimators, StandardScaler, KerasRegressor, Pipeline, KFold and cross_val_score.
However, I did like the approach to creating and testing models, and Cross Validation would produce more robust models. |
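If you want loss curves for the regression model in the question, one option is to drop the Pipeline/cross_val_score wrapper and call fit directly so that a history object is returned. A sketch, assuming X and Y are the arrays already built in your script (ideally standardized first):
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt

# Same architecture as wider_model(), built inline
model = Sequential()
model.add(Dense(20, input_dim=11, kernel_initializer='normal', activation='relu'))
model.add(Dense(10, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
model.compile(loss='mean_squared_error', optimizer='adam')

# validation_split holds out 20% of the data so val_loss is recorded as well
history = model.fit(X, Y, epochs=100, batch_size=5, validation_split=0.2, verbose=0)

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss (MSE)')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
Since the loss here is mean squared error, this single plot covers both the loss and the MSE curves.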
H: How do I send the results of a convolutional layer and non-deep-learning features into a dense layer in Keras?
I understand that I can set up a convolutional network for 1-dimensional sequence/time series.
model = Sequential()
model.add(Conv1D())
model.add(GlobalMaxPooling1D())
model.add(Dense())
Let's say I'd like to use "regular" (non-deep-learning) features too in my model, how should I best combine the two at a dense layer?
Concretely, let's assume that, for each row of my dataset, there are 1k points in the time series, along with 100 "regular" features.
To generalize my question, let's say there are now two kinds of time series plus regular features for each row in my dataset. If I would like to have a separate convolutional block for each time series, how do I combine all three?
AI: This can be done with Keras functional API.
In this example, "merge_1" layer gets input from :
Output of LSTM Layer (lstm_1)
aux_input layer
More info : https://keras.io/getting-started/functional-api-guide/ |
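For the setup in the question (one time series of 1000 points plus 100 ordinary features), a hedged sketch with the functional API could look like this; the layer sizes are arbitrary, and a second time series would just be another Conv1D branch added to the concatenate call:
from keras.layers import Input, Conv1D, GlobalMaxPooling1D, Dense, concatenate
from keras.models import Model

# Branch 1: the time series (1000 steps, 1 channel)
series_in = Input(shape=(1000, 1), name='series_input')
x = Conv1D(filters=32, kernel_size=5, activation='relu')(series_in)
x = GlobalMaxPooling1D()(x)

# Branch 2: the 100 "regular" features, fed in directly
aux_in = Input(shape=(100,), name='aux_input')

# Merge both branches, then finish with dense layers
merged = concatenate([x, aux_in])
out = Dense(64, activation='relu')(merged)
out = Dense(1, activation='sigmoid')(out)

model = Model(inputs=[series_in, aux_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')
At fit time you then pass a list of two arrays, e.g. model.fit([series_array, aux_array], y).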
H: General question on the approach to optimise numbers
I have a huge huge model in SQL that nobody knows what it is doing. This model spits out some numbers and those numbers should be optimised to match another batch of 'correct' numbers as much as possible. So I get one set of numbers from my black box model that change according to how different parameters change inside that model - there are around 400 parameters that can be changed to get different numbers out of it (no, I don't want to even touch it, let alone try to understand how it works) and I have another 'correct' set of numbers that I get for those parameters from a person in charge of it. What would be the best approach to 'optimise' my black box numbers to match the correct ones? The black box numbers will roughly match the correct numbers depending on how the parameters are set up, but I need to make them more correct. I was thinking maybe Bayesian optimisation in Python or GridSearch but not sure if that is the best approach. Any ideas would be appreciated.
AI: For this, you can treat output provided by "person in charge" as ground truth. I assume that you have historical records of :
Numbers generated by this person
Parameters that were fed into the model to generate numbers
treat these 400 parameters as train_x and the numbers generated by the person as train_y.
Once you have this data in CSV, you can explore various Algo to find a model that can match train_y with minimal error.
Since no details are available on "how" either of these outputs are produced, you need to pick a tool that enables quick experimentation.
I will suggest Weka since this is a UI tool and does not require any coding. Once you identify an Algo, you can implement it in Python.
Some tutorials :
https://www.youtube.com/watch?v=6tDnNyNZDF0
https://www.youtube.com/watch?v=kFKxUQDnLV8
https://www.youtube.com/watch?v=COpXNK0O8As |
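If you would rather stay in Python than use Weka, the same idea can be prototyped in a few lines with scikit-learn. The CSV layout below (400 parameter columns plus a correct_output column, in a file called history.csv) is just an assumption to illustrate the shape of the problem:
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Assumed layout: one row per historical run, 400 parameter columns + the "correct" number
df = pd.read_csv('history.csv')            # hypothetical file name
X = df.drop(columns=['correct_output'])    # the 400 parameters (train_x)
y = df['correct_output']                   # numbers from the person in charge (train_y)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring='neg_mean_absolute_error')
print('MAE per fold:', -scores)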
H: Precision vs. Recall
Can I make the following statement about a binary classification, please?
Precision 1: 0.10
Recall 1: 0.83
Statement: "We can expect 90% false alarms (1 - 0.10). But for the remaining 10%, we can be around 83% certain (Recall 1: 0.83), that we caught a label 1."
Thanks in advance!
AI: I would phrase it like so:
"Of all records that were labelled 1 by the model, 10% were actually 1 (90% incorrect predictions). Of all records that were truly labelled 1 we predicted 83% correctly."
While this is outside the scope of your question, if support refers to the number of records then it would be beneficial to get a more balanced dataset. The reason your precision is so poor for label 1 is that there are many more "negatives" (0s) than "positives" (1s), which increases the chance of false positives and drags down your precision.
EDIT:
This question on Cross Validated provides more explanation. |
H: In Keras library, what is the meaning of "nb_steps_warmup" in the DQNAgent Object initialization?
I can't understand the meaning of "nb_steps_warmup", a parameter of the __init__ function of DQNAgent class of the Keras_RL module.
I just know that when I set small value for "nb_steps_warmup" the command line prints: UserWarning: Not enough entries to sample without replacement. Consider increasing your warm-up phase to avoid oversampling!
This is my code:
import numpy as np
import gym
import gym_briscola
import argparse
import os
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory
import logging
def getModel(input_shape, nb_actions):
# Next, we build a very simple model.
model = Sequential()
model.add(Flatten(input_shape = input_shape))
model.add(Dense(nb_actions, activation = 'relu'))
for i in range(2):
model.add(Dense(2, activation = 'relu'))
model.add(Dense(nb_actions, activation = 'relu'))
# print(model.summary())
return model
def init():
ENV_NAME = 'Briscola-v0'
# Get the environment and extract the number of actions.
env = gym.make(ENV_NAME)
env.setName("Inteligence")
env.cicle = True
nb_actions = env.action_space.n
window_length = 10
input_shape = (window_length, 5)
# Finally, we configure and compile our agent. You can use every built-in Keras optimizer and
# even the metrics!
memory = SequentialMemory(limit=50000, window_length=window_length)
#Maxwell-boltzmann distribution
policy = BoltzmannQPolicy()
dqn = DQNAgent(model=getModel(input_shape, nb_actions), nb_actions=nb_actions, memory=memory, nb_steps_warmup=11,
target_model_update=1e-5, policy=policy, dueling_type='avg', enable_dueling_network=True)
print("Compila")
dqn.compile(Adam(lr=1e-5), metrics=['mae'])
try:
dqn.load_weights('dqn_{}_weights.h5f'.format(ENV_NAME))
except OSError:
print("File non trovato")
return dqn, env, ENV_NAME
def startTraining():
dqn, env, ENV_NAME = init()
print("Fit")
dqn.fit(env, nb_steps=5E6, visualize=False, verbose=1, log_interval=1000)
# After training is done, we save the final weights.
dqn.save_weights('dqn_{}_weights.h5f'.format(ENV_NAME), overwrite=True)
def startTest():
dqn, env, _ = init()
print("Test")
# Finally, evaluate our algorithm for 5 episodes.
dqn.test(env, nb_episodes=10, visualize=False)
#Log config
def setLogging(show = True):
logging.getLogger("Briscola").propagate = show
logging.getLogger("IA").propagate = True
logging.getLogger("Client").propagate = show
logging.getLogger("Vincitore").propagate = show
logging.basicConfig(level=logging.INFO)
if __name__ == "__main__":
#Parameter settings
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--modality", help="The modality of the program", const="train", nargs='?')
parser.add_argument("-l", "--logging", help="Enable logging", type=bool, const=False, nargs='?')
args = parser.parse_args()
setLogging(True)
print("Avvio modalita' ", args.modality)
if args.modality == "test":
startTest()
else:
startTraining()
print("Fine")
This is the doc of the module: https://keras-rl.readthedocs.io/en/latest/agents/dqn/
I hope I was clear with my English.
AI: Often times in reinforcement learning the error rate of the first few steps will be very large and may cause your parameters to oscillate. This is usually attributed to the lack of specificity of the deeper layers in your network. Thus we can come up with some schemes where the learning rate changes in a pre-determined way. For example we can use constant warm-up or gradual warm-up.
The convergence of stochastic gradient descent is a function of the learning rate and the batch size. When the batch size is increased too much then the needed increase in the learning rate can be such that it is beyond the possible curvature of the loss function. We thus introduce warm up as a means by which we can introduce large learning rates without the instability. |
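As an aside, a gradual warm-up of that kind can be written as a simple function of the step count; the numbers here are arbitrary and only meant to show the shape of the schedule:
def warmup_lr(step, base_lr=1e-3, warmup_steps=1000):
    """Linearly ramp the learning rate from near 0 up to base_lr over warmup_steps."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

# The learning rate grows linearly for the first 1000 steps, then stays constant
print(warmup_lr(0), warmup_lr(500), warmup_lr(2000))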
H: What metrics determine the quality of the model?
Working on this Kaggle competition, and have some questions. Using this code:
def r2_keras(y_true, y_pred):
SS_res = K.sum(K.square(y_true - y_pred))
SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
The output of my training looks like:
0s 138us/step - loss: 0.1340 - mean_squared_error: 0.1340 - r2_keras: 0.7565 - val_loss: 0.4112 - val_mean_squared_error: 0.4112 - val_r2_keras: 0.4064
Scaled Validation r2: 0.5182
Unscaled Validation r2: -152.1261
I am using 20% of the training data for validation.
I am tracking these metrics during training:
Training loss, mse and r2
Validation loss, mse and r2
I get these metrics on the model after training:
R2 for Validation on scaled data
R2 for Validation on unscaled data
scaler = StandardScaler()
scaled_train = scaler.fit_transform(train_df)
scaled_test = scaler.transform(test_df)
...
m.fit( X_train, Y_train, epochs=epochs, validation_data=(X_test,Y_test))
....
from sklearn.metrics import r2_score
scaled_r2 = r2_score(prediction, scaled_test_df[[target]].values)
unscaled_r2 = r2_score(descaled_prediction, test_df[target].values)
So, my questions are:
Unscaled and scaled r2's are not highly correlated (0.31 AAMOF). Which one would best describe the accuracy of the model on unseen data?
Why isn't the unscaled r2 the same as the scaled r2?
The model r2 is not the same as any of the validation r2's during training (val_r2_keras). Shouldn't the trained model r2 be the same as the one reported during the training?
AI: Unscaled and scaled r2's are not highly correlated (0.31 AAMOF). Which one would best describe the accuracy of the model on unseen data?
I don't think this is a matter of which will describe the generalization error better, because both of them are describing the same thing, just on different scales. So, the advice would be to use the accuracy metric consistent with the metric that will be used for predictions on unseen data.
Why isn't the unscaled r2 the same as the scaled r2?
This is because MSE is scale dependent.
The model r2 is not the same as any of the validation r2's during training (val_r2_keras). Shouldn't the trained model r2 be the same as the one reported during the training?
Why do you think so?
They are different because the datasets for training and for validation are different. |
H: No graph is displaying while plotting value with time interval using python
I have a large dataset with values and dates and times, and I want to plot a graph of the values against date and time. I wrote code to build the time intervals, then tried to store the values into those intervals and plot them. It runs but shows no graph. Can anyone help me solve this error?
here is my code:
x = df1.iloc[:,2]
time_interval = 14400 #interval in seconds (14400s = 360 minutes)
date_array = []
date_array.append(pd.to_datetime(df1['date'][0]).date())
start_time = []
end_time = []
temp_date = pd.to_datetime(df1['date'][0]).date()
start_time.append(pd.to_datetime(df1['time'][0], format='%H:%M:%S').time())
for i in range(len(df1['date'])):
cur_date = pd.to_datetime(df1['date'][i]).date()
if( cur_date > temp_date):
end_time.append(pd.to_datetime(df1['time'][i-1], format='%H:%M:%S').time())
start_time.append(pd.to_datetime(df1['time'][i], format='%H:%M:%S').time())
date_array.append(cur_date)
temp_date = cur_date
end_time.append(pd.to_datetime(df1['time'][len(df1['date'])-1], format='%H:%M:%S').time())
datetime_array = []
for i in range(len(date_array)):
s_time = datetime.datetime.combine(date_array[i],start_time[i])
e_time = datetime.datetime.combine(date_array[i], end_time[i])
timediff = (e_time - s_time)
#num_periods = int(timediff.total_seconds()/t3ime_interval) +1
num_periods = abs(int(timediff.total_seconds()/time_interval)) + 1
time_list = pd.date_range(start=s_time, end = e_time, periods=num_periods ).to_pydatetime()
datetime_array.extend(time_list)
time_stamps = [datetime.datetime.strftime(t,'%H:%m:%S') for t in datetime_array]
x = np.zeros([num_periods], dtype='timedelta64[s]')
plt.xticks(np.arange(num_periods), time_stamps)
my csv file:
after running the code output came like this:
image:
AI: In python you generally have all the libraries available to you. It is hard to find sometimes but you should rarely need to write out so much code. Try this out.
I created some dummy data using the same date formats as you have:
import pandas as pd
import matplotlib.pyplot as plt
data = {'date': ['08/06/2018', '8/6/2018', '8/6/2018', '9/6/2018'],
'time': ['6:15:00', '12:45:00', '18:15:00', '6:15:00'],
'x2': [1, 4, 8, 6]}
Now we will make a pandas DataFrame with this dummy data
df = pd.DataFrame(data)
Now we can get our x-axis datetimes by first concatenating the dates and times together separated by a space. Then we will get pandas to parse these datetimes.
datetimes = pd.to_datetime(df['date'] + ' ' + df['time'],
format='%d/%m/%Y %H:%M:%S')
You can then plot your data using
plt.plot(datetimes, df['x2'])
Put your csv file in your workspace. Then you can use this following code
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv(r'temp.csv')
datetimes = pd.to_datetime(df['date'] + ' ' + df['time'],
format='%d/%m/%Y %H:%M:%S')
plt.plot(datetimes, df['x'])
plt.show()
import matplotlib.dates as mdates
fig, ax = plt.subplots(1)
fig.autofmt_xdate()
plt.plot(datetimes, df['x'])
plt.xticks(rotation=90)
xfmt = mdates.DateFormatter('%d-%m-%y %H:%M')
ax.xaxis.set_major_formatter(xfmt)
plt.show() |
H: Neural Network is not getting better
So i started building my own neural network framework in node.js, just to understand the concept of neural networks better.
Currently i implemented Neurons, Connections between these Neurons and Layers.
Given this network:
N3s output for example, would be Sigmoid((N1 * W1 + N2 * W4) + B) where B is a randomly generated Bias between -1 and 1.
To train the network, a dataset which contains the inputs and their desired output is used. The network will be fed the inputs (every input is assigned to one Neuron in the first layer, for example [1, 0] would result in N1 being 1 and N2 being 0). Then for each Connection, the weight will be increased or decreased by 1% if the resulting output of the network would be closer to the desired one.
For example if 1 is the desired output and the current output would be 0.5, the output with the weight increased would be 0.37 and the output with the weight decreased would be 0.46. The weight would be increased.
Same thing is done for every Neurons Bias.
My Problem:
If i train above network with the dataset:
input: 0, 0
desired Output: 0
input: 0, 1
desired Output: 1
input: 1, 0
desired Output: 1
input: 1, 1
desired Output: 0
for 10000 iterations, it will give me something similar to the following output:
[ 0, 0 ] => [ 0.13500189926566186 ]
[ 0, 1 ] => [ 0.14749124528600177 ]
[ 1, 0 ] => [ 0.14238677719564685 ]
[ 1, 1 ] => [ 0.15412548826116046 ]
I am kinda new to Neural Networks, so sorry if this is a dumb question.
AI: Your training algorithm seems wrong. You should either implement Back Propagation, or tweak your own training algo towards that.
Now your training brings all the weights to an average value, that's why your output seems so close after 10000 iterations. What BP does is it modifies the weights that matter more.
Alternatively, if you don't want to use BP, you can play with evolution instead. It will give you the same result, just slower.
edit: can you post your training algo's code somewhere?
Also, you don't really need different objects (like Neurons, Connections, etc.), you only need matrix algebra for an efficient solution. |
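For reference, here is a minimal matrix-algebra version of backpropagation on the same XOR data in numpy. It is a sketch of the standard algorithm (sigmoid activations, squared error, plain gradient descent), not a drop-in replacement for your node.js code, and depending on the random seed it may need more iterations:
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.RandomState(0)
W1, b1 = rng.uniform(-1, 1, (2, 3)), np.zeros((1, 3))  # input -> hidden
W2, b2 = rng.uniform(-1, 1, (3, 1)), np.zeros((1, 1))  # hidden -> output
lr = 0.5

for _ in range(20000):
    # forward pass
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # backward pass: chain rule, layer by layer
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # gradient descent step
    W2 -= lr * hidden.T @ output_delta
    b2 -= lr * output_delta.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ hidden_delta
    b1 -= lr * hidden_delta.sum(axis=0, keepdims=True)

print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(3))  # should approach [0, 1, 1, 0]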
H: Given a target vector and a feature vector, how to compute the weights
On page 13 of the slides, $t$ and $X$ are given as follows. I don't understand how we get $w$.
$$t=[t^{(1)},t^{(2)}, \ldots, t^{(N)} ]^T$$
$$X=\begin{bmatrix}1, x^{(1)} \\ 1, x^{(2)} \\ \vdots\\1, x^{(N)} \end{bmatrix}$$
Then:
$$w=(X^TX)^{-1}X^Tt$$
AI: The least square problem is to minimize $$\|Xw-t\|^2$$
Differentiating it with respect to $w$ and equating it to $0$, we have
$$2X^T(Xw-t)=0$$
Hence, we have
$$X^TXw-X^Tt=0$$
That is $$X^TXw=X^Tt$$
$$w=(X^TX)^{-1}X^Tt$$ |
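In code this closed-form solution is a one-liner; in practice a least-squares solver is preferred over an explicit inverse for numerical stability:
import numpy as np

# Toy data: t is roughly 2 + 3x with a little noise
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
t = 2 + 3 * x + np.random.RandomState(0).normal(0, 0.1, size=x.shape)

# Design matrix with a column of ones for the bias term
X = np.column_stack([np.ones_like(x), x])

w = np.linalg.inv(X.T @ X) @ X.T @ t             # (X^T X)^{-1} X^T t
w_stable = np.linalg.lstsq(X, t, rcond=None)[0]  # preferred in practice
print(w, w_stable)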
H: Meaning of 'hue" in seaborn barplot
Seaborn barplot has three parameters.
x, y, hue : names of variables in data or vector data, optional
Question
What is hue? It seems the attribute to plot but why it is called "hue" because when I googled, the result is about color?
Google
Hue - Wikipedia
Hue is one of the main properties (called color appearance parameters) of a color, defined technically (in the CIECAM02 model)
AI: In seaborn, the hue parameter determines which column in the data frame should be used for colour encoding. The official documentation for lmplot provides an example of this.
import seaborn as sns; sns.set(color_codes=True)
tips = sns.load_dataset("tips")
g = sns.lmplot(x="total_bill", y="tip", data=tips)
Adding `hue="smoker" tells seaborn you want to colour the data points for smoker and non-smoker differently.
>>> g = sns.lmplot(x="total_bill", y="tip", hue="smoker", data=tips) |
H: What do "compile", "fit", and "predict" do in Keras sequential models?
I am a little confused about these parts of the Keras sequential model's functions. Could someone explain exactly what the job of each one is? I mean, does compile do the forward pass and calculate the cost function, which is then passed through fit to do the backward pass, calculate the derivatives and update the weights? Or what?
I have seen in some code that compile is only used for some of the LSTMs and fit for some other ones! So I need to know which part of the work (training a neural network) each of these functions does.
I would also like to know what exactly the predict function does.
Thank you very much in advance!
AI: Let's first see what we need to do when we want to train a model.
First, we want to decide a model architecture, this is the number of hidden layers and activation functions, etc. (compile)
Secondly, we will want to train our model to get all the parameters to the correct values to map our inputs to our outputs. (fit)
Lastly, we will want to use this model to do some feed-forward passes to predict novel inputs. (predict)
Let's go through an example using the mnist database.
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.callbacks import ModelCheckpoint
from keras.models import model_from_json
from keras import backend as K
Let's load our data. Then I normalize the values of the pixels to be between 0 and 1.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
Now we need to reshape our data to compatible with Keras. We need to add an additional dimension to our data which will act as our channel when passing the data through the deep learning model. I then vectorize the output classes.
# The known number of output classes.
num_classes = 10
# Input image dimensions
img_rows, img_cols = 28, 28
# Channels go last for TensorFlow backend
x_train_reshaped = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test_reshaped = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
# Convert class vectors to binary class matrices. This uses 1 hot encoding.
y_train_binary = keras.utils.to_categorical(y_train, num_classes)
y_test_binary = keras.utils.to_categorical(y_test, num_classes)
Now let's define our model. We will use a vanilla CNN for this example.
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
Now we are ready to compile our model. This will create a Python object which will build the CNN. This is done by building the computation graph in the correct format based on the Keras backend you are using. I usually use tensorflow over theano. The compilation steps also asks you to define the loss function and kind of optimizer you want to use. These options depend on the problem you are trying to solve, you can find the best techniques usually reading the literature in the field. For a classification task categorical cross-entropy works very well.
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
Now we have a Python object that has a model and all its parameters with its initial values. If you try to use predict now with this model your accuracy will be 10%, pure random output.
You can save this model to disk to use later.
# Save the model
model_json = model.to_json()
with open("weights/model.json", "w") as json_file:
json_file.write(model_json)
So, now we need to train our model so that the parameters get tuned to provide the correct outputs for a given input. We do this by feeding inputs at the input layer and then getting an output, we then calculate the loss function using the output and use backpropagation to tune the model parameters. This will fit the model parameters to the data.
First let's define some callback functions so that we can checkpoint our model and save it model parameters to file each time we get better results.
# Save the weights using a checkpoint.
filepath="weights/weights-improvement-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
epochs = 4
batch_size = 128
# Fit the model weights.
model.fit(x_train_reshaped, y_train_binary,
batch_size=batch_size,
epochs=epochs,
verbose=1,
callbacks=callbacks_list,
validation_data=(x_test_reshaped, y_test_binary))
Now we have a model architecture and we have a file containing all the model parameters with the best values found to map the inputs to an output. We are now done with the computationally expensive part of deep learning. We can now take our model and use feed-forward passes to predict new inputs. I prefer to use predict_classes, rather than predict, because it immediately gives me the class rather than the output vector.
print('Predict the classes: ')
prediction = model.predict_classes(x_test_reshaped[10:20])
show_imgs(x_test[10:20])
print('Predicted classes: ', prediction)
Predicted classes: [0 6 9 0 1 5 9 7 3 4]
The code to print the MNIST database nicely
import matplotlib.pyplot as plt
%matplotlib inline
# utility function for showing images
def show_imgs(x_test, decoded_imgs=None, n=10):
plt.figure(figsize=(20, 4))
for i in range(n):
ax = plt.subplot(2, n, i+1)
plt.imshow(x_test[i].reshape(28,28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
if decoded_imgs is not None:
ax = plt.subplot(2, n, i+ 1 +n)
plt.imshow(decoded_imgs[i].reshape(28,28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show() |
H: Gradient Boosting RandomSearchCV or GridSearchCV
In your algorithms, when you use Gradient Boosting, do you prefer RandomSearchCV or GridSearchCV in order to optimize your hyperparameters ?
Thanks for sharing your experience.
AI: I think it depends on the size of your multi dimensional grid. If it is small, then you can afford to be exhaustive and do a grid search. But if it is very large, and your computation time for a grid search extends too much, then definitely go to a random search. In fact, with random search one can explore larger regions than with grid search, and that is an advantage.
In any case, for hyperparameters search there are two keys:
Monitor it while it is running, so that you can stop it when you are happy with the results. This is specially important for random search.
Split your data in three, train, cross validation and test. Evaluate the hyperparameter search in the cv set. Once finished, rank them by their performance there, and then take the best point and re-evaluate in test. It may happen that you do what is called "overfitting to cv set", in which case the performance in test set will deteriorate significantly vs the performance in cv set. If that happens, try with the second best, third best etc until the performance in test set is good. But take into account that the more points you take from the ranking, the higher the chance of "overfitting to test set".
Once you find the best regions, you can do either perform a more constrained there with either grid search or random search again.
Another option, which tends to work very well, is bayesian optimisation. Here the library that you use is important. In Python, after trying several which gave different problems, the best I found was skopt: https://scikit-optimize.github.io/ |
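As a hedged illustration of the random search option with scikit-learn's GradientBoostingClassifier (the parameter ranges are arbitrary examples, not recommendations):
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

param_distributions = {
    'n_estimators': randint(50, 500),
    'learning_rate': uniform(0.01, 0.3),
    'max_depth': randint(2, 8),
    'subsample': uniform(0.5, 0.5),
}

search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions,
    n_iter=30,        # number of random points to try
    cv=3,
    scoring='accuracy',
    random_state=0,
)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
print('held-out test score:', search.score(X_test, y_test))
Keeping a final test set out of the search, as above, is exactly the kind of safeguard against overfitting to the cross-validation set described in the second point.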
H: seaborn regplot does not show the line
Question
Why lmplot show the line but regplot does not?
lmplot
import seaborn as sns
import matplotlib.pyplot as plt
sns.lmplot(x="Value", y="dollar_price", data=merged_df, height=8, aspect=1.5)
regplot
# seaborn.regplot() returns matplotlib.Axes object
plt.rcParams['figure.figsize'] = (15,10)
ax = sns.regplot(x="Value", y="dollar_price", data=merged_df, fit_reg=False)
ax.set_xlabel("GDP per capita (constant 2000 US$) 2017")
ax.set_ylabel("BigMac index (US$)")
# Label the country code for those who demonstrate extreme BigMac index
for row in merged_df.itertuples():
ax.text(row.Value,row.dollar_price+0.1,row.country)
Documentation
seaborn.regplot documentation page shows the line. So I suppose there is something missing?
>>> import seaborn as sns; sns.set(color_codes=True)
>>> tips = sns.load_dataset("tips")
>>> ax = sns.regplot(x="total_bill", y="tip", data=tips)
AI: ax = sns.regplot(x="Value", y="dollar_price", data=merged_df, fit_reg=False)
You have the fit_reg set to False here. Just set it True, you should see the regression line. |
H: 'DataFrame' object has no attribute 'to_dataframe'
I'm sure I have a small error here that I'm overlooking, but am having a tough time figuring out what I need to change.
Here is my code up until the error I'm getting.
# Load libraries
import pandas as pd
import numpy as np
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
# Load dataset
names = ['action','reject','approve','occ','loanamt', 'suffolk', 'appinc','typur','unit','married','dep','emp','yjob','self','atotinc','cototinc','hexp']
# from azureml import Workspace
# ws = Workspace(
# workspace_id='',
# authorization_token='==',
# endpoint='https://studioapi.azureml.net'
# )
# ds = ws.datasets['loanapp_c.csv']
ds = pd.read_csv('desktop/python ML/loanapp_c.csv')
dataset = ds.to_dataframe()
I was running this on Azure and am now trying to do it locally. Here is the error I'm getting:
AttributeError Traceback (most recent call last)
<ipython-input-3-b49a23658806> in <module>()
32
33 ds = pd.read_csv('desktop/python ML/loanapp_c.csv')
---> 34 dataset = ds.to_dataframe()
35
36 # shape
~/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py in __getattr__(self, name)
   4374         if self._info_axis.can_hold_identifiers_and_holds_name(name):
   4375             return self[name]
-> 4376         return object.__getattribute__(self, name)
   4377
   4378     def __setattr__(self, name, value):
AttributeError: 'DataFrame' object has no attribute 'to_dataframe'
Not sure what I have wrong.
AI: The function pd.read_csv() already returns a DataFrame, and that object does not support calling .to_dataframe().
You can check the type of your variable ds using print(type(ds)), you will see that it is a pandas DataFrame type. |
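So the fix is simply to drop that call; a minimal corrected version (with your own path) would be:
import pandas as pd

# read_csv already gives you a DataFrame, no conversion needed
dataset = pd.read_csv('desktop/python ML/loanapp_c.csv')
print(type(dataset))  # <class 'pandas.core.frame.DataFrame'>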
H: Backpropagation for noobs
I am trying to understand neural networks and how they work, by programming my own one from scratch in nodejs. Currently, i managed to build a network, that has weights, layers and neurons. I also understood what an activation function is and i am using the sigmoid function.
I've now come to the point of back propagation with the gradient descent algorithm. My Problem is, that i only know 10th grade Math and every tutorial/explanation i managed to find, uses complex functions and math which i cannot manage to understand.
I would really like to finish this project and get back propagation to work, so if someone could explain me how to use the back propagation without using too complex math (e, derivatives, functions, etc.) or explaining the complex math it would be greatly appreciated.
My source Code: https://gitlab.com/milan44/node (Training is happening in Network.prototype.train)
AI: How about this video, which explains backpropagation intuitively?
https://www.youtube.com/watch?v=Ilg3gGewQ5U
Title: "What is backpropagation really doing? | Deep learning, chapter 3"
Author: "3Blue1Brown" |
H: How to match a user with another user based on their taste?
Information available
Consider that there are N users on a platform. Every user adds items that they like on their profile. These items have static attributes that describe the product.
User A:
Row | Attribute a | Attribute b | Attribute c
Item 1| 0.593 | 0.7852 | 0.484
Item 2| 0.18 | 0.96 | 0.05
Item 3| 0.423 | 0.886 | 0.156
User B:
Row | Attribute a | Attribute b | Attribute c
Item 7| 0.228 | 0.148 | 0.658
Item 8| 0.785 | 0.33 | 0.887
Item 9| 0.569 | 0.994 | 0.374
User A has a list of items that he/she likes. Same goes with User B... User N. The items in the profiles of different users might or might not be the same but the items describe the User's taste for that particular item.
Goal
What I want to do is, match a User with another User if they have a similar taste in picking items. I don't understand how to achieve this. Any help is appreciated!
AI: You can perform clustering of your customers based on a distance function.
Definition might look like this:
First, calculate euclidean distances between the first item of the first customer's basket and all of the items in the second customer's basket.
Then find out, what is the closest item from second customer's basket (minimum euclidean distance).
Perform the same operation for each item in first customer's basket.
Calculate mean of the minimum distances.
Do the same for the second customer.
Take maximum of means from the first and the second customer. |
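A small sketch of that distance, assuming each customer's basket is a 2-D numpy array of item attributes (the numbers are taken from the question):
import numpy as np
from scipy.spatial.distance import cdist

def basket_distance(basket_a, basket_b):
    """Symmetric 'taste' distance between two customers' baskets of items."""
    pairwise = cdist(basket_a, basket_b)   # all euclidean distances between items
    mean_a = pairwise.min(axis=1).mean()   # mean of minimum distances for customer A
    mean_b = pairwise.min(axis=0).mean()   # same, seen from customer B
    return max(mean_a, mean_b)             # take the maximum of the two means

user_a = np.array([[0.593, 0.7852, 0.484],
                   [0.180, 0.960, 0.050],
                   [0.423, 0.886, 0.156]])
user_b = np.array([[0.228, 0.148, 0.658],
                   [0.785, 0.330, 0.887],
                   [0.569, 0.994, 0.374]])
print(basket_distance(user_a, user_b))
This distance can then be fed into any clustering algorithm that accepts a precomputed distance matrix, for example hierarchical clustering or DBSCAN with metric='precomputed'.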
H: What is disadvantage of using CART in regression problems?
Why is CART hardly used for regression?
Is there any significant reason for its unpopularity in regression techniques?
AI: Regression and classification trees are nearly identical in how they function. I'm not aware of any specific downside to regression trees specifically. However, trees in general do have the downside of being susceptible to overfitting:
The main downside of decision trees is that even with the use of pre-pruning, they tend to overfit and provide poor generalization performance. Therefore, in most applications, ...ensemble methods... are usually used in place of a single decision tree.
Muller & Guido, Introduction to Machine Learning with Python, 2017. |
H: Curve Fitting By Predict polynomial degree with ANNs
My Science Fair project is on curve fitting with neural nets as an alternative to polynomials. I know this is a widely done procedure in data science but I wanted to do my own research on it. Currently the biggest problem is training time, and to solve that I had an idea. What if I trained an ANN to predict the degree of the polynomial that best fits the data, and then used standard polynomial regression to do the curve fit? This would make the training only happen once, so I could train for as long as I wanted, and thus it would increase speed by a lot. What are your thoughts on this? Has anyone done this before? Is it plausible?
AI: Interesting idea :)
The things I would consider:
Problem definition
Should it be classification or regression? If you define it as classification, you should select the number of classes, hence the maximum and minimum polynomial degree (e.g. 1,2,3,4,5,6,7,8,9 and 10 could be polynomial degrees which you recognize).
If you select regression, you don't need to define the maximum number of classes, but the output layers could be trickier to define.
Dataset
How do you define your samples? I guess that it's easy to generate samples (select some polynomial with degree $n$, sample points from it and then all those points define one sample for class (or regression value) n).
It would be interesting to create the image out of the sample (plot of points) and then maybe use a convolutional network to learn the classification (or regression).
Also, consider how big the dataset should be. It's hard to say; you will have to test it and see.
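A minimal sketch of that sample-generation step (fixed x-grid, random coefficients; all numbers are arbitrary choices):
import numpy as np

def make_sample(max_degree=10, n_points=50, rng=np.random):
    """Sample a random polynomial and return (points, degree)."""
    degree = rng.randint(1, max_degree + 1)
    coeffs = rng.uniform(-1, 1, size=degree + 1)
    x = np.linspace(-1, 1, n_points)
    y = np.polyval(coeffs, x)
    return np.stack([x, y], axis=1), degree

data, labels = zip(*[make_sample() for _ in range(10000)])
data, labels = np.array(data), np.array(labels)
print(data.shape, labels.shape)  # (10000, 50, 2) (10000,)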
Losses
For classification probably cross entropy and for regression probably mean squared error.
Let me know the results. |
H: Dealing with correlated features when calculating permutation importance
I have implemented the permutation importance calculation as found here in my attempt to identify features that contribute little to my model's (Gradient Boosted Tree model) predictive power.
The issue I have encountered is that some of my features are highly correlated, potentially masking the true importance of features when being evaluated by permutation importance. Typically the solution to this would be to perform something like Recursive Feature Elimination instead. Unfortunately, I cannot do this as the cost of retraining the model is prohibitive. The model takes ~3 hours to train with a feature set of 39 features.
My question is whether it is possible to use permutation importance while dealing with correlated features? My initial thought was to invert the process and shuffle all features other than the one I want to investigate although I do not know if this would have the same level of explanation.
AI: Using mutual information, namely the correlation of each feature with the output, is not very helpful here. The reason is that the correlation coefficient can only tell you whether two variables are linearly dependent. Outside of the Gaussian case, as far as I know, it cannot even tell you whether the inputs are independent: if the coefficient is zero and you don't know the distribution, you cannot conclude that the variables are independent, only that they are not linearly dependent. In real-world applications it rarely happens that features have purely linear dependencies with the output. Consequently, approaches like this are of limited use, because two features may jointly have a strong relationship with the output even though neither of them separates it well on its own.
For cases where you have a lot of features, you can choose other feature selection and extraction methods. I guess a wrapper method may suit your case better. In a wrapper method, you don't rely on a fixed criterion; you search through subsets of features using heuristic-based methods in order to find the feature subset with the best cross-validation accuracy. |
H: For a multi-class classification problem, how to transform the target variable to a form that is usable by sklearn algorithms?
I recently tried to create a model for predicting what class a sample belongs to out of 160 possible classes. These classes of the target variable are just simple strings describing workouts like "Push-ups", "Sit-ups" etc. I used sklearn's method LabelEncoder to transform the target variable, and it assigned every class a number between 0 and 159. I know that the reason why we are supposed to do one-hot encoding for categorical features is to avoid telling an algorithm something nonsensical like "blue is greater than red". My question then becomes, doesn't LabelEncoder tell the sklearn algorithm that "push-ups" is less than or greater than "sit-ups" if both of them are assigned distinct numbers between 0-159? Why or why not? This feels like a very stupid question, but I had real trouble finding an answer anywhere online.
AI: Welcome to the site! Do a google search for "one hot encoding" and all will become clear.
In a nutshell, don't think of it as numbers "assigned" of 0-159. It's more about a "slot" for each category and using one hot encoding to let the algorithm know which slot a particular record belongs to. |
H: Training on data with inherently non-applicable data cells
I am training a model on a chemical sample dataset to find outliers and perform imputation where it makes sense.
Chemical Dataset
Contains thousands of rows of chemical mixtures with many columns of properties. Example properties: bromine content, density.
Inherently non-applicable data
The chemicals can be gas, liquid or solid but some properties are only applicable to samples of a certain state. An example could be viscosity in liquids, bond type (ionic, molecular, covalent) in solids or density in gas.
So far...
...all research has pointed towards methods of fixing "missing values" via column means, data imputation or something similar. There doesn't seem to be any sense in imputing the freezing point of a gas. A gas mixture does not have a freezing point. I am still in the process of data preparation and unsure how to proceed.
I am working in python and missing data is stored as NaN values. Perhaps there are some models that can deal with such NaN-values.
Side-Note:
The majority of the dataset is comprised of distillation curve datapoints (sequential data describing what percentage of a chemical sample evaporates as temperature is increased). This data is present for all samples.
Follow-up 1: Is there a model that will give me NaN values for the freezing point when I give it something that resembles a gas?
Follow-up 2: Can this be compared to image object detection where the object is partially obscured? or part of the image is corrupt?
AI: Welcome to the site!
The usual approach to missing values is to handle them manually. There are a few algorithms which can do this automatically, such as LightGBM and XGBoost, but in most cases it's better for model performance to decide on how you should indicate that a value is missing in your data.
For example in with a Pandas dataframe in Python, I may decide to replace all NA/NaN values in a particular column which should contain positive integers with -99:
dataframe[column] = dataframe[column].fillna(-99) |
H: How mean and deviation come out with MNIST dataset?
I am a novice at the data science, and I notice some repository state the mean value and deviation in MNIST dataset are 0.1307 and 0.3081.
I cannot imagine how these two numbers come from. Based on my understanding, the MNIST dataset has 60,000 pics and each of them has (28 * 28 = 784) features. How do I convert this feature vectors to get the mean and deviation?
Especially, this should classify by the label, right? For example, the number 0 should have its mean and deviation. For number 1 should also have its mean and deviation.
AI: mean : It is the mean of all pixel values in the dataset ( 60000 × 28 × 28 ). This mean is calculated over the whole dataset.
deviation : It is the standard deviation of all pixel values. The dataset is treated as a population rather than a sample.
What are the uses of these values?
Mean and standard deviation are commonly used to standardize the data in this case the images. Standardized data has mean close to 0 and standard deviation close to 1. You can read more here.
Why to standardize the data?
Standardization transforms your data in such a manner that it has unit variance.
According to Wikipedia,
In statistics, the standard score is the signed number of standard deviations by which the value of an observation or data point is above the mean value of what is being observed or measured |
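You can reproduce the two numbers directly from the training images once they are scaled to [0, 1]:
from keras.datasets import mnist

(x_train, _), _ = mnist.load_data()
x = x_train.astype('float32') / 255.0   # scale pixel values to [0, 1]

mean, std = x.mean(), x.std()
print(mean, std)                        # roughly 0.1307 and 0.3081

# Standardize: zero mean, unit variance
x_standardized = (x - mean) / std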
H: Dataframe size is null?
I have a function in which I want to represent my data like this:
Input is a column:[ 123 125 11 122 ...]
Output: 123 125
125 11
11 122
The function in python is like that:
def create_dataset(dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back), 0]
dataX.append(a)
dataY.append(dataset[i + look_back, 0])
return np.array(dataX), np.array(dataY)
My dataset is represented as 1-grams of integers in a csv file, so I was obliged to use the transpose () of the dataframe to use this function. The problem is that I find the dataframe size null, and the train and test data (after spliting) also empty.
the code is:
dataframe = pd.read_csv("train2.csv")
print(dataframe.shape) # (0,150)
print("--------\n")
#dataframe= np.asarray(dataframe)
dataframe = dataframe.transpose()
print(dataframe.shape) #(150,0)
print(dataframe.size) # 0
dataset = dataframe.values
dataset = dataset.astype('float32')
# split into train and test sets
train_size = int(len(dataset) * 0.67)
print(len(dataset))
#print (train_size)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
print(len(train), len(test))
print(dataset[0:train_size,:]) #[]
print(train) # []
print(test) # []
# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
Any solution please?
AI: Your first dataframe.shape is showing you that you have no rows. Pandas is probably treating your data as just the column names, so try pd.read_csv('train2.csv', header=None). |
H: Off-policy n-step learning with DQN
I'm reviewing the Rainbow paper and I'm not sure I understand how they can use DQN with multi-step learning, without doing any correction to account for off-policiness.
So. I understand how you can use 1-step update off-policy: the reward for a single transition doesn’t depend on the current policy, so you can reuse this experience in the future.
I also understand the point of n-step updates: it's a trade-off between having high bias with 1-step updates (as you get only limited information from a single transition) and having high variance with n-step updates (as in that case both the policy and the environment can be stochastic, so you end up adding n random variables together).
What I do not get is how you can use n-step return off-policy, which is what the Rainbow DQN seems to do. With n-step returns you are considering trajectories, and you can’t assume that these trajectories would have been taken if the agent was using the current policy.
If I understood correctly, in the case of policy gradient this is dealt with using importance sampling, which will reduce the impact of policies which are further away from the current one.
But I don’t see the equivalent of this for multi-step DQN?
AI: In the Rainbow approach, theoretical correctness of the off-policy return values is completely ignored, and it just uses:
$$G_{t:t+n} = \gamma^n\text{max}_{a'}[Q(S_{t+n},a')] + \sum_{k=0}^{n-1} \gamma^{k}R_{t+k+1}$$
It still works and improves results over using single-step returns.
They rely on a few things for this to work:
$n$ is not large, compared to amount of variability in the environment that occurs over those steps. So the return is usually correct even if there was an exploratory action taken. Many game events (and also real-world physics) have some fuzziness - e.g. as long as in Pong you hit the ball with a part of the paddle, it will be a reasonable result, and only some (literal) edge cases will cause a major difference in rewards.
Exploration is low ($\epsilon$ set to $0.01$ or $0.001$)
Rewards are sparse, so the n-step return very often only depends on state progression. As long at $S_{t+n}$ is as likely to be reached starting from $S_t$ under both the exploratory and target policies, then the simple returns are approximately correct.
The policy changes slowly, maybe taking a million frames to converge to optimal, which is far larger than the replay memory.
From a theoretical basis in Q learning, it would be better to use "up to n-step" returns that truncate at the time step when the action taken diverges from the current learned policy. However, that is more complex, as you have to assess the learned policy at each of the n steps in order to decide when to truncate - I suspect given the researchers know how $Q(\lambda)$ works, that they may have tried this and just found the extra correctness was not worth the extra complexity and CPU cost. In other environments it may be worth the effort to implement this. |
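In code, the uncorrected n-step target used above is just a discounted sum of the stored rewards plus a bootstrapped tail; a small sketch (the rewards and Q-value below are placeholders):
def n_step_target(rewards, bootstrap_q, gamma=0.99):
    """Uncorrected n-step return: discounted rewards plus a discounted max-Q tail.

    rewards:      the n rewards R_{t+1} ... R_{t+n} taken from the replay memory
    bootstrap_q:  max_a' Q(S_{t+n}, a') from the target network
    """
    n = len(rewards)
    discounted = sum(gamma ** k * r for k, r in enumerate(rewards))
    return discounted + gamma ** n * bootstrap_q

# Example with made-up numbers: 3-step return with a sparse reward at the last step
print(n_step_target([0.0, 0.0, 1.0], bootstrap_q=0.5, gamma=0.99))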
H: Wich GPU-card to make deep learning with a MacBook Pro?
I'm a owner for a MacBook Pro. I want to accelerate my deep learning models, because with the CPU, it's very slow !
Could you tell me which GPU card I can buy for it ?
Thanks.
AI: I also have a Macbook and I do data science on it. I looked at this same issue about 1 year ago and it's very possible that the landscape has changed since then.
But, at that time, I came to the conclusion that my best and most cost-effective options were to either (1) get a notebook with a GPU (of which there are many now due to the popularity of e-sports) or (2) get some space on AWS and their GPU servers. I really couldn't find an "enhanced Mac" option that would be as quick or work with as many tools as the above 2 options can offer you.
FYI - on the surface, the AWS option may seem expensive. But if you write your code locally and test on smaller samples and then only move that code to AWS when you need to do the deep learning phases (and only keep the AWS server running for those minutes) it can be a surprisingly affordable way to go. |
H: Create an API from EDA or ML outcome?
I have the following sample dataset (the actual dataset is over 10 million records)
Passenger Trip
0 Mark London
1 Mike Girona
2 Michael Paris
3 Max Sydney
4 Martin Amsterdam
5 Martin Barcelona
6 Martin Barcelona
7 Mark London
8 Mark Paris
9 Martin New york
10 Max Sydney
11 Max Paris
12 Max Sydney
...
...
...
And I wanted to get the destination frequently travelled by a passenger !
I was playing around in Jupyter and got the expected data with the following approach
series_px = df_px_dest.groupby('Passenger')['Trip'].apply(lambda x: x.value_counts().head(1))
df_px = series_px.to_frame()
df_px.index = df_px.index.set_names(['UName', 'DEST'])
df_px.reset_index(inplace=True)
def getNextPossibleDestByUser(pxname,df=df_px):
return df.query('UName==@pxname')['DEST'].to_string(index=False)
While the response is fine. I have few doubts now
1) What's the best way to expose the method (in this case getNextPossibleDestByUser) as an API (pass the customer name as input and get the destination as output)?
2) Whenever the API is called, does that mean all 10 million records get processed each time? Is there any way to optimise that?
3) Rather than the dataframe (pandas) query approach, can I consider some ML models or utility functions from, say, scikit-learn to solve the same problem?
AI: What's the best way to expose the method (say in this case getNextPossibleDestByUser) as a API (pass customer name as input and get the destination as output) ?
Use flask. Pretty easy to build an API.
Whenever the API is being called , does that mean all the 10 million records gets processed each time ? Are there anyway to optimise that ?
It depends on the exact application, but for your case you can take the passenger as an input to your API call (in the POST body or the query arguments) and only iterate over the subset of data for that passenger.
Rather than dataframe (pandas) query approach can I consider some ml models or utility functions from say scikit to solve the same problem ?
If the problem is -- as you illustrated -- simple frequency calculations then better to stick to pandas, in my opinion. If your problem is solved by the simple approach it would not make sense to go for a complicated approach. |
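Coming back to the first question, a minimal sketch of such an endpoint with Flask; the pre-computation mirrors your own groupby code, while the file name, route and parameter names are just examples:
from flask import Flask, jsonify, request
import pandas as pd

app = Flask(__name__)

# Done once at startup, not on every request
df_px_dest = pd.read_csv('trips.csv')   # hypothetical file with Passenger/Trip columns
df_px = (df_px_dest.groupby('Passenger')['Trip']
         .agg(lambda s: s.value_counts().idxmax())
         .reset_index(name='DEST'))

@app.route('/next-destination')
def next_destination():
    name = request.args.get('passenger')
    match = df_px.loc[df_px['Passenger'] == name, 'DEST']
    if match.empty:
        return jsonify({'error': 'unknown passenger'}), 404
    return jsonify({'passenger': name, 'destination': match.iloc[0]})

if __name__ == '__main__':
    app.run()
Because the frequency table is built once at startup, each call only does a cheap lookup instead of re-scanning the 10 million rows; for very large or frequently changing data you could precompute that table offline and store it in a database or key-value store instead.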
H: How should continuous outputs be in convolutional neural network?
I have a dataset of labeled face images (the label is the age, a continuous value) and I want to construct a Convolutional Neural Network model to predict the age of a person.
I have the following questions.
How the label i.e. the age must be encoded ?
Which activation function to use ?
AI: Convolutional networks can be used for regression tasks too. The difference lies in the output layer of the dense part of the network. In classification tasks you use sigmoid or softmax depending on your task; in regression tasks you can simply use a linear activation function as the non-linearity of the last layer. Consequently, if you have the ages as the outputs of your network and they are on a reasonable scale, you can simply use them as they are right now. |
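A hedged sketch of such an output head in Keras (the convolutional part is abbreviated, and the input size and layer widths are placeholders):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='linear'))   # single continuous output: the age

model.compile(optimizer='adam', loss='mse', metrics=['mae'])
# model.fit(images, ages, ...) where `ages` is simply the raw age in years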
H: Meaning of equation for CNN probabilities
So the first equation above refers to a CNN (rather a committee of CNNs) for image classification. I am unable to understand exactly what the author is trying to do in the first equation.
So far, I think they're calculating the index of max likehlihood probabilities for all committees, then adding up the probabilities for all committees for those indices, and finally taking the maximum index.
But this seems overly convoluted and I'm not really sure. Could someone clarify this?
AI: I agree the equation might not be clear, but you can decompose it into something like the following:
First, the term $\operatorname{argmax}_k p^i (y=k|\mathbf{x})$ tells you which label has the higher probability from model $i$ given the input object $\mathbf{x}$.
Then, this "iterates" over all models in the Committee, computing for each the label that is most likely.
Finally finding which label is the most common one (that $\operatorname{argmax}_j$) at the end.
Also, it helps to think about it in pseudo-code
def get_label(CNNs, x):
labels = [0, 0, 0, 0, 0] # each position refers to that last $j$
for pCNNi in CNNs:
predictions = pCNNi(x)
label_i = predictions.index(max(predictions)) # this is the argmax_k
labels[label_i] += 1
return labels.index(max(labels)) # this is the argmax_j |
H: Train a model for unstructured data
I am new to Machine Learning. I want to develop Curriculum Vitae recommender system. I want to determine how similar 2 CVs are, and given a random CV, it suggest which cluster of CVs it belongs to?
This is what I've already done, following a blog post:
I have a folder containing lot of CVs or resume text documents in plain text format (.txt).
I have done pre-processing on this data, like tokenization, stop words removal, stemming.
I extracted the Candidate's name, email-id, contact number, education and experience.
I am confused with how to train the data and how do I create a model for that. More specifically, I have the following questions:
Now how to create a model on text data?
Which algorithm I should apply on this data?
Please anyone answer. Your help will be appreciated.
Thanks.
AI: I have worked on a similar project with JDs (job descriptions). We basically created a word2vec model for the words in the JDs, and the results were good as we had lots of JDs.
Basically, what word2vec does is convert a word into a vector representation that captures its context.
You can check the documentation here: https://radimrehurek.com/gensim/
You can extract skills or other fields from the CVs and compute a semantic similarity based on the w2v model, possibly using a custom formula for comparing similarity. Other things to compare could be education, experience, similar projects, etc. |
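A small sketch of that idea with gensim (the tokenized CVs are toy placeholders; note that the size parameter is called vector_size in gensim 4.x and size in older releases):
from gensim.models import Word2Vec

# Each CV is a list of pre-processed tokens, toy examples only
cvs = [
    ['python', 'machine', 'learning', 'sql'],
    ['java', 'spring', 'sql', 'backend'],
    ['python', 'deep', 'learning', 'tensorflow'],
]

model = Word2Vec(cvs, vector_size=50, window=3, min_count=1)

# Cosine similarity between two CVs, treating each as a bag of words
print(model.wv.n_similarity(cvs[0], cvs[2]))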
H: sklearn preprocessing MinMaxScaler
I am reading, a machinelearningmastery.com post about improving neural network performance, and I am attempting to normalize my own data set with the tips mentioned in the post using sklearn preprocessing MinMaxScaler.
In my code I am reading a CSV file directly into pandas.
#read CSV file
df = pd.read_csv('C:\\Users\\desktop\\EC\\data.csv', index_col='Date', parse_dates=True)
The snip below is what the data looks like with column names:
From the post, I normalize the data with this code below. The process appears to just return a numpy array, but I use Pandas during the machine learning fit process.
from sklearn.preprocessing import MinMaxScaler
# create scaler
scaler = MinMaxScaler()
# fit and transform in one step
df2 = scaler.fit_transform(df)
df2 = pd.DataFrame(df2)
What's happening, is my column names are stripped away and I use column names a lot in dropping & selecting. For example, I fit a lot of models like this process below to differentiate target and input variables.
#Test random Forest
import numpy as np
from sklearn import preprocessing, cross_validation, neighbors
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.externals import joblib
import math
rmses = []
for i in range(2):
X = np.array(df2.drop(['Demand'],1))
y = np.array(df2['Demand'])
offset = int(X.shape[0] * 0.7)
X_train, y_train = X[:offset], y[:offset]
X_test, y_test = X[offset:], y[offset:]
clf = RandomForestRegressor(n_estimators=120, min_samples_split=20)
clf.fit(X_train, y_train)
mse = mean_squared_error(y_test, clf.predict(X_test))
rmse = math.sqrt(mse)
print("rmse: %.4f" % rmse)
rmses.append(rmse)
print(sum(rmses)/len(rmses))
joblib.dump(clf, 'rfrModel.pkl')
But maybe this isn't a big deal... Sorry, not a lot of wisdom here, any tips help. Is there another method to this madness so that I don't need to depend on column names? If df2 = scaler.fit_transform(df) leaves the columns in place but just removes the names, I could just use a column number to .drop and differentiate target and input variables. Demand is the name of my target variable, and I could just call the second column..., right??
AI: You can just use the previous column names for creating a new dataframe
# Use the previous column names from df and assign it to df2
df2 = pd.DataFrame(df2, columns=df.columns)
H: Reading meters with tensorflow
I'm new to ML world and been reading about ML and TensorFlow.
My goal is to read the following example in real time with Android phone:
So I tried firebase ML OCR and it works really good, it reads the complete value but it does not read the decimal point and also reads a lot of the surrounding text.
So my idea is that I should first detect black and red bounding boxes and then detect individual numbers inside
is this the right way to go? How would I accomplish this?
Also how do you use two kinds of a model, one to extract a part of the image (black and red bounding areas) and then pass them to OCR model?
What about last digit which can always be in between two numbers (example: 1 and 2)?
AI: Two options :
Use pre-built libraries for OCR + bounding box detection (e.g. https://www.pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/ for bounding box detection and then OpenCV / Tesseract for OCR)
Train a deep learning model for text detection in natural scenes. Examples: https://github.com/qjadud1994/CRNN-Keras and https://github.com/mvoelk/ssd_detectors
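If you go with the first option, a minimal sketch of the OCR step (assuming you have already detected the pixel coordinates of a black or red box; the file name and the (x, y, w, h) values below are hypothetical) could look like this:
import cv2
import pytesseract
image = cv2.imread("meter.jpg")            # your phone photo
x, y, w, h = 100, 200, 300, 80             # assumed bounding box of the digit area
roi = image[y:y + h, x:x + w]              # crop the region of interest
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
# restrict Tesseract to digits so the surrounding text is ignored
text = pytesseract.image_to_string(gray, config="--psm 7 -c tessedit_char_whitelist=0123456789")
print(text)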
H: Python list formatting
I have a list, which looks like this:
params
['h\x00i\x00', '\x00t\x00h\x00e\x00r\x00e\x00']
Now, all I want is to merge these two elements into the string "hi there", but when I use
print ' '.join(params)
I get
h i t h e r e
How do I strip out those hex characters to make the string "hi there"?
AI: You have the null-equivalent hexadecimal character '\x00' in between each character. To remove those hexadecimal characters, first join the list with a space, then split it at '\x00', and finally join the pieces back together with an empty string. The print statement simply looks as follows.
>>> print(''.join(' '.join(params).split('\x00')))
H: Can BERT do the next-word-predict task?
As BERT is bidirectional (uses bi-directional transformer), is it possible to use it for the next-word-predict task? If yes, what needs to be tweaked?
AI: BERT can't be used for next word prediction, at least not with the current state of the research on masked language modeling.
BERT is trained on a masked language modeling task and therefore you cannot "predict the next word". You can only mask a word and ask BERT to predict it given the rest of the sentence (both to the left and to the right of the masked word).
This way, with BERT you can't sample text like if it were a normal autoregressive language model. However, BERT can be seen as a Markov Random Field Language Model and be used for text generation as such. See article BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model for details. The authors released source code and a Google Colab notebook.
Update: the authors of the MRF article discovered their analysis was flawed and BERT is not a MRF, see this
Update 2: despite not being meant for next word prediction, there have been attempts at using BERT that way. Here you can find a project that does next word prediction with BERT, XLNet, RoBERTa, etc.
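As a rough illustration of what BERT is trained for instead (masked-word prediction rather than next-word prediction), here is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the model name and sentence are just examples:
from transformers import pipeline
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
# BERT predicts the masked token using context from both sides
for prediction in fill_mask("The man went to the [MASK] to buy milk."):
    print(prediction["token_str"], prediction["score"])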
H: Percentage stacked bar chart - error = “TypeError: only size-1 arrays can be converted to Python scalars” - why x & y values are not recognized?
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
raw_data = {'X':[x1,x2,x3], 'Y':[y1,y2,y3]}
df = pd.DataFrame(raw_data)
r = np.arange(len(df))
totals = [n+m for n,m in zip(df['X'], df['Y'])]
X = [n / m * 100 for n,m in zip(df['X'], totals)]
Y = [n / m * 100 for n,m in zip(df['Y'], totals)]
barWidth = 0.8
names = ("R 1","R 2","R 3")
plt.bar(r, X, color='r', edgecolor='black', width=barWidth,label="X")
plt.bar(r, Y, bottom=X, color='c', edgecolor='black', width=barWidth,label="Y")
plt.title("TEST - Stacked bar graph (%)",y=1.05,fontsize=20)
plt.xticks(r, names)
plt.xlabel("Vrsta obrazovanja",fontsize=15)
plt.ylabel("Precentage %",fontsize=15)
plt.legend(loc="lower center")
plt.show()
AI: I think the way you are setting the DataFrame might be the problem. It seems that x1, x2, x3 and y1, y2, y3 are all lists of 1 element, so your code will use these lists assuming them as numbers and trigger the exception you see.
Try,
raw_data = {'X':[x1[0],x2[0],x3[0]], 'Y':[y1[0],y2[0],y3[0]]}
H: dataframe.columns.difference() use
I am trying to find the working of dataframe.columns.difference() but couldn't find a satisfactory explanation about it. Can anyone explain the working of this method in detail?
AI: The function dataframe.columns.difference() gives you the complement of the values that you provide as its argument. It can be used to create a new dataframe from an existing dataframe while excluding some columns. Let us look through an example:
In [2]: import pandas as pd
In [3]: import numpy as np
In [4]: df = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'))
In [5]: df
Out[5]:
A B C D
0 -1.023134 -0.130241 -0.675639 -0.985182
1 0.270465 -1.099458 -1.114871 3.203371
2 -0.340572 0.913594 -0.387428 0.867702
3 -0.487784 0.465429 -1.344002 1.216967
4 1.433862 -0.172795 -1.656147 0.061359
In [6]: df_new = df[df.columns.difference(['B', 'D'])]
In [7]: df_new
Out[7]:
A C
0 -1.023134 -0.675639
1 0.270465 -1.114871
2 -0.340572 -0.387428
3 -0.487784 -1.344002
4 1.433862 -1.656147
The function returns as output a new list of columns from the existing columns excluding the ones given as arguments. You can also check it:
In [8]: df.columns.difference(['B', 'D'])
Out[8]: Index(['A', 'C'], dtype='object')
I suggest you take a look at the official documentation here.
H: model.score and r2_score giving different values for a regression model
I am building a linear regression model and a decision tree model using sklearn. I want to compare the performance of these two models; I have calculated the r2_score for both models, and I have also calculated model.score for both. I am confused about which is the better metric to compare the performance of these models. Also, what does model.score give?
from sklearn.metrics import r2_score
score_DT = r2_score(y_pred_DT,y_test)
dt_score = regressorDT.score(X_test,y_test)
AI: Both functions are the same r2 metric and should produce the same results.
Your usage of the r2_score function is wrong. The first argument should be the ground truth values and not the predicted values, so in your case it should be:
score_DT = r2_score(y_test, y_pred_DT)
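A quick sketch (with made-up synthetic data) showing that, once the arguments are in the right order, the two calls return the same number:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + rng.randn(100) * 0.1
model = LinearRegression().fit(X, y)
print(model.score(X, y))                # R^2 computed by the estimator
print(r2_score(y, model.predict(X)))    # same value: ground truth first, predictions second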
H: How to Write Multiple Data Frames in an Excel Sheet
I have multiple data frames with same column names. I want to write them together to an excel sheet stacked vertically on top of each other. And between each, there will be a text occupying a row. This is what I have in mind.
I tried the pandas.ExcelWriter() method, but each dataframe overwrites the previous frame in the sheet, instead of appending.
Note that, I still need multiple sheets for different dataframe, but also multiple dataframes on each sheet. Is it possible? Or any other python library which can dynamically generate the excel sheet from pandas dataframes?
AI: An example to write in the same sheet:
import pandas as pd
data1 = """
class precision recall
<18 0.0125 12
18-24 0.0250 16
25-34 0.00350 4
"""
data2 = """
class precision recall
<18 0 0
18-24 0.25 6
25-34 0.35 5
"""
#create 2 df for sample
df1 = pd.read_csv(pd.compat.StringIO(data1), sep='\s+')
df1.name = "Dataframe1"
df2 = pd.read_csv(pd.compat.StringIO(data2), sep='\s+')
df2.name = "Dataframe2"
print(df1);print(df2)
writer = pd.ExcelWriter('e:\\test.xlsx',engine='xlsxwriter')
workbook=writer.book
worksheet=workbook.add_worksheet('Result')
writer.sheets['Result'] = worksheet
worksheet.write_string(0, 0, df1.name)
df1.to_excel(writer,sheet_name='Result',startrow=1 , startcol=0)
worksheet.write_string(df1.shape[0] + 4, 0, df2.name)
df2.to_excel(writer,sheet_name='Result',startrow=df1.shape[0] + 5, startcol=0)
writer.save()
output:
if you want to write in different sheets:
import pandas as pd
# Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter('e:\\test.xlsx', engine='xlsxwriter')
# Write each dataframe to a different worksheet. you could write different string like above if you want
df1.to_excel(writer, sheet_name='Sheet1')
df2.to_excel(writer, sheet_name='Sheet2')
# Close the Pandas Excel writer and output the Excel file.
writer.save()
H: Issues with training SSD on own dataset
I'm new to ML and trying to train a SSD300, with some Keras-Code github.com/pierluigiferrari/ssd_keras I found on github.
For training I'm using an own (very small) dataset of objects that are not in any of the bigger known datasets.
My dataset has the following characteristics:
objects have very different sizes in images (from around 20x40 to 250x200)
there is only one class labeld in the images
images are in RGB
all images are sized to fit in 300x300
dataset contains 319 images for training and validation
Now my problem is, that the loss-function for validation doesn't converge, but training loss does. See this image showing the loss functions over the epochs. I trained 120 epochs with 1000 steps each:
When I try to use the trained weights, coming out of this training, I get zero detections in image. It seems like the model didn't learn anything.
I'm using pretrained weights for the underlaying VGG-16 network provided in the github-repository. It is trained on imagenet dataset.
My parameters are as follows:
img_height = 300 # Height of the model input images
img_width = 300 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # The per-channel mean of the images in the dataset. Do not change this value if you're using any of the pre-trained weights.
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so we'll have the model reverse the color channel order of the input images.
n_classes = 1 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales_pascal = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets
scales_coco = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets
scales = scales_pascal
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are divided as in the original implementation
normalize_coords = True
Can someone please help me by pointing out these questions:
How to interpret the loss function? Is it because of the small dataset or because of wrong parameters or something else?
Do I have to train my own classifier (VGG-16) or can I use the pretrained one even when my objects don't appear in the pretrained dataset?
Do I have to train for a longer time? Means for more epochs?
As additional information:
I already trained a faster R-CNN model with the exact same dataset. It worked quiet good and gives me good results.
I would appreciate any help you can provide!
AI: What you are experiencing is called overfitting and it happens because of your very small dataset.
All the model cares about is performance on the training dataset, so given the opportunity, it will simply attempt to memorize it. This is what happens in your case: you feed a model which contains over 130 million parameters with fewer than 319 images.
So regarding your questions:
The loss function shows a clear case of overfitting.
In general, it is okay to use a pre-trained model, especially when you only have a small dataset, but in your case the dataset is too small for any deep-learning model. When I say a small dataset, I mean around 10k images, not several hundred.
You should not train for a longer time; once the validation loss stops improving, it is a clear sign to stop. There is even a training technique named "early stopping" which is designed to stop training once the validation loss stops dropping.
You have to understand that currently, your dataset of around 300 images is too small to be useful for deep learning. So if you still want to use it for object detection, you need to revert to more classic computer-vision techniques like using HOG or SIFT features, or even manually engineering the features for your special case.
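If you do keep training a Keras model, a sketch of the early-stopping idea mentioned above (the patience value is just an example) looks like this:
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor="val_loss",        # watch the validation loss
                           patience=10,               # stop after 10 epochs without improvement
                           restore_best_weights=True)
# model.fit(..., validation_data=(X_val, y_val), callbacks=[early_stop])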
H: In PCA, is every principal component an eigenvector?
In PCA, we convert predictors into principal components for dimensionality reduction. My assumption is that every principal component is an eigenvector, with its eigenvalue given by the sum of squared lengths of the orthogonal projections of the data points (vectors) onto it.
Is my assumption correct? If not, please clarify.
AI: Your assumption is correct. However, some additional details need to be specified e.g. eigenvector of what ? What characterizes the principal component eigenvector i.e. what criteria must it satisfy ?
As explained in the authoritative reference book by Bishop, the Principal Component Analysis (PCA) provides a set of mutually orthogonal directions, called principal components, which maximize the variance of the projections of the data points, when it is projected successively on respective directions as axes. Here, the underlying assumption is the data points are considered elements of a Vector Space equipped with standard inner product.
Illustrating using first principal component, which say is specified by unit vector, $\mathbf{u_1}$ for a dataset, $\{ \mathbf{x_n} \}, \text{n} = 1, \dots, N$ then
Projection of $ \mathbf{x_n} $ = $ \mathbf{u_1}^T \mathbf{x_n} $
Mean of Projections, $ \mathbb{E}_{\mathbf{u_1}}(\mathbf{x}) = \mathbf{u_1}^T \bar{\mathbf{x}} $ where $ \bar{\mathbf{x}} = \frac{1}{N} \sum^N_{n=1} \mathbf{x_n} $
Variance of Projections, $ Var_{\mathbf{u_1}}(\mathbf{x}) = \frac{1}{N} \sum^N_{n=1} ( \mathbf{u_1}^T \mathbf{x_n} - \mathbf{u_1}^T \bar{\mathbf{x}} )^2 = \mathbf{u_1}^T \mathbf{S} \mathbf{u_1} $ where $\mathbf{S}$ is the Data Covariance matrix given by $ \mathbf{S} = \frac{1}{N} \sum^N_{n=1} ( \mathbf{x_n} - \mathbf{\bar{x}} ) ( \mathbf{x_n} - \mathbf{\bar{x}} )^T $
Maximizing $ Var(\mathbf{x}) $ subject to unit vector constraint, $ \mathbf{u_1}^T \mathbf{u_1} = 1 $, applying Lagrange multiplier we get optimization problem
$
\text{max}_{ \mathbf{u_1}^T \mathbf{u_1} = 1 } \mathbf{u_1}^T \mathbf{S} \mathbf{u_1}
= \text{max}_{ \mathbf{u_1} } \mathbf{u_1}^T \mathbf{S} \mathbf{u_1} + \lambda_1 ( 1 - \mathbf{u_1}^T \mathbf{u_1} )
$
At maxima, we get
$ \mathbf{S} \mathbf{u_1} = \lambda_1 \mathbf{u_1}
\implies \lambda_1 = \mathbf{u_1}^T \mathbf{S} \mathbf{u_1} = Var_{\mathbf{u_1}}(\mathbf{x})
$
Hence, principal directions, e.g. $\mathbf{u_1}$, are eigenvectors of the Covariance matrix $\mathbf{S}$, with corresponding eigenvalues, e.g. $\lambda_1$, equal to the Variance of the projections along the corresponding eigenvector.
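A small numerical sketch of this result (synthetic data, not from the original answer): the eigenvalues of the covariance matrix match the explained variances reported by sklearn's PCA.
import numpy as np
from sklearn.decomposition import PCA
rng = np.random.RandomState(0)
X = rng.randn(500, 3) @ np.array([[3, 0, 0], [1, 2, 0], [0, 0, 0.5]])
S = np.cov(X, rowvar=False)                  # data covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)         # eigendecomposition (ascending eigenvalues)
pca = PCA(n_components=3).fit(X)
print(np.round(eigvals[::-1], 3))            # largest eigenvalues first
print(np.round(pca.explained_variance_, 3))  # same values, up to numerical error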
H: What is the job of "RepeatVector" and "TimeDistributed"?
I read about them in Keras documentation and other websites, but I couldn't exactly understand what exactly they do and how should we use them in designing many-to-many or encoder-decoder LSTM networks?
I saw them used in the solution of this problem here.
model = Sequential()
model.add(LSTM(input_dim=1, output_dim=hidden_neurons, return_sequences=False))
model.add(RepeatVector(10))
model.add(LSTM(output_dim=hidden_neurons, return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.add(Activation('linear'))
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['accuracy'])
AI: tf.keras.layers.RepeatVector
According to the docs :
Repeats the input n times.
They have also provided an example :
model = Sequential()
model.add(Dense(32, input_dim=32))
# now: model.output_shape == (None, 32)
# note: `None` is the batch dimension
model.add(RepeatVector(3))
# now: model.output_shape == (None, 3, 32)
In the above example, the RepeatVector layer repeats the incoming inputs a specific number of time. The shape of the input in the above example was ( 32 , ). But the output shape of the RepeatVector was ( 3 , 32 ), since the inputs were repeated 3 times.
tf.keras.layers.TimeDistributed()
According to the docs :
This wrapper allows to apply a layer to every temporal slice of an input.
The input should be at least 3D, and the dimension of index one will be considered to be the temporal dimension.
You can refer to the example at their website.
TimeDistributed applies a specific layer, such as Dense, to every temporal slice of the input it receives. Suppose the input has shape ( 13 , 10 , 6 ), where 13 is the batch size. Now, I need to apply a Dense layer to each of the 10 temporal slices of shape ( 6, ). Then I would wrap the Dense layer in a TimeDistributed layer.
model.add( TimeDistributed( Dense( 12 ) , input_shape=( 10 , 6 ) ) )
The output shape of such a layer would be ( 13 , 10 , 12 ). Hence, the operation of the Dense layer was applied to each temporal slice as mentioned.
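A minimal sketch (with hypothetical layer sizes) that prints the output shapes of an encoder-decoder built from these two layers:
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(20, 1)),             # encoder output: (batch, 32)
    tf.keras.layers.RepeatVector(10),                          # (batch, 10, 32)
    tf.keras.layers.LSTM(32, return_sequences=True),           # decoder output: (batch, 10, 32)
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1))  # (batch, 10, 1)
])
model.summary()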
H: What are the output shape of LSTM with "return_sequences" equal to "True" or "False"?
What are the output shape of LSTM with Keras implementation when "return_sequences" equals to "True" or "False" ?
AI: Assuming your output state size is h: you will generate an output of shape (n, h) for n data points if return_sequences is False. On the other hand, you will get an (n, t, h)-dimensional array as output if return_sequences is set to True, where t is the number of timesteps in each data point.
H: Is a large number of epochs good or bad idea in CNN
In my CNN model, by using a large number of epochs, like 400 or more, the validation accuracy and sometimes the test accuracy get better, but I think this large number of epochs is not a good idea. Am I right or not? Why?
AI: If your model is still improving (according to the validation loss), then more epochs are better. You can confirm this by using a hold-out test set to compare model checkpoints e.g. at epoch 100, 200, 400, 500.
Normally the amount of improvement reduces with time ("diminishing returns"), so it is common to stop once the curves is pretty-much flat, for example using EarlyStopping callback.
Different model requires different times to trains, depending on their size/architecture, and the dateset. Some examples of large models being trained on the ImageNet dataset (~1,000,000 labelled images of ~1000 classes):
the original YOLO model trained in 160 epochs
the ResNet model can be trained in 35 epochs
fully-connected DenseNet model trained in 300 epochs
The number of epochs you require will depend on the size of your model and the variation in your dataset.
The size of your model can be a rough proxy for the complexity that it is able to express (or learn). So a huge model can represent produce more nuanced models for datasets with higher diversity in the data, however would probably take longer to train i.e. more epochs.
Whilst training, I would recommend plotting the training and validation loss and keeping an eye on how they progress over epochs and also in relation to one another. You should of course expect both values to decrease, but you need to stop training once the lines start diverging - meaning that you are over-fitting to your specific dataset.
That is likely to happen if you train a large CNN for many epochs, and the graph could look something like this:
Image source
H: Is there a way to drop duplicated rows based on an unhashable column?
I have a pandas dataframe df with one column z filled with set values.
I want to drop duplicated rows, where 2 rows are considered duplicated versions of one another when they have the same column z values (which are sets).
import pandas as pd
lnks = [ ( 'a' , 'b' , { 'a' , 'b' } ) , ( 'b' , 'c' , { 'b' , 'c' } ) , ( 'b' , 'a' , { 'a' , 'b' } ) ]
lbls = [ 'x' , 'y' , 'z' ]
df = pd.DataFrame.from_records( lnks , columns = lbls )
Trying to drop duplicated rows based on column z values :
df.drop_duplicates( subset = 'z' , keep='first')
And i get the error message :
TypeError: unhashable type: 'set'
Is there a way to drop duplicated rows based on an unhashable-typed column?
AI: It is true that a set is not hashable (it cannot be used as a key in a hashmap a.k.a a dictionary). So what you can do is to just convert the column to a type that is hashable - I would go for a tuple.
I made a new column that is just the "z" column you had, converted to tuples. Then you can use the same method you tried to, on the new column:
In [1] : import pandas as pd
...:
...: lnks = [ ( 'a' , 'b' , { 'a' , 'b' } ) , ( 'b' , 'c' , { 'b' , 'c' } )
...: , ( 'b' , 'a' , { 'a' , 'b' } ) ]
...: lbls = [ 'x' , 'y' , 'z' ]
...: df = pd.DataFrame.from_records( lnks , columns = lbls)
In [2]: df["z_tuple"] = df.z.apply(lambda x: tuple(x))
In [3]: df.drop_duplicates(subset="z_tuple", keep="first")
Out[3]:
x y z z_tuple
0 a b {b, a} (b, a)
1 b c {c, b} (c, b)
The apply method lets you apply a function to each item in a column, and then returns the values as a new column (a Pandas Series object). This lets you assign it back to the original DataFrame as a new column, as I did.
You can also remove the "z_tuple" column then if you no longer want it:
In [4] : df.drop("z_tuple", axis=1, inplace=True)
In [5] : df
Out[5] :
x y z
0 a b {b, a}
1 b c {c, b}
2 b a {b, a}
H: Scikit learn kmeans with custom definition of inertia?
I've coded a small clustering algorithm for time signals using kmeans, which works ok (gives acceptable results).
However, kmeans uses the sum of squared differences. I would like to be able to input instead my own measure of difference, but there doesn't seem to be a way provided by the library to do that.
What would be the easiest way to achieve this? Any other python library which may provide me some way to input instead my own function to define the distance? Or I guess I could instead re-implement the algorithm myself, but I'd rather keep the sci-kit one (since they provide functionalities I want to use such as parallel processing).
AI: K-means cannot optimize arbitrary measures.
The mean optimizes squared errors. It does not optimize, e.g., Euclidean distances, Manhattan distances, etc. It won't crash, but the solution will not be optimal (not even locally) because the centers are not well placed.
So it makes little sense to add support for another "inertia" inside k-means (nor other distances), as it can't optimize that.
If you want to optimize other distances, there is for example the PAM algorithm, and k-medians (for Manhattan).
There is nothing wrong with computing such a quality measure afterwards though, but then it belongs in the evaluation package, not in the KMeans class.
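If you want a scikit-learn-style estimator that accepts other distances, a sketch using KMedoids (a PAM-style algorithm) from the separate scikit-learn-extra package could look like this; the metric, cluster count and data below are just placeholders:
import numpy as np
from sklearn_extra.cluster import KMedoids
X = np.random.rand(200, 10)                       # stand-in for your time signals
# any metric accepted by sklearn's pairwise_distances can be used here
km = KMedoids(n_clusters=5, metric="manhattan", random_state=0).fit(X)
print(km.labels_[:10])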
H: SVR is giving same prediction for all features
I'm creating a basic application to predict the 'Closing' value of a stock for day n+1, given features of stock n using Python and Scikit-learn
A sample row in my dataframe looks like this (2000 rows)
Open Close High Low Volume
0 537.40 537.10 541.55 530.47 52877.98
Similar to this video, where he uses 'Dates' and 'Open Price'. In this example, Dates are the features and Open price is the target.
Now in my example, I don't have a 'Dates' value in my dataset, but instead want to use Open, High, Low, Volume data as the features because I thought that would make it more accurate
I was defining my features and targets as so
features = df.loc[:,df.columns != 'Closing']
targets = df.loc[:,df.columns == 'Closing']
Which would return a df looking like this
features:
Open High Low Vol from
29 670.02 685.11 661.09 92227.36
targets:
Close
29 674.57
However I realised that the data needs to be in a numpy array, so I now get my features and targets like this
features = df.loc[:,df.columns != 'Closing'].values
targets = df.loc[:,df.columns == 'Closing'].values
So now my features look like this
[6.70020000e+02 6.85110000e+02 6.61090000e+02 9.22273600e+04
6.23944806e+07]
[7.78102000e+03 8.10087000e+03 7.67541000e+03 6.86188500e+04
5.41391322e+08]
and my targets look like this
[ 674.57]
[ 8042.64]
I then split up my data using
X_training, X_testing, y_training, y_testing = train_test_split(features, targets, test_size=0.8)
I tried to follow the Scikit-Learn documentation, which resulted in the following
svr_rbf = svm.SVR(kernel='rbf', C=100.0, gamma=0.0004, epsilon= 0.01 )
svr_rbf.fit(X_training, y_training)
predictions = svr_rbf.predict(X_testing)
print(predictions)
I assumed that this would predict the Y values given the testing features, which I could then plot against the actual y_testing values to see how similar they are. However, the prediction prints out the same value for each X_testing row.
[3763.84681818 3763.84681818 3763.84681818 3763.84681818 3763.84681818
I've tried changing the value of epsilon, c and gamma but that doesnt seem to change the fact that the predictions always gives the same value
I know that it might not be accurate to predict stock prices, but I must have done something wrong to get the same value when applying the model to various different test data
AI: There are a couple of parts that I think changing will help.
First, a general one for all model building: I would suggest you scale your data before putting it into the model.
It might not directly solve the problem of receiving the same predicted value in each step, but you might notice that your predictions lie somewhere in the range of your input values - as you are using unscaled volume, that is making things difficult for the model. It essentially has to work on two different scales at the same time, which it cannot do very well.
Have a look at the StandardScaler in sklean for a way how to do that.
Next a few suggestions of things to change, specifically because you are working with stock prices:
I would normally predict the value of the stock market tomorrow, and not the closing price of the same day whose open/high/low/volume you are using. For me, predicting the same day only makes sense if you have high-frequency (intraday) data.
Given this, you would need to shift your y value by one step. There is a method on Pandas DataFrames to help with that, but as you don't have a date column and you only need to shift by one timestep anyway, you can just do this:
features = df.iloc[:-1].loc[:, df.columns != 'Closing'].values # leave out the last step
targets = df.iloc[1:].loc[:, df.columns == 'Closing'].values # start one step later
You could then even then predict the opening price of the following day, or keep closing data in the features data, as that would not introduce temporal bias.
Something that would require more setup, would be to look at shuffling your data. Again, because you want to use historical values to predict future ones, you need to keep the relevant hsitory together. Have a look at my other answer to this question and the diagram, which explains more about this idea.
EDIT
You should also scale y_train and y_test, so that the model knows to predict within that range. Do this using the same StandardScaler instance, as not to introduce bias. Have a look at this short tutorial. Your predictions will then be within the same range (e.g. [-1, +1]). You can compute errors on that range too. If you really want, you can then scale your predictions back to the original range so they look more realistic, but that isn't really necessary to validate the model. You can simply plot the predictions against ground truth in the scaled space.
Check out this thread, which explains a few reasons as to why you should use the same instance of StandardScaler on the test data.
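A minimal sketch of that scaling step (fit on the training split only, reuse the same scalers on the test split); the variable names follow the question's code:
from sklearn.preprocessing import StandardScaler
x_scaler = StandardScaler()
y_scaler = StandardScaler()
X_training_scaled = x_scaler.fit_transform(X_training)   # fit only on training data
X_testing_scaled = x_scaler.transform(X_testing)         # reuse the same statistics
y_training_scaled = y_scaler.fit_transform(y_training.reshape(-1, 1)).ravel()
y_testing_scaled = y_scaler.transform(y_testing.reshape(-1, 1)).ravel()
# after predicting, scale back to the original units if needed:
# y_pred = y_scaler.inverse_transform(svr_rbf.predict(X_testing_scaled).reshape(-1, 1))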
H: confused about parameter updates and forward/backward pass according to batches and epochs in CNN?
I am working on a CNN model with the code written in TensorFlow. I did some googling about how parameters such as weights and biases are updated when the model is optimized and the loss is computed, and two things confuse me:
1- After the output layer, does the data go to the loss, the loss get computed, and then the model begin optimization, or is it the reverse of that?
2- Are the parameters updated after each mini-batch fed to the network (i.e. is the forward and backward pass done for every batch), or are they only updated when one epoch is completed? Why do some tutorials say that each epoch is a forward/backward pass?
Can anyone clarify this please? A reference would be even better for me.
AI: I understand your confusion, and the real cause of all this mismatch between different tutorials is because there is many equivalent ways to train a neural network when dealing with batches, epochs. However, I think it is best to stick with the most common terminology, i.e. that used by the deep learning libraries.
Backpropagation
For question 1, you are correct. We feed the data through the inputs, the data goes through a forward pass and then we obtain an output. With the output we can calculate a loss. Then we will use backpropagation to attribute some fault to each model parameter for the resulting error (loss). Then we will use gradient descent to update the model parameters accordingly. You can see how this works here.
Epochs and batches
For question 2. First let's define some terms. One epoch ends when all the training data available has been consumed. The second epoch goes through all the data again. In a simple neural network with not much data, you will pass all the training instances through the network successively and get the loss for each output. Then we will get an average of these losses to estimate the total loss for all instances. This results in one backpropagation per epoch.
However, most of the time it is not possible to fit all the data into memory so we must use batches, this means we will only feed-forward some training instances at a time. Then we will calculate the loss resulting from these instances and tune the parameters using backpropagation. Say we have 1000 training instances, then we can use a batch size of 100, we will thus do back-propagation 10 times per epoch.
Pros and cons
The advantage of using batches is that you can use larger datasets to train your model; however, the smaller the batch size, the less accurate the loss function estimate.
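To make the epoch/batch bookkeeping concrete, here is a small self-contained sketch (plain NumPy, a linear model with squared loss, arbitrary sizes) in which the parameters are updated once per mini-batch, so each epoch contains several forward/backward passes:
import numpy as np
rng = np.random.RandomState(0)
X = rng.randn(1000, 3)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.randn(1000)
w = np.zeros(3)
lr, batch_size, n_epochs = 0.1, 100, 5
for epoch in range(n_epochs):                      # one epoch = one full pass over the data
    for start in range(0, len(X), batch_size):     # 10 parameter updates per epoch here
        xb = X[start:start + batch_size]
        yb = y[start:start + batch_size]
        pred = xb @ w                              # forward pass
        grad = 2 * xb.T @ (pred - yb) / len(xb)    # backward pass (gradient of the MSE loss)
        w -= lr * grad                             # parameters are updated after every mini-batch
    print("epoch", epoch, "mse", np.mean((X @ w - y) ** 2))
print(w)  # ends up close to [2.0, -1.0, 0.5]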
H: The most used loss function in tensorflow for a binary classification?
I am working on a binary classification problem using CNN model, the model designed using tensorflow framework, in most GitHub projects that I saw, they use "softmax cross entropy with logits" v1 and v2 as loss function, my questions are:
1- Why this loss method is the most used one?
2- What is the type of this function because of the name I am confused about its type.
3- What is the equation for the function? in tensorflow website, the equation is not available.
AI: I think there is some confusion here. Softmax is usually an activation function which you will use in your output layer, and cross-entropy is the loss function that you will use.
Softmax
This activation function outputs the probability for each class, it is defined as
$\sigma(\bf{y_i}) = \frac{e^{\bf{y_i}}}{\sum_{j}{e^{\bf{y_j}}}}$.
For example, consider a problem with 2 class labels. If we have some outputs from our neural network like $y$=[3, 4], then we can get the probability of each output class by applying the softmax function to these outputs.
import numpy as np
x = [3, 4]
np.exp(x)/np.sum(np.exp(x))
[0.26894142, 0.73105858]
You will see that these are probabilities and do sum to 1.
Then we can get the class of the input by seeing which probability is higher. In this case the second class has a probability of 73.11%, so the predicted class label (using 0-based indexing) is $\hat{y} = 1$.
Cross-entropy
Cross-entropy is a loss function that is used for classification tasks. For binary classification it is defined as
$H(p, q) = -y\log(p) - (1-y)\log(1-p)$.
Let's assume that the real class of the above example is 0, $y=0$, while the model assigns probability $p = 0.73105858$ to class 1. Then we made a mistake, and you can see that
$H(p, q) = -0\log(0.73105858) - (1-0)\log(1-0.73105858) = 1.313$.
That is the loss that is used for backpropagation.
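To tie this back to the original question, "softmax cross entropy with logits" simply combines the two steps above: it takes the raw logits, applies softmax, then computes the cross-entropy. A rough NumPy sketch of that combination (not TensorFlow's implementation), with a made-up logit vector and one-hot label:
import numpy as np
def softmax_cross_entropy_with_logits(labels, logits):
    # shift by the max for numerical stability, then apply softmax
    shifted = logits - np.max(logits)
    probs = np.exp(shifted) / np.sum(np.exp(shifted))
    # cross-entropy against the one-hot label vector
    return -np.sum(labels * np.log(probs))
logits = np.array([3.0, 4.0])        # raw network outputs (not probabilities)
labels = np.array([1.0, 0.0])        # true class is class 0
print(softmax_cross_entropy_with_logits(labels, logits))   # about 1.313, as computed above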
H: Is this approach for Active Learning correct?
Let's say that we have 10,000 unlabeled documents, and we want to use pool-based sampling with batch size of 5. And we will use least confidence (LC) for querying strategy.
Is this outline of active learning reasonable:
1) Randomly select 30 documents, and get them labelled:=> labelled: 30, unlabelled: 9970.
2) Train the model on 30 labelled data
3) Use the model to make predictions for 9970 unlabelled documents
4) Use LC to select 5 out of all predicted documents that the model is least certain about
5) Get labels for the 5 documents:=> labelled: 35, unlabelled: 9965
6) Retrain the model on labelled data
7) Keep doing steps 2 - 6 until, the labeling resource expires or other stopping criteria is met.
Is this correct? I am mostly concerned about step 3) Should we retrain on all the data on every iteration? It seems like it could get computationally slow, but on the other hand if we don't evaluate every document how would we know which one is the most helpful to get labelled?
AI: Your flow is correct. The model is retrained on the new labeled data. Otherwise, the next candidates for labeling would be selected from the same region as the previous candidates, i.e. the one on which the model is least certain. By training on the new labeled data, the model will move on to new regions about which it is least certain. Here is the diagram for the pool-based method from page 5 of this survey on active learning:
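For step 4, a minimal sketch of least-confidence selection given a fitted classifier with predict_proba (model, X_unlabelled and the batch size are placeholders):
import numpy as np
def least_confidence_query(model, X_unlabelled, batch_size=5):
    probs = model.predict_proba(X_unlabelled)        # shape: (n_unlabelled, n_classes)
    confidence = probs.max(axis=1)                   # probability of the most likely class
    return np.argsort(confidence)[:batch_size]       # indices the model is least sure about
# selected = least_confidence_query(clf, X_pool, batch_size=5)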
H: Is it possible to have differential weights as per input importance for a Neural Network
I have a Computed Tomography data set where central slices are more important as the scan angle approaches 90 degrees. The information in the initial and last slices(0 degrees and 180 degrees) may assigned lesser weight as they contain lateral information. Could anyone please suggest me a way to assign weights as per importance of the input? Shall I focus on any specific hyper-parameter while training?
AI: Yes! It is totally possible; generally the weights are never the same, they differ for different inputs. This is the first reason why they are called weights, as they associate a weight with every input.
Preparing neural network parameters (weights and bias) using TensorFlow Variables in Python (assuming you have 3 inputs and want to assign them weights of .3, .1 and .8 respectively):
weights = tensorflow.Variable(initial_value=[[.3],[.1],[.8]],dtype=tensorflow.float32)
bias = tensorflow.Variable(initial_value=[[1]], dtype=tensorflow.float32)
Happy to answer.
H: What does 'Linear regularities among words' mean?
Context: In the paper "Efficient Estimation of Word Representations in Vector Space" by T. Mikolov et al., the authors make use of the phrase: 'Linear regularities among words'.
What does that mean in the context of the paper, or in a general context related to NLP?
Quoting the paragraph from the paper:
Somewhat surprisingly, it was found that similarity of word
representations goes beyond simple syntactic regularities. Using a
word offset technique where simple algebraic operations are performed
on the word vectors, it was shown for example that vector(”King”) -
vector(”Man”) + vector(”Woman”) results in a vector that is closest to
the vector representation of the word Queen [20].
In this paper, we try to maximize accuracy of these vector operations
by developing new model architectures that preserve the linear
regularities among words. We design a new comprehensive test set for
measuring both syntactic and semantic regularities1 , and show that
many such regularities can be learned with high accuracy. Moreover, we
discuss how training time and accuracy depends on the dimensionality
of the word vectors and on the amount of the training data.
AI: By linear regularities among words, he meant that "Vectorized form of words should follow linear additive properties!"
V("King") - V("Man") + V("Woman") ~ V("Queen")
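You can check this property directly with pre-trained embeddings; here is a sketch using gensim's downloader with one of its bundled pre-trained vector sets (any pre-trained word embedding with this property would do):
import gensim.downloader as api
vectors = api.load("glove-wiki-gigaword-100")   # pre-trained word vectors
# King - Man + Woman is closest to Queen
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))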
H: Overfitting - how to detect it and reduce it?
I have a side project where I am doing credit scoring using R (sample size around 16k for train data and 4k for test data, and also another two 20k data batches for out-of-time validation) with unbalanced classes (3.5% of BAD type of clients).
I'm trying to make various models to have enough of them to make ensembles, but for this purpose, let's focus on one of them, particularly XGBoost.
I was reading a lot on how to tackle overfitting, but I haven't found any good source on how to do it in practice, step-by-step. As for now, my two best models have:
0.8 AUC on training data, around 0.799 AUC on holdout set and around 0.7355 and 0.7195 AUC on out-of-time batches.
0.764 AUC on training, 0.7785 AUC on the holdout test set and 0,7285 AUC on both out-of-time batches.
What I am worried about is that drop on the out-of-time batches, since I think a 0.05-0.08 drop is huge and it might be a sign that the models I built really are overfitting and don't generalize well. To clarify, while I was tweaking the models, I didn't know about those out-of-time scores.
Could anyone share the experience what is best practice to detect overfitting? And does those two models overfit, or I am just panicking, and this drop in performance is normal?
My current pipeline in general looks like this:
Data cleanup
Feature importance using xgboost package to take best 300 features from all 2400 available.
Removing highly-correlated features (0.75 cutoff) - 123 features left
Train/test split - 0.8 vs 0.2 plus two out-of-time batches
Model selection using nested CV(5-fold CV in outer) with hyperparameter tuning in inner loop(5-fold CV in inner) - all done in MLR package.
From 5 models I get from nested CV, I'm picking the best performing one (that has the closest AUC in both train and holdout test set)
And then, when I was happy with the model, I performed a test on the out-of-time batches.
How I could improve my pipeline, in a way that I could detect overfitting? Is there any list of steps that would roughly cover what it could be done to reduce it?
Also, in a highly unbalanced case, does choosing a good validation set mean that I only need to take care of the proper distribution of the target variable? Should I take care of something else as well?
AI: When choosing the validation set and the test set, it is important that it reflects the actual "production environment" of your problem. Since you have "out of time" validation sets, I assume you have some time structure in your data that you need to address when making predictions. If you are developing your model, not taking this time aspect into consideration, you are likely to get issues when this model is put into a realistic environment because the models are fit to an unrealistic scenario where time does not matter.
From the scores you show here, the train and test scores are very similar, while it drops off in the "out of time"-set. This might be an indication that you are not putting enough emphasis on the time dimension while developing your model.
Another point: There is also fully possible to overfit to your validation set, when as in your case, you have a lot of variables. Since some combination of these variables might randomly fit your train and validation set well, it might not be the case for your test set. This effect is also magnified by my earlier point, where the data generating process is not really stationary across time.
Here is a nice article about how to choose a good validation set:
https://www.fast.ai/2017/11/13/validation-sets/
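To make the time-aware validation concrete, here is a sketch of forward-chaining splits with scikit-learn's TimeSeriesSplit (X, y and model are placeholders and the rows are assumed to be in chronological order; most ML frameworks offer an equivalent resampling scheme):
from sklearn.model_selection import TimeSeriesSplit
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, valid_idx in tscv.split(X):
    # each validation fold lies strictly after its training fold in time
    model.fit(X[train_idx], y[train_idx])
    print(model.score(X[valid_idx], y[valid_idx]))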
H: Two different GPUs for Keras (Python)?
One question guys, someone knows if it should be ok to get one more GPU of type Nvidia Geforce GTX 1070 (gaming version), given that now I have GTX 1070 Titanium? They don't have another Titanium card available here, so I have to get a different one, but closely similar, and I wonder if for using Keras (with TensorFlow backend), will it work fine? They are not exactly the same cards, but similar enough maybe. I want 2 GPUs for Keras.
AI: I tried training with a 1080 and a 2080 Ti and I found that I didn't get any speed-up from multi-GPU training because the 1080 acted as a bottleneck. So while I think this would work fine, you would be better served by running a different model on each GPU rather than trying to train a single model across GPUs.
H: What Kind of Network should I be considering?
was hoping someone can guide me in the right direction with this problem.
I have a dataset that contains the history of an experiment that lasts 40 minutes, taking a measurement every minute. I have historical data for about 600 runs, so I have a total of 24000 rows in my dataset.
I would like to set up a model that I can train on the multiple runs where all 40 minutes are available; each minute has the same feature count of 11.
My plan then was to feed the model anywhere from 2-35 rows of live data, and I would like it to predict one of the values at the 40 minute mark.
Any help would be greatly appreciated.
AI: You want to use a Recurrent Neural Network. More specifically, look into the LSTM variant, understand its use cases compared to a "regular" RNN, and go from there. RNNs are good for capturing sequential data patterns, and this is essentially what you want to do.
H: Python & Pandas : TypeError: to_sql() got an unexpected keyword argument 'flavor'
I want to store JSON Data into MySQL Database using Python.
I used dataframe of pandas. I found that to_sql() can do this job easily.
Python Code:
jdata=json.loads(json_data)
df=pandas.DataFrame(jdata)
df.to_sql(con=con, name='crashTable', if_exists='replace', flavor='mysql')
Output:
TypeError: to_sql() got an unexpected keyword argument 'flavor'
I got this error message. How to resolve this error?
AI: Based on the documentation for 0.22 and 0.24.1, flavor does not exist in the argument list of the to_sql method. You're probably running the 0.24.1 version, which does not accept the flavor argument.
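A sketch of the same call without the flavor argument, using an SQLAlchemy engine for the MySQL connection (the connection string is a placeholder you need to fill in):
import json
import pandas as pd
from sqlalchemy import create_engine
engine = create_engine("mysql+pymysql://user:password@localhost/dbname")  # placeholder credentials
jdata = json.loads(json_data)          # json_data as in the question
df = pd.DataFrame(jdata)
df.to_sql(name="crashTable", con=engine, if_exists="replace", index=False)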
H: Mutate with dynamic column names dplyr
Hi I have this dataset (It has many more columns)
media brand radio tv cinema
<chr> <dbl> <dbl> <dbl> <dbl>
radio 0 0 0 0
tv 0 0 0 0
cinema 0 0 0 0
tv 0 0 0 0
radio 0 0 0 0
tv 0 0 0 0
I want to obtain the following(Assign a 1 to each column based on the value of media column):
media brand radio tv cinema
<chr> <dbl> <dbl> <dbl> <dbl>
radio 0 1 0 0
tv 0 0 1 0
cinema 0 0 0 1
tv 0 0 1 0
radio 0 1 0 0
tv 0 0 1 0
Do you have any idea on how to do it?As I have many more columns, I'm trying to find a solution using dynamic variable assignment.
Thanks
AI: What you are trying to do is called one hot encoding or dummy encoding. Even though this may be possible to accomplish in dplyr (together with tidyr), I would recommend using the function one_hot() from the library mltools. For an explanation, see the following code snippets:
# set up sample data
df <- data.frame(
id = 1:4,
media = factor(c("radio", NA, "tv", "tv"), levels=c("radio", "tv", "cinema"))
)
One Hot Encoding with tidyr:
The following code snippet shows how you would accomplish one hot encoding with dplyr and tidyr (copied from this comment):
library(dplyr)
library(tidyr)
tib <- as_tibble(df)
tib %>%
mutate(i = 1) %>%
spread(media, i, fill = 0)
However, this code will give you the following table:
# A tibble: 4 x 4
id radio tv `<NA>`
<int> <dbl> <dbl> <dbl>
1 1 0 0
2 0 0 1
3 0 1 0
4 0 1 0
Note that there is no cinema column present, even though we defined a respective level in our sample data-frame. Also, the spread()-function created an own column for values with NA's, treating them as a separate category.
One Hot Encoding with mltools (recommended):
library(data.table)
library(mltools)
dt <- data.table(df)
one_hot(dt)
This code will give you the following table:
id media_radio media_tv media_cinema
1 1 0 0
2 NA NA NA
3 0 1 0
4 0 1 0
As you can see, the column cinema is now present even though our sample data-frame did not contain any observations of it. Also, there isn't an NA-column present anymore. The NA from the second row is instead visible in all columns (which makes way more sense, since you don't know what category this observation IS, so you cannot know what category it IS NOT).
In addition, this code is not only easier to understand, it also runs about two to three times as fast.
H: Change rate of cross validation data, after training
Say we have N labeled samples, and we need to take some part for cross validation (we will skip the test part for this case). We chose 0.6 of the data for training and 0.4 for validation.
After training a neural network with early stopping, we found 8 epochs to be the optimal point to stop, and obtained good enough results.
Q. In case we have a very limited number N of training samples, may we use all samples to train a new model, and just stop its training after the discovered number of epochs? That is, without separating the data into train and cross-validation parts and without testing it at all (or, alternatively, changing the split ratio to 0.9 train, 0.1 cross validation).
Maybe there are known techniques for such cases? Thanks.
AI: Due to the stochastic nature of NN training, the best epoch may vary upon each restart. In other words, at epoch 8, any of the cases (the best, under-fitted, over-fitted) may happen. However, if you train multiple times and the best model is consistently found at (or around) the 8th epoch, it is safe to say the 8th epoch gives the best model, away from under- or over-fitting, so the validation set can definitely be added to the training set to improve the performance.
A more solid approach would be to plot the effect of training size (10% up to 90%) on the best epoch and the validation error. This means producing two plots (training size, the best epoch) and (training size, validation error), where each point is an average over multiple restarts. This way you can better find (1) the best epoch, and (2) the degree to which the added validation set will going to boost the performance on unseen test data, i.e. extrapolating the validation error for training size 100%.
It is possible that performance goes flat after, for example, 70% of the training set, implying that adding the validation set has no gain.
H: Why is having high p-value makes dependent variable significant in backward elimination?
In backward elimination, I heard that the model is fitted by repeatedly removing the predictor with the highest p-value (a.k.a. the insignificant independent variable) each time, like below:
Select a significance level to stay in the model(e.g. SL = 0.05)
Fit the full model with all possible predictors
Consider the predictor with the highest P-Value(P > SL)
Remove the predictor
Fit model without this variable (Repeat step 3-5 until P <= SL)
But the part which I don't get is why having a higher p-value makes the corresponding independent variable insignificant. Doesn't having a high p-value mean it's closer to the null hypothesis, so that the variable is more significant?
AI: Actually, the Null Hypothesis is that the predictor is not significant. Taken from the book Introduction to Statistical Learning:
Null Hypothesis: There is no relationship between X and Y
Alternative Hypothesis: There is some relationship between X and Y
If we have a high p-value, we have no evidence against the null hypothesis (we fail to reject it), so the estimated coefficient is consistent with a distribution centered around zero and the predictor can be discarded. I hope this answers your question; please leave a comment if you need more help.
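A sketch of the procedure from the question using statsmodels (assuming X is a DataFrame of predictors, typically with a constant column added via sm.add_constant, and y is the target):
import statsmodels.api as sm
def backward_elimination(X, y, significance_level=0.05):
    X = X.copy()
    while True:
        model = sm.OLS(y, X).fit()
        worst_p = model.pvalues.max()                 # highest p-value among the predictors
        if worst_p <= significance_level:
            return model                              # all remaining predictors are significant
        X = X.drop(columns=[model.pvalues.idxmax()])  # remove the least significant predictor
# final_model = backward_elimination(sm.add_constant(X), y)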
H: When to use mean vs median
I'm new to data science and stats, so this might seems like a beginner question.
I'm working on a dataset where I have a user's Twitter follower gain per day. I want to measure the average growth he had over a period of time, which I did by finding the mean of the growth. But someone is suggesting that I use the median for this.
Can anyone explain in which use cases we should use the mean and when to use the median?
AI: The arithmetic mean is denoted as $\bar{x}$
$$\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i $$
where each $x_i$ represents a unique observation. The arithmetic mean measures the average value for a given set of numbers.
In contrast to this, the median is the value which falls directly in the middle of your dataset. The median is especially useful when you are dealing with a wide range or when there is an outlier (a very high or low number compared to the rest) which would skew the mean.
For example, salaries are usually discussed using medians. This is due to the large disparity between the majority of people and a very few people with a lot of money (with the few people with a lot of money being the outliers). Thus, looking at the 50th-percentile individual will give a more representative value than the mean in this circumstance.
Alternatively, grades are usually described using the mean (average) because most students should be near the average and few will be far below or far above.
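A tiny illustration with made-up daily follower gains, where one viral day (the outlier) pulls the mean up while the median stays representative:
import numpy as np
daily_gain = np.array([10, 12, 9, 11, 13, 10, 500])   # 500 is an outlier (e.g. a viral day)
print(np.mean(daily_gain))     # 80.71..., dominated by the outlier
print(np.median(daily_gain))   # 11.0, closer to a "typical" day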
H: How to utilize user feedback due to miss-classification when correct class label is unknown?
Suppose we are developing an app which is supposed to predict a dog's breed by it's picture. We trained a classifier (in my case an MLP) using some dataset and shipped the app to users. Now suppose some user comes and takes a picture of a friend's dog and the app tells her there is 90% chance that this dog is an X. The user knows that this is not true, but she doesn't know what is the dog's breed (if she knew, why would she use our app in the first place?). So we get a feedback which tells us "this is a picture of a dog which is not an X". This sample could be a sample of some other class or a new class or not a dog at all.
I'm looking for a way to use this feedback, to improve the precision of my MLP in class X without touching other classes.
AI: This can be accomplished by a modification to multi-class cross-entropy.
We are faced with two types of supervision. First type is "data $i$ belongs to class $k$" denoted by $y_{ik}=1$, and second type is "data $i$ does not belong to class $k$" denoted by $\bar{y}_{ik}=1$. For example, for 3 classes, $y_i=(1, 0, 0)$ denotes that point $i$ belongs to class $1$, and $\bar{y}_{i}=(0, 0, 1)$ denotes that point $i$ does not belong to class $3$. Let $y'_{ik} \in [0, 1]$ denote the model prediction. The original cross-entropy for $K$ classes is:
$$H_y(y')=-\sum_{i}\sum_{k=1}^{K}y_{ik}log(y'_{ik})$$.
This objective assigns loss $-log(y'_{ik})$ to $y_{ik} = 1$ to encourage the model to output $y'_{ik} \rightarrow 1$ leading to $-log(y'_{ik})\rightarrow 0$.
On the other hand, for the second supervision $\bar{y}_{ik}=1$, we want to encourage the model to output $y'_{ik} \rightarrow 0$. Therefore, loss $-log(1- y'_{ik})$ can be used to have $-log(1- y'_{ik})\rightarrow 0$.
Accordingly, second supervision can be combined with first one as follows:
$$H_{(y,\bar{y})}(y')=-\sum_{i}\sum_{k=1}^{K}y_{ik}log(y'_{ik})+\bar{y}_{ik}log(1-y'_{ik})$$
Note that supervision "data $i$ does not belong to classes $1$ and $2$" is also supported. For example, $\bar{y}_{i}=(1, 1, 0,...)$ activates both $-log(1 - y'_{i1})$ and $-log(1 - y'_{i2})$ to encourage the model to output lower probabilities for classes $1$ and $2$, i.e. $y'_{i1} \rightarrow 0$, and $y'_{i2} \rightarrow 0$.
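A rough NumPy sketch of this combined loss for a single example (the probabilities and supervision vectors are made up), where y marks "belongs to class k" and y_bar marks "does not belong to class k":
import numpy as np
def feedback_cross_entropy(y, y_bar, probs, eps=1e-12):
    # standard term for positive labels + extra term for "not this class" feedback
    return -np.sum(y * np.log(probs + eps) + y_bar * np.log(1 - probs + eps))
probs = np.array([0.7, 0.2, 0.1])   # model output for one image
y     = np.array([0, 0, 0])         # no positive label known
y_bar = np.array([1, 0, 0])         # user feedback: "this is not class 0"
print(feedback_cross_entropy(y, y_bar, probs))   # -log(1 - 0.7), about 1.204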
H: Metrics for Evaluating Performance of Logistic Regression
I built a Logistic Regression model and I would like to evaluate the performance of the model. I would like to understand its evaluation metrics.
What do the metrics Sensitivity, Specificity, False Positives Rate, Precision, Recall, and Accuracy tell us about this model?
AI: Since logistic regression is not the same as linear regression, looking only at accuracy can be misleading.
**Confusion Matrix** is one way to evaluate the performance of your model. Checking the values of True Positives and False Negatives (Type II errors) is really important.
**ROC Curve**: the Receiver Operating Characteristic (ROC) curve summarizes the model's performance by evaluating the trade-offs between the true positive rate (sensitivity) and the false positive rate (1 - specificity).
The below link will give you more information:
https://www.analyticsvidhya.com/blog/2015/11/beginners-guide-on-logistic-regression-in-r/
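A sketch of how to compute these metrics with scikit-learn, assuming you already have a fitted logistic regression model and a test split (the variable names are placeholders):
from sklearn.metrics import confusion_matrix, classification_report, roc_auc_score
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]      # probability of the positive class
print(confusion_matrix(y_test, y_pred))         # TP/FP/FN/TN counts
print(classification_report(y_test, y_pred))    # precision, recall (sensitivity), F1 per class
print(roc_auc_score(y_test, y_prob))            # area under the ROC curve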
H: Why does not log transformation make the data normalized?
I have some skewed features, as shown in the following figure. I am trying to apply a log transformation to the feature called vBMD(mgHA/cm3). I run the following code:
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
df=pd.read_csv("Data.csv")
scaler=MinMaxScaler(feature_range=(0,1))
df['vBMD (mgHA/cm3)']=scaler.fit_transform(np.array(df['vBMD (mgHA/cm3)']).reshape(-1,1))
df['vBMD (mgHA/cm3)']=np.log(np.array(df['vBMD (mgHA/cm3)']))
After the transformation, I have got the following result.
While I was expecting the feature to become normally distributed, its skewness increased. Thus, what am I doing wrong?
AI: Log transformation leads to a normal distribution only for log-normal distributions. Not all distributions are log-normal, meaning they will not become normal after the log transformation.
EDIT:
As you have commented, if you are trying to convert an arbitrary distribution to normal, methods like QuantileTransformer can be used. But note that these transformations make a distribution normal by changing (destroying) some information from the original data.
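A sketch of that approach with scikit-learn (the column name follows the question's code):
from sklearn.preprocessing import QuantileTransformer
qt = QuantileTransformer(output_distribution='normal', random_state=0)
df['vBMD (mgHA/cm3)'] = qt.fit_transform(df[['vBMD (mgHA/cm3)']]).ravel()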
H: What are the approaches to aggregate categorical variables?
I am working on a clickstream dataset. I have come up with the following example dataset to explain my problem:
ClickTimeStamp | SessionID | ART_weekOfYear | PagenameClicked | TimeSpentPerSession | CustID | ContractID | ... | TARGET |
2017-01-04 16:48:00 | 1 | 1 | P1 | 1 | abc | xyz | | 1 |
2017-01-04 16:48:53 | 1 | 1 | P2 | 1 | abc | xyz | | 1 |
2017-01-11 10:09:57 | 2 | 2 | P1 | 2 | abc | xyz | | 1 |
2017-01-11 10:11:24 | 2 | 2 | P4 | 2 | abc | xyz | | 1 |
2017-01-27 13:22:39 | 3 | 4 | P1 | 2 | abc | mnp | | 0 |
2017-01-27 13:24:01 | 3 | 4 | P7 | 2 | abc | mnp | | 0 |
The above dataset has one click on each row, and TARGET is (let's say) whether the contract was retained (1) or not (0). Keep in mind the TARGET is at the contract level.
Now, I aggregate the above dataset as per my need (i.e. aggregate on contractID) and the training set looks like this:
CustID | ContractID | ... | SessionID_conct | ART_weekOfYear_conct | PagenameClicked | TimeSpentPerSession_avg | TARGET |
abc | xyz | | "1-2" |"1-2" | "P1->P2->P1->P4" | 1.5 | 1 |
abc | mnp | | "3" |"4" | "P1->P7" | 2 | 0 |
PROBLEM:
For numerical features I just took average (as for TimeSpentPerSession_avg) but for categorical features it is not straightforward. In reality, my categorical features have very high cardinality, such as "PagenameClicked". So I cannot simply convert my categorical features to dummy variables and then aggregate them as numerical features.
I would like to know possible solutions to treat categorical features in such a way that dimensionality doesn't explode and I can also aggregate new representation on the contractID.
I have tried Entity Embeddings and read this paper for details. I transformed each categorical feature to an embedding representation of 16 dimension. However, now I am stuck at aggregating these embedding vectors for each contractID. Kindly let me know if anyone has worked in this direction or has a better solution.
Thanks a lot for reading this question. :)
AI: Your problem is essentially that you have high cardinality in your features, right? This will be relative to your problem, but you can look into mean encodings. Essentially, you will replace each category by the mean of the target variable; however, this is highly prone to overfitting and you should take care.
The following two videos will give an excellent explanation:
https://www.coursera.org/learn/competitive-data-science/lecture/b5Gxv/concept-of-mean-encoding
https://www.coursera.org/learn/competitive-data-science/lecture/LGYQ2/regularization
Another idea is to group similar sequences into one segment, for example, let's say we have:
A->B->C and B->C
If this makes sense in your application, this could be transformed into one variable only.
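A rough sketch of such a mean encoding in pandas, computed on training data only and using simple smoothing to reduce overfitting (train_df is a hypothetical training DataFrame; the column names follow the example dataset):
import pandas as pd
def mean_encode(train, col, target, smoothing=10):
    global_mean = train[target].mean()
    stats = train.groupby(col)[target].agg(['mean', 'count'])
    # shrink category means toward the global mean for rare categories
    encoding = (stats['mean'] * stats['count'] + global_mean * smoothing) / (stats['count'] + smoothing)
    return train[col].map(encoding), encoding
train_encoded, encoding = mean_encode(train_df, 'PagenameClicked', 'TARGET')
# apply the same mapping to validation/test data, filling unseen categories with the global mean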
H: How to print x-axes labels in pandas.Series.plot()?
I am trying to visualise my data to understand the data skewness. For that purpose, I use the below and get desired output -
df.groupby('owner_team').inc_subj.count().plot.bar(ylim=0)
Output -
My concern is that the x-axis labels are shown as numbers, which are exactly the values present. But my desire is to see the names (string values) corresponding to those numbers.
To give a little bit of background, initially they were string values which I converted to integer values using factorize():
df['owner_team'], mapp = df['owner_team'].factorize()
I am referencing this Pandas doc but couldn't find the exact parameter to set.
Tried labels but didn't help.
Any pointers please.
ps. Using Pandas v0.23.4 and Python v3.6
AI: Having a look at the Pandas plot method (on the DataFrame object), we can see that it returns a matplotlib Axes object.
Try something like this:
ax = df.groupby('owner_team').inc_subj.count().plot.bar(ylim=0)
ax.set_xticklabels(df.owner_team) # if they are still present as strings
If you removed that column, go back to your original processing and keep a copy of it somewhere then use that column above, instead.
Matplotlib will also generally be able to link to the current/latest plot (figure) that has been created. So when using the Pandas plot method, you would need to intercept that. You can then try using standard matplotlib methods (e.g. plt.xticks, plt.xlabel, and so on).
There might be a nice way using the pandas API directly, but I haven't come across it.
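Alternatively, since the group keys are the integer codes produced by factorize(), the mapp index from the question can map them straight back to the original strings; a sketch (assuming mapp is still in scope) would be:
counts = df.groupby('owner_team').inc_subj.count()
counts.index = mapp.take(counts.index)  # map integer codes back to the original names
ax = counts.plot.bar(ylim=0, rot=45)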
H: predict gives the same output value for every image (Keras)
I am trying to classify images and assign them label 1 or 0. (Skin cancer or not).
I am aware of the three main issues regarding having the same output for every input.
I did not split the set; I'm just trying to apply the CNN on the training set. I know it doesn't make sense, but it's just to verify how it's working. (Predicting on the unlabeled data gives exactly the same probability.)
I have verified the three main points:
1: Scaling the data (both image size and pixel intensity values)
2: Taking a low learning rate
3: I only tried with a small number of epochs (6 at most) because of the computation time; is it worth letting it run for a day just to see the results with more epochs?
Anyway, I can't understand how bad training could lead the network to give the same class probability every time.
I tried the on-batch options etc.; it doesn't change anything.
Accuracy is very low, as this kind of classification is not really suited for CNNs, but that shouldn't explain the weird result.
Here is my output :
Here are different parts of the program:
Model :
Resizing :
Thanks for help and sorry for the ugly screenshots.
AI: When all the predictions give exactly the same value, you know that your model is not learning, so something is wrong!
In your case the problem is having the last dense layer with the softmax AND the sigmoid activation.
model.add(keras.layers.Dense(1, activation=tf.nn.softmax))
model.add(keras.layers.Activation('sigmoid'))
This creates a conflict: the softmax outputs a 1 (since there is only one node), and the sigmoid then takes this 1 and computes its sigmoid value, giving:
1/(1+exp(-1)) = 0.731058
And there is our friend!
To solve this, you just need to remove the last activation layer and change the softmax to a sigmoid, since your output is binary:
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential()
model.add(keras.layers.Conv2D(16, [3,3], activation='relu', padding='same'))
model.add(keras.layers.Conv2D(32, [3,3], activation='relu', padding='same'))
model.add(keras.layers.Conv2D(64, [3,3], activation='relu', padding='same'))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dropout(0.15))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(50))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
#model.add(keras.layers.Activation('sigmoid'))
This should work! |
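For completeness, with a single sigmoid output the model would then typically be compiled with binary cross-entropy; a sketch (the optimizer choice is just a placeholder) could be:
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])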
H: How to alter word2vec wikipedia model for n-grams?
I have very little data, so my word2vec model does not perform well. My intention is to identify words similar to technical terms such as 'support vector machine', 'machine learning', 'artificial intelligence', etc.
I am interested in knowing whether I can use Google's Wikipedia model for this. However, according to my model, most of the words I will be dealing with are n-grams. How can I utilise Google's Wikipedia model, which is based on unigrams, to achieve my task?
I am happy to provide more examples if needed :)
AI: To clarify, an n-gram usually refers to a sequence of characters; for example, the word "clear" is composed of the character trigrams {cle, lea, ear}. I think the term you are looking for is "multi-word phrases".
Embedding collections of words is referred to in a couple of different ways, including "sent2vec", "doc2vec" or "thought vectors". These terms generally refer to embedding a "complete" set of words, whether a sentence, a paragraph, a collection of paragraphs, or a document. A common (if somewhat inelegant) approach to using a pre-trained word2vec model to embed multiple words is to embed each word separately and then take the average as the embedding for their combination.
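A minimal sketch of that averaging approach with gensim might look like this; the vector file name is hypothetical, and out-of-vocabulary words are simply skipped:
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical path to pre-trained vectors (e.g. trained on Wikipedia)
wv = KeyedVectors.load_word2vec_format('pretrained_vectors.bin', binary=True)

def phrase_vector(phrase):
    # Average the vectors of the in-vocabulary words of a multi-word phrase
    vectors = [wv[w] for w in phrase.lower().split() if w in wv]
    return np.mean(vectors, axis=0) if vectors else None

v1 = phrase_vector('support vector machine')
v2 = phrase_vector('machine learning')
# Phrases can then be compared with cosine similarity:
# sim = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))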
I think you'll find this article relevant: Representation learning for very short texts using weighted word embedding aggregation |
H: Why is there a $2$ at the denominator of the mean squared error function?
In the famous Deep Learning Book, in chapter 1, equation 6, the Quadratic Cost (or Mean Squared Error) in a neural network is defined as
$ C(w, b) = \frac{1}{2n}\sum_{x}||y(x)-a||^2 $
where $w$ is the set of all weights and $b$ the set of all biases, $n$ is the number of training inputs, x is the set of all training inputs, y(x) is the expected output of the network for input x, and $a$ is the actual output of the network for input $x$, with respect to $w$ and $b$.
Most of this formula seems very clear to me, except the $2$ in the denominator. If I understand correctly, we sum the squared vector length of (the actual output minus the expected output) for each training input (giving the total squared error over the training set) and then divide by the number of training samples to get the mean squared error over all training samples. Why do we also divide by $2$?
Elsewhere, I've seen Andrew Ng's lectures define the mean squared cost in a similar way, also with the $2$ in the denominator, so this seems to be a common definition.
AI: This is just for mathematical convenience. When you differentiate $C(w,b)$, you get an extra factor of $2$; to cancel it, the $2$ is kept in the denominator beforehand.
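To make the cancellation explicit, take a single training example and drop the sum and the $\frac{1}{n}$ factor: $\frac{\partial}{\partial a}\left[\frac{1}{2}\|y(x)-a\|^2\right] = -(y(x)-a)$, whereas without the $\frac{1}{2}$ you would get $\frac{\partial}{\partial a}\|y(x)-a\|^2 = -2(y(x)-a)$. Scaling the cost by a positive constant does not move its minimum, so the $\frac{1}{2}$ only serves to keep the gradient free of that extra factor of $2$.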
You can also watch this video on SVM lecture by Patrick Winston, where he uses a similar formula and then tells that he is using $2$ in denominator just for mathematical convenience. |
H: Fine train a convnet on difficult data only?
I use a convnet to classify two types of objects: class A and B.
I created the data set myself and have around 1000 examples per class. Some are really obvious and clear, while others are very difficult.
My question is: After the training, is it a good idea to retrain the model just on the data that are not correctly classified ?
For example, by taking the 50% of examples that are least correctly classified and retraining on them only for a few epochs. My intuition is that this would help the network focus on non-obvious features that may have been missed during the first training phase.
EDIT: I am an idiot. Training on a subset of the data will necessarily increase the loss on the rest of the data (unless there are some magic tricks I don't know about). The only good solution is to design a custom loss function that fits my needs for the first training.
AI: I am an idiot. Training on a subset of the data will necessarily increase the loss on the rest of the data (unless there are some magic tricks I don't know about). The only good solution is to design a custom loss function that fits my needs for the first training.
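One possible shape for such a loss is a focal-style binary loss that down-weights easy (confidently classified) examples so that the hard ones dominate the gradient; a sketch in Keras, where gamma is just an illustrative tuning parameter, could be:
import keras.backend as K

def focal_loss(gamma=2.0):
    # Binary cross-entropy rescaled so that easy examples contribute less
    def loss(y_true, y_pred):
        y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
        p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)  # probability of the true class
        return -K.mean(K.pow(1.0 - p_t, gamma) * K.log(p_t))
    return loss

# model.compile(optimizer='adam', loss=focal_loss(gamma=2.0), metrics=['accuracy'])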
H: Neural network accuracy for simple classification
I am trying to develop a NN for a very simple classification model with keras/tensorflow:
Ex:
input: "Do" => class output: "Dog"
input: "Ca" => class output: "Cat"
input: "Mo" => class output: "Mouse"
I train the model with many "Do", "Ca", etc. (as dictionary), indexing the input to categorical arrays ([0,1,0] for example is "Cat", [1,0,0] is "Dog", [0,0,1] is "Mouse" in case of three classes).
I know that for this type of problem a traditional classification algorithm should be used rather than DL, but I'm doing it with a purpose (I need to merge it with an image classifier).
My question is: since the classification is easy ("Do" is always "Dog", "Ca" is always "Cat"), a decision tree would always have an accuracy of 100%. Why do neural networks need tons of data and epochs to reach an acceptable accuracy? With 10 classes, it takes 10 epochs and 8 thousand entries to get an accuracy higher than 90%. I'm using sparse categorical cross-entropy as the loss and SGD as the optimizer (2 Dense layers, relu - softmax). Also, I am a bit lost on how to choose the number of neurons; I guess trial and error is the way.
AI: I will add my 2 cents at the end of this answer. However, this is how it can be done using a neural network. Firstly, yes, you should expect to need more data to train even a simple neural network because there are more parameters that need tuning. Think of them like little faucets that you need to tune in order to get the right output volume based on an input. If you have millions of these faucets, you can imagine that this is an arduous process.
You will need some of the following imports
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.callbacks import ModelCheckpoint
from keras.models import model_from_json
from keras import backend as K
But, in your case you know what your data should be so you can simulate it. I do this as follows and make a training and testing set.
import numpy as np
n = 10
m = 2
x_train = np.zeros((n, m))
y_train = np.zeros((n,))
for i in range(n):
label = np.random.randint(0,m)
y_train[i] = label
x_train[i, label] = 1
x_test = np.zeros((n//3, m))
y_test = np.zeros((n//3,))
for i in range(n//3):
label = np.random.randint(0,m)
y_test[i] = label
x_test[i, label] = 1
Now we have a training set which contains $n$ instances and a testing set with a third as many. $m$ is the number of possible inputs; for cat vs. dog this would be $m=2$, and you will be using your more general case where $m=10$. Each row of the matrix $x$ is a one-hot encoded vector in which the index corresponding to the label is 1.
We need to reshape the data for it to fit with the Keras structure.
# The known number of output classes.
num_classes = m
# Channels go last for TensorFlow backend
x_train_reshaped = x_train.reshape(x_train.shape[0], m,)
x_test_reshaped = x_test.reshape(x_test.shape[0], m,)
input_shape = (m,)
# Convert class vectors to binary class matrices. This uses 1 hot encoding.
y_train_binary = keras.utils.to_categorical(y_train, num_classes)
y_test_binary = keras.utils.to_categorical(y_test, num_classes)
We then build our model
model = Sequential()
model.add(Dense(64, activation='relu', input_shape = input_shape))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
We then train our model
epochs = 4
batch_size = 128
# Fit the model weights.
history = model.fit(x_train_reshaped, y_train_binary,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test_reshaped, y_test_binary))
Epoch 4/4 10/10 [==============================] - 0s 251us/step -
loss: 0.6247 - acc: 0.6000 - val_loss: 0.5311 - val_acc: 1.0000
Voila, now you have perfect classification with this network. You can play around with the model and see a summary of the model using.
model.summary()
For $m = 10$
Due to the higher complexity of this set, you will need to increase the number of instances in your training set. I will also use 2 hidden layers in the model and make them less wide. Furthermore, we will add more epochs so we train for longer.
import numpy as np
n = 1000
m = 10
x_train = np.zeros((n, m))
y_train = np.zeros((n,))
for i in range(n):
label = np.random.randint(0,m)
y_train[i] = label
x_train[i, label] = 1
x_test = np.zeros((n//3, m))
y_test = np.zeros((n//3,))
for i in range(n//3):
label = np.random.randint(0,m)
y_test[i] = label
x_test[i, label] = 1
# The known number of output classes.
num_classes = m
# Channels go last for TensorFlow backend
x_train_reshaped = x_train.reshape(x_train.shape[0], m,)
x_test_reshaped = x_test.reshape(x_test.shape[0], m,)
input_shape = (m,)
# Convert class vectors to binary class matrices. This uses 1 hot encoding.
y_train_binary = keras.utils.to_categorical(y_train, num_classes)
y_test_binary = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Dense(32, activation='relu', input_shape = input_shape))
model.add(Dense(32, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
epochs = 10
batch_size = 128
# Fit the model weights.
history = model.fit(x_train_reshaped, y_train_binary,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test_reshaped, y_test_binary))
Epoch 10/10 1000/1000 [==============================] - 0s 49us/step
- loss: 1.5977 - acc: 1.0000 - val_loss: 1.5235 - val_acc: 1.0000
My suggestions
I would not use a NN for such a case. Most of the frameworks allow you to add information throughout your model, such that if you have images you can run a CNN over them, and then, when you are ready to convert your layers to a densely connected layer, you can add in additional information, such as vectorized text.
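As a sketch of that idea with the Keras functional API (the input shapes here are placeholders), image features from a small CNN can be concatenated with an extra feature vector before the final Dense layers:
from keras.layers import Input, Conv2D, Flatten, Dense, concatenate
from keras.models import Model

num_classes = 10                     # as in the m = 10 case above

image_in = Input(shape=(64, 64, 3))  # placeholder image size
extra_in = Input(shape=(10,))        # e.g. the one-hot text features

x = Conv2D(16, (3, 3), activation='relu')(image_in)
x = Flatten()(x)

merged = concatenate([x, extra_in])  # join CNN features with the extra inputs
h = Dense(32, activation='relu')(merged)
out = Dense(num_classes, activation='softmax')(h)

model = Model(inputs=[image_in, extra_in], outputs=out)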
You can thus use a random forests approach or something even simpler to get your 100% classification even faster. Then you can feed the output of this model to your deep learning framework which has already "extracted the features" from the images and concatenate these additional features to that tensor. Then you will pass this larger tensor through the subsequent Dense layers to get your final output. |
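And a minimal sketch of that random-forest alternative on the same simulated one-hot data (assuming scikit-learn is installed):
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(x_train, y_train)
print(rf.score(x_test, y_test))  # expect 1.0, or very close, for this deterministic mapping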