H: Not able to restore attention model properly
I am referring to this article on building an attention model using TensorFlow.
I am trying to train a similar model on my dataset using google colab. Due to the session limit of colab and my large dataset, I need to save the model state and restore it to resume training.
However, I am not able to restore the model after saving the parameters. I have saved the input and target tokenizers, the model checkpoint, and even the input and output tensors. However, every time I use checkpoint.restore and resume training, the model resumes with a high loss (as if the weights were random).
I always test my model before saving using the translate function on some test data and it generates a one line summary. However, when I restore the model and run some sample data on the translate function, I only get a single tag as output (as if it is a newly initialised model).
Here is my code
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
manager = tf.train.CheckpointManager(checkpoint, 'checkpoint_dir', max_to_keep=1)
The training step is
EPOCHS = 50
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in tqdm(enumerate(dataset.take(steps_per_epoch))):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpoint) the model every 3 epochs
if (epoch + 1) % 3 == 0:
manager.save()
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
I restore by doing
checkpoint.restore('ckpt-ckptnumber.index')
I save the tokenizers (both input and output) using pickle
with open('inp_tokenizer.pickle', 'wb') as handle:
pickle.dump(inp_lang_tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
I save the tensors using numpy.save()
np.save('X.npy', input_tensor)
AI: You have to use the entire checkpoint prefix, e.g. ckpt.restore("./tf_ckpts/ckpt-10"), not just the .index file name. Please check https://www.tensorflow.org/guide/checkpoint. |
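As a minimal sketch of the restore step (paths are placeholders, and it assumes the same optimizer/encoder/decoder objects have already been rebuilt), you can let the CheckpointManager hand you the latest checkpoint prefix instead of typing it by hand. Note also that in the code above the manager was given the literal string 'checkpoint_dir' rather than the checkpoint_dir variable, so the save and restore paths may not match:
checkpoint = tf.train.Checkpoint(optimizer=optimizer, encoder=encoder, decoder=decoder)
manager = tf.train.CheckpointManager(checkpoint, checkpoint_dir, max_to_keep=1)
# restore the most recent checkpoint, e.g. './training_checkpoints/ckpt-10'
status = checkpoint.restore(manager.latest_checkpoint)
# optional sanity check that the saved weights were actually matched to the model
status.assert_existing_objects_matched()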
H: Compare ratings of players over different leagues
I want to compare ratings of players from different leagues and predict the rating of a player in a league he/she didn't participate in. A player's rating is estimated within the league where he was playing.
There are some cross-observations, i.e. players that have ratings estimated for more than one league.
For example, there is a player P1 who has a rating of 40 in league L1 and a rating of 55 in league L2, as you can see in the picture, which shows the distributions of ratings in L1 and L2. The yellow line indicates the mean rating of a league and the green lines are the cross-observations.
My question is if I know that player Px has a rating R in L1, what is his rating in L2?
Any ideas are appreciated!
Thank you
AI: An existing way to do this is to use modified Elo ratings.
Video game servers sometimes use a similar scoring system called Glicko or Glicko-2 which might be better for your purposes.
My understanding is that you want to understand the rating of each player relative to every other player and that each in-league rating is essentially the rating of each player in that league relative to each other player in that league. If this is true then there are two ways I can think of to help you work out the hypothetical rating of player X in league L2 based on their performance in league L1:
Generate global ratings based on one of the methods mentioned above and use them to insert the new player in the right place in the ranking
Calculate an in-group rating based on the methodologies described above and then calculate the new player's ranking by simulating the score he/she would accumulate if they played each of the players in L2 that they could be compared to in L1 and achieved the expected outcome (i.e. beat all the players they were ranked better than and were beaten by all the players they were ranked worse than); see the sketch below.
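As a rough illustration of that second option, here is a hypothetical Elo-style sketch (the ratings and the conventional 400 scale factor are placeholders, not values from your leagues):
# expected score of a player rated rating_a against a player rated rating_b
def expected_score(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
# simulate player X (hypothetical global rating 1600) against L2 opponents
l2_ratings = [1450, 1520, 1610, 1700]  # placeholder ratings for players in L2
expected_points = sum(expected_score(1600, r) for r in l2_ratings)
print(expected_points)  # expected total score if X played everyone in L2 once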
You might also be interested to check whether the rank correlation between leagues is strong based on players that are ranked in both leagues. |
H: Pipeline heterogeneous data
I'm facing an issue I don't know how to solve, but as I'm a beginner probably there is an easy solution I can't find.
I'm playing with the titanic dataset and I want to work with pipelines (In order to avoid data leakage using cross validation). For that reason I'm using two pipelines (one for numerical, one for categorical) + FeatureUnion().
What's the problem? In the numerical pipeline I fill the NaN values of Age and then I create some buckets for that variable. The result of this pipeline is a dataframe containing all the numerical features plus one categorical variable. For encoding categorical variables I use the categorical pipeline, and then FeatureUnion to join both outputs. The problem is that the new variable created in the numerical pipeline never goes through the categorical pipeline, so I end up with a dataframe in which one categorical variable hasn't been encoded. How can I solve this?
CODE:
num_pipeline = Pipeline(steps = [
('selector', DataFrameSelector(numerical_features)),
('imputer', df_imputer(strategy="median")), #Numerical
('new_variables', df_new_variables()) #Numerical
])
cat_pipeline = Pipeline(steps = [
('selector', DataFrameSelector(categorical_features)),
('label_encoder', MultiColumnLabelEncoder()) #Categorical
])
full_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline)
])
Thank you for your time
Best regards
EDIT:
I was thinking about using ColumnTransformer as I think it suits better in my example as I have to apply different transformations for different columns, but the problem is that when working with ColumnTransformer the output would be an array with no columns' names, which I think would be hard to deal if we want to use feature selection. That's why I chose Pipelines rather than ColumnTransformer.
Talking about the option of creating the bucket before going into the pipeline, I can't because it's created based on the variable I'm dealing with missing values.
What would be the best option in this case?
AI: Approach 1: create features before transforming
If you want to create a categorical variable based on a numerical variable and then treat it in cat_pipeline, you need to create it before the column transformer.
Implement a transformer (call it a "bucketer"?) that takes p variables and transforms them into p+1 (if you want to add the categorical representation and keep the initial numerical feature). This transformer is the FIRST step of your pipe.
Then, create a ColumnTransformer (I think it is more suited to your case, but I don't have enough details to be sure; I suggest you read this to make sure). This second transformation is the second step in your pipe.
Each branch is fed with respect to what it should output because feature creation (bucketing) was done prior to column transformer.
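A minimal sketch of Approach 1, assuming the Titanic-style column names Age, Fare and Sex (adapt them to your data) and using OrdinalEncoder as a stand-in for your MultiColumnLabelEncoder:
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder
import pandas as pd
class AgeBucketer(BaseEstimator, TransformerMixin):
    """Adds a categorical 'age_bucket' column derived from the (imputed) 'Age' column."""
    def fit(self, X, y=None):
        self.median_ = X["Age"].median()  # learn the imputation value on the training data
        return self
    def transform(self, X):
        X = X.copy()
        age = X["Age"].fillna(self.median_)
        X["age_bucket"] = pd.cut(age, bins=[0, 12, 18, 60, 120],
                                 labels=["child", "teen", "adult", "senior"]).astype(str)
        return X
preprocess = ColumnTransformer(transformers=[
    ("num", SimpleImputer(strategy="median"), ["Age", "Fare"]),
    ("cat", OrdinalEncoder(), ["Sex", "age_bucket"]),  # the new column is routed here
])
full_pipeline = Pipeline(steps=[
    ("bucketer", AgeBucketer()),   # step 1: create the new categorical feature
    ("preprocess", preprocess),    # step 2: route each column to the right branch
])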
Approach 2: create features in transforming
Otherwise, you can create two main paths :
One branch will output only categorical features, whatever the input type.
The other will output only numerical features, whatever the input type.
You may want to re-use your feature selector in some branches, I just wanted to illustrate the two possible approaches
Hope this helps |
H: How to apply .replace() inside a column in a pandas data frame to clean data
I have a pandas data frame where I want to replace all the values that are not numeric in one column for ""
The error I'm getting in my code is the following:
ValueError: could not convert string to float: '690,276.00'
Since I'm trying to convert all values into float so I can do internal operations with them.
Part of my cleaning data frame code looks like this:
# Cleaning:
df_clean = df_read[~(df_read['Ratio of Similarity (Gray)'] <= .2)]
print(df_clean, 'clean 1: Eliminate Ratio of similarity less than 0.2')
df_clean_2 = df_clean.dropna(subset=['buybox_price'])
print(df_clean_2, 'clean 2: Eliminate Nan Buybox Prices')
df_clean_2 = df_clean_2.replace(",", "").replace('', '').astype({'product_ranking':'float64'})
df_clean_3 = df_clean_2[~(df_clean_2['product_ranking'] >= 5000000)]
print(df_clean_3, 'clean 3: Eliminate Product Ranking + than 5.000.000')
df_clean_4 = df_clean_3[~(df_clean_3['buybox_price'] <= 6)]
print(df_clean_4, 'clean 4: Eliminate Buybox Price less than 6$')
# Save Cleaned File
path_file = os.path.join(BASE_DIR, 'csv/amazon_product_comparator.csv')
df_hc = df_clean_4.to_csv(path_file)
The error can be found in the line:
df_clean_3 = df_clean_2[~(df_clean_2['product_ranking'] >= 5000000)]
print(df_clean_3, 'clean 3: Eliminate Product Ranking + than 5.000.000')
AI: EDITED to include a working example.
I have made an example with string-type numeric data. I have also included a poorly formatted string number, 1,002,*8320.
The code below will convert the numbers in the format of digits and digits with commas to floats, and convert all other strings to NaN. I have used a regular expression to replace commas with blanks.
NaN must be used instead of "" since the comparison operator will not work on strings.
import pandas as pd
import numpy as np
import re
# example data frame
df2 = pd.DataFrame([['1', '2000000000'],
['2', '1,002,*8320'],
['3', '1,000,000']],
columns = ['idx','product_ranking'])
# Remove Commas
df2['product_ranking'] = df2['product_ranking'].map(lambda x: re.sub('[,]*' , '', x))
# Convert strings that are numeric into floats; everything else becomes NaN.
df2['product_ranking'] = df2['product_ranking'].map(lambda x: float(x) if x.isnumeric() else np.nan)
#Comparison is working
df2['product_ranking'] > 100000000 |
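As an alternative sketch that also handles decimal strings such as '690,276.00' (which .isnumeric() rejects), pandas' to_numeric with errors='coerce' turns anything non-numeric into NaN after the commas are stripped:
df2['product_ranking'] = pd.to_numeric(
    df2['product_ranking'].str.replace(',', '', regex=False),  # drop thousands separators
    errors='coerce')  # badly formatted strings such as '1002*8320' become NaN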
H: Is it possible to apply PCA on different subsets independently?
I need to apply PCA on a rather big set of data, but my machine is not able to handle the workload. So I was considering to split randomly my original set into 4 subsets, apply PCA independently on each subset and finally join the 4 subsets to have the original one with the PCA.
As far as I understand, PCA looks for correlated variables so they can be combined into one component, which somehow represents the values of the original variables. So I believe this operation happens at a row level. However, I guess the algorithm needs to analyse the whole set to determine the correlation between features, since correlation among features may differ row by row, and some rows may even have NaN values.
So I would like to know if this approach with the subsets is correct, or if I may end up with one subset in which PCA combined features a and b and another subset in which it combined b and c.
AI: You can use mini-batch (incremental) PCA. An implementation, IncrementalPCA, is available in sklearn.
Alternatively, you can run PCA on a carefully selected subset of your data. Selecting such a subset well is very time consuming, I'm not sure it is always possible, and its feasibility is task specific. |
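A minimal sketch of the first suggestion using sklearn's IncrementalPCA, assuming X is your full feature matrix with no NaN values (impute first) and that 10 components and 10 chunks are placeholder choices:
import numpy as np
from sklearn.decomposition import IncrementalPCA
ipca = IncrementalPCA(n_components=10, batch_size=1000)
for chunk in np.array_split(X, 10):  # feed the data to the model in chunks
    ipca.partial_fit(chunk)
X_reduced = ipca.transform(X)  # project the whole set onto the fitted components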
H: Remove all columns where the entire column is null
I have a very dirty csv where there are several columns with only null values.
I would like to remove them. I am trying to select all columns where the count of null values in the column is not equal to the number of rows.
clean_df = bucketed_df.select([c for c in bucketed_df.columns if count(when(isnull(c), c)) not bucketed_df.count()])
However, I get this error:
SyntaxError: invalid syntax
File "<command-2213215314329625>", line 1
clean_df = bucketed_df.select([c for c in bucketed_df.columns if count(when(isnull(c), c)) not bucketed_df.count()])
^
SyntaxError: invalid syntax
If anyone could help me get rid of these dirty columns, that would be great.
AI: [Updated]: Just realized it is about pyspark!
It is still simple! A concrete example (idea heavily borrowed from this answer):
Creating a dummy dataset
import pandas as pd
import numpy as np
import pyspark.sql.functions as sqlf
main = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('ABCD'))
main["E"] = np.nan
main["F"] = np.nan
df = sqlContext.createDataFrame(main)  # assumes an existing SQLContext / SparkSession
Function to drop Null columns
def drop_null_columns(df):
"""
This function drops columns containing all null values.
:param df: A PySpark DataFrame
"""
null_counts = df.select([sqlf.count(sqlf.when(sqlf.col(c).isNull(), c)).alias(c) for c in df.columns]).collect()[0].asDict()
to_drop = [k for k, v in null_counts.items() if v >= df.count()]
df = df.drop(*to_drop)
return df
Outcome
df_dropped = drop_null_columns(df) |
H: Numpy ndarray holding string with unknown data type
I ran the following code in a jupyter notebook cell
ndarrs=np.array(["1.2","1.5","1.6"], dtype=np.string_)
print(ndarrs.dtype)
It returned |S3.
Can someone help me understand the meaning of this symbol?
AI: In Python 3, you shouldn't really specify the np.string_ dtype, as it is left there for backwards compatibility with Python 2. The S type you see using np.dtype is a map to the bytes_ type, a zero-terminated string buffer, which shouldn't be used.
The S just means string and the number gives the number of bytes.
In [1]: s = "" # start with an empty string
In [2]: for i in range(6): # make the string larger
s += str(i)
a = np.array([s], dtype=np.string_)
print(f"{a}\t{a.dtype}")
[b'0'] |S1
[b'01'] |S2
[b'012'] |S3
[b'0123'] |S4
[b'01234'] |S5
[b'012345'] |S6
For python 3 you should instead use np.unicode:
In [1]: a = np.array(["hello", "world"], dtype=np.unicode)
In [2]: type(a)
Out[2]: numpy.ndarray
In [3]: a.dtype
Out[3]: dtype('<U5')
< means little-endian
U means a unicode string
5 relates to the maximum number of characters the array can hold per element. If you had really long strings, that number would increase.
Have a look at this documentation, which states:
Note on string types:
For backward compatibility with Python 2 the S and a typestrings remain zero-terminated bytes and np.string_ continues to map to np.bytes_. To use actual strings in Python 3 use U or np.unicode_. |
H: Which graphical tools can be used to display uni- or bivariate continuous data?
There are 4 options to this multiple question,
Scatterplots
Conditional density plots
Histograms
Boxplots
I chose scatterplots and histograms, but that answer is either wrong or not enough. I haven't seen anything about conditional density plots in my textbook, and I'm not sure about boxplots. I tried googling it, but there is no mention of boxplots. I think I might have misunderstood the question.
AI: It’s asking you about these and these, both of which can visualise continuous data. |
H: Partial autocorrelation and Autocorrelation difference
I have been trying to understand better the PACF and ACF, but I'm literally struggling.
Have been using a series of articles like:
https://machinelearningmastery.com/gentle-introduction-autocorrelation-partial-autocorrelation/
And even looking at the method signatures in statsmodels for PACF and ACF, I am still not able to get it.
Can somebody be so kind as to give a "for dummies" explanation, with some example of what is happening when the data are considered?
Is it correct saying, among the other things, that
ACF is the plot of the AutoRegressive (AR) modelling whereas
PACF does the same but for the Moving Average (MA)
Or is it the other way round?
ACF is the plot of the Moving Average (MA) modelling whereas
PACF does the same but for the AutoRegressive (AR).
AI: Good evening,
Let's begin with a simple example: you have a time series process, for example some process with correlation of up to lag 4.
An AR(p) process is a time series process where the current value depends on the previous p values.
You can think, for example, of weekly sales in a supermarket: an AR(4) model in this case says that the current week's sales depend on the previous 4 weeks of sales (week 1, week 2, week 3, and week 4).
Now, let's get into ACF and PACF.
You have your AR(4) model, which basically tells you that points are correlated up to lag 4, meaning that the correlation between wk 1 and wk 5 is the same as between wk 2 and wk 6 (note that the "distance" between them is 4 lags). Similarly, for any number of lags <4 this rule works the same way: the correlation between wk 1 and wk 2 is the same as between wk 2 and wk 3, and so on. Assuming that the correlation at lag 1 (wk 1 and wk 2) is stronger than at lag 2 (wk 1 and wk 3), this means that as the number of lags increases, the autocorrelation decreases. At each further lag the information from previous lags carries over.
Here is a visualization.
PACF is a completely different concept. What it primarily focuses on is finding out the correlation between two points at a particular lag. In your case, say you want to find the "independent" correlation between wk4 and wk3, this is exactly what PACF will show you.
Here is a visualization.
All together:
The ACF at wk 4 will include all the information up to wk 3 (wk 1, wk 2, wk 3).
The PACF at wk 4 will include only the "independent" (partial) correlation between wk 3 and wk 4, meaning the part of the correlation between these points that is not explained by their mutual correlations with the lags in between.
To your point about AR and MA processes, it's the other way around. The lag at which ACF becomes very small is order of MA process. And the lag at which PACF becomes very small is order of AR process.
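A minimal sketch for looking at this in practice with statsmodels, assuming y is a pandas Series such as the weekly sales above:
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=20, ax=axes[0])   # the lag where the ACF dies out suggests the MA order
plot_pacf(y, lags=20, ax=axes[1])  # the lag where the PACF dies out suggests the AR order
plt.show()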
Hope this helps. |
H: How to extract data from an API every hour in Python?
I am new to Python and I tried writing a script that extracts air-quality JSON data from an API every hour and logs it into the same Excel file. My code doesn't return anything.
Is my code correct ? and how can I make it log into the excel file every hour please ? Thank you very much.
Here is the script:
def write_to_excel():
request = requests.get("https://api.waqi.info/feed/paris/?
token=?")
request_text = request.text
JSON = json.loads(request_text)
filterJSON = {
'time': str(JSON['data']['time']['s']),
'co': str(JSON['data']['iaqi']['co']['v']),
'h': str(JSON['data']['iaqi']['h']['v']),
'no2': str(JSON['data']['iaqi']['no2']['v']),
'o3': str(JSON['data']['iaqi']['o3']['v']),
'p': str(JSON['data']['iaqi']['p']['v']),
'pm10': str(JSON['data']['iaqi']['pm10']['v']),
'pm25': str(JSON['data']['iaqi']['pm25']['v']),
'so2': str(JSON['data']['iaqi']['so2']['v']),
't': str(JSON['data']['iaqi']['t']['v']),
'w': str(JSON['data']['iaqi']['w']['v']),
}
liste.append(filterJSON)
try:
os.remove("airquality.xlsx")
except:
pass
pd.DataFrame(liste).to_excel('airquality.xlsx')
print(liste)
if __name__ == "__main__":
schedule.every(3).seconds.do(write_to_excel)
while True:
schedule.run_pending()
'''
AI: Assuming that the example is working for you and currently tries to write the data every 3 seconds, you just need to change the scheduling to be
schedule.every(1).hour.do(write_to_excel)
You are currently writing the data at each interval to the same file, so you will overwrite the file every time. You could do a few things here:
Open the excel file (e.g. into a pandas DataFrame), append the new data and save it all back to disk. This is pretty inefficient though and will have problems once you have a lot of data.
Write the data to a database, extending it every hour. This is the most professional solution.
Write a new file to disk each hour, including e.g. the timestamp of the hour in the filename to make each file unique. This is simple and just means you iterate over the files one-by-one when reading them later to do analysis or plotting etc.
You could change your function to be like this, implementing the first option above:
def request_data_and_save(excel_file: str = "air_quality.xlsx"):
request = requests.get("https://api.waqi.info/feed/paris/?token=?")
request_text = request.text
JSON = json.loads(request_text)
filterJSON = {
'time': str(JSON['data']['time']['s']),
'co': str(JSON['data']['iaqi']['co']['v']),
'h': str(JSON['data']['iaqi']['h']['v']),
'no2': str(JSON['data']['iaqi']['no2']['v']),
'o3': str(JSON['data']['iaqi']['o3']['v']),
'p': str(JSON['data']['iaqi']['p']['v']),
'pm10': str(JSON['data']['iaqi']['pm10']['v']),
'pm25': str(JSON['data']['iaqi']['pm25']['v']),
'so2': str(JSON['data']['iaqi']['so2']['v']),
't': str(JSON['data']['iaqi']['t']['v']),
'w': str(JSON['data']['iaqi']['w']['v']),
}
# Get a writer and append data to the file
with pd.ExcelWriter(excel_file, mode="a") as xl:
df = pd.DataFrame([filterJSON])  # wrap the dict in a list to build a one-row frame
df.to_excel(xl)
if __name__ == "__main__":
schedule.every().hour.do(request_data_and_save)
while True:
schedule.run_pending()
NOTE: this code is not tested |
H: Is this correlation between distance matrices?
I have a set of objects. I have calculated two distance matrices: $X$ defining distance between each objects pair using metric $f1$, and $Y$ -- using metric $f2$. Now, I would like to understand if two objects are similar according to metric $f1$, then they are also similar according to metric $f2$. How can I do it?
For instance, $f1$ could say whether two objects have similar color, and $f2$ --- whether two objects have similar size. But metrics can be anything. For instance, we could talk about articles, $f1$ could be Jaccard distance measuring how many tags both articles share, and $f2$ could be euclidian distance measuring distance between word vectors of two articles. Now I would like to understand if two objects of blueish objects tend to be big, or whether articles tagged with "racism" have similar content.
Am I asking about correlation? How can I calculate it between $X$ and $Y$?
AI: You are basically right. You want to check the degree of dependence of one variable with respect to another one. No matter how you generate each variable, if you want to know how dependent it is on the other one, you normally use correlation to run that evaluation (a small sketch of the computation follows the list below).
Very useful options you should consider to analyze the potential dependence between X and Y are:
Correlation: to measure both the strength and direction of the linear relationship between two variables.
Covariance: to assess the direction of the linear relationship between variables (not the strength).
Pearson's correlation: to obtain a single metric representing the degree (strength) of dependence between the two variables
Spearman's correlation: to evaluate a potential non-linear dependence between the two variables.
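A minimal sketch of the computation, assuming X and Y are the two n x n distance matrices as numpy arrays (note that pairwise distances are not independent observations, so treat any p-values with caution):
import numpy as np
from scipy.stats import pearsonr, spearmanr
iu = np.triu_indices_from(X, k=1)   # upper triangle, excluding the diagonal
x_flat, y_flat = X[iu], Y[iu]       # the pairwise distances under f1 and f2
print(pearsonr(x_flat, y_flat))     # linear dependence
print(spearmanr(x_flat, y_flat))    # monotonic (possibly non-linear) dependence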
For more information, please, check this post here. |
H: How to compare supervised learning algorithm and it's technique ensemble learning algorithm?
I have to compare the Support Vector Machine and Random Forest algorithms, but I'm confused about how they can be compared, since support vector machine is a supervised learning algorithm and random forest is an ensemble learning method.
Please help me understand on which points I can compare them, e.g. in classification and in regression.
AI: TL;DR
Since both SVM and Random Forest are supervised algorithms, you can compare the two like you would compare any other two supervised algorithms.
The fact that a Random Forest is an ensemble classifier doesn't really matter as long as you treat all trees in the forest as a single model.
Comparing two supervised algorithms
The simplest way to compare supervised algorithms is with a train/test split (a sketch follows the steps below):
Split all your data into two sets, namely a training and a testing set (a common ratio is 0.8/0.2).
Train both models independently with the data from the training set.
Use your models to predict the data from the testing set.
Give a score to the predictions by comparing what the model predicted vs the true value from the testing set. If you have a classification problem, you could use the F1 score. If you have a regression problem, you could use the R-square score.
Pick the model with the best score.
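A minimal sketch of those steps with scikit-learn, using a synthetic dataset as a placeholder for your own and the F1 score as the classification metric:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
X, y = make_classification(n_samples=1000, random_state=0)  # placeholder data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
for model in (SVC(), RandomForestClassifier()):
    model.fit(X_train, y_train)                      # train on the training set
    score = f1_score(y_test, model.predict(X_test))  # score on the held-out test set
    print(type(model).__name__, score)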
Other ways of comparing two supervised algorithms
Instead of a train/test split, you could look into cross-validation.
Instead of a random train/test split, you could look into stratified or time splits.
Instead of comparing two different algorithms, you could compare the same algorithm against itself but with different hyper-parameters (i.e. Hyper-parameter optimization).
Instead of F1 score or R-squared you could use a metric that best fits your business case. |
H: Having an issue plotting a horizontal chart
So for some weird reason I can't manage to fix the plotting issue
Any suggestions?
from sklearn.metrics import confusion_matrix
List = []
for i in range(len(model)):
cm = confusion_matrix(Y_test, model[i].predict(X_test))
TN = cm[0][0]
TP = cm[1][1]
FN = cm[1][0]
FP = cm[0][1]
List.append(((TP + TN) / (TP + TN + FN + FP))*100)
print()
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
bars = ('Logistic', 'K Nearest Neighbor', 'K Nearest Neighbor', 'Support Vector Machine ', 'Support Vector Machine ','Support Vector Machine ','Support Vector Machine ' )
percentage = np.array([List[0], List[1], List[2], List[3],List[4],List[5],List[6]])
new_labels = [i+' {:.2f}%'.format(j) for i, j in zip(bars, percentage)]
plt.barh(bars, percentage, color='lightskyblue', edgecolor='blue')
plt.yticks(range(len(bars)), new_labels)
# Show graphic
plt.show()
AI: You have to change one line in your code.
This
plt.barh(bars, percentage, color='lightskyblue', edgecolor='blue')
should become this
plt.barh(range(len(bars)), percentage, color='lightskyblue', edgecolor='blue')
What you are changing is the y position of your bars. If you look at your original bars there are only 3 values there, so bars were overlapping.
Once you change that to range(len(bars)) you are using the same tick values that you set in plt.yticks(range(len(bars)), new_labels). |
H: List of keywords as features
I'm new to machine learning, this being the first time I'm involved in a project in the area. I have a dataset of news articles and have extracted the keywords present in the news titles, such as ['china factory activity shrinks', 'first time', '2 years']. Note that the list size varies.
I would like to use this data in my features, but I don't know how and what is the best way to extract features based on a list of keywords.
Since the keywords cannot be converted to a categorical value, I think I can't use OneHotEncoder, but I'm not 100% certain. Is bag-of-words or TF-IDF a viable possibility? How can I encode these keywords to numerical values while maintaining some meaningful information for the model?
AI: Sure, of course we can use TF-IDF or bag of words. The easiest way is to build a separate TFIDF transformer for every group of keywords and then combine them together using FeatureUnion. Just make sure the custom tokenizer only filters out the keywords in each group that you're looking at.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = [
"china factory activity shrinks some text here",
"first time some text here",
"2 years some text here",
"combining first time and 2 years here.",
"adding china factory activity first but doesn't match exactly"
]
# custom tokenizers
keywords1 = lambda x: [x for x in x.split() if x in ["china", "factory", "activity", "shrinks"]]
keywords2 = lambda x: [x for x in x.split() if x in ["first", "time"]]
keywords3 = lambda x: [x for x in x.split() if x in ["2", "years"]]
combined_features = FeatureUnion([
('kwd1', TfidfVectorizer(tokenizer=keywords1)),
('kwd2', TfidfVectorizer(tokenizer=keywords2)),
('kwd3', TfidfVectorizer(tokenizer=keywords3))
])
pipeline = Pipeline([('bow', combined_features)])
output_corpus = pipeline.fit_transform(corpus)
For the deep-learning approach, I would recommend a similar kind of pattern so that the embedding learned for each group has a different semantic meaning from the other groups. You'll just need to ensure that the tokenizer only considers the keywords from each group when it is processed. If you're using a pre-built word embedding, you'll just have a filter before the text goes into each embedding so that it doesn't "leak" into another topic.
Example using Keras:
from keras import layers
from keras.models import Model
from keras.preprocessing.sequence import pad_sequences
def custom_sequence_tokenizer(txt, vocab):
vocab_mapper = dict([(word, idx+1) for idx, word in enumerate(vocab)])
text_split = [vocab_mapper.get(x, 0) for x in txt.split() if x in vocab]
return text_split
keywords1 = pad_sequences([custom_sequence_tokenizer(x, ["china", "factory", "activity", "shrinks"]) for x in corpus], padding='post', maxlen=10)
keywords2 = pad_sequences([custom_sequence_tokenizer(x, ["first", "time"]) for x in corpus], padding='post', maxlen=10)
keywords3 = pad_sequences([custom_sequence_tokenizer(x, ["2", "years"]) for x in corpus], padding='post', maxlen=10)
input_count1 = layers.Input(shape=(10,), name='in1')
input_count2 = layers.Input(shape=(10,), name='in2')
input_count3 = layers.Input(shape=(10,), name='in3')
# input size is reflection of the vocab size
embed1 = layers.Embedding(5, 8)(input_count1)
embed2 = layers.Embedding(3, 8)(input_count2)
embed3 = layers.Embedding(3, 8)(input_count3)
combine = layers.Concatenate()([embed1, embed2, embed3])
model = Model(inputs=[input_count1, input_count2, input_count3], outputs=combine)
output = model.predict({
'in1': keywords1,
'in2': keywords2,
'in3': keywords3,
}) |
H: Intuition behind Adagrad optimization
The following paper, ADADELTA: AN ADAPTIVE LEARNING RATE METHOD, gives a method called Adagrad where we have the following update rule: $$ X_{n+1} = X_n - \frac{Lr}{\sqrt{\sum_{i=0}^{n} g_i^2}}\, g_n $$
Now I understand that this update rule dynamically chooses the learning rate for each iteration, but I have the following question:
here we see that larger (accumulated) gradients lead to smaller learning rates and smaller gradients lead to larger learning rates. I don't understand why this is the desired property; in other words, why is this a good thing for our network?
AI: To understand the intuition behind Adagrad, let's have a look at the graphs below, representing the evolution of the loss function when model's weights are updated according to different values of the learning rate in a 1-D search space:
Graph 1 (left - Too Fast!): when updating the weights of the model, we can see that it does not converge to the global minimum. It is due to the fact that the learning rate is too high and it keeps bouncing at the bottom.
Graph 2 (center - Too Slow!): in this case, the weights' update takes place very slowly because the learning rate is too small, so it takes too long to converge to the global minimum (or it never does).
Graph 3 (right - Spot On!): in this case, the learning rate is adapted depending on the value of the gradient, i.e. the higher the gradient, the lower the learning rate, or, the lower the gradient, the higher the learning rate. This makes it possible to increase the chances of convergence without having the issues commented before.
Adagrad is one of several adaptive gradient descent algorithms like ADAM, ADADELTA, etc. You can check more info here.
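A minimal numpy sketch of the update rule on a toy 1-D quadratic (the objective, the learning rate and the small epsilon added for numerical stability are illustrative assumptions, not part of the formula above):
import numpy as np
x, lr, eps = 5.0, 1.0, 1e-8
accum = 0.0
for n in range(100):
    g = 2 * x                             # gradient of f(x) = x**2
    accum += g ** 2                       # running sum of squared gradients
    x -= lr / (np.sqrt(accum) + eps) * g  # large accumulated gradients -> small steps
print(x)  # approaches the minimum at 0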
NOTE: the pictures above have been taken from here. |
H: How to generate plot of reward and its variance?
I am new to reinforcement learning and I would like know how to generate a learning curve plot such as that shown below (taken from this blog post), that illustrates the reward (return) and its variance (shaded region). I would like to use Matplotlib or any other Python plotting framework.
AI: To estimate the variance, you probably need to run your algorithm multiple times and keep track of the return for each of these runs. From these multiple returns, you can estimate the variance.
Once you have the standard deviation (or variance) of the return, you can plot something like your plots using matplotlib's fill_between function. |
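A minimal matplotlib sketch, assuming returns is an array of shape (n_runs, n_episodes) holding the return of each run at each episode (random data is used as a placeholder):
import matplotlib.pyplot as plt
import numpy as np
returns = np.random.randn(10, 200).cumsum(axis=1)  # placeholder: 10 runs, 200 episodes
mean, std = returns.mean(axis=0), returns.std(axis=0)
episodes = np.arange(returns.shape[1])
plt.plot(episodes, mean, label="mean return")
plt.fill_between(episodes, mean - std, mean + std, alpha=0.3, label="+/- 1 std")
plt.xlabel("episode")
plt.ylabel("return")
plt.legend()
plt.show()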
H: Different activation function in same layer of a Neural network
My question is: what will happen if I arrange different activation functions in the same layer of a neural network and continue the same trend for the other hidden layers?
Suppose I have 3 ReLU units at the start and 3 tanh units after that (and possibly other activation functions) in the same hidden layer, and for the other hidden layers I am scaling all the nodes by the same factor (decreasing/increasing), while the arrangement and order of the activation functions does not change.
AI: At a higher level, using multiple activation functions in a NN may not work well, as these activation functions respond differently to the same inputs. For an input of -1.56, ReLU will give 0, sigmoid will give 0.174 and tanh will give -0.91. These differences will not allow the gradients to flow uniformly during backpropagation.
You may try using two different activation functions in the same layers. I can think of some issues which may arise,
Talking about the combination of Tanh and ReLU functions, we can face the problem of exploding gradients. The ReLU function returns its input unchanged if it is zero or positive. On the other hand, the Tanh function produces outputs in the range [-1, 1]. Large positive values will pass through the ReLU function unchanged, but while passing through the Tanh function you'll always get a fully saturated firing, i.e. an output of 1.
Therefore, while using ReLU in a classification problem, we use softmax instead of the sigmoid function. If the sigmoid function is used, you'll probably get an output of ones only.
Instead of the ReLU-Tanh combination, you may want to use the Sigmoid-Tanh combination.
Here, the sigmoid may give an output of 0 ( or a number close to zero ), if given a negative value. These negative values may be supplied by the Tanh function in the preceding layers. You may face overfitting here. |
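For what it's worth, a minimal Keras sketch of what "mixing activations in one layer" could look like (the layer sizes and input shape are arbitrary assumptions): two parallel Dense blocks with different activations are applied to the same input and concatenated, which is equivalent to one layer whose units use different activations:
from keras import layers
from keras.models import Model
inp = layers.Input(shape=(8,))
relu_part = layers.Dense(3, activation="relu")(inp)   # 3 ReLU units
tanh_part = layers.Dense(3, activation="tanh")(inp)   # 3 tanh units
mixed = layers.Concatenate()([relu_part, tanh_part])  # the "mixed" layer output
out = layers.Dense(1, activation="sigmoid")(mixed)
model = Model(inputs=inp, outputs=out)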
H: Changing the default scale of a plot to a "custom scale"
I am trying to change the scale of the x_axis for a plot.
The default is generating divisions of 20 units (0-20-40-60-80-100-120).
I tried changing to log, but then I get 0-10-100, which helps understand the data, but I would like to try to see divisions with 10 units because my data is not logarithmic (its linear, but there are many values in the 0-10 range, which results in a plot that is very hard to read).
How can I achieve that effect, or another one that will help me visually the data?
The code is below:
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
sns.set()
plot = data3.plot(kind="scatter", x="A", y="B") # data3 is the name of my data frame
plt.xscale('log') # this was my change (plt is correct)
plot
I also tried the following (instead of the 'log' line above), but it gives the default (0-20-40-etc):
plt.xticks = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
AI: You are almost there when you tried to set the xticks values.
Try
ticks = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
plt.xticks(ticks, ticks)
The two parameters are the tick locations and the tick labels and they could be different, if you wanted.
Here's a full sample code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
point_min = 0 # low value for random data
point_max = 101 # high value for random data
point_count = 100 # number of data points
# generate a DataFrame
data3 = pd.DataFrame({'A':random.sample(range(point_min, point_max), point_count)
, 'B':random.sample(range(point_min, point_max), point_count)}
)
tick_freq = 10 # define how far apart you want your ticks
ticks = list(range(point_min, point_max, tick_freq)) # 0, 10, ..., 100
data3.plot(x='A', y='B', kind='scatter'
, c=np.repeat(np.atleast_2d([.5, .6, .7]), data3.shape[0], axis=0) # marker color
, figsize=(15, 5)) # wide plot to show all tick labels
plt.xticks(ticks, ticks)
plt.show()
The resulting graph looks as below |
H: LSTM Time-series classification - derived feature
I have a time-series dataset and I want to derive a new feature based on a date column which I believe might improve my predictive model. The feature is if it's weekday or weekend.
I am not sure how to create this for modeling purposes. Do I represent it as a boolean function or a numerical one (1/0) or just add 2 new features (weekday, weekend) and represent them as 1 and nan?
AI: Add two new features, dummy variables, that represent the presence or absence of weekend and weekday. You can create the feature (weekend/weekday) and use one-hot encoding to create the two new columns. |
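A minimal pandas sketch, assuming the date column is called date:
import pandas as pd
df['date'] = pd.to_datetime(df['date'])
df['is_weekend'] = (df['date'].dt.dayofweek >= 5).astype(int)  # Saturday=5, Sunday=6
df['is_weekday'] = 1 - df['is_weekend']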
H: Conditionally replace values in column with value from another column
I have a 5 column dataframe where one column has some NaN's. My goal is to replace these NaN's with the corresponding value in another column.
So if my DF looks like this:
YearBuilt LastModified
1993 2004
1994 NaN
1995 2004
I would want to replace the NaN with the corresponding YearBuilt, which is 1994 in this case.
AI: You can use .fillna() to handle this:
df.LastModified.fillna(df.YearBuilt,inplace = True) |
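As a side note, an equivalent form without inplace, which avoids chained-assignment pitfalls in newer pandas versions:
df['LastModified'] = df['LastModified'].fillna(df['YearBuilt'])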
H: Creating bar plot with frequency distribution based
I have a dataframe with this kind of structure:
ID | Size | Value
1 | 1 | 10
2 | 2 | 20
3 | 1 | 10
4 | 1 | 10
5 | 1 | 15
6 | 1 | 12
7 | 1 | 20
I want to create bar plot of the distribuition of column "Values" when the "Size" column is equal to 1.
With the current example:
the x-axis should have: 10, 12, 15 and 20
the y-axis should have 1 (for the 12, 15 and 20) and 3 for the 10
I don't have a lot of code. Basically I just created a new dataframe with only the rows where Size=1 and then performed value_counts():
data1 = data.loc[(data['Size'] == 1)]
count = data1['Value'].value_counts()
The variable count now has a Series object with a Value and a counter.
How can I split the value of count in order to have two lists (values and counters) to send to the plotting code?
AI: What you are looking for is a histogram. It plots the distribution of a given series of values. So, you can just extract the values that you want to plot from your dataframe and then use matplotlib's histogram function. Here is the code:
import matplotlib.pyplot as plt
to_plot = data.loc[data['Size']==1]['Value']
plt.hist(to_plot)
plt.show()
This is the output I get, note that I haven't done any formatting, this is just the raw output.
The above option is more flexible. But in your case you can also use the following:
import matplotlib.pyplot as plt
to_plot = data.loc[data['Size']==1]['Value']
to_plot.value_counts().plot.bar() |
H: How to present Market Basket Analysis Results?
I am working on a Retail Company's in-store transactions for 3 months. I have performed the Market Basket Analysis on the same and I'm getting hundreds if not thousands of association rules. I am using the apriori algorithm from mlxtend.frequent_patterns import apriori in Python and I have used different support values in apriori(basket_sets, min_support=0.01, use_colnames=True), all the way from 0.01 to 0.4.
If I use a support value that is too high (for some stores no rules are found at all), there are very few association rules, and if I use a support value that is too low, it's very difficult to make sense of the association rules generated, since there are too many of them.
Since I've chosen to go with the low support value, I wanted to understand ways in which I could present the rules which would make business sense to the data owners. If there is any literature (I've tried googling at least 10 different queries but all of them return "How to do Market Basket Analysis and its applications!") on how to present the Association results, that would be really good.
Thanks
AI: The best way to understand multiple association rules is to visualize them. This makes it even easier to present. This paper covers multiple approaches for visualizing association rules. Go through its references. They also suggest their tool, but it is in R. If you want resources for python try searching for "association rules visualization python" and you'll find some resources. |
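Before visualizing, it can also help to prune and rank the rules. A minimal sketch with mlxtend, where basket_sets is your encoded transaction frame and the thresholds shown are placeholders you would tune:
from mlxtend.frequent_patterns import apriori, association_rules
frequent_itemsets = apriori(basket_sets, min_support=0.01, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="lift", min_threshold=1.2)
# keep only reasonably confident rules and show the most "interesting" ones first
strong = rules[rules["confidence"] >= 0.5].sort_values("lift", ascending=False)
print(strong.head(20))
# quick visual overview: support vs confidence, coloured by lift
strong.plot.scatter(x="support", y="confidence", c="lift", colormap="viridis")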
H: Random forest minimum number of observations
I have around 5000-6000 observations of nearly 8-10 variables (of which 2 are discrete, categorical) and a single numerical target parameter. As per initial evaluation, random forest regression might be a good algorithm for the current case.
Is the current observations/variables count adequate for the proposed method? If other regression algorithms are recommended as things are described currently, kindly let me know.
AI: What matters is not the number of observations but their quality. If you have a look at the toy datasets in sklearn, they are way smaller than that.
Random forest is a good algorithm when there is little data, since it is a bagging of decision trees with bootstrap sampling. Each decision tree is fed a sample of the data drawn with replacement, so even if the data set is small there is a good chance of building a reasonable model.
At a high level, yes, it seems a good way to go, but without knowing more about the data it is hard to tell.
I would suggest to give it a try with a Generalized Linear Model, a support vector machine and a gradient boosting. Since your data is small you will not need much computation time for it. |
H: Dummy Variable Trap
In my course about machine learning I'm studying multiple linear regression and we talked about dummy variable trap. I have a data set which contains country, height, weight, gender of every person where country is encoded with letters such as us, uk, fr, ge for united states, united kingdom, france and germany respectively and genders are encoded with M F. When I convert these categorical variables into numeric ones (with one hot encoder) I get confused about the following.
When we encode M and F with two different columns and we don't drop one, we fall into the dummy variable trap, since a "1" in the male column obviously means a "0" in the female column. We therefore only have one degree of freedom, so the other column is redundant; no problem here.
However, with the country column we can, for example, say a person is French if all the other country columns are "0", therefore I think that 3 columns are enough to specify 4 countries, and if we have 4 columns we would fall into the dummy variable trap. But all the worked examples state otherwise.
Why is it so? Why can 2 categories be represented with 1 column when the two cannot be true at the same time, but 4 categories cannot be represented with 3 columns when no two columns can be true at the same time? Thanks in advance.
AI: It can be.
But with country, people don't assume the list to be exhaustive, i.e. a new country might be added to the list. With gender, this is much less likely.
But please don't give too much attention to the dummy variable trap. It just creates an extra correlated variable. Correlation doesn't impact model performance; it just means some extra computation.
Anyway, if the variable count becomes too high, you will need dimensionality reduction techniques.
The only place I have seen importance given to the dummy variable trap is the Machine Learning A-Z Udemy course. |
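A minimal pandas sketch of both encodings, using the countries and genders from the question:
import pandas as pd
df = pd.DataFrame({"country": ["us", "uk", "fr", "ge"], "gender": ["M", "F", "M", "F"]})
print(pd.get_dummies(df))                   # 4 country columns + 2 gender columns
print(pd.get_dummies(df, drop_first=True))  # 3 + 1 columns: one level per variable is dropped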
H: How to fix spelling mistakes in data?
I have an input data file which contains list of drug names.
I have more than 1000 unique drug names. However, the drug names has spelling mistakes and space character issues.
For ex: we have ISONIAZID 300MG TAB , ISONAZID300MG TAB, ISNIAZID 300MG
You can see how the above 3 terms are different in representation (due to spelling mistake) but actually indicate the same drug ISONIAZID 300MG TAB (which is the right spelling)
But the problem is that there are several other drugs with such spelling mistakes, and I am not sure how I can group all of them into one (i.e. rename them with the right spelling). For example, all 3 terms above should be renamed to ISONIAZID 300MG TAB (which is the right spelling).
I am posting it here to ask whether there is any medical dictionary or automated approach that can take my raw csv file as input and output proper drug names.
AI: There are several general approaches to this but almost all of them compare the values to some baseline and make a decision whether the individual value is close enough.
You can compare the similarity of strings using different methods e.g. I often use the Levenshtein distance which basically measures how many characters you would have to change to convert word a into word b. By grouping all words with a low enough Levenshtein distance you already identify all words that should be the same.
It is even easier if you have a dictionary of "correct values", in which case you would compare each value to all entries in the dictionary and assign it the value with the lowest Levenshtein distance (a small sketch follows below).
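A minimal sketch using only the standard library: difflib's get_close_matches uses a similarity ratio rather than the Levenshtein distance itself, but the idea is the same (the reference list and the 0.6 cutoff are placeholders):
import difflib
correct_names = ["ISONIAZID 300MG TAB", "PARACETAMOL 500MG TAB"]  # your reference dictionary
raw_names = ["ISONAZID300MG TAB", "ISNIAZID 300MG", "PARACETMOL 500MG TAB"]
cleaned = {}
for name in raw_names:
    match = difflib.get_close_matches(name, correct_names, n=1, cutoff=0.6)
    cleaned[name] = match[0] if match else name  # keep the original if nothing is close enough
print(cleaned)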
Using some basic text analysis you could also identify the most common spelling mistakes in your data and eliminate them based on replacement rules. |
H: Effecient way to decompose multiple time series in a data frame and compare the fit of additive and multiplicative models?
I have a data frame in R that contains time series data of 7 variables that were taken on several hundred different individuals. I want to know if it would be more appropriate to use an additive model or a multiplicative model for each variable.
To give an example, the data is structured something like this:
set.seed(123)
ID = factor(letters[seq(15)])
Time = c(1000,1200,1234,980,1300,1020,1180,1908,1303,
1045,1373,1111,1097,1167,1423)
df <- data.frame(ID = rep(ID, Time), Time = sequence(Time))
df[paste0('Var', c(1:7))] <- rnorm(sum(Time))
What is an effective way to decompose the data for each variable/ID combination, fit each with an additive model and a multiplicative model, and compare the fits?
AI: One way to do this would be to fit the decompositions with the same numbers of degrees of freedom and see which fits the best. It is convenient to do this using the tsibble and feasts packages as they allow for modelling many time series at once.
I've modified your example data so that it is possible to do a multiplicative decomposition -- having negative values in the data makes multiplicative decompositions problematic.
The multiplicative decomposition uses STL on the log data, and then exponentiates the trend and seasonal terms to put them back on the original scale.
Your example has no obvious seasonality so I have arbitrarily set the seasonal period to 12 for illustration purposes. Change it to whatever it should be.
I have set the trend window to be 99 and the seasonal component to be periodic. Again, change these to suit your actual data, but use the same settings for both fits so the comparison is fair.
set.seed(123)
ID = factor(letters[seq(15)])
Time = c(1000,1200,1234,980,1300,1020,1180,1908,1303,
1045,1373,1111,1097,1167,1423)
df <- data.frame(ID = rep(ID, Time), Time = sequence(Time))
df[paste0('Var', c(1:7))] <- abs(rnorm(sum(Time)))
library(tidyverse)
library(tsibble)
library(feasts)
# Create tsibble in long form
df <- df %>%
pivot_longer(starts_with("Var"), names_to="Series", values_to="value") %>%
as_tsibble(index=Time, key=c(ID,Series))
# Additive decompositions
additive <- df %>%
model(add = STL(value ~ trend(window=99) + season("periodic", period=12))) %>%
components()
# Multiplicative decompositions
multiplicative <- df %>%
model(mult = STL(log(value) ~ trend(window=99) + season("periodic", period=12))) %>%
components() %>%
mutate(remainder = df$value - exp(trend+season_12))
# Find variance of remainders
rva <- additive %>%
as_tibble() %>%
group_by(ID, Series) %>%
summarise(rv = var(remainder, na.rm=TRUE)) %>%
ungroup()
rvm <- multiplicative %>%
as_tibble() %>%
group_by(ID, Series) %>%
summarise(rv = var(remainder, na.rm=TRUE)) %>%
ungroup()
# Which remainder has lowest variance?
left_join(rva, rvm, by = c("ID","Series")) %>%
mutate(best = if_else(rv.x < rv.y, "additive", "multiplicative"))
#> # A tibble: 105 x 5
#> ID Series rv.x rv.y best
#> <fct> <chr> <dbl> <dbl> <chr>
#> 1 a Var1 0.357 0.361 additive
#> 2 a Var2 0.357 0.361 additive
#> 3 a Var3 0.357 0.361 additive
#> 4 a Var4 0.357 0.361 additive
#> 5 a Var5 0.357 0.361 additive
#> 6 a Var6 0.357 0.361 additive
#> 7 a Var7 0.357 0.361 additive
#> 8 b Var1 0.338 0.341 additive
#> 9 b Var2 0.338 0.341 additive
#> 10 b Var3 0.338 0.341 additive
#> # … with 95 more rows
Created on 2020-04-22 by the reprex package (v0.3.0) |
H: How to create a Time Series Training Dataset with variable sequence length
I have time series data with variable sequence lengths.
So something like:
date value label
2020-01-01 2 0 # first input time series
2020-01-02 1 0 # first input time series
2020-01-03 1 0 # first input time series
2020-01-01 3 1 # second input time series
2020-01-03 1 1 # second input time series
how is it possible to create a training dataset (numpy arrays) of shape [samples, time_steps, n_features] when time_steps is not consistent?
Additional Info: The model that is going to be trained is an LSTM which is capable to handle variable input lengths.
AI: I solved it the following way:
zero-padding all time series that are shorter than the longest one (see the padding sketch below)
adding a Masking() layer with mask_value=0., which makes the network ignore the padded (all-zero) timesteps
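A minimal sketch of the padding step, using the two value series from the example above (this assumes 0 never occurs as a real value, otherwise pick a different padding value):
import numpy as np
from keras.preprocessing.sequence import pad_sequences
series = [[2, 1, 1], [3, 1]]                                     # two series of different length
padded = pad_sequences(series, padding='post', dtype='float32')  # pads with 0 at the end
X = np.expand_dims(padded, axis=-1)                              # shape: (samples, time_steps, n_features=1)
print(X.shape)                                                   # (2, 3, 1)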
The part of the model with the Masking layer looks like the following:
model = keras.Sequential()
model.add(layers.Masking(mask_value=0., input_shape=(None, 1)))
model.add(layers.LSTM(100))
Additional Info: Implementing the Masking layer for a model with two separate inputs was kind of inconvenient with the Keras Functional API, therefore I implemented that part of the model with the Keras Sequential API and connected it to the rest of the model, which is implemented with the Functional API. |
H: XGBoost and Random Forest: ntrees vs. number of boosting rounds vs. n_estimators
So I understand the main difference between Random Forests and GB methods. Random Forests grow parallel trees and GB methods grow one tree per iteration. However, I am confused by the vocabulary used in scikit's RF regressor and xgboost's regressor, specifically the part about tuning the number of trees/iterations/boosting rounds. From my understanding, each of those terms refers to the same thing: they determine how many decision trees the algorithm builds. However, should I be referring to them as ntrees or n_estimators? Or should I simply use early stopping rounds for my xgboost and tune the number of trees only for my rf?
My Random Forest:
rf = RandomForestRegressor(random_state = 13)
param_grid = dict(model__n_estimators = [250,500,750,1000,1500,2000],
model__max_depth = [5,7,10,12,15,20,25],
model__min_samples_split= [2,5,10],
model__min_samples_leaf= [1,3,5]
)
gs = GridSearchCV(rf
,param_grid = param_grid
,scoring = 'neg_mean_squared_error'
,n_jobs = -1
,cv = 5
,refit = 'neg_mean_squared_error'
)
My xgboost
model = XGBRegressor(random_state = 13)
param_grid = dict(model__ntrees = [500,750,1000,1500,2000],
model__max_depth = [1,3,5,7,10],
model__learning_rate= [0.01,0.025,0.05,0.1,0.15,0.2],
model__min_child_weight= [1,3,5,7,10],
model__colsample_bytree=[0.80,1]
)
gs = GridSearchCV(model
,param_grid = param_grid
,scoring = 'neg_mean_squared_error'
,n_jobs = -1
,cv = 5
,refit = 'neg_mean_squared_error'
)
AI: As I understand it, iterations is equivalent to boosting rounds.
However, number of trees is not necessarily equivalent to the above, as xgboost has a parameter called num_parallel_tree which allows the user to create multiple trees per iteration (i.e. think of it as boosted random forest).
As an example, if the user sets num_parallel_tree = 3 for 500 iterations, then the number of trees = 1500 (= 3 * 500) rather than 500. (In the xgboost scikit-learn wrapper used in your code, the number of boosting rounds is set via the n_estimators parameter, not ntrees.) |
H: Is there a machine learning model suited well for longitudinal data?
I have a fairly large (>100K rows) dataset with multiple (daily) measurements per individual, for a few thousand individuals. The number of measurements per individual vary, and there are many null values (that is, one row may have missing values for certain variables/measurements, but not for all). I also have a daily outcome (extrapolated, but let's assume it's fair to do so, so there is a binary outcome for each day when measurements are taken).
My goal is to model the outcome, such that I can predict daily outcomes for new individuals.
My background is in research, and I am familiar with some statistics and ML, and overall still fairly new to data science. I am wondering if there are any particular known ML algorithms that can be used to model such data. I am cautious about using logistic regression from something like python's scikit learn because the observations are not independent (they are highly correlated on an individual level). From my knowledge, these kind of data are well-suited for a mixed effects logistic regression or longitudinal logistic regression. However, I haven't been able to find any widely used ML algorithms for it, and I would like to pursue an ML approach rather than fitting a statistical model using something like lme4 package in R.
Could someone recommend an available ML algorithm to model such data?
PS: I did some research and found a few research articles on the topic but nothing widely used or clearly implemented. The structure of the data I am working with strikes me as very common, so I thought I'd ask.
AI: Assuming we are not talking about a time series and also assuming unseen data you want to make a prediction on could include individuals not currently present in your data set, your best bet is to restructure your data first.
What you want to do is predict the daily outcome Y from predictors X1...Xn, which I understand to be the measurements taken. A normal approach here would be to fit a RandomForest or boosting model which, yes, would be based on a logistic regressor.
However you point out that simply assuming each case is independent is incorrect because outcomes are highly dependent on the individual measured. If this is the case then we need to add the attributes describing the individual as additional predictors.
So this:
id | day | measurement1 | measurement2 | ... | outcome
A  | Mon | 1            | 0            | 1   | 1
B  | Mon | 0            | 1            | 0   | 0
becomes this:
id | age | gender | day | measurement1 | measurement2 | ... | outcome
A  | 34  | male   | Mon | 1            | 0            | 1   | 1
B  | 28  | female | Mon | 0            | 1            | 0   | 0
By including the attributes of each individual we can use each daily measurement as a single case in training the model because we assume that the correlation between the intraindividual outcomes can be explained by the attributes (i.e. individuals with similar age, gender, other attributes that are domain appropriate should have the same outcome bias).
If you do not have any attributes about the individuals besides their measurements then you can also safely ignore those because your model will have to predict an outcome on unseen data without knowing anything about the individual. That the prediction could be improved because we know individuals bias the outcome does not matter because the data simply isn't there.
You have to understand that prediction tasks are different from other statistical work; the only thing we care about is the properly validated performance of the prediction model. If you can get a model that is good enough by ignoring individuals then you are a-okay, and if your model sucks you need more data.
If on the other hand you only want to predict outcomes for individuals ALREADY IN YOUR TRAINING SET the problem becomes even easier to solve. Simply add the individual identifier as a predictor variable.
To sum it up, unless you have a time series, you should be okay to use any ML classification model like RandomForest or boosting models even if they are based on normal logistical regressions. However you might have to restructure your data a bit. |
H: Dealing with large data: selecting a sample
I'm given a data set to create a model that would predict whether a certain supply chain would be able to deliver the goods without delay or not. I'm doing this in Python.
The data set has 93,000 data points and at least 100 features. What's worse, most of these features are categorical data and each feature has around 200 unique categories.
I want to develop the best model I can but I find the size of the data set is too large for this.
I tried different methods of feature reduction. But even for feature reduction methods, it takes infeasible amount of time in my laptop. So I thought the best thing I could do is to first select a sub sample of the data points and find the best model and best parameters and then train the whole data set with this model.
Is selecting a sample from the data set the best thing I could do in this situation?
If not, your kind suggestions are welcome.
If it is, what's the best method of selecting these samples?
AI: 93,000 cases is not a lot; this doesn't even fall within the realm of big data or anything where you would have to apply special techniques.
Also given a more restricted feature set there are almost no computational issues you should encounter unless you work on an actual brick or Atari. As an example benchmark I am able to fit a model for >4M cases with up to 20-30 features (some with ~40 levels) on my 16gb Laptop in <10mins.
So let's focus on the real issue, the complexity of the data i.e. the amount of available features/feature levels and most importantly efficient code.
Assuming you don't have the resources to simply throw some cloud processing power on this problem and call it a day we have to restrict the complexity.
Note that this does not mean "sampling the data" which would indicate using only a subset of cases, as said in the beginning this isn't your problem.
Instead let's workshop ways to cut down computational requirements:
1. Optimize your code
This sounds simple, but do your research and make sure that you are using the best packages for the job, adjust parameters correctly and cut fat from your code. This is hard work but it is necessary and worth it.
E.g. have you checked that all code is parallelized as much as you can and runs on all CPUs?
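For example, with scikit-learn many estimators and utilities accept an n_jobs argument; a minimal sketch on synthetic data standing in for your supply-chain table:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for your supply-chain data
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# n_jobs=-1 uses all available CPU cores for fitting and for cross-validation
model = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
print(cross_val_score(model, X, y, cv=5, n_jobs=-1).mean())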
2. Optimize your data structure
Factor levels normally aren't a problem computationally speaking, but even so >100 levels might be too hard to interpret anyway. Try to transform your data, e.g. by grouping certain factor levels together (see the sketch below). Also try to pivot parts of your data, e.g. from wide to long format.
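As one possible sketch of grouping rare factor levels with pandas (the column name and the cut-off of 3 levels are made up for illustration):

import pandas as pd

# Hypothetical categorical column with many unique levels
df = pd.DataFrame({"carrier": ["A", "B", "A", "C", "D", "A", "B", "E"]})

# Keep only the most frequent levels and lump everything else into "other"
top_levels = df["carrier"].value_counts().nlargest(3).index
df["carrier_grouped"] = df["carrier"].where(df["carrier"].isin(top_levels), "other")
print(df["carrier_grouped"].value_counts())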
3. Use your domain knowledge
You surely understand your business and this problem. Don't just throw all the data at it! Think for a minute and remove unnecessary or duplicate columns.
4. Work your way upwards
ML modeling has a simple yardstick: if the validated prediction quality is good, your model is good. If you manage to do that with two variables, nobody cares.
So start with a bare-bones data set that includes ALL cases (still keep split-sample validation in mind here) but only the 2-3 simplest, most relevant predictors (based on domain knowledge) as well as the dependent variable.
Fit a model and check its quality: is it good enough? Then you're done! If not, include the next round of variables...
5. Modularize your EDA and have patience
I cannot stress enough that your data set really isn't that big or complex, so optimize your code first. BUT if the basic EDA needed to find the best features still takes too long, do the following:
Structure and modularize your code, e.g. by doing a PCA on only chunks of the columns (again, use all cases but only a subset of variables).
Let it run one-by-one and just have patience if the code runs for an hour, sometimes that is how it goes.
Look at the results, compare and decide how to merge results.
6. Talk to your boss
Yes, I said I assume you do not have the resources to throw more computing power at it, but if this is an important business problem there should be resources for cloud computation. Fitting a model on AWS is dirt cheap and will only take seconds, so realistically we are talking about less than $100 here to train multiple models.
If you don't have a boss to pay for this stuff or he doesn't want to, try free resources. There are a lot of cloud based services that offer enough free GPU or CPU time that you should be able to fit a model. |
H: How to find from which row and column the value belong?
Suppose I created the below data frame
data = {'Height_1': [4.3,6.7,5.4,6.2],
'Height_2': [5.1, 6.9, 5.1, 5.2],
'Height_3': [4.9,6.2,6.5,6.4]}
df = pd.DataFrame(data)
Suppose someone comes and asks me
Find the row and column of height 6.9 ?
Find in how many rows and columns height 6.2 is present?
Please help me with what will be the code for this?
AI: You can use pandas .isin() function to compare all the values of your dataframe to a given value:
In [10]: df[df.isin([6.9])]
Out[10]:
Height_1 Height_2 Height_3
0 NaN NaN NaN
1 NaN 6.9 NaN
2 NaN NaN NaN
3 NaN NaN NaN
Now, if you want to get rows and column directly from it use .stack() on it. So, it will be like:
In [11]: df[df.isin([6.9])].stack()
Out[11]:
1 Height_2 6.9
dtype: float64
The output is a series. This will work in case of multiple matches too where the output will be a dataframe. |
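For the second part of the question (in how many rows and columns a value appears), the same boolean mask from .isin() can be aggregated; a short sketch using the df defined in the question:

mask = df.isin([6.2])
rows_with_value = mask.any(axis=1).sum()   # rows containing 6.2 at least once
cols_with_value = mask.any(axis=0).sum()   # columns containing 6.2 at least once
print(rows_with_value, cols_with_value)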
H: Estimating a metric of a mixture
I'm looking for a general approach to solve a problem that is best demonstrated by the following example.
You mix six different batches of orange juice and you know the acidity, the sugar content, the concentration of volatile particles in each batch of the juice, the growing location of the fruits, and other data. You mix those batches, sell the product, and measure how well it was accepted by the market. Tomorrow, you decide to mix four other batches, the next day you will mix two batches, etc: the number of components in your product isn't constant. The goal is to take the information about each component and to predict the outcome metric.
If we could compute the average value of each input parameter, I would start by computing the average value of each parameter in the mix, and trying to use it as the X data in an ML algorithm. However, some of the parameters cannot be added. In our example, such a parameter can be the plantation age, plantation location, cultivar of the fruit, etc
What are the approaches to solving this problem? Does it belong to a certain well-studied problem class?
AI: However, some of the parameters cannot be added. In our example, such a parameter can be the plantation age, plantation location, cultivar of the fruit, etc
Can you explain why these can't be added?
There exist many ways to encode categorical information, such as plantation location, such that an ML model can interpret it. In my use-cases, in supply-chain, I deal with thousands of unique location observations, wherein I use a geocoding API to return latitude/longitude pairs to feed into my models.
In your case, wherein you only have 6-factorial observations (that may be wrong), what I would do is to generally categorize your categorical features. So for location, instead of "Florida," I would say "South-East," and then lump all south-eastern locations together. Since you'll have low cardinality in feature size, binary or one-hot encoding may be useful for you.
H: reduction of sample from videos sample
Well, I posted the same question on the main stack before finding the right place, sorry.
A friend of mine is working with more than 100 videos as samples for his neural network. Each video lasts more than a couple of minutes at around 24 frames per second. The objective, using deep learning, is to detect movement through all the samples.
The problem for him is the quantity of data he is dealing with. The training part requires/consumes too much time. I'm no expert in data preparation, but I thought maybe he could turn all frames into a dataframe, remove mono-colour images (full black/white), turn them into grayscale instead of full RGB and compress them, but I'm not sure if that will be enough.
Do you think of better method to reduce the training sample?
AI: Reduce the size e.g. using cv2.resize()
Compress the image (it is not lossless) e.g. cv2.imencode()
Lower the frame rate
Use lower precision - images are uint8 when loaded, but the deep learning frameworks use float32 by default. You could try float16 or mixed precision.
Using JPEG compression has been shown to be fairly good in terms of the reduction in memory and minimal loss of performance. Have a look a this research.
You could also drop the frame rate, to say 10 FPS. The actual value could be chosen based on the expected velocity of the moving objects -> do you really require 24 FPS for the task?
Otherwise, the hardware you are using will determine which steps to take afterwards. Memory, number of operations, inference speed etc. will change how you optimise the process.
You mentioned "dataframe", so I will just point out that using Pandas DataFrames to hold raw image data, whilst looking easy, is generally very inefficient due to the number of data points involved (pixels), and the fact that Pandas DataFrames are essentially annotated NumPy arrays - the annotations take a lot of space. Better to load into pure NumPy arrays and use OpenCV for things such as making gray-scale (black and white) images from RGB, resizing them, normalising pixel values, and so on.
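A rough sketch of those reduction steps with OpenCV and NumPy (the random array stands in for one decoded video frame; resolution and quality values are only examples):

import cv2
import numpy as np

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in for one BGR frame

# 1. Resize to a smaller resolution
small = cv2.resize(frame, (160, 120))

# 2. Convert to grayscale to drop the colour channels
gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)

# 3. JPEG-compress in memory (lossy) and decode only when needed
ok, encoded = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, 80])
decoded = cv2.imdecode(encoded, cv2.IMREAD_GRAYSCALE)

# 4. Use lower precision for the network input
x = decoded.astype(np.float16) / 255.0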
H: Why do linear regression feature coefficients become super large?
Introduction
I've implemented linear regression using sklearn and after all calculations I've got results like this:
Feature: 0, coef: -9985335237.46533
Feature: 1, coef: 417387013140.39661
Feature: 2, coef: -2.85809
Feature: 3, coef: 1.50522
Feature: 4, coef: -1.07076
Data
My data is based on user visits in gym. All data normalized 0 <= x <= 1. Data set has 10k observations.
X:
feature_0: gym's rating
feature_1: gym's review(rating) count
feature_2: gym's one visit price
feature_3: gym's unlimited subscription price
feature_4: distance to gym from user's home | calculated min(x / 30, 1.0), because mean is 15.17
Y: user's visit count to that gym
Data sample
Code
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from matplotlib import pyplot
from numpy import loadtxt
# define dataset
x = loadtxt('formatted_data_x.txt')
y = loadtxt('formatted_data_y.txt')
# define the model
model = LinearRegression()
# fit the model
model.fit(x, y)
# get importance
importance = model.coef_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, coef: %.5f' % (i,v))
Question
Why do linear regression feature coefficients become super large? Is that okay?
Feature: 0, coef: -9985335237.46533
Feature: 1, coef: 417387013140.39661
...
P.S: I'm new to this "part" of StackExchange and ML\DS at all, so please if I do something wrong or I have to provide more information, let me know! Any help would be appreciated. Thanks in advance!
AI: Large coefficients in linear regression are not necessarily a problem. They can be large because some variable was rescaled. You mentioned that you do some rescaling, but provide no details. Therefore it is not possible to tell what exactly is going on.
Here is a (general) example that explains how coefficients can get "large" (in R). Assume we want to model "visits" ($y$) contingent on "rating" ($x$):
# Data
df = data.frame(c(1,3,5,3,7,5,8,9,7,10),c(34,54,31,45,65,78,56,87,69,134))
colnames(df)<-c("rating","visits")
# Regression 1
reg1 = lm(visits~rating,data=df)
summary(reg1)
The regression results are:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 19.452 15.273 1.274 0.2385
rating 7.905 2.379 3.322 0.0105 *
This tells us, that visits increase by about 7.9 when rating increases by one unit. This is basically a linear function with intercept 19.45 and slope 7.9. Since our model is
$$ y = \beta_0 + \beta_1 x + u ,$$
the corresponding (estimated) linear function would look like:
$$f(x) = 19.45 + 7.9 x .$$
We can predict and plot our model. The results are just as expected, a positive linear function.
# Predict and plot
pred1 = predict(reg1,newdata=df)
plot(df$rating,df$visits,xlab="Rating",ylab="Visits")
lines(df$rating,pred1)
Now comes the interesting part: I do a linear transformation on $x$. Namely, I divide $x$ by some "large" number and I run the same regression as before:
# Transform x
large_integer = 10000000
df$rating2 = df$rating/large_integer
df
rating visits rating2
1 1 34 1e-07
2 3 54 3e-07
3 5 31 5e-07
4 3 45 3e-07
5 7 65 7e-07
6 5 78 5e-07
7 8 56 8e-07
8 9 87 9e-07
9 7 69 7e-07
10 10 134 1e-06
# Regression 2 (with transformed x)
reg2 = lm(visits~rating2,data=df)
summary(reg2)
The results are:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.945e+01 1.527e+01 1.274 0.2385
rating2 7.905e+07 2.379e+07 3.322 0.0105 *
As you see, the coefficient for rating is rather large now. However, when I predict and plot, I get basically the same results as before. The only thing that has changed is the "scale" of $x$ (the way $x$ is expressed).
Let's compare the coefficient for rating in both regressions.
In the first case it was:
# Relevant coefficient "rating" from reg1 (the "small" one)
reg1$coefficients[2]
rating
7.904762
In the second case it was:
# Relevant coefficient "rating2" from reg2 (the "large" one)
reg2$coefficients[2]
rating2
79047619
However, when I divide the coefficient rating2 by the same "large" number as I did to "rescale" the data, I get:
# "Rescale" large coefficient
reg2$coefficients[2]/large_integer
rating2
7.904762
As you can see, the "rescaled" coefficient rating2 is exactly the same as the original coefficient for rating.
What can you do to check your regression:
Run the regression without any rescaling and see if the results make sense
Make a prediction from the regression
Rescale your data (i.e. "standardise"), which should help produce better predictions because the data are less "wonky" in this case. However, the coefficients then have no natural interpretation any more (see the sketch after this list)
Compare standardised data to non-standardised to see how your data changed. Based on the discussion above, you should get a good idea if very small or large coefficients can make sense after standardisation
Make a prediction, compare to the prediction from above |
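For completeness, here is a short Python analogue of the standardisation check, using scikit-learn on synthetic data (not the asker's gym data; the 1e7 factor just mimics a feature that was rescaled at the source):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=200, n_features=3, noise=5.0, random_state=0)

# Shrink one feature by a large factor, as if it had been rescaled at the source
X_small_unit = X.copy()
X_small_unit[:, 0] /= 1e7

raw = LinearRegression().fit(X_small_unit, y)
print("raw coefficients:", raw.coef_)           # the first coefficient is huge

# Standardising puts all features back on a comparable scale
X_std = StandardScaler().fit_transform(X_small_unit)
std = LinearRegression().fit(X_std, y)
print("standardised coefficients:", std.coef_)  # comparable magnitudes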
H: Understanding SVM Kernels
Following Andrew Ng's machine learning course, he explains SVM kernels by manually selecting 3 landmarks and defining 3 Gaussian functions based on them. Then he says that we are actually defining 3 new features, which are $f_1$, $f_2$ and $f_3$.
$\hskip0.9in$
And by applying these 3 gaussian functions on every input data:
$$x=(x_1,x_2)\to \hat{x}=(f_1(x), f_2(x), f_3(x))$$
it seems that we are mapping our data from $\mathbb R^2$ space to a $\mathbb R^3$ space. Now our goal is to find a hyperplane in the 3 dimensional space, where our transformed data is linearly separable. Is my understanding correct? If not, how these 3 new features should be interpreted?
$\hskip1in$
In some blog posts, I have read that by using a Gaussian kernel, we are mapping our data to an infinite-dimensional space (where the Gaussian kernel computes the dot product of the transformed input data), which contradicts my understanding above.
AI: Yep, this is the correct interpretation. The kernels make a difficult classification problem into a much simpler one by making the data linearly separable by transforming it into a higher dimension. I think this image does a good job of illustrating that. |
H: Which ML classifier is appropriate for me if all of my features are categorical?
My dataset contains four features. All of the features are categorical. There are 150 categories in the values of the 1st and 2nd features, and 8 categories in the values of the 3rd and 4th features. I replaced the categories with numeric values and applied Random Forest. However, performance is still not up to the mark.
What other Machine Learning Classification algorithms I can try?
AI: What does "performance is not up to the mark" mean?
To optimize performance you first need to understand it.
A typical workflow is this:
1. Define a baseline performance
E.g. predict values with a constant (average, mode) or pick randomly from a distribution and measure the accuracy of that
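With scikit-learn this takes a few lines; a sketch with synthetic data standing in for your four encoded features:

from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

# "Most frequent class" baseline: any real model must beat this
baseline = DummyClassifier(strategy="most_frequent")
print(cross_val_score(baseline, X, y, cv=5).mean())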
2. Compare your 1st draft model (in your case your RandomForest) to that baseline
If your 1st draft model isn't better than the baseline, especially a random baseline, you have some errors in your code, data, etc. Try to find and eliminate those first.
If it is better but not by much, you have to optimize the model (see next step). If it is better by a lot, you are either a) done or b) not satisfied with the absolute performance, in which case you have to optimize.
3. Optimize your model parameters
Now you start grid searching your model parameters to get every last bit of performance out of it to make sure the problem is with the model and not the parameters.
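A minimal grid-search sketch with scikit-learn (the synthetic data and parameter values are only placeholders):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)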
4. Beauty contest
If your performance is still subpar now you can try other algorithms but do not expect wonders from this step. If you already tried RandomForest maybe go for:
Boosted models like XGBoost
Naive Bayes models (e.g. Gaussian Naive Bayes)
...
Fit them on the same data, do parameter optimization and compare results.
5. Back to the data
Most likely just picking another model did not help if the original performance was too far away from acceptable. Then you have to go back to the data, collect more, feature engineer, etc. |
H: How do I iterate over my images in dataset?
I am building an autoencoder with help from this site. There I was trying to build an autoencoder for my own custom data. My images are stored in a folder IMG and have names like 0.jpg, 1.jpg, 2.jpg.....
I tried to develop an iterator to iterate over all my images but the problem arises that when I convert all of my 124 images in a single training_data array the model responds that it expected a single array yet 124 arrays were given to it. Can anyone tell me how I should write the iterator? I tried using the keras flow_from_directory function from the "Machine Learning mastery" website but it shows 0 images from 0 classes.
Here is my code:-->
import tensorflow as tf
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
from keras.callbacks import TensorBoard
import numpy as np
from PIL import Image
i = int(0)
images_dir = "/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG/{}.jpg".format(i)
training_data = []
while i < 125:
print("working on ", i, 'file')
image = Image.open(images_dir)
pic_array = np.asarray(images_dir)
training_data.append([pic_array])
i += 1
input_img = Input(shape=(600, 400, 3)) # adapt this if using `channels_first` image data format
x = Conv2D(48, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(24, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(24, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
x = Conv2D(24, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(24, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(48, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.fit(training_data,
epochs=50,
batch_size=128,
shuffle=True,
callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
Also I want the images to retain the 'color' feature so I am using the input shape as (600,400,3) because RGB is on 3 channels. is it correct?
I would have simply used my iterator but it is my understanding that I need a different function that communicates to the model and gives it images one-by-one while I am just loading them all in a single variable. So can anyone help me with this?
Here is the full TraceBack:-
Traceback (most recent call last):
File "autoencoder.py", line 46, in <module>
callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
File "/home/awesome_ruler/.local/lib/python3.7/site-packages/keras/engine/training.py", line 1154, in fit
batch_size=batch_size)
File "/home/awesome_ruler/.local/lib/python3.7/site-packages/keras/engine/training.py", line 579, in _standardize_user_data
exception_prefix='input')
File "/home/awesome_ruler/.local/lib/python3.7/site-packages/keras/engine/training_utils.py", line 109, in standardize_input_data
str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 125 arrays: [array([['/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG/0.jpg']],
dtype='<U76'), array([['/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG/0.jpg']]...
AI: Your images_dir actually seems to be the path to a single image... but nevertheless,
I would simply create a single NumPy array of shape (num_images, height, width, channels) by doing the following:
import os
import numpy as np
# Root directory holding all images (I recommend removing the space in "Atom projects")
images_dir = "/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG/"
# Number of images you want to load
N = 125
# Get all paths and take the first N
n_image_paths = sorted([f.path for f in os.scandir(images_dir)])[:N]
# Load the images using on of the variants of loading images
images = np.array([Image.open(f) for f in n_image_paths])
# Be careful with variants: the order of channels is different for different methods!
# images = np.array([plt.imread(f) for f in n_image_paths]) # Matplotlib
# images = np.array([cv2.imread(f) for f in n_image_paths]) # OpenCV
Now images can be passed directly to your model:
autoencoder.fit(x=images,
epochs=50, ...) |
H: Does small batch size improve the model?
I'm training an LSTM with Keras.
I've noticed that the smaller the batch size, the more the loss decreases per epoch: this makes me think that the network handles fewer items at a time better.
Is this normal behavior in general?
AI: In general, a smaller or larger batch size doesn't guarantee better convergence. Batch size is more or less treated as a hyperparameter to tune, keeping in mind the memory constraints you have.
There is a trade-off between bigger and smaller batch sizes, each with its own disadvantages, making it a hyperparameter to tune in some sense.
Theory says that the bigger the batch size, the less noise there is in the gradients and so the better the gradient estimate. This allows the model to take a better step towards a minimum. However, the challenge is that a bigger batch size needs more memory and each step is time consuming.
Even if we could somehow avoid the time and space constraints, a bigger batch size still wouldn't give a better solution in practice compared to a smaller batch size. This is because the surface of the neural network's objective is generally non-convex, which means there might be local optima. Just having an accurate gradient estimate doesn't guarantee reaching the global optimum (which we seek). It could lead us to a local optimum accurately! Keeping the batch size small makes the gradient estimate noisy, which might allow us to bypass a local optimum during convergence. But a very small batch size would be too noisy for the model to converge anywhere.
So, the optimum batch size depends on the network you are training, data you are training on and the objective function you are trying to optimize. |
H: Displaying network error as a single value
I've been writing a neural network from scratch. I've completed the feedforward, backpropagation, and mini-batch gradient descent methods, so I can train the network. Other neural networks I've worked with usually display the error/loss after each batch as a single decimal value, and I'd like to implement this functionality but I'm not sure how.
I understand squared error is given by $(y - \hat{y})^2$, and that for an output layer with $m$ neurons, you should have an error vector of size $m$. However, how is the error vector displayed as one value?
AI: You can simply sum the absolute values of the m errors to get one overall error value for the neural network. You can also compute the mean squared error. Whichever you choose will not change much: in both cases, the smaller the error, the better your neural network. By convention, we usually prefer the mean squared error.
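A small NumPy sketch of collapsing the per-neuron error vector into one number (the target and output values are made up):

import numpy as np

# Hypothetical targets and network outputs for one batch (m = 3 output neurons)
y = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
y_hat = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1]])

errors = y - y_hat            # per-neuron, per-example errors
mse = np.mean(errors ** 2)    # one scalar: mean squared error
sae = np.sum(np.abs(errors))  # alternative scalar: sum of absolute errors
print(mse, sae)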
H: PCA - what do I do with its results?
I have a data set with more than 20 features, and I applied PCA:
M.fit_transform(all_data)
variance = M.explained_variance_ratio_
var = np.cumsum(np.round(M.explained_variance_ratio_, decimals=3)*100)
plt.ylabel('% Variance Explained')
plt.xlabel('# of Features')
plt.title('PCA Analysis')
plt.ylim(30,102.5)
plt.plot(var, marker="s")
plt.show()
Printing the var variable, I get
array([ 89., 100., 100., 100., 100., 100., 100., 100., 100., 100.])
I understand this tells us that the variance is explained by 2 features.
So I calculated it again, now the 2 components:
from sklearn.decomposition import PCA
M = PCA(n_components = 2)
X = M.fit_transform(all_data)
plt.scatter(X[:,0],X[:,1])
And this gives a "random looking plot". I understand that the data was changed during the PCA process.
What can I do with this information? How will this help me understand the data?
Is it useful per se? Is it useful as a preparation method for other methods? Which ones can I try?
AI: What can I do with this information?
You can do a lot of things with this data. You can visualize it, you can use the vectors for prediction or regression, whatever the task at hand. However, there are a few restrictions of PCA that you need to keep in mind. For example, it is very memory intensive, so you need to have a "lot" of RAM to use PCA on certain datasets.
How will this help me understand the data?
You can visualize the data like this (image is taken from http://www.nlpca.org/pca_principal_component_analysis.html):
With reference to the above image, you can see that the data-points can be clearly separated into different clusters. Using this, you can apply K-Means and get different cluster centres. Using these cluster centres, you can further investigate and find additional insights.
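For instance, a small sketch combining PCA with K-Means on a toy dataset (the Iris data stands in for all_data; 3 clusters is just an example):

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data                              # stand-in for all_data

X_2d = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=3, random_state=0).fit_predict(X_2d)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()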
Is it useful per se?
PCA is a dimensionality reduction technique which is very memory intensive.
If you have the required memory, you can easily reduce the number of features by 50-80% while still retaining a good amount of information. For example, we can reduce 100 features to 20-30 features that retain most of the information.
While performing PCA, it's important to check whether the matrix computation can be done with the RAM that you have; otherwise, you can check out incremental (iterative) PCA.
Is it useful as a preparation method for other methods?
It is very useful as a preparation method for clustering and visualization.
Please refer this link : https://qiita.com/bmj0114/items/db9145a707cb6ed13201
Which ones can I try?
You can try the example given in the link above. |
H: Find the longest palindrome substring of a piece of DNA
I have to make a function that prints the longest palindrome substring of a piece of DNA. I already wrote a function that checks whether a piece of DNA is a palindrome itself. See the function below.
def make_complement_strand(DNA):
    complement = []
    rules_for_complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    for letter in DNA:
        complement.append(rules_for_complement[letter])
    return complement

def is_this_a_palindrome(DNA):
    # Compare as lists so the string input matches the list returned above
    if list(DNA) != make_complement_strand(DNA)[::-1]:
        print("false")
        return False
    else:
        print("true")
        return True

is_this_a_palindrome("GGGCCC")
But now: how to make a function printing the longest palindrome substring of a DNA string?
The meaning of palindrome in the context of genetics is slightly different from the definition used for words and sentences. Since a double helix is formed by two paired strands of nucleotides that run in opposite directions in the 5’- to-3’ sense, and the nucleotides always pair in the same way (Adenine (A) with Thymine (T) for DNA, with Uracil (U) for RNA; Cytosine (C) with Guanine (G)), a (single-stranded) nucleotide sequence is said to be a palindrome if it is equal to its reverse complement. For example, the DNA sequence ACCTAGGT is palindromic because its nucleotide-by-nucleotide complement is TGGATCCA, and reversing the order of the nucleotides in the complement gives the original sequence.
AI: Longest palindromic substring is a computer science problem.
One common solution is Manacher's algorithm. |
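Since the biological definition here is "equal to its reverse complement" rather than an ordinary character palindrome, a simple brute-force sketch that reuses the complement rules from the question can serve as a baseline (roughly O(n^3), fine for short sequences):

def reverse_complement(dna):
    rules = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(rules[letter] for letter in reversed(dna))

def longest_palindromic_substring(dna):
    best = ""
    n = len(dna)
    # Check every substring and keep the longest one equal to its reverse complement
    for i in range(n):
        for j in range(i + 2, n + 1):
            sub = dna[i:j]
            if len(sub) > len(best) and sub == reverse_complement(sub):
                best = sub
    return best

print(longest_palindromic_substring("ACCTAGGTTT"))  # ACCTAGGT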
H: Reason for capping Learning Rate (alpha) up to 1 for Gradient Descent
I am learning to implement Gradient Descent algorithm in Python and came across the problem of selecting the right learning rate.
I have learned that learning rates are usually selected up to 1 (Andrew Ng's Machine Learning course). But for curiosity reasons, I have tried alpha = 1.1 and alpha = 1.2.
I can see in the case of alpha = 1.2, we reach the lower cost faster than the other learning rates (simply because the curve touches the bottom first). Is it safe to say that alpha = 1.2 is the best rate?
I plugged in the theta values from alpha = 1.2 to predict the price of an item; my implemented function gave the same answer as sklearn's LinearRegression() in fewer iterations than it did with alpha = 1.0. Using lower alpha rates would increase the number of iterations.
So, why is the learning rate capped at 1? Is it mandatory or suggested?
Should I forget about selecting learning rates and let functions like LinearRegression() take care of it automatically in the future?
I am new to machine learning and I want to understand the reasoning behind the algorithms rather than calling the functions blindly and playing around with parameters using high-level libraries.
Feel free to correct me if I have understood the concepts wrong.
AI: Setting a hard cap on the learning rate, for example at alpha = 1, is certainly not mandatory. It is also not necessarily advisable to set such a cap, as the merits of using different values for the learning rate are highly dependent on the exact function upon which you are performing gradient descent, what you hope to achieve in doing so, and what measures you will use to measure the relative success of one value choice over another.
I think the information you provided demonstrates this concept well. For example, if all you care about is moving towards some local minimum of your cost function, ultimately finding parameters for your model that achieve a cost less than say .01, and all else being equal accomplishing these tasks in the least number of iterations possible, we can see that among the values you tried alpha = 1.2 is indeed the best value (among the runs you showed us, it reached the cost of .01 in the least number of iterations). However, many people care about other properties of their gradient descent algorithms. For example, one may prefer a learning rate which is more likely to arrive at whichever (if any) local minima is nearest to the initialized parameters; lower learning rates seem better suited for this goal, since a high learning rate has a higher potential of 'overshooting' one minimum and landing in the basin of another. Or one may prefer a learning rate which produces a very smooth looking cost over time graph; lower learning rates seem better suited for this goal too (for an anecdotal example, your alpha = .03 learning curve looks smoothest).
There are many resources and methods available for choosing "ideal" learning rates and schedules out there, and I think it is worthwhile to read up on them to get a flavor for what people typically do. Most suggestions are heuristic, and not guaranteed to be meaningful in any particular example. Setting a cap of alpha = 1 is one such heuristic, and is probably suggested because it has been useful for many people with a lot of experience. Since many people have devoted significant time to studying this question, I don't think it is necessarily a bad idea to postpone thinking too hard on the topic when one first uses gradient descent, and instead just use the defaults in things such as scikit-learn's implementations, or take suggestions such as never setting alpha larger than 1. Personally, though, I share your desire to not blindly use defaults when I have the time to think on alternatives, and think it would be informative (if potentially not useful) to spend time investigating exactly how learning rate choices affect the goals you have in your gradient descent implementation. |
H: Split data into linear regression
I am looking for a way that could help me create more precise models.
Let's say these are real estate prices for different areas. However, the data do not contain a clear division into these areas; I only suppose that this relationship exists.
At present I have one model (red line) and I would like to have n models, eg 3. Additional two green lines. And use them for points that are closer to this line.
How convenient to go about it?
What measure should be used to divide this data and apply linear regression so that the variance is as low as possible?
May I have some inspiration :)?
AI: You should check out pandas qcut.
Below I put the result and some code:
labels = ['weak', 'medium', 'strong']
df['label'] = pd.qcut(df['values_to_divide'], len(labels), labels=labels) |
H: Considering the output of a BLSTM in pytorch, what's the order of the elements?
I am currently using pytorch to implement a BLSTM-based neural network. I understand that the output of the BLSTM is two times the hidden size. However, I am currently unable to find out whether this is ordered as [forward_state_0, backward_state_n, forward_state_1, backward_state_n-1,..., forward_state_n, backward_state_0] or as [forward_state_0, forward_state_1,..., forward_state_n, backward_state_n, backward_state_n-1,...,backward_state_0] or something else. I'd like to feed a pairwise maximum of the outputs to the next layer, so the most important thing for me is which are the corresponding forward and backward states.
AI: Your answer is in the documentation of the code you linked in your comment:
For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.
So you just need yo separate the directions as specified above and then take the element-wise maximum with torch.max. |
H: Suitable sample data set to test machine learning algorithms
I'm new to Machine Learning and I just came across the sci-kit package. On this interesting page there are many toy data sets used to test different clustering algorithms. Each data set has a unique pattern and some algorithms perform better than others depending on the data sets.
I want to ask why these data sets are chosen as tests for the algorithms? What are the properties for them to be suitable for use in testing? Are there any other data sets with common attributes that are used for the same purpose? Do they have certain names that I can read more about?
Thank you.
AI: The toy examples or common datasets you are talking about are popular because they are simple to visualise and work with. Their simplicity helps the beginner to train simple models which don't require much compute. The simplicity in the structure of the dataset allows visualisation of the data in lower dimensions.
The reason for using them as test datasets is that they provide a quick sanity check to see whether the algorithm performs or not. The link you provided is specifically for clustering problems. So, datasets which can be easily visualised on the 2D plane are simple datasets for checking the performance of an algorithm via inspection. Had it been a complex dataset, like a dataset of human faces, it would be difficult to evaluate the performance of the model through visualisation and inspection.
Some examples for such datasets:
MNIST dataset - collection of handwritten digits used to train classification network to identify the class of digit during test time.
Cifar-10 : collection of RGB images of 10 classes of objects in real world (e.g cars and birds).
Cifar-100: upgrade of Cifar-10. Contains images from 100 classes |
H: Mini Batch Gradient Descent shuffling
My data set is of shape (60,784,1000) with mini batches for input and (60,10,1000) for labels, should I shuffle only the 60 mini batches or the training examples themselves?
AI: Normally, you would shuffle up all of the examples and then portion them off into batches of some chosen size. Then, you would do a parameter update based on the gradient of the loss function with respect to each batch. This whole process is one "epoch" of training. Typically, deep neural nets are then trained over many epochs, often with a learning rate that varies as training proceeds.
An important aspect of this process is that when the data is shuffled up at the beginning of an epoch, examples are put into batches with different examples than they were matched with in the previous epoch. This gives us a more complete sampling of batch gradients and improves our collective stochastic estimation of the optimal gradient (the derivative of the cost function with respect to the model parameters and data).
Short answer: your model performance will almost certainly be worse if you choose static batches and shuffle those batches around instead of shuffling the data and then dividing them into batches.
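A bare-bones NumPy sketch of that shuffle-then-batch scheme (shapes and the update step are placeholders):

import numpy as np

num_examples, batch_size = 60000, 1000
X = np.random.rand(num_examples, 784)        # flattened images
y = np.random.randint(0, 10, num_examples)   # labels

for epoch in range(3):
    perm = np.random.permutation(num_examples)   # fresh shuffle every epoch
    X_shuffled, y_shuffled = X[perm], y[perm]
    for start in range(0, num_examples, batch_size):
        X_batch = X_shuffled[start:start + batch_size]
        y_batch = y_shuffled[start:start + batch_size]
        # compute the batch gradient on (X_batch, y_batch) and update parameters here
        pass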
Also, be careful of your shapes. With a dataset shape of (60, 784, 1000), it's highly likely that you're working on MNIST or one of its cousins - MNIST is 60,000 examples of length (784,), if the images have been flattened down from 28 x 28 pixels. The 784 being axis 1 in your shape tuple makes me concerned that you've reshaped your data incorrectly. Make sure that the array entries that represent a single image are where you think they are. |
H: Measuring the success of text summarization
I am trying to make a text summarization program that will take a text article and reduce it to a para or 2.
Since I am a newbie with no idea of NLP, it is hard to approach and break down the problem. So I was wondering if there was a measure that is used to check for effectiveness and correctness of text summarization. I tried googling this, but nothing that suits my purpose.
Does something like this even exist? or am I going in the wrong direction?
AI: You will first need a human-written summary as your reference output. You can then compare the reference and generated summaries using ROUGE scores, which measure the similarity between two given paragraphs by comparing different combinations of sub-phrases. Search for ROUGE and you'll find different variants of the score; you'll probably need to use more than one. Also, there are libraries available that calculate ROUGE scores, so you don't have to implement it yourself!
H: When is z-normalization not needed when using DTW?
I'm hoping to get some answers to a question I have regarding normalization of DTW datasets, in particular datasets in which two time-series shapes with similar shapes but differences in magnitude are misclassified under z-normalization.
One reason is simply empirical. When I z-normalize my data it doesn't cluster well. When I don't z-normalize it works quite well. But that isn't really enough to ignore the normalization advice.
The second is in reading the links in this thread which does a good job summarizing the other threads related to this, reading the literature, and thinking about my data. I cannot find a good discussion of the case where magnitude matters within a subset of similar clusters and z-normalization negates that important magnitude difference. In cases where the shape of some data can generally be the same (the lengths are the same, but the amplitude differs) z-normalization makes them cluster as identical when they aren't. Does this mean that DTW just isn't the right tool, that I should be normalizing differently, or is my data a special case because it is already normalized the way it is collected?
Briefly, my data is isotopic ratios recorded in fish "ear stones" which change as a fish moves from one location to another and are recorded in the growing layers of the stone (called an otolith). The isotope ratio in each river is very stable, and that ratio is known to the third or fourth decimal place. There is no biological fractionation, the information is recorded as it exists in the river when the fish enters the river. All data is normalized at collection with regard to the global ocean value which is known to the 5th decimal place. What we care about is classifying the data according to the shape of the isotopic curve, which is characteristic of where and when fish moved between rivers, the different decisions fish make in their movements. All the curves are interpolated to the same length prior to analysis.
The problem I see in my data is illustrated by the conceptual plot below.
The warping of DTW helps to better classify fish with mild differences in their movement timing, like the two black lines in the plot. It also does a good job of classifying patterns with really different shapes (the red line). Without z-normalization, DTW correctly classifies the blue line as a different group. But, using z-normalization, the two black lines classify with the blue line because the magnitude difference is erased.
To take the example from Section 1.2.1 of Searching and Mining Trillions of Time Series Subsequences under Dynamic Time Warping from the linked question above, the movement of fish in two different habitats is analogous to the action of drawing a gun or pointing a finger. The reason for z-normalization in the gun-point dataset, according to the author, is to account for offsets in scaling of the video. But in this case all of the data is normalized already to the global marine signature to account for machine drift, and the isotopic values are extremely precise and accurate. The error in amplitude of the curves is less than 1%. Is z-normalization required here, or is it just erasing meaningful differences in amplitude that are correctly classified without z-normalization?
I'm curious not just to find an answer regarding my own data, but also because it has been really difficult for me to find a straightforward discussion of this type of amplitude problem in the literature. Most of the advice is a blanket, "In 99% of cases, you must z-normalize." as eamonn stated in the other question...an opinion I respect given his work on this subject. But, I'm curious about what those 1% of examples are. Perhaps I'm just missing it, or perhaps I'm missing the boat completely. Your thoughts, and explanations, would be greatly appreciated.
AI: That eamonn guy sounds smart.
If you do not normalize, small differences in the mean and/or STD rapidly swamp any shape similarity.
Here is one way to think about it.
If you do not normalize, and the means or STDs are different, then.......
EuclideanDIST(A,B) is approximately equal to DTWdist(A,B), which is approximately equal to [abs(mean(A)-mean(B)) times a constant].
If you can model the similarity with just the mean, then why do you need to even look at the shape?
BTW, you have a cool problem/dataset. |
H: How can I fix regression model interpretation of feature?
I'm building a regression model to predict the values of a feature $Y$ given a set of other features $X_{1}, X_{2}, X_{3}..X_{n}$.
One of these other features, let's say $X_1$, is known to be inversely proportional to $Y$ based on domain knowledge. The problem is that my model interprets its coefficient as positive, making it directly proportional to $Y$. I've tried plenty of different models to see if I could get a better interpretation, such as OLS, Linear Regression, and Logistic Regression, but every model I tried failed to interpret the $X_1$ coefficient.
What can I do to get a regression that better reflects the real-world behavior of this coefficient?
AI: Unless there's a mistake in your code, or the coefficient on $X_1$ is not significant, I'd be inclined to trust the model output.
It's not unusual for data to behave this way. Just because $X_1$ and $Y$ are inversely related with respect to the marginal distribution of $(X_1, Y)$, as can be concluded from a scatterplot of the two variables, does not mean this relationship holds conditional on other variables.
Here is an example where $(X_1, Y)$ are inversely related, but are positively related conditional on another value, $X_2$. (The example is generated using R -- you've tagged python, but this concept is language-agnostic):
library(tidyverse)
library(broom)
set.seed(1)
N <- 100
dat <- tibble(
x2 = sample(1:4, size = N, replace = TRUE),
x1 = x2 + rnorm(N) / 3,
y = x1 - 2 * x2 + rnorm(N) / 5
)
ggplot(dat, aes(x1, y)) +
geom_point(aes(colour = factor(x2))) +
theme_bw() +
scale_colour_discrete("x2")
Here are the outputs of a linear regression model. You'll notice that the coefficient on $X_1$ is negative when $X_2$ is not involved, as anticipated, but is positive when $X_2$ is involved. That's because the interpretation of a regression coefficient is the relationship given the other covariates.
lm(y ~ x1, data = dat) %>%
tidy()
#> # A tibble: 2 x 5
#> term estimate std.error statistic p.value
#> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 (Intercept) -0.492 0.154 -3.20 1.83e- 3
#> 2 x1 -0.809 0.0549 -14.7 1.33e-26
lm(y ~ x1 + x2, data = dat) %>%
tidy()
#> # A tibble: 3 x 5
#> term estimate std.error statistic p.value
#> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 (Intercept) 0.0189 0.0540 0.349 7.28e- 1
#> 2 x1 1.04 0.0681 15.3 1.42e-27
#> 3 x2 -2.05 0.0726 -28.2 1.60e-48
Created on 2020-04-27 by the reprex package (v0.3.0)
This concept extends to more than two covariates, as well as continuous covariates. |
H: will increasing threshold always increase precision?
Here precision at threshold 0.85 > precision at threshold 0.90. Shouldn't it be the other way round? Increasing the threshold will reduce false positives, so precision will be greater than before?
AI: Here precision at threshold 0.85 > precision at threshold 0.90. Shouldn't it be the other way round? Increasing the threshold will reduce false positives, so precision will be greater than before?
Precision is $\frac{\text{TP}}{\text{TP}+\text{FP}}$
Both $\text{TP}$ and $\text{FP}$ are reduced when you increase the threshold. If both decrease in proportion to the current precision (i.e. they are spread evenly at each confidence value), then precision will remain the same. Most models on most datasets will tend to increase precision as the threshold increases, at least initially (e.g. moving from 0.5 to 0.6), because false positives are commonly uncertain edge cases with low confidence; i.e. false positives tend to occur more frequently at low confidence, so increasing the threshold excludes false positives at a higher rate than true positives, relative to the current precision. However, there is no guarantee of that.
The value of precision will vary in practice depending on what the model predicted for each example. If you have a cluster of highly confident false positives, they can cause precision to drop as threshold grows, until they get excluded. The most extreme example would be where the most confident classification is incorrect, in which case the highest possible threshold will score zero precision. |
H: How to identify topic transition in consecutive sentences using Python?
I'm new to data mining. I want to detect topic transition among consecutive sentences. For instance, I have a paragraph (this could be a collection of dozens of sentences, sometimes without transitional words) as follows:
As I really like Mickey Mouse, I was hoping to go to Florida. But my
dad took me to Nevada. Obviously, Mickey Mouse was not there. But, I
attended a camp with other children. And, I really enjoyed and learnt a lot from my
camp.
Here, I want to automatically split this into following sub-paraphs:
As I really like Mickey Mouse, I was hoping to go to Florida. But my
dad took me to Nevada. Obviously, Mickey Mouse was not there.
But, I attended a camp with other children. And, I really enjoyed and learnt a lot from my
camp.
As far as I know, this is not the sentence similarity measurement. What technique should be used here? Any example using python or tensorflow models would be greatly appreciated.
AI: One solution could be to:
Get sentence embeddings from FastText
Compute Euclidean Distance between the consecutive sentences
If the distance between the consecutive sentences is close to 1, then, you may say the two sentences are talking about different topics.
See here how to compute sentence embeddings for the English language: https://github.com/facebookresearch/fastText/blob/5b5943c118b0ec5fb9cd8d20587de2b2d3966dfe/python/fasttext_module/fasttext/FastText.py#L127
import fasttext
import fasttext.util
import numpy as np
import pandas as pd

fasttext.util.download_model('en', if_exists='ignore')  # English
ft = fasttext.load_model('cc.en.300.bin')
fasttext.util.reduce_model(ft, 20)

def get_fasttext_sentence_embedding(sentence, ft):
    if pd.isna(sentence):
        return np.zeros(20)
    return ft.get_sentence_vector(sentence)
Then, compute the Euclidean distance between the fastText embeddings of consecutive sentences.
The same can be done using LDA (topic model), but, that would require a lot of text to model the topics. |
H: Filling missing values for Embedded List in Python3
I searched for a similar question but I didn't come across. And I'm new in this area, I hope I explained my question well enough.
I have a dataset consist of text data. I store them in a list and every row of a list consists of a string value. But every row length is not equal. I want them to be equal, so I can use them in a self-attention model.
The sample of my dataset
In [8]: myList
Out[8]:
[
['the first line of my dataset'],
['the second line'],
['the 3rd'],
['the 4th'],
['the 5th'],
['the 6th'],
['the 7th'],
]
So as you can see the first one is longer than the rest of them. I want to fill with a certain value like # to equalize the word count.
The sample output I'd like to do
In [8]: myList
Out[8]:
[
['the first line of my dataset'],
['the second line # # #'],
['the 3rd # # # #'],
['the 4th # # # #'],
['the 5th # # # #'],
['the 6th # # # #'],
['the 7th # # # #']
]
If this were a dataframe, I could use the fillna() function of the Pandas library. I tried to apply this:
train_X = pd.Series(train_X).fillna("#").values
but since it is an embedded list(I guess) it didn't work. Is there a better way to do that?
Any recommendation is appreciated.
AI: Following the suggestion @bkshi gave me, I came up with the solution below.
Also, since the texts_to_sequences() function converts my list to sequences starting from 1, I could use pad_sequences() with 0 as the padding value instead of a string.
This solution satisfies my requirements, so I used a number as padding instead of a string value.
import pandas as pd
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
my_list = [['the first line'],
['the 2nd line'],
['the 3r line'],
['the 4th line'],
['the 5th line'],
['the'],
['the 5th line, this is']]
max_features = 10 #how many unique words you're using
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(my_list)
my_list = tokenizer.texts_to_sequences(my_list)
my_list = pad_sequences(my_list, maxlen=None, dtype='int32', padding='post', truncating='post', value=0.0)
H: Predicting time series, with few historical samples based on similar series
I'm trying to build a model with Keras to predict the time series of a sensor, based on its type and historic data of sensors of the same type.
The figure below shows 3 time series, generated from 3 sensors of the same type, the green dashed line is the new sensor data and the vertical line is where the data for the new sensor end.
I've tried writing an LSTM network, that returns the hidden state output for each input time step, while the target was the values for each timestamp. Then trying to predict the new time series giving the model a few points of the sensor history data. With no luck :(
So I'm guessing I'm walking on the wrong path. What are the options of predicting a time series with just a few historical samples based on the history of other time series of the same type?
Any help / reference / video would be appreciated.
AI: I'd suggest statistical forecasting techniques such as ARIMA or exponential smoothing (ES), but those models usually cannot generalize well across time series, so you'd need one for each series.
A good starter for using LSTMs for forecasting is here - https://www.tensorflow.org/tutorials/structured_data/time_series. But if you don't have enough data, NNs will likely give poor test results.
For your case, I'd suggest trying a regression approach: structuring your time series into a regression-features format. After that you can use regression models from sklearn, starting with linear models and moving to more complex ones. Because you have little data, you might want to explore less complex models first to prevent overfitting. For the features, for example, you could create lag features (the value of your signal two timesteps back, the mean/std of the past 2 timesteps, the maximum in the last 6 timesteps and so on). Look up any Kaggle competition on time series forecasting if you want reference code or specific ideas on feature extraction.
H: Un-learning a single training example from a trained model
I was going through the paper "The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction" by google on which suggests best practices for models in production. In a section about privacy controls in the data pipeline it says:
Finally, test that any user-requested data deletion propagates to the
data in the ML training pipeline, and to any learned models.
I understand about the data deletion from the data pipeline but is it even possible to "un-learn" a single training example without retraining on the new data? They have mentioned in the paper that the practices are being used in google at some point or other, so there might be an efficient way but I'm unable to get any information on this.
I am looking for any literature on this or any ideas about how one would go on solving this problem.
Edit:
On further research, I found this paper which focuses on the specific problem. Though making a lot of assumptions they propose a method for k-means too. Looks like this is an upcoming research area and would require time to develop!
AI: is it even possible to "un-learn" a single training example without retraining on the new data?
To the best of my knowledge, the answer is no except in some very special cases.
The most obvious exception that comes to mind is instance-based learning, such as kNN: since the "model" itself consists only of the set of training instances, it's straightforward to remove an instance.
In general, supervised ML relies on generalizing patterns based on the instances from the training set. Any non-trivial model consists of multiple such patterns, with every pattern potentially resulting from a different subset of instances. Even if there were a way to trace which instance contributed to which pattern (and that would be extremely inefficient), removing any pattern would probably cause the model to fail.
H: Relu with not gradient vanishing function is possible?
I'm a beginner in ML.
In an ANN, ReLU has a gradient of 1 for x > 0;
however, I wonder: for x <= 0, ReLU has a gradient of 0 and may cause a vanishing gradient problem
in deep neural networks.
If an activation function like y = x (for all x) has no vanishing gradient problem,
why do we not use this function in deep neural networks?
Is there any side effect of y = x (for all x)? (Maybe the weights may go to infinity in deep neural networks... however, I think this problem can also happen with ReLU, so it is not a problem, I think.)
AI: If you are using an activation like y=x, then your model is a simple linear one. Multiple layers with such an activation are equivalent to (reduce to) only one layer with a linear activation! Thus you can only map linear functions satisfactorily with this type of model. To be able to learn complex non-linear functions, you need to use multiple layers with non-linear activations in between to make the whole model non-linear.
To mitigate the vanishing gradient problem, there is a variant of ReLU called Leaky ReLU. This activation is the same as ReLU in the positive region of x. In the negative region of x, it is a linear function with a small slope (e.g. 0.2). This makes Leaky ReLU a non-linear activation at the point x = 0.
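For reference, a tiny NumPy sketch of Leaky ReLU and its gradient (the 0.2 slope is just an example):

import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def leaky_relu_grad(x, alpha=0.2):
    # Gradient is 1 for x > 0 and alpha (not 0) otherwise,
    # so gradients never vanish completely in the negative region
    return np.where(x > 0, 1.0, alpha)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(leaky_relu(x))
print(leaky_relu_grad(x))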
H: How to choose best model for Regression?
I'm building a model to predict the flight delay. My dataset contains the following columns:
FL_DATE (contains months(1-12)), OP_CARRIER (One hot encoded data of Carrier names), ORIGIN(One hot encoded data of Origin Airport), Dest(one-hot encoded data of Dest Airport), CRS_DEP_TIME(Intended time of departure ex: 1015), DEP_TIME(Actual time of departure ex: 1017),DEP_DELAY(the difference between crs-dep ex: -2), ARR_DELAY(arrival delay ex: -2)
My target variable is ARR_DELAY. After checking my data, I have decided it is a regression problem. However, I'm not sure what method do I need to use for selecting the appropriate columns. On the other hand, I was plotting each column with ARR_DELAY to check their relation and got something like this: FL_TIME vs ARR_DELAY
In such scenarios, if I have to build a model for such data which regression technique should I use?
PS: I'm new to Machine Learning. Please correct me If I'm heading in the wrong direction
AI: One of my favorite tools for feature selection is the Random Forest. Consider giving it a try:
https://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html
Random Forests are generally classifiers but they have the added advantage of being able to find and measure the variable's importance. It can be a long process but helpful. By using 'Gini importance' as a measure, RF can provide a bar chart that gives a relative comparison of the features with respect to which feature is best at separating signal from the noise.
Check out this further discussion:
https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#workings |
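As a rough sketch (assuming a scikit-learn workflow, with X holding the feature columns described above and y the ARR_DELAY target), the impurity-based importances can be read off a fitted forest like this:
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, y)

# rank features by Gini/impurity-based importance
for name, score in sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1]):
    print(name, round(score, 3))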
H: Getting the positive impacting features using SHAP
I'm attempting to use SHAP to automatically extract feature names that have a positive impact on my regression models. On inspection of the code I see that the bar plot, for example, determines these by taking the mean absolute SHAP values for a feature. Being an absolute value, it obviously takes the absolute impact but I want to only consider positive impacting values.
Is my intuition that I can just take the mean instead of the mean of the absolute values correct?
(highly) Negative SHAP values should give a negative mean value.
Is this a good approach or am I missing some better way to do this?
EDIT:
I am specifically interested in features that raise the predicted value, i.e. if feature_1 lifts the predicted value by 100 and feature_2 by 1000, I want this information to be extracted because feature_2 has a higher impact on the output value.
AI: Depending on your model there may be some better model-specific approaches than SHAP. It is also important to note that SHAP is an approximation of Shapley values, with the main assumption of not having too much correlation between your features.
That being said, taking the mean instead of the mean of absolute values seems to be the most natural extension of what you are already doing. Just keep in mind that:
SHAP values don't have a "physical" interpretation in terms of a direct impact on the output. They may lack some meaning for real-life users.
Taking the mean can "hide" a skewed SHAP profile: a variable with a high impact on a small subgroup of instances and no impact on the rest may get the same average as a variable with a small impact on every instance.
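As a small sketch of the signed-mean idea (assuming a tree-based regressor called model and a feature DataFrame X; both names are illustrative):
import numpy as np
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)         # shape: (n_samples, n_features)

mean_abs = np.abs(shap_values).mean(axis=0)    # what the bar plot uses (overall impact)
mean_signed = shap_values.mean(axis=0)         # positive = tends to raise the prediction

positive_features = [f for f, v in zip(X.columns, mean_signed) if v > 0]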
H: Why is n-grams language independent?
I don't understand how n-grams are language independent. I've read that by using character n-grams of a word, rather than the word itself, as dimensions of a vector space model, we can skip language-dependent pre-processing such as stemming and stop word removal.
Can someone please provide reasoning for this?
AI: The way I read this, you are actually asking two questions:
How do character n-grams help to encode knowledge that is often encoded with the help of techniques such as stemming ?
Why are n-grams language independent? I'm not totally sure on what you mean by this one, but I'll take a stab at it
Character N-grams
Some languages (most languages that I know of, but some more than others) have grammatic rules that change the morphology of a word: one house, two houses. In a vanilla Vector Space Model house and houses are not identical and form two dimensions of your model. As if they were as different as house and apple.
We know that English applies these morphological operations on the words and we can counter that by bending the words back into their 'stem'.
For instance, the Snowball stemmer would bend them back to a token that does collide:
House -> Hous
Houses -> Hous
Note: Hous is not what we think of as a stem, hence my use of the quotes around 'stem'
N-gramming is basically splitting text into all subsequences of length N. If we apply that (N=3, forgetting about start and stop symbols for simplicity) on our example strings we would get something like:
House -> Hou, ous, use
Houses -> Hou, ous, use, ses
Note that we end up with 4 new dimensions, 3 of which collide. This reduces our previously found need for stemming. We could argue that it still doesn't collide fully, but then again, we might not want it to.
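A tiny sketch of this splitting in Python (ignoring start/stop symbols, as above):
def char_ngrams(word, n=3):
    # all character subsequences of length n
    return [word[i:i + n] for i in range(len(word) - n + 1)]

print(char_ngrams("House"))    # ['Hou', 'ous', 'use']
print(char_ngrams("Houses"))   # ['Hou', 'ous', 'use', 'ses']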
Language Independence
I don't think n-grams are language independent. Certainly not on the word level: some languages are more regular and some languages are more context-free. Some languages follow a pattern that looks more like adding words to the end as you go, and those languages are probably more suited for modelling with n-grams. And this probably holds too for character-based n-grams.
However, as previously argued, they add flexibility to the V.S.M. when it comes to handling morphology. That morphology would normally require 'manually' encoding knowledge about the language at hand. With this more or less out of the way, your system needs less configuration for the language at hand, which makes it less language-dependent.
H: A Machine learning model that has a undefined input size but a fixed output?
I don't know too much about ML but I can't seem to figure out how to train something like this. If you could list some possible ways to do this, thank you.
AI: You have multiple options to "collapse" a variable-length input into a single value:
Recurrent neural networks (RNN), either vanilla RNNs or more powerful variants like long-short term memories (LSTM) or gated recurrent units (GRU). With these, you "accumulate" the information of each time step in the input and at the end, you get your fixed-size output.
Pooling, either average pooling or max pooling. You just compute the average/maximum of the representation across the time dimension.
Padding. You just assume a maximum length for the data and create a neural network that receives a piece of data of such a length. For any input data that is shorter, you just add some "padding" elements at the end until they have the maximum size. |
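A minimal Keras sketch of the pooling option (the 8 features per time step are just an assumption); swapping the pooling layer for an LSTM/GRU layer gives the recurrent variant:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 8)),     # variable-length sequences of 8-dimensional steps
    tf.keras.layers.GlobalAveragePooling1D(),   # collapse the time dimension into a fixed-size vector
    tf.keras.layers.Dense(1)                    # fixed-size output
])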
H: Google DataFlow
I'm trying to build a Google Dataflow pipeline by following one of the posts on Medium.
https://levelup.gitconnected.com/scaling-scikit-learn-with-apache-beam-251eb6fcf75b
However, it seems like I'm missing the project argument and it throws the following error. I'd appreciate your help guiding me through it.
Error:
ERROR:apache_beam.runners.direct.executor:Giving up after 4 attempts.
WARNING:apache_beam.runners.direct.executor:A task failed with exception: Missing executing project information. Please use the --project command line option to specify it.
Code:
import apache_beam as beam
import argparse
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.io.gcp.bigquery import parse_table_schema_from_json
import json
query = """
SELECT year, plurality, apgar_5min,
mother_age, father_age,
gestation_weeks, ever_born
,case when mother_married = true
then 1 else 0 end as mother_married
,weight_pounds as weight
,current_timestamp as time
,GENERATE_UUID() as guid
FROM `bigquery-public-data.samples.natality`
limit 100
"""
class ApplyDoFn(beam.DoFn):
def __init__(self):
self._model = None
from google.cloud import storage
import pandas as pd
import pickle as pkl
self._storage = storage
self._pkl = pkl
self._pd = pd
def process(self, element):
if self._model is None:
bucket = self._storage.Client().get_bucket('dsp_model_store')
blob = bucket.get_blob('natality/sklearn-linear')
self._model = self._pkl.loads(blob.download_as_string())
new_x = self._pd.DataFrame.from_dict(element, orient = "index").transpose().fillna(0)
weight = self._model.predict(new_x.iloc[:,1:8])[0]
return [ { 'guid': element['guid'], 'weight': weight, 'time': str(element['time']) } ]
schema = parse_table_schema_from_json(json.dumps({'fields':
[ { 'name': 'guid', 'type': 'STRING'},
{ 'name': 'weight', 'type': 'FLOAT64'},
{ 'name': 'time', 'type': 'STRING'} ]}))
class PublishDoFn(beam.DoFn):
def __init__(self):
from google.cloud import datastore
self._ds = datastore
def process(self, element):
client = self._ds.Client()
key = client.key('natality-guid', element['guid'])
entity = self._ds.Entity(key)
entity['weight'] = element['weight']
entity['time'] = element['time']
client.put(entity)
parser = argparse.ArgumentParser()
known_args, pipeline_args = parser.parse_known_args(None)
pipeline_options = PipelineOptions(pipeline_args)
# define the pipeline steps
p = beam.Pipeline(options=pipeline_options)
data = p | 'Read from BigQuery' >> beam.io.Read(
beam.io.BigQuerySource(query=query, use_standard_sql=True))
scored = data | 'Apply Model' >> beam.ParDo(ApplyDoFn())
scored | 'Save to BigQuery' >> beam.io.Write(beam.io.BigQuerySink(
'weight_preds', 'dsp_demo', schema = schema,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
scored | 'Create entities' >> beam.ParDo(PublishDoFn())
# run the pipeline
result = p.run()
result.wait_until_finish()
AI: The error tells you to specify the project ID associated to the project in which you want to run the Dataflow job in.
In the Python SDK, you can set this and other Google Cloud related variables in the following way:
from apache_beam.options.pipeline_options import GoogleCloudOptions

gcloud_options = pipeline_options.view_as(GoogleCloudOptions)
gcloud_options.project = '<insert project ID here>'
(...)
p = beam.Pipeline(options=pipeline_options)
For the full snippet, you can check the Dataflow documentation. |
H: predict_proba to print specific class probablity
I have 16 labels and predict_proba gives me the probabilities of all 16 categories in an array. Is there a way to pass a specific label to predict_proba so that it only prints the probability of that category?
preds = model.predict(dataframe)
# getting predicted class , am interested in knowing probablity of this class.
print(preds)
# it is printing array for all 16 labels, I am keen to pass above predicted class and retrieve probability for it.
print(model.predict_proba(dataframe))
Output
[[0.07387347 0.007413 0.00354506 0.02321654 0.09627853 0.00958647
0.00599333 0.02232621 0.12513558 0.00494633 0.07230524 0.00384056
0.00378245 0.44431455 0.04089799 0.0625447 ]]
Now the model has predicted class "XYZ"; how do I get its index in the predict_proba output? If I have that, I can simply use (i being the index)
print(model.predict_proba(dataframe)[0][i])
AI: Once you fit your sklearn classifier, it will generally have a classes_ attribute. This attribute contains your class labels (as strings). So you could do something as follows:
probas = model.predict_proba(dataframe)[0]  # probabilities for the first (and only) row
classes = model.classes_
for class_name, proba in zip(classes, probas):
    print(f"{class_name}: {proba}")
And to find a specific index, you can use numpy's where function:
import numpy as np
class_label = "XYZ"
class_index = np.where(model.classes_ == class_label)[0][0]
proba = model.predict_proba(dataframe)[0][class_index]
H: What is the gradient descent rule using binary cross entropy (BCE) with tanh?
Similar to this post, I need the gradient descent step of tanh but now with binary cross entropy (BCE).
So we have
$$
\Delta \omega = -\eta \frac{\delta E}{\delta \omega}
$$
Now we have BCE:
$$
E = -\left(y\log(\hat{y})+(1-y)\log(1-\hat{y})\right)
$$
Considering my output is $\hat{y} = \tanh(\omega \cdot x)$, where $x$ is my input vector and $y$ is the corresponding label here.
$$
\frac{\delta E}{\delta \omega} = \frac{\delta \left[-\left(y\log(\tanh(wx))+(1-y)\log(1-\tanh(wx))\right)\right]}{\delta \omega}
$$
Now on this website they do something similar for the normal sigmoid and arrive at (eq 60):
$$
\frac{\sigma'(z)x}{\sigma(z)(1-\sigma(z))}(\sigma(z)-y)
$$
Could we use that and continue there? We can get the derivative like this and get:
$$
\frac{\tanh'(wx)x}{\tanh(wx)(1-\tanh(wx))}(\tanh(wx)-y)
\\= \frac{x-x\tanh(wx)^2}{\tanh(wx)(1-\tanh(wx))}(\tanh(wx)-y)
\\= \frac{x-x\hat{y}^2}{\hat{y}(1-\hat{y})}(\hat{y}-y)
\\= \frac{(\hat{y} + 1)x(\hat{y} - y)}{\hat{y}}
$$
Wherever I look, I don't find this :)
Update
Given the first answer that gives $(1 + \hat{y})(1 - \hat{y})$, we arrive at the same
$$
\frac{\tanh'(wx)x}{\tanh(wx)(1-\tanh(wx))}(\tanh(wx)-y)
\\= \frac{x(1 + \hat{y})(1 - \hat{y})}{\hat{y}(1-\hat{y})}(\hat{y}-y)
\\= \frac{(\hat{y} + 1)x(\hat{y} - y)}{\hat{y}}
$$
AI: Let 'a' be the output of an activation function like sigmoid or tanh.
Then the derivative of sigmoid is a*(1-a), whereas the derivative of tanh is (1+a)*(1-a).
Just follow the derivation of sigmoid except replace the derivative of sigmoid with that of tanh. |
H: Which target variable should I use?
I have a problem where I want an LSTM to predict the resistance of a body. This value can also be calculated if we know the drag coefficient and the speed of that body. In my case, at inference time, the speed is known, meaning that I can do the following:
predict the drag coefficient, and then calculate the resistance accordingly
predict the resistance directly
Which one should I use as my learning target?
AI: Interesting question!
Short (but maybe naive) answer
Experiment with both options and see which performs best!
Longer answer
predict the drag coefficient, and then calculate the resistance accordingly
If you do this, your network will try to optimize something different than your actual goal (which is the resistance). This means that your model will not "care" if the resistance you eventually calculate is any good, which can result in strange results.
predict the resistance directly
This would be better from a machine learning perspective as your model's goal will be the same as yours, however, you will lose the advantage that you have by knowing how the resistance is calculated.
Solution A
Predict both and then have a final step to decide what your final resistance will be. With LSTM, this is definitely possible, your target will just become 2 numbers instead of 1.
Solution B
The best solution, in my opinion, would be to have the LSTM output a single number (which would act as the drag coefficient), and then, add a layer which calculates the resistance using the known formula so that you can backpropagate on the entire thing, and you get the best of both worlds. In PyTorch this can be done rather elegantly. The big caveat is that the formula to calculate the resistance needs to be differentiable. |
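A rough PyTorch sketch of Solution B; the drag formula, the feature sizes and the constants rho/area are assumptions for illustration only:
import torch
import torch.nn as nn

class DragModel(nn.Module):
    def __init__(self, n_features, hidden_size=32, rho=1.225, area=1.0):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)
        self.rho, self.area = rho, area

    def forward(self, x, speed):
        out, _ = self.lstm(x)                                         # x: (batch, time, n_features)
        cd = self.head(out[:, -1, :])                                 # predicted drag coefficient, (batch, 1)
        resistance = 0.5 * self.rho * self.area * cd * speed ** 2     # assumed drag equation; speed: (batch, 1)
        return resistance

Because the formula is written with plain tensor operations, a loss computed on the resistance backpropagates through it into the LSTM.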
H: Selecting most relevant word from lists of candidate words
Let's suppose I have 1000's of training examples where each consists of a bucket e.g. 'engineering' or 'management' and a list of tags e.g. ['software', 'python', 'product'] where a human has selected the most relevant tag for the use case e.g.'software'.
So our data is like:
bucket tags best_tag
engineering [fullstack, software] software
engineering [java, python, software] software
management [technical, product] product
What kind of model or approach would suit taking a list of tags and predicting the best tag based on some kind of underlying latent hierarchy?
AI: There are many ways you could approach this problem
Word embeddings
If you have word embeddings at hand, you can look at the distance between the tags and the bucket and pick the one with the smallest distance.
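A small sketch of this option, assuming you already have word vectors (e.g. from word2vec or GloVe) for the bucket and for each candidate tag:
import numpy as np

def best_tag(bucket_vec, tag_vecs, tag_names):
    # pick the tag whose embedding has the highest cosine similarity to the bucket embedding
    sims = [np.dot(bucket_vec, t) / (np.linalg.norm(bucket_vec) * np.linalg.norm(t))
            for t in tag_vecs]
    return tag_names[int(np.argmax(sims))]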
Frequentist approach
You could simply look at the frequency of a bucket/tag pair and choose this. Likely not the best model, but might already go a long way.
Recommender system
Given a bucket, your goal is to recommend the best tag. You can use collaborative filtering or neural approaches to train a recommender. I feel this could work well especially if the data is sparse (i.e. lots of different tags, lots of buckets).
The caveat I would see with this approach is that you would technically always compare all tags, which only works if tag A is always better than tag B regardless of which tags are proposed to the user.
Ranking problem
You could look at it as a ranking problem, I recommend reading this blog to have a better idea of how you can train such model.
Classification problem
This becomes a classification problem if you turn your problem into the following: given a bucket, and two tags (A & B), return 0 if tag A is preferred, 1 if tag B is preferred. You can create your training data as every combination of two tags from your data, times 2 (swap A and B).
The caveat is that given N tags, you might need to do a round-robin or tournament approach to know which tag is the winner, due to the pairwise nature.
Recurrent/Convolutional network
If you want to implicitly deal with the variable-length nature of the problem, you could pass your tags as a sequence. Since your tags have no particular order, this creates a different input for each permutation of the tags. During training, this provides more data points, and during inference, this could be used to create an ensemble (i.e. predict a tag for each permutation and do majority voting).
If you believe that it matters in which order the tags are presented to the user, then deal with the sequence in the order it is in your data.
Your LSTM/CNN would essentially learn to output a single score for each item, such that the item with the highest score is the desired one. |
H: Why is my model accuracy decreasing after the second epoch?
This is my training log for ten epoch for a sentiment analysis model:
Train on 5487 samples, validate on 610 samples
Epoch 1/10
5487/5487 [==============================] - 23s 4ms/sample - loss: 1.4769 - accuracy: 0.5216 - val_loss: 2.4135 - val_accuracy: 0.6164
Epoch 2/10
5487/5487 [==============================] - 19s 3ms/sample - loss: 7.5815 - accuracy: 0.4593 - val_loss: 9.6993 - val_accuracy: 0.3000
Epoch 3/10
5487/5487 [==============================] - 19s 3ms/sample - loss: 9.8212 - accuracy: 0.3807 - val_loss: 9.4066 - val_accuracy: 0.3164
Epoch 4/10
5487/5487 [==============================] - 19s 3ms/sample - loss: 9.6174 - accuracy: 0.3594 - val_loss: 9.4066 - val_accuracy: 0.3066
Epoch 5/10
5487/5487 [==============================] - 19s 3ms/sample - loss: 9.5968 - accuracy: 0.3548 - val_loss: 9.4066 - val_accuracy: 0.3066
Epoch 6/10
5487/5487 [==============================] - 19s 3ms/sample - loss: 9.5939 - accuracy: 0.3561 - val_loss: 9.4066 - val_accuracy: 0.3066
Epoch 7/10
5487/5487 [==============================] - 19s 3ms/sample - loss: 9.5792 - accuracy: 0.3465 - val_loss: 9.4066 - val_accuracy: 0.3066
Epoch 8/10
5487/5487 [==============================] - 19s 3ms/sample - loss: 9.6086 - accuracy: 0.3506 - val_loss: 9.4066 - val_accuracy: 0.3066
Epoch 9/10
5487/5487 [==============================] - 19s 3ms/sample - loss: 9.6233 - accuracy: 0.3501 - val_loss: 9.4066 - val_accuracy: 0.3066
Epoch 10/10
5487/5487 [==============================] - 19s 3ms/sample - loss: 9.5821 - accuracy: 0.3548 - val_loss: 9.4066 - val_accuracy: 0.3066
And this is the model itself:
model = tf.keras.Sequential([
tf.keras.layers.Embedding(input_dim=len(idx_to_word), output_dim=300),
tf.keras.layers.Dropout(0.2,noise_shape=[None,50,1]),
tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(512, use_bias=False)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(512, recurrent_dropout=0.2,
dropout=0.2)),
tf.keras.layers.Dense(len(label_to_idx))
])
model.compile(loss=tf.keras.losses.CategoricalCrossentropy(),
optimizer=tf.keras.optimizers.Adam(1e-3),
metrics=['accuracy'])
history = model.fit( X, one_hot_labels,
epochs=10,
batch_size=64,
validation_split=0.1,
verbose=1,
shuffle=True)
I wonder what the reason is for the decreasing accuracy during my training process.
AI: Decreasing accuracy as training progresses means that the learning rate for your model is too high. Your model weights are changing a lot due to the high learning rate and are therefore moving away from the minimum of the loss, where your accuracy would be at its highest. Try decreasing the learning rate and see what happens.
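As a quick sketch, reusing the compile call from your code with a smaller learning rate (1e-4 is just a starting point to experiment with):
model.compile(loss=tf.keras.losses.CategoricalCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(1e-4),   # reduced from 1e-3
              metrics=['accuracy'])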
H: Why is the input to an activation function a linear combination of the input features?
I'm new to deep learning, and there is something about the calculations I do not understand:
For a standard neural network, why is it that only the activation function is not linear, but the input to the activation function is a linear combination of each of the $x_i$'s? For example, with the sigmoid function, it would look like:
$$ \frac{1}{1+ e^{-(w_0x_0 + w_1x_1 + b)}} $$
where $w_i$ are the weights and $x_i$ represents the input to that layer.
For example, why is it that we don't have something like this:
$$ \frac{1}{1+ e^{-(w_0x_0^2 + w_1\sqrt{x_1} + b)}} $$
Is it because it would be redundant if we had enough layers? Or is it because a priori, you wouldn't know what the best function is?
AI: The main reason is that a linear combination of the input followed by a non-linearity, stacked on top of each other, is a universal function approximator. This means that no matter how complicated the true underlying function is, a neural network can approximate it to an arbitrarily small error.
There's also the efficiency factor since a linear combination of $n$ inputs each having $m$ dimensions can be represented using a single matrix multiplication $h=X \times W$ where $X$ is an $n \times m$ matrix (where each row is an example and each column is a feature of that example) and $W$ is an $ m \times d $ weight matrix. And computers are VERY efficient at doing matrix multiplications. Thus, the more you build your model to use matrix multiplications the better. |
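A tiny NumPy sketch of one such layer, a single matrix multiplication followed by an element-wise non-linearity (the sizes are arbitrary):
import numpy as np

def dense_layer(X, W, b):
    z = X @ W + b                       # linear combination as one matrix multiplication
    return 1.0 / (1.0 + np.exp(-z))     # element-wise sigmoid non-linearity

X = np.random.randn(4, 3)   # n=4 examples, m=3 features
W = np.random.randn(3, 2)   # m x d weight matrix with d=2
b = np.zeros(2)
h = dense_layer(X, W, b)    # shape (4, 2)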
H: Help improving my "read_excel" execution time in python. My code reads slowly
My first question here, so please bear with me.
I'm trying to feed my neural network with training data read in from an Excel file. It works perfectly fine when I have fewer than 50 rows in the sheet. But when I try with the real Excel file containing almost 4,000 rows it suddenly takes forever. Although 4,000 is a lot, I'm pretty sure my way of doing it is still very inefficient.
As you can see in the code below, I'm using read_excel over and over again in the loop.
I feel like there should be a way to only read the whole column once and then work with it from there.
My goal is to read in 5 rows as the 1st input starting from row 0, then read 5 rows again but starting from row 1, then 5 rows again starting from row 2, and so on.
So it's like a window of 5 rows that is read and then moved down by 1.
The output should always be the single row right after the window.
Example: if rows 1-20 contain the numbers 1-20, then:
input1 = [1,2,3,4,5] and output1 = 6
input2 = [2,3,4,5,6] and output2 = 7
...
input15 = [15,16,17,18,19] and output15 = 20
Notice how the inputs are lists and the outputs are just numbers. So when I append those to the final input & output lists, I end up with the inputs being a list of lists and the outputs being a list of numbers.
My code
from pandas import read_excel
# initialize final input & output lists. The contents of the temporary input & output lists
# are gonna be appended to these final lists
training_input = []
training_output = []
# excel relevant info
my_sheet = 'Junaid'
file_name = '../Documents/Junaid1.xlsx'
# initialize counters
loop_count = 0
row_counter = 0
for x in range(25):
# load the excel file containing inputs & outputs
# using parameters skiprows, nrows (number of rows) and index col
df = read_excel(file_name, sheet_name = my_sheet, skiprows=row_counter, nrows=6, index_col=0)
# initialize temporary input & output lists
input_temp = []
output_temp = []
for y in df.index:
# append the first 5 rows of the 6 to input list
if loop_count < 5:
input_temp.append(df.index[loop_count])
loop_count += 1
else:
# append the 6th data to output list
training_output.append(df.index[loop_count])
training_input.append(input_temp)
row_counter += 1
loop_count = 0
AI: Well yes it would be slow because you are opening and closing the file for every iteration of the for loop. A general rule in programming is that if the file is not constantly changing, then only open and read it a single time. Also, there are large sections of your code that can be shaved off if you simply use list comprehension
Here, I have rewritten your code to only open the file and read it once, then it creates the two lists using list comprehension and slicing.
from pandas import read_excel
# excel relevant info
my_sheet = 'Junaid'
file_name = '../Documents/Junaid1.xlsx'
df = read_excel(file_name, sheet_name = my_sheet, index_col=0, header=None)
training_input = [df.index[i:i+5].tolist() for i in range(len(df)-5)]
training_output = [df.index[i].tolist() for i in range(5, len(df))]
Also, there seems to be a bug in your code since the excel file you described in your question does not have a header (i.e. the very first row contains data), thus your code skips the very first row of values. To fix that you should pass the parameter "header=None" to the pandas function to tell it that there is no header index. You can read more about that here. |
H: Are weights of a neural network reset between epochs?
If an epoch is defined as one pass of the training process over the whole training data, how is it that when starting the next epoch the loss is almost always smaller than in the first one? Does this mean that after an epoch the weights of the neural network are not reset, and that each epoch is not a standalone training process?
AI: An epoch is not a standalone training process, so no, the weights are not reset after an epoch is complete. Epochs are merely used to keep track of how much data has been used to train the network. It's a way to represent how much "work" has been done.
Epochs are used to compare how "long" it would take to train a certain network regardless of hardware. Indeed, if a network takes 3 epochs to converge, it will take 3 epochs to converge, regardless of hardware. If you had used time, it would be less meaningful as one machine could maybe do 1 epoch in 10 minutes, and another setup might only do 1 epoch in 45 minutes.
Neural networks (sadly) are usually not able to learn enough by seeing the data once, which is why multiple epochs are often required. Think about it as if you were studying a syllabus for a course. Once you finished the syllabus (first epoch), you go over it again to understand it even better (epoch 2, epoch 3, etc.) |
H: Tensorflow 2.0 - Layer with fixed input
I'm trying to use Tensorflow to optimize a few variables to be used in a KNN algorithm, however, I'm running into an issue where I'm unable to have a layer work properly if it is not connected to an Input layer.
What I'm trying to do below is pass in [[1]] as a static tensor, and then using the Lambda layer to coerce the weights into a usable shape. Once trained, I would just retrieve the weight value from the first Dense layer.
ones = tf.ones(shape=(1,1)) # trying to use this to take the place of dynamic inputs
theta_layer = layers.Dense(1, activation="linear", use_bias=False, trainable=True)(ones)
theta_layer = layers.Lambda(lambda x: tf.ones(shape=(self.batch_size,1)) * x)(theta_layer)
print(theta_layer)
# tf.Tensor(
# [[-1.151663]
# [-1.151663]
# [-1.151663]], shape=(3, 1), dtype=float32)
concat_layer = layers.Concatenate()([theta_layer, features_input])
model = models.Model(inputs=features_input, outputs=concat_layer)
model.compile(optimizer="adam", loss="mse") #, metrics=["mae"])#, bias_regularizer=None)
The problem with the above strategy is that theta_layer does not show up in model.summary() and the weights don't show up in model.get_weights().
Also, when I try to directly set theta_layer as the model output, I get this message:
AttributeError: Tensor.op is meaningless when eager execution is enabled.
This is probably the giveaway but frankly I'm just not knowledgeable enough about Tensorflow.
I know that I can fix this by passing in a separate tf.ones input to the model (not just into specific layers), but it would complicate the code seemingly unnecessarily (I have to do this for multiple layers/variables). Is there a way that I can alter the tf.ones passed to the layer or the alter the layer itself to eliminate this issue?
AI: This feels like a bit of a hack, but I was able to infer something from an answer on another question: https://stackoverflow.com/a/46466275/6182971
It works if I change the first line above from:
ones = tf.ones(shape=(1,1))
to:
ones = layers.Lambda(lambda x: tf.ones(shape=(1,1)))(features_input)
Even though the Lambda layer is returning a constant, passing in features_input, which is the main training data connects the tf.ones constant to the network inputs, which seems to be sufficient. |
H: Trying to understand the result provided by np.linalg.norm function in numpy (normalisation)
I'm new to data science with a moderate math background. I'm playing around with numpy and came across the following:
So after reading np.linalg.norm, to my understanding it computes the 2-norm of the matrix. Wanting to see if I understood properly, I decided to compute it by hand using the 2 norm formula I found here:
After computing the dot product, the characteristic equation, applying the quadratic formula and taking the square root of the largest eigenvalue, I end up with a different result, namely:
So here is my question: what went wrong? Did I use the right formula? Also, I couldn't find a conclusive way to get the 5. The docs say:
If this is set to True, the axes which are normed over are left in the result as dimensions with size one. With this option the result will broadcast correctly against the original x.
But I can't wrap my head around it. What does it represent and how you compute it?
I hope the formatting and the question are clear enough.
AI: The formula you cited is not the formula numpy is using here. According to the documentation, since you specified axis=1 it calculates a vector 2-norm per row, i.e. the square root of the sum of the squared elements of each row:
$ O_i = { \biggl(\sum_{n=1}^N \mathbf{A}_{i, n}^2\biggr) }^{1/2} $
Thus, $(3^2+4^2)^{1/2}=5$ and $(2^2+6^2+4^2)^{1/2}=56^{1/2}$
EDIT: Also, the keepdims argument just says whether to keep the dimension you reduced over (with size one) or collapse it. A simple example is to look at the output shape of the array that you get with and without the argument.
a = np.array([[0, 3, 4], [2, 6, 4]])
np.linalg.norm(a, axis=1, keepdims=True).shape # output is (2, 1)
np.linalg.norm(a, axis=1, keepdims=False).shape # output is (2,) because the second dimension was collapsed |
H: What is the objective that is optimized with Random Search?
I have recently learned about Random Search (or sklearn.model_selection.RandomizedSearchCV in Python) and was thinking about the theory behind the optimization process. In particular my question is: given that one performs Random Search on a certain algorithm (let's say a random forest), what are the best hyperparameters based on? More specifically, in what sense are they the "best" hyperparameters for the model? Do they maximize the accuracy of the model? If not, what is the (performance) criterion that is optimized? Or is it entropy/gini?
AI: According to the documentation, the function RandomizedSearchCV accepts a scoring string that can take any value from this table and you can even implement your own custom scorer depending on what your goal is.
The default parameter is None in which case it uses the models score function that is defined to:
Return the mean accuracy on the given test data and labels. |
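A minimal sketch, with a hypothetical parameter distribution for a random forest, showing how to swap the default for another scorer:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {'n_estimators': [50, 100, 200], 'max_depth': [3, 5, None]}
search = RandomizedSearchCV(RandomForestClassifier(),
                            param_distributions,
                            n_iter=5,
                            scoring='f1_macro',   # criterion to optimize; None falls back to the model's score()
                            cv=5)
# search.fit(X, y); search.best_params_ holds the sampled parameters with the best mean CV score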
H: (De-)Scaling/normalizing input and output data inside Keras model as layer
I am building a 2-hidden layer MLP using Keras.
I'm using a SciKit learn wrapper to be able to use the GridSearchCV functionality.
My sample-size is limited, forcing me to use K-fold verification as well for trustworthy results.
However, it is to my understanding that for every iteration in the K-fold validation, input data should be scaled (and output descaled) only using the training data. This requires the scaling to be performed inside the Keras model.
In order to have understandable results, the output should than be transformed back (using previously found scaling parameters) in order to calculate the metrics.
Is it possible to
Z-score standardize my input data (X & Y) in a normalization layer (batchnormalization for example)
Transform the output layer back (before calculating metrics), using scaling parameters found in 1.
I've looked at the batchnormalization functionality in Keras, but the documentation mentions: "During training time, BatchNormalization.inverse and BatchNormalization.forward are not guaranteed to be inverses of each other because inverse(y) uses statistics of the current minibatch, while forward(x) uses running-average statistics accumulated from training." Which seems like this prevents the functionality from being used in this manner.
Does anybody have a known solution or function for this?
AI: Let's start from X. As you use the sklearn-style interface, the natural choice is to use StandardScaler. You can use a pipeline to integrate it into the grid search.
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('model', your_cool_model)])
param_grid = {...}
search = GridSearchCV(pipeline, param_grid, cv=5).fit(X, y)
That would do the trick.
Now about y. Chances are you don't need to scale the target. There is a good discussion in this question.
If you are certain that the targets should be scaled, sklearn provides TransformedTargetRegressor just for this case:
from sklearn.compose import TransformedTargetRegressor
wrapped_model = TransformedTargetRegressor(
    regressor=model,
    transformer=StandardScaler())
H: Time series regression forecast next month from now (random forest,Lasso,Ridge)
I have a dataset about hedge funds. It includes data from January 2010 to December 2019. The data are monthly financial ratios of hedge funds, such as Sharpe, alpha, beta and Sortino, plus the monthly returns of the funds. I normalized the relative returns of each fund, and I want to estimate these monthly returns using the ratios. I am currently using machine learning regression models, and I created the function "$$Y_{t+1} = X_{0,t}+...+ X_{12,t}$$" for the regression model. "$Y_{t+1}$" is the normalized relative return of the hedge fund in the following month (t+1), and the "$X_{0,t}$" variables are the financial ratios of that fund in month t (now). I set the function up this way because I want to predict next month's return before its data arrive: the data only come in at the end of each month, so I have to guess the return for the following month. As an example, I would like to forecast the return on hedge funds at the end of January 2011 with data from January 2010 to December 2010, without ever seeing the data for January 2011. Is this regression function written correctly?
Does using $Y_{t+1}$ break the model, and is there a model or mathematical function that you can suggest for "predicting next month's return before its data arrive, i.e. predicting the next month with the current data"?
Additional info:
There are 2889 months of data, I keep all 27 different funds in a single database, and I train a single regression model, not one for each fund separately. I used the regression function I showed above ($Y_{t+1} = X_{0,t}+...+ X_{12,t}$).
But it didn't work well with any algorithm; the R-squared score was negative. Why do you think it didn't work, and how can I fix it?
I used both a sliding window and an expanding window for validation. I use dummy variables for the fund names/tickers.
Sample Data:
AI: You could try to predict difference $dy = Y_{t+1} - Y_{t}$, not the $Y_{t+1}$ itself. Intuitively, the weather today is good starting prediction for the weather tomorrow.
Secondly, with time series, seasonality is quite important. You usually want to use models like ARIMA that take this into account.
Lastly, there may simply not be enough data to make good predictions. I don't know what these indexes are exactly, but I'm not sure you can use data from one fund to predict another. This leaves you with roughly 100 points of training data per fund (monthly data over the period), which is not a lot.
I would recommend first solving the 1-d problem (predict one fund using only its own data) with the $dY$ trick, then trying ARIMA.
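A small pandas sketch of the differencing idea (the column and group names are hypothetical):
# per fund, model the change in next month's return rather than the level itself
df = df.sort_values(['fund', 'date'])
df['y_next'] = df.groupby('fund')['return'].shift(-1)   # next month's (normalized) return
df['dy'] = df['y_next'] - df['return']                  # the target to predict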
H: Cross Correlation Between Input-Output Sine Waves
I am writing an algorithm to estimate the frequency transfer function of a system. For this, I want to use the cross-correlation between input-output sine waves method. There are a few things I don't understand about the method:
I - What does capital N mean?
II - Are Is and Ic scalars or vectors (depending on N)?
These are my thoughts:
I - N represents the number of outputs I collected for a specific w value with a given sampling period.
II - Scalar.
Could you help?
AI: These formulas are essentially a discrete Fourier transform.
It's a very standard procedure when you analyze wave-like processes. It transforms the signal from the time domain to the frequency domain. The idea is that you take a sine wave with a specific frequency, multiply it by the signal, and if the resulting value is big then the signal contains something similar to a wave of that frequency.
The $\frac{1}{NT}$ term just normalizes by the overall time of the sequence, $NT$ (N samples of length T).
$I_{s}$ and $I_{c}$ are the amplitudes (intensities) for the sine and cosine, respectively. You get them for the specific frequency $\omega$, and each is a single number. From them you can get the full amplitude (G) and the phase $\phi$. Why do you need the phase? Because it matters: if two waves have a similar frequency and $\phi = \pi$, they cancel each other out, and if they have the same phase, they add up.
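As a heavily hedged sketch (the exact normalisation and sign conventions depend on the formulas in your source), the sums are typically computed like this:
import numpy as np

def correlate_with_sine(y, T, omega):
    # y: the N output samples collected with sampling period T; omega: test frequency
    N = len(y)
    t = np.arange(N) * T
    I_s = np.sum(y * np.sin(omega * t)) / (N * T)   # sine correlation (scalar)
    I_c = np.sum(y * np.cos(omega * t)) / (N * T)   # cosine correlation (scalar)
    G = np.sqrt(I_s ** 2 + I_c ** 2)                # amplitude
    phi = np.arctan2(I_c, I_s)                      # phase (convention may differ in your source)
    return I_s, I_c, G, phi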
H: Building a content-based music recommendation system
I am trying to build a recommendation system in Python that recommends songs based on a playlist.
What I have is two datasets:
1. One dataset consists of 350 songs from my playlist and 13 acoustic features for each one like timber, energy, key, tempo, etc. that I extracted using the Spotify API
2. The other dataset consists of 340 000 songs and their acoustic features (got it from here: https://components.one/datasets/billboard-200/)
I've gotten both datasets in the same format and ready to be worked with.
Now what I am trying to do in Python is use my first dataset to get 30-40 songs with similar acoustic features from the second dataset but I have no idea how to approach this.
Should I use some machine-learning models or do something entirely else?
I thought about comparing the songs from my first dataset to the songs on my second dataset and pulling the ones with a similarity score of let's say over 70% or something like that but I feel like there is probably a much better way of doing this.
AI: If the features are identical, good start would be to use n-neighbors approach.
It would be something like that
from sklearn.neighbors import NearestNeighbors
all_songs_features = [[0, 0, 2], [1, 0, 0], [0, 0, 1], [100, 100, 100], [0, 0, 1.5]]
neigh = NearestNeighbors()
neigh.fit(all_songs_features)
my_song_features = [[0, 0, 1.3], [1.1, 0, 0]]
print(neigh.kneighbors(my_song_features, 2, return_distance=False))
This code returns the indexes of the most similar songs from the bigger dataset.
Note: if the features have different ranges (e.g. one with values 0-1 and another 10,000-1,000,000), then you need to scale them, for example with StandardScaler.
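A short sketch of that scaling step, reusing the variables from the snippet above:
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(all_songs_features)   # fit the scaler on the big catalogue
neigh.fit(scaler.transform(all_songs_features))
print(neigh.kneighbors(scaler.transform(my_song_features), 2, return_distance=False))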
H: MAE estimation for k-fold cross-validation
I have code that estimates RMSE for k-fold cross-validation and I think it is correct (from book: Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition by Aurélien Géron)
scores = cross_val_score(forest_reg, a, b, scoring="neg_mean_squared_error", cv=10)
print(pd.Series(np.sqrt(-scores)).describe())
So what about MAE? Should I use (with sqrt):
scores = cross_val_score(forest_reg, a, b, scoring="neg_mean_absolute_error", cv=10)
print(pd.Series(np.sqrt(-scores)).describe())
or this (without sqrt):
scores = cross_val_score(forest_reg, a, b, scoring="neg_mean_absolute_error", cv=10)
print(pd.Series(-scores).describe())
Also, for the MAE estimate, should it be -scores or scores?
AI: It’s an issue of units.
Compare what you’re doing in RMSE and in MAE.
RMSE is a way of getting MSE back to the original units, like how we take the square root of variance to get standard deviation. This makes more physical sense. Sure, we can make sense of square meters, but what about square dollars?
When you do MAE, you don’t have that squaring action to give you squared units. Consequently, while MSE in in square units, MAE is in the original units.
You could take the square root of MAE, but then you’d wind up with measurements with units of $\sqrt{\$}$ or the square root of whatever units you’re using. The result is that your measure of dispersion is not in the original units, which a probably what you want.
I do not see any use for taking the square root of MAE. If you do, please do share. That would be very interesting. |
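Following this reasoning, a minimal sketch of the MAE case from the question, negating the scores but taking no square root:
scores = cross_val_score(forest_reg, a, b, scoring="neg_mean_absolute_error", cv=10)
print(pd.Series(-scores).describe())   # MAE is already in the original units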
H: Error using image: Error when checking input: expected dense_56_input to have 2 dimensions, but got array with shape
I am trying to train a Sequential model using a simple flow_from_directory(), but I am getting this error. I have tried using fewer layers but the error does not go away.
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Flatten
train_directory = 'D:\D_data\Rock_Paper_Scissors\Train'
training_datgagen = ImageDataGenerator(rescale = 1./255)
training_generator = training_datgagen.flow_from_directory(
train_directory,
target_size = (28,28),
class_mode = 'categorical')
validation_directory = 'D:\D_data\Rock_Paper_Scissors\Test'
validation_datagen = ImageDataGenerator(rescale= 1./255)
validation_generator = validation_datagen.flow_from_directory(
validation_directory,
target_size = (28,28),
class_mode = 'categorical'
)
model = Sequential()
model.add(Dense(128, input_shape = (784,)))
model.add(Dense(64, activation = 'relu'))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(3, activation = 'softmax'))
model.compile(optimizer = 'adam', loss = 'categorical_crossentropy',metrics = ['accuracy'])
model.fit_generator(training_generator,epochs=10)
Here is the error:
File "C:\Users\Ankit\.spyder-py3\temp.py", line 31, in <module>
model.fit_generator(training_generator,epochs=10)
File "C:\Users\Ankit\anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\Ankit\anaconda3\lib\site-packages\keras\engine\training.py", line 1732, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\Ankit\anaconda3\lib\site-packages\keras\engine\training_generator.py", line 220, in fit_generator
reset_metrics=False)
File "C:\Users\Ankit\anaconda3\lib\site-packages\keras\engine\training.py", line 1508, in train_on_batch
class_weight=class_weight)
File "C:\Users\Ankit\anaconda3\lib\site-packages\keras\engine\training.py", line 579, in _standardize_user_data
exception_prefix='input')
File "C:\Users\Ankit\anaconda3\lib\site-packages\keras\engine\training_utils.py", line 135, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected dense_56_input to have 2 dimensions, but got array with shape (32, 28, 28, 3)
AI: The error occurs due to a mismatch in the input shape. In the model you have specified it as input_shape = (784,), but the actual input the model receives has a different shape: the generator yields batches of shape (32, 28, 28, 3), i.e. 32 RGB images of size 28x28 per batch (32 is the batch size). Set the input shape to input_shape = (28, 28, 3), matching your target_size plus the 3 colour channels, and you are good to go. Take a look here for more details on how to specify the input_shape.
Also, you'll have to use convolutional layers to process images, or at least a Flatten layer before the Dense layers!
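A hedged sketch of a fixed version of your dense-only model (reusing your imports; a convolutional architecture would very likely work better for images):
model = Sequential()
model.add(Flatten(input_shape=(28, 28, 3)))   # matches target_size plus 3 colour channels
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])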
H: How to create a matrix from two given vectors in R in RStudio?
Suppose, $c(1, 2, 3, 4)$ and $c(2, 4, 5, 6)$ are two vectors. Then in R or RStudio,
How to create a $4\times 2$ matrix from these two vectors?
Also,
how to add another vector $c(8, 9, 10, 11)$ as a column to the previous matrix?
AI: You can do this with matrix() and cbind():
# combine the two vectors column-wise into a 4 x 2 matrix
x <- c(1, 2, 3, 4)
y <- c(2, 4, 5, 6)
z <- matrix(c(x, y), nrow = length(x))

# add another vector as an extra column
zz <- cbind(z, c(8, 9, 10, 11))
H: Discretisation Using Decision Trees
I'm new to machine learning and working on a supervised classification problem. I used discretization to transform continuous variables into discrete variables, following this article to implement it. But when I repeat the same process with the same values, it generates different boundary values. Can anyone explain this?
X_train, X_test, y_train, y_test = train_test_split(train[['tripid', 'Hour', 'is_FairCorrect']],train.is_FairCorrect , test_size = 0.3)
tree_model = DecisionTreeClassifier(max_depth=2)
tree_model.fit(X_train.Hour.to_frame(), X_train.is_FairCorrect)
X_train['Age_tree']=tree_model.predict_proba(X_train.Hour.to_frame())[:,1]
pd.concat([X_train.groupby(['Age_tree'])['Hour'].min(),
X_train.groupby(['Age_tree'])['Hour'].max()], axis=1)
AI: But when repeat same process with same values it generate different boundary values. Can anyone explain about it?
This is because you're not setting the random_state in the train_test_split, which means that the training data is shuffled in a different way on each run.
With a quick check using one of sklearn's datasets, you can check that this is the issue:
from sklearn.datasets import load_iris
X,y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=4)
tree_model = DecisionTreeClassifier(max_depth=2, random_state=2)
tree_model.fit(X_train, y_train)
y_pred = tree_model.predict_proba(X_train)[:,1]
X_train_df = pd.DataFrame(X_train, columns = ['sepal_len', 'sepal_wid',
'petal_len', 'petal_wid'])
X_train_df['Age_tree'] = tree_model.predict_proba(X_train)[:,1]
X_train_df.Age_tree.unique()
This will produce the same boundaries in all runs, in this case array([0. , 0.90697674, 0.03030303]). Whereas if you don't set the random seed you'll get different probabilities and boundaries on each run. |
H: Getting a best k in KNN Algorithm
So, I was learning the KNN algorithm and there I learnt about cross-validation to find an optimal value of k. Now I want to apply grid search to get that optimal value. I found an answer on Stack Overflow where both StandardScaler and KNN are passed as the estimator.
pipe = Pipeline([
('sc', StandardScaler()),
('knn', KNeighborsClassifier(algorithm='brute'))
])
params = {
'knn__n_neighbors': [3, 5, 7, 9, 11] # usually odd numbers
}
clf = GridSearchCV(estimator=pipe,
param_grid=params,
cv=5,
return_train_score=True) # Turn on cv train scores
clf.fit(X, y)
My questions
I am already applying the StandardScaler to standardize the data before passing it to KNN. So do I still need to pass the StandardScaler in the estimator here?
Why are X and y passed instead of x_train and y_train, assuming x and y are the independent and dependent variables and x_train, y_train are formed after the train_test_split operation?
Any example of such code will be appreciated.
AI: Looking into the linked answer, it appears that they are directly training on X and y since they're using a GridSearchCV, which already includes a k-fold cross validation (5 fold by default). So basically you'll already have a score for the classifier by calling GridSearchCV with the defined pipeline.
That being said, I'd argue that it is never really recommended to do this directly without a final test step to assess the performance of the trained model on unseen data. So even if you do a k-fold cross validation, it is advisable to leave out a test set to get a final score, especially when the k-fold process involved hyper-parameter tuning, as in this case. In such cases you need another validation step that is independent of the tuning.
And in relation to the second point, no, you don't need to include a StandardScaler if the data is already normalised. Though, since you're using pipeline, you might as well include all transformation logic in the pipeline, for the sake of simplicity. |
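A short sketch of that extra hold-out step, reusing the pipeline and parameter grid from the question (X and y being your full feature matrix and labels):
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = GridSearchCV(estimator=pipe, param_grid=params, cv=5)
clf.fit(X_train, y_train)

print(clf.best_params_['knn__n_neighbors'])   # the k selected by cross-validation
print(clf.score(X_test, y_test))              # final check on data not used for tuning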
H: Positive or negative impact of features in prediction with Random Forest
In classification, when we want to get the importance of each variable in the random forest algorithm we usually use Mean Decrease in Gini or Mean Decrease in Accuracy metrics. Now is there a metric which computes the positive or negative effects of each variable not on the predictive accuracy of the model but rather on the dependent variable itself? Something like the beta coefficients in the standard linear regression model but in the context of classification with random forests.
AI: With decision trees you cannot directly get the positive or negative effect of each variable as you would with, say, a linear regression through its coefficients. It's just not the way decision trees work. As you point out, the training process involves finding optimal features and splits at each node by looking at the Gini index or the mutual information with the target variable. But no parameters are learnt during the process which we could use for such an analysis.
A common tool that is used for this purpose is SHAP. In fact, there is a specific explainer for decision trees based models which is the shap.explainers.Tree. With SHAP you can get both the contribution of the features in pushing towards one or another value of the label, and also an overall view of the contribution of all features. |
H: Sort dataframe by date column stored as string
I have a dataframe named df1.
I want to sort the dataframe's month column according to the calendar order of the months (Jan, Feb, March, ...).
For that I used this code:
sorted_df = df1.sort_values(by='month')
print(sorted_df)
but the output is sorted in alphabetical order of the month column.
I think the reason is that the month column's data type is object, so the column is sorted alphabetically.
The question is: how do I sort the values in the month column in the correct order (according to the order of the months in the year)?
My dataframe:
AI: I suggest first separating the month column into day and month using str.split('-')
# create test data
df = pd.DataFrame(['20-Apr', '19-Mar', '4-Dec'], columns=['month'])
# split into separate day and month columns
split = df['month'].str.split('-', expand=True)
df['day'], df['month'] = split[0], split[1]
Now that the month is separated, you can change it to a categorical type so that it can be custom sorted
df['month'] = pd.Categorical(df['month'], ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
Now you can sort
df.sort_values("month")
Hope this helps |
H: How to calculate events per day in R including dates when no events occurred?
I would like to create a data frame in which the first column contains all the dates from a certain period of time and the second contains the number of events that occurred on each date, including dates when no events occurred. I would also like to count the events separately for the specific factors assigned to them.
The first data frame in which I have the events with dates for a given date:
Row Sex Age Date
1 2 36 2004-01-05
2 1 47 2004-01-06
3 1 26 2004-01-10
4 2 23 2004-01-20
5 1 50 2004-01-27
6 2 35 2004-01-28
7 1 35 2004-01-30
8 1 38 2004-02-06
9 2 29 2004-02-11
Where in the column "Sex" 1 means female and 2 male.
Second data frame in which I have dates from the examined period:
Row Date
1 2004-01-05
2 2004-01-06
3 2004-01-07
4 2004-01-08
5 2004-01-09
6 2004-01-10
7 2004-01-11
8 2004-01-12
9 2004-01-13
10 2004-01-14
I want to get a data frame that looks like this:
Row Date Events (All) Events (Female) Events (Male)
1 2004-01-05 1 0 1
2 2004-01-06 1 1 0
3 2004-01-07 0 0 0
4 2004-01-08 0 0 0
5 2004-01-09 0 0 0
6 2004-01-10 0 1 0
7 2004-01-11 0 0 0
8 2004-01-12 0 0 0
9 2004-01-13 0 0 0
10 2004-01-14 0 0 0
Could anyone help?
AI: I made the assumption that:
Events (All) = Events (Female) + Events (Male)
I also manipulated the data so that there are more than one event per day.
As a result, df looks like:
Sex Age Date
1 2 36 2004-01-05
2 1 47 2004-01-05
3 1 26 2004-01-10
4 2 23 2004-01-10
5 1 50 2004-01-27
6 2 35 2004-01-27
7 1 35 2004-01-30
8 1 38 2004-02-30
9 2 29 2004-02-30
The following code should achieve the desired results.
library(dplyr)
library(tidyr)
df = read.csv([Path to dataset], stringsAsFactors = FALSE)
df %>% group_by(Date, Sex) %>%
summarise(sex_count = n()) %>%
spread(Sex,sex_count, fill=0) %>%
rename( event_female = '1', event_male = '2') %>%
mutate(event_all = event_female + event_male ) %>%
select(event_all, event_female, event_male)
output:
Date event_all event_female event_male
<chr> <dbl> <dbl> <dbl>
1 2004-01-05 2 1 1
2 2004-01-10 2 1 1
3 2004-01-27 2 1 1
4 2004-01-30 1 1 0
5 2004-02-30 2 1 1
To also list the dates on which no events occurred, you could right-join this result onto your second data frame of all dates (e.g. with dplyr's right_join) and replace the resulting NAs with 0.
H: Intuition of LDA
Can anyone explain how the LDA-topic model assigns words to topics?
I understand the generative property of the LDA model but how does the model recognize that "Labrador" and "dog" are similar words/ in the same cluster/topic? Is there kind of a similarity measure? The learning parameters of LDA are the the assignment of words to topics, the topic-words probabilities vector and the document-topic probabilities vector. But HOW is it learned?
AI: You are right, LDA is not very intuitive. It involves a lot of mathematics and concepts. However this video should help you
https://youtu.be/3mHy4OSyRf0
Also this article
“Intuitive Guide to Latent Dirichlet Allocation” by Thushan Ganegedara https://link.medium.com/texozcnAc6 |
H: In neural networks model, which number of hidden units to select?
In a neural network model, how many hidden units should we keep to get an optimal result? The Cybenko theorem demonstrates that only one hidden layer is sufficient to solve any regression/classification problem, but the selection of the number of units in that hidden layer is very important because it impacts model performance. Is there a theory that tells us how to choose the optimal number of units for a hidden layer?
AI: Unfortunately not: there is no theory to tell us the right number of units to choose, just as there is no theory for the number of hidden layers to choose. In this respect, Deep Learning is still more an art than a science.
It's true that with one hidden layer we could theoretically solve any problem, but most problems are complicated enough to require unimaginable amounts of computation.
I think in the end it all boils down to two main issues:
Your specific task, i.e. how large and deep a Network must be so that your model works as you need.
The compute power at your disposal.
When it comes to this, I strongly suggest prioritizing depth over width. Deeper networks (the ones with many hidden layers) are much more efficient and powerful than wide networks (the ones with fewer, larger layers). It seems they are better at producing abstractions of the input data, transforming and processing the signal in more sophisticated ways.
H: Confidence interval interpretation in linear regression when errors are not normally distributed
I've read that "If the error distribution is significantly non-normal, confidence intervals may be too wide or too narrow" (source). So, can anyone elaborate on this? When are the confidence intervals narrow and when are they wide? Does it have anything to do with skewness?
AI: OLS Model:
One of the assumptions behind OLS (aka linear regression) is homoskedasticity, namely:
$$ Var(u| x ) = \sigma^2.$$
Recall that the linear model is defined:
$$ y = X \beta + u, $$
where $u$ is the statistical error term. The error term (per the OLS assumptions) needs to have an expected value $E(u|x)=0$ (orthogonality condition) with variance $\sigma^2$, so that the error is distributed $u \sim (0,\sigma^2)$.
Heteroscedasticity:
In case the variance of $u$ is not constant and the assumption above is violated, we say that the error terms are heteroscedastic. Heteroscedasticity does not (!) change the estimated coefficients, but it does affect the (estimated) standard errors and consequently the confidence bands.
The error variance is estimated by:
$$ \hat{\sigma}^2 = 1/(n-2) \sum{\hat{u}^2} .$$
The standard error (of coefficient $\beta$) is estimated by:
$$ se(\hat{\beta}) = \hat{\sigma} / (\sum{(x_i-\bar{x})^2})^{1/2}.$$
The assumption of homoskedasticity is required in order to get proper estimates of the error variance and the ("normal", in contrast to "robust", see below) standard errors. Standard errors in turn are used to calculate confidence bands. So in case you cannot trust the estimated standard errors, you can also not rely on the confidence bands.
The problem here ultimately is, that given heteroscedasticity, you cannot tell if some estimated coefficient is statistically significant or not. Significance here is defined (95% confidence) so that the confidence band of some estimated coefficient does not „cross“ zero (so is strictly positive or negative).
There are different options to deal with heteroscedasticity:
The most common solution is to use "robust" standard errors. There are different versions of "robust" errors (HC1, HC2, HC3). They all have in common, that they aim at getting a "robust" estimate of the error variance. Most software allows you to calculate robust SE. Find an example for R here.
Another alternative would be to estimate a "feasible generalised model" (FGLS) in which you first estimate the scedastic function (to get an idea of the distribution of errors) and you try to "correct" problems in the error distribution. However, this is not something you would use very often in practice. It is more an academic excercise.
Testing heteroscedasticity:
Usually, you would test if there is heteroscedasticity. You can look at the "residual vs. fitted plot" to get an idea of how the error terms are distributed.
However, a proper test can be done using the White or Breusch-Pagan Tests. Here is an example in R. |
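A hedged Python sketch (using statsmodels and simulated data) of both the robust standard errors and the Breusch-Pagan test:
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# simulated data with heteroscedastic errors
rng = np.random.default_rng(0)
x = rng.normal(size=200)
u = rng.normal(scale=1 + np.abs(x))          # error variance depends on x
y = 1 + 2 * x + u

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                     # "normal" standard errors
robust = sm.OLS(y, X).fit(cov_type="HC3")    # heteroscedasticity-robust (HC3) standard errors
print(ols.bse, robust.bse)

# Breusch-Pagan test: a small p-value suggests heteroscedasticity
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, X)
print(lm_pvalue)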
H: Find the time between two events by customer id
I need to find when a customer has bought P1, and after how many days that customer will buy P2.
I am unable to find the days between order of P1 and the next order of P2 by the same customer.
I have data as shown below.
Customer ID Order_Date Product
C-87 11/20/2018 P2
C-87 7/25/2018 P1
C-87 7/19/2019 P1
C-87 8/2/2018 P2
C-87 12/9/2019 P1
... ... ...
C-22 9/22/2018 P2
C-22 9/4/2018 P2
C-22 1/15/2018 P1
C-22 9/5/2019 P2
C-22 3/20/2018 P1
AI: You can first split the dataframe into two, one containing only the P1 orders and the other only the P2 orders.
df1 = df[df['Product'] == 'P1']
df2 = df[df['Product'] == 'P2']
And then merge df1 and df2 on Customer ID only (merging on Product as well would return no rows, since the two frames contain different products):
df1.merge(df2, how='inner', on='Customer ID', suffixes=('_P1', '_P2'))
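If you want the gap in days directly, here is a hedged end-to-end sketch (column names taken from the sample above) that pairs each P1 order with the next P2 order by the same customer, using pandas.merge_asof:
import pandas as pd

# assumes df has the columns shown above: 'Customer ID', 'Order_Date', 'Product'
df['Order_Date'] = pd.to_datetime(df['Order_Date'])

p1 = df[df['Product'] == 'P1'].sort_values('Order_Date')
p2 = (df[df['Product'] == 'P2']
      .sort_values('Order_Date')
      .rename(columns={'Order_Date': 'P2_Date'}))

# direction='forward' pairs each P1 order with the earliest P2 order
# on or after it, matched within the same customer
paired = pd.merge_asof(p1, p2,
                       left_on='Order_Date', right_on='P2_Date',
                       by='Customer ID', direction='forward',
                       suffixes=('_P1', '_P2'))
paired['days_to_P2'] = (paired['P2_Date'] - paired['Order_Date']).dt.days
print(paired[['Customer ID', 'Order_Date', 'P2_Date', 'days_to_P2']])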
H: Classifying boat images
I am trying to get some experience by exploring this Kaggle dataset.
It consists of 1500 pictures of boats classified in 9 categories. The data is as follows :
#x_train consists of 1159 images, with 80% of images from each category
x_train.shape = (1159,200,200,3)
y_train contains the number-label for each boat
y_train.shape = (1159,)
I have tried many variations of models like the following one but without any success.
model = Sequential()
model.add( Conv2D(32, (3,3), input_shape = x_train.shape[1:] , activation='relu') )
model.add(MaxPooling2D(pool_size=(3,3)))
model.add(Flatten())
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
h = model.fit(x_train, y_train, epochs=50,
batch_size = 64,
validation_data = (x_val, y_val) )
Could you give me any advice on how to get a model with decent test_accuracy?
AI: By looking at your code snippet, I realize you are training your CNN from scratch.
Use Transfer Learning instead. Training a new model (choosing the architecture, i.e. how deep your model should be, the hyperparameters, etc.) is very difficult if not impossible with only 1500 images. You can achieve great results quickly by using an already-trained model (aka Transfer Learning). If you are not quite familiar with the subject, read this article: Transfer learning from pre-trained models, or this one: First steps with Transfer Learning for custom image classification with Keras. There is code included that helps you get started faster. One of the recent advances in Transfer Learning is EfficientNet; you may want to jump straight to that one! But I would guess boats would be easy even with earlier models.
H: how does word variety depend on total words?
I want to compare the word variety of several books. But some are short, while others are long. So how can I correct for the fact that longer books will generally contain a larger number of unique words? I tried simply dividing the number of unique words in each book by the total number of words in each book. But I think that was overdoing it, because now the shorter books appear to have the most variety. What is the best approach to this?
AI: The simple answer is to use a sample of equal size from every book, or even better to randomly extract several samples of equal size from every book and then use the mean across samples.
I tried simply dividing the number of unique words in each book by the number of total words in each book
This is known as the type/token ratio, the simplest way to measure lexical density. I think it makes perfect sense in the case you describe; as far as I know it's usually not too biased.
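A minimal Python sketch of the sampling idea above (it assumes each book is already tokenized into a list of words longer than the sample size):
import random

def mean_ttr(tokens, sample_size=2000, n_samples=50, seed=0):
    # mean type/token ratio over several random, equal-sized chunks of the book
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_samples):
        start = rng.randrange(len(tokens) - sample_size)
        chunk = tokens[start:start + sample_size]
        ratios.append(len(set(chunk)) / sample_size)
    return sum(ratios) / n_samples

# usage: compare books of very different lengths on an equal footing
# print(mean_ttr(book_a_tokens), mean_ttr(book_b_tokens))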
H: Negative value in information gain calculation through gini index
I am trying to determine the root node for the decision tree on given data
annual income target variable has been renamed as low, mid, and high.
I am using gini index to measure the impurity of my nodes.
The process I am following is simple:
1- calculate the Gini index for the dataset(target is annual income)
gini(annual income)=1-((5/20)^2+(12/20)^2+(3/20)^2) = 0.445
2 - for each variable calculate gini and then remainder and information gain
3 - choose variable with the highest information gain
for remainder i am using this
just instead of entropy, I am using gini
When I try to calculate the information gain if education becomes the root node, I get a negative information gain (which is obviously not possible).
MY CALCULATION:
as you can see I got a gini index of 0.532 for the node if I do
Information gain (0.445-0.532)=-ve value
can you point towards what am I doing wrong
AI: I quickly checked your calculation and you seem to have miscalculated the gini(annual income)
gini(annual income)=1-((5/20)^2+(12/20)^2+(3/20)^2) = 0.445
It actually equals 0.555 (you probably forgot the 1 - ... part), which is larger than 0.532, so you should be fine for the rest of the calculations.
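A quick way to check the arithmetic:
counts = [5, 12, 3]                      # low / mid / high earners out of 20
n = sum(counts)
gini = 1 - sum((c / n) ** 2 for c in counts)
print(gini)                              # ~0.555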
H: plotting a decision tree based on gridsearchcv
I was trying to plot the decision tree that is formed with GridSearchCV, but it's giving me an AttributeError.
AttributeError: 'GridSearchCV' object has no attribute 'n_features_'
However, if I try to plot a normal decision tree without GridSearchCV, it prints successfully.
code [decision tree without gridsearchcv]
# dtc_entropy : decison tree classifier based on entropy/information Gain
#plotting : decision tree on information/entropy based
from sklearn.tree import export_graphviz
import graphviz
feature_names = x.columns
dot_data = export_graphviz(dtc_entropy, out_file=None, filled=True, rounded=True,
feature_names=feature_names,
class_names=['0','1','2'])
graph = graphviz.Source(dot_data)
graph ### --------------> WORKS
code [decision tree with gridsearchcv]
#plotting : decision tree with GRIDSEARCHCV (dtc_gscv) on information/entropy based
from sklearn.tree import export_graphviz
import graphviz
feature_names = x.columns
dot_data = export_graphviz(dtc_gscv, out_file=None, filled=True, rounded=True,
feature_names=feature_names,
class_names=['0','1','2'])
graph = graphviz.Source(dot_data)
graph ##### ------------> ERROR
Error
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-201-603524707f02> in <module>()
6 dot_data = export_graphviz(dtc_gscv, out_file=None, filled=True, rounded=True,
7 feature_names=feature_names,
----> 8 class_names=['0','1','2'])
9 graph = graphviz.Source(dot_data)
10 graph
1 frames
/usr/local/lib/python3.6/dist-packages/sklearn/tree/_export.py in export(self, decision_tree)
393 # n_features_ in the decision_tree
394 if self.feature_names is not None:
--> 395 if len(self.feature_names) != decision_tree.n_features_:
396 raise ValueError("Length of feature_names, %d "
397 "does not match number of features, %d"
AttributeError: 'GridSearchCV' object has no attribute 'n_features_'
code for decision-tree based on GridSearchCV
dtc=DecisionTreeClassifier()
#use gridsearch to test all values for n_neighbors
dtc_gscv = gsc(dtc, parameter_grid, cv=5,scoring='accuracy',n_jobs=-1)
#fit model to data
dtc_gscv.fit(x_train,y_train)
One solution is taking the best parameters from gridsearchCV and then form a decision tree with those parameters and plot the tree.
However, is there any way to plot the decision tree based on GridSearchCV?
AI: While I don't have the graphviz module installed, I can still try to help. Reading the documentation for GridSearchCV, I can see that there's an attribute called best_estimator_ that provides the estimator chosen by the search. Applying .best_estimator_ to your sample code, it seems to work fine.
clf.fit(iris.data, iris.target)
dot_data = export_graphviz(clf, out_file=None, filled=True, rounded=True,
class_names=['0','1','2'])
-------------------
AttributeError: 'GridSearchCV' object has no attribute 'tree_'
With .best_estimator_
clf.fit(iris.data, iris.target)
dot_data = export_graphviz(clf.best_estimator_, out_file=None, filled=True, rounded=True,
class_names=['0','1','2'])
I get no error. I hope this helps |
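For completeness, here is a minimal self-contained sketch of the same idea (iris and a small illustrative parameter grid stand in for your data and grid; swap in your own feature and class names):
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import graphviz

iris = load_iris()
param_grid = {'max_depth': [2, 3, 4], 'criterion': ['gini', 'entropy']}
clf = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=5, scoring='accuracy')
clf.fit(iris.data, iris.target)

# plot the tree that the search selected, not the search object itself
dot_data = export_graphviz(clf.best_estimator_, out_file=None, filled=True, rounded=True,
                           feature_names=iris.feature_names,
                           class_names=list(iris.target_names))
graph = graphviz.Source(dot_data)
graph                  # renders the tree in a notebook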
H: Fitting a pandas dataframe to a Poisson Distribution
I have a simple dataframe df2 that consists of an index and one column of values. I want to fit this dataframe to a Poisson distribution. Below is the code I am using:
import numpy as np
from scipy.optimize import curve_fit
data=df2.values
bins=df2.index
def poisson(k, lamb):
return (lamb^k/ np.math.factorial(k)) * np.exp(-lamb)
params, cov = curve_fit(poisson, np.array(bins.tolist()), data.flatten())
I get the following error:
TypeError: only size-1 arrays can be converted to Python scalars
AI: I think the cause of the error is the np.math.factorial(k) function call, since curve_fit passes a numpy array as the first parameter to the poisson function, and if you try to run the code
np.math.factorial(np.array([1, 2, 3]))
You'll get the error
TypeError: only size-1 arrays can be converted to Python scalars
Try using scipy.special.factorial since it accepts a numpy array as input instead of only accepting scalars.
Thus, just change your poisson function to
def poisson(k, lamb):
return (lamb**k/ scipy.special.factorial(k)) * np.exp(-lamb)
Hope this helps
EDIT:
Also, I changed ^ to ** since ** is the exponentiation operator in Python (^ is bitwise XOR).
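Putting it together, a hedged self-contained sketch (synthetic data stands in for your df2; note the fitted pmf only matches the data if the counts are normalized to frequencies):
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
from scipy.special import factorial

# synthetic stand-in for df2: normalized frequencies of counts drawn from Poisson(3.5)
rng = np.random.default_rng(0)
samples = rng.poisson(lam=3.5, size=10_000)
df2 = pd.Series(samples).value_counts(normalize=True).sort_index().to_frame('freq')

def poisson(k, lamb):
    return (lamb**k / factorial(k)) * np.exp(-lamb)

bins = df2.index.to_numpy(dtype=float)
data = df2['freq'].to_numpy()
params, cov = curve_fit(poisson, bins, data, p0=[1.0])
print(params)          # should recover something close to 3.5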
H: How to train-test split and cross validate in Surprise?
I wrote the following code below which works:
from surprise.model_selection import cross_validate
cross_validate(algo,dataset,measures=['RMSE', 'MAE'],cv=5, verbose=False, n_jobs=-1)
However when I do this: (notice the trainset is passed here in cross_validate instead of whole dataset)
from surprise.model_selection import train_test_split
trainset, testset = train_test_split(dataset, test_size=test_size)
cross_validate(algo, trainset, measures=['RMSE', 'MAE'],cv=5, verbose=False, n_jobs=-1)
It gives the following error:
AttributeError: 'Trainset' object has no attribute 'raw_ratings'
I looked it up and
Surprise documentation says that Trainset objects are not the same as dataset objects, which makes sense.
However, the documentation does not say how to convert the trainset to dataset.
My question is:
1. Is it possible to convert Surprise Trainset to surprise Dataset?
2. If not, what is the correct way to train-test split the whole dataset and cross-validate?
AI: EDIT: It seems I misunderstood the task at first, so here's my correction. Hope it works this time
It seems like what you're trying to do is similar to what is in the documentation under examples/split_data_for_unbiased_estimation.py (or this github issue which seems to be exactly what you want)
The code manually splits the dataset into two parts without using any splitting function, then sets the internals of the data variable to hold only the train split.
import random
from surprise import SVD
from surprise import Dataset
from surprise import accuracy
from surprise import GridSearch
# Load your full dataset.
data = Dataset.load_builtin('ml-100k')
raw_ratings = data.raw_ratings
# shuffle ratings if you want
random.shuffle(raw_ratings)
# 90% trainset, 10% testset
threshold = int(.9 * len(raw_ratings))
trainset_raw_ratings = raw_ratings[:threshold]
test_raw_ratings = raw_ratings[threshold:]
data.raw_ratings = trainset_raw_ratings # data is now your trainset
data.split(n_folds=3)
# Select your best algo with grid search. Verbosity is buggy, I'll fix it.
print('GRID SEARCH...')
param_grid = {'n_epochs': [5, 10], 'lr_all': [0.002, 0.005]}
grid_search = GridSearch(SVD, param_grid, measures=['RMSE'], verbose=0)
grid_search.evaluate(data)
algo = grid_search.best_estimator['RMSE']
# retrain on the whole train set
trainset = data.build_full_trainset()
algo.train(trainset)
# now test on the trainset
testset = data.construct_testset(trainset_raw_ratings)
predictions = algo.test(testset)
print('Accuracy on the trainset:')
accuracy.rmse(predictions)
# now test on the testset
testset = data.construct_testset(test_raw_ratings)
predictions = algo.test(testset)
print('Accuracy on the testset:')
accuracy.rmse(predictions)
PS: If you feel like this seems a bit hacky and weird... then the core-developer of Scikit-learn that wrote this code also agrees with that sentiment. |
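As a side note, newer versions of Surprise removed GridSearch, evaluate and data.split in favor of surprise.model_selection. The same idea would look roughly like the hedged sketch below (check the docs of your installed version for the exact names):
import random

from surprise import SVD, Dataset, accuracy
from surprise.model_selection import GridSearchCV

data = Dataset.load_builtin('ml-100k')
raw_ratings = data.raw_ratings
random.shuffle(raw_ratings)

threshold = int(0.9 * len(raw_ratings))
train_raw, test_raw = raw_ratings[:threshold], raw_ratings[threshold:]
data.raw_ratings = train_raw                     # data now holds only the train split

param_grid = {'n_epochs': [5, 10], 'lr_all': [0.002, 0.005]}
gs = GridSearchCV(SVD, param_grid, measures=['rmse'], cv=3)
gs.fit(data)
algo = gs.best_estimator['rmse']

algo.fit(data.build_full_trainset())             # retrain on the whole train split
testset = data.construct_testset(test_raw)       # unbiased evaluation on held-out ratings
accuracy.rmse(algo.test(testset))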
H: Test data for statistical t-test in Python
First of all, sorry if this is not the proper place to ask, but I have been trying to create some dummy data in order to run a Student's t-test as well as a Welch t-test, and then run a Monte Carlo simulation. The problem is, I am only given the sample size and standard deviation of the two populations. How can I go about creating some sort of representation of this data so that I can run these tests? I wish to run these tests in either Python or R. Thanks in advance.
EDIT : both populations come from a normal distribution
AI: In Python, to generate random numbers from a certain distribution you would pick the corresponding distribution from np.random (documentation) and pass the corresponding parameters. Thus to draw from a normal distribution you would do
import numpy as np
# for reproducible results, seed the number generator
np.random.seed(42)
n = 100
mu_1, std_1 = 0, 1
mu_2, std_2 = 0.2, 1.5
dataset1 = np.random.normal(loc=mu_1, scale=std_1, size=n)
dataset2 = np.random.normal(loc=mu_2, scale=std_2, size=n)
And the output
print('dataset 1:')
print(f'mean: {dataset1.mean():.2f}')
print(f'std: {dataset1.std():.2f}')
print(f'shape: {dataset1.shape}')
print('--------------')
print('dataset 2:')
print(f'mean: {dataset2.mean():.2f}')
print(f'std: {dataset2.std():.2f}')
print(f'shape: {dataset2.shape}')
--------------
dataset 1:
mean: -0.10
std: 0.90
shape: (100,)
--------------
dataset 2:
mean: 0.23
std: 1.42
shape: (100,)
PS: You don't have to use np.random.seed; it's just there to make the random generator produce the same output every time the code is run.
EDIT: Also, if you want to run a t-test in Python you can use scipy.stats: to calculate the t-test for the means of two independent samples use scipy.stats.ttest_ind, and to calculate the t-test on two related samples use scipy.stats.ttest_rel.
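For example, a hedged follow-up using the two samples generated above:
from scipy import stats

# Student's t-test assumes equal variances; Welch's t-test does not (equal_var=False)
t_student, p_student = stats.ttest_ind(dataset1, dataset2, equal_var=True)
t_welch, p_welch = stats.ttest_ind(dataset1, dataset2, equal_var=False)
print(f'Student: t={t_student:.3f}, p={p_student:.3f}')
print(f'Welch:   t={t_welch:.3f}, p={p_welch:.3f}')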
H: How is the "base value" of SHAP values calculated?
I'm trying to understand how the base value is calculated. So I used an example from SHAP's github notebook, Census income classification with LightGBM.
Right after I trained the lightgbm model, I applied explainer.shap_values() on each row of the test set individually. By using force_plot(), it yields the base value, model output value, and the contributions of features, as shown below:
My understanding is that the base value is derived when the model has no features. But how is it actually calculated in SHAP?
AI: As you say, it's the value of a feature-less model, which generally is the average of the outcome variable in the training set (often in log-odds, if classification). With force_plot, you actually pass your desired base value as the first parameter; in that notebook's case it is explainer.expected_value[1], the average of the second class.
https://github.com/slundberg/shap/blob/06c9d18f3dd014e9ed037a084f48bfaf1bc8f75a/shap/plots/force.py#L31
https://github.com/slundberg/shap/issues/352#issuecomment-447485624 |
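If you want to convince yourself numerically, here is a heavily hedged sketch (model and X_train are placeholders for your trained LightGBM model and its training features; the exact shape of expected_value differs across SHAP versions and model types):
import numpy as np
import shap

explainer = shap.TreeExplainer(model)                      # model: your trained LightGBM model
print(explainer.expected_value)                            # the base value(s), in log-odds here
print(np.mean(model.predict(X_train, raw_score=True)))     # average raw (margin) output
# the two should roughly agree (for the positive class, if expected_value is per-class)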
H: Can I use SVC() as a base_estimtor for ensemble methods?
I am currently testing out a few different ensemble methods on my dataset. I've heard that you can also use support vector machines as base learners in boosting and bagging methods, but I am not sure which methods allow it. In particular, for XGB I tried out trees and SVMs as base learners and got the exact same results for 5 different performance metrics, which made me question the results and/or whether the option can only take trees as base learners. I didn't find much info in the documentation, or at least not in all of the documentation. I would be interested in AdaBoostClassifier(), BaggingClassifier() and XGBClassifier(). Does anybody know the details and whether or not I can use SVMs here as base learners?
AI: In short: Yes.
Conceptually, bagging and boosting are model-agnostic techniques, meaning that they work regardless of the learner.
Bagging essentially is the following:
create multiple predictors (they can even be hard-coded!)
gather predictions from the learners and come up with a prediction
Boosting can be seen as:
train a predictor
find where the predictor makes mistakes
put more emphasis on these mistakes
repeat until satisfactory
Regarding the specific Sklearn implementations, here are the base learners that you can use:
AdaBoostClassifier()
The documentation says Support for sample weighting is required, as well as proper classes_ and n_classes_ attributes.
This means that you can use any model whose fitting procedure supports sample weighting (SVMs, decision trees, etc.); see the sketch at the end of this answer.
BaggingClassifier()
This is a simple bagging strategy, so all estimators can be used here.
GradientBoostingClassifier()
Here it is the loss function, not the learner, that must be differentiable so that the gradient can be computed. The scikit-learn implementation builds regression trees as base learners and does not let you plug in an SVC; the same holds for XGBClassifier, which only supports tree or linear boosters. This would explain why swapping the base learner appeared to make no difference in your experiment.
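A hedged scikit-learn sketch of SVC as the base learner for AdaBoost and Bagging (depending on your scikit-learn version the keyword is base_estimator or, from 1.2 on, estimator; adjust for yours):
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# AdaBoost with an SVC base learner; algorithm='SAMME' avoids needing predict_proba
ada = AdaBoostClassifier(base_estimator=SVC(), algorithm='SAMME', n_estimators=10)
print(cross_val_score(ada, X, y, cv=5).mean())

# Bagging accepts any estimator
bag = BaggingClassifier(base_estimator=SVC(), n_estimators=10)
print(cross_val_score(bag, X, y, cv=5).mean())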
H: What is difference between transductive and inductive in GNN?
It seems that in a GNN (graph neural network), in the transductive setting, we input the whole graph, mask the labels of the validation data, and predict the labels for the validation data.
But it seems that in the inductive setting, we also input the whole graph (though sampled into batches), mask the labels of the validation data, and predict the labels for the validation data.
AI: In inductive learning, during training you are unaware of the nodes used for testing. For the specific inductive dataset here (PPI), the test graphs are disjoint and entirely unseen by the GNN during training. |
H: mini batch vs. batch gradient descent
In batch gradient descent, it is said that one gradient descent update requires processing the whole dataset, which I believe makes an epoch. On the other hand, in the mini-batch algorithm an update is made after every mini-batch, and once every mini-batch is done, one epoch is completed. So in both cases, an epoch is completed after all the data is processed. I do not quite get what makes the mini-batch algorithm more efficient.
Thanks,
AI: In short, batch gradient descent is accurate but plays it safe, and therefore is slow. Mini-batch gradient descent is a bit less accurate, but doesn't play it safe and is much faster.
When you do gradient descent, you use an estimate of the gradient to update your weights. When you use batch gradient descent, your gradient estimate is 100% accurate since it uses all your data.
Mini-batch is considered more efficient because you might be able to get, let's say, an ~80% accurate gradient with only 5% of the data (these numbers are made up). So, your weights may not always be updated optimally (if your estimate is not so good), but you will be able to update your weights more often since you don't need to go through all your data at once.
The idea is that you update your weights more often with an approximation of your gradient, which often is good enough. The utility of mini-batch becomes more obvious when you start dealing with very large datasets. |
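To make the contrast concrete, here is a small hedged numpy sketch (synthetic data, illustrative learning rate) of one epoch of mini-batch gradient descent on a linear regression problem, i.e. many cheap, approximate updates instead of one exact full-batch update:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
lr, batch_size = 0.1, 50
idx = rng.permutation(len(X))                    # shuffle once per epoch
for start in range(0, len(X), batch_size):
    batch = idx[start:start + batch_size]
    Xb, yb = X[batch], y[batch]
    grad = 2 * Xb.T @ (Xb @ w - yb) / len(Xb)    # gradient estimated from the mini-batch only
    w -= lr * grad                               # 20 cheap updates in one pass over the data
print(w)                                         # already close to [1.5, -2.0, 0.5]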
H: Logistic Regression: Is it viable to use data that is outdated?
TLDR: Want to predict who makes the playoffs (1,0), but there are more playoff spots now than there were in the past, is it okay to use that past data?
I want to use binary logistic regression on MLB data to estimate each team's probability of reaching the playoffs this upcoming season.
There is data going back as far as the seasons of the 1870s. However, my issue is that the structure of the playoffs and baseball as a whole has changed often over the years. Specifically, the changes deal with the number of playoff spots, which is in part due to an increase in the number of teams. For example, up until 1969 there were 20 teams, and there was only the championship (World Series), so, technically, only 2 teams made it to the "playoffs". The number of playoff spots has increased gradually to its present state, which is 10, in 2012, and there are now 30 teams.
To me, it makes sense to only use data from 2012 (to 2019) since it reflects the state of the upcoming season. This gives me 240 observations, thus 80 positive outcomes for my playoff (dependent) variable. However, I have about 40 predictors after removing highly correlated ones, which means that I should have way more observations. Though I know that the number of predictors will likely decrease once I fit the model, I still fear my sample size may still be too low. This makes me consider going further back to the previous era beginning in 1994 when there were 8 playoff spots, simply for the sake of more observations.
My question is that would it be viable to use such data in a regression, given that it may not accurately reflect the circumstances of what I'm trying to estimate? Could I maybe even go back to 1969?
I found this article which is pretty much exactly what I'm trying to do, and he uses data back to 1969, but it just seems like an issue to me.
AI: Your thinking is sensible. Indeed, in a perfect world, your training data should be completely representative of the data you'll encounter. However, in practice, you often find that "unrepresentative" data may still have some value.
Ultimately, whatever you do is good if it improves your model, so if using "outdated" data helps, then do it!
Here is what you could experiment with:
Let the data speak
You could compare models using more or fewer data and it might give you an idea of the ideal cutoff point.
Implement "time decay"
Maybe 1870 data is useful, but it's likely less useful than 1871 data and even less useful than last year's. You could weight your training instances based on how old they are, so that your recent data points have a bigger impact (see the sketch at the end of this answer).
Create time-insensitive features
By this I mean you could find a way to frame your problem such that the number of playoff teams doesn't matter. Instead of a binary "playoff/no playoff" outcome, your problem could be to rank the teams; then you can select the playoff teams based on the cutoff for that specific year.
You could also add how many playoff teams there are as a feature so that the learning algorithm is aware of how many spots there are. |
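As an illustration of the time-decay idea, here is a hedged sketch (seasons, X and y are placeholders for your own columns, and the 10-season half-life is an arbitrary choice):
import numpy as np
from sklearn.linear_model import LogisticRegression

current_season = 2019
age = current_season - seasons            # seasons: array with the season of each row (placeholder)
weights = 0.5 ** (age / 10)               # exponential decay, half-life of 10 seasons

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)    # X: team-season features, y: made playoffs (1/0)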
H: Is there a limit in the number of layers for neural network?
I heard the neural network has a problem with vanishing gradient problems even though the ReLU activation function is used.
In ResNet (which has skip connections to reduce the problem), I heard there is a practical limit of roughly 120~190 layers.
For complete AI performance (or general AI with strong intelligence), I believe the limit on the number of layers must be solved.
Is there any possibility that we find a new activation function that does not limit the number of layers? (Maybe we could use exhaustive search, checking the training performance of neural networks with 200~500 layers.)
AI: In recent years, the problem of vanishing/exploding gradients is not causing a lot of trouble anymore. It's still something you should care about, but all the tools and tricks that have been developed in the last 5-7 years have dissipated a lot of worries.
Today, activations from the ReLU family, combined with batch normalization, dropout, and other techniques such as good parameter initialization, have made this problem much less scary.
At this point, the number of hidden layers depends on other main factors:
The computational power available, of course.
The complexity of your dataset. If the signal is very easy and can be learned in a few epochs, too many parameters means none of them is trained enough (the error is backpropagated across all of them, and each receives too small an update).
In Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, Aurélien Géron says that sometimes we don't have to worry too much about model size: just implement a very powerful network and use early stopping to train it just as much as you need. That's a different way of tackling the problem.
In light of that, coming to your observations:
For complete AI performance (or general AI with strong intelligence), I believe the limit on the number of layers must be solved.
I think there is no right number of hidden layers, strictly speaking. It's not a result you can find with a mathematical formula. In many respects, Deep Learning is more an art than a science, and as I explained above, the problem can be tackled in more than one way.
Is there any possibility that we find a new activation function that does not limit the number of layers?
The activation function is only a small piece of a large mosaic; it's not up to the activation function alone to solve the issue. Research on new activation functions is still active though, and very interesting to follow.