H: Choosing the periodicity in a SARIMA model Given the order (P,D,Q,s) of a SARIMA model, s is an integer representing the number of periods in a season. Intuitively, I suppose it would be 12 for monthly data and 4 for quarterly data. But if I have hourly data (for a whole year) and I'm using only a small number of days as training data (say 10 or 14), what is a proper choice for s? AI: The choice of periodicity in a SARIMA model depends on the seasonality in your time-series data. You should plot your time series and see whether a pattern recurs regularly. Based on the "distance" between repetitions (in hours, for your data set) you can make a good guess at the right periodicity. For hourly data, a daily cycle corresponds to s = 24 and a weekly cycle to s = 168, regardless of how many days you use for training.
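A minimal sketch (not part of the original answer) of fitting a SARIMA model on hourly data with a daily seasonal period s = 24 using statsmodels; the file name hourly_data.csv and the (p,d,q)(P,D,Q) orders are assumptions, not recommendations:

import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# hypothetical hourly series with a datetime index
hourly_series = pd.read_csv("hourly_data.csv", index_col=0, parse_dates=True).squeeze()

model = SARIMAX(hourly_series,
                order=(1, 0, 1),               # non-seasonal (p, d, q)
                seasonal_order=(1, 1, 1, 24))  # seasonal (P, D, Q, s): 24 hours per day
result = model.fit(disp=False)
print(result.summary())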
H: Labels are not given for multiclass classification problem I have probably a weird question. If you are dealing with a multiclass classification problem, do you always already have determined target outputs/labels? I have e.g. a huge data set with a lot of features about different city areas (population, population density, number of services, banks and so on). I want to classify objects (houses, buildings) in these city areas based on these features, for instance by whether they are near the city center or not; let's say I want to have 3-5 labels at the end. But I don't know yet how I should determine these labels. Is there a specific approach to solve this? Has anyone had a similar problem? Please advise. P.S. Earlier I calculated the distance between some object (e.g. a house) and the city center point (based on latitude & longitude), and generated labels based on those distances. But this approach is not universal when we have different cities of different sizes. P.P.S. Do I have to follow unsupervised learning methods? Do clustering and find the clusters, then analyze the clusters to give meanings to those identified clusters, and then solve the problem as a multiclass classification problem? AI: Your problem belongs to "unsupervised" learning in machine learning: you do not yet have training data, i.e. data points with correctly specified labels are not known. You can try different approaches to group/label your data set using the given features, and you will probably have to check by yourself whether your model is "auto"-labeling your data correctly. Possible approaches include clustering (k-means), decision trees, auto-encoding with neural networks, and more. The clustering route is sketched below.
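A rough sketch of the clustering route (the file name and feature column names are assumptions): cluster the city-area features, inspect the cluster profiles, and use the cluster ids as candidate labels for a later classifier.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("city_areas.csv")  # hypothetical file
features = ["population", "population_density", "n_services", "n_banks"]  # assumed columns

X = StandardScaler().fit_transform(df[features])
kmeans = KMeans(n_clusters=4, random_state=0).fit(X)
df["label"] = kmeans.labels_                 # candidate labels, to be inspected and named manually

print(df.groupby("label")[features].mean())  # profile each cluster to give it a meaning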
H: where can i find the algorithm of these papers? I am reading about clinical NER I found 2 papers talking about it Paper 1 and Paper 2 They are talking about algorithms and ML has been used to approach clinical NER. I could not find anywhere on how exactly these algorithms are implemented in these papers. Can anyone help me find it or build it? AI: You can email the authors to ask them if they could share their code with you, but maybe they can't for IP reasons or don't want to share it. Papers like these are not unusual in experimental research. In theory you should be able to reproduce their system following the explanations in the paper. However there are other tools available for biomedical NER: MetaMap, cTakes.
H: why is MSE of prediction way different from loss over batches I am new to machine learning so forgive me if i ask stupid question. I have a time series data and i split it into training and test set. This is my code: from numpy import array from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense # split a univariate sequence into samples def split_sequence(sequence, n_steps_in, n_steps_out): X, y = list(), list() for i in range(len(sequence)): # find the end of this pattern end_ix = i + n_steps_in out_end_ix = end_ix + n_steps_out # check if we are beyond the sequence if out_end_ix > len(sequence): break # gather input and output parts of the pattern seq_x, seq_y = sequence[i:end_ix], sequence[end_ix:out_end_ix] X.append(seq_x) y.append(seq_y) return array(X), array(y) # choose a number of time steps n_steps_in, n_steps_out = 10, 5 # split into samples X, y = split_sequence(trainlist, n_steps_in, n_steps_out) # define model model = Sequential() model.add(Dense(100, activation='relu', input_dim=n_steps_in)) model.add(Dense(n_steps_out)) model.compile(optimizer='adam', loss='mean_squared_error') # fit model history = model.fit(X, y, epochs=2000, verbose=0) # demonstrate prediction x_input = array(testlist[0:10]) x_input = x_input.reshape((1, n_steps_in)) yhat = model.predict(x_input, verbose=0) yhat=list(yhat[0]) when i do print(history.history['loss'][-10:-1]) it gives me roughly 0.55 and when i do from sklearn.metrics import mean_squared_error mean_squared_error(testlist[11:16],yhat) it gives me 0.11. Why is it so different? AI: From my understanding, you are comparing the prediction error on the test set versus the training loss error. When the training error is greater than the test error, that means your model is under-fitted. Under-fitting occurs when there is still room for improvement on the test data. This can happen for a number of reasons: If the model is not powerful enough, is over-regularized, or has simply not been trained long enough. This means the network has not learned the relevant patterns in the training data. There are number of ways you can resolve under-fitting problem: Increase the time of training procedure Increase number of parameters in your model or the complexity of your model
H: At what rate does class imbalance become a problem? I'm working on a data set and wanted to know whether there is a standard ratio at which class imbalance becomes a problem. I have 47 samples in class A and 150 samples in class B; should I use a class-imbalance technique, or are these rates normal? AI: There is no general rule, but you would do well to use such techniques (SMOTE, over/under-sampling, etc.) and try to obtain a 50:50 ratio if you can. You can also refer to my other answer on this topic (just ignore the multiclass part). My suggestion is to create more minority-class samples with a synthetic data generator like SMOTE. You have very few observations (197), so the model may not fit well, and data generation may help in that case. Hope it helps!
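A short sketch of oversampling the minority class with imbalanced-learn's SMOTE; X and y are assumed to be the 197-sample feature matrix and the A/B class labels.

from collections import Counter
from imblearn.over_sampling import SMOTE

print("before:", Counter(y))           # e.g. {'B': 150, 'A': 47}
X_resampled, y_resampled = SMOTE(random_state=42).fit_resample(X, y)
print("after:", Counter(y_resampled))  # roughly 150:150 after oversampling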
H: How to use Inception v3 in Tensorflow I am trying to import Inception v3 in TensorFlow. I wish to apply it after reading this tutorial on object detection. AI: Keras, now fully merged with the new TensorFlow 2.0, allows you to call a long list of pre-trained models. If you want to create an Inception V3, you do: from tensorflow.keras.applications import InceptionV3 The InceptionV3 you just imported is not a model instance itself; you need to create one by calling it: my_model = InceptionV3() At this point, my_model is a Keras Model with the architecture and pre-trained weights of Inception V3, which you can re-train, freeze, save, and load as you need. Check also the full list of models available in the module, it's great.
H: Evaluate clustering by using decision tree unsupervised learning I am trying to evaluate some clustering results that a company did for some data but they used an evaluation method for clustering that i have never seen before. So i would like to ask your opinion and obviously if someone is aware of this method it would be great if he/she could explain to me the whole idea. Clusters have been made to the data set (sample of 250000 rows and 5 features out of 500000 rows) by using k-prototypes as one of the features is categorical. All the combinations of k= 2:10 and lambda = c(0.3,0.5,0.6,1,2,4,6.693558,10) have been made and 3 methods to figure out the best combination have been use. Elbow method (pick the number of clusters and lambda with the min WSS) Silhouette method pick the number of clusters and lambda with the max silhouette) Decision tree They build a decision tree for the data and after that they calculated for every different clustering combination the following value: (inverse leaf size weighted within cluster purity)* cluster size/ total obs and the picked the combination which had the max value. (k=10 and lambda=4) So my question is: Is there such a thing? Can we use the tree to identify which combination will give us higher cluster purity? Also if we can do that can we just use a simple tree without even evaluate how good or bad tree is? And finally, as every single method is giving us different answers how can we decide and pick which one to use to pick the right combinations? I would really appreciate if someone can help me with that. Thanks in advance! AI: That's a good question. Is there such a thing? Can we use the tree to identify which combination will give us higher cluster purity? Clustering and simple decision tree fitting are used together in many cases such as: First, like you mentioned quality of clustering can be measured by using decision tree leafs. I heard this calculation first time (I know some other measures very similar to it) but it makes sence since it still measures how are clusters are distinct from each other and dense. Second (and most used one), fitting a decision tree by assigning cluster labels as class labels. The fit in here should be overfit (training error should be close nearly 0). This let you when you have a new customer (let's say segmentation in e-commerce) you don't have to calculate all distances and find clusters, you just predict the new customer with the tree and assign cluster = segment label to her/him. Also if we can do that can we just use a simple tree without even evaluate how good or bad tree is? No, you cannot. The tree fit well (nearly overfit). Since you want to obtain cluster separation rules in your data rather than obtain a model. And finally, as every single method is giving us different answers how can we decide and pick which one to use to pick the right combinations? In that stage you should think why are we doing this segmentation? With all possible cluster models try to simulate your business approach and compare results. Hope it helps!
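A rough sketch of the second use mentioned above (variable names are assumptions): fit a deliberately deep decision tree on the cluster assignments so that new customers can be routed to a segment cheaply, without recomputing distances.

from sklearn.tree import DecisionTreeClassifier

# X: feature matrix used for clustering, cluster_labels: the k-prototypes output (assumed names)
tree = DecisionTreeClassifier(max_depth=None, random_state=0)  # let it (nearly) overfit on purpose
tree.fit(X, cluster_labels)
print("training accuracy:", tree.score(X, cluster_labels))     # should be close to 1.0

new_customer_segment = tree.predict(X_new)  # X_new: features of a new customer (assumed)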
H: At which step should the SMOTE technique be used for oversampling? I want to use the SMOTE technique for oversampling but I don't know at which pre-processing step I should apply it. My preprocessing steps are: handling missing values, removing outliers, and smoothing the data. Should I use SMOTE before all of these steps, or is it better to use it after them? AI: If you are using Python, you can't apply SMOTE in the presence of null values. In this case the order would be: (1) remove outliers, (2) smooth the data, (3) impute null values (there are some smart options for that, e.g. in R you can impute with random forests), (4) SMOTE. Removing outliers first lets you do better smoothing and imputing.
H: Does Feature Normalization affect Gradient Descent | Linear Regression am new to datascience and i want to learn linear regression so i coded linear regression from scratch and performed gradient descent to find the best $w_\theta$ and $b_\theta$ values using a tutorial. And it went just fine i was able to find the best $w_\theta$ , $b_\theta$ values and i ploted the line-of-best-fit (below). And the gradient descent code i used to find the $w_\theta$ , $b_\theta$ values is below . def step_gradient_descent(data,m,b,learning_rate=0.0001): b_gradient= m_gradient = 0 N=float(len(data)) for i in range(len(data)): [x,y]=data[i] y_=(m*x)+b m_gradient+= - (2/N)*(x*(y-y_)) b_gradient+= -(2/N)*(y-y_) #print("m ={}, b ={}".format(m_gradient,b_gradient)) m_new = m-(learning_rate*m_gradient) b_new = b-(learning_rate*b_gradient) return (m_new,b_new) def perform_gradient_descent(data,m,b,lr=0.0001,epochs=1000): m_array=b_array=[] for i in range(epochs): if(i % 100 == 0 ): print("Running {}/{}".format(i,epochs)) (m,b) = step_gradient_descent(data,m,b,lr) return (m,b,m_array,b_array) and then i performed features normalization / standization on $x$ , $y$ below def featureNormalize(data): mean = np.mean(data , axis=0 ) std = np.std(data , axis=0 ) norm = ( data - mean ) / std return norm and then after that i plotted the line-of-best-fit , which was different from the above plotted one. So things i tried to get back the previous line-of-best-fit as before are changing learning rate ( $r$ ) changing epochs ( $n$ ) which didnot work out. So my understanding is that feature normalization / standization should not have any effect on the gradient descent but that didnot happen in this case. Just curious to figure out what's happening in my case. Link to my notebook Thanks in Advance. AI: Standard normalization let gradient descent converge faster. That's why most cases normalization is applied (of course it also eliminate effect of different scale features values). In this quora topic it is explained well. Your learning rate is very small. I think there is not enough number of iterations to coverge local minimum. This graph explains well the effect of learning rate to gradient descent: You may want to check my another answer for it.
H: What is Sentiment Bias? How will it affect a Lexicon Based Sentiment Analysis? I am comparing deep learning and lexicon/rule-based models for sentiment analysis. When I was doing some research into the limitations of lexicon based models, I came across a journal article that mentioned sentiment bias. However, this article did not explain what exactly sentiment bias is. I am providing the link here: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0202523 AI: What the authors call sentiment bias is the tendency for such systems to give strong results, either positive or negative, as opposed to neutral. Very often sentences or documents are more or less sentiment-neutral, but accumulating the positive or negative weights associated to individual words makes it more likely to result in a non-zero "sentiment value". Very simple example: I am comparing deep learning and lexicon/rule-based models for sentiment analysis. +1 +1 -1 +1 +1 Individually words such as "deep", "learning, "model", "sentiment" can be considered positive; "rule" can be considered negative. As a result your sentence would receive a strong positive score of +3, even though it's actually neutral.
H: Why must a CNN have a fixed input size? Right now I'm studying Convolutional Neural Networks. Why must a CNN have a fixed input size? I know that it is possible to overcome this problem (with fully convolutional neural networks etc...), and I also know that it is due to the fully connected layers placed at the end of the network. But why? I can not understand what the presence of the fully connected layers implies and why we are forced to have a fixed input size. AI: I think the answer to this question is weight sharing in convolutional layers, which you don't have in fully-connected ones. In convolutional layers you only train the kernel, which is then convolved with the input of that layer. If you make the input larger, you still would use the same kernel, only the size of the output would also increase accordingly. The same is true for pooling layers. So, for convolutional layers the number of trainable weights is (mostly) independent of input and output size, but output size is determined by input size and vice versa. In fully-connected layers you train weight to connect every dimension of the input with every dimension of the output, so if you made the input larger, you would require more weights. But you cannot just make up new weights, they would need to be trained. So, for fully-connected layers the weight matrix determines both input and output size. Since CNN often have one or more fully-connected layers in the end, there is a constraint on what the input dimension to the fully-connected layers has to be, which in turn determines the input size of the highest convolutional layer, which in turn determines the input size of the second highest convolutional layer and so on and so on, until you reach the input layer.
H: Chunking Sentences with Spacy I have a lot of sentences (500k) which looks like this: "Penalty missed! Bad penalty by Felipe Brisola - Riga FC - shot with right foot is very close to the goal. Felipe Brisola should be disappointed." "Penalty saved! Damir Kojasevic - Sutjeska Niksic - fails to capitalise on this great opportunity, shot with right foot saved in the centre of the goal." "Penalty saved! Stefan Panic - Riga FC - fails to capitalise on this great opportunity, shot with right foot saved in the centre of the goal." "Penalty saved! Georgie Kelly - Dundalk - fails to capitalise on this great opportunity, shot with right foot saved in the centre of the goal." "Penalty missed! Still FC København 1, Crvena Zvezda 1. Marko Marin - Crvena Zvezda - hits the bar with a shot with right foot." As you see, they are not really robotic, and after ending up writing 1500 lines of php code (with regex) and still being inconsistent, I decided to see my alternatives with machine learning. What I am trying to achieve is: For example this one: "Penalty saved! Stefan Panic - Riga FC - fails to capitalise on this great opportunity, shot with right foot saved in the centre of the goal." type => penalty action => saved reason => shot with right foot saved in the centre of the goal person => Stefan Panic I stumbled upon spaCy and saw "Named Entity Recognition" and thought maybe I can use it for this purpose. Especially as I have huge training data. I wanted to ask: Is spaCy's Named Entity Recognition is right for this task? If not, what should I try to learn for this task? P.S: I know a little about python but nothing about ML AI: Named Entity Recognition (NER) would extract names of people, organizations and such. Example: "Penalty missed! Bad penalty by <person>Felipe Brisola</person> - <organization>Riga FC</organization> - shot with right foot is very close to the goal. <person>Felipe Brisola</person> should be disappointed." So it could be helpful for the "person" field, but probably not for the rest. Note that you could also train a system similar to NER in order to predict other fields, but it would require a good amount of annotated data and it's not sure to work well.
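A small illustration of what spaCy's pretrained NER returns on one of these sentences; it requires the "en_core_web_sm" model to be downloaded first (python -m spacy download en_core_web_sm).

import spacy

nlp = spacy.load("en_core_web_sm")
text = ("Penalty saved! Stefan Panic - Riga FC - fails to capitalise on this great "
        "opportunity, shot with right foot saved in the centre of the goal.")
doc = nlp(text)
print([(ent.text, ent.label_) for ent in doc.ents])
# Expected to pick up entities such as ('Stefan Panic', 'PERSON') and ('Riga FC', 'ORG'),
# but the type/action/reason fields would still need rules or a custom-trained model.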
H: Input 0 is incompatible with layer conv2d_2: expected ndim=4, found ndim=3 I get this error in Tensor flow, What does it mean and how can I fix it? import pickle import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten, Activation from keras.layers import Conv2D, MaxPooling2D from keras.utils import to_categorical import numpy as np X = pickle.load(open('cancer_image_features.pickle','rb')) y = pickle.load(open('cancer_image_lables.pickle','rb')) X = X/255.0 y = to_categorical(y) model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',input_shape=X.shape[1:])) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, kernel_size=(3, 3),activation='relu')) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Activation("softmax")) model.add (Flatten()) model.add(Dense(3)) model.compile ( loss = 'binary_crossentropy', optimizer = 'adam' , metrics = ['accuracy']) model.fit(X,y, batch_size = 512, epochs = 10, validation_split= 0.3) AI: The error is likely to come from this line: model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',input_shape=X.shape[1:])) Input_shape should be a 4dim vectors as stated in the keras doc: Input shape 4D tensor with shape: (batch, channels, rows, cols) if data_format is "channels_first" or 4D tensor with shape: (batch, rows, cols, channels) if data_format is "channels_last". You may have to reshape your data, as stated here: https://stackoverflow.com/q/43895750/8119313
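A likely fix, sketched under the assumption that the images are grayscale and stored as an array of shape (n, rows, cols): add an explicit channel dimension so the input becomes 4D, as Conv2D expects. If the images are RGB, the array should instead already have shape (n, rows, cols, 3).

# reshape to (batch, rows, cols, channels) before building the model
X = X.reshape(-1, X.shape[1], X.shape[2], 1)
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=X.shape[1:]))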
H: How to gauge the Complexity of Pre trained Neural Networks? What does one mean when they are talking about the simplicity of the networks? Does it mean that the shallower the network, the simpler it is, or does it mean that the fewer the trainable parameters, the simpler the model? AI: It can mean both, depending on the context. In order to improve prediction, a network might need either more parameters or more depth, but these features improve models in different ways. For example, it's true that a network with too few parameters won't be able to learn much; there is a minimum number of parameters required simply to transform and represent signals in a sufficiently sophisticated way. On the other hand, depth is fundamental in order to make your model generate more complex abstractions that will make it superior to other ML algorithms. In other words: the number of parameters overall tells you how many things your network can learn. Its depth tells you how complex and sophisticated these things can be. Deep neural network complexity is a multifaceted thing; it can't be represented on a single dimension.
H: 2nd, 3rd, Nth closest guesses I have used the KMeans algorithm to create an engine that can guess the cluster that a particular set of input data will fall into. Can I use it to guess the 2nd closest cluster, 3rd closest, and so on? Currently, I am using the sklearn.cluster.KMeans library if that's any help - it doesn't seem like the API provides this functionality already. AI: You could simply compute the distance to each cluster center provided in the cluster_centers_ attribute (once the KMeans instance is fitted). The predict method actually does that for the closest cluster center.
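A minimal sketch of the idea: KMeans.transform() returns the distance from each sample to every cluster centre, so sorting those distances gives the 1st, 2nd, 3rd closest clusters. X (training data) and X_new (new inputs) are assumed names.

import numpy as np
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=5, random_state=0).fit(X)
distances = kmeans.transform(X_new)      # shape (n_samples, n_clusters)
ranked = np.argsort(distances, axis=1)   # ranked[:, 0] agrees with kmeans.predict(X_new)
second_closest = ranked[:, 1]
third_closest = ranked[:, 2]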
H: "Super" Optimizer concept I was wondering why there isn't a feature built into common-use ML libraries, like Keras, that plugs many different combinations of layers and nodes to multiple models and trains them simultaneously to single out the best NN architecture for your problem? For example, given training data, validation data, and a loss function, it compares a model consisting of two hidden Dense layers with 256 neurons each, and another model consisting of two hidden Dense layers, the first with 256 and the second with 64. It would then save the model with higher accuracy according to the input loss function. Does something like this exist already? I know something like this exists with SKLearn's GridSeachCV, but I wasn't sure if that's a common practice outside of SKLearn. I feel like I might be oversimplifying a complex problem. Thank you! AI: You could have a look at the AutoML work, which offers a few different ways to optimise a model of parameter space and goes well beyond Sklearn. They even have a tool that wraps around Sklearn! Here is a summary from their homepage: Here is a longer description, and here is a full example based on Keras. These libraries can essentially combine the best known practical methods of optimisation, such as: standard grid-search over parameters Bayesian optimisation of parameters a combination of the two methods: HpBandSter (HyperBand on STERoids)
H: How can I override the values of 1 column in a big dataframe using a small second dataframe? I have a Pandas dataframe (donations_df) that contains thousands of donations. Each donation row has many columns, but the two key ones for my question are: A recipient_id column indicating who received the donation An office column indicating what legislative office the recipient holds. Some of the values of the office column in donations_df are outdated, and I want to replace/override them. So I have a second, much smaller dataframe (overrides_df). It contains the preferred office names I want to use. It has only a few rows (one for each recipient). For example: | recipient_id | office | | ------------ | ------------------ | | 3839314 | Governor | | 24811446 | Secretary of State | | 12733609 | Auditor | | 6676482 | Attorney General | What is the proper way to join/merge/update this? What I want to happen is that for each donation in donations_df, if that donation's recipient_id exists in overrides_df, then then the donation would pick up the new office value from overrides_df What is the proper way to join/merge/update these two to achieve that result? AI: Try this: for recipient_id, office in override_df.values: donations_df.loc[donations_df["recipient_id"] == recipient_id, "office"] = office However, you should use a reference table to avoid this problem!
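An alternative sketch that avoids the Python loop: build a Series indexed by recipient_id and use it to overwrite office only where an override exists.

mapping = overrides_df.set_index("recipient_id")["office"]
donations_df["office"] = (
    donations_df["recipient_id"].map(mapping)  # new office where an override exists, NaN otherwise
    .fillna(donations_df["office"])            # keep the old value everywhere else
)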
H: Detect if word is «common English» word or slang word I have a huge list of short phrases, for example: sql server data analysis # SQL is not a common word bodybuilding # common word export opml # opml is not a common word best ocr mac # ocr and mac are not common words I want to detect if word is not a common word and should not be processes further. I've tried to do this with NLTK, but it gives strange results: result = word in nltk.corpus.words.words() sql = false iso = true mac = true Is there a better way to do this? AI: It all depends on your definition of what a common word is in your domain. You are using an NLTK corpus which likely doesn't fit your domain very well. Either you have a corpus containing the domain you want and you do a simple lookup. Or you don't know in advance and you need to compute these common words from your documents (your short phrases). In that case, the more sentences you have the better. An easy way to do this is using pure Python is to use a counter from collections import Counter documents = [] # here add your list of documents/phrases counter = Counter() for doc in documents: words = doc.split() # assuming that words can be split on whitespaces counter.update(words) counter.most_common() # this will return words ranked by their frequency Then it's up to you to apply a threshold to define what words are common and which aren't. A more advanced approach could be using their TFIDF weights but in that case, you are not necessarily keeping common words, you are keeping "important" ones.
H: Diff. in P-value & F-Stat. Multiple linear regression Even if we have individual p-values for each predictor. Why do we need overall F-statistic? I read this solution but I am not sure if I get it right. Can someone please explain? Source: "An Introduction to Statistical Learning: with Applications in R" by James, Witten, Hastie and Tibshirani AI: The overall F-stat says if your model is significantly better than the naive model that only has an intercept at the average of all response data pooled together. In other words, it measures if $R^2$ is significantly better than 0. In your case, you can eyeball the $R^2$ and see that 0.897 is going to be better than 0. The F-stat and p-value will be more useful when the $R^2$ has a subtle difference from 0. Another place where an F-stat is useful to regression is testing against a less naive model than intercept-only. In fact, the p-values on each parameter in the regression are equal to the p-value you'd get by doing the F-test comparing the full model to a model with that parameter excluded (assuming the typical nice properties that go along with doing parameter inference in linear regression). The reason not to examine the individual p-values for each parameter is because of false discoveries, the usual business about multiple testing. Doing one F-test of the entire equation removes the possibility of false discovery of a significant parameter, in exchange for not being able to pin down which parameters are significant.
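A small illustration (with hypothetical X and y) of where the overall F-statistic shows up alongside the per-coefficient tests in statsmodels:

import statsmodels.api as sm

ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.fvalue, ols.f_pvalue)  # overall test: is the model better than intercept-only?
print(ols.pvalues)               # per-parameter p-values (subject to multiple-testing caveats)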
H: Is there a common strategy to measure if a difference-significance of two areas under two ROC curves I conduct sound detection experiments with mice. I have a stimulus sound and a "noise" sound that shoukd be ignored. I want to measure how well the mouse ignors the noise (with respect to, say, ignoring 100% of the noise stimuli). I have sessions with, say 200 trials of stimulus sound that sould be detected and 40 trials of noise sounds that should be ignored. I created the confusion matrix and ROC curve (I use Matlab). Now I want to know how well my mouse is performing compared with "ignoring all noise stims". Is there a way, or a common used formula, to get a evaluation or a confidence interval of "how good is my ROC AUC comparing to AUC(ROC) = 1"? Thanks! AI: The AUC is a summary statistic of your dataset. As such, if you did the same experiment again you would get a slightly different value - and if repeated several times, you'd get a distribution of values. You probably want to show that your distribution rarely or never includes 0.5 (indicating random classification). When you have an empirical distribution like this, or a histogram of repeated experiments, you can just count how many of your estimates are $\approx 0.5$. If none, you're golden. Since you don't want to do your experiment multiple times, you could 'simulate' that scenario by using samples of your data, but otherwise performing the same process. This is called 'bootstrapping'. You can use this method to construct confidence intervals around your AUC metric, or equivalently, do a statistical test to see if the metric you get is different from 0.5. Some packages have bootstrapped confidence intervals built in to the AUC calculation, so you could just use that.
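The question uses MATLAB, but here is a rough Python sketch of the bootstrap idea (one of several possible implementations): resample trials with replacement, recompute the AUC each time, and read off a percentile confidence interval. y_true (1 for stimulus trials, 0 for noise trials) and y_score (detection scores) are assumed to be NumPy arrays.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_boot = 2000
aucs = []
for _ in range(n_boot):
    idx = rng.integers(0, len(y_true), len(y_true))  # sample trials with replacement
    if len(np.unique(y_true[idx])) < 2:              # need both classes to compute an AUC
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lower, upper = np.percentile(aucs, [2.5, 97.5])
print(f"AUC 95% bootstrap CI: [{lower:.3f}, {upper:.3f}]")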
H: How to handle different categorical embedding sizes in hold out data set I have a pytorch tabular dataset with zip code as a categorical embedding. I'm getting great results on the test set. When I go to run my hold out sample through, it errors out because I have more zip codes in the hold out then what the model was trained on. How do I handle this? In production, the likelihood of seeing a new zip code is high so I need to learn something I can transfer into production. Thank you. AI: One possibility would be to represent the zip codes using some transformation that could be applied to new (unseen) zip codes as well. For example, could you re-represent zip codes as latitude + longitude?
H: Improve performances of a convolutional neural network I am doing image classificaition, and to do this I have built the following neural network: def Network(input_shape, num_classes, regl2 = 0.0001, lr=0.0001): model = Sequential() # C1 Convolutional Layer model.add(Conv2D(filters=96, input_shape=input_shape, kernel_size=(3,3),\ strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation before passing it to the next layer model.add(BatchNormalization()) # C2 Convolutional Layer model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C3 Convolutional Layer model.add(Conv2D(filters=768, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C4 Convolutional Layer model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C5 Convolutional Layer model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C6 Convolutional Layer model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C7 Convolutional Layer model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # Flatten model.add(Flatten()) flatten_shape = (input_shape[0]*input_shape[1]*input_shape[2],) # D1 Dense Layer model.add(Dense(4096, input_shape=flatten_shape, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D2 Dense Layer model.add(Dense(4096, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D3 Dense Layer model.add(Dense(1000,kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # Output Layer model.add(Dense(num_classes)) model.add(Activation('softmax')) # Compile adam = optimizers.Adam(lr=lr) model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy']) return model # create the model model = Network(input_shape,num_classes) model.summary() it works good enough, but I would like to increase its performances. How could I modify it to do so? I was thinking about adding layers, which should give better performances, but I haven' t understand well if I should add convolutional layers or dense layers. Moreover I would like to find other ways to increase accuracy than simply adding layers. Can somebody please help me? Thanks in advance. 
[EDIT] I am considering a training set of 1200 images, which represent 4 wheater conditions : Haze, Rainy, Snowy, Sunny. With my model, the Test accuracy is 0.797500, and the Test loss is 1.881952. I would like to increase more my accuracy, but I don' t have other ideas than adding convolutional layers. I could try to change the size of the kernels and other hyperparameters, but I have other ideas. AI: It sounds like the model is underfit; to verify this, you would check that training and testing accuracy are similar, but, both lower than desired. One cause is likely the very low number of training images for a model of this complexity. You could try augmenting the training set with transformations of those images, for example by rotating the images in the training set? (This was done in the original alexnet paper: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf see section 4.1 on p.5).
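A hedged sketch of the augmentation idea with Keras's ImageDataGenerator; the transformation ranges and the variable names X_train/y_train/X_val/y_val are assumptions, not tuned values.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,            # small random rotations
    width_shift_range=0.1,        # horizontal shifts
    height_shift_range=0.1,       # vertical shifts
    brightness_range=(0.7, 1.3),  # useful for weather images taken in varying light
    horizontal_flip=True,
)

model.fit(datagen.flow(X_train, y_train, batch_size=32),
          epochs=50,
          validation_data=(X_val, y_val))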
H: Convolutional layers without pooling I am studying the CNN architecture of the AlexNet, and I have seen that it has convolutional layers without pooling in between: but I don' understand why this is done. Wouldn't be better to have something like CONV - POOLING - CONV - POOLING , and so on, instead of CONV - POOLING - CONV - CONV -CONV - POOLING? Why is this done? Thanks in advance. AI: The idea behind consecutive convolutional layers with no pooling is actually not to skip any pooling but to replace a single layer with a bigger receptive field. So think of it this way: Two consecutive 3x3 layers actually lead to a receptive field of 5x5. Three consecutive 3x3 layers (like in AlexNet) lead to an actual receptive field of 7x7. Intuitively I like to think of it this way: For two 3x3 layers the actual receptive field gets extended by 1 in all directions as the information is "pulled in" from outside of the 3x3 field by applying two layers consecutively. In the first layer the 3x3 receptive field is applied (here using A and B as examples): Now, that means when the second 3x3 layer is being applied cell B contains information from beyond the dark grey area. And this information in B will be considered for A when you apply the second layer. So basically the receptive field of A has grown as it includes information from beyond its actual 3x3 field: And since this happens in all directions as shown in the second picture you now end of with a sort of 5x5 receptive field. Finally, if you apply this a third time (like AlexNet does with its 3 consecutive 3x3 layers) then the receptive field will be extended a second time by 1 in all directions, i.e. you get a 7x7 receptive field. Question is why would you actually do this and not just use a single layer with a larger receptive field? The paper Very Deep Convolutional Networks for Large-Scale Image Recognition gives two reasons: First, we incorporate three non-linear rectification layers instead of a single one, which makes the decision function more discriminative. [...] Second, we decrease the number of parameters: assuming that both the input and the output of a three-layer 3 × 3 convolution stack has C channels, the stack is parametrised by $3(3^2C^2)=27C^2$ weights; at the same time, a single 7 × 7 conv. layer would require $7^2 C^2=49C^2$ parameters, i.e. 81% more. This can be seen as imposing a regularisation on the 7 × 7 conv. filters, forcing them to have a decomposition through the 3 × 3 filters (with non-linearity injected in between).
H: When should we start using stacking of models? I am solving a Kaggle contest and my single model has reached score of 0.121, I'd like to know when to start using ensembling/stacking to improve the score. I used lasso and xgboost and there obviously must be variance associated with those two algorithms. So stacking should theoretically give me better output than my individual algorithms. But how to idenfity if stacking is worth it and that we've reached dead end to accuracy of a particular model? AI: Stacking is going to help most when individual models capture unique characteristics of the data. It is often the case that different architectures perform similarly, if somewhat differently, on the same data. In those cases, ensembling/stacking will only offer slight incremental benefits. In the limit, of you only care about prediction, you can wire up as many different approaches as you can think of. However if interpretability is key, each additional component model will further complicate things. Your specific question of when to know if it’s worth it or if you’ve reached the limit can be treated like anything else - is your incremental r-square/error/classification accuracy significantly better versus a simpler approach?
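A minimal stacking sketch with scikit-learn (>= 0.22); the base models mirror the ones mentioned in the question, while the hyperparameters and the Ridge meta-learner are only assumptions. Comparing its cross-validated score against each base model under the same CV scheme is one practical way to decide whether stacking is worth it.

from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

stack = StackingRegressor(
    estimators=[("lasso", Lasso(alpha=0.0005)), ("xgb", XGBRegressor(n_estimators=500))],
    final_estimator=Ridge(),
    cv=5,
)
print(cross_val_score(stack, X, y, cv=5, scoring="neg_root_mean_squared_error").mean())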
H: Why might trees work so much better than boosting classifiers? I am predicting 10 classes label encoded using scikit-learn with 6 factors, 1.2M cases. DecisionTreeClassifier RandomForestClassifier ExtraTreesClassifier give accuracies (and precision and recall) of 0.9 AdaBoostClassifier GradientBoostingClassifier give accuracies of 0.2 Any pointers on the huge discrepancy? (I am doing gridsearchcv). Code: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) def output_metrics(): from sklearn.metrics import accuracy_score, precision_score, recall_score print("Accuracy:",accuracy_score(y_test, y_pred)) print('Precision', precision_score(y_test, y_pred, average=None).mean()) print('Recall', recall_score(y_test, y_pred, average=None).mean()) from sklearn.ensemble import AdaBoostClassifier from sklearn.model_selection import GridSearchCV tree_para = { 'n_estimators': [16, 32] } clf = GridSearchCV(AdaBoostClassifier(), tree_para, cv=5) model= clf.fit(X_train, y_train) y_pred = clf.predict(X_test) output_metrics() AI: As a disclaimer, I would like to point out that the performance in this specific case might still be due to the specific dataset used. A likely explanation lies in the essence of what trees and boosting algorithms do. As @akvall pointed out in the comments, Boosting algorithms can often overfit since this is what they are designed to do! As a reminder, regardless of how fancy a boosting algorithm works, it follows the following logic: train on the training set evaluate to see which mistakes were made retrain and focus more on the previous mistakes repeat until satisfactory results Trees do not work the same way and are therefore less prone to overfitting. A Random Forest will simply compute independent trees and use majority voting to make a prediction. Every boosting algorithm will "boost" for a certain amount of iterations, it is probably wise to look at how many times your algorithms did "boost".
H: Accuracy of KFold Cross Validation for Neural Network I have a neural network that I'm evaluating using 10-fold cross validation. The validation accuracy for a fold changes a lot during training, in a range of about ±10%. So for example the validation accuracy of a fold would range between 70% and 80%. My question is which number I should consider to be this fold's accuracy. Should I just take the maximum validation accuracy reached while training, or should I run the training for a certain number of epochs and take the last number (the second approach's result will depend on luck)? Thanks AI: For some context, the entire point of running k-fold cross-validation is to compute an estimate of the performance. If the obtained values vary a lot, it just means that your estimate is less precise (i.e. has a larger standard deviation). In a neural network setting, your network evolves, so averaging over the whole training run might give you a less accurate estimate if the network learned a lot in the last few iterations, for instance. Typically, you would take the value at the end of training, since this is the performance you currently have. However, it is good to note that analyzing the variance of your performance within a short window can help you understand whether or not your model has converged or whether it is still "exploring".
H: How is loss computed for multiclass CNN with an output layer larger than the number of classes? I have built a CNN in pytorch for classifying the Fashion-MNIST dataset (10 classes). The images are 28x28. I have constructed the final layer in my model as an output of 50. (i.e. $nn.Linear(100, 50)$). Also I am using cross entropy loss. I am confused about how loss is calculated for these data sizes. From what I had known about backpropagation and loss function, the output of the neural net is compared with the expected result. For example, using mean square error, the loss function is $(output - expected)^2$. So if I had a binary classifier, say the class labels are $({0,1})$ then the output of the neural network would need to be one dimension to compute the loss. Now if I had three classes, how would you calculate loss? How many outputs would you need? Since the expected class label is still just a single digit, I don't see how loss can be calculated if the output of the neural network is more than one dimension. For example, if the output is $[ x1, x2, x3]$ and the expected class label is $y$, I don't see how loss could be calculated since the dimensions don't agree. So how is loss computed against a class label when the output of a neural network isn't a single digit? AI: There are many great tutorials covering how exactly cross entropy loss works. The key thing is : the classifier outputs probability for each class and not a single label. Taking your example, if there are 3 classes then the network will output something like [0.5, 0.25, 0.25] giving probabilities for each class. You can treat expected output as [1, 0, 0] if the output is, say, label 0. Now the dimensions agree and you can calculate the loss.
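A small shape check (a sketch, not the questioner's exact model): nn.CrossEntropyLoss takes raw scores of shape (batch, n_classes) and integer class labels of shape (batch,), so no one-hot encoding is needed.

import torch
import torch.nn as nn

logits = torch.randn(4, 10)           # a batch of 4 predictions over 10 Fashion-MNIST classes
targets = torch.tensor([3, 0, 9, 1])  # expected class indices
loss = nn.CrossEntropyLoss()(logits, targets)
print(loss.item())
# Note: for 10 classes the final linear layer would normally output 10 values, not 50.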
H: Remove noise by clustering on which step of pre-processing is better? I am working on a classification task. The dataset is a UCI data set about machine learning with 200 observations and 2 classes. Part of my model includes the following preprocessing steps: remove missing values normalize between 0 and 1 remove outlier smoothing remove trend from data SMOTE I would like to use a clustering method to remove noisy data points. The question is, at which step should this happen? AI: Looking at your different steps, the important thing to do is check which step would be affected by outliers. Removing missing values is not affected because this step is not dependent on other data points present (or not) in the dataset. However, normalizing your data is. Indeed, let's say your outliers contain extreme values, this will affect the normalized values of the non-outlier data points. Therefore, intuitively, I would perform your noise removal at the very start or after step 1. Ultimately, you should see what works better for your task. Perhaps removing outliers doesn't help as much as you'd expect. Same with your pre-processing. Feel free to experiment!
H: How to apply data binning on reviews data? I need to apply data binning on a set of reviews. I have searched for data binning methods for reviews and long texts and couldn't find anything other than classification. Is NLP or classification the only way to bin long-text data? AI: You can bin on the length of the text. You can also bin on keywords you are interested in. You can use CountVectorizer with the Yellowbrick library to produce a frequency plot, or tokenize into n-grams and look at their frequencies. It depends on the problem you are trying to solve. You can also run sentiment analysis on each review and bin them into positive, negative and neutral. There are many ways you can think of; I hope this helps.
H: Solutions for big data preprecessing for feeding deep neural network models built with TensorFlow 2.0? Currently I am using Python, Numpy, pandas, scikit-learn to do data preprocessing (LabelEncoder, MinMaxScaler, fillna, etc.), and then feeding the processed data to DNN models built with Tensorflow 2.0. This input pipeline meets my needs when data is small enough to fit a PC's RAM. Now I have some large datasets, more than 10GB, some are larger. I also plan to deploy the models in a production environment, which means there will be new data coming everyday. For DNN model training there is distributed strategy of tensorflow 2.0. But for data preprocessing obviously I cannot use pandas, scikitlearn on the large datasets with one PC. It seems to me I need to use a for-loop where I repeatedly fetch a small part of the data and use it for training? I am wondering what do people typically use in either experiment or production environment for big data preprocessing? Should I use Spark(PySpark) and Tensorflow input pipeline? AI: Looking at your use case, Dask, H2O, Modin, Koalas and Vaex would better for scaling your data preprocessing apart from Pyspark. They have API's similar to pandas thus porting your existing code would be easier. But you would need to set them for your target environment.
H: Pearson vs Spearman vs Kendall What are the characteristics of the three correlation coefficients and what are the comparisons of each of them/assumptions? Can somebody kindly take me through the concepts? AI: Correlation is a bivariate analysis that measures the strength of association between two variables and the direction of the relationship. In terms of the strength of the relationship, the value of the correlation coefficient varies between +1 and -1. A value of ± 1 indicates a perfect degree of association between the two variables. As the correlation coefficient value goes towards 0, the relationship between the two variables will be weaker. The direction of the relationship is indicated by the sign of the coefficient; a + sign indicates a positive relationship and a – sign indicates a negative relationship. Pearson's correlation coefficient and the others are the non-parametric method, Spearman's rank correlation coefficient and Kendall's tau coefficient. Pearson's Correlation Coefficient $$ r = \frac{\sum(X - \overline{X})(Y - \overline{Y})} {\sqrt{\sum(X-\overline{X})^{2}\cdot\sum(Y-\overline{Y})^{2}}}\\ ~ \\ \begin{align} Where, ~ \overline{X} &= mean ~ of ~ X~variable\\ \overline{Y} &= mean ~ of ~ Y ~ variable\\ \end{align} $$ Assumptions: Each observation should have a pair of values. Each variable should be continuous. It should be the absence of outliers. It assumes linearity and homoscedasticity. Spearman's Rank Correlation Coefficient $$\rho = \frac{\sum_{i=1}^{n}(R(x_i) - \overline{R(x)})(R(y_i) - \overline{R(y)})} {\sqrt{\sum_{i=1}^{n}(R(x_i) - \overline{R(x)})^{2}\cdot\sum_{i=1}^{n}(R(y_i)-\overline{R(y)})^{2}}} = 1 - \frac{6\sum_{i=1}^{n}(R(x_i) - R(y_i))^{2}}{n(n^{2} - 1)}\\ ~ \\ \begin{align} Where, ~ R(x_i) &= rank ~ of ~ x_i\\ R(y_i) &= rank ~ of ~ y_i\\ \overline{R(x)} &=mean ~ rank ~ of ~ x\\ \overline{R(y)} &=mean ~ rank ~ of ~ y\\ n &= number ~ of ~ pairs \end{align} $$ Assumptions: Pairs of observations are independent. Two variables should be measured on an ordinal, interval or ratio scale. It assumes that there is a monotonic relationship between the two variables. Kendall's Tau Coefficient $$ \tau = \frac{n_c - n_d}{n_c + n_d} = \frac{n_c - n_d}{n(n-1)/2}\\ ~ \\ \begin{align} Where, ~ n_c &= number ~ of ~ concordant ~ pairs\\ n_d &= number ~ of ~ discordant ~ pairs\\ n &= number ~ of ~ pairs \end{align} $$ Assumptions: It's the same as assumptions of Spearman's rank correlation coefficient Comparison of Each Correlation Coefficients Pearson correlation vs Spearman and Kendall correlation Non-parametric correlations are less powerful because they use less information in their calculations. In the case of Pearson's correlation uses information about the mean and deviation from the mean, while non-parametric correlations use only the ordinal information and scores of pairs. In the case of non-parametric correlation, it's possible that the X and Y values can be continuous or ordinal, and approximate normal distributions for X and Y are not required. But in the case of Pearson's correlation, it assumes the distributions of X and Y should be normal distribution and also be continuous. Correlation coefficients only measure linear (Pearson) or monotonic (Spearman and Kendall) relationships. Spearman correlation vs Kendall correlation In the normal case, Kendall correlation is more robust and efficient than Spearman correlation. It means that Kendall correlation is preferred when there are small samples or some outliers. 
Kendall correlation has a O(n^2) computation complexity comparing with O(n logn) of Spearman correlation, where n is the sample size. Spearman’s rho usually is larger than Kendall’s tau. The interpretation of Kendall’s tau in terms of the probabilities of observing the agreeable (concordant) and non-agreeable (discordant) pairs is very direct. Example Python Implementation
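A minimal illustration with scipy.stats on two toy samples; each function returns the coefficient together with its p-value.

from scipy.stats import pearsonr, spearmanr, kendalltau

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 7, 8, 6, 9]

print("Pearson r   :", pearsonr(x, y))
print("Spearman rho:", spearmanr(x, y))
print("Kendall tau :", kendalltau(x, y))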
H: What is a channel in a CNN? I was reading an article about convolutional neural networks, and I found something that I don't understand, which is: The filter must have the same number of channels as the input image so that the element-wise multiplication can take place. Now, what I don't understand is: What is a channel in a convolutional neural network? I have tried looking for the answer, but can't understand what is it yet. Can someone explain it to me? Thanks in advance. AI: Let's assume that we are talking about 2D convolutions applied on images. In a grayscale image, the data is a matrix of dimensions $w \times h$, where $w$ is the width of the image and $h$ is its height. In a color image, we normally have 3 channels: red, green and blue; this way, a color image can be represented as a matrix of dimensions $w \times h \times c$, where $c$ is the number of channels, that is, 3. A convolution layer receives the image ($w \times h \times c$) as input, and generates as output an activation map of dimensions $w' \times h' \times c'$. The number of input channels in the convolution is $c$, while the number of output channels is $c'$. The filter for such a convolution is a tensor of dimensions $f \times f \times c \times c'$, where $f$ is the filter size (normally 3 or 5). This way, the number of channels is the depth of the matrices involved in the convolutions. Also, a convolution operation defines the variation in such depth by specifying input and output channels. These explanations are directly extrapolable to 1D signals or 3D signals, but the analogy with image channels made it more appropriate to use 2D signals in the example.
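A quick way to see the channel dimensions in practice (a Keras sketch): a 3-channel RGB input convolved with 16 filters of size 3x3 uses a kernel of shape (3, 3, 3, 16), i.e. f x f x c x c'.

from tensorflow.keras.layers import Conv2D, Input
from tensorflow.keras.models import Model

inp = Input(shape=(64, 64, 3))  # w x h x c, with c = 3 colour channels
out = Conv2D(filters=16, kernel_size=3, padding="same")(inp)
model = Model(inp, out)
print(model.output_shape)            # (None, 64, 64, 16): c' = 16 output channels
print(model.layers[1].kernel.shape)  # (3, 3, 3, 16): f x f x c x c'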
H: Use of decision trees for classifying images I am new at Machine Learning and reading about it I wonder if it is possible (and convenient) to use decision trees to classify images. For instance, to classify faces AI: If you're looking to classify faces, you can use decision trees, however, they are not expected to provide extremely good results. Why? Images, and especially faces heavily rely on local relationships between features (i.e. pixels close to each other). Decision trees do not take this into account, and therefore, results may not be great, or may be heavily affected by noise. Also, trees are powerful, but typically, they are useful when concise, which requires features to be meaningful. However, images have some of the least meaningful features out there (pixels)
H: How to build an overfitted network in order to increase performance I am learning how to implement CNNs, and searching on the internet I have found that a trick to design a good network is to first build it in such a way that it overfits, and then use regularization to eliminate the overfitting and end up with a well-performing network. But how do I do this? I don't understand how to build a network that overfits on purpose, and in which way I should apply regularization afterwards. Can someone help me? Thanks in advance. AI: Overfitting generally comes from an unnecessarily complex model. So if you train a model that is too complex (large), you will get high training accuracy and low test-set accuracy. Then you can start to fight the overfitting by getting more data, adding dropout, regularization, early stopping, global average pooling, feature scale clipping, or dropping some layers from your network.
H: What does it mean the term variation for an image dataset? I am working with convolutional neural networks, and I have seen that often we need to pre process the images before feeding them to the network. In particular, I have seen that often we have to do image augmentation using an image generator. Now, when looking for a clarification on why we need to do this, I came across an article which says: Image Augmentations techniques are methods of artificially increasing the variations of images in our data-set by using horizontal/vertical flips, rotations, variations in brightness of images, horizontal/vertical shifts etc. What I don't understand is what is the variation of a dataset. Can somebody help me? Thanks in advance. AI: Assume, you have two different datasets of cat images: Dataset 1 Images were taken under full daylight. The cat covers always almost the whole image. The cat is always in the center of the image. Dataset 2 The images were taken under various lighting conditions: some under full daylight, some in rooms, some during the night. The cats were differently far away from the camera, e.g. sometimes the cat covers only 20% of the image. The cats are in different positions in the images, e.g. sometimes, the cat is in the center but other times, it is in the upper right corner. In this case, we would say that Dataset 2 has a higher variation than Dataset 1. Generally, we use data augmentation to simulate missing information in datasets. For example, you could decrease the brightness of some images in Dataset 1 to simulate how a cat looks at night. Or you could zoom out to simulate how a cat looks from farther away. Note that a perfect dataset that contained all relevant information would not require any data augmentation at all. However, gathering data is expensive and data augmentation is cheap. Therefore, we almost always have to augment.
H: What is cohen kappa metric, implementation in Python? Can somebody explain indetail explanation on Quadratic Kappa Metric/cohen kappa metric with implementation in Python AI: Quadratic Kappa Metric is the same as cohen kappa metric in Sci-kit learn @ sklearn.metrics.cohen_kappa_score when weights are set to 'Quadratic'. quadratic weighted kappa, which measures the agreement between two ratings. This metric typically varies from 0 (random agreement between raters) to 1 (complete agreement between raters). In the event that there is less agreement between the raters than expected by chance, the metric may go below 0. The quadratic weighted kappa is calculated between the scores which are expected/known and the predicted scores. Results have 5 possible ratings, 0,1,2,3,4. The quadratic weighted kappa is calculated as follows. First, an N x N histogram matrix O is constructed, such that Oi,j corresponds to the number of adoption records that have a rating of i (actual) and received a predicted rating j. An N-by-N matrix of weights, w, is calculated based on the difference between actual and predicted rating scores. An N-by-N histogram matrix of expected ratings, E, is calculated, assuming that there is no correlation between rating scores. This is calculated as the outer product between the actual rating's histogram vector of ratings and the predicted rating's histogram vector of ratings, normalized such that E and O have the same sum. From these three matrices, the quadratic weighted kappa is calculated. Code implementation in Python Breaking down the formula into parts 5 step breakdown for Weighted Kappa Metric First, create a multi-class confusion matrix O between predicted and actual ratings. Second, construct a weight matrix w which calculates the weight between the actual and predicted ratings. Third, calculate value_counts() for each rating in preds and actuals. Fourth, calculate E, which is the outer product of two value_count vectors Fifth, normalize the E and O matrix Calculate weighted kappa as per formula Each Step Explained Step-1: Under Step-1, we shall be calculating a confusion_matrix between the Predicted and Actual values. Here is a great resource to know more about confusion_matrix. Step-2: Under Step-2, under step-2 each element is weighted. Predictions that are further away from actuals are marked harshly than predictions that are closer to actuals. We will have a less score if our prediction is 5 and actual is 3 as compared to a prediction of 4 in the same case. Step-3: We create two vectors, one for preds and one for actuals, which tells us how many values of each rating exist in both vectors. Step-4: E is the Expected Matrix which is the outer product of the two vectors calculated in step-3. Step-5: Normalise both matrices to have the same sum. Since it is easiest to get the sum to be '1', we will simply divide each matrix by its sum to normalize the data. Step-6: Calculated numerator and denominator of Weighted Kappa and return the Weighted Kappa metric as 1-(num/den) More Info
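In practice the metric is a one-liner with scikit-learn; the manual steps above describe what it does under the hood. The ratings below are made-up examples on the 0-4 scale.

from sklearn.metrics import cohen_kappa_score

actual    = [0, 2, 4, 4, 2, 1, 3, 0, 2, 4]
predicted = [0, 2, 3, 4, 1, 1, 3, 1, 2, 4]
print(cohen_kappa_score(actual, predicted, weights="quadratic"))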
H: Using Amazon Personalize to build a Recommendation System I would like to build a recommendation system based only in the items metadata. I have an input vector with some desirable topics that the user want to read about, for example: (self-help, yoga, sports) On the other hand I have a dataset with books with Title, Description, among other fields. Up to now I am building this model locally with Python, using clustering to group books by similarity. However, I would like to build a Recommendation System using Amazon Personalize for this case using only item metadata. I do not want to add other info as ratings for now. Is it possible? Do you know some example? AI: As far as I have researched, Amazon Personalize is not useful for this project. This is because Amazon Personalize requires User-Item Interaction data to train the model. To use Amazon Personalize, you need a dataset with historical data that contains at least the fields: User Item Date where the Date field represent the time when the user interacted with the Item.
H: python tsne.transform does not exist? I am trying to transform two datasets: x_train and x_test using tsne. I assume the way to do this is to fit tsne to x_train, and then transform x_test and x_train. But, I am not able to transform any of the datasets. tsne = TSNE(random_state = 420, n_components=2, verbose=1, perplexity=5, n_iter=350).fit(x_train) I assume that tsne has been fitted to x_train. But, when I do this: x_train_tse = tsne.transform(x_subset) I get: AttributeError: 'TSNE' object has no attribute 'transform' Any help will be appreciated. (I know I could do fit_transform, but wouldn't I get the same error on x_test?) AI: That is a deliberate limitation of TSNE in sklearn: t-SNE is a non-parametric method, so it only learns an embedding for the exact points it was fitted on and does not learn a reusable mapping that could be applied to new data. For that reason the estimator exposes fit_transform but no transform, and you would indeed hit the same error on x_test. The usual workaround is to run fit_transform on the combined train and test data (fine for visualisation, although it mixes test data into the embedding), or to switch to a method that does learn a mapping, such as PCA. See the discussion in https://github.com/scikit-learn/scikit-learn/issues/5361
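A minimal sketch of the combined fit_transform workaround, assuming x_train and x_test are NumPy arrays:

import numpy as np
from sklearn.manifold import TSNE

x_all = np.vstack([x_train, x_test])   # embed train and test together
x_all_2d = TSNE(n_components=2, perplexity=5, random_state=420).fit_transform(x_all)

x_train_2d = x_all_2d[:len(x_train)]   # split the embedding back afterwards
x_test_2d = x_all_2d[len(x_train):]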
H: Is it possible to change test and train data size when using crossvalind function with Kfold param? I was looking at MATLAB Help and want to work with the "crossvalind" function. It takes two partitioning parameters you can use. If you use "HoldOut" you can define the partition size of the test and train data sets, and when you use "Kfold" you can define the fold count. Now I want to know: is there a way to use the "Kfold" parameter and also define the partition size of the test and train data sets? I checked it and it seems that when you use "Kfold" as in the code below, it always defaults to a partition size like 75% to 25%. indices = crossvalind('Kfold',species,10); AI: K-fold means that the validation step will be performed k times, each of them using a fraction $\frac{k-1}{k}$ for training and $\frac{1}{k}$ for validation. If you want a fixed validation fraction, choose the number of folds that fits: 90%/10% : 10-fold 75%/25% : 4-fold etc. I don't know about Matlab libraries or functions, but if you want to do custom folds, you'll probably have to sample train and validation sets "by hand".
H: Logistic regression threshold value How can I set the threshold value for the target variable? For example, if a target variable is chance_of_admit and it has values from 0 to 1, how can I pick a value so that I can convert it to 0's and 1's to perform logistic regression? AI: There are two ways of doing this, IMHO: By creating a well-balanced target variable by choosing the right threshold, as I suggested in the comments above. In doing so we are simply taking care of values which should be treated as positive but which would otherwise become negative if we took a lower threshold. By using the mean as threshold. That will generate an imbalanced target variable, and when you perform the modelling you can look at the ROC and PRC curves and decide the decision threshold based on them. But keep in mind, it also depends on what kind of problem you are solving.
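A small pandas/NumPy sketch of both options, assuming a hypothetical dataframe df with the column chance_of_admit:

import pandas as pd

# Option 1: pick a threshold that yields a reasonably balanced target
threshold = df['chance_of_admit'].median()   # or any domain-driven cut-off
df['admit_label'] = (df['chance_of_admit'] >= threshold).astype(int)

# Option 2: use the mean as threshold and handle the resulting imbalance later
df['admit_label_mean'] = (df['chance_of_admit'] >= df['chance_of_admit'].mean()).astype(int)

print(df['admit_label'].value_counts())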
H: How is "relevance" defined in information retrieval outside the context of systems with user feedback? I've seen information retrieval systems that return some results from a query, and then the user rates these results as either "relevant" or "not relevant". What can you do if you do not have user feedback? E.g. suppose your system returns some ranked results from a query. Suppose you have no pre-defined notion of what is relevant, and suppose you cannot receive any kind of user feedback. What can you do? This is important, because information retrieval evaluation metrics are based on relevance. Maybe it isn't possible to define relevance without user feedback, if so, maybe some information retrieval evaluation metrics without dependency on user feedback can be suggested? AI: There is no formal definition for the concept of relevance, because relevance depends completely on the context and is therefore highly subjective. This is why the best way (some might say the only way) to evaluate relevance is to actually ask users what is relevant for them. For any ML-based task, one needs to design a proper evaluation framework in order to control and measure the quality of the results. Naturally the evaluation method should be chosen so that it reflects as much as possible the level of quality with respect to the goal of the task, i.e. what one would intuitively expect from it. Evaluation metrics are almost always simplified indicators of this "level of quality", so what matters is how well they correlate with what a user would expect from the system: sometimes even a perfectly standard evaluation measure might not to be suited to the goal of the task. My point is that evaluation is a matter of analysis and design. There are infinitely many options, but the point is to select the most appropriate one for the job. Here are some of these options: The ideal case is to have annotated data (e.g. user feedback) directly suited to the data and the task: then it's just a matter of counting how often the prediction is correct. It's common to evaluate a system against another annotated dataset X, assuming that the task is similar enough so if the system works well on X then it will work well on the real dataset. Another less than ideal way is to evaluate against the predictions of another reference system X: in this case X is considered the gold standard, so there is no way for the tested system to perform better than X. Indirect evaluation: if there is another task being performed at a later stage with the predictions, and this task can be evaluated more easily than the IR task itself. Heuristics: that would be the least reliable kind of evaluation, but it's better than nothing. It ranges from simply counting the number of words in common between the query and top N results to developing complex methods using third-party resources.
H: image classification with training set with 4 classes and test set with 3 classes I have to do image classification with a CNN, and for doing this I have been given a training set with 4 classes and a test set with 3 classes. I am really confused because I don't know if this is going to influence my prediction. It never happened to me. How can I deal with this? Thanks in advance. AI: In principle there's nothing wrong with that, since every instance in the test set is predicted individually. You will have a 4 x 3 confusion matrix, because the model might predict some false positives on the fourth class. Of course you won't be able to know if the model can correctly identify a true instance from the missing class. It depends what the goal is: If the model is meant to be able to predict any of the 4 classes, then it should be trained on the 4 classes and it would be preferable to also test it on the 4 classes, but testing it only on 3 already gives a good indication of its performance. If the model only needs to predict 3 classes ever, then the instances of the 4th class should be removed from the training set since they just make things more complex.
H: Evaluating the performance of a machine learned recommendation system I have a set of resumes $R=\{{r_1,...,r_n\}}$, which I've transformed to a vector space using TF-IDF. Each resume has a label, which is the name of their current employer. Each of these labels comes from the set of possible employers $E = \{{e_1,...,e_m\}}$. From this, I have trained a machine learning model. This model then takes some $r_i$ from the test set, and assigns a probability to each member of $E$. The results are then ranked, from highest probability to lowest probability. E.g. $P(e_2|r_i)=0.56, P(e_{52}|r_i)=0.29, P(e_{29}|r_i)=0.14,...etc.$ The resume, $r_i$ belongs to some individual, so this ranking is used to inform the individual as to what companies the model believes are most likely to hire them, given the details of what their resume contains (their skills, past employers, education, personal summary). In this case, company $e_2$ is most likely, followed by $e_{52}$ and so on. My question is, how do you evaluate the performance of this recommendation system? Where the information need of the user is to learn what companies their resume matches to the best. My own ideas My understanding from information retrieval is that we need to determine some measure of relevance. From this, it's possible to use some measure like mean average precision to measure performance. Determining relevance seems like the tricky part. For instance $e_2$ has a high probability, but is it actually relevant? Maybe $r_i$ is based on aeronautical engineering, but $e_2$ is a food store, which is clearly not relevant. My current idea is to take each $r_i$ in the training set belonging to the same label $e_j$, and then compute a single TF-IDF vector which is the average of the TF-IDF vectors belonging to each $r_i$ labelled as $e_j$. E.g. (an unrealistic example) Suppose $r_2$ and $r_9$ are labelled as $e_4$. Now suppose $r_2$ has TF-IDF vector $[0.2, 0.1, 0.5, 0.2]$ and $r_9$ has TF-IDF vector $[0.22, 0.12, 0.44, 0.22]$. Then the average of these is $[0.21, 0.11, 0.47, 0.21]$. Repeating this process for all $e_j\in E$ results in $m$ of these vectors. From this, it's possible to compute the cosine similarity between some $e_i$ and $e_j$. Returning to the first example, we can take the true label of $r_i$, and then find the cosine similarity between this label and each member of $E$. Then we set some threshold and evaluate whether $\text{cosineSim}(\text{true label}, e_j) < \text{some threshold}$. If the cosine similarity is above the threshold, then $e_j$ is relevant, otherwise, $e_j$ is not relevant. I'm not sure if this is a sensible/valid approach (I wonder if it defeats the point of the machine learning, since I may as well just use the cosine similarity? That said, I cannot forgo the machine learning component in this project). Maybe this is an over complication, and something like top k accuracy would be fine. I.e. is the true label in the top k suggestions? I'm not sure, I'm interested to have some more informed perspective. AI: To the extent possible you should try to evaluate based on your data rather than some ad-hoc measure. As you rightly noticed, there is a real risk that the ad-hoc measure would just confirm the predictions of the model, since it uses a somewhat similar method. I would suggest that you split your data between a training set and test set (or even better use cross-validation), and indeed use top-K accuracy (or something similar) to evaluate on the test set. 
That would be the safe option for a proper evaluation, and then you could try to see if your ad-hoc measure correlates with it: if it does, then you have evidence that in the future it can be used instead of a test set. Side note: your instances don't contain any negative evidence such as resumes rejected by an employer. In case you could obtain this kind of data, it could probably improve the predictions.
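A minimal NumPy sketch of top-K accuracy, assuming proba is an (n_samples, n_employers) array of predicted probabilities and y_true holds the index of the true employer for each resume (both names are placeholders):

import numpy as np

def top_k_accuracy(y_true, proba, k=5):
    # indices of the k highest-probability employers for each resume
    top_k = np.argsort(proba, axis=1)[:, -k:]
    hits = [y_true[i] in top_k[i] for i in range(len(y_true))]
    return np.mean(hits)

# Example usage on a held-out test set:
# score = top_k_accuracy(y_test, model.predict_proba(X_test), k=5)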
H: Difference between packaged sentiment analysis tools (TextBlob/NLTK) and training your own classifier? I'm new to ML and training classifiers in practice, so I was just wondering what the difference was between the built-in sentiment tools of packages such as NLTK and TextBlob as compared to manually creating a classifier (training, testing, etc). I think I read in a comment somewhere that Textblob/NLTK's existing sentiment analysis tools basically just tokenize the text and count the number of positive/negative words to determine an overall sentiment rating (not sure how accurate this is). Does anyone know if using a custom classifier would, in general, be a better way to doing sentiment analysis of text (I'm looking at analyzing the sentiments expressed in hotel reviews)? AI: I would say that the meaningful difference in approaches to sentiment classification is between knowledge-based and statistical ones. The knowledge-based, as you mention, usually use a polarity lexicon, that contains words with a sentiment value and then calculate the sentiment of a text by summing up the values of the words. The statistical ones train a model based on a labeled training set (i.e. in a supervised setting). The models that are used for that differ, for instance you can use a Naive Bayes classifier or an SVM or any sort of neural network. With regards to the packages you mentioned, as far as I understand Textblob indeed uses a lexicon. NLTK provides a lexicon-based sentiment classification but it also allows you to train your own statistical model. If a knowledge-based or a statistical approach is better for you use-case depends really on your data. Same holds for the difference between off-the-shelf vs custom trained one. But it also depends on the time you are able to spend on creating your own model and the expertise you can build on. If your domain is quite limited, I would argue hotel reviews are quite limited, fine-tuning a knowledge-based approach (by tweaking the underlying lexicon) might give you good results. In any case, I strongly suggest that you have a test set to evaluate your performance when creating or fine-tuning your model or comparing different models.
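For illustration, a short sketch contrasting the two approaches: TextBlob's lexicon-based polarity score versus a custom scikit-learn classifier trained on labelled reviews (the review lists and labels are placeholders):

from textblob import TextBlob
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Knowledge-based: polarity in [-1, 1] straight out of the box
print(TextBlob("The room was clean but the staff was rude.").sentiment.polarity)

# Statistical: train your own model on labelled hotel reviews (hypothetical data)
reviews = ["great location", "dirty room", "friendly staff", "terrible breakfast"]
labels = [1, 0, 1, 0]
clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(reviews, labels)
print(clf.predict(["the staff was wonderful"]))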
H: Understanding AC_errorRate loss function I'm reading an article about Rolling Window Regression: a Simple Approach for Time Series Next value Predictions. The author explains 5 different loss functions. I managed to understand the first four, but I don't understand the fifth one: Almost correct Predictions Error rate (AC_errorRate) — the percentage of predictions that is within %p percentage of the true value AI: From my understanding, this loss type means: you define a threshold percentage error (let's say 2%); for each true value y, the desired prediction should lie between y - 0.02*y and y + 0.02*y; the percentage of predicted values fulfilling the rule above counts as the "inlier" predictions, i.e., good predictions. This idea reminds me of what RANSAC does, which also separates inliers from outliers using a tolerance band.
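A small NumPy sketch of that metric, assuming y_true and y_pred are arrays and p is the tolerated fraction (depending on the convention, you may report either the share of "almost correct" predictions or one minus that value as the error rate):

import numpy as np

def almost_correct_rate(y_true, y_pred, p=0.02):
    # a prediction is "almost correct" if it lies within +/- p of the true value
    within = np.abs(y_pred - y_true) <= p * np.abs(y_true)
    return within.mean()   # share of "almost correct" predictions

# ac_error_rate = 1.0 - almost_correct_rate(y_true, y_pred, p=0.02)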
H: Problem with overfitting for a CNN I am doing image classification with a CNN and I am having trouble building a network that does not do overfitting. I have in my training set 2000 images of 4 classes, while in my test set I have 3038 of the same 4 classes. My CNN is the following: def Network(input_shape, num_classes, regl2 = 0.0001, lr=0.0001): model = Sequential() # C1 Convolutional Layer model.add(Conv2D(filters=32, input_shape=input_shape, kernel_size=(3,3),\ strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation before passing it to the next layer model.add(BatchNormalization()) # C2 Convolutional Layer model.add(Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C3 Convolutional Layer model.add(Conv2D(filters=128, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C4 Convolutional Layer model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) #Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C5 Convolutional Layer model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C6 Convolutional Layer model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C7 Convolutional Layer model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # Flatten model.add(Flatten()) flatten_shape = (input_shape[0]*input_shape[1]*input_shape[2],) # D1 Dense Layer model.add(Dense(4096, input_shape=flatten_shape, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D2 Dense Layer model.add(Dense(4096, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D3 Dense Layer model.add(Dense(1000,kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # Output Layer model.add(Dense(num_classes)) model.add(Activation('softmax')) # Compile adam = optimizers.Adam(lr=lr) model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy']) return model and everytime I train and test I clearly overfit, because if I test the model I obtain a low accuracy, around 45%, and the curves of accuracy for test and training are really far 
apart if I plot them. How could I improve my network in such a way that it does not overfit? Thanks in advance. AI: If the model is overfitting you can either increase regularization or simplify the model, as already suggested by @Oxbowerce: remove some of the convolutions and/or maybe reduce the dense layers. Given that you already have several different types of regularizers present, I can suggest another one for convolutional layers: spatial dropout. By using SpatialDropout2D available in Keras, you can drop entire feature maps from convolutional layers. You can try using it after the first convolutions of C5, C6 and C7 for starters. But given that you have a very small dataset, I am not sure if there is a lot of room for improvement. It would be easier to comment if you also shared the training and test accuracy graphs directly. Anyway, the best way to approach image recognition problems with small datasets is via transfer learning. I suggest you look into it if you do not have to build the model from scratch for some other reason.
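A minimal sketch of where SpatialDropout2D could be slotted into one of those blocks (the rate 0.2 is just an illustrative value, and the import is shown for tensorflow.keras; adapt it if you import from standalone keras):

from tensorflow.keras.layers import Conv2D, SpatialDropout2D, Activation

# ... inside the C5 block, for example:
model.add(Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='valid'))
model.add(SpatialDropout2D(0.2))   # drops entire feature maps instead of single pixels
model.add(Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='valid'))
model.add(Activation('relu'))
# ...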
H: NLP and one-class classifier building I have a big dataset containing almost 0.5 billion tweets. I'm doing some research about how firms are engaged in activism and so far, I have labelled tweets which can be clustered in an activism category according to the presence of certain hashtags within the tweets. Now, let's suppose firms are tweeting about an activism topic without inserting any hashtag in the tweet. My code won't categorize them, and my idea was to run an SVM classifier with only one class. This leads to the following questions: Is this solution data-scientifically feasible? Does there exist any other one-class classifier? (Most important of all) Are there any other ways to find if a tweet is similar to the ensemble of tweets containing activism hashtags? AI: Yes, this is feasible. One-class classification is a thing, but it is usually used in a context where it is hard or impossible to get negative samples. In your case, I would argue, you can quite easily get tweets that are not about activism, therefore you can render it as a binary classification, because you have data points of two classes or labels: 1 for tweets that are part of your class and 0 for tweets that are not. There are many ways to build a classifier, SVM is only one of them. You could also use a Naive Bayes algorithm, or as @Kasra mentioned a neural network model. No matter what you use, you will have to organise your data such that you have samples of both classes, activism and non-activism, within your set. This means that you should randomly pick tweets from your big dataset and manually check if they relate to activism, even if they don't have the hashtags in them that you used for identifying the activism tweets in the beginning. Further, you have to think about the features that your classifier will use. The simplest might be the bag of words within the tweets, but you might also pre-process the tweets to exclude stop-words. Depending on which algorithm you use, you might find that your classifier relies a lot on the presence of your particular hashtags as features for predicting the class. In this case it might struggle to identify other tweets without these hashtags as activism, even if they are activism. I would experiment with pre-processing the tweets in your entire dataset to remove those hashtags from the tweets.
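A minimal binary-classification sketch with scikit-learn, assuming tweets is a list of raw tweet texts and labels contains 1 for activism and 0 otherwise (both placeholders), with hashtags removed beforehand as suggested:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.2, stratify=labels, random_state=0
)

clf = make_pipeline(TfidfVectorizer(stop_words='english'), LinearSVC())
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))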
H: How to interpret classification report of scikit-learn? As you can see, it is about a binary classification with LinearSVC. Class 1 has a higher precision than class 0 (+7%), but class 0 has a higher recall than class 1 (+11%). How would you interpret this? And two other questions: what does "support" stand for? The precision and recall scores in the classification report are different compared to the results of sklearn.metrics.precision_score or recall_score. Why is that so? AI: The classification report is about key metrics in a classification problem. You'll have precision, recall, f1-score and support for each class you're trying to find. The recall means "how many of this class you find over the whole number of elements of this class". The precision will be "how many are correctly classified among that class". The f1-score is the harmonic mean between precision and recall. The support is the number of occurrences of the given class in your dataset (so you have 37.5K of class 0 and 37.5K of class 1, which is a really well balanced dataset). The thing is, precision and recall are especially useful for imbalanced datasets, because in a highly imbalanced dataset a 99% accuracy can be meaningless. I would say that you don't really need to look at these metrics for this problem, unless a given class should absolutely be correctly determined. To answer your other question, you cannot compare the precision and the recall over two classes. This only means your classifier is better at finding class 0 than class 1. Precision and recall of sklearn.metrics.precision_score or recall_score should not differ from the report. A common cause of a mismatch is that precision_score and recall_score default to average='binary', i.e. they only report the score of the positive class (pos_label=1), while the report shows one row per class. Since the code is not provided, it is impossible to determine the exact root cause here.
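A small sketch of how to reproduce the report's per-class numbers with precision_score/recall_score (y_true and y_pred are placeholder label arrays):

from sklearn.metrics import classification_report, precision_score, recall_score

print(classification_report(y_true, y_pred))

# per-class values matching the report's rows (average='binary' is the default):
print(precision_score(y_true, y_pred, pos_label=0), precision_score(y_true, y_pred, pos_label=1))
print(recall_score(y_true, y_pred, pos_label=0), recall_score(y_true, y_pred, pos_label=1))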
H: What is my training score the mean_train_score or mean_test_score? I am using sklearn to train some models (random forest, decision tree). For the training I am using RandomizedSearchCV with Stratified k-fold as cross-validation. Then I make predictions on the test set and calculate the test score. However, I would like to compare the test score with the training score. I assumed I could use the mean_train_score of the cv_results_ report from the RandomizedSearchCV as training score for the model, because I thought it would show the validation against the hold-out fold from the k-folds. However, I am not sure about this because there is also a mean_test_score. I was looking for an explanation of the mean_train_score and mean_test_score. I know these scores exist also for the single folds. But how are these scores calculated? And is one of them my training score, which shows how my model performed during training? I found an approach of explanation, but it's too superficial for me: GridSearch mean_test_score vs mean_train_score AI: The mean_test_score is actually the mean of the validation scores over the folds, for each hyperparameter candidate. The "test" word is probably not well chosen by sklearn in that case, if you want to make the distinction between validation and test. However, I don't think you are totally finished here, and you should therefore not compare mean_train_score with the test score. Indeed, the cross-validation phase gives you the best set of hyperparameters (the candidate with the maximal "test" score, which is actually a validation score), but you should not keep the corresponding model, especially (but not only) if the training set has few observations or there are few folds. You should instead re-train your model one last time, with this set of hyperparameters, but over the entire train set (not only the $\frac{k-1}{k}$ cases used during cross-validation). This trained model will give you the training score (over the whole training set) and the test score (over the test set).
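A short sketch of where those numbers live and of the final refit, assuming search is a fitted RandomizedSearchCV (note that with the default refit=True sklearn already re-trains the best candidate on the whole training set for you):

import pandas as pd

cv = pd.DataFrame(search.cv_results_)
# mean_train_score is only present if RandomizedSearchCV was created with return_train_score=True
print(cv[['mean_train_score', 'mean_test_score']])

best_model = search.best_estimator_          # refit on the full training set (refit=True)
print(best_model.score(X_train, y_train))    # training score
print(best_model.score(X_test, y_test))      # test score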
H: How to extract features from the encoded layer of an autoencoder? I have done some research on autoencoders, and I have come to understand that they can also be used for feature extraction (see this question on this site as an example). Most of the examples out there seem to focus on autoencoders applied to image data, but I would like to apply them to a more general data set. Therefore, I have implemented an autoencoder using the keras framework in Python. For simplicity, and to test my program, I have tested it against the Iris Data Set, telling it to compress my original data from 4 features down to 2, to see how it would behave. The encoder seems to be doing its job in compressing the data (the output of the encoder layer does indeed show only two columns). However, the values of these two columns do not appear in the original dataset, which makes me think that the autoencoder is doing something in the background, selecting/combining the features in order to get to the compressed representation. Here is the complete working example: from pandas import read_csv from numpy.random import seed from sklearn.model_selection import train_test_split from keras.layers import Input, Dense from keras.models import Model # Get input data and separate features from labels df = read_csv("iris.data") Y = df.iloc[:,4] X = df.iloc[:, : 4] # Split data set in train and test data X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.5, random_state=seed(1234)) # Input information col_num = X.shape[1] input_dim = Input(shape=(col_num,)) # Encoding information encoding_dim = 2 encoded = Dense(encoding_dim, activation='relu')(input_dim) # Decoding information decoded = Dense(col_num, activation='sigmoid')(encoded) # Autoencoder information (encoder + decoder) autoencoder = Model(input=input_dim, output=decoded) # Train the autoencoder autoencoder.compile(optimizer='adadelta', loss='mean_squared_error') autoencoder.fit(X_train, X_train, nb_epoch=50, batch_size=100, shuffle=True, validation_data=(X_test, X_test)) # Encoder information for feature extraction encoder = Model(input=input_dim, output=encoded) encoded_input = Input(shape=(encoding_dim,)) encoded_output = encoder.predict(X_test) # Show the encoded values print(encoded_output[:5]) This is the output from this example: [[ 0.28065908 6.151131 ] [ 0.8104178 5.042427 ] [-0. 6.4602194 ] [ 3.0278277 2.7351477 ] [ 0.06134868 5.064625 ]] Basically, my idea was to use the autoencoder to extract the most relevant features from the original data set. However, so far I have only managed to get the autoencoder to compress the data, without really understanding what the most important features are though. My question is therefore this: is there any way to understand which features are being considered by the autoencoder to compress the data, and how exactly they are used to get to the 2-column compressed representation? AI: You are using a dense neural network layer to do encoding. This layer does a linear combination of the input layers + specified non-linearity operation on the input. Important to note that auto-encoders can be used for feature extraction and not feature selection. It will take information represented in the original space and transform it to another space. The compression happens because there's some redundancy in the input representation for this specific task, the transformation removes that redundancy. Original features are lost, you have features in the new space. Which input features are being used by the encoder? 
The answer is: all of them. As for how exactly they are used: you can check the weights assigned by the neural network for the input-to-Dense-layer transformation to get some idea. You can probably build some intuition based on the assigned weights (example: output feature 1 is built by giving high weight to input features 2 and 3, so the encoder combined features 2 and 3 into a single feature). But there's a non-linearity (ReLU) involved, so it is not a simple linear combination of the inputs. If your aim is to get a qualitative understanding of how features can be combined, you can use a simpler method like Principal Component Analysis. The factor loadings given in PCA's output tell you how the input features are combined. If the aim is to find the most efficient feature transformation for accuracy, a neural network based encoder is useful, but you lose some interpretability of the feature extraction/transformation.
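Two quick ways to peek at the transformation, sketched under the assumption that encoder is the single-Dense-layer Model from the question and X_train is the training data:

# Weights of the Dense encoding layer: shape (4 input features, 2 encoded features).
# Large absolute weights hint at which inputs drive each encoded feature,
# but the ReLU non-linearity means this is only a rough indication.
weights, biases = encoder.layers[1].get_weights()
print(weights)

# For a linear, directly interpretable alternative, compare with PCA loadings:
from sklearn.decomposition import PCA
pca = PCA(n_components=2).fit(X_train)
print(pca.components_)   # how each original feature contributes to each component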
H: How to evaluate the K-Modes Clusters? K-modes algorithm is available here I want to do clustering of my binary dataset. I need to specify the number of clusters that I need as an output: KModes (n_clusters, init, n_init, verbose) My dataset contains 1000 rows and 1000 columns, and I want to calculate the distance between my clusters in order to know the exact number of clusters that I need to choose. I don't know how to compare between them. Probably I want to use the hamming distance because it is the most suitable distance to compare between binary data. Any recommendations that you can give are welcomed. AI: There are some techniques to choose the number of clusters K. The most common ones are The Elbow Method and The Silhouette Method. Elbow Method In this method, you calculate a score function with different values for K. You can use the Hamming distance like you proposed, or other scores, like dispersion. Then, you plot them and where the function creates "an elbow" you choose the value for K. Silhouette Method This method measures the distance from points in one cluster to the other clusters. Then visually you have silhouette plots that let you choose K. For example, in typical silhouette plots you might observe: K=2, silhouettes of similar heights but with different sizes. So, a potential candidate. K=3, silhouettes of different heights. So, a bad candidate. K=4, silhouettes of similar heights and sizes. Best candidate.
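A minimal elbow-method sketch with the kmodes package, assuming X is your binary data matrix and that the library exposes the fitted clustering cost via the cost_ attribute:

from kmodes.kmodes import KModes
import matplotlib.pyplot as plt

costs = []
ks = range(1, 11)
for k in ks:
    km = KModes(n_clusters=k, init='Huang', n_init=5, verbose=0)
    km.fit_predict(X)
    costs.append(km.cost_)   # total dissimilarity; look for the "elbow" in the curve

plt.plot(ks, costs, marker='o')
plt.xlabel('K')
plt.ylabel('cost')
plt.show()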
H: Types of Regression Techniques? Can someone explain types of Regression Techniques, and Where do we use? Thanks in Advance. AI: There are various kinds of regression techniques available to make predictions. These techniques are mostly driven by three metrics (number of independent variables, type of dependent variables and shape of regression line). Linear Regression Polynomial Regression Logistic Regression Quantile Regression Ridge Regression Lasso Regression Elastic Net Regression Principal Components Regression (PCR) Partial Least Squares (PLS) Regression Support Vector Regression Ordinal Regression Poisson Regression Negative Binomial Regression Quasi Poisson Regression Cox Regression Tobit Regression Find More at the below links https://www.listendata.com/2018/03/regression-analysis.html https://www.analyticsvidhya.com/blog/2015/08/comprehensive-guide-regression/ https://www.geeksforgeeks.org/types-of-regression-techniques/ http://www.urbanaseminary.org/types-of-regression-analysis/ If you want to choose your model based on "most used", you are wrong. you should choose based on the type of your input data and goal data and statistics of your data. https://blog.minitab.com/blog/how-to-choose-the-best-regression-model
H: Difficulty interpreting word embedding vector similarity (spaCy) I calculate vector similarities like this: nlp = spacy.load('en_trf_xlnetbasecased_lg') a = nlp("car").vector b = nlp("plant").vector dot(a, b)/(norm(a)*norm(b)) 0.966813 Why are the vector similarities so high for unrelated words for the embedding? This is not the only pair for which they are abnormally high. I also had a similar experience with fastText, so I am wondering, am I misunderstanding something? Also I am able to get vectors for non-words like "asdfasfdasfd" or "zzz123Y!/§zzzZz", and they differ from each other. How is this possible? AI: Why are the vector similarities so high for unrelated words for the embedding? For the specific example you give, I would argue that it makes sense that car and plant have high similarity. This is likely due to phrases such as car manufacturing plant Also I am able to get vectors for non-words like "asdfasfdasfd" or "zzz123Y!/§zzzZz", and they differ from each other. How is this possible? For your specific case, since you use the en_trf_xlnetbasecased_lg, the answer is straightforward. Embeddings provided by XLNet are contextual, meaning that even if the word itself isn't a word, you'll get an embedding given the words in its context. Also, it is likely that HuggingFace's implementation uses Byte-Pair Encoding as tokens, making it much more robust to out-of-vocabulary situations.
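To see the contrast, a small sketch with a static-vector model (assuming en_core_web_lg is installed; with static vectors, similarities for unrelated words are typically lower and an out-of-vocabulary string simply gets a zero vector):

import spacy

nlp_static = spacy.load('en_core_web_lg')
car, plant = nlp_static('car')[0], nlp_static('plant')[0]
print(car.similarity(plant))          # usually much lower than with contextual XLNet vectors

junk = nlp_static('asdfasfdasfd')[0]
print(junk.has_vector, junk.is_oov)   # expected: False, True -> its .vector is all zeros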
H: Where are WEKA installed packages stored I would like to use the WEKA library in a Java program but I can't seem to find the methods I installed using WEKA's package manager. Does anyone know where the installed methods are stored? For clarification, I installed WEKA, installed the extra package using WEKA and can use it with the WEKA GUI. But I can't find the installed package in the .lib file. AI: I don't know how WEKA stores the extra packages on other installations, but for my Windows 10 installation with WEKA 3.8, the extra packages are stored in a directory called packages inside the wekafiles folder of my user directory. Example: "C:\Users\myUser\wekafiles\packages\..." Hope this helps someone else.
H: Calibrating Correlation I am facing a weird problem in my ongoing project and wondered if someone here could help me out with this. I have a large data set and I have to perform a regression task on top of it. While doing the initial analysis and feature selection task, I computed the correlation of all the features with the target variable. When I got the results I saw one of the features having a negative correlation with the target variable, but my business unit suggests it must have a positive correlation to proceed further. Now they are asking me to filter out the rows causing the negative correlation. I don't know how to proceed in this context. Can anybody help me out or give me a heads up on this? Thank you very much in advance AI: First what comes to my mind: "the only statistics you can trust are the ones you have falsified yourself" - Churchill However, to answer your question. The (Pearson) correlation coefficient is defined as $\rho_{X,Y} = \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y}$. You can find data points that contribute negatively by looking at the covariance, which is defined as $\operatorname{cov}(X,Y) = \operatorname{E}\big[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\big]$. First, you calculate the mean of your dependent and independent variable. Then you subtract the mean from the corresponding data points. After that, you multiply those two series. Negative values are rows that contribute to a negative correlation coefficient. However, as a statistician, I am not supporting this approach (deleting data on purpose is not scientific practice).
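If you really have to identify those rows, a small pandas sketch of the per-row contribution to the covariance ('feature' and 'target' are hypothetical column names):

import pandas as pd

contrib = (df['feature'] - df['feature'].mean()) * (df['target'] - df['target'].mean())
negative_rows = df[contrib < 0]   # rows pulling the correlation downwards
print(len(negative_rows))

# Dropping them will force a more positive correlation, but as noted above,
# deleting data to obtain a desired statistic is not sound practice.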
H: Training LSTM for time series prediction with nan labels I have a time series of features $x_1,x_2,x_3,...,x_n$. I want to make a prediction $y_1,y_2,y_3,...,y_n$ for each timestep. However, in my training data some of the $y$ can be nan. I'd like the fit to just ignore these (i.e. the cost for this pair of measured $y$ and predicted $y$ is zero). I'm currently using tensorflow through Keras. Is there an analogue of the masking layer for the label? Alternatively, it might be possible to change the loss function, but I don't know how, especially while retaining numerical efficiency. AI: I suggest implementing it this way: Set the nan values to 0 (or any other placeholder value). When compiling the Keras model, use the parameter sample_weight_mode='temporal'. You can then mask the nan labels by supplying the sample weights as the mask (a sequence of values that is 1 where the label is not nan and 0 otherwise). The steps above should give you the desired result.
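A minimal sketch of that recipe, assuming a Keras version that still accepts sample_weight_mode in compile; shapes, layer sizes and n_features are placeholders, and y is assumed to have shape (samples, timesteps, 1) with nan where no label exists:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, TimeDistributed, Dense

mask = (~np.isnan(y[:, :, 0])).astype('float32')   # shape (samples, timesteps): 1 = real label, 0 = nan
y_filled = np.nan_to_num(y)                        # replace nan labels by 0 (they get zero weight anyway)

model = Sequential([
    LSTM(32, return_sequences=True, input_shape=(None, n_features)),
    TimeDistributed(Dense(1)),
])
model.compile(optimizer='adam', loss='mse', sample_weight_mode='temporal')
model.fit(X, y_filled, sample_weight=mask, epochs=10)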
H: Trouble understanding the partial differentiation used in reinforcement learning I am studying deterministic actor-critic algorithms in reinforcement learning. I try to give a brief explanation of actor-critic algorithms before jumping into the mathematics. The actor takes in state $s$ and outputs a deterministic action $a$ based on the distribution policy $u$. The state and action are fed into the critic. The critic sees how good it is to take a particular action from a given state using the action-value function $Q(s,a,w)$. The critic is then updated via temporal difference (TD) learning and the actor updated in the direction of the critic Thus it can be seen that the actor's goal is to try and maximise the state action value function $Q(s,a,w)$ by picking the best actions in the given state. I am having trouble understanding the mathematics behind the updating of the actor. The below equation gives how the actor is updated. \begin{equation} \frac{\partial l}{\partial u} = \frac{\partial Q(s, a, w)}{\partial a} \frac{\partial a}{\partial u} \end{equation} What I understand is that we are taking the partial derivative of $l$ with respect to $u$, and we are backpropogating the critic gradient to the actor. It seems that $l$ is a differentiable function of the variable $a$, but I am confused when it comes to describing what is happening in the equation above as it seems to consist of two functions multiplied together. Can someone kindly explain what is really happening in the mathematics above? AI: Your understanding of what's going on seems to be correct, just one little clarification: $u$ should be the model parameters of the deterministic policy $\mu(s,u)$ and not a distribution itself, same as $w$ are the model parameters of $Q(s,a,w)$, but that's probably what you meant (or I might be unfamiliar with the formulation). Concerning your actual question, the update step implied by $\frac{\partial l}{\partial u}$ is supposed to make the deterministic policy $\mu(s,u)$ to get closer to the optimal $a$, which maximizes $Q(s,a,w)$. As $a = \mu(s,u)$, we have a composite function on our hands $$ Q(s,a,w) = Q(s, \mu(s, u), w)$$ When updating the actor paramters $u$ such that $Q$ is maximized, we need to make a step in the direction of the gradient of $Q$ with respect to $u$ which, since it is a composite function, is computed using the chain rule $$ \frac{\partial Q}{\partial u} = \frac{\partial Q}{\partial a}\frac{\partial a}{\partial u}$$ The notation is a bit sloppy, replacing $\mu$ and $a$ and so on, but that also seems to be the case in the literature. So what is going on intuitively consits of two parts: moving $a$ in the direction of $\frac{\partial Q}{\partial a}$ will increase $Q$, e.g. in 1D if $\frac{\partial Q}{\partial a} > 0$ increasing $a$ would increase $Q$ and if $\frac{\partial Q}{\partial a} < 0$ increasing $a$ would decrease $Q$ moving $u$ in the direction of $\frac{\partial a}{\partial u}$ will increase $a$, in 1D the example would be the same as above If you multiply these together and update $u$ according to the product you end up moving $u$ such that $Q$ increases by either increasing or decreasing $a$, which is exactly what you want to do.
H: How to use ADWIN with multiple columns I want to perform drift detection on data with multiple input values (x0, x1, x2, x3). I'm using an adaptive window algorithm found from sci-kit found here. Doing this from skmultiflow.drift_detection.adwin import ADWIN adwin = ADWIN() adwin.add_element(np.array([1, 2, 3, 4])) Results in ValueError: setting an array element with a sequence. But doing this from skmultiflow.drift_detection.adwin import ADWIN adwin = ADWIN() adwin.add_element(np.array([1])) works just fine. Any idea how to do the first thing? My current solution is to just use 4 different drift detectors, but I would like to use one. AI: This is not currently implemented. So while you can add any number as a concept value, you're probably best served by adding 1s and 0s depending on whether or not a misclassification (or some other error) occurred. The reason it is not implemented is probably that ADWIN essentially performs a statistical test (or a heuristic approximation of one) of whether the mean of two (large enough) windows is significantly different. And comparison of multivariate mean vectors is very tricky, in particular if one cannot make any assumptions about normality etc. Otherwise, your workaround of using 4 drift detectors seems like the best solution!
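A short sketch of the per-feature workaround with scikit-multiflow, assuming X is a NumPy array with columns x0..x3 arriving as a stream:

import numpy as np
from skmultiflow.drift_detection.adwin import ADWIN

detectors = [ADWIN() for _ in range(4)]   # one detector per input feature

for t, row in enumerate(X):               # X: shape (n_samples, 4)
    for j, value in enumerate(row):
        detectors[j].add_element(value)
        if detectors[j].detected_change():
            print('drift detected in feature x{} at index {}'.format(j, t))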
H: Why do I get AttributeError: 'float' object has no attribute '3f'? I am getting this error: AttributeError: 'float' object has no attribute '3f' I don't understand why I am getting it, I am following the example straight from the book "applied text analysis" The chunk of code in python is: total = sum(words.values()) for gender, count in words.items(): pcent = (count / total) * 100 nsents = sents[gender] print( "{0.3f}% {} ({} sentences)".format(pcent, gender, nsents) ) I see that pcent clearly will return a float, why the author tries to apply .3f what am I missing? AI: Try this instead, print( "{:.3f}% {} ({} sentences)".format(pcent, gender, nsents) ) The reason for the error is that "{0.3f}" is missing the colon before the format spec, so Python parses it as "take positional argument 0 and access its attribute named 3f", and a float has no attribute called 3f, hence the AttributeError. The precision spec must come after a colon, i.e. {:.3f} (or {0:.3f} if you want to keep the positional index). Refer to the latest docs for more examples and check the Py version!
H: Deep learning model gives random results First, I am new to machine learning, so if this is an obvious question, I am sorry. dataset_coefficients = loadtxt( 'in.csv', delimiter=',') dataset_answers = loadtxt( 'out.csv') X = dataset_coefficients[:, 0:4] y = dataset_answers model = Sequential() model.add(Dense(32, input_dim=4, activation='relu')) model.add(Dense(16, activation='sigmoid')) model.add(Dense(8, activation='sigmoid')) model.add(Dense(4, activation='sigmoid')) model.add(Dense(2, activation='sigmoid')) model.add(Dense(1, activation='softmax')) model.compile(loss="mean_squared_error", optimizer='adam', metrics=['accuracy']) model.fit(X, y, epochs=250, batch_size=100, verbose=0) _, accuracy = model.evaluate(X, y, verbose=0) print('Accuracy: {}%'.format(accuracy * 100)) in.csv looks like this 1, -21, 147, -343 1, 19, 115, 225 1, 1, -64, -64 1, 30, 300, 1000 1, 16, 64, 0 1, 3, -81, -243 1, 3, 3, 1 1, -13, 16, 192 out.csv 1 2 3 1 2 3 1 2 When I test the accuracy with the data which was previously used for training Output is Accuracy: 33.33333432674408% No matter how big my dataset is, the result has the exact same accuracy rate of 33% which seems like the prediction is literally random. What am I doing wrong? Thanks AI: I identified several problems when looking through your code: The first column of in.csv is a dummy variable for the bias term (all 1 for the whole column). For NN layers in Keras, unless you set use_bias=False, weights for the bias will be included automatically, hence this column is not necessary. Your variables are not scaled/standardized; although sometimes it will work without normalization, it is always recommended to do this for NNs since inputs like that may lead to unstable training. Consider using StandardScaler or MinMaxScaler for a start. You mentioned in the tags that you are working on classification, so this is what you should do. First, identify the number of unique classes; this should be the number of units in the last layer. In your code the last layer is Dense(1) with a softmax activation, which always outputs exactly 1, so every sample gets the same prediction, which is why the accuracy is stuck at roughly 1/3 regardless of the data size. Next, you have two choices: if you want to use OneHotEncoder, then when compiling your model your loss should be 'categorical_crossentropy'. The other choice: first make sure that your label indices start from 0 and then use 'sparse_categorical_crossentropy' as your loss. The softmax activation function is applied AFTER the weights and biases operation. So the output of your model will first be a set of probabilities of the sample belonging to each class, and to get one output simply call .argmax(axis=1), which will return the index with the highest probability on each row. You also need to do + 1 since your class labels start from 1 instead of 0.
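Putting those points together, a hedged sketch of how the model could look for the 3 classes in out.csv (labels shifted to 0-2; the architecture and batch size are just illustrative choices, not a definitive fix):

import numpy as np
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X = dataset_coefficients[:, 1:4]        # drop the constant bias column
y = dataset_answers.astype(int) - 1     # labels 1,2,3 -> 0,1,2

X = StandardScaler().fit_transform(X)   # scale the inputs

model = Sequential([
    Dense(32, input_dim=3, activation='relu'),
    Dense(16, activation='relu'),
    Dense(3, activation='softmax'),     # one unit per class
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=250, batch_size=8, verbose=0)

pred = model.predict(X).argmax(axis=1) + 1   # back to the original 1-3 labels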
H: Is not having overfitting more important than overall score (F1: 80-60-40% or 43-40-40)? I've been trying to model a dataset using various classifiers. The response is highly imbalanced (binary) and I have both numerical and categorical variables, so I applied SMOTENC and Random oversampling methods on Training set. In addition, I used a Validation set to tune the models parameters by GridSearchCV(). As both precision and recall were important for me, I used f1 to find the best model. I should note that I selected these three subsets by cluster analysis and extracting samples by stratified train_test_split() from each cluster; so I have more confident that the subsets have more similarity. Due to complex nature of Decision Tree and Random Forest or boosting techniques, I usually get high fitting (high f1 score) on Training set, relatively high on Validation set, but moderate to low on Test set. The general sign for overfitting is the high difference between Training and Test sets (or between Validation and Test sets in my problem); but I am confused how to select the best model in following cases: Case A: Training fit is very high; but Validation and Test sets fit are low but close to each other Case B: Training, Validation, and Test fits are similar; but much lower than Case A. F1 Score Model Train Val Test ---------------------------------------- A: SVC 80.1 60.3 37.5 B: MLPClassifier: 43.2 40.0 39.1 I know that Case A might be the best model,however there is no guarantee that is produces similar result for new data, but which model do you pick with regards to overfitting? (assume that precision and recalls for both models are similar) AI: The best model in your case is B not just because it scored higher on the test set, but because it showed very little sign of overfitting. By not overfitting as much and being more consistent in its scores (train/val/test), you know what you're getting from the model. If you try it tomorrow on a secondary test set, you'd expect similar results. Model A on the other hand is very inconsistent. If you evaluated this model on the secondary test set, you can't tell how much to expect. It could be higher, it could be lower... Generally speaking, if you've overfit as much on the validation set, it's a good indication to start over (re-split the dataset into train/val/test randomly, preprocess, fit the model). This time try less hyperparameter options though. From the numbers I see even though model B currently is better than A, A might have the capacity to be better than B, if properly trained. I'd suggest re-doing model A's training from scratch, but with more regularization and less hyperparameter-tuning.
H: Clarification of the MSE loss sum symbol So I have a question regarding the MSE loss in the application of a Neural Network. Loss function: $\text{MSE} = \frac{1}{2} \sum_{i=1}^{n} (Y_i - \hat{Y_i}) ^ 2$ I am wondering what the $\sum_{i=1}^{n}$ stands for. Do I sum the loss over all training examples for each output node in my Neural Network? Or do I use a single training example and sum over all Neural Network output nodes? Or do I do both and sum over all training examples and over all output nodes? I want to use the MSE loss later for updating my weights in the Neural Network. What would I do for that? AI: I think it depends on what you're doing. If you want the loss of just one training example, then you do number 2: sum over the output nodes of that one example. If you want to see if training is stable or converging, you calculate that per-example loss for all examples independently, and then average them to give you a sense of the overall behaviour. For the weight update you typically do the same thing over a (mini-)batch: sum over the output nodes of each example, average over the examples in the batch, and backpropagate the gradient of that averaged loss.
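For concreteness, a tiny NumPy sketch showing both reductions (Y and Y_hat are assumed to be arrays of shape (n_examples, n_output_nodes)):

import numpy as np

per_example_loss = 0.5 * np.sum((Y - Y_hat) ** 2, axis=1)   # sum over the output nodes of each example
batch_loss = np.mean(per_example_loss)                      # then average over the training examples
# batch_loss is what you would differentiate to update the weights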
H: Using categorical_crossentropy for binary classification Is it ok to use categorical_crossentropy for binary classification or is it better to use binary_crossentropy AI: Binary cross-entropy is a special case of categorical cross-entropy with just 2 classes. So theoretically it does not make a difference. If $y_k$ is the true label and $\hat{y}_k$ is the predicted probability of class $k$ ($y$ one-hot encoded and $\hat{y}$ a probability distribution, i.e. $\sum_k y_k =1 \land \sum_k \hat{y}_k =1 \land y_k \in \{0,1\} \land \hat{y}_k \in [0,1]$) the multicategorical cross-entropy for $K$ categories is $$-\sum_{k=1}^{K}y_k \log(\hat{y}_k)$$ which is equal to the binary cross-entropy for $K=2$ : $$-\sum_{k=1}^{K}y_k \log(\hat{y}_k) = -y \log(\hat{y})- (1-y) \log(1-\hat{y})$$ Just in case there are any implementation differences, e.g. speed-wise, I would still just use binary cross-entropy for a binary classification problem instead of multicategory with $K=2$.
H: How to treat time based ticket prices for train/test split I have a dataset of airfare price tickets that were scraped throughout a 6 month period where each observation represents a particular price for a specific flight on a specific date that it was scraped. In other words, a specific unique flight may appear multiple times in the dataset if it's scraped multiple times on different days. For example, Scrape Date: 11/13/19, Days To Trip: 42, Flight: DL1345 , Departure: 12/25, Time: 5:00PM, Price: 290 Scrape Date: 11/22/19, Days To Trip: 33, Flight: DL1345 , Departure: 12/25, Time: 5:00PM, Price: 330 Scrape Date: 12/01/19, Days To Trip: 24, Flight: DL1345 , Departure: 12/25, Time: 5:00PM, Price: 349 I know that with time-series data such as stock prices, you want to split your training/testing data so that the data in testing is in the future and comes after the data in training. However, I don't believe the dataset I have would warrant a split like this and I can instead randomly shuffle the data for train/test split but I am not 100% sure on the right call. Should I split the data based on time or can I randomly sample since the price of the tickets don't depend on each other? AI: Indeed airfare data is different than many other price data. In To Buy or Not to Buy: Mining Airfare Data to Minimize Ticket Purchase Price the authors put it as following: Computational finance is concerned with predicting prices and making buying decisions in markets for stock, options, and commodities. Prices in such markets are not determined by a hidden algorithm, as in the product pricing case, but rather by supply and demand as determined by the actions of a large number of buyers and sellers. Thus, for example, stock prices tend to move in small incremental steps rather than in the large, tiered jumps observed in the airline data. (tho the reality is not that black and white as airfares also depend on seat availability and stock prices are also indirectly driven by algorithms but generally they have a point) These algorithm-driven price changes do not so much rely on historical price development which is why I would give your approach a try. In line with that there are multiple papers in which airfare prices have not been treated as timeseries (besides the one already linked also see Predicting Airfare Prices - tho in this paper they did consider an attribute related to the most recent previous price of the ticket!). Nevertheless, there are many date-related aspects your might want to consider. "Days to trip" is one which is in your data already. Others, easily available from your data, could be "weekday of departure", "departure during holiday season or not" and "weekday of price request". And then of course there are tons of other variables which might be important, e.g. "number of stopovers", "overnight flight or not", "number of free baggage" (and obviously the cabin class or maybe even booking class) etc. (also see Airfare Prices Prediction Using Machine Learning Techniques)
H: Clustering initialization I'm running into a problem while working on clustering. I work on data with white Gaussian noise. All of the methods I have come across use some sort of random initialization to set up the mean and covariance matrix of the clusters. My question is: Since the initialization is random, there is a chance that I get a really bad starting point which gives me bad results. How do I deal with this? One specific initialization I'm considering is the K-Means++ which is better than strictly random because it at least attempts to use the data to make informed initialization, but it too is random in the end. Do people usually do multiple runs and take the best initialization? What about that for streaming data? AI: You have two options: 1) Let the K-means algorithm run longer and, more importantly, restart it several times (in sklearn, increase the max_iter and especially the n_init parameters of sklearn.cluster.KMeans; only the restart with the lowest inertia is kept). A single run only converges to a local optimum, so multiple random restarts are what protects you against a bad starting point, at the cost of more time. 2) Make an "educated guess" for the initial starting point. One way to do that is to transform your data into a space where you know your points can only lie within a specific region: from there, you can evaluate the best seeds where to start the K-means. For a clearer explanation, see this article (for a quick overview, look at the related slides, in particular at step 3, slide number 10)
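A small sketch of both knobs in scikit-learn (the values are placeholders): k-means++ seeding plus several random restarts, keeping the run with the lowest inertia:

from sklearn.cluster import KMeans

km = KMeans(
    n_clusters=5,
    init='k-means++',   # data-informed seeding, as discussed above
    n_init=20,          # 20 independent restarts; the best one (lowest inertia) is kept
    max_iter=500,       # allow enough iterations for each run to converge
)
labels = km.fit_predict(X)
print(km.inertia_)      # within-cluster sum of squares of the best run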
H: Explanation behind the calculation of accuracy in deep learning model I am trying to model an image segmentation problem using a convolutional neural network. I came across code on GitHub and I am not able to understand the meaning of the following lines of code for the calculation of accuracy - def new_test(loaders,model,criterion,use_cuda): for batch_idx, (data,target) in enumerate(loaders): output = model(data) ###Accuracy _, predicted = torch.max(output.data, 1) total_train += target.nelement() correct_train += predicted.eq(target.data).sum().item() model(data) outputs a tensor of shape B * N * H * W B = Batch Size N = Number of segmentated classes H,W = Height,Width of an image AI: _, predicted = torch.max(output.data, 1) The first line above uses torch.max() to take the maximum of output.data along dimension 1, which here is the class dimension (the N segmentation classes). torch.max() returns (values, indices), so only the indices, i.e. the predicted class for every pixel, are stored in a variable called predicted, while the max values themselves are discarded. total_train += target.nelement() The second line above just accumulates the total number of labelled elements (pixels) in target into a variable called total_train. correct_train += predicted.eq(target.data).sum().item() The third line above compares predicted element-wise with target.data and sums up how many entries are equal, i.e. how many pixels were classified correctly; this count is accumulated in correct_train. The accuracy is then correct_train / total_train.
H: Are there some research papers about text-to-set generation? I have googled but find no results. Text-to-(word)set generation or sequence-to-(token)set generation. For example, input a text and then output the tags for this text: 'Peter is studying English' --> {'good behavior','person','doing something'} Thank you! AI: Check out these papers below and google keywords: Multi-label Classification. X-BERT: eXtreme Multi-label Text Classification with BERT HAXMLNet: Hierarchical Attention Network for Extreme Multi-Label Text Classification SGM: Sequence Generation Model for Multi-label Classification Ranking-Based Autoencoder for Extreme Multi-label Classification AttentionXML: Label Tree-based Attention-Aware Deep Model for High-Performance Extreme Multi-Label Text Classification Also check out the code below: https://github.com/chenyuntc/PyTorchText
H: k-fold cross validation with RNNs Is it a good idea to use k-fold cross-validation with a recurrent neural network (RNN) to alleviate overfitting? A potential solution could be L2 / Dropout regularization, but it might kill RNN performance as discussed here. This solution can affect the ability of RNNs to learn and retain information for a longer time. My dataset is strictly based on time series, i.e. auto-correlated with time and dependent on the order of events. With standard k-fold cross-validation, it leaves out some part of the data and trains the model on the rest while destroying the time-series order. What can be an alternative solution? AI: TL;DR Use Stacked Cross-Validation instead of traditional K-Fold Cross-Validation. Stacked Cross-Validation In Scikit-learn, this is called TimeSeriesSplit (docs). The idea is that instead of randomly shuffling all your data points and losing their order, like you suggested, you split them in order (or in batches). Picture traditional K-Fold next to Stacked K-Fold, both with K=4 (four iterations): in traditional K-Fold all the data is used all the time, whereas in Stacked K-Fold only past data is used for training and the most recent data for testing. Strategy Split your data in a Stacked K-Fold fashion. At every iteration, store your model and measure its performance against the test set. After all iterations are over, pick the stored model with the highest performance (it might not even be the last one). Note: This is a common approach in training neural nets (batching with validation sets).
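A minimal scikit-learn sketch of generating the stacked folds, assuming X and y are NumPy arrays ordered in time (the RNN training itself is only hinted at):

from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_idx, test_idx) in enumerate(tscv.split(X)):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # build/train your RNN on (X_train, y_train), evaluate and store it,
    # then keep the best-performing model across folds
    print(fold, len(train_idx), len(test_idx))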
H: Are stationarity and low autocorrelation prerequisites of a regression model? As said in the title, are stationarity and low autocorrelation prerequisites of a general / linear regression model? That is, if a time series is non-stationary or has large autocorrelation, would it be easier or harder to predict using regression models such as a linear model or deep learning? AI: For (linear) regression models, the following assumptions have to hold: a linear relationship; multivariate normality, i.e. the error term is normally distributed with zero mean and some finite variance; no autocorrelation of the errors; no (strong) multicollinearity; homoscedasticity (i.e. no heteroscedasticity). Non-stationarity (caused by a mean and/or variance that changes over time) is prevalent in most level data (e.g. stock prices), which typically has a unit root or is trend-stationary. To cure your data of non-stationarity, it is most of the time sufficient to use the relative change (percentage change, or log changes). Taking logarithmic changes can also mitigate heteroscedasticity a bit. To test for stationarity, you can use the Augmented Dickey-Fuller test, where the null hypothesis states that a unit root is present in your data, i.e. that the variable is non-stationary. Coming to your second question about time series and deep learning: unfortunately, I haven't found a good paper/article yet that discusses the assumptions on the main data properties that have to be satisfied to get correct statistical results. See my own post - not answered yet. However, as deep learning models make use of the same underlying statistical machinery - e.g. probability distributions - or even the same underlying models (see e.g. a linear activation function), I guess that in a time-series framework the same assumptions have to hold; otherwise there is the chance of spurious regression.
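A small sketch of the Augmented Dickey-Fuller test mentioned above, using statsmodels on a simulated price series and its log returns (the data is synthetic, purely for illustration):

import numpy as np
from statsmodels.tsa.stattools import adfuller

np.random.seed(0)
prices = 100 * np.exp(np.cumsum(np.random.normal(0, 0.01, 500)))  # random walk -> non-stationary
log_returns = np.diff(np.log(prices))                             # differenced -> usually stationary

for name, series in [("prices", prices), ("log returns", log_returns)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
# a small p-value rejects the unit-root null, i.e. the series is stationary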
H: Help getting corresponding dataframe values I have two dataframes: self.thisSession_df: id exerciseId sets 1 1 12 2 1 14 2 2 15 2 2 15 self.exercises_df: id exerciseName 1 Squat 2 Pullup I would like to find a way to replace the exerciseId in self.thisSession_df with the corresponding name from self.exercises_df Hopeful result: self.thisSession_df: id exerciseId sets 1 Squat 12 2 Squat 14 2 Pullup 15 2 Pullup 15 I tried a solution that I found on here and modified it to come up with: self.thisSession_df['exerciseId'] = self.thisSession_df['exerciseId'].map(df1.set_index('id')['exerciseName']) This gives me error: string indices must be integers I would appreciate a nudge in right direction! AI: I would recommend merging your two dataframes to get the exercise names: exercise_name_df = thisSession_df.merge(exercises_df, left_on='exerciseId', right_on='id') This will give you a dataframe like exercise_name_df: id exerciseId sets exerciseName 1 1 12 Squat 2 1 14 Squat 2 2 15 Pullup 2 2 15 Pullup Then you can replace exerciseId with the name if you really want to: # reassign id to name exercise_name_df['exerciseId'] = exercise_name_df['exerciseName'] # drop redundant column exercise_name_df.drop(columns=['exerciseName'], inplace=True) I would argue that the above method is more readable than the map() solution. But I think the map() solution should also work with a small tweak: thisSession_df['exerciseId'] = thisSession_df['exerciseId'].map(exercises_df.set_index('id')['exerciseName'])
H: Evaluating likelihood of egg breaking when falling in random container on concrete I am working on a project where I would like to predict whether an egg will break if it is put in a container that is then dropped on concrete. I am looking at the different factors that play a role in whether the egg will break. So far, I have come up with the following: mass friction padding shape of container What other variables could help predict the outcome? AI: This question might be better answered on the physics stackexchange, but I'll take a crack at it. Although the factors you mentioned probably affect the likelihood the egg will break, some of them are not specific enough to be descriptive features. For example, how do you quantify "shape"? I will try to provide some more features that can be quantified, but some of them might take effort to retrieve. Egg Features Eggshell thickness the thickness will probably not be constant, so you can take the mean, standard deviation, and even isolate some key points on the egg to sample such as the top/bottom points. some studies indicate that "eggs with thin but more uniform eggshell were stronger than those with thick but less uniform eggshell" from this paper you might be able to measure thickness using optical techniques such as imaging or lasers. Temperature both of the egg and the surroundings Time since egg was laid this might be approximated by the expiration date, particularly when only using a single brand Density you can calculate this by putting the egg in water (fresh eggs sink) to first determine the volume, then divide the mass by this volume. Calcium deposits the bumps you can see and sometimes feel on chicken eggs are calcium deposits that range in frequency and size. I am not sure how you would quantify these, but they could be another factor. Wikipedia indicates some abnormalities with eggs that you might want to account for or remove from your samples. I copied the abnormalities here: double-yolk eggs yolkless eggs Double-shelled eggs: where an egg may have two or more outer shells, is caused by a counter-peristalsis contraction and occurs when a second oocyte is released by the ovary before the first egg has completely traveled through the oviduct and been laid. Shell-less or thin-shell eggs possibly caused by egg drop syndrome Container Features ideally the impact energy would not be absorbed by the egg, so somehow you need to measure this using estimates of the material metrics as well as shape parameters. One such measurement might be: Average surface area that contacts the ground You might find looking at the container's convex hull useful for visualization purposes Many other factors can contribute such as: Hardness Resilience Elasticity Stiffness Ductility Lastly, there are probably many features that can be calculated by combining the egg and container, but those seem to be much more complicated. For example, measuring the amount of movement, on average, an egg can endure within a container by applying pressure to certain areas. I am not sure how you would easily extract these types of data, but they certainly exist.
H: Thoughts on Feature Engineering of a duration_in_program Variable So I am trying to predict which customers would leave a loyalty program sponsored by X firm, using an ML classification model. I further believe that the duration for which a customer has been in the program affects their likelihood of staying/leaving the program, for reasons such as long-term customers get more loyalty discounts etc.. which may raise the indirect cost/price of them leaving a program. However, one issue that I am currently facing, is that I am calculating duration_in_program, based on the start_date and end_date for each customer. However, I think one issue with this approach of coding the variable is that there are values for duration_in_program that don't map to any "stayed" outcomes. Which kind of makes sense. Like, if everyone in the program with a duration_in_program of 3 yrs left the program, then the model will just learn to always predict that as "left". It is crucial to point that: the program allows customers to stay in the program for a maximum period of five years, after that point, they receive regular prices paid by other cable customers. Therefore, one way I am thinking of dealing with this is that duration_in_program, is determined by an arbitrary cutoff point. For instance, if they still have not left the program by the date at which the 1st cohort completes the program (i.e. 31/08/2017), then we consider their duration_in_program equal to that. So someone that joined in (i.e. 01/09/2013), and still has not left by 31/08/2017, then we set their duration_in_program=4. Any thoughts on my approach above? AI: What you are describing sounds much like a survival model, where you have observations that are still surviving (still in the program) and observations that died along the way (attrition). You would want to look into survival methods and how to censor your data. There are basic regression methods for this where you are trying to predict time-until-event. XGBoost can also handle censored data for survival.
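A hedged sketch of the survival-model framing suggested above, using the lifelines package; the column names and values below are illustrative, not taken from the actual loyalty-program data:

import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "duration_in_program": [1.5, 3.0, 4.0, 5.0, 2.0],  # years observed so far
    "left_program":        [1,   1,   1,   0,   0],    # 1 = left, 0 = censored (still enrolled)
    "monthly_spend":       [40,  60,  25,  55,  30],   # example covariate
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration_in_program", event_col="left_program")
cph.print_summary()   # hazard ratios show how each covariate affects the risk of leaving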
H: Representing a higher-dimensional chart I'm hitting an unusual roadblock in my quest to represent a set of data for the layman, and thought I'd ask for advice on how best to accomplish this task. My data points are represented by a 4-float tuple (a, b, c, d), the sum of which is constant. These represent test conditions, to which an outcome z is assigned. This outcome is also a float, but that's largely irrelevant. The fact that there are four effective x-axes really is throwing me off regarding the choice of visualization, and that's where I'd really like some help. If you were in my shoes, what would you render this as? Just for completeness sake, here is a (small) extract of one of the sets: 0,3000,2500,750,101788 250,3000,2500,500,100458 500,3000,2500,250,99439 750,3000,2500,0,98573 0,3000,2750,500,101993 250,3000,2750,250,100834 500,3000,2750,0,99813 0,3000,3000,250,102370 250,3000,3000,0,101150 AI: I think there are two ways to do it. Dimensionality reduction: using a technique such as PCA, you can represent your data in a lower number of dimensions. This is a good approach if you want to represent all your data in one graph (and since a + b + c + d is constant, the points already lie in a 3-dimensional subspace, so little information is lost). Dimension removal: another option is simply to visualize only a few dimensions at a time, for example a/b, c/d and a/c in three different 2D graphs. While much simpler, this approach does not give you a single-graph visualization.
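A short sketch of the first option (dimensionality reduction): project the (a, b, c, d) tuples to 2D with PCA and colour the points by the outcome z. The file name is an assumption, standing in for however the rows shown above are stored:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA

data = np.loadtxt("conditions.csv", delimiter=",")   # assumed file holding the a,b,c,d,z rows
X, z = data[:, :4], data[:, 4]

X2 = PCA(n_components=2).fit_transform(X)            # a+b+c+d is constant, so little is lost
plt.scatter(X2[:, 0], X2[:, 1], c=z)
plt.colorbar(label="outcome z")
plt.show()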
H: Text embeddings and data splitting I have created some document embeddings which were then used further in text classification tasks. After revisiting my code I was unsure about the workflow I used to train the document embeddings. At the moment I am creating the document embeddings based on the complete corpus available at the time of training. After the training is done, I evaluate the model by checking whether it creates useful similarities between the document embeddings. Those embeddings are then used in machine learning models, and that's where the embeddings are split into train, test and validation sets. Now my question is: when is the right time to split the data? Should I do it before creating the document embeddings to prevent data leakage? I used the mentioned approach because I viewed the creation of the document embeddings as a preprocessing step, so the computer can work with textual data. However, after putting some thought into it, I think it's the wrong approach. I wanted to hear from more experienced NLP practitioners how they approach this task. Sorry for this very basic question. Thanks. AI: TL;DR If you are training the document-embedding model yourself, then split the data before you convert the text into embeddings. If you are using a pre-trained document-embedding model, then it doesn't matter: the conversion is just a preprocessing step and can be executed at any point, because the embedding model never sees your data during its training. Pipeline when training your own document-embedding model: Split your text data into train/validation/test sets. Use your train set to train the document-embedding model. Use your trained document-embedding model to convert the train and validation sets and train your other model (e.g. the classification model). Test the final model by using your trained document-embedding model to convert the test set and evaluating the trained (classification) model on it.
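A minimal sketch of the "split first, then fit the embedding on the training set only" pipeline. TfidfVectorizer stands in for whatever document-embedding model is being trained, and load_corpus() is a hypothetical helper returning raw texts and labels:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

texts, labels = load_corpus()                       # hypothetical helper: raw documents + labels
X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2, random_state=0)

embedder = TfidfVectorizer().fit(X_train)           # fitted on training documents only -> no leakage
clf = LogisticRegression(max_iter=1000).fit(embedder.transform(X_train), y_train)
print(clf.score(embedder.transform(X_test), y_test))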
H: Does the library 'transformers' also work with older versions of TensorFlow? I am working with TensorFlow version 1.14 and I would like to use the BERT embeddings. In order to do so, I was thinking of using the transformers library (https://pypi.org/project/transformers/), but I am not sure whether that will work with my TensorFlow version. AI: It can work, but I recommend using TF2 for any new development: if something does not work, you will end up having to port your code to TF2 anyway, and there is no significant difference in the semantics of the framework.
H: F1_score(average='micro') is equal to calculating accuracy for multiclasification Is f1_score(average='micro') always the same as calculating the accuracy. Or it is just in this case? I have tried with different values and they gave the same answer but I don't have the analytical demonstration. from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score y_true = [0, 1, 2, 0, 1, 2] y_pred = [0, 2, 1, 0, 0, 1] print(f1_score(y_true, y_pred, average='micro')) print(accuracy_score(y_true,y_pred)) # 0.3333333 # 0.3333333 AI: In classification tasks for which every test case is guaranteed to be assigned to exactly one class, micro-F is equivalent to accuracy. The above answer is from: https://stackoverflow.com/questions/37358496/is-f1-micro-the-same-as-accuracy More detailed explanation: https://simonhessner.de/why-are-precision-recall-and-f1-score-equal-when-using-micro-averaging-in-a-multi-class-problem/
H: ML in R (caret-package) missing hyperparameters I have a pretty specific question regarding the caret package however I still hope to finde help here. I recently worked with the caret package and trained a multilayer perceptron with method = 'mlp'. I looked up the github page of Max Kuhn (developer of caret), and it says that you only need to tune one hyperparameter: the size (number of neurons in the hidden layer). Which is really convinient. However it further states that caret for the training builds on the RSNNS Package (by Bergmeier). The mlp model implemented in this RSNNS package has additional tunable parameters over just the size hyperparameter (i.e. learnFunc,hiddenActFunc,Std_Backpropagation, maxit). So I asked myself what values caret uses for those parameters? Default values or are those optmizied? AI: It appears that the defaults are used, except for lin, which is inferred from the type of the target variable: [source code] Note too that you can set any of the other RSNNS parameters through the dots.
H: Would Topic Modelling be classified as NLP or NLU? I recently started my journey into the world of NLP, it's been one heck of a ride. I'm currently trying to understand whether topic modelling would be considered as NLP or NLU. Initially I would assume that topic modelling would be classified as NLP. However, if we use word embeddings for topic modelling wouldn't it then be classified as NLU, as we have deeper understanding of how the words relate to each other in vector space? Maybe I'm having trouble formulating the inherent difference between NLP and NLU, when do we draw the line between the two? Your insight regarding this matter would be highly appreciated. AI: Maybe I'm having trouble formulating the inherent difference between NLP and NLU, when do we draw the line between the two? There is a confusion here: NLP is the whole domain of AI which deals with natural language. It includes virtually any task related to processing language data (usually mostly written data, but that's not the point). Topic modeling is one of these tasks. NLU is the problem of Natural Language Understanding, which is usually considered as one of the main goals of NLP. If anything, NLU is a problem that NLP tries to solve, i.e. a sub-topic in the large area of NLP. Also notice that using words embeddings can improve things, but it doesn't solve all the difficulties related to semantics, far from it. [edit] The scope of NLU is not strictly defined: in the broadest possible definition, it would include anything vaguely related to extracting meaning from text, and in this very generous sense topic modeling would have a connection to it with or without embeddings (and so would a lot of other NLP tasks). Wikipedia says: The umbrella term "natural-language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to robots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages. Many real world applications fall between the two extremes, for instance text classification for the automatic analysis of emails and their routing to a suitable department in a corporation does not require in depth understanding of the text. But the most commonly accepted definition of NLU is stricter, it would only consider tasks which directly involve the interpretation of text in a quite complex setting. The typical example is the "virtual assistant" such as Amazon Alexa, OK Google, Apple's Siri. In this sense topic modeling is simply a completely different task, no matter the "degree of understanding".
H: How to create a score for a SWOT analysis (strengths, weaknesses, opportunities, and threats)? I'm developing a participatory social environmental diagnostic. To do this, I'm using primary (qualitative data from interviews with stakeholders) and secondary data (local socioeconomic data). From this data, I distribute the identified factors in a SWOT matrix - identifying opportunities, strengths, weaknesses and threats that could afftect the local community. After that, each factor is given a scale from 0 to 2, being: 0 - Null or small impact 1 - Medium impact 2 - High impact In this way, I have this kind of data: Ok. Now I build a scale with this formula: = (((strengths + opportunities) - (weaknesses + threats)) / ((strengths + opportunities) + (weaknesses + threats)) * 2 OBS: basically opportunities and strengths are positive values and threats and weaknesses are negative, so I subtract the negative values from the positive ones. The division by the sum of all values are made to have the statistical universe (population, in this case the maximum value). The multiplication by 2 are made to have the possibility to create a range from -200% to 200%, but it could be -100% to 100% too ;) Now I have a scale ranging from -200% to 200% that I interpret in this way: Very unfavorable (-200% to -100%) Unfavorable (-100% to -29%) Balance (-30% to 30%) Favorable (31% to 99%) Very favorable (100% to 200%) OBS: This scale was used at this website (https://en.luz.vc/template/swot-analysis-excel-tool-template/) Now, what I need is to convert this to 0 to 10 scale so I apply this formula: I've discover this formula here: (https://stats.stackexchange.com/questions/25894/changing-the-scale-of-a-variable-to-0-100) Finally, my question is how can I achieve this score (from 0 to 10) right from the beginning, without the need to make the conversion from the first scale? OBS: The first scale give a percentage (-200% to 200%) and the second one a decimal number (0 to 10). Warm regards ;) AI: Let's suppose you have a variable $x_{old} \in [-1,1]$. If you would like to transform that to $ x_{new} \in [-2,2]$ then you could just apply the transformation formula you posted: With $min_{old}=-1, max_{old}=1, min_{new}=-2, max_{new}=2$ you would get $$\frac{max_{new}-min_{new}}{max_{old}-min_{old}} (v-min_{old}) + min_{new} = \frac{2-(-2)}{1+1} (v+1) + (-2) = \frac{4}{2} (v+1) -2\\= 2v+2-2=2v$$ Now if you compare that to your formula = (((strengths + opportunities) - (weaknesses + threats)) / ((strengths + opportunities) + (weaknesses + threats)) * 2 then you can see that it is exactly what you did here by multiplying with 2 at the very end. You transformed your first scale from $[-1,1]$ to $[-2,2]$. Therefore, if you want it to be in the interval $[0,10]$ just apply a different transformation at the very beginning when deriving the first scale: $$\frac{max_{new}-min_{new}}{max_{old}-min_{old}} (v-min_{old}) + min_{new} = \frac{10-0}{1+1} (v+1) + 0 = 5v +5$$ Which means your calculation for the first scale needs to be = (((strengths + opportunities) - (weaknesses + threats)) / ((strengths + opportunities) + (weaknesses + threats)) * 5 + 5 (instead of scaling it to $[-2,2]$). Intuitively I like to think of it like this: you "stretch" the interval from $[-1,1]$ to $[-5,5]$ by multiplying with 5 ("making it 5 times larger"). And then, since you want it to be positive and start at $0$, you "shift" it "upwards" by adding 5.
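For completeness, a tiny sketch that computes the 0-10 score in a single step, exactly as derived above (the input totals are illustrative):

def swot_score(strengths, opportunities, weaknesses, threats):
    positive = strengths + opportunities
    negative = weaknesses + threats
    ratio = (positive - negative) / (positive + negative)   # lies in [-1, 1]
    return 5 * ratio + 5                                     # rescaled to [0, 10]

print(swot_score(strengths=14, opportunities=9, weaknesses=6, threats=4))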
H: Can I build an image classification model where each image has multiple labels? If I am building a model where I need to predict the type of vehicle, its color, and its make, can I use all the labels for a single image and build my model around that? For example, a single image of a vehicle which is a car (car1.jpg) would have labels like Sedan (make), Blue (color) and Car (type of vehicle). Can I make a single model for this, or will I have to make 3 separate models for this problem? AI: One model will work in this case. Let's put it this way: if we were solving a multi-class classification problem (say 5 classes) with a neural network, we would have 5 neurons in the final dense layer and ideally use softmax as the activation function and categorical_crossentropy as the loss function. Now coming back to multi-label classification, let's take an example: your car1 has labels like Sedan, Blue and Type (Good), and your car2 has labels like SUV, Black and Type (Bad). You will have as many neurons in your final layer as there are distinct labels - say 6 different labels across the 2 cars - and the target variables are multi-hot encoded (one column per label). The difference is that we use sigmoid as the activation and binary_crossentropy as the loss function, so the model outputs an independent probability for each label of each image. A sketch follows this answer. Any questions, please post.
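A minimal sketch of that single multi-label model; the feature vectors, layer sizes and the 6 example labels (Sedan/SUV, Blue/Black, Good/Bad) are placeholders:

import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

num_labels = 6
X = np.random.rand(100, 2048)                     # stand-in for image features
y = np.random.randint(0, 2, (100, num_labels))    # multi-hot targets, one column per label

model = Sequential([
    Dense(256, activation="relu", input_shape=(2048,)),
    Dense(num_labels, activation="sigmoid"),      # sigmoid: each label scored independently
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, verbose=0)
print(model.predict(X[:1]))                       # per-label probabilities for one image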
H: How to select features for a ML model I have a dataset with 5K records for binary classification problem. My features are min_blood_pressure, max_blood_pressure, min_heart_rate, max_heart_rate etc. Similarly, I have more than 15 measurements and each of them have min and max columns amounting to 30 variables. When I ran correlation on the data, I was able to see that these input features are highly correlated. I mean min_blood_pressure is highly correlated (>80%) to max_blood_pressure. Each measurement with its min and max feature is highly correlated. Though their individual correlation to target variable is less. So in this case, which one should I drop or how should I handle this scenario? I guess there is min and max variables for a reason. How would you do in a situation like this? Should we find the average of all the measurements and create a new feature? Can anyone help me with this? AI: I'd start here. Most basic idea is to run statistical tests to see how target variable depends on each feature. These include tests like chi-square or ANOVA. Tree-based models can also output feature importance. Check this post. There's plenty of posts on kaggle with code. Might be worth checking those: https://www.kaggle.com/willkoehrsen/introduction-to-manual-feature-engineering https://www.kaggle.com/rejasupotaro/effective-feature-engineering https://www.kaggle.com/willkoehrsen/automated-feature-engineering-tutorial As your data set isn't so drastically large, you could push grid search and check how your model behaves for different factors of PCA. It's hard to tell a priori whether you should drop some features. I guess trying each combination of 30 features is completely out of scope, though you might try dropping most redundant ones. As your data contains categorical features, it might be good idea to give catboost a try. They claim it handles categorical features better than other gradient boosters. Just keep in mind, that default number of estimators is 10 times of that in xgboost. You might lower it for experiments. First, I'd create base model with all the features. Now comes the question: which method to choose? Gradient boosters poses ability of learning the feature importance, those redundant ones will get little weight and you might not see much of an improvement, when dropping features. You might get more insight using more vanilla methods, but in the end you'll be certainly deploying gradient boosting to production, so I don't see much sense in it. I'd stick with xgboost or catboost and perform experiments using same parameters. Please keep in mind: though some features might be highly redundant, they may still contribute some knowledge to your model.
H: LSTM with linear activation function I'm trying to do multi-step regression and I use an output layer: LSTM(1, activation='linear', return_sequences=True) Is this the wrong way of achieving this? Should I use a TimeDistributed(Dense(1)) as output? If yes, why? AI: I don't see any particular advantage in using a linear (i.e. identity) activation. The power of neural networks lies in their ability to learn non-linear patterns in your data. Moreover, the tanh and sigmoid gates are there to control the stream of information that unrolls through time - they have been designed for that - and personally I'd be cautious about changing them. If you want to do multi-step regression you have many options, but the most advanced architecture is the sequence-to-sequence, or seq2seq: from tensorflow.keras import Sequential from tensorflow.keras.layers import Bidirectional, LSTM from tensorflow.keras.layers import RepeatVector, TimeDistributed from tensorflow.keras.layers import Dense from tensorflow.keras.activations import elu, relu seq2seq = Sequential([ Bidirectional(LSTM(len_input), input_shape = (len_input, no_vars)), RepeatVector(len_input), Bidirectional(LSTM(len_input, return_sequences = True)), TimeDistributed(Dense(hidden_size, activation = elu)), TimeDistributed(Dense(1, activation = relu)) ]) where len_input is the length of the input sequence. The output is the same sequence shifted forward by a number of steps equal to the length of your prediction. hidden_size is the size of the Dense() layers and is completely optional. I used the Bidirectional() wrapper, but this is optional too: not all tasks require a bi-LSTM, so feel free to remove it if you need. The combined role of the RepeatVector() and TimeDistributed() layers is to replicate the latent representation and the following network architecture for the number of steps necessary to reconstruct the output sequence: RepeatVector() generates this "multiplication", while TimeDistributed() applies the same Dense layers to each of these repeated signals, generating the final sequence. Both input and output must be 3-dimensional numpy arrays of shape: ( number of observations , length of input sequence , number of variables ) Seq2seq models are harder to train (higher number of parameters, longer training times), but their performance is superior to other RNN architectures. At the present time, any state-of-the-art RNN is some kind of seq2seq.
H: How to split train/test data 50% by class and grouping by Object ID in R? I extract pixel values using reference polygons. The extracted pixel values are in a data frame, where one row represents the values for a single pixel. For the classification I need to split the dataset into test (50%) and training (50%) by class (tree, meadow, etc.). I know how to split a set according to classes. However, I want the values extracted for one polygon to be assigned to only one of the sets (training OR test) so that they are not mixed. For this purpose I want to use the polygon ID (object identification). I would like to do this using the createDataPartition function. These are just two sample classes (there are many more). Here is part of the table with extracted values: "band_1" "band_2" "band_3" "CLASS" "Id" 110 134 119 "tree" 1 112 133 118 "tree" 1 105 125 110 "tree" 2 112 132 117 "tree" 2 109 125 115 "meadow" 6 93 110 101 "meadow" 6 86 106 95 "meadow" 7 105 136 116 "meadow" 7 102 128 111 "meadow" 8 108 129 115 "meadow" 8 113 134 119 "meadow" 8 Here is the code: trainIndeks <- caret::createDataPartition(EXTRACTED$CLASS, p = 0.5, list=FALSE, times = 1) dataTrain <- EXTRACTED[trainIndeks,] dataTest <- EXTRACTED[-trainIndeks,] AI: Instead of: dataTrain <- EXTRACTED[trainIndeks,] dataTest <- EXTRACTED[-trainIndeks,] try splitting on the polygon id (note the explicit EXTRACTED$Id, which is needed when EXTRACTED is a plain data frame): dataTrain <- EXTRACTED[EXTRACTED$Id %% 2 == 1,] dataTest <- EXTRACTED[EXTRACTED$Id %% 2 == 0,] dataTrain will contain only odd Id values, dataTest only even ones. Using this method you may lose the balance of the CLASS distribution between dataTrain and dataTest, but you can improve it by replicating or removing a few records, or by checking the distribution for another split, e.g. dataTrain <- EXTRACTED[EXTRACTED$Id %% 4 < 2,] dataTest <- EXTRACTED[EXTRACTED$Id %% 4 > 1,] EDIT: To randomly change the split, you can use: set.seed(123) N <- 10 #N <- round(max(EXTRACTED$Id)/10) # for more random grouping p <- 0.5 # train/(train+test) ratio idx <- sample(0:(N-1), round(N*p)) dataTrain <- EXTRACTED[EXTRACTED$Id %% N %in% idx,] dataTest <- EXTRACTED[!(EXTRACTED$Id %% N %in% idx),]
H: How to define the number of features to select in RFECV? I am trying to work on feature selection stage for my dataset. I am a newbie to ML. I have around 60 columns and am trying to select top 15 features. I came to know about RFECV for which I wrote a code like as shown below. I am aware that n_features is present for RFE but it is missing for RFECV. Is there anyother way to assign the number of features to select? model = RandomForestClassifier(n_estimators=100, random_state=0) # create the RFE model and select 15 attributes rfe = RFECV(model,step=5, cv=5,min_features_to_select = 15,max_features_to_select = 15) # this doesn't work. `n_features=15` also doesn't work rfe = rfe.fit(X_train_std, y_train) # summarize the selection of the attributes feat = rfe.support_ fret = rfe.ranking_ features = X.columns print(features[feat].tolist()) Can someone help me to get top 15 features only? Where can I configure the n_features paramter? Currently it displays more than 30 features. I don't really know how or from where does it get its number (30)? AI: That's the point of RFECV over RFE: the former selects the best number of features through cross-validation. If you want 15 features, use RFE instead (or some other feature selection method). From the API docs, cross-validated selection of the best number of features and from the User Guide RFECV performs RFE in a cross-validation loop to find the optimal number of features. To respond to the comments (my response is a bit too long for comments): Yes, RFECV is meant to produce the optimal number of features. RFE is run from the full feature set down to 1 feature on each of the cross-validation splits, then those models are scored on the test folds and averaged; then the best-scoring number of features can be taken and then RFE is run again down to that number of features. The thought is that (barring some special domain knowledge) RFECV is better than RFE, except that it may take rather longer to run. By definition, the RFECV's accuracy (on the CV-splitted training set) will be better than RFE with any other fixed number of features. Now, the usual caveats apply. The number of features returned by RFECV may not necessarily generalize the best to unseen data, especially if that data isn't perfectly iid with the training data. Especially with small datasets, the splits made in the CV may affect the results, and perhaps going from the smaller CV train sets to the whole train set should allow more (or fewer?) features. And the top features on each of the splits may not be the same, so again when going to the full training set the question of increasing (decreasing?) number of features is reasonable.
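If you really do want exactly 15 features, a sketch with plain RFE (reusing the variables from the question):

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

model = RandomForestClassifier(n_estimators=100, random_state=0)
rfe = RFE(model, n_features_to_select=15, step=5)   # fixed number of features, no CV
rfe = rfe.fit(X_train_std, y_train)
print(X.columns[rfe.support_].tolist())             # names of the 15 selected features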
H: How do I make a submission of a CNN? I have built a CNN to do image classification for images representing different weather conditions. I have 4 classes of images : Haze, Rainy, Snowy, Sunny. I have built my CNN and evaluated the performances. N ow I have been given a blind test set, so images without a label, and I have to make a submission. So I have to buld a .csv file which contains should contain one line for each predicted class of images, so it should have the structure ,. Thus each line should be a string which identifies the image and its prediction. Now the problem is that I don't understand how to do this. I am really confused because I have never done something similar. My code is the following: trainingset = '/content/drive/My Drive/Colab Notebooks/Train' testset = '/content/drive/My Drive/Colab Notebooks/Test_HWI' batch_size = 31 train_datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rescale = 1. / 255,\ zoom_range=0.1,\ rotation_range=10,\ width_shift_range=0.1,\ height_shift_range=0.1,\ horizontal_flip=True,\ vertical_flip=False) train_generator = train_datagen.flow_from_directory( directory=trainingset, target_size=(256, 256), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=True ) test_datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rescale = 1. / 255 ) test_generator = test_datagen.flow_from_directory( directory=testset, target_size=(256, 256), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=False ) num_samples = train_generator.n num_classes = train_generator.num_classes input_shape = train_generator.image_shape classnames = [k for k,v in train_generator.class_indices.items()] then I build the network: def Network(input_shape, num_classes, regl2 = 0.0001, lr=0.0001): model = Sequential() # C1 Convolutional Layer model.add(Conv2D(filters=32, input_shape=input_shape, kernel_size=(3,3),\ strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation before passing it to the next layer model.add(BatchNormalization()) # C2 Convolutional Layer model.add(Conv2D(filters=64, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C3 Convolutional Layer model.add(Conv2D(filters=128, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Batch Normalisation model.add(BatchNormalization()) # C4 Convolutional Layer model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) #Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C5 Convolutional Layer model.add(Conv2D(filters=400, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C6 Convolutional Layer model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Conv2D(filters=512, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C7 Convolutional Layer model.add(Conv2D(filters=800, 
kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Conv2D(filters=800, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # C8 Convolutional Layer model.add(Conv2D(filters=1000, kernel_size=(3,3), strides=(1,1), padding='valid')) model.add(Activation('relu')) # Pooling model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid')) # Batch Normalisation model.add(BatchNormalization()) # Flatten model.add(Flatten()) flatten_shape = (input_shape[0]*input_shape[1]*input_shape[2],) # D1 Dense Layer model.add(Dense(4096, input_shape=flatten_shape, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D2 Dense Layer model.add(Dense(4096, kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # D3 Dense Layer model.add(Dense(1000,kernel_regularizer=regularizers.l2(regl2))) model.add(Activation('relu')) # Dropout model.add(Dropout(0.4)) # Batch Normalisation model.add(BatchNormalization()) # Output Layer model.add(Dense(num_classes)) model.add(Activation('softmax')) # Compile adam = optimizers.Adam(lr=lr) model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy']) return model #create the model model = Network(input_shape,num_classes) model.summary() I train the network: steps_per_epoch=train_generator.n//train_generator.batch_size val_steps=test_generator.n//test_generator.batch_size+1 try: history = model.fit_generator(train_generator, epochs=100, verbose=1,\ steps_per_epoch=steps_per_epoch,\ validation_data=test_generator,\ validation_steps=val_steps) except KeyboardInterrupt: pass now, I have the images without labels in the google drive, so I define the path to them: blind_testSet = '/content/drive/My Drive/Colab Notebooks/blind_testset' but now I don't know what shoul I do. I really don't know how to define the .csv file I mentioned above. Can someone please help me? Thanks in advance. [EDIT] Ok I am trying to make the predictions on the blind test set, but it is taking really a long time. What I have done is the following: blind_testSet = '/content/drive/My Drive/Colab Notebooks/submission/blind_testset' test_datagen_blind = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rescale = 1. / 255 ) test_generator_blind = test_datagen.flow_from_directory( directory=blind_testSet, target_size=(256, 256), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=False ) preds = model.predict_generator(test_generator_blind,verbose=1,steps=val_steps) the images I have inside this blind test set are 1500, but is it normal that it takes so long? Thanks. [EDIT 2] To try to make the submission I am trying to use a code similar to this: def make_submission(model, filename="submission.csv"): df = pd.read_csv("../input/test.csv") X = df.values / 255 X = X.reshape(X.shape[0], 28, 28, 1) preds = model.predict_classes(X) subm = pd.DataFrame(data=list(zip(range(1, len(preds) + 1), preds)), columns=["ImageId", "Label"]) subm.to_csv(filename, index=False) return subm but it seems to not work in my case. 
I have also tried to keep only the last 2 lines and use them, so: subm = pd.DataFrame(data=list(zip(range(1, len(preds) + 1), preds)), columns=["ImageId", "Label"]) subm.to_csv(filename, index=False) Can someone help me create this csv file? Thanks. AI: You have to make predictions on the blind test set first, then map each prediction to the corresponding image identifier (the generator's filenames), and finally write those two columns to a csv. Your preds array contains one row of class probabilities per image, so you still need to turn it into class labels (e.g. with argmax) before building the DataFrame; a hedged, runnable sketch follows this answer. EDIT: on the question of why prediction/training takes so long: you have many deep convolutional layers, and forward and backward passes through them are expensive. A lot can be said about speeding up the computation, but in short you should be looking at utilizing GPU power.
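A hedged, runnable sketch of the full submission step, reusing the names from the question (model, train_generator, test_generator_blind); the column names in the csv are just an example of the required image id / prediction format:

import numpy as np
import pandas as pd

preds = model.predict_generator(test_generator_blind, verbose=1)
class_indices = np.argmax(preds, axis=1)                          # most likely class per image
index_to_label = {v: k for k, v in train_generator.class_indices.items()}
labels = [index_to_label[i] for i in class_indices]               # e.g. HAZE, RAINY, SUNNY, SNOWY

subm = pd.DataFrame({"ImageId": test_generator_blind.filenames, "Label": labels})
subm.to_csv("submission.csv", index=False)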
H: Where do I find the .csv file for the submission after creating it? I am making a submission for a classification problem with a CNN on Google Colab. So I have arrived at doing this: subm.to_csv('submission.csv', index=False) so in theory I should have finished. The only problem is that I don't know where to find the newly created csv file. Can somebody please help me? Thanks in advance. AI: The file is written to the Colab instance's local filesystem (the current working directory), but files saved there will not persist once the instance is shut down. You have to configure saving them to Google Drive. Try something like this: from google.colab import drive drive.mount('/content/gdrive', force_remount=True) root_dir = "/content/gdrive/My Drive/" base_dir = root_dir + 'fastai-v3/' Now you can access and save files to Google Drive directly, e.g. subm.to_csv(root_dir + 'submission.csv', index=False).
H: What is the difference between a regular Linear Regression model and xgboost with objective set to "reg:linear"? As I understand it, a regular linear regression model already minimizes for squared error, which means that it is the theoretical best prediction for this metric. Does xgboost's "reg:linear" objective do something other than minimize squared error? AI: Linear regression is a parametric model: it assumes the target variable can be expressed as a linear combination of the independent variables (plus error). Gradient boosted trees are nonparametric: they will approximate any* function. Xgboost deprecated the objective reg:linear precisely because of this confusion. It has been replaced by reg:squarederror, and has always meant minimizing the squared error, just as in linear regression. So xgboost will generally fit training data much better than linear regression, but that also means it is prone to overfitting, and it is less easily interpreted. Either one may end up being better, depending on your data and your needs.
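A small illustration of the difference in flexibility: on a non-linear target the linear model is limited by its functional form, while the boosted trees can fit (and potentially overfit) it. The data is synthetic:

import numpy as np
import xgboost as xgb
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, (500, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 500)

lin = LinearRegression().fit(X, y)
boost = xgb.XGBRegressor(objective="reg:squarederror", n_estimators=200).fit(X, y)

print("linear train MSE :", mean_squared_error(y, lin.predict(X)))
print("xgboost train MSE:", mean_squared_error(y, boost.predict(X)))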
H: Monitoring machine learning models in production I am looking for tools that allow me to monitor machine learning models once they are gone to production. I would like to monitor: Long term changes: changes of distribution in the features with respect to training time, that would suggest retraining the model. Short term changes: bugs in the features (radical changes of distribution). Changes in the performance of the model with respect to a given metric. I have been looking over the Internet, but I don't see any in-depth analysis of any of the cases. Can you provide me with techniques, books, references or Software? AI: The changes in distribution with respect to training time are sometimes referred to as concept drift. It seems to me that the amount of information available online about concept drift is not very large. You may start with its wikipedia page or some blog posts, like this and this. In terms of research, you may want to take a look at the scientific production of João Gama, or at chapter 3 of his book. Regarding software packages, a quick search reveals a couple of python libraries on github, like tornado and concept-drift. Update: recently I came across ML-drifter, a python library that seems to match the requirements of the question for scikit-learn models.
H: Trouble in calculating the covariance matrix I'm trying to calculate the covariance matrix for a dummy dataset using the following formula, but it's not matching with the actual result. Let's say the dummy dataset contains three features, #rooms, sqft and #crimes. Each column is a feature vector, and we have 5 data points. I'm creating this dataset using the following code: matrix = np.array([[2., 3., 5., 1., 4.], [500., 700., 1800., 300., 1200.], [2., 1., 2., 3., 2.]]) Let's normalize the data, so the mean becomes zero. D = matrix.shape[0] for row in range(D): mean, stddev = np.mean(matrix[row,:]), np.std(matrix[row,:]) matrix[row,:] = (matrix[row,:] - mean)/stddev Now, I can write a naive covariance calculator that looks at all possible pairs of features, and that works perfectly. def cov_naive(X): """Compute the covariance for a dataset of size (D,N) where D is the dimension and N is the number of data points""" D, N = X.shape covariance = np.zeros((D, D)) for i in range(D): for j in range(i, D): x = X[i, :] y = X[j, :] sum_xy = np.dot(x, y) / N if i == j: covariance[i, j] = sum_xy else: covariance[i, j] = covariance[j, i] = sum_xy return covariance But, if I try to implement the formula mentioned in the beginning, the result is incorrect. The method I am trying out is as follows: def cov_naive_2(X): """Compute the covariance for a dataset of size (D,N) where D is the dimension and N is the number of data points""" D, N = X.shape covariance = np.zeros((D, D)) for i in range(N): x = X[:, i] covariance += x @ x.T return covariance / N What am I doing wrong here? Expected output: array([[ 1. , 0.96833426, -0.4472136 ], [ 0.96833426, 1. , -0.23408229], [-0.4472136 , -0.23408229, 1. ]]) Actual output from cov_naive_2 array([[3., 3., 3.], [3., 3., 3.], [3., 3., 3.]]) AI: Your approach is mathematically right, your problem comes from the fact that numpy matrix multiplication defaults to inner product when providing vectors, independently from them being transposed to row or column. Try modifying the penultimate line like this to force outer product. def cov_naive_2(X): """Compute the covariance for a dataset of size (D,N) where D is the dimension and N is the number of data points""" D, N = X.shape covariance = np.zeros((D, D)) for i in range(N): x = X[:, i] covariance += np.outer(x, x) # <---- here return covariance / N ```
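As an aside, the loop can be avoided entirely: once the rows are zero-mean, the covariance is just a single matrix product, which can be checked against numpy's own estimator (this reuses the standardized matrix from the question):

import numpy as np

def cov_vectorized(X):
    D, N = X.shape
    return X @ X.T / N            # valid because each row has (approximately) zero mean

print(np.allclose(cov_vectorized(matrix), np.cov(matrix, bias=True)))   # expected: True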
H: Exception: Data must be 1-dimensional appears when trying to make a submission I am doing image classification, and until now I have built my network and evaluated it. The only thing that remains to do is to do the submission, so I have a blind test set which contains images with no labels, and using the model I have created, I have to make predictions on this blind test set. I have 4 classes: HAZE, RAINY, SUNNY, SNOWY. My code is the following: blind_testSet = '/content/drive/My Drive/Colab Notebooks/submission' test_datagen_blind = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rescale = 1. / 255 ) test_generator_blind = test_datagen.flow_from_directory( directory=blind_testSet, target_size=(256, 256), color_mode="rgb", batch_size=batch_size, class_mode="categorical", shuffle=False ) preds =transfer_model.predict_generator(test_generator_blind,verbose=1,steps=val_steps) import pandas as pd import numpy as np predicted_class_indices=np.argmax(preds,axis=1) labels = (train_generator.class_indices) labels = dict((v,k) for k,v in labels.items()) print(labels) predictions = [labels[k] for k in predicted_class_indices] filenames=test_generator_blind.filenames results=pd.DataFrame({"image ID":filenames, "Predictions":preds}) results.to_csv("submission.csv",index=False) the problem is that I get the error: Exception Traceback (most recent call last) <ipython-input-22-37b4cd72c225> in <module>() 9 filenames=test_generator_blind.filenames 10 results=pd.DataFrame({"image ID":filenames, ---> 11 "Predictions":preds}) 12 results.to_csv("submission.csv",index=False) 4 frames /usr/local/lib/python3.6/dist-packages/pandas/core/internals/construction.py in sanitize_array(data, index, dtype, copy, raise_cast_failure) 727 elif subarr.ndim > 1: 728 if isinstance(data, np.ndarray): --> 729 raise Exception("Data must be 1-dimensional") 730 else: 731 subarr = com.asarray_tuplesafe(data, dtype=dtype) Exception: Data must be 1-dimensional Can someone help me? Thanks. [EDIT]I have also tried to to : preds =transfer_model.predict_generator(test_generator_blind,verbose=1,steps=val_steps).reshape(-1,1) but it does not work and I get the same error. AI: Python tells you that the data you give for the column "predictions" is not 1-dimensional (i.e. it's not a flat list). And indeed preds is not a 1-dimensional array, what you want to give is the corresponding labels that you collected in predictions. So Instead of this: results=pd.DataFrame({"image ID":filenames, "Predictions":preds}) you probably want: results=pd.DataFrame({"image ID":filenames, "Predictions":predictions})
H: Which combination of 3 hyperparameters to combat overfitting of a convolutional neural network? I have a small dataset with which I want to train a CNN by using Data Augmentation. Since the CNN is overfitting due to the small data set, I would like to optimize some hyperparameters. However, since I would like to use GridSearchCV from Scikit-Learn for this and I would therefore like to optimize only 3 hyperparameters due to reducing computational time. Here the question arises which combination of hyperparameters should I use for grid search? My current approach would be to optimize the learning rate, the dropout layer rate and the number of epochs. I choose the learning rate because the book "Deep Learning" by Goodfellow recommends to always optimize the learning rate. But I'm not sure if my combination of hyperparameters for tuning is really good. What combination would you recommend? Many thank for every hint My previous architectur is as follows: model = Sequential() model.add(Conv2D(32,(3,3)) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(Conv2D(32,(3,3)) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2)) model.add(Dropout(0.2)) model.add(Conv2D(64, (3,3)) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(Conv2D(64,(3,3)) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2)) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(512)) model.add(BatchNormalization()) model.add(Activation('relu')) model.add(Dropout(0.4)) model.add(Dense(10)) model.add(Activation('softmax')) As optimizer I use Adam. AI: If you are asking specifically about overfitting, I would only keep dropout rate from your list of three. Other two can be any chosen from: number of filters in the convolutional layers, number of convolutions, number of dense layers (if any), number of neurons in dense layers. Learning rate should be optimized but not for the purpose of combatting overfitting, at least the way I understand it. Using Keras, you can start with some learning rate - not necessarily very fine tuned, and slowly reduce it over time once the training reaches a plateau using Learning Rate Scheduler in Callbacks. Also, there are ways to find an optimal learning rate such as learning rate finder, so it would probably be a misuse of resources to optimize for learning rate using grid search (I never used them though!). You can also use the Model Checkpoint present in the above Callbacks page to save the model as the validation loss improves only, and ignore the later epochs where overfitting becomes an issue. This way, you won't need to include number of epochs in your search.
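A sketch of the callbacks mentioned above (checkpoint the best model, reduce the learning rate on a plateau); model is the network from the question, while X_train, y_train, X_val and y_val are assumed to be your prepared data:

from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

callbacks = [
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5, min_lr=1e-6),
]
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=100,
          callbacks=callbacks)     # later, overfitted epochs are simply not saved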
H: Mismatch between optimum features and grid scores using RFECV? I have a dataset with 5K columns focused on binary classification. I have more than 60 columns. I am trying to find the best features through RFECV approach. Though it produces 30 optimum features, when I plot in graph, I only see 12 features. Please see my code and plot below model = RandomForestClassifier(n_estimators=100, random_state=0) model_b = LinearSVC(class_weight='balanced',max_iter=1000) # create the RFE model and select 15 attributes rfe = RFECV(model,step=5,cv=5) rfe = rfe.fit(X_train_std, y_train) # summarize the selection of the attributes feat = rfe.support_ fret = rfe.ranking_ features = X.columns print(rfe.n_features_) # this returns 30 as output print(rfe.grid_scores_) this produces the below output I was expecting to see the grid scores for 30 features and in the plot, I was expecting to see the x-axis to have 30 features as well. But it shows only for 12 features. Similarly if I had only 19 features in my dataset, RFECV returns all 19 as optimal feature which is fine. But again in grid score it only shows 4 q1) Does this mean beyond 12 features, there is no increase in model accuracy? q2) I assume grid_scores are nothing but weightage/rank which indicates the influence that a feature has on outcome. But how do I get the name of this 12 features? q3) Wwhy does it show optimum no of features as 30 but grid scores is shown only for 12. Can you help me with these question please? AI: RFE stands for Recursive Feature Elimination, meaning that the search starts off with a full set of features and with each step some features (in your case, 5) are dropped in an attempt to improve the predictive power of the model. If we take a closer look at sklearn's RFECV reference, it states: grid_scores_ : array of shape [n_subsets_of_features] The cross-validation scores such that grid_scores_[i] corresponds to the CV score of the i-th subset of features. The i-th subset of features being referred to is the group of features being evaluated at that particular step of the search. Judging by your output (and the plot), only 12 steps were needed to figure out which 30 features are optimal and that no further improvements could be found using the chosen approach - that's why you see an increase in CV score followed by a plateau. Your assumption in q1 is almost correct, but you are looking at algorithm steps, not features. This is simply an issue of misinterpretation, so is q3. As for q2, you need to look at support_ attribute of the RFECV object. It returns a boolean mask that corresponds to X.columns (if True, then that feature is selected, if False, then not). You can also look at ranking_, but it provides information on the "goodness" of discarded features only (all selected features are ranked 1). The importances of selected features are model-specific. For RandomForestClassifier, you can look at feature_importances_. For LinearSVC, look at coef_. Hopefully this answers your question. Good luck with the selection!
H: Spyder 4: changed behavior of "run cell" / run selected code I'm a user of Spyder. This weekend I updated to Spyder 4, which seems to have received many useful improvements; however, I have a problem with running selected code. The logic seems to have changed. Unfortunately, it is very important for me to be able to select code lines and run them ad hoc, without copying them to the shell each time. In older versions of Spyder this could easily be done by selecting the code and pressing [Ctrl]+[Enter], but now this seems to execute more than just the code I selected. To be honest, I don't even know what it executes. The same applies if I select "run cell" from the menu after selecting code. Can anybody shed some light on this? How can I execute selected code in Spyder 4? This functionality is so important to me that I am really thinking about downgrading to a lower version, even though I would lose big improvements in the editor. AI: In Spyder 4, [Ctrl]+[Enter] runs the current cell (the block delimited by # %% comments), not the selection - that is why it executes more than you expect. F9 (Run selection or current line) is what you are looking for. See also this stackoverflow question.
H: How to distinguish informative and non-informative features - feature importance? I have a dataset with 5K records focused on a binary classification problem. I have more than 60 features in my dataset. When I used XGBoost, I got a feature importance plot; however, I am not sure how to find out whether all of these features are informative. Questions: 1) Yes, I can select the top 15/20/25 etc., but is that how it is done? Is there any minimum F-score that we ought to look for? 2) Or is it that I select the top 10 features, check the accuracy, then add 2-3 features in every round and verify the accuracy manually - is that how it is done? 3) How would you go about it? I tried with the full dataset and the accuracy is only about 86%; when I tried with 15-20 features, it's only about 84. So is manual feature selection the only way to improve further? Can you help me please? AI: My approach is quite different. After creating the feature importance plot, I usually ask the question: "Is a feature of low importance really non-informative, or is its informative nature shadowed by other features?" How do I answer it? I use a simple method: decrease the number of features sampled in each iteration* (so there is a greater chance that the most important features are not among them), add a dummy feature containing random numbers, and create a new feature importance plot. The random dummy feature is non-informative by definition, so the assumption that features with an importance lower than this dummy are non-informative is IMHO not so strange. The next step is to remove these features. Then I usually make a selection starting from the other side: sometimes the most valuable, most important features, while surely informative, cause overfitting, and removing them increases accuracy. A sketch of the dummy-feature trick follows this answer. *) The default fraction of features sampled in each iteration (colsample_bytree) is equal to 1, but during feature selection it may be set to a much lower value in the range 0.2-0.4.
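A sketch of the random-dummy-feature trick, assuming a pandas DataFrame X of features and a target y as in the question; everything with an importance at or below the dummy becomes a candidate for removal:

import numpy as np
import pandas as pd
from xgboost import XGBClassifier

X_dummy = X.copy()
X_dummy["random_dummy"] = np.random.rand(len(X_dummy))        # non-informative by construction

model = XGBClassifier(n_estimators=200, colsample_bytree=0.3).fit(X_dummy, y)
importances = pd.Series(model.feature_importances_, index=X_dummy.columns)
threshold = importances["random_dummy"]
print(importances[importances <= threshold].index.tolist())   # candidates to drop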
H: How to transform a specific type of feature to yield better predictions? I have a dataset with 5K records focused on a binary classification problem. I have about 60 features. Out of these, around 45-46 features are of 'Min' and 'Max' type. For example, minimum blood pressure, maximum blood pressure, minimum heart rate, maximum heart rate, minimum potassium, maximum potassium etc. Several other vital parameters like sodium, urea etc. follow this pattern of min and max. Any suggestions on how I can transform them without losing information? Currently, when I use them as-is, I get only around 86% accuracy, with a recall for the minority class of only around 70. Resampling didn't help much. Your suggestions and experience on how to transform these to yield better predictions would really be helpful. AI: ML sometimes has problems finding relations that are obvious to people, so I would help it a little by creating a new feature: relX = (maxX-minX)/minX, because some diseases are more correlated with jumps in a parameter than with its level. However, such a relX may be too strongly correlated with the target and may cause overfitting. Usually, after creating such a feature, it is better to remove one of the original features, sometimes both.
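A small sketch of adding that relative-jump feature for each min/max pair; df and the column names are assumptions based on the features listed in the question, and the min values are assumed to be non-zero:

import pandas as pd

def add_relative_range(df: pd.DataFrame, pairs):
    for min_col, max_col in pairs:
        new_col = "rel_" + min_col.replace("min_", "")
        df[new_col] = (df[max_col] - df[min_col]) / df[min_col]   # relative jump of the parameter
    return df

df = add_relative_range(df, [("min_blood_pressure", "max_blood_pressure"),
                             ("min_heart_rate", "max_heart_rate")])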
H: CV (Curriculum Vitae) Recommendation System guidance I am building a recommender system which matches people's CVs with a vacancy. So far, I have used TF-IDF & cosine similarity to get a matching score between a vacancy and a candidate's CV. I want to know whether there are any other approaches to creating such a recommendation engine. AI: Try word embeddings and unsupervised learning; there are write-ups on Medium about this approach. The main idea is that similar users (similar CVs and vacancies) will end up in similar clusters.
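As a rough sketch of the embedding-plus-clustering idea (the toy documents, vector size, and number of clusters are all made up; a real system would train on a large corpus of CVs and vacancies):

import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

docs = [
    "python machine learning engineer with sql experience",
    "java backend developer spring microservices",
    "data scientist python statistics sql",
]
tokenized = [d.split() for d in docs]

# train small word vectors on the corpus (toy settings)
w2v = Word2Vec(sentences=tokenized, vector_size=50, min_count=1, epochs=50)

def doc_vector(tokens):
    # average the word vectors of a document to get one vector per CV/vacancy
    return np.mean([w2v.wv[t] for t in tokens if t in w2v.wv], axis=0)

doc_vecs = np.vstack([doc_vector(t) for t in tokenized])

# cluster the documents; similar CVs and vacancies should share a cluster
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(doc_vecs)
print(labels)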
H: Doubt about the improvement in performance given by each layer in a deep neural network Today I was discussing neural networks with a friend, and he said that the more layers we add, the smaller the increase in accuracy each individual layer gives. Is this true? I know that it is generally better to go deep rather than wide when building a neural network, but is it true that with a lot of layers, each one individually contributes less to performance? Thanks in advance. AI: Yes, the returns from extra depth diminish, and past a certain point additional layers can even hurt accuracy because very deep plain networks become hard to optimize. Look at residual networks, whose skip connections were introduced precisely to keep very deep networks trainable.
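For reference, a minimal sketch of a residual (skip-connection) block in Keras, since that is the idea the answer points to (the layer sizes and input shape are arbitrary):

from tensorflow.keras import layers, models

inputs = layers.Input(shape=(64,))
x = layers.Dense(64, activation="relu")(inputs)

# a small block whose output is added back to its input (the skip connection)
y = layers.Dense(64, activation="relu")(x)
y = layers.Dense(64)(y)
out = layers.Activation("relu")(layers.Add()([x, y]))

outputs = layers.Dense(1, activation="sigmoid")(out)
model = models.Model(inputs, outputs)
model.summary()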
H: Why is the code provided in this book mostly commented out? I am new to text mining and have been playing around with the code provided in the book Applied Text Analysis With Python. I ran into a problem with this specific part: https://github.com/foxbook/atap/blob/master/snippets/ch08/oz.py In this Python script, most of the code is commented out. Is there a specific reason why someone would do that? What's the correct way to uncomment it? I am using a Jupyter notebook; I pasted the code into a cell and deleted each '#' one by one, but I think I messed up the indentation. AI: Welcome to the site! It looks to me that the code you reference is just showing some example usage for the functions defined above it. There are several reasons someone might do this: sometimes it's so that auto-documentation tools (e.g., doxygen) can pick up the example code for downstream formatting; other times it's so that the reader can identify the code they might need to modify for their own use case. I suspect the latter here. If you're interested in running the code as written, you could copy it into a text editor like TextMate or VSCode and do a vertical/column select (usually shift+option+click/drag) to highlight and delete the comment characters!
H: Constructing the confusion matrix from given metrics I am given the following metrics for a certain classifier: -Total number of cases in the dataset = 110 -Accuracy: 92.7% -Precision: 96.9% -Recall: 95% Is this information enough to reconstruct the confusion matrix? AI: [edit thanks to comment] I'm assuming this is a binary classifier, since normally a multi-class classifier would not be evaluated with plain precision/recall (it would require micro/macro precision/recall). Yes, that should be enough: accuracy = 92.7%: $$\frac{TP+TN}{110}=0.927 \rightarrow TP+TN=101.97$$ This means we have 102 correct predictions, so $FP+FN=8$ incorrect predictions (since $TP+FP+TN+FN=110$). precision = 96.9%: $$\frac{TP}{TP+FP}=0.969 \rightarrow TP=31.258\times FP$$ recall = 95%: $$\frac{TP}{TP+FN}=0.950 \rightarrow TP=19 \times FN$$ Substituting $FP=TP/31.258$ and $FN=TP/19$ into $FP+FN=8$ gives: $$\frac{TP}{31.258}+\frac{TP}{19}=8 \rightarrow TP \approx 94.5$$ Let's assume that means 95 true positive instances, so we get: $FP = 3$, $FN = 5$, $TN = 7$.
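A quick numeric check of this reconstruction in plain Python:

TP, FP, FN, TN = 95, 3, 5, 7

total = TP + FP + FN + TN         # 110
accuracy = (TP + TN) / total      # 0.927
precision = TP / (TP + FP)        # 0.969
recall = TP / (TP + FN)           # 0.950

print(total, round(accuracy, 3), round(precision, 3), round(recall, 3))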
H: Change number format of confusion matrix I have the following confusion matrix: I would like to change the format of the numbers that, when they exceed the value 99, appear in scientific format. I would like them to appear in a standard format. That is, 3.3e+02 would become 330. This is the code I have implemented: cm = confusion_matrix(y_test, predicted_classes) plt.figure(figsize=fig_size) plt.title('Confusion matrix of the classifier') sns.heatmap(cm, annot=True) plt.xlabel('Predicted') plt.xticks(rotation=45) plt.ylabel('True') plt.ioff() plt.show() AI: Add the fmt parameter to sns.heatmap(). You can rewrite your code as follows to get all numbers in standard (non-scientific) format: sns.heatmap(cm, annot=True, fmt=".1f") Since a confusion matrix contains integer counts, fmt="d" also works and displays plain integers (e.g., 330 instead of 330.0). Refer to this link for additional customization.
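For completeness, a self-contained sketch of the fix with made-up labels (in your case cm already comes from y_test and predicted_classes):

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

rng = np.random.RandomState(0)
y_true = rng.randint(0, 2, 500)
y_pred = rng.randint(0, 2, 500)
cm = confusion_matrix(y_true, y_pred)

# fmt="d" prints plain integer counts instead of 3.3e+02-style annotations
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues")
plt.title("Confusion matrix of the classifier")
plt.xlabel("Predicted")
plt.ylabel("True")
plt.show()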