H: What is the best approach for classifying non-English text What would be the best approach for classifying non-English (Sinhala / Tamil) text? Currently I use Fasttext. Are there any better options? I want to classify user questions into chatbot intents. Therefore, there may be many target classes. AI: As far as I know, the best approach is to use a pretrained multilingual embedder. An embedder encodes your text into a language-agnostic latent space: you input your text and get a fixed-length numerical vector as output. You can use these latent-space encodings as feature vectors to train discriminative models, and they also work well with resampling techniques like SMOTE or ADASYN. Some time ago Facebook released a model called LASER. You can read about it here. It supports Sinhala and Tamil as well. Here is a github repository. There is also an unofficial distribution on PyPI, which substitutes its own tools for tokenization and BPE encoding. For the sake of convenience I've been working mostly with this distribution and I can confirm it works just fine. Here is a repository. I'd also suggest exploiting the embedder's cross-lingual nature: it covers a lot of languages, which means you can train your models on e.g. English and prediction will work for Tamil out of the box! Natural language lives in a very high-dimensional space and most seminal models use such encoders; to my knowledge this is the go-to approach for any language.
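A minimal sketch of this workflow, assuming the unofficial laserembeddings PyPI distribution mentioned above and a scikit-learn classifier on top; the setup command, the embed_sentences signature and the language codes ('si', 'ta') are assumptions to verify against the version you install:

# pip install laserembeddings && python -m laserembeddings download-models  (assumed setup)
from laserembeddings import Laser
from sklearn.linear_model import LogisticRegression

laser = Laser()

# Encode Sinhala/Tamil questions into language-agnostic fixed-length vectors
train_texts = ["...", "..."]              # user questions
train_labels = ["intent_a", "intent_b"]   # chatbot intents
X_train = laser.embed_sentences(train_texts, lang="si")   # 'ta' for Tamil

# Any discriminative model can be trained on the embeddings
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

# Cross-lingual transfer: embed a query in another language, reuse the same classifier
X_query = laser.embed_sentences(["..."], lang="ta")
print(clf.predict(X_query))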
H: Why SVM gridsearch takes longer time? I have a dataset of 5K records and 60 features focussed on binary classification. Please find my code below for SVM parameter tuning. It's running for a longer time than Xgb, LR and Rf. The other algorithms returned results within minutes (10-15 mins) whereas SVM is running for more than 45 mins. Questions 1) Is SVM usually slower and takes longer time? 2) Is there any issue with my code below? 3) How can I make the gridsearch faster?
from sklearn.svm import SVC
param_grid = {'C': [0.001,0.01,0.1,1,10,100,1000], 'gamma': [1, 0.1, 0.01, 0.001, 0.0001], 'kernel': ['linear', 'rbf','poly'], 'class_weight':['balanced']}
svm=SVC()
svm_cv=GridSearchCV(svm,param_grid,cv=5)
svm_cv.fit(X_train_std,y_train)
AI: Simple: the optimization problem behind SVM is (at least) of quadratic order in the number of samples. Just check the first line of the documentation: "The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples."
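A few generic ways to speed the search up (not from the answer above, just common practice): run folds in parallel, sample fewer candidates at random, and enlarge the kernel cache. A hedged sketch reusing the question's X_train_std / y_train names:

from sklearn.svm import SVC
from sklearn.model_selection import RandomizedSearchCV

param_dist = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000],
              'gamma': [1, 0.1, 0.01, 0.001, 0.0001],
              'kernel': ['linear', 'rbf', 'poly'],
              'class_weight': ['balanced']}

svm = SVC(cache_size=1000)                    # bigger kernel cache (in MB)
search = RandomizedSearchCV(svm, param_dist,
                            n_iter=20,        # sample 20 of the 105 combinations
                            cv=5,
                            n_jobs=-1,        # parallelise over CPU cores
                            random_state=42)
search.fit(X_train_std, y_train)
print(search.best_params_)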
H: How to yield better AUC score? I have a dataset with 5K records and 60 features focused on binary classification. Class proportion is 33:67. Currently I am trying to increase the performance of my model, which is stuck at an F1-score of 89% (majority) and 75% (minority) class and an AUC of 80%. I tried Gridsearchcv and feature engineering. Though I don't explicitly call out the best parameters on Gridsearch below, I guess when I fit, it takes the best parameters only. But nothing seems to help. Does this mean my data has issues? When I mean issue, I am not talking about missing values. I mean the way the data was extracted. Can it be data entry issues? This is what I tried for gridsearchcv. Am I doing it right?
import xgboost as xgb
parameters_xgb = {'learning_rate': (0.1,0.01,0.05,0.5,0.3,1), 'n_estimators': (100,200,500,1000), 'max_depth':(5,10,20),}
xg_clf = xgb.XGBClassifier()
xgb_clf_gv = GridSearchCV(xg_clf,parameters_xgb,cv=5)
# using cross validation with best hyperparameters
xgb_clf_op = xgb_clf_gv.fit(X_train_std,y_train)
y_pred = xgb_clf_op.predict(X_test_std)
cm = confusion_matrix(y_test, y_pred)
print(cm)
print("Accuracy is ", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
I also tried catboost and gb. The AUC is only around 80-82% throughout in test data. AI: I would not necessarily call it a data issue. There is always some performance ceiling that you just cannot surpass, depending on the dataset of course. Generally, feature engineering and understanding the data will yield much greater gains than hyperparameter optimization alone, which often yields only marginal increases (and in some cases even does worse than the default parameters).
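On the question's doubt about whether the best parameters are actually used: GridSearchCV with refit=True (the default) refits the winning combination on the whole training set and uses it for prediction. You can inspect it explicitly on the fitted xgb_clf_gv object from the question:

# GridSearchCV keeps the best model internally when refit=True (default)
print(xgb_clf_gv.best_params_)             # hyperparameters that won the cross-validation
print(xgb_clf_gv.best_score_)              # mean CV score of that combination
best_model = xgb_clf_gv.best_estimator_    # estimator refit on the full training data

# .predict() on the search object already delegates to best_estimator_
y_pred = best_model.predict(X_test_std)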
H: What's wrong with RF/SVM with word embedding (GloVe)? I searched many times in google for examples on word embedding (specifically GloVe) with Random forest and I couldn't find any single example. For GloVe, it was all either LSTM or CNN. Maybe there's something I don't understand, but why are only these 2 algorithms used with GloVe? What's wrong with the other algorithms? AI: Usually these embeddings form high-dimensional matrices (one vector per token, stacked into a sequence), and not all of that information can be modeled by less complex, order-agnostic models like RF and SVM, which is why sequence models such as LSTMs and CNNs dominate the examples.
H: How can I compare the contribution of two predictors in two different sorts for machine learning algorithms? I'm new to machine learning and try to clarify my problem in research. I just wonder if I can compare the importance of two different variables in two different sorts. For example, A and B are two variables whose contribution to ML accuracy I want to compare. The rest of the variables (like C, D, and E) for each sort are the same. If I got the results that the rest of the variables (C, D, and E) contribute more (obtain a higher importance score) in the model predicted by A and the rest, can I just say B performs better for ML and its contribution is more significant than in the model with A and the rest? AI: No, you cannot. You are forgetting about variable interactions (C-A, D-A, E-A, etc.) that could favor A. You could instead answer the following question: if I were to measure the information in my variables via their variability, how would I proceed? For example with PCA. The more information you have in a variable (predictor), the higher the chance that it will contribute more to accuracy.
H: What do many low-importance features indicate? I have a dataset where I am focusing on a binary classification problem. In total, I have around 60 features in my dataset. When I used Xgboost Feature Importance, I was able to see that the top 5 features account for 42% whereas the rest of the 50 features account for 40-49% (each feature about 1%) and the remaining 8-10 features have zero importance or less than 1% of importance. This is my best parameter list for Xgboost after gridsearch op_params = {'alpha': [10], 'as_pandas': [True], 'colsample_bytree': [0.5], 'early_stopping_rounds': [100], 'learning_rate': [0.04], 'max_depth': [6], 'metrics': ['auc'], 'num_boost_round': [10000], 'objective': ['reg:logistic'], 'scale_pos_weight': [3.08], 'seed': [123], 'subsample': [0.75]} Since I have many low importance features, should I try to use them all in my model to increase the model metrics? When I built the model only with the top 5 features, I was able to get 80% accuracy. I am trying to understand: is it even useful to make use of these low-importance features for prediction? Shown below is my feature importance in descending order. Do they even really help? Any insights would really be helpful AI: It's all about a tradeoff. The more unimportant features you add, the more marginal the benefits get, while you risk injecting more complexity and potentially overfitting (Occam's razor). Also be careful with the default feature importance approach. Read this.
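Since the answer warns about the default (gain/frequency-based) importances, one common alternative is permutation importance, which measures how much a metric drops when a feature is shuffled. A hedged sketch using scikit-learn on an already fitted XGBoost classifier (the variable names are assumptions):

from sklearn.inspection import permutation_importance

# model: an already fitted xgb.XGBClassifier; X_val / y_val: held-out data
result = permutation_importance(model, X_val, y_val,
                                scoring="roc_auc",
                                n_repeats=10,
                                random_state=42)

# Features whose shuffling barely changes AUC are candidates for removal
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{X_val.columns[idx]}: "
          f"{result.importances_mean[idx]:.4f} +/- {result.importances_std[idx]:.4f}")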
H: How to best use geographical information as a factor? I am trying to predict crime rates and I have naively used lat and long as two separate factors (which seem to work well!). Are there any best practices for location as a factor? AI: If you are predicting crime rates in a certain region, you can use clustering to derive useful information. In clustering, we try to group similar data points together and treat each group as a single class. We can understand this with an example: we have various points (latitude and longitude) and each of them represents a certain type of crime. Even by mere observation, we can often see that some specific types of crime occur in a particular region only. So we cluster points which are in the vicinity of each other and belong to the same class (kind). For example, if an emergency call arrives from an area with many cases of robbery, the probability that the victim has also suffered a robbery is higher than for any other crime. As we get more data, we can retrain our clustering algorithm to form more clusters and thereby increase efficiency.
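A minimal sketch of turning raw coordinates into a cluster-id feature with K-Means; the dataframe and column names are assumptions, and Euclidean distance on lat/long is only a rough approximation over small areas:

import pandas as pd
from sklearn.cluster import KMeans

# df has one row per recorded incident with 'lat' and 'lon' columns
coords = df[["lat", "lon"]].values

kmeans = KMeans(n_clusters=20, random_state=42).fit(coords)

# Use the cluster id as a categorical "neighbourhood" feature instead of raw lat/long
df["geo_cluster"] = kmeans.labels_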
H: Efficient recurrent network for sequences of varying length Suppose I have a bunch of sequences of varying lengths. The absolute majority of them are short, just a few dozen items long. However, very few of them are significantly longer - more than a hundred items long. The question is, how to organize them efficiently as an input to a recurrent layer? Padding doesn't work, since many sequences would need to be heavily padded. Limiting batch size is not an option - these sequences are obtained as parts of a larger structure. AI: Sequence bucketing. Depending on the input lengths of the sequences, you group sequences of similar length into the same batch and pad each batch only to the length of its longest member, which dynamically changes the padding and speeds things up. Take a look here: Sequence bucketing.
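A rough sketch of the bucketing idea, assuming plain Python lists of token sequences; real pipelines usually delegate this to a framework's length-grouped sampler:

import numpy as np

def bucketed_batches(sequences, batch_size, pad_value=0):
    """Yield batches padded only up to the longest sequence in that batch."""
    # Sort by length so similarly sized sequences end up in the same batch
    order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        max_len = max(len(sequences[i]) for i in idx)
        batch = np.full((len(idx), max_len), pad_value)
        for row, i in enumerate(idx):
            batch[row, :len(sequences[i])] = sequences[i]
        yield batch, idx   # idx lets you map predictions back to the original order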
H: Which kNN model to choose? I am trying to tune "n_neighbors" for a kNN model and I have the following problem: based on the mean cross validation score, the optimal kNN model should be the one with 10 neighbors. On the other hand, when I plot the "scores vs neighbors" graphs I see that there are models whose score distance between the training and the test data is much smaller (for instance the model with 20 neighbors). I am new to ML and this is still very confusing to me.. but should I stick to the 10-neighbors model, or is the 20-neighbors model better? How do I decide? Any help is much appreciated. Here is my code and the graphs:
best_score = 0
neighbors = np.arange(1,31)
all_train_scores = []
all_test_scores = []
for n_neighbors in neighbors:
    reg = KNeighborsRegressor(n_neighbors=n_neighbors, metric='manhattan')
    score = cross_val_score(reg, X_train, y_train, cv=5)
    score = np.mean(score)
    if score > best_score:
        best_score = score
        optimal_choice = {'n_neighbors': n_neighbors}
    reg.fit(X_train, y_train)
    train_score = reg.score(X_train, y_train)
    test_score = reg.score(X_test, y_test)
    all_train_scores = np.append(all_train_scores, train_score)
    all_test_scores = np.append(all_test_scores, test_score)
AI: It depends on a couple of things, but one of the important ones is how big your set is. Note that the difference is 0.01 in R squared, so if the dataset is small, taking 20 neighbours to determine each prediction might be costly; on the other hand, if you can afford it, 10 neighbours might be too few to ensure a stable estimate. Depending on your next usage I would weigh in the data size.
H: Recommended data cleaning techniques for multivariate time series prediction? I have to predict the next step(s) in a multivariate time series with about 30 features and 50,000 samples. I am thinking of using LSTM. Which techniques are usually recommended for cleaning the data when using LSTM? Does it make sense to transform the data into a stationary time series when using LSTM? Should the data be normally distributed when you are using PCA? There is also a very large amount of missing timestamps. Does it make sense to impute/fill (by forward filling or something else) big gaps or is it just better in that case to ignore the missing data completely? AI: 3 questions: Does it make sense to transform the data into a stationary time series when using LSTM? Always. Stationarity is always a desired property and the data should be transformed (read more). Should the data be normally distributed when you are using PCA? No. There are multiple assumptions around PCA if you use certain matrix factorization techniques; for example, if using SVD you should make sure that your matrix is of full rank. Does it make sense to impute/fill (by forward filling or something else) big gaps or is it just better in that case to ignore the missing data completely? If for certain features a lot of data is missing you should drop them altogether. Don't try to impute it; you would add false information.
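A small sketch of the stationarity step mentioned above: difference each series and check it with an Augmented Dickey-Fuller test from statsmodels (the dataframe df is an assumption):

import pandas as pd
from statsmodels.tsa.stattools import adfuller

# df: one column per feature, indexed by timestamp
for col in df.columns:
    p_value = adfuller(df[col].dropna())[1]
    if p_value > 0.05:               # cannot reject a unit root -> likely non-stationary
        df[col] = df[col].diff()     # first-order differencing as a simple transform

df = df.dropna()  # the first row becomes NaN after differencing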
H: K-Fold Cross Validation for NNs When using K-Fold CV, is it still useful to have a Train/Validation/Test split? Or simply just a Train/Test? I.e. split up data into k bins, and leave one out for testing, train on the rest, and take average of the scores. AI: It depends. If you're evaluating your model's performance without tuning hyperparameters, then a train/test split is sufficient. If you're tuning hyperparameters, then you need a validation set. Within each fold, you'll train on the training set (of course), using the validation set to tune hyperparameters. Then you'll evaluate performance on the test set.
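A compact way to combine both ideas is nested cross-validation: an inner search tunes hyperparameters, an outer K-fold estimates generalization. A sketch with scikit-learn (the estimator and grid are placeholders):

from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

param_grid = {"C": [0.1, 1, 10]}

# Inner loop: hyperparameter tuning on the training part of each outer fold
inner_search = GridSearchCV(SVC(), param_grid, cv=3)

# Outer loop: unbiased performance estimate of the whole tuning procedure
outer_scores = cross_val_score(inner_search, X, y, cv=5)
print(outer_scores.mean())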
H: How to interpret coefficients from logistic regression? I ran a logistic regression (statsmodels) on my data with 60 features using the below code
import statsmodels.api as sm
logit_model=sm.Logit(y_train,X_train_std)
result=logit_model.fit()
print(result.summary())
I was able to see that a few variables had negative coefficients and a few had positive coefficients. Am I right to understand that irrespective of the sign of the coefficients, all the below variables are significant predictors that influence the outcome? Or does a negative coefficient mean they don't have any influence on the model outcome? But the p-value is significant. Am a bit confused. Can you help in simple terms please? The below output shows the records whose p-values were less than 0.05 AI: They are all significant, but each for a certain thing. What do I mean? The model predicts the evidence (the log-odds): you have a "linear regression part", but instead of y you are modelling the evidence for class 1. So the sign of a coefficient tells you which binary class (0 or 1) that variable pushes the prediction towards as its value increases: a positive coefficient increases the log-odds of class 1, a negative one decreases it. Different variables are therefore significant for different directions of the outcome, but they all add information.
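For reference, the quantity the coefficients act on is the log-odds (the "evidence" mentioned above):

$$\log\frac{P(y=1\mid X)}{1-P(y=1\mid X)} = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k$$

so a positive $\beta_j$ raises the probability of class 1 as $x_j$ grows and a negative $\beta_j$ lowers it, while the p-value only says whether $\beta_j$ is distinguishable from zero.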
H: How to pass linear regression weights to Xgboost regressor? I'm trying to build an xgboost regressor or a catboost regressor for a task. I have a working linear regression model. I also trained an xgboost regressor model for the task but it was worse than the linear regression model. I am wondering if there is a way to pass the linear regression weights (model parameters) as an initial set of parameters to the xgboost (or catboost) model to ensure a performance gain? E.g. if $w_0 x_0 + w_1 x_1 + w_2 x_2 + w_3 x_3 = y$ is the linear regression model, is there any way to tell xgboost to start from the same equation (and get better while training)? AI: The answer is NO. Why? The weights are the learned parameters of the linear regression model, and they have no direct counterpart in the parameters of xgboost or catboost (which learn trees, not coefficients). What you can do is combine the models (if you really want to use xgboost or catboost additionally) - SEE this. NOTE: just because a model is more powerful, it does not mean that it will beat linear regression on every dataset ;)
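One way to "combine models" as suggested above is scikit-learn's stacking API, which fits a meta-learner on top of the linear regression and the boosted model; a hedged sketch (not the only way to combine them, hyperparameters are placeholders):

from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
import xgboost as xgb

stack = StackingRegressor(
    estimators=[
        ("linear", LinearRegression()),
        ("xgb", xgb.XGBRegressor(n_estimators=300, learning_rate=0.05)),
    ],
    final_estimator=Ridge(),   # meta-model blends the two base predictions
    cv=5,
)
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))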
H: Pandas dataframe with multiple observations per model I currently have a pandas dataframe with the following format
   Model   Metric       Value
0  Ours    Accuracy     [0.79, 0.79, 0.82]
1  Theirs  Accuracy     [0.68, 0.56, 0.64]
2  Ours    Sensitivity  [0.64, 0.55, 0.55]
3  Theirs  Sensitivity  [0.82, 0.82, 0.78]
4  Ours    Specificity  [0.68, 0.48, 0.6]
5  Theirs  Specificity  [0.68, 0.48, 0.6]
In the evaluation script I am writing I want to be able to take into account situations where the training of a model is repeated multiple times; the results are stored in a numpy array (Value column). For visualization with seaborn, I believe I need a long form where it looks something like this:
   Model   Metric       Value
0  Ours    Accuracy     0.79
1  Ours    Accuracy     0.79
2  Ours    Accuracy     0.82
3  Theirs  Accuracy     0.68
4  Theirs  Accuracy     0.56
5  Theirs  Accuracy     0.64
6  Ours    Sensitivity  0.64
...
I cannot figure out how to do this. AI:
import numpy as np
import pandas as pd

# create a toy df with one array of repeated observations per row
df = pd.DataFrame({'model': ['our', 'theirs'],
                   'metric': ['acc', 'sen'],
                   'value': [np.array([1, 2, 3]), np.array([4, 5, 6])]})

# unzip the array into three columns, one per repetition
df[['ex1', 'ex2', 'ex3']] = pd.DataFrame(df['value'].to_list(), index=df.index)

# melt the df to long form: one row per (model, metric, repetition)
long_df = df.melt(id_vars=['model', 'metric'], value_vars=['ex1', 'ex2', 'ex3'])
You may need to sort the dataframe as you like.
H: How to get significance level for ranked features? I am aware of below approaches of feature selection a) Feature Importance methods which are available in tree based models like Random Forest and Xgboost,GradientBoost etc. b) statsmodel.logistic regression which in it's summary output provide us the results which contains whether variables are significant or not (P-value) c) SelectKbest which uses ANOVA, Chi-square etc to compute the influence of input variable on target attribute But unfortunately with methods b and c, it doesn't consider the feature interaction. Am I right? It works by considering each column to the target variable Whereas with methods a it returns the ranking but we aren't sure about whether they are significant or not. Is there anyway to know from Feature Importance whether the Features are significant or not? I understand features occurring in top 4-5 places could be significant but is there anyway to test/validate this? Or is it like I pick each feature (out of say top 20 assuming they have a role) from feature importance result and do a SelectKbest test or statsmodel summary? How can I know that the features that I select from Feature importance model are significant? AI: 1. univariate feature importance (that is c) in your list) You are correct. Univariate statistics to estimate feature importance does not capture feature interactions. But they are fast and simple. 2. model-based feature importance (that are a) and b) in your list) On the other hand model-based feature importance estimates can capture interactions as long as the model is capable of doing so (see "Introduction to Machine Learning with Python"; Mueller, Guido; 2017; p. 238/239). Which is not the case for linear regression. For model-based feature importance estimates using trees there are ways to derive p-values. And at least R does have some implementations for that. Have a look at section "2.5 Importance testing procedures" in this paper The revival of the Gini importance?.
H: unique predictions for "multi-label multi-output" classification task Let’s assume that four participants (A, B, C and D) take on five sport-challenges (e.g. swimming, running, ...). Our goal is to predict the placement of each participant for each challenge. Moreover, let’s assume we have appropriate predictors. We know that each placement (1 to 4) is unique for each challenge (only one winner ...). My questions: I think this prediction task is a multi-label multi-output classification, right? Are there any algorithms which provide unique forecasts (for each class)? In other words, the algorithm should classify each person into one unique class (1 to 4). Obviously, we know a priori that identical placements are very unlikely. Thank you! Greets AI: 2 questions. I think this prediction task is a multi-label multi-output classification, right? Yes. But you could consider splitting these joint classification tasks into separate ones (i.e. for swimming, what is the placement of A, B, C, D, etc.) if your data allows it. Are there any algorithms which provide unique forecasts (for each class)? Think in terms of probabilities. The strongest (most discriminatory) prediction is the one whose class probabilities overlap the least. Taking these probabilities and "rounding" them gives you the unique classes; you may have to force some predictions when the model is highly uncertain (for example when it predicts 25% for every placement), in which case other heuristics should be used, e.g. create a new variable that says how the competitor fared in other competitions.
H: Proper masking in the transformer model Concerning the transformer model, a mask is used to mask out attention scores (replace with 1e-9) prior to the matrix multiplication with the value tensor. Regarding the masking, I have 3 short questions and would appreciate if you could clarify those: Are the attention scores the only place (besides the loss) where masks are needed or should the input be masked out as well? I am asking because is see implementations where a linear layer for the query, key and values is with bias=False is used. Is the reason for setting bias=False to have zeros preserved in the output of the layers or is there a different explanation? Should a padding_idx be used when learning word embeddings in order to zero out the padded tokens? AI: I will take as reference fairseq's implementation of the Transformer model. With this assumption: In the transformer, masks are used for two purposes: Padding: in the multi-head attention, the padding tokens are explicitly ignored by masking them. This corresponds to parameter key_padding_mask. Self-attention causality: in the multi-head attention blocks used in the decoder, this mask is used to force predictions to only attend to the tokens at previous positions, so that the model can be used autoregressively at inference time. This corresponds to parameter attn_mask. The weight mask, which is the combination of the padding and causal masks, is used to know which positions to fill with $-\infty$ before computing the softmax, which will be zero after it. You don't need to preserve any zeros in the output, as the attention blocks take care of that (see answer (1)). In the original Transformer article, the attention works without bias, but the bias does not change performance. Actually, in fairseq the bias are used by default. Yes, padding_idx is certainly used to zero out padded tokens.
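A small sketch of how the two masks from point (1) are usually built in PyTorch (batch contents and the pad index are assumptions): the key padding mask marks pad positions, and the causal mask is an upper-triangular matrix of -inf added to the attention scores.

import torch

pad_idx = 1
tokens = torch.tensor([[5, 7, 9, 1, 1],      # batch of token ids, 1 = <pad>
                       [4, 6, 1, 1, 1]])

# key_padding_mask: True where the position is padding and must be ignored
key_padding_mask = tokens.eq(pad_idx)         # shape (batch, seq_len)

# attn_mask: causal mask so position i can only attend to positions <= i
seq_len = tokens.size(1)
attn_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)

# Both can be passed to torch.nn.MultiheadAttention as
# attn(q, k, v, key_padding_mask=key_padding_mask, attn_mask=attn_mask)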
H: What's a good F1-score in (not) extremely imbalanced dataset? I have a dataset with around 4.7K records focused on binary classification. The class proportion is 33:67, meaning Label 1 is 1558 (33%) and Label 0 is 3154 (67%) of my dataset. Is my dataset imbalanced? Some people say it is not bad. My objective is to increase the F1-score only. I set class_weight=balanced in my parameters and scoring=f1 during CV as shown below.
svm=SVC(random_state=42)
svm_cv=GridSearchCV(svm,param_grid,cv=5,scoring='f1')
svm_cv.fit(X_train_std,y_train)
Can you let me know through a code sample how I can increase the weight given to the minority class, if that is any different from choosing the balanced parameter? Currently my results are as follows. I understand the AUC for a few algorithms is above 80, but I believe the F1-score is more important for an imbalanced class problem like mine. Can you help? I tried oversampling the minority class but there was not much improvement. Increasing features doesn't take me to 80% F1-score. AI: I would say your data is not imbalanced; 33:67 is not a bad ratio. Still, you can try undersampling the majority class. As another option you can try different algorithms, like random forest. You can also try boosting.
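On the code-sample part of the question: besides class_weight='balanced', scikit-learn also accepts an explicit per-class weight dictionary, which lets you up-weight the minority class by any factor you choose (the factor 3 below is just an illustrative assumption):

from sklearn.svm import SVC

# 'balanced' sets weights to n_samples / (n_classes * class_count) automatically;
# a dict gives manual control, e.g. penalise mistakes on class 1 three times more
svm_manual = SVC(class_weight={0: 1, 1: 3}, random_state=42)
svm_manual.fit(X_train_std, y_train)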
H: Can 2 different OOV words get the same vector in FastText? Since FastText sums up the vectors(order is not considered) of an OOV word's subwords, is it possible for two different OOV words to get the same vector ? If so, then can you give an example? AI: TL;DR Theoretically it is possible, but it is unlikely. 1) Uncommon subwords word1 = 'iiii' word2 = 'jjjj' word1_subwords = ['<ii', 'iii', 'iii', 'ii>'] word2_subwords = ['<jj', 'jjj', 'jjj', 'jj>'] In this example, there are basically 6 subwords: ['<ii', '<jj', 'iii', 'jjj', 'ii>', 'jj>'], but these are not common subwords in general. So, there is a possibility that the embedding for all the subwords is the same (e.g. [0,0,...,0,0]), making their sum all the same. 2) Homographs word1 = 'lie' # meaning: tell something untruthful word2 = 'lie' # meaning: to rest on a horizontal position In this example, there are two homograph words. These are different words but they have the same spelling. Since FastText only take syntax into account, they will have the same subword embedding sum.
H: which is better : F1-score of 'N' in imbalanced data or 'N+3' in balanced data? I have a dataset with 4712 records. Label 1 is 1558 (33%) and Label 0 is 3154 (67%). a) Currently when I run the model and analysis as is (without sampling techniques), I get an F1-score of 71-77. I chose F1-score and AUC score as the metrics as my dataset is imbalanced (at least that's what I felt looking at the class proportion). My AUC also ranges between 80-83 for tree based models. Screenshot of models with imbalanced data is given below. b) When I under-sample the majority class, I get all the metrics like F1-score/accuracy/AUC above 80 but less than 85, screenshot below. Now I am not sure which one I should consider? I know it's all about trade-offs. My objective is to avoid/minimize the misclassifications. Based on your experience in ML projects, what would you guys suggest? Can someone enlighten me with some reasons why to choose one over the other? AI: The second. Why? You said your objective is to minimize misclassifications, which equates to maximizing the F1 score, and you achieve that in imbalanced situations when you down-sample the majority class (one approach to maximize it). Potential pitfall: it could be that you just got rid of some important information in the majority class and you are over-fitting on what is left. In some cases 1500 data samples would suffice, but it could be that they do not carry enough information. I would rather advise upsampling the smaller class, for example using ADASYN, SMOTE, kNN-SMOTE, etc. Only you can answer this question, by knowing some meta-information about your dataset: quality rules, whether you are expecting a huge covariate shift, whether the dropped data is really that different in distribution, etc. Then do the analysis. If there is one thing you should always ask for, it is a diverse and informative dataset, and more data will (almost) always beat a better algorithm.
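A hedged sketch of the upsampling route mentioned above, using the imbalanced-learn package (SMOTE here; ADASYN exposes the same fit_resample interface). Only resample the training split, never the test split:

from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42)

# Synthesise minority-class samples until both classes are the same size
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Train on (X_res, y_res), evaluate on the untouched (X_test, y_test)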
H: How to perform convolution with kernel bigger than image? In this question I've seen an example of convolution by a kernel with a shape bigger than the initial image's one:
import numpy as np
from scipy import signal
x = np.array([(0.51, 0.9, 0.88, 0.84, 0.05), (0.4, 0.62, 0.22, 0.59, 0.1), (0.11, 0.2, 0.74, 0.33, 0.14), (0.47, 0.01, 0.85, 0.7, 0.09), (0.76, 0.19, 0.72, 0.17, 0.57)])
y = np.array([(0, 0, 0.0686, 0), (0, 0.0364, 0, 0), (0, 0.0467, 0, 0), (0, 0, 0, -0.0681)])
gradient = signal.convolve2d(np.rot90(np.rot90(y)), x, 'valid')
So, we get this:
array([[ 0.044606, 0.094061], [ 0.011262, 0.068288]])
I understand that y is flipped by 180 degrees. But how does "valid" convolution work here? How can we get a (2x2) shape from (4x4) convolved by (5x5)? AI: Two things are going on. With mode 'valid', scipy.signal.convolve2d only keeps positions where the smaller array fits entirely inside the larger one (it does not matter which argument you call the image and which the kernel), so the output shape per dimension is the difference of the sizes plus one: (5-4+1, 5-4+1) = (2, 2). If your actual question is how to perform a convolution with a kernel genuinely bigger than the input image, then the answer is padding. TL;DR you increase the size of the original image with boundary pixels so that the kernel can "fit".
H: CrossMapLRN2d in pytorch I had to convert code written in pytorch to keras (with tensorflow backend). But there was this layer called CrossMapLRN2d which had no direct counterpart in Keras. So I wanted to know what this layer does and how to implement it in keras. The exact line of code was nn.CrossMapLRN2d(size=5, alpha=0.0001, beta=0.75, k=1.0) AI: Local (contrast) normalization. A Local Response Normalization (LRN) layer implements the lateral inhibition we were talking about in the previous section. This layer is useful when we are dealing with ReLU neurons. Why is that? Because ReLU neurons have unbounded activations and we need LRN to normalize that. We want to detect high frequency features with a large response. If we normalize around the local neighborhood of the excited neuron, it becomes even more sensitive as compared to its neighbors. There are 2 options in keras. First, it is already implemented - check here - but that's the "old" keras. The newer, Google implementation can be found here.
H: Training a Siamese Neural Network for object similarity assessment I am training a Siamese neural network with pairs of similar and dissimilar objects. The features of the objects are binary data on whether they contain some properties or not (2048 features per object). I then split my dataset into training, validation and test set (60:20:20). Afterwards, I prepared the dataset myself by pairing up at random the objects accordingly, yielding 50% similar and 50% dissimilar pairs, and I augment the data in the training set by generating extra pairs at random (resulting in 100,000 different instances, again a balanced dataset (50:50), vs. 1,000 instances for the validation set). I then proceed to train the Siamese network and end up estimating the cosine distance between the two outputs to get a similarity metric which is compared to my label with the binary cross entropy loss function. The learning rate used is low (lr = 0.0001) and I am using the Adam optimiser. I have tried producing really small batches (batch_size = 25), adding dropout and increasing the number of instances to avoid overfitting, but the model does not seem to generalise well regardless (see picture). I was wondering if anyone could give me any hint on what is going on - and also why it is that such bumps can be appreciated during the learning process. AI: Without going into the architecture, which looks reasonable but can always be updated: the weights are dominantly updated for the negative pairs. Why? In a train set where you have (approximately) 10,000 times more negative samples, you are training on them and letting the network learn almost only negative pairs, but then you test on validation and you get what you would expect: no improvement in accuracy or loss on the validation set. Make sure you are more balanced (downsample train or upsample valid/test). EDIT: given the new info, here are a couple of suggestions; the indication is obviously overfitting. Early stopping: as soon as the validation loss reaches its minimum, STOP training - it's overkill after that. Covariate shift: make sure the patterns are similar in all three sets (I assume they are, but just in case). Reduce complexity: how deep is your architecture? Maybe you are overdoing it. If you took some random Siamese network architecture from online it may be too much (such architectures are often designed for images, for example, where you need the additional complexity).
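A sketch of the early-stopping suggestion in Keras (the variable names below assume the usual model.fit interface; the patience value is an arbitrary choice):

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss",
                           patience=5,                 # stop after 5 epochs without improvement
                           restore_best_weights=True)  # roll back to the best checkpoint

model.fit(pairs_train, labels_train,
          validation_data=(pairs_val, labels_val),
          epochs=100,
          callbacks=[early_stop])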
H: Get the Polynomial Equation with Two Variables in Python TL;DR predict "price", given "length" and "wandRate". I have some time-series data where the dependent variable is a polynomial result of 2 independent data points. Here is a snippet: This is past pricing data of Processed Rice Grains of a certain kind of rice. Based on the variable "wandRate" (1st variable), which is the price for any "length" (2nd variable) over 8.2, prices of rice grains with lower lengths are calculated. These prices are based on a long trial-and-error method of asking various experts for their "opinion" on how a certain grain of a certain length should be priced. There are other variables which can't be objectively measured, but length is the main indicator. I was wondering if it would be possible to create an objective model or find a polynomial equation in two variables to predict "price", given "length" and "wandRate". I was led to thinking in terms of a polynomial when I plotted the data in google sheets and a sixth-degree-polynomial equation gave an intuitively correct looking trendline. NOTE: I do not have a strong math background so simple google searches about "polynomial in 2 variables from data python equation" did not yield any implementable results. I'm looking for some python code to accomplish this. ANY guidance on where to look would be appreciated. AI: So you want to fit a 6th-degree polynomial in two variables in Python. The main thing to note is that it is still linear regression - it's just that the predictors are polynomial terms (most importantly, the model is still linear in its weights, the betas of linear regression). You can transform your features into polynomial terms using sklearn's PolynomialFeatures and then use these features in a linear regression model:
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
from sklearn.model_selection import train_test_split

poly = PolynomialFeatures(degree=6)
poly_variables = poly.fit_transform(variables_length_wand_rate)

poly_var_train, poly_var_test, res_train, res_test = train_test_split(
    poly_variables, results, test_size=0.3, random_state=4)

regression = linear_model.LinearRegression()
model = regression.fit(poly_var_train, res_train)
score = model.score(poly_var_test, res_test)
H: One scaler for all features or one scaler per feature? I have a time series with more than 30 features. For preprocessing with scikit-learn, do you usually use one scaler per feature or one scaler for all features that should be standardized/normalized? AI: A sklearn scaler works per feature/column, and that's what you want. Imagine if it did not: then you would shift each feature's mean and std in a weird way determined by the distribution of the whole dataset rather than of that column.
H: Technique to determine variation in metric due to varying parameters So basically I have a large set of features corresponding to a metric - like many ML problems. What I want to know is: can we correlate the variance of the metric with the variation in each feature? ex: I have features x, y, z that produce an output, say 10. When I vary x, no matter how much I vary it, the output stays relatively close to 10. However, when I vary y the output is heavily influenced. Is there a good technique to be able to assign a value correlating x and/or y to the metric? I'm mostly looking for direction here.. i.e. techniques or relevant papers. In my experience I haven't really come across this problem. I don't have a good solution in my toolbelt. Thanks! AI: If varying the values of a feature barely moves the metric, then shouldn't you remove that column? Your model will not learn from it if the variance it contributes is low. You can also use Q-Q plots when you vary the features, to check how close the two resulting output distributions are.
H: How to choose input variables for ML Let's say I have a huge database with 100K records and 60 columns. Let's say one of the column is "min_p". What I do is apply some logic/rule to determine the output label for this record. Basically I look at previous two records and next two records of this min_p. If the condition is satisfied, I will mark the label as 1 else I will mark it as 0. Now my question, since I have directly derived the label from this called "min_p", should I retain it as one of my predictors in my final dataset? Since I have used that derive the label, I didn't include them in my dataset as a input variable thinking that it is incorrect Can you help me with this? AI: [edited, I misread the question in the first version] The fact that the label is determined from a combination of values from this feature is not a problem in itself: if it makes sense, it's always better to give the best indicators to the learning algorithm. So the only questions are: whether it makes sense for your problem to have the feature provided as input for any new instance: if yes, then there's no reason to remove it. whether it's useful to apply ML to your problem: if the label can be determined directly from a single feature, it's simply not useful to train a model. You mention that the label is based on information from the previous/next two records. Keep in mind that the model needs to predict its target for any individual instance as input, unless you're using a sequential model (for instance with times series).
H: Does Sklean's SGDClassifier automatically standardize the training data when regularization is turned on? Generally speaking--it is best to apply standarizaton (z-scoring the training data) prior to regularization. Does sklearn.linear_model.SGDClassifier automatically standardize the training data or not when the 'penalty' argument is set to a value other than none (i.e. 'l2', 'l2', or 'elasticnet')? AI: No, sklearn generally doesn't apply scaling inside of any of its models, instead relying on the user to do that. This seems like the right way to do it, since you might want to try different scaling techniques depending on your data. From the User Guide: Stochastic Gradient Descent is sensitive to feature scaling, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or standardize it to have mean 0 and variance 1...
H: Decision Tree Classifier to classify values based on values of other columns I have data with multiple labels, for example: my X set is from the second to third column, and I want to classify either the first column or the last column, so I made my Y the last column. The goal is that if I classify Vios it would return me Car or 0 - in other words it can find its way to the first row. Classification use case:
classify("poodle")  # just pretend this is a working function
returns: Pets
How I did it in an attempt to train my model:
from sklearn.feature_extraction.text import TfidfVectorizer
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 72)
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
clf3 = RandomForestClassifier().fit(X_train_tfidf, y_train)
I'm using a guide from somewhere on the net that works a bit the same, but at the end I'm getting returned:
ValueError: Found input variables with inconsistent numbers of samples: [5, 4156]
I knew immediately I was doing it wrong. How do I train a model so that it achieves my goal? Any relevant guides or techniques I should be following instead of this? I don't even know the correct way to use vectors in this case. AI: A couple of things. I can't replicate the problem exactly, but if you follow these steps you are not exposing yourself to unnecessary risk: TfidfVectorizer is CountVectorizer + TfidfTransformer, so by chaining them manually you are exposing yourself to unnecessary complexity and potential errors. Use pipelines - I can't stress this enough - they are a compact way to pack all the sklearn transformers together AND THEN use the fit and predict methods. I would advise you to follow something like that, or find a similar problem here.
H: KMeans clustering for Image Data I am trying to cluster the sample of Imagenet Dataset using K-Means clustering. In this approach, I have used the below 2 approaches to get the optimal number of clusters. Elbow method From the Graph it seems like the best number of clusters is from 6 to 10. Silhouette score Cluster : 2 | Silhouette score : 0.036273542791604996 Cluster : 3 | Silhouette score : -0.00300691369920969 Cluster : 4 | Silhouette score : 0.0025101888459175825 Cluster : 5 | Silhouette score : -0.005924953147768974 Cluster : 6 | Silhouette score : -0.00808520708233118 Cluster : 7 | Silhouette score : -0.006091121584177017 Cluster : 8 | Silhouette score : -0.00549863139167428 Cluster : 9 | Silhouette score : -0.014739749021828175 Cluster : 10 | Silhouette score : -0.021131910383701324 Cluster : 11 | Silhouette score : -0.04057755321264267 Cluster : 12 | Silhouette score : -0.012825582176446915 Cluster : 13 | Silhouette score : -0.012340431101620197 Cluster : 14 | Silhouette score : -0.032936643809080124 Cluster : 15 | Silhouette score : -0.04154697805643082 Cluster : 16 | Silhouette score : -0.04323640838265419 Where as in Silhouette analysis it looks like only for cluster 4 it is showing better value. Rest of the clusters are seems like samples are wrongly assigned to wrong clusters. In such cases, which metric needs to be considered? Updates: I have reduced the dimension of the features using PCA. Below are the updated graphs for Elbow and Silhouette analysis. However I do not see any improvement in the clustering. As per Silhouette the samples are not being assigned to the closest cluster. Elbow Silhouette Updating the question with latest graphs, VGGNet has been replaced by Resnet50 Code snippet: for i in range(lower_range, upper_range): # For Elbow curve kmeans_cluster = KMeans(n_clusters = i) kmeans_cluster_fit = kmeans_cluster.fit(features_list_np) loss = kmeans_cluster_fit.inertia_ error.append(kmeans_cluster_fit.inertia_) # For Silhoutte analysis preds = kmeans_cluster.fit_predict(features_list_np) score = silhouette_score(features_list_np, preds) score_list.append(score) sample_score = silhouette_samples(features_list_np, preds) sample_score_list.append(sample_score) print("Cluster : {} | Loss : {} | Silhoutte Score : {}".format(i, loss, score)) zero_samples = 0 positive_samples = 0 negative_samples = 0 for each_sample in sample_score: if each_sample == 0: zero_samples += 1 if each_sample > 0: positive_samples += 1 if each_sample < 0: negative_samples += 1 print("Cluster : {} | Silhouette sample distribution - Zero : {} | Positive : {} | Negative : {}".format(i, zero_samples, positive_samples, negative_samples)) zero_list.append(zero_samples) pos_list.append(positive_samples) neg_list.append(negative_samples) Also should the importance should be given to Silhouette Score or the Silhouette Samples. Because in some of the cases, Silhouette score is less but the number of samples having positive values are more. Thank you, KK AI: Neither, there is not enough discriminatory information in data (yet) Dont squeeze the data until it tells you the truth. You can change the metric (malahobian distance for example) and the algo but you cant expect it to show miracles. Using elbow method, as you increase the number of clusters it will always become more homogenous. You dont have a "kink" indicating optimal clustering number. 
And with the Silhouette score (be careful: those are averaged silhouette scores - in other words, for k=4 you have 4 per-cluster scores, indicating whether points lie inside a cluster / on a border / should be in another cluster, that are averaged) you get that all of the points, on average, lie on the border of the clusters, without a clear distinction between clusters (that's what a score near 0 means). Advice: find a better quantitative representation of the data - new features, reduce noise, etc.
H: Format of data in SQL for machine learning I am a beginner at Machine Learning and am starting out on a ML project. I have a large chunk of the source material and have started extracting the data from it to be stored in SQL (initial test with SQLite, but that is going to be insufficient for production). The question that I am now facing that I can't find any kind of answer to is to what extend to preprocess the data that I store for best performance? For example, ML methods are usually bad at handling categories and need them to be more like 0/1 values in a lot of columns showing the category rather than the category as a string in a single column. As I have many different such cases for a single row it would mean a lot of extra columns in SQL to achieve this preprocessing. I will also be using different ML methods like regression and classification on the data so exact preprocessing requirementsmight be hard to predict. The data consists of %, times, categories, string labels and more. I will have to do some additional processing after retrieval from database regardless of how much preprocessing I am doing beforehand as some preprocessing is just not feasible (or even possible) to store completely prepared in SQL. % is of course easy, but when to do what for many of the other forms of data still eludes me. Setup is single machine for daily data retrieval (small updates), data extraction and storage, modelling (unknown update interval) and predictions (multiple daily). Since I will be using many different models and aggregate prediction results I am very keen to have high performance without having to go nuts about it. I work in Python but can shift c/Java-like language if significant gains can be shown. Current estimate is for around 10 million data points, but that could easily be 10 times that number when broken down into categories. As I am new at this I think it fair that you ask for clarifications if I have left out anything of relevance in determining what the best format for data in the SQL should be. I realise that performance is not as clear as desired, but I don't know what the bottleneck is going to be. Small investments like some additional RAM is not really a bottleneck compared to the need for an additional machine to run some portion of the process. AI: Disclaimer: I'm not at all expert about deploying big ML systems in production. This answer is only based on my experience with many different ML problems and datasets My humble advice would be not to try to design the format of the data before having a quite precise idea of what kind of ML process is going to be applied. There is no "one size fits all" in ML and there's a real risk that by starting with the format of the data, you will end up with something which turns out to be completely inappropriate for the task. Start with local experiments instead: Use a small subset of the data at first. Design some simple problems of the same kind as the real ones you plan to do eventually. Vary as many aspects as possible of the experiments: preprocessing, learning algorithms, parameters, size of the data, etc. Move progressively to more realistic tasks and amount of data. Evaluate the advantages/disadvantage of the different methods/setups, then select a range of target setups Finally design everything including the data format based on these target setups. Following this logic during the experimental stage you can just export your data in any format convenient for whatever framework you're testing. 
It's only at the end of the experimental stage that you design the production system, e.g. the SQL server.
H: Is there any good practice to cluster 3D data array? So I'm not sure what word fits best to describe this data, probably "dimension" would be wrong since it may be used for flat samples with 3 features; but by 3D data I mean some structure in a form of [samples, timesteps, features]. And there are 2 features in each timestep. It looks like [ [ [1,2], [3,4] ], [ [5,6], [7,8] ] ], like an LSTM input. [1,2] is a timestep and [[1,2],[3,4]] is a sample. So one way is to just flatten out the timesteps and make them into a 1D array. However, is there any better way that would somehow utilize the information conveyed by the "grouping" of features inside a timestep? Also, how do I properly describe this data structure? AI: Given that almost all clustering algorithms assume the data is unordered, reshaping the data into some n*p format is indeed appropriate. If you want to take positions into account, you'll have to encode them as additional features (which can prove to be tricky because of scaling and feature weighting). But don't treat clustering as a black box. You may have some particular goal in mind, and adequately preparing the data is a must for clustering. Consider k-means: it searches for a least-squares approximation. It's your job to prepare the data in a way that least-squares on these features is useful.
H: Why is the decoder not a part of BERT architecture? I can't see how BERT makes predictions without using a decoder unit, which was a part of all models before it including transformers and standard RNNs. How are output predictions made in the BERT architecture without using a decoder? How does it do away with decoders completely? To put the question another way: what decoder can I use, along with BERT, to generate output text? If BERT only encodes, what library/tool can I use to decode from the embeddings? AI: The need for an encoder depends on what your predictions are conditioned on, e.g.: In causal (traditional) language models (LMs), each token is predicted conditioning on the previous tokens. Given that the previous tokens are received by the decoder itself, you don't need an encoder. In Neural Machine Translation (NMT) models, each token of the translation is predicted conditioning on the previous tokens and the source sentence. The previous tokens are received by the decoder, but the source sentence is processed by a dedicated encoder. Note that this is not necessarily this way, as there are some decoder-only NMT architectures, like this one. In masked LMs, like BERT, each masked token prediction is conditioned on the rest of the tokens in the sentence. These are received in the encoder, therefore you don't need an decoder. This, again, is not a strict requirement, as there are other masked LM architectures, like MASS that are encoder-decoder. In order to make predictions, BERT needs some tokens to be masked (i.e. replaced with a special [MASK] token. The output is generated non-autoregressively (every token at the output is computed at the same time, without any self-attention mask), conditioning on the non-masked tokens, which are present in the same input sequence as the masked tokens.
H: Importing .ipnyb file from Kaggle into local Jupyter Total beginner question here, please let me know if it would be more appropriate somewhere else. I just created my first iPython notebook in Kaggle and I downloaded the ipnyb file. Now I have installed Jupyter locally and want to try working on the same notebook that way. It seems like I got Jupyter working because I am able to create a new local notebook and it looks like what I would expect: But when I open the ipnyb file that I downloaded from Kaggle, I just see what looks like raw JSON instead of a live notebook: I also noticed that the icons of these two notebook files look different: Any suggestions about what I might be doing wrong and how I might properly import my Kaggle notebook into Jupyter locally? AI: That's because "ipnyb" is not the proper extension - Jupyter notebooks use ".ipynb" (note the letter order). Rename/re-save the downloaded file so it ends in .ipynb and Jupyter will open it as a notebook instead of showing the raw JSON.
H: Is it possible to decompose a scalar value to a inter-dependent vector neural network? My data contains a scalar feature $r$, and I found this feature is important for training my deep model. My idea is: suppose there is a 3-layer MLP $f(x), x \in \mathbb{R}^{n}$, where $n=1$. It outputs a vector with dimension $m$ where each value is in $[0, 1]$. For my data, it takes $r$ as input and outputs an m-sized vector. So does my decomposition idea make sense? AI: Yes, you can do that by interchanging the positions of the decoder and encoder in an autoencoder. In an autoencoder, you give a long vector as input - the encoder reduces it to a short (compressed) vector - the decoder then takes this compressed vector as input and upsamples it back to the size of the original vector. The autoencoder is trained by taking the Mean Square Error (MSE) of the decoder's output with respect to the input vector. This forces the compressed vector representation to contain the information of the input vector. Now coming to your case: you simply need to pass the single scalar value to a decoder that upsamples it, say your 3-layer fully connected neural network. Let this output be denoted the "latent representation". Now pass this "latent representation" to the encoder, which uses it to output just a single scalar value. Use the MSE objective to enforce that this single scalar output matches the input scalar value. Once the training of this reversed autoencoder is done, the "latent representation" will give you the required vector containing the information about the scalar value you wished to represent as a vector.
H: How do I predict survival curves using xgboost? The xgboost package enables survival modeling using parameter arguments: objective = "survival:cox" and eval_metric = "cox-nloglik". The predict method for the resulting model only outputs risk scores (same as type = "risk" in the survival::coxph function in r). How do I use xgboost to predict entire survival curves? AI: The proportional hazard model assumes hazard rates of the form: $h(t|X) = h_0(t) \cdot risk(X)$ where usually $risk(X) = exp(X\beta)$. The xgboost predict method returns $risk(X)$ only. What we can do is use the survival::basehaz function to find $h_0(t)$. Problem is it's not "calibrated" to the actual baseline hazard rate computed in xgboost. What we can do is find some constant $C$ that minimizes the ibrier score between the sample observed death/censorship times and $h_0(t) \cdot risk(X) \cdot C$. I've implemented this approach in a tiny R package I've written.
H: Are parquet files compressed? Parquet File Format Hadoop. Parquet, an open-source file format for Hadoop. Parquet stores nested data structures in a flat columnar format. Compared to a traditional approach where data is stored in a row-oriented way, parquet is more efficient in terms of storage and performance. What is the advantage of a parquet file? Are parquet files compressed? AI: Yes, it's compressed. Read on for the advantages: Parquet is an open source file format for Hadoop that stores nested data structures in a flat columnar format. Compared to a traditional approach where data is stored in a row-oriented way, parquet is more efficient in terms of storage and performance. Parquet stores binary data in a column-oriented way, where the values of each column are organized so that they are all adjacent, enabling better compression. It is especially good for queries which read particular columns from a "wide" (many-column) table, since only the needed columns are read and IO is minimized. When we are processing Big Data, the cost required to store it is higher (Hadoop stores data redundantly, i.e. 3 copies of each file, to achieve fault tolerance), and along with the storage cost, processing the data comes with CPU, network IO, etc. costs. As the data grows, the cost of processing and storage increases. Parquet is the choice for Big Data as it serves both needs: efficiency and performance in both storage and processing. To conclude, the main advantages of parquet: Organizing by column allows for better compression, as data is more homogeneous; the space savings are very noticeable at the scale of a Hadoop cluster. I/O is reduced as we can efficiently scan only a subset of the columns while reading the data; better compression also reduces the bandwidth required to read the input. As we store data of the same type in each column, we can use encodings better suited to the modern processors' pipeline by making instruction branching more predictable.
H: How to calculate perplexity in PyTorch? I am wondering the calculation of perplexity of a language model which is based on character level LSTM model. I got the code from kaggle and edited a bit for my problem but not the training way. I have added some other stuff to graph and save logs. However, as I am working on a language model, I want to use perplexity measuare to compare different results. In tensorflow, I have done it via this answer and it was easy. I have looked for a way doing it in PyTorch and literally no related result on Google. I need some help, and it is really appreciated. Here is the related code, I believe: criterion = nn.CrossEntropyLoss() # create training and validation data val_idx = int(len(data)*(1-val_frac)) data, val_data = data[:val_idx], data[val_idx:] if(train_on_gpu): net.cuda() counter = 0 n_chars = len(net.chars) for e in range(epochs): # initialize hidden state h = net.init_hidden(batch_size) for x, y in get_batches(data, batch_size, seq_length): counter += 1 # One-hot encode our data and make them Torch tensors x = one_hot_encode(x, n_chars) inputs, targets = torch.from_numpy(x), torch.from_numpy(y) if(train_on_gpu): inputs, targets = inputs.cuda(), targets.cuda() # Creating new variables for the hidden state, otherwise # we'd backprop through the entire training history h = tuple([each.data for each in h]) # zero accumulated gradients net.zero_grad() # get the output from the model output, h = net(inputs, h) # calculate the loss and perform backprop loss = criterion(output, targets.view(batch_size*seq_length)) AI: I was surfing around at PyTorch's website and found a calculation of perplexity. You can examine how they calculated it as ppl as follows: criterion = nn.CrossEntropyLoss() total_loss = 0. ... for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)): ... loss = criterion(output.view(-1, ntokens), targets) loss.backward() total_loss += loss.item() log_interval = 200 if batch % log_interval == 0 and batch > 0: cur_loss = total_loss / log_interval ... print('ppl {:8.2f}'.format(math.exp(cur_loss))) ... As @SpiderRico reminded, I got it from this link
H: What's the best way to train a NER model? I am trying to do a project using NLP. My goal is to process Cyber Threat Intelligence articles like this to extract information such as actor’s name, malwares and tools used… To do that I want to use NER. However, there isn’t training data available on the web. So I was wondering if I should process manually 10-20 articles to make my training data or if I could do something like taking only interesting lines such as “Rancor conducted at least two rounds of attacks intending to install Derusbi or KHRat malware on victim systems” in multiples articles and replacing the group name by another actor. This way I could deduplicate my training data by the number of known actors. But doing that, only the actor name is changing. So, the context is always the same. I am wondering what’s the best way to train my model considering the quantity of training data available? AI: I would start by training some very strong Named Entity classifier on available datasets for NER. One is the Annotated Corpus for Named Entity Recognition available on Kaggle. Additionally, you can find a good list of datasets here. I know they have nothing to do with cybersecurity, but I think it's important to incorporate very different sources in a big, final dataset, in order to make a model that is good at generalizing on texts it has never seen before. Another source of data for NER tasks is the annotated corpora available from nltk library, such as the free part of the Penn Treebank dataset, and Brown corpus. Please beware that different datasets might use different categories for classification (i.e. the set of Named Entities can be different from dataset to dataset). Make sure you make all your data compatible to your classifier before training After that, I suggest you to go with seq2seq models. Every state-of-the-art RNN is some form of seq2seq. Once you trained a classifier, you could try to annotate few articles manually, and check the performance of your model on those. It's time consuming, but I personally like these "qualitative" checks, I think they can tell you a lot.
H: Why is T test reweighting on a word X word co-occurrence matrix so effective? I am going through Stanford NLP class: http://web.stanford.edu/class/cs224u/ A task in the homework is to implement T-test reweighting on a word X word co-occurrence matrix: https://nbviewer.jupyter.org/github/cgpotts/cs224u/blob/2019-spring/hw1_wordsim.ipynb#t-test-reweighting-[2-points] $$\textbf{ttest}(X, i, j) = \frac{ P(X, i, j) - \big(P(X, i, *)P(X, *, j)\big) }{ \sqrt{(P(X, i, *)P(X, *, j))} }$$ I have 2 questions: What is the intuition behind this formula? It looks a little like PMI but I can't understand what it's doing. The T-test explanation out there seems to be unrelated to this task. It works amazingly well (when evaluated by this test): the raw matrix yields a correlation score of 0.014, the PMIed matrix 0.123 and the t-scored matrix 0.408979. This number seems almost too good to be true for such a simple model. Can anyone bring some intuition/experience about why that is? AI: It is very similar to PMI: you work on the whole word-by-word co-occurrence matrix (a matrix representation of the vocabulary), subtract from each joint probability $P(X, i, j)$ the product of its row and column marginals (the co-occurrence you would expect if word $i$ and word $j$ were independent), and then rescale by the square root of that expected value, much like standardization (similar in spirit to sklearn's StandardScaler). Intuition? Think of why tf-idf works as a text quantification technique: you emphasize the informative terms and shrink away the rest. This re-weighting has a similar effect, so in a sense you are getting closer to a tf-idf-like representation.
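As a rough sketch of the formula (my own illustration, not the course's starter code), the reweighting fits in a few lines of NumPy; it assumes a dense count matrix whose row and column sums are all non-zero:

import numpy as np

def ttest_reweight(counts):
    P = counts / counts.sum()                 # joint P(X, i, j)
    row = P.sum(axis=1, keepdims=True)        # P(X, i, *)
    col = P.sum(axis=0, keepdims=True)        # P(X, *, j)
    expected = row * col                      # expected co-occurrence under independence
    return (P - expected) / np.sqrt(expected)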
H: How different should discriminator be from generator in GAN When training a GAN, the generator $G$ strives to fool the discriminator $D$, while $D$ attempts to catch any output generated by $G$ and isolate it from a real data point. They grow together, training in turns each epoch. Assuming $D$ is already an expert classifier (for example, classifying bird and non-bird images). What will happen if I freeze the weights of $D$ and only train $G$ (to generate high-resolution bird images from low-resolution ones for example)? Is there a mathematical problem here? Is $D$ so good that the generator will not be able to learn due to a very high initial error? I have obviously simulated it and failed. AI: The generator can't learn if the discriminator's error is too small, yet the discriminator should always be "ahead" of the generator for learning to progress. There is a paper explaining why the generator's gradient vanishes if the discriminator gets too strong, and what the mathematically optimal balance between the two looks like. TL;DR Don't make the discriminator too strong, but make sure D stays ahead to ensure optimal learning. GANs are notoriously unstable (and expensive to train).
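If you still want to run the frozen-discriminator experiment in PyTorch, a minimal sketch looks like the following; D, G and loader are hypothetical names for your pretrained discriminator, generator and dataloader, and D is assumed to end with a sigmoid. Only G's parameters receive updates, but gradients still flow through D's fixed weights into G:

import torch
import torch.nn as nn

bce = nn.BCELoss()
for p in D.parameters():                       # freeze the pretrained discriminator
    p.requires_grad = False

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
for lr_imgs, _ in loader:                      # hypothetical dataloader of low-res inputs
    fake = G(lr_imgs)                          # generator output
    pred = D(fake)                             # frozen discriminator's verdict
    loss_G = bce(pred, torch.ones_like(pred))  # non-saturating generator loss
    opt_G.zero_grad()
    loss_G.backward()                          # gradients pass through D's fixed weights into G
    opt_G.step()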
H: sklearn.naive_bayes VS categorical variables In a binary classification, how can I use the sklearn.naive_bayes python module to predict the class of inputs with 5 categorical variables (not binary)? AI: One-hot encode the categorical variables and use Bernoulli naive Bayes. One-hot encoding is the usual trick for representing categorical variables.
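A minimal sketch with pandas and scikit-learn (the column names are made up for illustration):

import pandas as pd
from sklearn.naive_bayes import BernoulliNB

# df holds the 5 categorical feature columns plus a binary target 'y' (hypothetical names)
X = pd.get_dummies(df[['cat1', 'cat2', 'cat3', 'cat4', 'cat5']])   # one-hot encoding
clf = BernoulliNB().fit(X, df['y'])
pred = clf.predict(X)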
H: What is the difference between NLP and text mining? As discussed with Sean in this Meta post, I thought it would be nice to have a question which can help people who were confused like me, to know about the differences between text mining and NLP! So, what are the differences between nlp and text-mining? I have included my understanding as an answer. If possible, please explain your answer with a brief example! AI: I agree with Sean's answer. NLP and text mining are usually used for different goals. Also, there is indeed an overlap and both definitions are vague. Other than the difference in goal, there is a difference in methods. Text mining techniques are usually shallow and do not consider the text structure. Usually, text mining will use bag-of-words, n-grams and possibly stemming over that. NLP methods usually involve the text structure. There you can find sentence splitting, part-of-speech tagging and parse tree construction. Also, NLP methods provide several techniques to capture context and meaning from text. A typical text mining method will consider the following sentences to indicate happiness, while typical NLP methods detect that they do not: I am not happy I will be happy when it will rain If it will rain, I'll be happy. She asked whether I am happy Are you happy?
H: Classification using xgboost - predictions I was trying to build a 0-1 classifier using the xgboost R package. My question is: how are predictions made? For example, in random forests, trees "vote" on each option and the final prediction is based on the majority. As regards xgboost, the regression case is simple since the prediction of the whole model is equal to the sum of predictions of the weak learners (boosted trees), but what about classification? Does the xgboost classifier work the same as in random forests (I don't think so, since it can return predicted probabilities, not class membership)? AI: The gradient boosting algorithm creates a set of decision trees. The prediction process used here follows these steps: for each tree, create a temporary "predicted variable" by applying the tree to the new data set. use a formula to aggregate all these trees, depending on the model: bernoulli: 1/(1 + exp(-(intercept + SUM(temporary pred)))) poisson, gamma: exp(intercept + SUM(temporary pred)) adaboost: 1/(1 + exp(-2*(intercept + SUM(temporary pred)))) The temporary "predicted variable" is not meaningful on its own. The more trees you have, the smoother your prediction (as each tree spreads only a finite set of values across your observations). The R implementation is probably optimised, but this is enough to understand the concept. In the h2o implementation of gradient boosting, the output is a 0/1 flag. I think the F1 score is used by default to convert the probability into a flag, but I would have to do some searching/testing to confirm that. In that same implementation, one of the default outputs for a binary outcome is a confusion matrix, which is a great way to assess your model (and opens up a whole new set of questions). The intercept is "the initial predicted value to which trees make adjustments". Basically, just an initial adjustment. In addition: the h2o.gbm documentation
H: Neural networks: which cost function to use? I am using TensorFlow for experiments mainly with neural networks. Although I have done quite some experiments (XOR-Problem, MNIST, some Regression stuff, ...) now, I struggle with choosing the "correct" cost function for specific problems because overall I could be considered a beginner. Before coming to TensorFlow I coded some fully-connected MLPs and some recurrent networks on my own with Python and NumPy but mostly I had problems where a simple squared error and a simple gradient descent was sufficient. However, since TensorFlow offers quite a lot of cost functions itself as well as the ability to build custom cost functions, I would like to know if there is some kind of tutorial maybe specifically for cost functions on neural networks? (I've already done like half of the official TensorFlow tutorials but they're not really explaining why specific cost functions or learners are used for specific problems - at least not for beginners) To give some examples: cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y_output, y_train)) I guess it applies the softmax function on both inputs so that the sum of one vector equals 1. But what exactly is cross entropy with logits? I thought it sums up the values and calculates the cross entropy...so some metric measurement?! Wouldn't this be very much the same if I normalize the output, sum it up and take the squared error? Additionally, why is this used e.g. for MNIST (or even much harder problems)? When I want to classify like 10 or maybe even 1000 classes, doesn't summing up the values completely destroy any information about which class actually was the output? cost = tf.nn.l2_loss(vector) What is this for? I thought l2 loss is pretty much the squared error but TensorFlow's API tells that its input is just one tensor. Doesn't get the idea at all?! Besides I saw this for cross entropy pretty often: cross_entropy = -tf.reduce_sum(y_train * tf.log(y_output)) ...but why is this used? Isn't the loss in cross entropy mathematically this: -1/n * sum(y_train * log(y_output) + (1 - y_train) * log(1 - y_output)) Where is the (1 - y_train) * log(1 - y_output) part in most TensorFlow examples? Isn't it missing? Answers: I know this question is quite open, but I do not expect to get like 10 pages with every single problem/cost function listed in detail. I just need a short summary about when to use which cost function (in general or in TensorFlow, doesn't matter much to me) and some explanation about this topic. And/or some source(s) for beginners ;) AI: This answer is on the general side of cost functions, not related to TensorFlow, and will mostly address the "some explanation about this topic" part of your question. In most examples/tutorials I followed, the cost function used was somewhat arbitrary. The point was more to introduce the reader to a specific method, not to the cost function specifically. It should not stop you from following the tutorials to become familiar with the tools, but my answer should help you choose the cost function for your own problems. If you want answers regarding Cross-Entropy, Logit, L2 norms, or anything specific, I advise you to post multiple, more specific questions. This will increase the probability that someone with specific knowledge will see your question. Choosing the right cost function for achieving the desired result is a critical point of machine learning problems.
The basic approach, if you do not know exactly what you want out of your method, is to use Mean Square Error (Wikipedia) for regression problems and Percentage of error for classification problems. However, if you want good results out of your method, you need to define good, and thus define the adequate cost function. This comes from both domain knowledge (what is your data, what are you trying to achieve), and knowledge of the tools at your disposal. I do not believe I can guide you through the cost functions already implemented in TensorFlow, as I have very little knowledge of the tool, but I can give you an example on how to write and assess different cost functions. To illustrate the various differences between cost functions, let us use the example of the binary classification problem, where we want, for each sample $x_n$, the class $f(x_n) \in \{0,1\}$. Starting with computational properties; how two functions measuring the "same thing" could lead to different results. Take the following, simple cost function; the percentage of error. If you have $N$ samples, $f(x_n)$ is the predicted class and $y_n$ the true class, you want to minimize $\frac{1}{N} \sum_n \left\{ \begin{array}{ll} 1 & \text{ if } f(x_n) \not= y_n\\ 0 & \text{ otherwise}\\ \end{array} \right. = \frac{1}{N} \sum_n y_n[1-f(x_n)] + [1-y_n]f(x_n)$. This cost function has the benefit of being easily interpretable. However, it is not smooth; if you have only two samples, the function "jumps" from 0, to 0.5, to 1. This will lead to inconsistencies if you try to use gradient descent on this function. One way to avoid it is to change the cost function to use probabilities of assignment; $p(y_n = 1 | x_n)$. The function becomes $\frac{1}{N} \sum_n y_n p(y_n = 0 | x_n) + (1 - y_n) p(y_n = 1 | x_n)$. This function is smoother, and will work better with a gradient descent approach. You will get a 'finer' model. However, it has other problems; if you have a sample that is ambiguous, let's say that you do not have enough information to say anything better than $p(y_n = 1 | x_n) = 0.5$. Then, using gradient descent on this cost function will lead to a model which increases this probability as much as possible, and thus, maybe, overfit. Another problem of this function is that if $p(y_n = 1 | x_n) = 1$ while $y_n = 0$, the model is completely certain, yet it is wrong, and this confident mistake is penalized no more than any other error. In order to avoid this issue, you can take the log of the probability, $\log p(y_n | x_n)$. As $\log(0) = -\infty$ and $\log(1) = 0$, the following function does not have the problem described in the previous paragraph: $\frac{1}{N} \sum_n y_n \log p(y_n = 0 | x_n) + (1 - y_n) \log p(y_n = 1 | x_n)$. This should illustrate that, in order to optimize the same thing (the percentage of error), different definitions might yield different results, because some are easier to work with computationally. It is possible for cost functions $A$ and $B$ to measure the same concept, but $A$ might lead your method to better results than $B$. Now let's see how different cost functions can measure different concepts. In the context of information retrieval, as in google search (if we ignore ranking), we want the returned results to have high precision, not return irrelevant information have high recall, return as many relevant results as possible Precision and Recall (Wikipedia) Note that if your algorithm returns everything, it will return every relevant result possible, and thus have high recall, but have very poor precision.
On the other hand, if it returns only one element, the one it is most certain is relevant, it will have high precision but low recall. In order to judge such algorithms, the common cost function is the $F$-score (Wikipedia). The common case is the $F_1$-score, which gives equal weight to precision and recall, but the general case is the $F_\beta$-score, and you can tweak $\beta$ to get Higher recall, if you use $\beta > 1$ Higher precision, if you use $\beta < 1$. In such a scenario, choosing the cost function means choosing which trade-off your algorithm should make. Another example that is often brought up is the case of medical diagnosis: you can choose a cost function that punishes false negatives or false positives more heavily, depending on what is preferable: More healthy people being classified as sick (But then, we might treat healthy people, which is costly and might hurt them if they are actually not sick) More sick people being classified as healthy (But then, they might die without treatment) In conclusion, defining the cost function is defining the goal of your algorithm. The algorithm defines how to get there. Side note: Some cost functions have nice algorithmic ways to reach their goals. For example, a nice way to the minimum of the Hinge loss (Wikipedia) exists, by solving the dual problem in SVM (Wikipedia)
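As a footnote to the computational-properties part above, here is a small NumPy sketch (my own illustration) comparing the three variants on the same predictions: the 0-1 error, the linear probability cost, and the log version in the negated form that is usually minimized in practice:

import numpy as np

y = np.array([1, 0, 1, 1])            # true classes
p1 = np.array([0.9, 0.2, 0.6, 0.51])  # predicted P(y = 1 | x)

zero_one = np.mean((p1 >= 0.5).astype(int) != y)                 # percentage of error, not smooth
linear = np.mean(y * (1 - p1) + (1 - y) * p1)                    # smooth, but a confident mistake costs at most 1
log_loss = -np.mean(y * np.log(p1) + (1 - y) * np.log(1 - p1))   # cross-entropy, punishes confident mistakes hard
print(zero_one, linear, log_loss)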
H: How can I read in a .csv file with special characters in it in pandas? I am trying to read in a .csv file containing some data. I only need to read in specific chunks of rows from the file, such as line 15-line 20, line 45-line 50, and so on. However, the file contains copyright information, such as ©1990-2016 AAR,All rights reserved, in several places. Such lines seem to be producing the error ValueError: No columns to parse from file, because when I just copy lines without such information using pd.read_csv(), it works fine. My goal is to automate the process of downloading these files from the web and reading them into pandas to grab chunks of rows and then do some processing with it, so I can't just manually specify the windows of text lacking such characters. Here is what I tried: pd.read_csv("filename.csv", encoding="utf-8", skiprows=14) and pd.read_csv("filename.csv", encoding="utf-16", skiprows=15), after looking at similar answers in stack exchange, but this didn't work. Can anyone give me some guidance on this? AI: There is a df.drop command that can be used as follows to remove certain rows (in this case, 15 & 16): df.drop(df.index[[15,16]]) If the rows you don't need are regular (e.g. you never need row 15) then this is a quick and dirty solution. If you only want to drop arbitrary rows containing some value, this should do the trick: df = df.drop(df[df.column_name == "©1990-2016 AAR"].index)
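If the copyright lines appear at unpredictable positions and break parsing altogether, one option (a sketch, assuming the offending lines all contain the © character) is to filter the raw text before handing it to pandas:

import io
import pandas as pd

with open("filename.csv", encoding="utf-8") as f:
    clean = "".join(line for line in f if "©" not in line)   # drop the copyright lines
df = pd.read_csv(io.StringIO(clean))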
H: How can I dynamically distinguish between categorical data and numerical data? I know someone who is working on a project that involves ingesting files of data without regard to the columns or data types. The task is to take a file with any number of columns and various data types and output summary statistics on the numerical data. However, he is unsure of how to go about dynamically assigning data types for certain number-based data. For example: CITY Albuquerque Boston Chicago This is obviously not numerical data and will be stored as text. However, ZIP 80221 60653 25525 are not clearly marked as categorical. His software would assign the ZIP code as numerical and output summary statistics for it, which does not make sense for that sort of data. A couple ideas we had were: If a column is all integers, label it as categorical. This clearly wouldn't work, but it was an idea. If a column has fewer than n unique values and is numeric, label it categorical. This might be closer, but there could still be issues with numerical data falling through. Maintain a list of common numeric data that should actually be categorical and compare the column headers to this list for matches. For example, anything with "ZIP" in it would be categorical. My gut tells me that there is no way to accurately assign numeric data as categorical or numerical, but was hoping for a suggestion. Any insight you have is greatly appreciated. AI: I'm not aware of a foolproof way to do this. Here's one idea off the top of my head: Treat values as categorical by default. Check for various attributes of the data that would imply it is actually continuous. Weight these attributes based on how likely they are to correlate with continuous data. Here are some possible examples: Values are integers: +.7 Values are floats: +.8 Values are normally distributed: +.3 Values contain a relatively small number of unique values: +.3 Values aren't all the same number of characters: +.1 Values don't contain leading zeros: +.1 Treat any columns that sum to greater than 1 as being numerical. Adjust the factors and weights based on testing against different data sets to suit your needs. You could even build and train a separate machine learning algorithm just to do this.
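A rough sketch of that scoring idea in Python (my own mapping of the weights above onto pandas checks; the normality test is left out for brevity and the thresholds are guesses you would tune on real files):

import pandas as pd

def looks_numerical(col: pd.Series) -> bool:
    score = 0.0
    nums = pd.to_numeric(col, errors="coerce")
    if nums.notna().all():
        score += 0.8 if (nums % 1 != 0).any() else 0.7   # floats vs. integers
    if col.nunique() < 0.05 * len(col):
        score += 0.3                                     # relatively few unique values
    if col.astype(str).str.len().nunique() > 1:
        score += 0.1                                     # values aren't all the same length
    if not col.astype(str).str.startswith("0").any():
        score += 0.1                                     # no leading zeros (ZIP codes often have them)
    return score > 1.0                                   # treat as numerical if the evidence adds up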
H: Analyzing customer response I'm new to ML. I'm taking over a Classification project which involves analyzing data for customers who returned a product, and I need to determine the return reason (~10 categories). This data was captured at the counter, and could include words like: LGTM (Looks good to me) NFF (No fault found), etc. I have a training set of 1000 records and when using the Google Prediction API I get a "classificationAccuracy" value of "0.82" and 10 labels. Questions: 1. Any recommended API to analyze this type of data? 2. What is a good "classificationAccuracy" value? Thank you AI: http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html You can use the above tutorial to get acquainted with text classification. Afterwards it should be easier to formulate nontrivial questions and move even further.
H: What to do with stale centroids in K-means When I run K-means on my dataset, I notice that some centroids become stale, in that they are no longer the closest centroid to any point after some iteration. Right now I am skipping these stale centroids in my next iteration because I think those centroids no longer represent any useful set of the data, however I wanted to know if there are other reasonable ways to deal with these centroids. AI: k-means finds only a local optimum. Thus a wrong number of clusters, or simply some random state of equilibrium in the attracting forces, could lead to empty clusters. Technically k-means does not provide a procedure for that, but you can enrich the algorithm with no problem. There are two approaches which I have found useful: remove the stale cluster, choose a random instance from your data set and create a new cluster with its centroid at the chosen random point remove the stale cluster, choose the point farthest from any other centroid, and create a new cluster with its centroid at that point Both procedures can lead to indefinite running time, but if the number of such adjustments is finite (and usually it is) then it will converge with no problem. To guard yourself from infinite running time you can set an upper bound on the number of adjustments. The procedure itself is not practical if you have a huge data set and a large number of clusters. The running time can become prohibitive. Another way to decrease the chances of that happening is to use a better initialization procedure, like k-means++. In fact the second suggestion is an idea from k-means++. There are no guarantees, however. Finally a note regarding implementation. If you can't change the code of the algorithm to make those improvements on the fly, the only option that comes to my mind is to start a new clustering procedure where you initialize the centroid positions from the non-stale clusters, and follow the procedures above for the stale ones.
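A sketch of the second strategy inside a NumPy k-means loop (X is the data matrix, labels the current assignments, centroids the current centers; all names are illustrative):

import numpy as np

def fix_stale_centroids(X, labels, centroids):
    # distance of every point to its currently assigned centroid
    dists = np.linalg.norm(X - centroids[labels], axis=1)
    for k in range(len(centroids)):
        if not np.any(labels == k):            # stale / empty cluster
            far = np.argmax(dists)             # point worst served by the current centroids
            centroids[k] = X[far]
            labels[far] = k
            dists[far] = 0.0
    return centroids, labels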
H: Classifying survey response text SVM I have 800 responses to an open-ended survey question. Each response is categorized into 3 categories based on a list of 70 categories. These categories are things like "stronger leadership", "better customer service", "programs", and etc... My question is, can I use this as a training data set in order to develop a model that I can use in the future as we get more survey responses? We would like to be able to tag, label, or classify each survey response into (up to) 3 of the 70 categories. Is this even possible? Or do I have to use a NB with simple words? Can you please guide me to tutorials, examples, etc.? Using R in this exercise. AI: Assigning ~3 of 70 categories means you would be performing multi-label classification. In the end, it doesn't make much difference if you use Naive Bayes or SVM; they are both families of algorithms that translate provided independent variables (your feature space) into hopefully correct dependent variables (target classes). The question is how to construct a good feature space. The state of the art approaches in text mining are (or were) first tokenizing words, stripping punctuation and stop words, stemming or lemmatizing them, creating a bag-of-words model of those words' relative frequencies and perhaps the frequencies of those words' bigrams or trigrams. Then run your classification learners on that. Assume the resulting feature space table might get really wide (lots of words and combinations of words), so you might want to consider some form of dimensionality reduction. Of course, you will have to repeat the same filtering process with exact same parameters for each new survey you want to classify. Here's another good batch of answers on multi-label text classification.
H: What is the best file format to store an uncompressed 2D matrix? For what it's worth my particular case is a symmetric matrix, but this question should be answered more generally. AI: The most compatible format is surely CSV/TSV. It's text and you can usually Gzip it on the fly with the software package you are using. There is no widely standardized format for storing matrix array data. Matlab has its *.mat files, NumPy has *.npz, Stata and SAS have their own, ... Best just use a clear-text file. If the matrix is symmetric, if it is very large or if there will be a lot of them, you could save 50% of the space requirement by storing only the lower (or upper) triangular part of it. If you choose to do so, there is, again, no widely accepted format. Just store the shape first and then the flattened, 1D data.
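For example, with NumPy you could store only the lower triangle plus the shape and rebuild the symmetric matrix on load (a sketch of one possible ad-hoc layout, not a standard format):

import numpy as np

def save_symmetric(path, A):
    i, j = np.tril_indices(A.shape[0])
    np.savetxt(path, A[i, j], header=str(A.shape[0]))   # shape first, then the flattened 1D data

def load_symmetric(path):
    with open(path) as f:
        n = int(f.readline().lstrip("# "))               # header line written by savetxt
    flat = np.loadtxt(path)                              # loadtxt skips the '#' header by default
    A = np.zeros((n, n))
    i, j = np.tril_indices(n)
    A[i, j] = flat
    A[j, i] = flat                                       # mirror to the upper triangle
    return A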
H: Optimizing co-occurrence matrix computation I am computing co-occurence matrix for a fixed windows size in python using scipy's lil_matrix for storing the counts and computing the counts by sliding the context window over each word and then counting in the window. Now the code is taking too much time for relatively small corpus size also (100 MB Wikipedia dump). The code is : def gen_coocur(window_size=5): ''' Generates coocurrence matrix ''' # vocab is precomputed. coocur_matrix = lil_matrix((len(vocab)+1, len(vocab)+1), dtype=np.float64) for page in self.wiki_extract.get_page(): # word_tokenize is tokenizer from nltk doc_tokens = word_tokenize(page.decode('utf-8')) N = len(doc_tokens) for token in self.vocab: for i in xrange(0,window_size): if (token in doc_tokens[0:i] or token in doc_tokens[i:(i+window_size+1)]) and token != doc_tokens[i]: coocur_matrix[self.vocab[doc_tokens[i]],self.vocab[token]] +=1 for i in xrange(window_size, (N-window_size)): if token in doc_tokens[(i-window_size):(i+window_size+1)] and token != doc_tokens[i]: coocur_matrix[self.vocab[doc_tokens[i]],self.vocab[token]] +=1 for i in xrange(N-window_size, N): if (token in doc_tokens[i:N] or token in doc_tokens[i-window_size:N]) and token != doc_tokens[i]: coocur_matrix[self.vocab[doc_tokens[i]],self.vocab[token]] +=1 vocab is a dictionary which maps words -> wordId. How can I optimize this code to run faster? AI: From easiest to hardest: Try running it in pypy or numba Find a faster implementation. Unfortunately I can not recommend one. Parallelize the loop over the documents. Not so hard since your vocabulary is precomputed. (Even if it weren't you could get away with it using the hashing trick.) Combine this with the first bullet. Rewrite the inner loop in Cython. Rewrite the whole thing in a faster language like C++ or Scala.
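Before reaching for those, a sketch of a more direct accumulation in plain Python may already help: it looks only at the tokens inside each window instead of scanning the whole vocabulary for every position, and fills the sparse matrix once at the end (it assumes the same self.vocab dict, tokenizer and scipy/numpy imports as the code above):

from collections import Counter

def gen_coocur_fast(self, window_size=5):
    counts = Counter()
    for page in self.wiki_extract.get_page():
        tokens = word_tokenize(page.decode('utf-8'))
        ids = [self.vocab[t] for t in tokens if t in self.vocab]
        for i, center in enumerate(ids):
            # context = tokens within window_size positions on either side
            for ctx in ids[max(0, i - window_size):i] + ids[i + 1:i + window_size + 1]:
                counts[(center, ctx)] += 1
    coocur = lil_matrix((len(self.vocab) + 1, len(self.vocab) + 1), dtype=np.float64)
    for (r, c), v in counts.items():
        coocur[r, c] = v                     # single pass over the sparse matrix
    return coocur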
H: What is a "residual mapping"? A recent paper by He et al. (Deep Residual Learning for Image Recognition, Microsoft Research, 2015) claims that they use up to 4096 layers (not neurons!). I am trying to understand the paper, but I stumble about the word "residual". Could somebody please give me an explanation / definition what residual means in this case? Examples We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. [...] Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(x) := \mathcal{H}(x)−x$. The original mapping is recast into $\mathcal{F}(x)+x$. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping AI: It's $F(x)$; the difference between the mapping $H(x)$ and its input $x$. It's a common term in mathematics (DE).
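In code, the idea is simply to add the block's input back to its output, so the stacked layers only need to learn the residual $\mathcal{F}(x) = \mathcal{H}(x) - x$. A minimal PyTorch sketch (my own illustration, not the paper's exact block):

import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(                       # this part learns F(x) = H(x) - x
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.f(x) + x)               # the output realizes F(x) + x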
H: Decompose annual time series in R I have a time series. Data points are available for each year from 1966 to 2000. Using R, I want to decompose this time series into trend, seasonal and random components. When I run the decompose command, I get the error "time series has no or less than 2 periods". Since my data is annual I have specified a frequency of 1. What am I doing wrong? Here is the R code that I am using: dat=c(37.2,37,37.4,37.5,37.7,37.7,37.4,37.2,37.3,37.2,36.9,36.7,36.7,36.5, 36.3,35.9, 35.8,35.9,36,35.7,35.6, 35.2, 34.8, 35.3,35.6,35.6, 35.6, 35.9,36,35.7, 35.7, 35.5, 35.6, 36.3, 36.5) whts <- ts(dat, frequency = 1, start=1966, end=2000) is.ts(whts) plot.ts(whts) whtimeseriescomponents <- decompose(whts) AI: Seasonal decomposition doesn't make sense in this situation. You're sampling frequency needs to be greater than 1 for this to work! I know this changes your model, but just for the sake of example: > dat=c(37.2,37,37.4,37.5,37.7,37.7,37.4,37.2,37.3,37.2,36.9,36.7,36.7,36.5, 36.3,35.9, 35.8,35.9,36,35.7,35.6, 35.2, 34.8, 35.3,35.6,35.6, 35.6, 35.9,36,35.7, 35.7, 35.5, 35.6, 36.3, 36.5) > whts <- ts(dat, frequency=2, start=1966, end=2000) > decompose(whts) $x Time Series: Start = c(1966, 1) End = c(2000, 1) Frequency = 2 [1] 37.2 37.0 37.4 37.5 37.7 37.7 37.4 37.2 37.3 37.2 36.9 36.7 36.7 36.5 36.3 35.9 35.8 35.9 36.0 35.7 35.6 [22] 35.2 34.8 35.3 35.6 35.6 35.6 35.9 36.0 35.7 35.7 35.5 35.6 36.3 36.5 37.2 37.0 37.4 37.5 37.7 37.7 37.4 [43] 37.2 37.3 37.2 36.9 36.7 36.7 36.5 36.3 35.9 35.8 35.9 36.0 35.7 35.6 35.2 34.8 35.3 35.6 35.6 35.6 35.9 [64] 36.0 35.7 35.7 35.5 35.6 36.3 $seasonal Time Series: Start = c(1966, 1) End = c(2000, 1) Frequency = 2 [1] -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 [9] -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 [17] -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 [25] -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 [33] -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 [41] -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 [49] -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 [57] -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 0.007141266 [65] -0.007141266 0.007141266 -0.007141266 0.007141266 -0.007141266 $trend Time Series: Start = c(1966, 1) End = c(2000, 1) Frequency = 2 [1] NA 37.150 37.325 37.525 37.650 37.625 37.425 37.275 37.250 37.150 36.925 36.750 36.650 36.500 36.250 [16] 35.975 35.850 35.900 35.900 35.750 35.525 35.200 35.025 35.250 35.525 35.600 35.675 35.850 35.900 35.775 [31] 35.650 35.575 35.750 36.175 36.625 36.975 37.150 37.325 37.525 37.650 37.625 37.425 37.275 37.250 37.150 [46] 36.925 36.750 36.650 36.500 36.250 35.975 35.850 35.900 35.900 35.750 35.525 35.200 35.025 35.250 35.525 [61] 35.600 35.675 35.850 35.900 35.775 35.650 35.575 35.750 NA $random Time Series: Start = c(1966, 1) End = c(2000, 1) Frequency = 2 [1] NA -0.157141266 0.082141266 -0.032141266 0.057141266 0.067858734 -0.017858734 -0.082141266 [9] 0.057141266 0.042858734 -0.017858734 -0.057141266 0.057141266 -0.007141266 0.057141266 -0.082141266 [17] -0.042858734 -0.007141266 0.107141266 -0.057141266 0.082141266 -0.007141266 -0.217858734 0.042858734 [25] 
0.082141266 -0.007141266 -0.067858734 0.042858734 0.107141266 -0.082141266 0.057141266 -0.082141266 [33] -0.142858734 0.117858734 -0.117858734 0.217858734 -0.142858734 0.067858734 -0.017858734 0.042858734 [41] 0.082141266 -0.032141266 -0.067858734 0.042858734 0.057141266 -0.032141266 -0.042858734 0.042858734 [49] 0.007141266 0.042858734 -0.067858734 -0.057141266 0.007141266 0.092858734 -0.042858734 0.067858734 [57] 0.007141266 -0.232141266 0.057141266 0.067858734 0.007141266 -0.082141266 0.057141266 0.092858734 [65] -0.067858734 0.042858734 -0.067858734 -0.157141266 NA $figure [1] -0.007141266 0.007141266 $type [1] "additive" attr(,"class") [1] "decomposed.ts"
H: What model should I use to find a common pattern for a specific user group based on the other dimensions? I have a big .CSV database of 25k users with various attributes of each user's latest activity and events during the past 6 weeks. This is an example of the data: username (B) (C) (D) (E) nicole 524 329 203 787 asteria 197 186 286 120 I want to create a common behavior pattern based on the values of the attributes of each user and run an algorithm to find a common pattern that defines this group's behavior, and to find out if there is any correlation in the dimension values and which dimensions define this list of users. I am fully aware that correlation does not necessarily equal causation. Now I see several challenges in front of me and would greatly appreciate some input from others, or some good resources to find further information. What model fits this problem? What kind of algorithm is best suited to this situation? What tools do you recommend using for the project? Any ideas would be great. AI: The most common approach is to create handmade business rules, based on univariate and multivariate analysis of the variables. Basically, do some frequency counts and see if you can isolate some subset of your data just by looking at one or two variables. Then, when you have your labels, create a linear (or similar) model with this new variable as output; for example, a linear discriminant analysis. The analysis will supply you with new insights on your group. If you want to rely on an algorithm, two solutions: As you don't seem to have a lot of variables, an unsupervised segmentation could do the job. For example, a k-nearest neighbor or a decision tree are basic and good approaches. With a few more variables, what I like to do is a principal component analysis, then an unsupervised classification to define your groups on the result of the PCA. Note that a PCA + handmade rules based on the analysis of your PCA results may be enough. Each time, in the end, run a discriminant analysis and a profile of your groups to assess the quality of your results.
H: What would be the best way to structure and mine this set of data? http://pastebin.com/K0eq8cyZ I went through each season of "It's Always Sunny in Philadelphia" and determined the character groupings (D=Dennis, F=Frank, C=Charlie, M=Mac, B=Sweet Dee) for each episode. I also starred "winners" for some episodes. How best could I organize this data, in what type of database, and what data science tools would extract the most information out of it? I was thinking of making an SQL table like so: (1) (2) (3) (4) (5) Episode# | Dennis | Frank | Charlie | Mac | Sweet Dee 008 | 5 | 3,4 | 2,4 | 2,3 | 1 010 | 5 | 3,4,6| 2,4,6 |2,3,6| 1 ...where all the values are arrays of ints. 6 represents that the character won the episode and each number represents one of the 5 characters. Thoughts? AI: How best could I organize this data, in what type of database? A simple relational database should do, but you could also use a "fancy" graph database if you want. One table for the users, and one for the "interactions". Each interaction would have foreign key columns for the two participants, labeled winner and loser, and the number of the episode the interaction it occurred. Also any ideas on the best way to visually represent this data? A graphical representation for social network analysis suggests itself. Here are some papers and a subreddit for inspiration. In your case, there is a concept of competition with clear winners/losers, so you could make your graph directed. Have the characters be the nodes, and add directed edges from the winning party to the losing party for each interaction. Collapse repeated interactions, etc. This approach would let you quickly identify overall winners and losers, as well as simply who interacts with whom.
H: NLP - Is Gazetteer a cheat? In NLP, there is the concept of Gazetteer which can be quite useful for creating annotations. As far as I understand: A gazetteer consists of a set of lists containing names of entities such as cities, organisations, days of the week, etc. These lists are used to find occurrences of these names in text, e.g. for the task of named entity recognition. So it is essentially a lookup. Isn't this kind of a cheat? If we use a Gazetteer for detecting named entities, then there is not much Natural Language Processing going on. Ideally, I would want to detect named entities using NLP techniques. Otherwise, how is it any better than a regex pattern matcher? AI: Gazetteer or any other option of intentionally fixed size feature seems a very popular approach in academic papers, when you have a problem of finite size, for example NER in a fixed corpora, or POS tagging or anything else. I would not consider it cheating unless the only feature you will be using is Gazetteer matching. However, when you train any kind of NLP model, which does rely on dictionary while training, you may get real world performance way lower than your initial testing would report, unless you can include all objects of interest into the gazetteer (and why then you need that model?) because your trained model will rely on the feature at some point and, in a case when other features will be too weak or not descriptive, new objects of interest would not be recognized. If you do use a Gazetteer in your models, you should make sure, that that feature has a counter feature to let model balance itself, so that simple dictionary match won't be the only feature of positive class (and more importantly, gazetteer should match not only positive examples, but also negative ones). For example, assume you do have a full set of infinite variations of all person names, which makes general person NER irrelevant, but now you try to decide whether the object mentioned in text is capable of singing. You will rely on features of inclusion into your Person gazetteer, which will give you a lot of false positives; then, you will add a verb-centric feature of "Is Subject of verb sing", and that would probably give you false positives from all kind of objects like birds, your tummy when you're hungry and a drunk fellow who thinks he can sing (but let's be honest, he can not) -- but that verb-centric feature will balance with your person gazetteer to assign positive class of 'Singer' to persons and not animals or any other objects. Though, it doesn't solve the case of drunk performer.
H: Autoencoders for feature creation When using an autoencoder to create non-linear, dimensionally reduced features, is it more common to use the output of the network (the prediction of the input features) or to use the weights from the (or one of the, if there are multiple) hidden layers? If the hidden layer is used, do you use the hidden layer activation as features or the weights from the hidden layer to the output? AI: When you want to use Auto-Encoders (AEs) for dimensionality reduction, you usually add a bottleneck layer. This means, for example, you have 1234-dimensional data. You feed this into your AE, and - as it is an AE - you have an output of dimension 1234. However, you might have many layers in that network and one of them has significantly fewer dimensions. Let's say you have the topology 1234:1024:784:1024:1234. You train it like this, but you only use the weights from the 1234:1024:784 part. When you get new input, you just feed it into this network. You can see it as a kind of preprocessing. For the later stages, this is a black box. This is mainly useful when you have a lot of unlabeled data. This setup is often used in Semi-Supervised Learning (SSL).
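A sketch of that 1234:1024:784:1024:1234 topology in Keras, keeping only the encoder half afterwards (layer sizes are taken from the example above; X and X_new are placeholder arrays, the activation and optimizer choices are illustrative, and depending on your version the imports may live under tensorflow.keras instead):

from keras.layers import Dense, Input
from keras.models import Model

inp = Input(shape=(1234,))
h = Dense(1024, activation='relu')(inp)
code = Dense(784, activation='relu')(h)              # bottleneck: the reduced representation
h2 = Dense(1024, activation='relu')(code)
out = Dense(1234, activation='linear')(h2)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X, X, epochs=20, batch_size=128)     # train to reconstruct the input

encoder = Model(inp, code)                           # keep only the 1234:1024:784 part
features = encoder.predict(X_new)                    # 784-dimensional features for the later stages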
H: What's a good machine learning algorithm for low frequency trading? I'm trying to train an algorithm to copy some of the top traders on various forex social trading sites. The problem is that the traders only trade around say 10 times per month so even if I only look at minute resolution numbers that's .02% of the time [ 10/(60*24*30)*100 ]. I've tried using random forest and it gives an error rate of around 2% which is unacceptable and from what I've read most machine learning algorithms have similar errors rates. Does anyone know of a better approach? AI: Random forests, GBM or even the newer and fancier xgboost are not the best candidates for binary classification (predicting ups and down) of stocks predictions or forex trading or at least not as the main algorithm. The reason is that, for this particular problem, they require a huge amount of trees (and tree depth in case of GBM or xgboost) to obtain reasonable accuracy (Breiman suggested using at least 5000 trees and to "not be stingy" and in fact his main ML paper on RF he used 50,000 trees per run). However, some quants use random forests as feature selectors while others use it to generate new features. It all depends on the characteristics of the data. I would suggest you read this question and answers on quant.stackexchange where people discuss what methods are the best and when to use them, among them ISOMAP, Laplacian eigenmaps, ANNs, swarm optimization. Check out the machine-learning tag on the same site, there you might find information related to your particular dataset.
H: Filter Data for clustering I am trying to synthetize clients Data in order to do clustering. My problem is for 1 customer I have severals rows. I would like to synthetize informations to get 1 row per customer. This clustering is about how customers use fidelity program. Here is a picture of my table : By Column (left to right) : 1) CustomerID 2)Date at which their use their points 3) Category number (Ex: 1 is gift card, 2 is a flight etc) 4) How many points they used 5) How many items they purchased with points My question is how could I have 1 customer per row without loosing informations. Maybe Pivot Table? But I don"t know how it work exactly. I am new to statistic btw. Thank you Cédric AI: If you can afford to do the full join once, do it and learn which columns are useful through feature selection. Then you can only SELECT these columns for subsequent iterations, when the database is updated. Here's a survey: Feature Selection for Clustering: A Review
H: do autoencoders work well for non images? I have a classification problem for which a feedforward, fully connected neural net works reasonably well (two classes, true positive and true negative rate close to 80%). I want to get these rates to 90%, and more features is one of the catalysts for improvements I can think of. Do autoencoders to learn additional, interesting features work well for problems that do not involve images? AI: Yes, but no-one can tell if they will work well for your problem, so just try it and see. Don't give up if it does not work at first, because training neural networks requires some practice; there are lots of parameters, and not every configuration will work well. Even the optimization algorithm is a hyperparameter.
H: On coursera what exactly does Andrew Ng say in videos Lectures 60 & 61 of machine learning? Model Selection and Train/Validation/Test Sets - Stanford University | Coursera: At 10:59~11:10 One final note: I should say that in the machine learning as of this practice today, there aren't many people that will do that early thing that I talked about, and said that, you know...​ Is my comprehension correct? Because English subtitles on coursera sometimes are not correct. As I know, here what the Chinese subtitle means is the opposite of what the English one does. So I am not sure whether Andrew Ng said "there aren't" or "there are" Thanks for your reading.​ I would like to ask another one. Diagnosing Bias vs. Variance - Stanford University | Coursera: At 02:34~02:36, what Andrew Ng said is not quite clear, nor is the English subtitle. My comprehension is as follows: If d equals 1,.... to be high training error. It's not complete. Would anyone like to clarify that? Thank you... AI: No, he actually says the opposite: One final note: I should say that in the machine learning as of this practice today, there are many people that will do that early thing that I talked about, and said that, you know...​ Then he says (the "early thing" he talked about): selecting your model as a test set and then using the same test set to report the error ... unfortunately many people do that In this lesson he explains separating the data set into: training set to train the model; cross validation set to find the right parameters; test set to find the final generalization error (of the function with the best parameter values found while using the cross validation set). So Andrew Ng is complaining that many people use the same data set to find the right parameters, and then report the error of that data set as the final generalization error.
H: What is the best Keras model for multi-class classification? I am working on research, where need to classify one of three event WINNER=(win, draw, lose) WINNER LEAGUE HOME AWAY MATCH_HOME MATCH_DRAW MATCH_AWAY MATCH_U2_50 MATCH_O2_50 3 13 550 571 1.86 3.34 4.23 1.66 2.11 3 7 322 334 7.55 4.1 1.4 2.17 1.61 My current model is: def build_model(input_dim, output_classes): model = Sequential() model.add(Dense(input_dim=input_dim, output_dim=12, activation=relu)) model.add(Dropout(0.5)) model.add(Dense(output_dim=output_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adadelta') return model I am not sure that is the correct one for multi-class classification What is the best setup for binary classification? EDIT: #2 - Like that? model.add(Dense(input_dim=input_dim, output_dim=12, activation='sigmoid')) model.add(Dropout(0.5)) model.add(Dense(output_dim=output_classes, activation='softmax')) model.compile(loss='binary_crossentropy', optimizer='adadelta') AI: Your choices of activation='softmax' in the last layer and compile choice of loss='categorical_crossentropy' are good for a model to predict multiple mutually-exclusive classes. Regarding more general choices, there is rarely a "right" way to construct the architecture. Instead that should be something you test with different meta-params (such as layer sizes, number of layers, amount of drop-out), and should be results-driven (including any limits you might have on resource use for training time/memory use etc). Use a cross-validation set to help choose a suitable architecture. Once done, to get a more accurate measure of your model's general performance, you should use a separate test set. Data held out from your training set separate to the CV set should be used for this. A reasonable split might be 60/20/20 train/cv/test, depending on how much data you have, and how much you need to report an accurate final figure. For Question #2, you can either just have two outputs with a softmax final similar to now, or you can have final layer with one output, activation='sigmoid' and loss='binary_crossentropy'. Purely from a gut feel from what might work with this data, I would suggest trying with 'tanh' or 'sigmoid' activations in the hidden layer, instead of 'relu', and I would also suggest increasing the number of hidden neurons (e.g. 100) and reducing the amount of dropout (e.g. 0.2). Caveat: Gut feeling on neural network architecture is not scientific. Try it, and test it.
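Putting those suggestions together, one possible variant looks like the sketch below (the numbers are illustrative and input_dim / output_classes come from the question's build_model signature; keep using a validation set to tune them):

from keras.layers import Dense, Dropout
from keras.models import Sequential

model = Sequential()
model.add(Dense(100, input_dim=input_dim, activation='tanh'))   # more hidden units, tanh instead of relu
model.add(Dropout(0.2))                                         # less aggressive dropout
model.add(Dense(output_classes, activation='softmax'))          # one unit per mutually-exclusive class
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])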
H: Data structure design for supporting arbitrary number of columns in table or database I'm currently working on a sort-of meta-modeler to build a free web service so that people can input data and run several models on that data. The task I'm currently struggling with is: the user needs to enter data column by column, which would consist of a number n of IDs, a number m of attributes and a number k of classes, with the conditions that n, m > 0 and k >= 0. Data is heterogeneous, so indexes can be either numeric or text, and the same goes for attributes and classes. I'm supposing there will be no nulls in the data for simplicity. I'm currently thinking of: 1) Creating a table with more than enough columns (all with null values), so that I can work using only the non-null columns (which will be obtained from user input). However this would limit the size of the datasets people could input. 2) Resorting to a specialized data structure in a programming language, doing all the work there and finally creating a table dynamically to store the result data. 3) Using a database specialized for this kind of data (maybe a document-based DB). 4) Creating a data structure in the RDBMS itself (I'm using PostgreSQL), let's say a variable-size array, so that I can create the table directly from the user input, using only 3 variable arrays (one for indexes, one for attributes and one for classes). However, I keep in mind that attributes and indexes could be of different types, so the array would have to support heterogeneous data types and I don't know if this is possible in an RDBMS or SQL. I've been looking for information but got no results until now. Any guidance to a package, language library, extension or paper, thesis, technical report with relevant information would be appreciated. Also, personal experiences with doing something similar could be useful. AI: I've done something like you're describing using MongoDB--I think you'll best use your time using some sort of NoSQL approach, rather than creating a specialized one-off solution. If you're using Python, I've had excellent experiences using PyMongo to handle reads and writes from within my code. I would strongly caution you against adopting your approach #1. This could break very easily in the future, and there are databases designed to handle your exact problem!
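A small PyMongo sketch of how heterogeneous user-supplied columns can be stored without a fixed schema (the database, collection and field names are made up for illustration):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
col = client["metamodeler"]["datasets"]

# each uploaded row becomes a document; columns can differ per dataset and mix types freely
col.insert_one({"dataset": "user42", "id": "A1", "attrs": {"age": 31, "city": "Lima"}, "classes": ["yes"]})
col.insert_one({"dataset": "user42", "id": "A2", "attrs": {"age": "unknown", "city": 7}, "classes": []})

for doc in col.find({"dataset": "user42"}):
    print(doc["id"], doc["attrs"])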
H: Inferring Relational Hierarchies of Words I am new to natural language processing and I have not heard of a problem similar to mine yet. I was wondering if anyone could refer me to a method for solving my problem, or tell me how this problem is referred to in the academic literature, so that I can look for resources online. Here is the problem : From some text (wikipedia articles, for example), I would like to extract the hierarchy of different concepts that can be found in it. By hierarchy I mean a tree wherein A is a descendant of B if A or one of A's parents (transitive) is defined by B. For instance, normal distribution would be a descendant of probability (since normal distribution is defined using probabilities) and probability would be a descendant (or child) of mathematics. Since it is transitive, normal distribution would also be a child of mathematics. One way I thought about solving this is by looking at the number of times a word A is used alone (called A), the words A and B are used together (called A AND B, 'together' could be, for instance, in the same article or in the same paragraph, or in the same sentence), and the number of times the word B is used alone (called B). Let A be mathematics and B be probability. Then, if the ratios (A AND B)/A and (A AND B)/B are low, then it could imply that there is no direct link between A and B (but a link could exist through transitivity). Conversely, if A is bigger than B, A is a bigger concept than B. If A and B are almost the same then they are probably siblings (children of the same parent). Let's take 3 examples: Mathematics (A) and carrot (B). A AND B is really low compared to A and B, so there is no direct link between them (or only an indirect link by transitivity). Mathematics (A) and probabilities (B). A AND B is quite high compared to B, and A is much bigger than B, so B should be a child of A (probabilities is a child of mathematics). Topology (A) and Probabilities (B). A AND B is relativaly high (the texts that present the different areas of mathematics will likely speak about the 2), A and B are about the same order of magnitude, so A and B should be the children of a same parent. Indeed, Topology and Probabilities are the children of Mathematics. This way of solving the problem is far from perfect, for instance 'the' (A) and 'probability' (B) would probably end up saying probability is a child of the (because A AND B is huge and A is much bigger than B). If anyone knows some papers on this or has any ideas on how I might solve this problem, I would appreciate some direction. Also, does my solution seem viable? How could it be improved? AI: Look up taxonomy/ontology construction/induction. Relevant papers: Automatic Taxonomy Construction from Keywords via Scalable Bayesian Rose Trees Topic Models for Taxonomies OntoLearn Reloaded. A Graph-Based Algorithm for Taxonomy Induction Ontology Population and Enrichment: State of the Art Probabilistic Topic Models for Learning Terminological Ontologies
H: Improve k-means accuracy Our weapons: I am experimenting with k-means and Hadoop, where I am chained to these options for various reasons (e.g. Help me win this war!). The battlefield: I have articles, which belong to c categories, where c is fixed. I am vectorizing the contents of the articles to TF-IDF features. Now I am running a naive k-means algorithm, which takes c centroids to begin with and starts, iteratively, grouping articles (i.e. rows of the TF-IDF matrix, where you can see here how I built it), until convergence occurs. Special notes: Initial centroids: Tried with random from within each category or with the mean of all the articles from each category. Distance function: Euclidean. Question(s): The accuracy is poor, as expected. Can I do any better by making another choice for the initial centroids and/or picking another distance function? print "Hello Data Science site!" :) AI: Great question, @gsamaras! The way you've set up this experiment makes a lot of sense to me, from a design point of view, but I think there are a couple aspects you can still examine. First, it's possible that uninformative features are distracting your classifier, leading to poorer results. In text analytics, we often talk about stop word filtering, which is just the process of removing such text (e.g., the, and, or, etc.). There are standard stop word lists you can easily find online (e.g., this one), but they can sometimes be heavy-handed. The best approach is to build a table relating feature frequency to class, as this will get at domain-specific features that you won't likely find in such look-up tables. There is varying evidence as to the efficacy of stop word removal in the literature, but I think these findings are mostly classifier-specific (for example, support vector machines tend to be less affected by uninformative features than a naive Bayes classifier is; I suspect k-means falls into the latter category). Second, you might consider a different feature modeling approach, rather than tf-idf. Nothing against tf-idf--it works fine for many problems--but I like to start with binary feature modeling, unless I have experimental evidence showing a more complex approach leads to better results. That said, it's possible that k-means could respond strangely to the switch from a floating-point feature space to a binary one. It's certainly an easily-testable hypothesis! Finally, you might look at the expected class distribution in your data set. Are all classes equally likely? If not, you may get better results from either a sampling approach, or using a different distance metric. k-means is known to respond poorly in skewed class situations, so this is something to consider as well! There is probably research available in your specific domain describing how others have handled this situation.
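A quick way to test the first two suggestions locally before porting anything to Hadoop is a scikit-learn sketch like the following (load_articles is a hypothetical helper returning the raw article strings, and c is your fixed number of categories):

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans

texts = load_articles()   # hypothetical loader for the raw article strings
c = 5                     # your fixed number of categories

for vec in (TfidfVectorizer(stop_words='english'),
            CountVectorizer(stop_words='english', binary=True)):   # tf-idf vs. binary features
    X = vec.fit_transform(texts)
    labels = KMeans(n_clusters=c, n_init=10, random_state=0).fit_predict(X)
    # compare `labels` against the known categories to see which representation clusters better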
H: Analysis of Real-Time Bidding I'm totally new to the topic of real-time bidding, in which I know Machine Learning algorithms are used pretty often. Can somebody explain the system to me in plain language, i.e. a language for a non-technical person? What is the bidding? Who bids on what? Where does Machine Learning get involved? What is cookie matching mainly about? AI: A simplified example: suppose two companies: the webpage owner and a book store. And a customer named Jane, interested in reading. The web page owner has some free space on its page, which can be sold for advertisement. The book store wants to place an advertisement on the same web page, in order to increase sales. Both companies meet each other at an auction, where the web space is sold to the highest bidder in real-time. So, the book store is bidding on the right to show an advertisement to Jane, who is visiting the page of the webpage owner. The machine learning is done on the part of the book store, which receives information about the web page visitor. This can be all sorts of information that the web page owner wants to release, and that could be of any use to the bookstore. Based on this information, it is decided whether to bid for the advertisement space or not, and the amount up to which the book store is willing to bid. Without cookie matching, the webpage visitor Jane will probably not be identifiable by the bookstore, so the store must decide on bidding based on parameters like the geographical location of the customer and the browser version (just to name a few). With cookie matching, each visitor/customer gets a unique identifier at the book store. Based on this identifier, the book store has more information about the visitor, such as: what ads have been served before, and how long ago? This visit to the web page can be linked to earlier visits, and this information will likely ease the decision-making process for bidding on the advertisement space. (There is more to it, as there can be two more intermediary companies: one that holds the auction and one that delivers the ad.)
H: Simple ANN visualisation TLDR: Please help me understand the graph representation of the network in the image below. Hi, this is pretty stupid, but I'm just having trouble visualising what I'm actually doing with this neural network. I've read about neural networks and multilayer perceptrons for some time and I'm just getting started with actually using them. I started with a super simple example, just to get warmed up, but now I've confused myself. I artificially generated some data and used nntools in MATLAB to attempt to "predict" the results. I built a neural network with the following parameters: feed forward backprop network. Gradient Descent training algorithm. Gradient Descent learning algorithm. Performance/loss function of mean squared error. two layers: first with three neurons and Tansig activation function. the second with one neuron and linear activation. I end up with something looking like this: However, I don't know what this actually represents, I'm all sorts of confused right now. Could someone please explain/upload an image/draw some ascii to represent the neurons and edges in the above network? It would really help clear my head. Currently I think it's like this: T L o / \ / \ IN > o--o--o--o > OUT \ / \ / o With linear activations in columns L and Tanh activations in columns T. Is that right? Doesn't make sense to me. AI: I believe this is the representation you're after, please excuse the rough sketch but I think it explains the structure appropriately. Single input going to three hidden units, each with a bias and tansig activation. The outputs of the hidden layer are summed (via linear activation) with a bias to produce the output.
H: Machine Learning Steps Which of the below sets of steps is the correct one when creating a predictive model? Option 1: First eliminate the most obviously bad predictors, and preprocess the remaining if needed, then train various models with cross-validation, pick the few best ones, identify the top predictors each one has used, then retrain those models with those predictors only and evaluate accuracy again with cross-validation, then pick the best one and train it on the full training set using its key predictors and then use it to predict the test set. Option 2: First eliminate the most obviously bad predictors, then preprocess the remaining if needed, then use a feature selection technique like recursive feature selection (eg. RFE with rf) with cross-validation for example to identify the ideal number of key predictors and what these predictors are, then train different model types with cross-validation and see which one gives the best accuracy with those top predictors identified earlier. Then train the best one of those models again with those predictors on the full training set and then use it to predict the test set. AI: I found both of your options slightly faulty. So, this is generally (very broadly) how a predictive modelling workflow looks: Data Cleaning: Takes the most time, but every second spent here is worth it. The cleaner your data gets through this step, the less total time you will spend overall. Splitting the data set: The data set would be split into training and testing sets, which would be used for the modelling and prediction purposes respectively. In addition, a further split for a cross-validation set may also be needed. Transformation and Reduction: Involves processes like transformations, mean and median scaling, etc. Feature Selection: This can be done in a lot of ways like threshold selection, subset selection, etc. Designing the predictive model: Design the predictive model on the training data depending on the features you have at hand. Cross-validation. Final prediction and validation.
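As a rough illustration of steps 2-6, here is a minimal scikit-learn sketch. X and y are placeholder names for a generic tabular dataset, and the scaler, RFE and random forest are arbitrary choices, not a prescription.

from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = Pipeline([
    ('scale', StandardScaler()),                       # transformation
    ('select', RFE(RandomForestClassifier(n_estimators=100), n_features_to_select=10)),
    ('model', RandomForestClassifier(n_estimators=300)) # predictive model
])

print(cross_val_score(pipe, X_train, y_train, cv=5).mean())  # cross-validation
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))                            # final validation on held-out data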
H: Image classification in python I have a set of images that are considered good quality images and another set that are considered bad quality images. I have to train a classification model so that any new image can be classified as good/bad. SVM seems to be the best approach to do it. I know how to do it in MATLAB. But, can anyone suggest how to do it in python? What are the libraries? For SVM scikit is there; what about feature extraction of images and PCA? AI: As this question highly overlaps with a similar question I have already answered, I would include that answer here (linked in the comments underneath the question): In images, some frequently used techniques for feature extraction are binarizing and blurring. Binarizing: converts the image array into 1s and 0s. This is done while converting the image to a 2D image. Even gray-scaling can also be used. It gives you a numerical matrix of the image. Grayscale takes much less space when stored on disk. This is how you do it in Python: from PIL import Image %matplotlib inline #Import an image image = Image.open("xyz.jpg") image Example Image: Now, convert into gray-scale: im = image.convert('L') im will return you this image: And the matrix can be seen by running this: import numpy as np im_array = np.array(im) im_array The array would look something like this: array([[213, 213, 213, ..., 176, 176, 176], [213, 213, 213, ..., 176, 176, 176], [213, 213, 213, ..., 175, 175, 175], ..., [173, 173, 173, ..., 204, 204, 204], [173, 173, 173, ..., 205, 205, 204], [173, 173, 173, ..., 205, 205, 205]], dtype=uint8) Now, use a histogram plot and/or a contour plot to have a look at the image features: from pylab import * # create a new figure figure() gray() # show contours with origin upper left corner contour(im_array, origin='image') axis('equal') axis('off') figure() hist(im_array.flatten(), 128) show() This would return you a plot, which looks something like this: Blurring: The blurring algorithm takes a weighted average of the neighbouring pixels to incorporate the surrounding colours into every pixel. It enhances the contours and helps in understanding the features and their importance better. And this is how you do it in Python: from PIL import ImageFilter figure() p = image.convert("L").filter(ImageFilter.GaussianBlur(radius = 2)) p.show() And the blurred image is: So, these are some ways in which you can do feature engineering. And for advanced methods, you have to understand the basics of Computer Vision and neural networks, and also the different types of filters and their significance and the math behind them. The entire analysis is done with the PIL package. I wouldn't claim that it's a one-stop shop for image analytics, but for a starter to novice level, it is pretty much it.
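For the actual classification part of the question (PCA on top of extracted pixel features, then an SVM), a rough scikit-learn sketch might look like the following. The folder layout, the lists good_paths/bad_paths and the image size are assumptions for illustration, not a fixed recipe.

import numpy as np
from PIL import Image
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

def load_features(paths):
    # grayscale, resize to a fixed size, flatten each image into a feature vector
    return np.array([np.array(Image.open(p).convert('L').resize((64, 64))).flatten()
                     for p in paths])

X = load_features(good_paths + bad_paths)        # assumed lists of image file paths
y = np.array([1] * len(good_paths) + [0] * len(bad_paths))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = Pipeline([('pca', PCA(n_components=50)), ('svm', SVC(kernel='rbf', C=10))])
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))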
H: How to make k-means distributed? After setting up a 2-noded Hadoop cluster, understanding Hadoop and Python and based on this naive implementation, I ended up with this code: def kmeans(data, k, c=None): if c is not None: centroids = c else: centroids = [] centroids = randomize_centroids(data, centroids, k) old_centroids = [[] for i in range(k)] iterations = 0 while not (has_converged(centroids, old_centroids, iterations)): iterations += 1 clusters = [[] for i in range(k)] # assign data points to clusters clusters = euclidean_dist(data, centroids, clusters) # recalculate centroids index = 0 for cluster in clusters: old_centroids[index] = centroids[index] centroids[index] = np.mean(cluster, axis=0).tolist() index += 1 print("The total number of data instances is: " + str(len(data))) I have tested it for serial execution and it is OK. How to make it distributed in Hadoop? In other words, what should go to the reducer and what to the mapper? Please note that if possible, I would like to follow the tutorial's style, since it's something I have understood. AI: Unless you are trying to do this as a learning exercise, just use Spark which has ML libraries made for distributed computing. See here
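For reference, a minimal PySpark sketch using the built-in MLlib k-means (the RDD API, as in Spark 1.x) might look like this; the input path and the line format are assumptions.

from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

sc = SparkContext(appName="DistributedKMeans")

# assume one comma-separated point per line, e.g. "1.2,3.4"
data = sc.textFile("hdfs:///path/to/points.csv") \
         .map(lambda line: [float(x) for x in line.split(",")])

model = KMeans.train(data, k=3, maxIterations=20)   # distributed training on the RDD
print(model.clusterCenters)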
H: Is parsing files an application of machine learning? I presently receive files from a device in a semi-csv format. I have written a simple recursive descent parser for getting information out of these files. Every time the device updates its firmware, I have to write a new version of the parser for the changes the update brings. Down the road, we will be taking data from other devices, which means another parser and more updates to firmware. I'm wondering if I could define a basic structure of "this is the data I need" and use a neural network to get the parsed data without having to write a parser for each new file type that comes in. Is this a pipe dream or is it a valid application of machine learning? I'm much more of a software engineer than I am a data scientist, but I'm starting to dip my toes into the machine learning realm. Thanks in advance. AI: I would answer the question at two levels. The first level is "can it be done using machine learning?" I would say that machine learning is essentially about learning. So given that you prepare sufficient examples of sample documents and the output to expect from those documents, you can train a network to learn the structure of documents and extract the relevant information. The more general form of extracting information from documents is a well-researched problem and is more commonly known as Information Retrieval. And it is not limited to just machine learning techniques; you can use Natural Language Processing tools as well. So, in its general form, it is actually being done in practice. Coming to the second level, "should you be doing it using machine learning?". I would agree with what @NeilSlater said. The better and more feasible approach would be to use good programming practices so that you can reuse parts of your parser as your dataset evolves.
H: Which machine learning approach/algorithm do I choose for path validation? I apologize for the lack of terminology, I'm no computer scientist. I have a problem of validating paths in a directed graph with complex nodes. The full description is the following: I have a decent set (about 1K) of directed graphs; Each node contains a complex data structure (it is a hierarchical data structure, not a picture or sound); I have some paths in those graphs known as "correct" paths (based mostly on data in nodes); And I have some paths in those graphs known as "incorrect" paths (with a classification of why they are incorrect). I'd like to predict, given a graph with those complex nodes and a path, whether the path is "correct". Which machine learning algorithm will suit me best? In general, what approach should I use? Edit: Each full graph either has all of its paths processed (correct/incorrect) or is completely blank (no path is processed); Correctness depends on both the position of a node in the graph AND the data in the node; Humans would need heuristics to decide or guess which paths are correct; Most of the paths are "correct"; I hope to convert human heuristics to some kind of "correctness" recognition. AI: This is a good question but it's rather complicated. I can suggest two approaches: Graphical models; specifically Bayesian networks, since your graph is directed. Recurrent neural networks. Here's a talk on a popular recent model: Sequence to Sequence Learning with Neural Networks.
H: Represent time-series data in much compact form I have one month of time series data plotted day-wise (see figure). Notice every day follows a different pattern. Now, I want to show this "diversity in pattern" of each day in a much more compact form in a research paper. What are the different ways/options of representing this in a compact form using R? AI: Simulate some data: library(ggplot2) library(purrr) library(ggthemes) days <- seq(as.Date("2015-08-01"), as.Date("2015-08-31"), by="1 day") hours <- sprintf("%02d", 0:23) map_df(days, function(x) { map_df(hours, function(y) { data.frame(day=x, hour=y, val=sample(2500, 1), stringsAsFactors=FALSE) }) }) -> df Check it: ggplot(df, aes(x=hour, y=val, group=day)) + geom_line() + facet_wrap(~day) + theme_tufte(base_family="Helvetica") + labs(x=NULL, y=NULL) Since you're only trying to convey the scope of the variation, perhaps use a boxplot of the values of hours across days? ggplot(df, aes(x=hour, y=val)) + geom_boxplot(fill="#2b2b2b", alpha=0.25, width=0.75, size=0.25) + scale_x_discrete(expand=c(0,0)) + scale_y_continuous(expand=c(0,0)) + coord_flip() + theme_tufte(base_family="Helvetica") + theme(axis.ticks=element_blank()) + labs(x=NULL, y=NULL) That can be tweaked to fit into most publication graphics slots and the boxplot shows just how varied each day's readings are. You could also use boxplot.stats to get the summary data and plot it on a line chart: library(dplyr) library(tidyr) bps <- function(x) { cnf <- boxplot.stats(x)$conf data.frame(as.list(set_names(cnf, c("lwr", "upr"))), mean=mean(x)) } group_by(df, hour) %>% do(bps(.$val)) %>% ggplot(aes(x=hour, y=mean, ymin=lwr, ymax=upr, group=1)) + geom_ribbon(fill="#2b2b2b", alpha=0.25) + geom_line(size=0.25) + theme_tufte(base_family="Helvetica") + theme(axis.ticks=element_blank()) + labs(x=NULL, y=NULL)
H: Orange 3 Heatmap clustering under the hood I have recently used the heatmap widget in Orange 3. All the documentation says is "Clustering (clusters data by similarity)". Is this using hierarchical or k-means or some other type of clustering? On that note, is there a way to look at the code being run by all the widgets to see what's going on under the hood? It would be nice if, after you finish the workflow, you would get a file with the script run to perform the analysis. AI: It appears the widget uses hierarchical clustering. I guess the metric is Euclidean distance by default and there doesn't seem to be a way to specify another one (except by using the Distances widget and connecting it into the Distance Map widget). I don't think it is possible to export the widget's workflow as pure code, but you can look at what the widget does in the source code (seems pretty low-level, though). What you can do, however, is select subsets of data (which can be saved with the Save Data widget) for further analysis, if that's of any help.
H: Markov switching models What are some reference sources for understanding Markov switching models? AI: Firstly, to understand Markov switching models, a good knowledge of Markov models and the way they work is needed. Just as importantly, an understanding of time series models and how they work is required. I found this tutorial good enough for getting up to speed with the concept. This is another tutorial on a similar application of the switching model, which is the regime switching model. The statsmodels library has nice support for building Markov switching models. Here is one simple and quick Python tutorial which uses the statsmodels library.
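As a quick, hedged illustration of the statsmodels support mentioned above (available in reasonably recent versions as sm.tsa.MarkovRegression / MarkovAutoregression), a two-regime model on an assumed synthetic series y might look like this.

import numpy as np
import statsmodels.api as sm

# assumed univariate series with two volatility regimes
np.random.seed(0)
y = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(2, 3, 200)])

# two regimes, switching mean (constant term) and variance
model = sm.tsa.MarkovRegression(y, k_regimes=2, trend='c', switching_variance=True)
result = model.fit()

print(result.summary())
print(result.smoothed_marginal_probabilities[:5])  # P(regime | data) per observation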
H: Knn distance plot for determining eps of DBSCAN I would like to use the knn distance plot to figure out which eps value I should choose for the DBSCAN algorithm. Based on this page: The idea is to calculate the average of the distances of every point to its k nearest neighbors. The value of k will be specified by the user and corresponds to MinPts. Next, these k-distances are plotted in ascending order. The aim is to determine the “knee”, which corresponds to the optimal eps parameter. Using python with numpy/sklearn, I have the following points, with the following distances for 6-knn: X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]]) nbrs = NearestNeighbors(n_neighbors=len(X)).fit(X) distances, indices = nbrs.kneighbors(X) # Indices [[0 1 2 3 4 5] [1 0 2 3 4 5] [2 1 0 3 4 5] [3 4 5 0 1 2] [4 3 5 0 1 2] [5 4 3 0 1 2]] # Distances [[ 0. 1. 2.23606798 2.82842712 3.60555128 5. ] [ 0. 1. 1.41421356 3.60555128 4.47213595 5.83095189] [ 0. 1.41421356 2.23606798 5. 5.83095189 7.21110255] [ 0. 1. 2.23606798 2.82842712 3.60555128 5. ] [ 0. 1. 1.41421356 3.60555128 4.47213595 5.83095189] [ 0. 1.41421356 2.23606798 5. 5.83095189 7.21110255]] then I computed the average distance: distances.mean() 2.9269575028354495 The problem is I don't understand how exactly I could produce the same kind of plot as them, with the distances on the y-axis and the points sorted by distance on the x-axis, using python. Thanks for your help. AI: Take the last column of that matrix (the distance of each point to its k-th nearest neighbour), sort those values in descending order, and plot them against the point index (index on the x-axis, distance on the y-axis). Hopefully you will see a knee; if the distance measure does not work well for your data, there might be none.
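Putting that into code, a minimal matplotlib sketch of the k-distance plot, continuing from the distances array computed in the question, could be:

import numpy as np
import matplotlib.pyplot as plt

k_distances = distances[:, -1]            # last column: distance to the k-th nearest neighbour
k_distances = np.sort(k_distances)[::-1]  # sort descending, as described above

plt.plot(range(len(k_distances)), k_distances)
plt.xlabel("points sorted by k-distance")
plt.ylabel("k-distance")
plt.show()                                # the y-value at the knee is a candidate for eps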
H: t test or anova I have a pandas data frame of the form: r1 r2 r3 r4 r5 0 1 12 0 4 1 1 2 9 2 32 5 0 0 0 12 14 3 1 23 0 2 43 5 2 9 3 5 1 1 0 0 0 0 1 1 0 0 0 0 And I want to check if any column: r1, r2, r3, r4, r5 significantly differs from any of the others. Should I do a t test or an anova? And how would I set it up for the computation? AI: This is a typical statistics problem. When you have multiple 'classes' that you assume are normally distributed, you first run an ANOVA. Then, if and only if the ANOVA is significant, run post-hoc pairwise t-tests with an appropriate correction (e.g. Bonferroni).
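A hedged sketch of that workflow with scipy, on a pandas DataFrame df with the column names from the question, might be:

from itertools import combinations
from scipy import stats

cols = ['r1', 'r2', 'r3', 'r4', 'r5']
samples = [df[c] for c in cols]

f_stat, p_value = stats.f_oneway(*samples)      # one-way ANOVA across the five columns
print("ANOVA p-value:", p_value)

if p_value < 0.05:
    pairs = list(combinations(cols, 2))
    alpha = 0.05 / len(pairs)                   # Bonferroni-corrected threshold
    for a, b in pairs:
        t, p = stats.ttest_ind(df[a], df[b])
        print(a, b, p, "significant" if p < alpha else "not significant")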
H: Why do cost functions use the square error? I'm just getting started with some machine learning, and until now I have been dealing with linear regression over one variable. I have learnt that there is a hypothesis, which is: $h_\theta(x)=\theta_0+\theta_1x$ To find out good values for the parameters $\theta_0$ and $\theta_1$ we want to minimize the difference between the calculated result and the actual result of our test data. So we take the difference $h_\theta(x^{(i)})-y^{(i)}$ for all $i$ from $1$ to $m$. Hence we calculate the sum over these differences and then calculate the average by multiplying the sum by $\frac{1}{m}$. So far, so good. This would result in: $\frac{1}{m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)$ But this is not what has been suggested. Instead, the course suggests taking the square of the difference, and multiplying by $\frac{1}{2m}$. So the formula is: $\frac{1}{2m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)^2$ Why is that? Why do we use the square function here, and why do we multiply by $\frac{1}{2m}$ instead of $\frac{1}{m}$? AI: Your loss function would not work because it incentivizes setting $\theta_1$ to any finite value and $\theta_0$ to $-\infty$. Let's call $r(x,y)=\frac{1}{m}\sum_{i=1}^m \left(h_\theta\left(x^{(i)}\right)-y^{(i)}\right)$ the residual for $h$. Your goal is to make $r$ as close to zero as possible, not just minimize it. A high negative value is just as bad as a high positive value. EDIT: You can counter this by artificially limiting the parameter space $\mathbf{\Theta}$ (e.g. you want $|\theta_0| < 10$). In this case, the optimal parameters would lie on certain points on the boundary of the parameter space. See https://math.stackexchange.com/q/896388/12467. This is not what you want. Why do we use the square loss The squared error forces $h(x)$ and $y$ to match. It's minimized at $u=v$, if possible, and is always $\ge 0$, because it's the square of the real number $u-v$ (here $u$ stands for $h(x)$ and $v$ for $y$). $|u-v|$ would also work for the above purpose, as would $(u-v)^{2n}$, with $n$ some positive integer. The first of these is actually used (it's called the $\ell_1$ loss; you might also come across the $\ell_2$ loss, which is another name for squared error). So, why is the squared loss better than these? This is a deep question related to the link between Frequentist and Bayesian inference. In short, the squared error relates to Gaussian Noise. If your data does not fit all points exactly, i.e. $h(x)-y$ is not zero for some point no matter what $\theta$ you choose (as will always happen in practice), that might be because of noise. In any complex system there will be many small independent causes for the difference between your model $h$ and reality $y$: measurement error, environmental factors etc. By the Central Limit Theorem (CLT), the total noise would be distributed Normally, i.e. according to the Gaussian distribution. We want to pick the best fit $\theta$ taking this noise distribution into account. Assume $R = h(X)-Y$, the part of $\mathbf{y}$ that your model cannot explain, follows the Gaussian distribution $\mathcal{N}(\mu,\sigma)$. We're using capitals because we're talking about random variables now. The Gaussian distribution has two parameters, mean $\mu = \mathbb{E}[R] = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)$ and variance $\sigma^2 = E[R^2] = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2$. See here to understand these terms better. Consider $\mu$; it is the systematic error of our measurements. Use $h'(x) = h(x) - \mu$ to correct for systematic error, so that $\mu' = \mathbb{E}[R']=0$ (exercise for the reader). Nothing else to do here. $\sigma$ represents the random error, also called noise. Once we've taken care of the systematic noise component as in the previous point, the best predictor is obtained when $\sigma^2 = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2$ is minimized. Put another way, the best predictor is the one with the tightest distribution (smallest variance) around the predicted value. Minimizing the least squared loss is the same thing as minimizing the variance! That explains why the least squared loss works for a wide range of problems. The underlying noise is very often Gaussian, because of the CLT, and minimizing the squared error turns out to be the right thing to do! To simultaneously take both the mean and variance into account, we include a bias term in our classifier (to handle systematic error $\mu$), then minimize the square loss. Followup questions: Least squares loss = Gaussian error. Does every other loss function also correspond to some noise distribution? Yes. For example, the $\ell_1$ loss (minimizing absolute value instead of squared error) corresponds to the Laplace distribution (look at the formula for the PDF in the infobox -- it's just the Gaussian with $|x-\mu|$ instead of $(x-\mu)^2$). A popular loss for probability distributions is the KL-divergence. The Gaussian distribution is very well motivated because of the Central Limit Theorem, which we discussed earlier. When is the Laplace distribution the right noise model? There are some circumstances where it comes about naturally, but it's more commonly used as a regularizer to enforce sparsity: the $\ell_1$ loss is the least convex among all convex losses. As Jan mentions in the comments, the minimizer of squared deviations is the mean and the minimizer of the sum of absolute deviations is the median. Why would we want to find the median of the residuals instead of the mean? Unlike the mean, the median isn't thrown off by one very large outlier. So, the $\ell_1$ loss is used for increased robustness. Sometimes a combination of the two is used. Are there situations where we minimize both the Mean and Variance? Yes. Look up the Bias-Variance Trade-off. Here, we are looking at a set of classifiers $h_\theta \in H$ and asking which among them is best. If we ask which set of classifiers is the best for a problem, minimizing both the bias and variance becomes important. It turns out that there is always a trade-off between them and we use regularization to achieve a compromise. Regarding the $\frac{1}{2}$ term The 1/2 does not matter and actually, neither does the $m$ - they're both constants. The optimal value of $\theta$ would remain the same in both cases. The expression for the gradient becomes prettier with the $\frac{1}{2}$, because the 2 from the square term cancels out. When writing code or algorithms, we're usually concerned more with the gradient, so it helps to keep it concise. You can check progress just by checking the norm of the gradient. The loss function itself is sometimes omitted from code because it is used only for validation of the final answer. The $m$ is useful if you solve this problem with gradient descent. Then your gradient becomes the average of $m$ terms instead of a sum, so its scale does not change when you add more data points.
I've run into this problem before: I test code with a small number of points and it works fine, but when you test it with the entire dataset there is loss of precision and sometimes over/under-flows, i.e. your gradient becomes nan or inf. To avoid that, just normalize w.r.t. number of data points. These aesthetic decisions are used here to maintain consistency with future equations where you'll add regularization terms. If you include the $m$, the regularization parameter $\lambda$ will not depend on the dataset size $m$ and it will be more interpretable across problems.
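As a small numerical illustration of the mean/median point above (pure NumPy, made-up data), minimizing the squared loss over a constant predictor recovers the mean, while minimizing the absolute loss recovers the median and ignores the outlier:

import numpy as np

y = np.array([1.0, 2.0, 2.5, 3.0, 100.0])       # note the outlier
grid = np.linspace(0, 110, 2201)                # candidate constant predictions

sq_loss = [np.sum((y - c) ** 2) for c in grid]
abs_loss = [np.sum(np.abs(y - c)) for c in grid]

print(grid[np.argmin(sq_loss)], np.mean(y))     # squared-loss minimizer ~ mean (21.7)
print(grid[np.argmin(abs_loss)], np.median(y))  # absolute-loss minimizer ~ median (2.5)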
H: Distributed k-means in Spark I want to implement the k-means algorithm in Spark. I am looking for a starting point and I found Berkeley's naive implementation. However, is that distributed? I mean I see no mapreduce operations. Or maybe, when submitted in Spark, the framework actually makes the needed tricks under the hood to distribute the algorithm? I also found that Spark is said to show MapReduce the exit, and I am using Spark 1.6. EDIT: This code produces a runtime error, check here. AI: In that link you posted, you can look at the python full solution here at the end and go through it to see what all is distributed. In short, some parts are distributed, like reading data from the file, but the very important parts like the distance computation are not. Running down, we see: sc = SparkContext("local[6]", "PythonKMeans") This instantiates the context and creates a local cluster which the jobs will be submitted to lines = sc.textFile(..) This is still setting up. No operations have taken place yet. You can verify this by putting timing statements in the code data = lines.map(lambda x: (x.split("#")[0], parseVector(x.split("#")[1]))) The lambda here will be applied to lines, so this operation will split the file in parallel. Note that the actual line also has a cache() at the end (see cache). data is just a reference to the spark object in memory. (I may be wrong here, but I think the operation still doesn't happen yet) count = data.count() This forces the parallel computation to start, and the count to be stored. At the end, the reference data is still valid, and we'll use it for further computations. I'll stop with detailed explanations here, but wherever data is being used there is a possible parallel computation. The python code itself is single threaded, and interfaces with the Spark cluster. An interesting line is: tempDist = sum(np.sum((centroids[x] - y) ** 2) for (x, y) in newCentroids.iteritems()) centroids is an object in python memory, as is newCentroids. So, at this point, all computations are being done in memory (and on the client; typically clients are slim, i.e. have limited capabilities, or the client is an SSH shell, so the computer's resources are shared. You should ideally never do any computation here), so no parallelization is being used. You could optimize this method further by doing this computation in parallel. Ideally you want the python program to never directly handle individual points' $x$ and $y$ values.
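For contrast, here is a hedged sketch of one distributed k-means iteration in which the assignment and aggregation run on the cluster (map / reduceByKey) instead of looping in client memory. data is assumed to be an RDD of NumPy vectors, as in the tutorial code, and only k small (sum, count) pairs ever reach the driver.

import numpy as np

def closest(point, centroids):
    # index of the nearest centroid - executed on the workers
    return int(np.argmin([np.sum((point - c) ** 2) for c in centroids]))

def kmeans_step(data, centroids):
    # assign each point to a centroid and sum points per cluster in parallel
    sums = (data.map(lambda p: (closest(p, centroids), (p, 1)))
                .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                .collectAsMap())
    # compute the new centroid of each cluster from its (sum, count) pair
    return [sums[i][0] / sums[i][1] for i in sorted(sums)]

# new_centroids = kmeans_step(data, centroids)   # repeat until the centroids stop moving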
H: How to interpret a decision tree correctly? I'm trying to work out if I'm correctly interpreting a decision tree found online. The dependent variable of this decision tree is Credit Rating which has two classes, Bad or Good. The root of this tree contains all 2464 observations in this dataset. The most influential attribute to determine how to classify a good or bad credit rating is the Income Level attribute. The majority of the people (454 out of 553) in our sample that had a less than low income also had a bad credit rating. If I was to launch a premium credit card without a limit I should ignore these people. If I were to use this decision tree for predictions to classify new observations, is the largest class in a leaf used as the prediction? E.g. Observation x has medium income, 7 credit cards and is 34 years old. Would the predicted classification for credit rating = "Good"? Another new observation could be Observation Y, which has less than low income, so their credit rating = "Bad". Is this the correct way to interpret a decision tree or have I got this completely wrong? AI: Let me evaluate each of your observations one by one, so that it would be more clear: The dependent variable of this decision tree is Credit Rating which has two classes, Bad or Good. The root of this tree contains all 2464 observations in this dataset. If Good, Bad is what you mean by credit rating, then yes. And you are right with the conclusion that all the 2464 observations are contained in the root of the tree. The most influential attribute to determine how to classify a good or bad credit rating is the Income Level attribute. Debatable. It depends on how you consider something to be influential. Some might argue that the number of cards might be the most influential, and some might agree with your point. So, you are both right and wrong here. The majority of the people (454 out of 553) in our sample that had a less than low income also had a bad credit rating. If I was to launch a premium credit card without a limit I should ignore these people. Yes, but it would also be better if you consider the probability of getting a bad credit from these people. But even that would turn out to be NO for this class, which makes your observation correct again. If I were to use this decision tree for predictions to classify new observations, is the largest class in a leaf used as the prediction? E.g. Observation x has medium income, 7 credit cards and is 34 years old. Would the predicted classification for credit rating = "Good"? Depends on the probability. So, calculate the probability from the leaves and then make a decision based on that. Or, much simpler, use a library like sklearn's DecisionTreeClassifier to do that for you. Another new observation could be Observation Y, which has less than low income, so their credit rating = "Bad". Again, same as the explanation above. Is this the correct way to interpret a decision tree or have I got this completely wrong? Yes, this is a correct way of interpreting decision trees. You might be tempted to sway when it comes to the selection of influential variables, but that is dependent on a lot of factors, including the problem statement, the construction of the tree, the analyst's judgement, etc.
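To illustrate the "let a library compute the probabilities" point, a minimal hypothetical sklearn sketch, assuming the features have already been encoded numerically and X_train / y_train hold that encoded data, would be:

from sklearn.tree import DecisionTreeClassifier

# X_train: encoded income level, number of cards, age; y_train: 0 = Bad, 1 = Good (assumed encoding)
clf = DecisionTreeClassifier(max_depth=3)
clf.fit(X_train, y_train)

new_obs = [[2, 7, 34]]                 # e.g. medium income, 7 cards, 34 years old
print(clf.predict(new_obs))            # majority class of the leaf the observation falls in
print(clf.predict_proba(new_obs))      # class proportions in that leaf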
H: Should I take random elements for mini-batch gradient descent? When implementing mini-batch gradient descent for neural networks, is it important to take random elements in each mini-batch? Or is it enough to shuffle the elements at the beginning of the training once? (I'm also interested in sources which definitely say what they do.) AI: It should be enough to shuffle the elements at the beginning of the training and then to read them sequentially. This really achieves the same objective as taking random elements every time, which is to break any sort of predefined structure that may exist in your original dataset (e.g. all positives in the beginning, sequential images, etc). While it would work to fetch random elements every time, this operation is typically not optimal performance-wise. Datasets are usually large and are not saved in your memory with fast random access, but rather in your slow HDD. This means sequential reads are pretty much the only option you have for good performance. Caffe for example uses LevelDB, which does not support efficient random seeking. See this, which confirms that the dataset is trained with images always in the same order.
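A minimal sketch of the "shuffle once, then read sequentially" scheme (plain NumPy, dataset assumed to fit in memory) is:

import numpy as np

def minibatches(X, y, batch_size, seed=0):
    idx = np.random.RandomState(seed).permutation(len(X))   # shuffle once
    X, y = X[idx], y[idx]
    for start in range(0, len(X), batch_size):               # then read sequentially
        yield X[start:start + batch_size], y[start:start + batch_size]

# for xb, yb in minibatches(X_train, y_train, 32):
#     take_gradient_step(xb, yb)      # placeholder for your parameter update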
H: What is a Dichotomy? I am currently reading: Stephen Jose Hanson: Meiosis Networks, 1990. and I stumbled upon this: It is possible to precisely characterize the search problem in terms of the resources or degrees of freedom in the learning model. If the task the learning system is to perform is classification then the system can be analyzed in terms of its ability to dichotomize stimulus points in feature space. Dichotomization Capability: Network Capacity Using a linear fan-in or hyperplane type neuron we can characterize the degrees of freedom inherent in a network of units with thresholded output. For example, with linear boundaries, consider 4 points, well distributed in a 2-dimensional feature space. There are exactly 14 linearly separable dichotomies that can be formed with the 4 target points. However, there are actually 16 ($2^4$) possible dichotomies of 4 points in 2 dimensions; consequently, the number of possible dichotomies or arbitrary categories that are linearly implementable can be thought of as a capacity of the linear network in $k$ dimensions with $n$ examples. What is a "dichotomy" in this case? (Side question: what is a fan-in type neuron?) AI: In a machine learning context, a dichotomy is simply a split of a set into two mutually exclusive subsets whose union is the original set. The point being made in your quoted text is that for four points, a linear boundary cannot form all possible dichotomies (i.e., it does not shatter the set). For example, if the four points are arranged on the corners of a square, a linear boundary can be used to create all possible dichotomies except it cannot produce a boundary that splits the two points lying along one diagonal from the other two points (and vice versa), as you indicated in your own answer.
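A small hedged experiment, using scikit-learn's logistic regression with a large C as a stand-in linear separator and the 4 points placed on the corners of a square, should recover the 14-out-of-16 count mentioned above:

import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])    # 4 points in 2-D

separable = 0
for labels in product([0, 1], repeat=4):           # all 16 dichotomies
    y = np.array(labels)
    if len(set(labels)) == 1:
        separable += 1                             # trivial dichotomy: everything in one class
        continue
    clf = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)   # near-hard-margin linear classifier
    if (clf.predict(X) == y).all():
        separable += 1

print(separable)  # expected: 14 (the two XOR labelings are not linearly separable)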
H: Doc2Vec - How to label the paragraphs (gensim) I am wondering how to label (tag) sentences / paragraphs / documents with doc2vec in gensim - from a practical standpoint. Do you need to have each sentence / paragraph / document with its own unique label (e.g. "Sent_123")? This seems useful if you want to say "what words or sentences are most similar to a single specific sentence labeled "Sent_123". Can you have the labels be repeated based on content? For example if each sentence / paragraph / document is about a certain product item (and there are multiple sentences / paragraphs / documents for a given product item) can you label the sentences based on the item and then compute the similarity between a word or a sentence and this label (which I guess would be like an average of all those sentences that had to do with the product item)? AI: Both are possible. You can give every document a unique ID (such as a sequential serial number) as a doctag, or a shared string doctag representing something else about it, or both at the same time. The TaggedDocument constructor takes a list of tags. (If you happen to limit yourself to plain ints ascending from 0, the Doc2Vec model will use those as direct indexes into its backing array, and you'll save a lot of memory that would otherwise be devoted to a string -> index lookup, which could be important for large datasets. But you can use string doctags or even a mixture of int and string doctags.) You'll have to experiment with what works best for your needs. For some classification tasks, an approach that's sometimes worked better than I would have expected is skipping per-text IDs entirely, and just training the Doc2Vec model with known-class examples, with the desired classes as the doctags. You then get 'doc vectors' just for the class doctags – not every document – a potentially much smaller model. Later inferring vectors for new texts results in vectors meaningfully close to related class doc vectors.
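A rough gensim sketch of mixing a unique ID with a shared product tag per document might look like this. The corpus is made up, and the doc-vector lookup is shown with the gensim 4.x attribute model.dv (older versions expose it as model.docvecs), so treat this as an assumption-laden sketch rather than the canonical API for your version.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# assumed toy corpus: (tokens, product the text is about)
raw = [(["battery", "lasts", "long"], "phone_x"),
       (["screen", "cracked", "easily"], "phone_x"),
       (["great", "sound", "quality"], "speaker_y")]

docs = [TaggedDocument(words=w, tags=["doc_%d" % i, product])   # unique tag + shared product tag
        for i, (w, product) in enumerate(raw)]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
model.build_vocab(docs)
model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)

print(model.dv.most_similar("phone_x"))          # tags/documents close to the product tag
print(model.dv.similarity("doc_0", "phone_x"))   # a single document vs its product tag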
H: How to make data predictions As a total beginner I am trying to apply some "predictions" on top of a bunch of csv files which contain house transactions for the last 20 years divided per area. What I would like to predict is the trend of the transactions for, let's say, the next year for a specific area. What general steps would you follow to analyse those data and then predict? I read different articles but what I am looking for is a sort of "best general practice" for this kind of problem. AI: Regression will work well if your data set is large, but only for predicting current house prices (say, for example, estimating the value of your house). That's what people generally mean when they talk about predicting house prices from current house sales data. The question of how house prices will behave in the next year is much, much more complicated, and would not depend simply on the data you currently have. You would need to involve other information and a much more complex model, which would need to involve things like the current level of household debt, the inflation rate, the economic outlook, etc. Daunting. Generally, speculative prices follow some stochastic process. They depend on the current value, but diverge more and more the farther you go into the future.
H: Gathering the number of Google results from a large amount of searches. I am trying to build a simple dataset using Google, mainly because it seems like the best option for what I want. I want to measure fame for a large group of scientists. The quick method is to measure the number of Google results when searching their name. I do not care about the results, only the number of them. That method has its flaws, I know, so I am not opposed to an alternative. My scientist data is composed of thousands of entries. Which is causing issues. I tried to programmatically search Google, but less than 1000 searches later they blocked the program. I also looked into their Search API, but that is limited to 100 searches a day unless I pay for more, but since I am a poor college student that isn't an option. I was hoping someone here may be able to offer suggestions on building a dataset with some way of measuring fame. AI: With any search engine you will be limited by the number of requests, and any way of getting around those limits will be a gray area that violates the end user agreement (and, eventually, you will get banned for some time, of course). You should be looking into the Search APIs of known search engines; for example, Bing gives you 5000 searches per month for free which - for proof-of-concept research - might be enough. Also, 5k/month will give you some 20-30k until summer, so your data set will keep growing while you polish your idea. Also, Google's free tier search is limited to 100 requests per day, which gives you a completely legal 3k per month as well. Combined (given that you treat Google and Bing results as equal) you get 8k per month.
H: What is the difference between (objective / error / criterion / cost / loss) function in the context of neural networks? The title says it all: I have seen several terms for functions so far that seem to be the same / similar: error function criterion function cost function objective function loss function I was working on classification problems $$E(W) = \frac{1}{2} \sum_{x \in E}(t_x-o(x))^2$$ where $W$ are the weights, $E$ is the evaluation set, $t_x$ is the desired output (the class) of $x$ and $o(x)$ is the given output. This function seems to be commonly called "error function". But while reading about this topic, I've also seen the terms "criterion function" and "objective function". Do they all mean the same for neural nets? Geoffrey Hinton called cross-entropy for softmax-neurons and $E(W) = \frac{1}{2} \sum_{x \in E}(t_x-o(x))^2$ a cost function. AI: The error function is the function representing the difference between the values computed by your model and the real values. In the optimization field people often speak about two phases: a training phase in which the model is built, and a test phase in which the model tests its behaviour against the real values of the output. In the training phase the error is necessary to improve the model, while in the test phase the error is useful to check if the model works properly. The objective function is the function you want to maximize or minimize. When they call it "cost function" (again, it's the objective function) it's because they want to only minimize it. I see the cost function and the objective function as the same thing seen from slightly different perspectives. The "criterion" is usually the rule for stopping the algorithm you're using. Suppose you want your model to find the minimum of an objective function; in real applications it is often hard to find the exact minimum and the algorithm could continue running for a very long time. In that case you could accept stopping it "near" the optimum, using a particular stopping criterion. I hope I gave you a correct idea of these topics.
H: Soccer Field Segmentation I would like to develop a soccer field segmentation method. For this purpose, I prepared a training image data set and annotated field and non-field pixels. Following is a gr-chromaticity plot of all training samples, colored with respect to their labels. I want to train a classifier for inferring the label of a new sample. The first approach that comes to my mind is using Gaussian mixture models to model both distributions. Would you recommend another method for this purpose? AI: I would not suggest GMM at this point as the distribution of points in the space is not well-shaped enough. Even if you want to use it, it's better to look at your data in PC space (i.e. using PCA). My suggestions would be: 1) Think of your features. What are they? Are you going to use this gr-chromaticity as features? If yes, you should know that kernel methods work better on this, as the features are highly nonlinear. The image shows that you need a feature mapping anyway. 2) It seems you have already thought of kernel methods as you put SVM as a tag; you can use it for classification. It might work better than GMM here. Also think of probabilistic graphical models, as they have been used intensively for image segmentation and your images are structured enough (a football field has its fixed position in the image anyway). 3) If you have a raw labeled dataset, I'd recommend thinking of smarter features for segmentation. In gr-chromaticity you already lose some information about colors, which is the most important thing for you here. I would recommend taking the position of the pixels into account as well. Then a PCA on the new data may reveal some more linearly separated classes.
H: Sort by average votes/ratings I have a data set that's a dictionary of tuples. Each key represents an ID number and each tuple is (yesvotes, totalvotes). Example: {17: (6, 10), 18: (1, 1), 21: (0, 2), 26: (1, 1), 27: (3, 4), 13: (2, 2)} I need to find the max key of the set. I want to assign weights so, for instance, key 17 would be ranked higher than key 18 because even though the ratio is much smaller, it has ten times the total votes. Is there an optimal way to do this? My best guess is simply calculate new ratios by (yesvotes/totalvotes)*(totalvotes+1) but that doesn't seem right... Is there some kind of standardized field of study concerning fair-voting? AI: Yes, this is a well-studied problem: rank aggregation. Here is a solution with code. The problem is that the quantity you are trying to estimate, the "score" of the item, is subject to noise. The fewer votes you have the greater the noise. Therefore you want to consider the variance of your estimates when ranking them.
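One standard way to fold the vote count into the ranking, taking the variance into account as suggested above, is to sort by the lower bound of the Wilson score confidence interval for the yes-proportion. A small sketch using the dictionary from the question:

import math

def wilson_lower_bound(yes, total, z=1.96):
    # lower bound of the 95% Wilson score interval for the yes-proportion
    if total == 0:
        return 0.0
    p = yes / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - spread) / denom

votes = {17: (6, 10), 18: (1, 1), 21: (0, 2), 26: (1, 1), 27: (3, 4), 13: (2, 2)}
ranking = sorted(votes, key=lambda k: wilson_lower_bound(*votes[k]), reverse=True)
print(ranking)   # key 17 now ranks well above key 18, despite the lower raw ratio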
H: Why is there no end user Application, yet? Machine learning has been hyped ever since deep neural networks took off. It seems to me that you have to program in order to do machine learning. But isn't the process of training on data and labeling data the same for every problem? Why isn't there an Excel-like application that enables thousands of non-experts to do machine learning? Disclaimer: I am not a data scientist. AI: Listing 2 examples: IBM Watson Analytics Amazon ML use case Preparing the data for supervised learning requires skills. Not all data comes labeled or in a form that can be used directly to solve the problem at hand. Also, many more platforms/APIs are on the market now, but you usually can't solve a problem with only one algorithm; much more is needed... Hope it helps.
H: PCA and maintaining relationship with target variable I'm rather new to PCA and was hoping to have some confusion cleared up. Let's say, for example, we have a feature matrix that's nx100 and I want to get it down to something a bit smaller, p dimensions, without losing too much variance. After applying PCA and receiving a new feature matrix nxp, I would use x_reduced to predict some target variable y. My question is, after the transformation, the new reduced feature matrix has been rotated by the eigenvectors and is sitting on a new basis. Yet, our y has not changed relative to X_reduced. I'm unsure about how y_original and x_reduced can be used for training since y has not changed with respect to x_reduced. Is there a way to correct for this or am I not thinking about it correctly? AI: The short answer is that y_original and x_reduced are still connected to each other, so it is safe to train your data using y_original and x_reduced. While x_reduced is on a different scale, as you mentioned, via the eigenvectors, it is still representative of the data that was attached to that observation, just in a different format. You lose a lot of interpretability as far as what the actual numbers mean, which is why it may seem confusing, but it's just a transformed representation of x_original that (hopefully) contains enough of the x_original variability to make it useful.
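In code, the pairing stays row-for-row, so a hedged sklearn sketch looks like this; X and y are assumed to be your original feature matrix and target.

from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pca = PCA(n_components=0.95)                   # keep enough components for 95% of the variance
X_train_reduced = pca.fit_transform(X_train)   # row i still corresponds to y_train[i]
X_test_reduced = pca.transform(X_test)         # same rotation, fitted on training data only

clf = LogisticRegression(max_iter=1000).fit(X_train_reduced, y_train)
print(clf.score(X_test_reduced, y_test))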
H: Colouring points based on cluster on matplotlib I have a set of points where I performed a KMeans classification. How do I make a plot where the color of each point is based on the cluster it belongs to? EDIT: for clarification, having the set of points, I want to use the values of the array generated from KMeans.predict() (from sklearn) to choose the color of each point. AI: The sklearn documentation shows you how: colors = np.array([x for x in 'bgrcmykbgrcmykbgrcmykbgrcmyk']) colors = np.hstack([colors] * 20) ... if hasattr(algorithm, 'cluster_centers_'): centers = algorithm.cluster_centers_ center_colors = colors[:len(centers)] plt.scatter(centers[:, 0], centers[:, 1], s=100, c=center_colors)
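More directly, you can pass the labels from KMeans.predict() (or labels_) straight to the c argument of scatter together with a colormap. A minimal hypothetical sketch, assuming X is a 2-D (n_samples, 2) array:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=3, random_state=0).fit(X)   # X assumed to be a (n_samples, 2) array
labels = kmeans.predict(X)                             # same as kmeans.labels_ for the training data

plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=20)   # one color per cluster
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            c='red', marker='x', s=100)                # mark the centroids
plt.show()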
H: ML packages in R: caret v e1071 I've looked and surprisingly have not found too much discussion on the relative strengths of the caret and e1071 packages. From my understanding, these packages perform many of the same ML algorithms. With that in mind, I'm interested in what those practitioners who have experience with both think these packages' relative strengths and weaknesses are. What do you think about readability of code, ease of coding, methods, speed, etc... In particular I'm interested in multi-class SVM AI: I think that caret and e1071 serve different purposes. First let's discuss caret; its closest competitor is the mlr package. Both are meta packages that allow you to optimize models across parameters. Take for example a problem where you are not sure whether you would want to use Lasso or Ridge to create a model. As explained here, caret allows you to choose the optimal lambda based on different types of cross validation. The e1071 package instead is a bag of functions developed by TU Wien. It is probably one of the most popular packages for support vector machines, which are used by caret. But its goal is very different from caret's: it implements learning algorithms and other functions, whereas caret seeks to find the best parameters for learning models.
H: How do linear learning systems classify datapoints that fall on the hyperplane How do linear learning systems, such as the simple "closest to the class average" algorithm or SVMs, classify datapoints that fall on the hyperplane? AI: Linear, binary classifiers can choose either class (but consistently) when the datapoint to be classified lies on the hyperplane. It just depends on how you programmed it. Also, it doesn't really matter. This is very unlikely to happen. In fact, if we had arbitrary precision computing and normally distributed features, there would be a probability of 0 (exactly, not rounded) that this would happen. We have IEEE 754 floats, so the probability is not 0, but still so small that there are much more important factors to worry about.
H: Understanding Reinforcement Learning with Neural Net (Q-learning) I am trying to understand reinforcement learning and markov decision processes (MDP) in the case where a neural net is being used as the function approximator. I'm having difficulty with the relationship between the MDP, where the environment is explored in a probabilistic manner, how this maps back to learning parameters, and how the final solution/policies are found. Am I correct to assume that in the case of Q-learning, the neural network essentially acts as a function approximator for the q-value itself so many steps in the future? How does this map to updating parameters via backpropagation or other methods? Also, once the network has learned how to predict the future reward, how does this fit in with the system in terms of actually making decisions? I am assuming that the final system would not probabilistically make state transitions. Thanks AI: In Q-Learning, on every step you will use observations and rewards to update your Q-value function: $$ Q_{t+1}(s_t,a_t) = Q_t(s_t,a_t) + \alpha [R_{t+1}+ \gamma \underset{a'}{\max} Q_t(s_{t+1},a') - Q_t(s_t, a_t)] $$ You are correct in saying that the neural network is just a function approximation for the q-value function. In general, the approximation part is just a standard supervised learning problem. Your network uses (s,a) as input and the output is the q-value. As q-values are adjusted, you need to train these new samples to the network. Still, you will find some issues as you are using correlated samples, and SGD will suffer. If you are looking at the DQN paper, things are slightly different. In that case, what they are doing is putting samples in a vector (experience replay). To teach the network, they sample tuples from the vector and bootstrap using this information to obtain a new q-value that is taught to the network. When I say teaching, I mean adjusting the network parameters using stochastic gradient descent or your favourite optimisation approach. By not presenting the samples in the order they are collected by the policy, they decorrelate them, and that helps in the training. Lastly, in order to make a decision on state $ s $, you choose the action that provides the highest q-value: $$ a^*(s)= \underset{a}{argmax} \space Q(s,a) $$ If your Q-value function has been learnt completely and the environment is stationary, it is fine to be greedy at this point. However, while learning, you are expected to explore. There are several approaches, with $\varepsilon$-greedy being one of the easiest and most common ways.
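A stripped-down sketch of those two pieces (epsilon-greedy action selection and the bootstrapped target the network is trained towards) might look like this; q_network is an assumed function approximator returning one Q-value per action for a given state.

import numpy as np

def select_action(q_network, state, epsilon, n_actions):
    # explore with probability epsilon, otherwise act greedily on the Q estimates
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(q_network(state)))

def q_target(q_network, reward, next_state, gamma=0.99, done=False):
    # bootstrapped target: r + gamma * max_a' Q(s', a'); just r at terminal states
    if done:
        return reward
    return reward + gamma * np.max(q_network(next_state))

# the network is then trained (e.g. by SGD on a squared error) to move
# Q(state, action) towards q_target(...) for each sampled transition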
H: Ignoring symbols and select only numerical values with pandas In one field I have entries like 'U$ 192,0'. Working with pandas, how do I ignore non-numerical data and get only the numerical part? AI: Use str.strip if the prefix is fixed or str.replace if not: import pandas data = pandas.Series(["U$ 192.0"]) data.str.replace(r'^[^\d]*', '', regex=True).astype(float) This removes all the non-numeric characters to the left of the number and casts to float.
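If the values use a comma as the decimal separator, as in the question ('U$ 192,0'), one hedged variant is to extract the numeric part and swap the separators before casting; the second example value is made up to show the thousands-separator case.

import pandas as pd

data = pd.Series(["U$ 192,0", "U$ 1.305,5"])

numbers = (data.str.extract(r'([\d.,]+)')[0]          # keep only the numeric part
               .str.replace('.', '', regex=False)     # drop thousands separators
               .str.replace(',', '.', regex=False)    # comma -> decimal point
               .astype(float))
print(numbers)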
H: Left Join with b.key being NULL in R I am trying to replicate the below SQL query in R select a.*, b.key from Table1 a LEFT OUTER JOIN Table2 b on a.key = b.key where b.key is null I have read through this post, however I am still struggling to code my specific case. https://stackoverflow.com/questions/1299871/how-to-join-merge-data-frames-inner-outer-left-right I have tried the below but the result does not allow me to filter for b.key IS NULL LoansToInsert_stg1 <- merge(x = Prior_stg2, y = BankLoans_stg2, by = "Account_ID", all.x = TRUE) Any insights? Example: Key1 <- c("A1","A2","A3","A4","A5") Key2 <- c("A1","A2","A3","B4","B5") BV1 <- c(100, 200, 300, 400, 500) BV2 <- c(150, 250, 350, 450, 550) df1 <- as.data.frame(cbind(Key1, BV1)) df2 <- as.data.frame(cbind(Key2, BV2)) Expected Output as a new df: Key1 BV1 Key2 BV2 A1 100 A1 150 A2 200 A2 250 A3 300 A3 350 A4 400 NA NA A5 500 NA NA AI: If I understand correctly: Table1 <- data.frame(key = seq(1,100),a.data = rnorm(100)) Table2 <- data.frame(key = c(seq(1,30),rep(NA,30)), b.data = seq(1,60)) ##Assuming this is what you want library(sqldf) sql.ans <- sqldf("select a.*, b.key from Table1 a LEFT OUTER JOIN Table2 b on a.key = b.key where b.key is null") ## dplyr version library(dplyr) dplyr.ans <- Table1 %>% filter(!key %in% Table2$key) ## Regular R version R.ans <- Table1[which(!Table1$key %in% Table2$key),] EDIT after dummy data and expected output dplyr.ans2 <- left_join(df1,df2, by = c("Key1" = "Key2")) Key1 BV1 BV2 1 A1 100 150 2 A2 200 250 3 A3 300 350 4 A4 400 <NA> 5 A5 500 <NA>
H: Analyze performance Poisson regression model on a time series (count forecasting) I have tried to build a model to forecast the count of a particular variable. The model that was used for the purpose was Poisson. Unfortunately, I don't have enough stats knowledge to analyze the model's performance. If somebody could provide some insight into how the model is performing, as well as some tweaks to improve its performance, it would be greatly helpful. I am also willing to try out other models if they perform better. I am using python with the statsmodels package to build the model. Attaching a graph which shows the fitted and the actual values (green shows the actual values and blue shows the fitted values). Also providing the summary() output of the model Generalized Linear Model Regression Results ============================================================================== Dep. Variable: Work_Item_Type No. Observations: 581 Model: GLM Df Residuals: 574 Model Family: Poisson Df Model: 6 Link Function: log Scale: 1.0 Method: IRLS Log-Likelihood: -16752. Date: Mon, 22 Feb 2016 Deviance: 31268. Time: 21:59:12 Pearson chi2: 1.05e+05 No. Iterations: 9 =============================================================================== coef std err z P>|z| [95.0% Conf. Int.] ------------------------------------------------------------------------------- Intercept 2.8492 0.051 55.426 0.000 2.748 2.950 Weekday -0.2066 0.032 -6.446 0.000 -0.269 -0.144 day_of_week -0.0926 0.007 -13.367 0.000 -0.106 -0.079 wom 0.1122 0.007 16.996 0.000 0.099 0.125 week -0.0411 0.001 -53.597 0.000 -0.043 -0.040 TimeDelta 0.0001 5.1e-05 2.933 0.003 4.96e-05 0.000 month_of_yr 0.2192 0.004 60.981 0.000 0.212 0.226 =============================================================================== Also attaching a sample of the dataset used clear_date Count_Work_Item_Type 7/7/2014 1 7/10/2014 1 7/11/2014 5 7/17/2014 2 7/22/2014 1 7/24/2014 1 7/29/2014 3 7/30/2014 4 8/13/2014 1 Since I had only the date and the variable to be forecast, I created a bunch of other variables: Weekday (binomial), Day of week, Week of Month, Week, Time Delta (starts from 0, incremented by one until the end), Month of Year. Also, I haven't done any kind of transformation on the variables. Please do comment if you need additional information. Thanks AI: I'm not sure what you mean by "performance", but if what you mean is fit the answer is clear. You need to be using the log-likelihood to differentiate between different models. Basically, when you are fitting the model you are trying to maximize the log-likelihood. Thus the log-likelihood is giving you some sense of how well the parameters of your model are doing at fitting the data. In your case, you want to get the log-likelihood as close to zero as possible. Now this is kind of terrible advice, because if you were clever enough to come up with a feature for every observation, you could get a perfect fit. That's bad because your model would be completely useless. There are functions that take your log-likelihood as an input and transform it to penalize you for adding more variables, etc. (such as AIC or BIC). We won't worry about those right now. Just keep in mind not to mindlessly chase a higher log-likelihood. Once you have a model that you can live with, you should run some sort of cross-validation and/or use a hold-out set. Then you can use any number of metrics to validate the predictive performance of your model. I think that is the more important of the two issues. You could calculate the mean square error on your hold-out set.
$$MSE=\frac{1}{n}\sum_{i=1}^n(\hat y_i - y_i)^2$$ This would give you a really basic metric to assess how well your model predicts the output.
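A hedged sketch of that hold-out check with statsmodels, reusing the column names from the summary above and splitting chronologically since this is a time series (df is assumed to be the dataframe containing the engineered columns), could be:

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# chronological split: first 80% to fit, last 20% to validate
cutoff = int(len(df) * 0.8)
train, test = df.iloc[:cutoff], df.iloc[cutoff:]

formula = ("Work_Item_Type ~ Weekday + day_of_week + wom + week "
           "+ TimeDelta + month_of_yr")
model = smf.glm(formula, data=train, family=sm.families.Poisson()).fit()

pred = model.predict(test)
mse = np.mean((pred - test['Work_Item_Type']) ** 2)
print(model.llf, mse)   # log-likelihood of the training fit, MSE on the hold-out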
H: How do we know Kernels are successful in making data linearly Separable? When we have linearly inseparable datasets and we are using machine learning algorithms such as SVMs, we use kernels to implicitly map datapoints into a feature space that makes them linearly separable. But how do we know if a kernel has indeed, implicitly, been successful in making the datapoints linearly separable in the new feature space? What is the guarantee? AI: You cannot guarantee this. Some data is not separable by any kernel because of duplicates. By trying too hard, you will cause overfitting. Essentially, you force the implicit mapping to be so complex it contains a copy of your training data (which is exactly what happens if you choose a too small bandwidth with RBF). If you want a good generalization performance, you will have to tolerate some errors, and use e.g. soft-margin and such techniques. Perfect separation is not something to aim for. Such a guarantee is just a guarantee of being able to overfit! Use cross-validation to reduce the risk of overfitting and find the right balance between being optimal on training data and actual performance.
H: Stanford NER - Increase probability for a certain class I'm new to machine learning so I apologize if this question is silly. I'm using Stanford NER's english 4class classifier with good results. However, since my dataset is mostly focused on organizations, I think the results could be improved if I could boost the probability for an entity to be an organization to the detriment of other classes. (Ex: I would prefer "Carl Zeis" to be identified as an organization rather than a person). Is my supposition correct? If so, can it be achieved in an easier way than retraining the model? AI: Your data set will influence the labeling results. If it is focused on organizations, the NER should favor them simply by virtue of the data it's fed. So you might not need to do anything. But if you do observe undesirable behavior in the resulting NER, you can adjust the weights. It's the same with any machine learning algorithm. A humorous demonstration was provided by Google's Deep Dream: it kept seeing dogs everywhere. Why? Because the data set they used for training had an abundance of dogs. (And Carl Zeis should be labeled as a person. The company is Carl Zeiss.)