H: Problem in constructing a co-occurrence matrix I have a basic question about constructing a co-occurrence matrix from text. The co-occurrence matrix X is defined as: Xij = the number of times word i occurs in the context of word j within a window of size w. Is this context limited to a sentence, i.e. do we count context words of a target word within its sentence only, or across the whole document? For example, consider the corpus: human interface computer survey user computer system response time eps user interface system system human system eps user response time trees graph trees graph minors trees graph minors survey I like graph and stuff I like trees and stuff Sometimes I build a graph Sometimes I build trees Now, for the word graph and a context window of 10, do we look only within the fourth-to-last line, or do we also search for words across lines, treating the whole document as one line? AI: Whether to window only over sentences or over the whole corpus depends on your problem and on the structure of your corpus. In this corpus the order of the sentences does not matter: if they were switched around it would still be the same corpus. This means that you should stick to the individual lines and not treat them as one long line. If your corpus were a book, for example, letting the window spill over sentence boundaries makes a lot more sense. If you are unsure, try both; it is just a hyperparameter of your model.
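As a rough illustration of the per-sentence option discussed above, here is a minimal sketch (the toy corpus, tokenization and window size are placeholders, not taken from the question):

from collections import defaultdict

# Toy corpus: one sentence per string (assumption: we window within sentences only).
corpus = [
    "human interface computer",
    "trees graph minors survey",
    "I like graph and stuff",
]
window = 10  # symmetric window size

cooc = defaultdict(int)  # (word_i, word_j) -> count
for sentence in corpus:
    tokens = sentence.split()
    for i, w_i in enumerate(tokens):
        # Look at neighbours of w_i within the window, never past the sentence boundary.
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[(w_i, tokens[j])] += 1

print(cooc[("trees", "graph")])

Windowing over the whole document would simply mean concatenating all token lists before the loop.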
H: How are Hyperplane Heatmaps created and how should they be interpreted? For nonlinear data, when we are using Support Vector Machines, we can use kernels such as Gaussian RBF, Polynomial, etc. to achieve linearity in a different (potentially unknown to us) feature space, and the algorithm learns a maximal separating hyperplane in that feature space. My question is: how do we create heatmaps such as the one seen in the image below to show this maximal separating hyperplane in our original space, and how should it be interpreted? AI: I think I can answer that, since I implemented such a thing in my own library, even if I don't really know how it's implemented in other libraries. Although I am confident that if there are other ways, they don't differ too much. It took me a few weeks to understand how such a graph can be drawn. Let's start with a general function $f:\mathbb{R} \times \mathbb{R} \to \mathbb{R}$. What you want is to draw points with a colour which signifies the value of the function. One way would be to simplify the whole problem and draw one point for each pixel. This would work, but it only draws shaded surfaces, and it is impossible to draw lines with various formats (dotted lines with specific colours and line widths). The real solution which I found makes two simplifications. The first one is that instead of colouring with a gradient you colour bands. Suppose that your function $f$ takes values in $[-1, 1]$. You can split your co-domain into many subintervals like $[-1, -0.9]$, $[-0.9, -0.8]$ and so on. Then, for each interval, you paint a polygon filled with the appropriate colour. So your original problem is simplified to drawing multiple instances of a simpler problem. Note that when your intervals are small enough it will look like a gradient, and even a trained eye will not notice. The second simplification is to split the space which needs to be drawn into a grid of small rectangles. So instead of drawing a polygon over the whole surface, you fill each small rectangle with an appropriate, much simpler polygon. If it's not obvious, the problem is now much, much simpler. Take, for example, one of the rectangles. This rectangle has four corners, and you can take an additional point in the center of that rectangle (you might need that point in certain situations). The question is how to fill the proper region with the appropriate colour. You evaluate the function at all four corners and at the center. There are some specific cases: the function evaluated at all corners is smaller than the start of the interval => you do nothing; the function evaluated at all corners is greater than the end of the interval => you do nothing; the function evaluated at all corners is within the interval => fill the whole rectangle with the appropriate colour. You can stop here if you want, but your figures would look non-smooth. You can go further: top-left, bottom-left and bottom-right are in the interval, top-right is bigger => there are two points, one on the top side and one on the right side, where the function reaches the maximum value of the interval => those two points together with the top-right corner form a triangle which can be filled; and there are many other cases which require only common judgement to decide which polygons should be formed and filled. Using this algorithm you can fill polygons or draw lines. In the specific case of an SVM, you need to know that the line which corresponds to $f(x,y)=0$ is the line which separates points into positive and negative samples.
Also, the lines where the function evaluates to $-1$ or $1$ correspond to the maximal margins of the SVM. After some time I found that this kind of approach is known as iso lines or iso curves (the construction described above is essentially the marching squares algorithm). There are probably more similar algorithms like that. My implementation is named mesh contour (I did not find a proper name for it at the time) and you can find the source here. Some examples:
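In practice, if you just want such a heatmap and are not writing your own plotting library, matplotlib's contourf/contour can do the band-filling for you on top of the SVM's decision function. A minimal sketch with scikit-learn on toy 2-D data (data set, kernel and grid resolution are illustrative, not taken from the question):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Fit an RBF SVM on 2-D toy data.
X, y = make_moons(noise=0.2, random_state=0)
clf = SVC(kernel="rbf", C=100.0).fit(X, y)

# Evaluate the decision function on a grid and let matplotlib draw the colour bands.
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200))
zz = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, zz, levels=20, cmap="RdBu")          # the heatmap bands
plt.contour(xx, yy, zz, levels=[-1, 0, 1],                # margins and separating line
            colors="k", linestyles=["--", "-", "--"])
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors="k")
plt.show()

The interpretation is the same as described in the answer: the zero level set is the decision boundary in the original space, and the -1/+1 level sets are the margins.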
H: What is a "good" sample size Let's say if I have 2 TB of data, what is the best sample size to pick? I understand that there is a limit on how much RAM/processing power I have and hence I should make my sampling decision around that. But let's say if the processing power is not a concern for me right now. What would be a good way to approach my sample size? AI: This is a tough question to answer without more information. I'm going to assume that this is for model building, but without more detail it's hard to recommend something. However, there are some things which should generally be known: Population size How large is the population? Does your 2TB of data comprise the total population, or is this a sample of a given timeframe? What frame of data are you looking at - is this 2 days worth of data that is only representative for a given subset of the population, or is this everything? You'll need to know this to know what conclusions you can draw from this dataset. Variance What's the variance of the sample? If it's categorical data, how many unique values are there? Having a metric around this will help determine the number of samples you'll need. If this is a low variance set, you may only need a few hundred/thousand observations. Stratification/grouping Is your data grouped in a meaningful way? If so, you'll need to factor this into your sample. Depending on what you're doing, you'll want a meaningful representation of the population. If the data is not grouped, but has distinct groups within it that you care about, you may need to stratify or pre-process your data. Model and goals All of this ends up coming down to what you're trying to do. If you're trying to classify or parse a set of unique entities, you may be better off streaming a large set of your data rather than trying to sample it. If you're trying to classify images or customers based on behavior, you may only need a small subset depending on how these groups differ.
H: Calculation and Visualization of Correlation Matrix with Pandas I have a pandas data frame with several entries, and I want to calculate the correlation between the income of some types of stores. There are a number of stores with income data, a classification of the area of activity (theater, clothing stores, food ...) and other data. I tried to create a new data frame and insert a column with the income of all kinds of stores that belong to the same category, but the returned data frame has only the first column filled and the rest is full of NaN's. The code that I tried:

corr = pd.DataFrame()
for at in activity:
    stores.loc[stores['Activity'] == at]['income']

I want to do this so I can use .corr() to get the correlation matrix between the categories of stores. After that, I would like to know how I can plot the matrix values (-1 to 1, since I want to use Pearson's correlation) with matplotlib. AI: I suggest some sort of play on the following, using the UCI Abalone data for this example:

import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Read file into a Pandas dataframe
from pandas import DataFrame, read_csv
f = 'https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data'
df = read_csv(f)
df = df[0:10]
df

Correlation matrix plotting function:

# Correlation matrix plotting function
def correlation_matrix(df):
    from matplotlib import pyplot as plt
    from matplotlib import cm as cm

    fig = plt.figure()
    ax1 = fig.add_subplot(111)
    cmap = cm.get_cmap('jet', 30)
    cax = ax1.imshow(df.corr(), interpolation="nearest", cmap=cmap)
    ax1.grid(True)
    plt.title('Abalone Feature Correlation')
    labels = ['Sex','Length','Diam','Height','Whole','Shucked','Viscera','Shell','Rings']
    ax1.set_xticklabels(labels, fontsize=6)
    ax1.set_yticklabels(labels, fontsize=6)
    # Add colorbar, make sure to specify tick locations to match desired ticklabels
    fig.colorbar(cax, ticks=[.75, .8, .85, .90, .95, 1])
    plt.show()

correlation_matrix(df)

Hope this helps!
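As a side note (not from the original answer), a shorter route that avoids hard-coding labels is to pass the corr() output straight to matplotlib's matshow and take the tick labels from the correlation frame itself; a minimal sketch reusing the df built above:

import matplotlib.pyplot as plt

corr = df.corr()
fig, ax = plt.subplots()
cax = ax.matshow(corr, vmin=-1, vmax=1, cmap='coolwarm')
fig.colorbar(cax)
ax.set_xticks(range(len(corr.columns)))
ax.set_yticks(range(len(corr.columns)))
ax.set_xticklabels(corr.columns, rotation=90)
ax.set_yticklabels(corr.columns)
plt.show()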
H: How to plot large web-based heatmaps? I want to plot large heatmaps (say a matrix $500 \times 500$). I can do it in Python/matplotlib.pyplot with pcolor, but it is not interactive (and I need an interactive heatmap). I have tried with D3.js but what I found is aiming at displaying small heatmaps: http://bl.ocks.org/tjdecke/5558084 Naively extending this example with a bigger matrix (e.g. $500 \times 500$) can crash the web-browser. So, can anyone point me toward a good way of displaying and interacting with large heatmaps with a web-based technology: I want to be able to interact on a web-page or a ipython notebook. AI: Plotly and Lightning are [supposedly] able to visualize extremely large data sets.
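For the Plotly route mentioned in the answer, a 500 x 500 matrix is well within reach; a minimal sketch (API details may differ between Plotly versions, and the random matrix is just a stand-in for your data):

import numpy as np
import plotly.graph_objs as go

z = np.random.rand(500, 500)            # placeholder 500x500 matrix
fig = go.Figure(data=go.Heatmap(z=z))
fig.show()                              # interactive pan/zoom/hover in the notebook or browser

In older Plotly versions you may need plotly.offline.iplot(fig) inside a Jupyter notebook instead of fig.show().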
H: Linear regression with non-symmetric cost function? I want to predict some value $Y(x)$ and I am trying to get some prediction $\hat Y(x)$ that optimizes between being as low as possible, but still being larger than $Y(x)$. In other words: $$\text{cost}\left\{ Y(x) \gtrsim \hat Y(x) \right\} >> \text{cost}\left\{ \hat Y(x) \gtrsim Y(x) \right\} $$ I think a simple linear regression should do totally fine. So I somewhat know how to implement this manually, but I guess I'm not the first one with this kind of problem. Are there any packages/libraries (preferably python) out there doing what I want to do? What's the keyword I need to look for? What if I knew a function $Y_0(x) > 0$ where $Y(x) > Y_0(x)$. What's the best way to implement these restrictions? AI: If I understand you correctly, you want to err on the side of overestimating. If so, you need an appropriate, asymmetric cost function. One simple candidate is to tweak the squared loss: $\mathcal L: (x,\alpha) \to x^2 \left( \mathrm{sgn} x + \alpha \right)^2$ where $-1 < \alpha < 1$ is a parameter you can use to trade off the penalty of underestimation against overestimation. Positive values of $\alpha$ penalize overestimation, so you will want to set $\alpha$ negative. In python this looks like

def loss(x, a):
    return x**2 * (numpy.sign(x) + a)**2

Next let's generate some data:

import numpy
x = numpy.arange(-10, 10, 0.1)
y = -0.1*x**2 + x + numpy.sin(x) + 0.1*numpy.random.randn(len(x))

Finally, we will do our regression in tensorflow, a machine learning library from Google that supports automated differentiation (making gradient-based optimization of such problems simpler). I will use this example as a starting point.

import tensorflow as tf

X = tf.placeholder("float")  # create symbolic variables
Y = tf.placeholder("float")

w = tf.Variable(0.0, name="coeff")
b = tf.Variable(0.0, name="offset")
y_model = tf.mul(X, w) + b

cost = tf.pow(y_model-Y, 2)  # use sqr error for cost function

def acost(a):
    return tf.pow(y_model-Y, 2) * tf.pow(tf.sign(y_model-Y) + a, 2)

train_op = tf.train.AdamOptimizer().minimize(cost)
train_op2 = tf.train.AdamOptimizer().minimize(acost(-0.5))

sess = tf.Session()
init = tf.initialize_all_variables()
sess.run(init)

for i in range(100):
    for (xi, yi) in zip(x, y):
        # sess.run(train_op, feed_dict={X: xi, Y: yi})
        sess.run(train_op2, feed_dict={X: xi, Y: yi})

print(sess.run(w), sess.run(b))

cost is the regular squared error, while acost is the aforementioned asymmetric loss function. If you use cost you get

1.00764 -3.32445

If you use acost you get

1.02604 -1.07742

acost clearly tries not to underestimate. I did not check for convergence, but you get the idea.
H: how to make sklearn pipeline using custom model? I want to make a sklearn pipeline using the custom Artificial Neural Network I already have. I want to make a pipeline in which the input goes to the ANN and its output goes to the sklearn.svm.SVC model, which makes the final prediction. So, how can I do this using a sklearn pipeline? AI: Implementing a custom transformer is simple. You have to implement the fit and transform methods as below. Since your ANN is already trained (right?), the fit method has nothing to do and just returns self. The transform method has to pass the incoming data to the ANN and return its output.

from sklearn.base import BaseEstimator, TransformerMixin

class MyANNTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, ann):
        self.ann = ann

    def fit(self, X, y):
        return self

    def transform(self, X):
        return self.ann.predict(X)

Now you can include that in pipelines:

from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

pipe = make_pipeline(MyANNTransformer(ann), SVC())
pipe.fit(Xtrain, ytrain)
pipe.predict(Xtest)
H: How to store and analyze classification results with Python? I'm applying ML for classification task in Python using sklearn/pandas. I'm going to try various things to get the best results, and I wonder how do I effectively store and analyze all the parameters and results of the classification? Parameters include: Number of training examples (which can be extended as I get more labeled data). Set of features. Classification algorithm. Algorithm hyperparameters. Results include: Precision/recall for each of the classes. Overall precision/recall. Support for each class, etc. Of course, I can manually copy the parameters and results to an Excel spreadsheet every time, but it's not an optimal solution. Are there any Python libraries (or modules of sklearn/pandas) which allow to easily store and display the parameters and results for later analysis? How do you solve this task? AI: Sacred is a python library developed by the IDSIA lab that "facilitates automated and reproducible experimental research". It is available through pip as sacred. For a related discussion see reddit.
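If you want to stay with plain pandas before adopting a framework like Sacred, a minimal hand-rolled experiment log could look like the sketch below (all column names and the helper function are just an illustration, not part of any library):

import json
import pandas as pd
from sklearn.metrics import precision_score, recall_score

results_log = []

def log_run(clf, params, X_test, y_test, n_train, features):
    """Append one experiment's parameters and scores to the in-memory log."""
    y_pred = clf.predict(X_test)
    results_log.append({
        "algorithm": type(clf).__name__,
        "params": json.dumps(params),          # keep hyperparameters reproducible
        "n_train": n_train,
        "features": ",".join(features),
        "precision": precision_score(y_test, y_pred, average="macro"),
        "recall": recall_score(y_test, y_pred, average="macro"),
    })

# ... after several runs:
pd.DataFrame(results_log).to_csv("experiments.csv", index=False)

The resulting CSV can then be sorted, filtered and plotted like any other pandas data frame.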
H: Expand or compact features? I have a classification task for people with 3 categories. I want to apply machine learning for that. I have 10 sources of data, which have the same fields (say 4: age, job title, a number of organizations, a number of followers). Data is incomplete, some fields can be missing in some profiles. The training set is limited (say, 300 examples). I have two strategies for feature engineering, and I don't know which one to use. Expand features: take 40 features (Profile 1 age, Profile 1 job title, ..., Profile 10 age, Profile 10 job title). Compact features: take 4 features, and apply some heuristics to merge the values from different profiles. Say, take age and job title which occur most frequently, take a maximum number of organizations, take a sum of numbers of followers. What strategy is generally used to give best results and why? AI: The way I see it is that your 10 sources of data, they all refer to the same set of people. Depending on the attributes, some can be expanded, some can be merged ... Attributes such as age should be unique, so it doesn't make sense to expand it to Profile 1 age, profile 2 age ... One simple way is merge them is by using the average or use max. Expanding age only add redundant data to your feature matrix, and increase its dimensionality, in most cases, this doesn't help generalization performance of your model. On the other hand, number of followers can be expanded. Depending on the data source, a guy has 10 followers on Twitter but 1000 followers on Google+ might simply mean that he barely uses Twitter. That being said, the way you pick your features or engineer new features should increase your model performance, so if expanding number of followers actually decrease Cross Validation or Test performance, compared to the one using sum of followers then you can simply use sum of followers.
H: convert some observations into variables I have a table formatted like the following:

Feature amount ID Location
Feat1   2      1  US
Feat2   0      1  US
Feat3   0      1  US
Feat4   1      1  US
Feat2   2      2  US
Feat4   0      2  US
Feat3   0      2  US
Feat6   1      2  US

Let's say I have 200 different IDs. I want to convert all the different features into variables and amount into observations, so that rows with the same ID are combined into one row. For example:

Feat1 Feat2 Feat3 Feat4 Feat5 Feat6 ID Location
2     0     0     1     NA    NA    1  US
NA    2     0     0     NA    1     2  US

Is there a good way to do it either in Python (pandas) or R? Thanks in advance! AI: Assume your table can be put into a pandas DataFrame object data with the 4 columns as above. One way to achieve what you want is to do a GROUPBY using ID and Location, and then gradually assign values to each row of the new table (a pivot_table alternative is sketched below):

newdata = pd.DataFrame(columns=['ID', 'Location', 'Feat1', 'Feat2', 'Feat3', 'Feat4', 'Feat5', 'Feat6'])
grouped = data.groupby(['ID', 'Location'])
for index, (group_name, d) in enumerate(grouped):
    newdata.loc[index, 'ID'] = group_name[0]
    newdata.loc[index, 'Location'] = group_name[1]
    for feature, amount in zip(d['Feature'], d['amount']):
        newdata.loc[index, feature] = amount
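As a side note (not part of the original answer), pandas can do this reshaping directly with pivot_table; a minimal sketch, assuming the same column names as in the question:

import pandas as pd

# data has columns: Feature, amount, ID, Location (as in the question)
newdata = (data.pivot_table(index=["ID", "Location"],
                            columns="Feature",
                            values="amount")
               .reset_index())
newdata.columns.name = None   # drop the leftover "Feature" axis label
print(newdata)

Feature/ID combinations that never occur come out as NaN, matching the NA entries in the expected output.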
H: How to deal with categorical feature of very high cardinality? I would like to train a binary classifier on feature vectors. One of the features is categorical feature with string, it is the zip codes of a country. Typically, there is thousands of zip codes, and in my case they are strings. How can convert this feature into numerical? I do not think that using one-hot-encoding is good as a solution for my case. Am I right by saying that? If yes, what would be a suitable solution? AI: One-hot-encoded ZIP codes shouldn't present a problem with modern tools, where features can be much wider (millions, billions even), but if you really want you could aggregate area codes into regions, such as states. Of course, you should not use strings, but bit vectors. Two other dimensionality reduction options are MCA (PCA for categorical variables) and random projection.
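A minimal sketch of the sparse one-hot route described in the answer, assuming scikit-learn >= 0.20 (where OneHotEncoder accepts string categories directly); the data frame and column name are purely illustrative:

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# Hypothetical frame with a high-cardinality string column.
df = pd.DataFrame({"zip": ["10001", "94103", "60614", "10001"]})

enc = OneHotEncoder(handle_unknown="ignore")   # returns a scipy sparse matrix by default
X = enc.fit_transform(df[["zip"]])
print(X.shape, X.nnz)   # thousands of columns are fine as long as they stay sparse

Because the result is sparse, even tens of thousands of ZIP codes add little memory overhead, which is the "bit vector" point made above.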
H: Does traning the Word2Vec model multiple times affect `min_count` parameter? In Word2Vec, If I train a set of sentences multiple times with change in order (as it increases the vector representations), will the frequency of a word get changed due to it.? For example, if I have the word "deer" in my corpus 4 times and If I set the min_count to be 5, does training the model 3 times repeatedly count "deer" with frequency 12 and will be included in the model ? If it knows it is the same corpus then how it is possible to differentiate, if I retrain the model with a new corpus. AI: The question has been answered in google groups by Gordon mohr. Normally there's one read of the corpus to build the vocabulary (which includes initializing the model based on the learned vocabulary size), then any number of extra passes for training. It's only after the one vocabulary-learning scan that word counts are looked at (and compared to min_count for trimming). If you supply a corpus (as a restartable iterator) as one of the arguments to the initial creation of the Word2Vec model, all these steps are done automatically: one read of the corpus (through the build_vocab() method) to collect words/counts, then one or more passes (as controlled by the 'iter' parameter and done through the train() method) for training. Still, only the count for the single pass over the supplied corpus matters for frequency decisions. If you don't supply a corpus at model-initialization, you can then call build_vocab(…) and train(…) yourself. It's only what's passed to build_vocab() that matters for retained frequency counts (and the estimate of corpus size). You can then call train(…) in other ways, or repeatedly – it just keeps using the vocabulary from the one earlier build_vocab(…) call. (Note that train(…) does try to reuse the single-pass corpus size, remembered from the vocab-scanning pass, to give accurate progress-estimates and schedule the decay of the training-rate alpha. So if you give a different-sized corpus to train(…), you should also use its other optional parameters to give it a hint of the size.)
H: SVM prediction time increase with number of test cases I am using scikit-learn's SVM for the MNIST digit classification dataset. In order to improve the performance I extended the dataset by adding rotated samples. I was aware that SVM takes O(N^3) time to train the data, where N is the number of training vectors. However even prediction seems to take increase polynomially, the number of test vectors is the same. Is there any explanation for this or some equation that relates prediction time to the number of training samples? I am using a 3rd degree polynomial as the kernel with C=100.0. Note: I am doing a group project to compare the performance of various methods so I can't use any other method as my teammates would have used those. I referred to a paper by Decoste and Scholkoph which uses Virtual SVM. However I don't think I can run this on my current system if I can't run a simple extended training set. AI: The number of support vectors must be increasing. The prediction time is proportional to that; after all, the kernel classifier is $f(x) = \sum_i \alpha_i k(x, x_i)$, where the summation is over the support vectors. With sklearn you can find out how many you have by inspecting n_support_
H: Which one first: algorithm benchmarking, feature selection, parameter tuning? When trying to do e.g. a classification, my approach currently is to 1) try out various algorithms first and benchmark them, 2) perform feature selection on the best algorithm from 1 above, and 3) tune the parameters using the selected features and algorithm. However, I often cannot convince myself that there isn't a better algorithm than the selected one, if the other algorithms had been optimized with the best parameters / most suitable features. At the same time, doing a search across all algorithms * parameters * features is just too time-consuming. Any suggestion on the right approach / sequence? AI: I assume by feature selection you mean feature engineering. The process I usually follow, and which I see some people use, is: 1) Feature engineering. 2) Try a few algorithms, usually highly performant ones such as Random Forest, Gradient Boosted Trees, Neural Networks, or SVM, on the features. 2.1) Do simple parameter tuning such as grid search over a small range of parameters. If the result of step 2 is not satisfactory, go back to step 1 to generate more features, or remove redundant features and keep the best ones (people usually call this feature selection). If you are running out of ideas for new features, try more algorithms. If the result is alright or close to what you want, then move to step 3. 3) Extensive parameter tuning. The reason for doing this is that classification is mostly about feature engineering; unless you have an incredibly powerful classifier customized for a particular problem, such as deep learning for computer vision, generating good features is the key. Choosing a classifier is important but not crucial. All the classifiers mentioned above are quite comparable in terms of performance, and most of the time the best classifier turns out to be one of them. Parameter tuning can boost performance, in some cases quite a lot, but without good features tuning doesn't help much. Keep in mind that you always have time for parameter tuning. Also, there's no point in tuning parameters extensively and then discovering a new feature and having to redo the whole thing.
H: How do you calculate how dense or sparse a dataset is? I'm looking deeper into collaborative filtering. One really interesting paper is "A Comparative Study of Collaborative Filtering Algorithms" http://arxiv.org/pdf/1205.3193.pdf In order to select which CF algorithm should be used the paper refers to the density of the dataset. What it doesn't do is explain how you actually calculate the density of your dataset. So in the context of that above paper can anyone help explain to me how I would calculate the density of a dataset? The paper refers regularly density in the 1-5% range. AI: It's actually defined on the first page: ... sparsity level (ratio of observed to total ratings) ... In other words, the fraction of the user/item rating matrix that is not empty. Remember that the problem is that most user-item pairs have no rating, and we wish to estimate them. Example: Let there be three users and four products. The number of possible ratings is $3\times4 = 12$. If every user rates only one product each (regardless of which product), the density is 3/12 = 25%.
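Spelled out in code, the same toy example (3 users, 4 products, one rating each) looks like this; a minimal sketch where 0 stands for "no rating":

import numpy as np

n_users, n_items = 3, 4
ratings = np.array([[5, 0, 0, 0],
                    [0, 3, 0, 0],
                    [0, 0, 4, 0]])

density = np.count_nonzero(ratings) / (n_users * n_items)
print(density)   # 3 / 12 = 0.25

For real recommender data the matrix would typically be a scipy sparse matrix, in which case ratings.nnz gives the numerator directly.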
H: Interpreting the results of randomized PCA in scikit-learn I'm using scikit-learn to do a genome-wide association study with a feature vector of about 100K SNPs. My goal is to tell the biologists which SNPs are "interesting". RandomizedPCA really improved my models, but I'm having trouble interpreting the results. Can scikit-learn tell me which features are used in each component? AI: Yes, through the components_ property: import numpy, seaborn, pandas, sklearn.decomposition data = numpy.random.randn(1000, 3) @ numpy.random.randn(3,3) seaborn.pairplot(pandas.DataFrame(data, columns=['x', 'y', 'z'])); sklearn.decomposition.RandomizedPCA().fit(data).components_ > array([[ 0.43929754, 0.81097276, 0.38644644], [-0.54977152, 0.58291122, -0.59830243], [ 0.71047094, -0.05037554, -0.70192119]]) sklearn.decomposition.RandomizedPCA(2).fit(data).components_ > array([[ 0.43929754, 0.81097276, 0.38644644], [-0.54977152, 0.58291122, -0.59830243]]) We see that the truncated decomposition is simply the truncation of the full decomposition. Each row contains the coefficients of the corresponding principal component.
H: Using a model for a different dataset I have generated a model for predicting the future input trend with a sample data using linear regression in Knime. I want to validate the model using a different data set. Suppose the data set I used for creating the model came from the sensor of device A. The prediction is accurate for the dataset from device A. Lets say I saved the model as a PMML file called A. Can I use the same PMML file A to predict values of Device B (Values of B are not comparable). If not, should I create a different model for all data set that I have? Question How do I combine all the models generated so that it predicts any given data set? Is it possible? AI: I think you answered yourself by "values of B are not comparable". Learning for predictions is based on a fundamental assumption which is the data for prediction has the same joint distribution as the data for learning. This is the link between those processes. Now, if you want to handle that in a meaningful way you have to know somehow the source type. In your example the device type. One way would be to introduce the device type as a different column in your data set, so that the model can have the chance to differentiate between source type. Obviously you have to have training data for all the device types. Supposing you have 2 device types, A and B. Your training data should have some columns for the signal and also a factor column with type A and B. Also you have to have enough instances of data with type A and type B.
H: What can I do with a Decision Tree with poor ROC Let's say I do a Decision Tree analysis, but the performance characteristics are nothing great (e.g. the ROC is nothing great). Is there anything I can do with this "not so great" tree, or do I typically need to trash it and try something else (either a new data set or a new analysis on the same data)? AI: Decision trees have one big quality and one big drawback. Their big quality is that they are what is known as glass-box models. What I mean by that is that they expose what they learn in a very clear and intuitive way. The name comes from the fact that you can see through a glass box. Because of that, decision trees are very useful for analysis; they are a nice support for understanding the relations between variables, which variables are important, and in which way they are important. Even if they do not provide crystal-clear information, they can give you ideas about that. This can be very helpful, especially if you have domain expert knowledge and can put things together in a meaningful manner. Their main drawback is their high variability. This problem is mainly caused by their greedy approach. Each decision in the first-level nodes shapes the tree differently. You can even go further and see that a single additional data point is enough, in many cases, to produce a totally different tree, especially if the sample is small or the data is noisy. There are two types of approaches to address this issue. The first type of approach tries to improve the single tree you built. This kind of approach is known as pruning. A simple example would be reduced-error pruning. This approach is simple and produces good results. You train your tree on a sample of data. Then you take another sample of data, fit the new data to the tree, and evaluate the nodes of the tree again from the perspective of the new data. If a non-leaf node gets at least the same error when it is not split as when it is split, then you can decide to cut the child nodes and transform that node into a leaf node. There are, however, much nicer pruning strategies, which are stronger, perhaps based on cross validation or some other, mostly statistical, criteria. Notice, however, that for reduced-error pruning you need additional data, or you have to split your original sample in two: one part for training, the other for pruning. If you go further and want to estimate the prediction error, you need a third sample. The second approach would be either to build multiple trees and choose among them based on cross validation, bootstrapping or whatever method you use, or to use tree ensembles such as bagging or boosting algorithms. Note that with boosting and bagging you lose the glass-box property. Ultimately you have to choose between understanding and performance, with the pruning procedure as a decent compromise.
H: Dissmissing features based on correlation with target variable Is it valid to dismiss features based on their Pearson correlation values with the target variable in a classification problem? say for instance I have a dataset with the following format where the target variable takes 1 or 0: >>> dt.head() ID var3 var15 imp_ent_var16_ult1 imp_op_var39_comer_ult1 \ 0 1 2 23 0 0 1 3 2 34 0 0 2 4 2 23 0 0 3 8 2 37 0 195 4 10 2 39 0 0 imp_op_var39_comer_ult3 imp_op_var40_comer_ult1 TARGET 0 0 0 0 1 0 0 0 2 0 0 0 3 195 0 0 4 0 0 0 Computing the correlation matrix gives the following values ID var3 var15 imp_ent_var16_ult1 imp_op_var39_comer_ult1 imp_op_var39_comer_ult3 imp_op_var40_comer_ult1 TARGET ID 1.0 -0.00102533166614 -0.00213549813966 -0.00311137548461 -0.00143645708778 -0.00413114484307 -0.00727672024906 var3 -0.00102533166614 1.0 -0.00445177129541 0.0018681447614 0.00598903116859 0.00681691701467 0.00151753041397 var15 -0.00213549813966 -0.00445177129541 1.0 0.0437222608106 0.0947624170998 0.101177078747 0.0427540973727 imp_ent_var16_ult1 -0.00311137548461 0.0018681447614 0.0437222608106 1.0 0.0412213212518 0.0348787079026 0.00989582043194 imp_op_var39_comer_ult1 -0.00143645708778 0.00598903116859 0.0947624170998 0.0412213212518 1.0 0.886476049204 0.342709191344 imp_op_var39_comer_ult3 -0.00413114484307 0.00681691701467 0.101177078747 0.0348787079026 0.886476049204 1.0 0.316671244555 imp_op_var40_comer_ult1 -0.00727672024906 0.00151753041397 0.0427540973727 0.00989582043194 0.342709191344 0.316671244555 1.0 TARGET 0.0031484687227 0.00447479817554 0.101322098561 -1.74602537678e-05 0.0103531295754 0.0035169224417 0.00311938694896 Is it valid, to dismiss all features where the correlation with target is lower than a threshold (say for instance, 0.1)? What if there is a strong inter-attributes correlation as high as 1 where the correlated attributes are continuous variables, does this mean that these features hold redundant information for the learner? can I safely remove one of them without risking to lose information? AI: You've really got a classification problem on your hands, not a regression problem. Your target is not continuous, and Pearson correlation measures a relationship between continuous variables really. That's problematic enough to start. Low correlation means there's no linear relationship; it doesn't mean there's no information in the feature that predicts the target. I think you're really looking for mutual information, in this case between continuous and categorical variables. (I assume your other inputs are continuous?) This is a little involved; see https://stats.stackexchange.com/questions/29489/how-do-i-study-the-correlation-between-a-continuous-variable-and-a-categorical If you're attempting to do feature selection then you could perform a logistic regression with L1 regularization and select features based on the absolute value of their coefficients.
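As a concrete follow-up to the mutual-information suggestion in the answer above, scikit-learn ships an estimator for exactly this continuous-features/categorical-target case; a minimal sketch using the question's data frame name (dt) and target column:

from sklearn.feature_selection import mutual_info_classif

X = dt.drop(columns=["TARGET"])
y = dt["TARGET"]

mi = mutual_info_classif(X, y, random_state=0)
for name, score in sorted(zip(X.columns, mi), key=lambda t: -t[1]):
    print(name, round(score, 4))

Higher scores indicate features that carry more information about the target, whether or not the relationship is linear.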
H: Detecting redundancy with Pearson correlation in continuous features I have a set of variables that I want to use for a regression or a classification problem. Having computed the correlation matrix of these variables, I discovered that some of them have inter-variable Pearson correlation values as high as 1. Does this mean that these variables hold redundant information for the learner? Is it safe to remove one of them without risking information loss? If yes, how do I choose which one to remove? AI: If the correlation between two features $x_1$ and $x_2$ is 1, that means you can write $x_1 = c\cdot x_2 + a$. The only new knowledge there is those two constants; the individual values can be recovered knowing this. I highly doubt there is anything a machine learning algorithm can learn from the duplicate, and for some algorithms having this kind of correlation between features can hurt performance quite a bit. So I would test it, but I would say it's very likely you can remove one of the two, and which one you remove is not going to matter.
H: Combinatoric system which using in bookies "S3" system I am interested what type of combinatorics is using for following bookmakers system called "S3": We have N={1..8} events We build express pairs C=8 like [(1, 2, 4), (1, 3, 6), (1, 6, 8), (2, 3, 5), (2, 5, 8), (3, 7, 8), (4, 5, 7), (4, 6, 7)] Each event repeats 3-times: 1 => (1, 2, 4), (1, 3, 6), (1, 6, 8).. It's not permutation, combination... Please advice what is it? I want as example generate in similar way 4 from 10 if this possible. After I'd like to predict the optimal way for building the `pairs' AI: This can be interpreted as an unusual error correcting code with a non-binary signal. Essentially, you want the tuples to be very different, i.e. there are no two tuples that agree in two positions. That is also a simple strategy for producing such codes (the more interesting question is what (x,y) combination is best.) Think of it this way: assuming you only know part of the tuple. Say the first is unknown: (?, 2, 4) then you want to be able still conclude the correct triple. https://en.wikipedia.org/wiki/Forward_error_correction
H: Add new factor across multiple groups I am trying to add multiple new rows for 3 new factor level in an existing data frame. Please refer to sample data for an example. My starting data frame has 18 levels for col1 and all 12 months for column mon and past 20 years for year. I then impute values and add new columns, however I need new factors to be added for further analysis. For each mon and year combination, a new level should exist. Sample df: col1 <- c(rep("a",4),rep("b",4)) col2 <- c(1:4) mon <- c(rep(c("Jan","Feb", "Mar","Apr"), 4)) year <- c(rep("2016",8), rep("2015",8)) df <- as.data.frame(cbind(col1,col2,mon,year)) head(df,8) # edited to make it readable col1 col2 mon year 1 a 1 Jan 2016 2 a 2 Feb 2016 3 a 3 Mar 2016 4 a 4 Apr 2016 5 b 1 Jan 2016 6 b 2 Feb 2016 7 b 3 Mar 2016 8 b 4 Apr 2016 Expected Output col1 col2 mon year 1 a 1 Jan 2016 2 a 2 Feb 2016 3 a 3 Mar 2016 4 a 4 Apr 2016 5 b 1 Jan 2016 6 b 2 Feb 2016 7 b 3 Mar 2016 8 b 4 Apr 2016 9 c NA Jan 2016 # New level c for each mon and year 10 c NA Feb 2016 # New level c for each mon and year 11 c NA Mar 2016 # New level c for each mon and year 12 c NA Apr 2016 # New level c for each mon and year How do I go about reaching the expected df? AI: Several possibilities. For example, to add c for existing mon-year combinations in your data frame: rbind(df, transform(df[!duplicated(df[, 3:4]), ], col1="c", col2=NA)) # col1 col2 mon year # 1 a 1 Jan 2016 # 2 a 2 Feb 2016 # 3 a 3 Mar 2016 # 4 a 4 Apr 2016 # 5 b 1 Jan 2016 # 6 b 2 Feb 2016 # 7 b 3 Mar 2016 # 8 b 4 Apr 2016 # 9 a 1 Jan 2015 # 10 a 2 Feb 2015 # 11 a 3 Mar 2015 # 12 a 4 Apr 2015 # 13 b 1 Jan 2015 # 14 b 2 Feb 2015 # 15 b 3 Mar 2015 # 16 b 4 Apr 2015 # 17 c <NA> Jan 2016 # 21 c <NA> Feb 2016 # 31 c <NA> Mar 2016 # 41 c <NA> Apr 2016 # 91 c <NA> Jan 2015 # 101 c <NA> Feb 2015 # 111 c <NA> Mar 2015 # 121 c <NA> Apr 2015 To add c for all possible combinations of existing mon values and existing year values: rbind(df, data.frame(col1="c", col2=NA, expand.grid(mon=levels(df$mon), year=levels(df$year)))) To add c for all possible combinations of all months names and existing year values: rbind(df, data.frame(col1="c", col2=NA, expand.grid(mon=month.abb, year=levels(df$year)))) and so on.
H: In Latent Dirichlet Allocation (LDA), is it reasonable to reconstruct the original bag-of-words using the document and word representations? In Latent Dirichlet Allocation (LDA), is it reasonable to reconstruct the original bag-of-words using the document-by-topic and topic-word inferred matrices? I understand that I will not get frequencies by reconstructing the original matrix, but is the non-zeros after reconstruction valid? AI: It is possible to produce a corpus from the learned LDA parameters ($\theta$ and $\phi$) according to the generative model of LDA but it is not realistic to expect that you would recreate the original documents (in bag-of-words form). To be more specific, it is possible - but highly improbable - that you would generate the bag-of-words documents corresponding to the input corpus.
H: K Means giving poor results I have several user names and their salaries. Now I need to cluster users based on their salaries. I am using KMeans clustering and the following is my code:

from sklearn.cluster import KMeans
from sklearn.preprocessing import LabelEncoder
import pandas as pd

le = LabelEncoder()
data = pd.read_csv('kmeans.data', header=None, names=['user', 'salary'])
# Numerical conversion
data['user'] = le.fit_transform(data['user'])

km = KMeans(n_clusters=4, random_state=10, n_init=10, max_iter=500)
km.fit(data)

data['labels'] = le.inverse_transform(data['user'])
data['cluster'] = km.labels_
print data

But my results are bad and there is a lot of overlap between salaries. Is there anything wrong in the code? How can I improve the results? Or is clustering not the right approach here? Then how can I cluster users based only on salary?

km.fit(data['salary'])

EDIT: I figured out a way to solve my problem using numpy.reshape:

km.fit(data['salary'].reshape(-1,1))

AI: K-means is based on the assumption that the data is "translation invariant" (more precisely, that variance is equally meaningful everywhere, since k-means is variance minimization). In other words, it assumes that a difference of d=(x-y)^2 is of the same importance everywhere. Because of this, k-means does not work well on skewed data. Furthermore, because of the square, it is sensitive to outliers and other extreme values. For salaries and other monetary values, this assumption usually does not hold. The difference between \$0 and \$1000 is massive, and not the same as a salary difference of \$100000 to \$101000. Salaries are usually rather skewed, and you often have some extreme values. Converting the "user" attribute to a numerical value is outright statistical nonsense. What is variance worth on this attribute? K-means is for continuous numerical data only, and converting the data does not change its nature, only the encoding - it is still inappropriate.
H: How does SelectKBest work? I am looking at this tutorial: https://www.dataquest.io/mission/75/improving-your-submission At section 8, finding the best features, it shows the following code.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.feature_selection import SelectKBest, f_classif

predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", "FamilySize", "Title", "FamilyId"]

# Perform feature selection
selector = SelectKBest(f_classif, k=5)
selector.fit(titanic[predictors], titanic["Survived"])

# Get the raw p-values for each feature, and transform from p-values into scores
scores = -np.log10(selector.pvalues_)

# Plot the scores. See how "Pclass", "Sex", "Title", and "Fare" are the best?
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()

What is k=5 doing, since it is never used (the graph still lists all of the features, whether I use k=1 or k="all")? How does it determine the best features, and are they independent of the method one wants to use (whether logistic regression, random forests, or whatever)? AI: The SelectKBest class just scores the features using a function (in this case f_classif, but it could be others) and then "removes all but the k highest scoring features". http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html#sklearn.feature_selection.SelectKBest So it's kind of a wrapper; the important thing here is the function you use to score the features. For other feature selection techniques in sklearn read: http://scikit-learn.org/stable/modules/feature_selection.html And yes, f_classif and chi2 are independent of the predictive method you use.
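To see the k=5 selection actually take effect (the tutorial's plot shows the scores of all features and so ignores it), you can ask the fitted selector which columns it kept; a short sketch reusing the names above:

# Boolean mask of the k selected features
mask = selector.get_support()
selected = [p for p, keep in zip(predictors, mask) if keep]
print(selected)

# transform() reduces the data to only those k columns
X_reduced = selector.transform(titanic[predictors])
print(X_reduced.shape)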
H: Application of ideas from graph theory in machine learning I work with neural networks (ConvNNs, DeepNNs, RNNs/LSTMs) for image segmentation and recognition and Genetic Algorithms for some optimization problems. Recently I started to learn some deep graph theory ideas (random graphs, chromatic numbers, graph coloring). I'm familiar with combinatorics at undergrad level. Are there any existing interesting applications and areas of research of graph theory and combinatorics in ML? AI: Graphs are a very flexible form of data representation, and therefore have been applied to machine learning in many different ways in the past. You can take a look to the papers that are submitted to specialized conferences like S+SSPR (The joint IAPR International Workshops on Structural and Syntactic Pattern Recognition and Statistical Techniques in Pattern Recognition) and GBR (Workshop on Graph-based Representations in Pattern Recognition) to start getting a good idea of potential applications. Some examples: Within the Computer Vision field, graphs have been used to extract structure information that can later on be used on several applications, like for instance object recognition and detection, image segmentation and so on. Spectral clustering is an example of clustering method based on graph theory. It makes use of the eigenvalues of the similarity matrix to combine clustering and dimensionality reduction. Random walks may be used to predict and recommend links in social networks or to rank webpages by relevance.
H: Identifying baseline consumption I have data of intraday electricity consumptions (by half hours - 48 a day) over a year of 4000 households. Task is to establish baseline consumption of each of these households - possibly also differentiated on seasonality. One way how to do this would be just taking the mean of the consumption signals. What would be more sophisticated method for this? I would be very grateful for pointing out to methods I could look into. AI: You can try and model your data as having trend/seasonal/cyclic components if you want something a little more sophisticated. Here is an intro reference to get you started.
H: Sklearn: How to adjust data set proportion during training, but not testing I'm using sklearn/pandas/numpy. I have a labeled data set, where the potential outcomes are either True or False. However, the data set has a much higher proportion of True entries. When running through classifiers with k-fold (n=5) cross validation, this appears to bias the classifier towards just saying True. Using weights, I was able to adjust the sample data set I'm using to have a proportion closer to 1:1, like so (using a pandas csv):

results = csv[['result']]
weights = np.where(results.as_matrix() == True, 0.25, 1).ravel()
csv_sample = csv.sample(n=60000, weights=weights)

And the results are much more promising! However, I'm wondering if there's a way for me to do cross validation where the TRAINING set is adjusted in this manner, but the TEST set is closer to the actual proportion of the data. AI: Try using the estimator option class_weight='balanced' (called 'auto' in older scikit-learn versions). It worked really well for me with SGDClassifier in a similar situation.
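A minimal sketch of how that fits together with cross validation (X and y are placeholders for your features and True/False labels): class_weight reweights the rare class during training only, while stratified test folds keep the data set's true class proportions.

from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

clf = SGDClassifier(class_weight="balanced", random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="f1")
print(scores.mean())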
H: Histogram of some values only I have a pandas dataframe df, and I want to show the histogram. df.hist(bins=100, label="myhist") Now, for some reason I have lots of zeros in this df, so I only want to show the values between 1 and 100. I tried df.hist(bins=(1,100), label="myhist") but that gives a flat line which has nothing to do with the data. How do I do it? AI: Ok, after some digging around I found that I can pass a range = (1,100) and that does the trick.
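Spelled out, the call the answer refers to would look something like this (keeping the data frame name from the question):

import matplotlib.pyplot as plt

# bins sets the number of bars; range restricts the counted values to [1, 100]
df.hist(bins=100, range=(1, 100))
plt.show()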
H: How to determine if a company decision was successful or not? I'm trying to figure out if a decision taken in a company (offering discounts for specific products) was successful or not. I have done some research and saw that A/B testing might be a way to do this, but A/B tests can only be carried out with 2 groups (control and experiment), and in this case all I have is "before" and "after the decision" data. Can the statistics behind A/B testing be used in this case? The data I have is in the form of sales per day per article, for one company only, before and after the decision (only one decision). Example: sales for articles X and Y from 2016-01-01 to 2016-02-01. The decision was to lower discounts on articles A and B on 2016-01-15, and I want to know whether sales decreased after this decision. AI: You could use a time series approach to model the "what if the decision had not been taken" scenario and compare it with your observed values after the change was introduced. Check the CausalImpact package from Google, and you might also find this tutorial on probabilistic programming helpful: http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Prologue/Prologue.ipynb
H: Image Segmentation with a challenging background I'm working on an animal classification problem, with the data extracted from a video feed. The recording was made in a pen, so the problem is quite challenging, with a dark background and many shadows. Initially I tried scikit-image, but then someone helped me with an advanced tool called crf-rnn (http://crfasrnn.torr.vision/) that does a great job segmenting and labelling objects in an image. I did the following:

import caffe
net = caffe.Segmenter(MODEL_FILE, PRETRAINED)
IMAGE_FILE = '0045_crop2.png'
input_image = caffe.io.load_image(IMAGE_FILE)

from PIL import Image as PILImage
image = PILImage.fromarray(np.uint8(input_image))
image = np.array(image)

mean_vec = [np.mean(image[:,:,vals]) for vals in range(image.shape[2])]
im = image[:, :, ::-1]
im = im - reshaped_mean_vec

cur_h, cur_w, cur_c = im.shape
pad_h = 750 - cur_h
pad_w = 750 - cur_w
print(pad_h, pad_w, "999")
im = np.pad(im, pad_width=((0, max(pad_h,0)), (0, max(pad_w,0)), (0, 0)), mode='constant', constant_values=255)

segmentation = net.predict([im])
segmentation2 = segmentation[0:cur_h, 0:cur_w]

The resulting image segmentation is rather poor (although two cows are recognized correctly). I use a trained crf-rnn (MODEL_FILE, PRETRAINED), which works well for other problems, but this one is harder. I would appreciate any suggestions on how to pre-process this sort of image to extract the shape of most cows. AI: It would help if you could explain precisely what your goal is: do you want to identify which animal is in the picture? Do you want to count the number of animals? Do you want to get the position of each animal in the picture? In any case, I know that you can get already-trained neural nets from Google or elsewhere. Such a net can be used with caffe, as in this Google Deep Dream notebook on GitHub: https://github.com/google/deepdream/blob/master/dream.ipynb Then, if you want to highlight or identify the positions of your animals, you'll find this article inspiring: http://www.matthewzeiler.com/pubs/cvpr2010/cvpr2010.pdf It explains how to reverse convolutional networks to identify which parts of your image helped the network recognize what is inside. The resulting projection gives you something similar to your second picture (called a mask), but depending on the neural net you use, you can get better results.
H: Can closer points be considered more similar in T-SNE visualization? I understand from Hinton's paper that T-SNE does a good job in keeping local similarities and a decent job in preserving global structure (clusterization). However I'm not clear if points appearing closer in a 2D t-sne visualization can be assumed as "more-similar" data-points. I'm using data with 25 features. As an example, observing the image below, can I assume that blue datapoints are more similar to green ones, specifically to the biggest green-points cluster?. Or, asking differently, is it ok to assume that blue points are more similar to green one in the closest cluster, than to red ones in the other cluster? (disregarding green points in the red-ish cluster) When observing other examples, such as the ones presented at sci-kit learn Manifold learning it seems right to assume this, but I'm not sure if is correct statistically speaking. EDIT I have calculated the distances from the original dataset manually (the mean pairwise euclidean distance) and the visualization actually represents a proportional spatial distance regarding the dataset. However, I would like to know if this is fairly acceptable to be expected from the original mathematical formulation of t-sne and not mere coincidence. AI: I would present t-SNE as a smart probabilistic adaptation of the Locally-linear embedding. In both cases, we attempt to project points from a high dimensional space to a small one. This projection is done by optimizing the conservation of local distances (directly with LLE, preproducing a probabilistic distribution and optimizing the KL-divergence with t-SNE). Then if your question is, does it keep global distances, the answer is no. It will depend on the "shape" of your data (if the distribution is smooth, then distances should be somehow conserved). t-SNE actually doesn't work well on the swiss roll (your "S" 3D image) and you can see that, in the 2D result, the very middle yellow points are generally closer to the red ones than the blue ones (they are perfectly centered in the 3D image). An other good example of what t-SNE does is the clustering of handwritten digits. See the examples on this link:https://lvdmaaten.github.io/tsne/
H: Are Word2Vec and Doc2Vec both distributional representation or distributed representation? I have read that distributional representation is based on distributional hypothesis that words occurring in similar context tends to have similar meanings. Word2Vec and Doc2Vec both are modeled according to this hypothesis. But, in the original paper, even they are titled as Distributed representation of words and phrases and Distributed representation of sentences and documents. So, are these algorithms based on distributional representation or distributed representation. How about other models such as LDA and LSA. AI: The reply from Andrey Kutuzov via google groups felt satisfactory I would say that word2vec algorithms are based on both. When people say distributional representation, they usually mean the linguistic aspect: meaning is context, know the word by its company and other famous quotes. But when people say distributed representation, it mostly doesn't have anything to do with linguistics. It is more about computer science aspect. If I understand Mikolov and other correctly, the word distributed in their papers means that each single component of a vector representation does not have any meaning of its own. The interpretable features (for example, word contexts in case of word2vec) are hidden and distributed among uninterpretable vector components: each component is responsible for several interpretable features, and each interpretable feature is bound to several components. So, word2vec (and doc2vec) uses distributed representations technically, as a way to represent lexical semantics. And at the same time it is conceptually based on distributional hypothesis: it works only because distributional hypothesis is true (word meanings do correlate with their typical contexts). But of course often the terms distributed and distributional are used interchangeably, increasing misunderstanding :)
H: Heat maps in R with more than 2 categorical variables I have a dataset which looks like- Variable value year Quarter Location A 48.235 2011 Q1 North B 65.444 2011 Q2 North C 77.453 2011 Q3 North D 44.678 2011 Q4 North A 88.542 2012 Q1 South B 66.566 2012 Q2 South C 55.443 2012 Q3 South D 78.990 2012 Q4 South Can anybody help me with a code to generate heatmap with more than 2 categorical variables? I need 'variable' and 'Location' to be on y axis and 'Quarter'&' year' to be on x axis. The shades of the data inside heat map should correspond to 'value'. AI: Here is one way of visualizing the data you have presented. However, I have used some liberties and assumptions to create this plot. First, create a year_quarter variable that simply concatenates the year and quarter to present the time on the X axis. The following piece of code can do this in R year_quarter = paste(dat$year, dat$Quarter, sep="-") Now, the dataset you have will look like: > dat Variable value year Quarter Location year_quarter 1 A 48.235 2011 Q1 North 2011-Q1 2 B 65.444 2011 Q2 North 2011-Q2 3 C 77.453 2011 Q3 North 2011-Q3 4 D 44.678 2011 Q4 North 2011-Q4 5 A 88.542 2012 Q1 South 2012-Q1 6 B 66.566 2012 Q2 South 2012-Q2 7 C 55.443 2012 Q3 South 2012-Q3 8 D 78.990 2012 Q4 South 2012-Q4 Finally, using ggplto2, you can create the plot such that the colour represents the value, the shape represents the Location and the size represents the Variable. This simple one liner can help you produce such a plot: p = ggplot(dat, aes(x = year_quarter, y = value, colour = value)) + geom_point(aes(shape = Location,size = Variable)) This is how the output plot looks like: Note that you can also add geom_line with interaction if you would like the lines to be connected based on Location and Variable.
H: What are the relationships/differences between Bias, Variance and Residuals? I've been trying to find an answer to this question for a long time. What are the relationships/differences between Bias, Variance and Residuals? I think I do understand Bias, Variance and Residuals as separate concepts. Correct me if I'm wrong - Bias is the difference between the average expected results from different runs of the model and the true values from data. Variance is the variability in the expected results (predictions) of a given data point between different runs of the model. Residual is the difference between the expected results from a model and the true values from data. y - y^ Residual seems somewhat similar to Bias. But is that my misunderstanding? Please explain to me in plain words and also simple equations the relationships or differences between these three. AI: I think your confusion arises from mixing two different kinds of terms together here. Bias and variance are general concepts which can be measured and quantified in a number of different ways. A residual is a specific measurement of the differences between a predicted value and a true value. Bias, loosely speaking, is how far away the average prediction is from the actual average. One way to measure it is in the difference of the means. You could also use difference of medians, difference in range, or several other calculations. To a get a complete picture of the bias of a model, you will want to look at several different measures. Variance, when used informally in data science, is a property of single sets (whether predictions or true values). The variance of a model, loosely speaking, is how far from the average prediction a randomly selected prediction will be. It's very often assessed using cross-validation. You construct multiple models using slightly different training sets but the same algorithm and tuning parameters. You then calculate an evaluation metric for each model, and calculate the standard deviation of this evaluation over all your models. This gives you a sense of the "stability" of a given algorithm/parameter set when exposed to different training and testing sets. (N.B. This can be confusing because there is a specific definition of "variance" used in statistics, $v = \sigma^2$. In data science, it's usually used more informally.) Residuals are a specific quantity associated with a single prediction/true value set pair. You've got the right definition there. This makes it not a general concept, but instead a measurement that you can use to assess either bias or variance. They're also frequently used in fitting regression models and otherwise performing gradient descent-style optimization. The mean or median of a residual set can be a way to assess bias, while the standard deviation of a residual set can be used to assess a variance.
H: Benchmark datasets for collaborative filtering I'd like to test a new algorithm for collaborative filtering. A typical use case is to recommend movies based on the preferences of users similar to the specific user. What are some common benchmark datasets that researchers often use to test their algorithms? I know that within Computer Vision people often use MNIST or CIFAR, but I haven't found similar datasets for collaborative filtering. AI: The obvious answer would be the Netflix Prize dataset; there is a lot of research into it and most CF algorithms have known scores on it. There are other available datasets that are usually used as benchmarks: MovieLens dataset: a 20 million ratings dataset used for benchmarking CF algorithms; Jester dataset: a joke recommendation dataset with more than 6 million ratings; You can find many more datasets at this link
H: From where should I start Machine Learning? I want to know how to start from scratch for Machine Learning. Also which language is best for implementing its algorithms or developing future applications based on it. Thanks! AI: I am a book person so I would recommend one of the following books: The Elements of Statistical Learning (Hastie, Tibshirani and Friedman). Pattern Recognition and Machine Learning (Bishop). The first book is available as a free download from the authors' website. You may download and start reading it. You will get an idea about your deficiencies. If it's too difficult, then you need to improve your statistics and linear algebra skills. For Linear Algebra I recommend: Linear Algebra and its Applications (David Lay). For statistics I like: Discovering Statistics (Andy Field). Stay away from the recipe books if your aims are long-term.
H: Understanding churn prediction model Hi! I have a dataset with a bunch of customer-behavior features and the output being "Churned"/"Not churned". I applied a simple Random Forest Classifier and got a nice performance. With this, I can predict whether or not a given user will churn. But I need to understand what the patterns are among churned users and among non-churned users. How could I achieve that? (Where I could present something like "Usually, users that churn do this, that and that") PS: No need for a full explanation, I'd be happy enough if you give me some directions on what to study so I can achieve this knowledge Many thanks in advance! AI: A couple of good options would be to look at a Feature/Variable importance plot for your RF model. Alternatively, depending on the model you could try extracting a couple of individual trees from the model and examining them. However, these methods wouldn't be definitive; i.e. determining what variables are strong predictors for churn does not mean that they have a causal impact on churn, and an individual tree may be biased and not representative of the aggregation output presented by the RF model. To determine causation, you could use these methods as a starting point to design a test.
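As an illustration of the feature-importance route, here is a small sketch I am adding (not code from the answer); it assumes scikit-learn, a pandas data frame with a 'churned' label column, and a hypothetical file name.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# df is assumed to hold the customer-behavior features plus a 'churned' label column
df = pd.read_csv("churn.csv")             # hypothetical file name
X = df.drop(columns=["churned"])
y = df["churned"]

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Rank features by how much they contribute to the forest's splits
importances = pd.Series(rf.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))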
H: Method for finding top-k cosine similarity based closest item on large dataset I have a dataset with 40 million items, where each item is a 400-dimensional double vector. What I want to do is to find the top-k (small k, about 3~10) most similar items to an arbitrary given input vector. The similarity measure is cosine similarity, since this dataset is based on word2vec representations. However, this data is so huge that it cannot fit in main memory (I'm currently working on a single machine). The goal I want to achieve is to find the top-k similar items as fast as possible, with a small memory footprint (~5 GB) that fits in RAM. Any suggestions for this problem? I've already tried PCA, but I observed that this data projected into a low dimension didn't work very well. AI: Locality sensitive hashing is a great tool for this problem. Pick n random 400-dimensional vectors. (Be careful or not all directions will be chosen with equal probability; pick each dimension as a standard Gaussian.) Each really defines a hyperplane through the origin cutting your space in half. The sign of the dot product of any of these vectors with some new vector tells you which side of the hyperplane it's on. So computing n dot products gives n 0/1 bits, which make an n-bit hash. Any new vector hashing to the same value must be in the same small sliver of space from the origin. And those are exactly the vectors that have a high cosine similarity with each other since their mutual angles are very small. Likewise anything hashing to almost the same value -- differing in a few bits -- is likely to be nearby. So you can restrict your search for most-similar vectors to things within one or more buckets of hashed candidate vectors. It doesn't help directly with memory since you might need any particular bucket to satisfy a request. You also lose some accuracy since there is no guarantee the most similar vectors lie in the buckets you examine (though it's likely, the more you examine). It mostly lets you trade accuracy for speed. However you may find you can get away with some caching scheme, where some buckets are rarely if ever accessed and so don't stay in memory. You can see an implementation of this in Oryx, which I think is pretty straightforward. Most of the complexity comes because it lets you specify a target percentage of vectors to evaluate, and works out the optimal hash size based on that and your machine's number of cores.
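A minimal NumPy sketch of the random-hyperplane hashing described above (my own illustration, not the Oryx implementation; the number of items and the hash length are placeholder values):

import numpy as np

rng = np.random.default_rng(0)
d, n_bits = 400, 16                           # vector dimension, hash length
items = rng.standard_normal((100000, d))      # stand-in for the 40M word2vec vectors

# Random hyperplanes through the origin; each sign bit says which side a vector falls on
planes = rng.standard_normal((n_bits, d))

def lsh_hash(vectors):
    return (vectors @ planes.T > 0).astype(np.uint8)   # shape (n, n_bits) of 0/1 bits

codes = lsh_hash(items)

def query(q, k=5):
    q_code = lsh_hash(q.reshape(1, -1))[0]
    # Candidates: items whose hash differs from the query's hash in at most 1 bit
    hamming = (codes != q_code).sum(axis=1)
    candidates = np.where(hamming <= 1)[0]
    sims = items[candidates] @ q / (np.linalg.norm(items[candidates], axis=1) * np.linalg.norm(q))
    return candidates[np.argsort(-sims)[:k]]

print(query(rng.standard_normal(d)))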
H: Features selection: should I mix features? I have sets of features of different nature (for example, 300 features from an FFT transform, 1000 categorical features and so on). However, there are only 900 samples and I'm trying to select important features using Lasso. So the question is: should I perform feature selection separately on each subset of features, or is it better to first concatenate the features and only then perform feature selection? AI: Let LASSO pick the best ones. If the features are highly correlated and you want them picked as a group, add some L2 regularization too. This is called Elastic Net regularization, and it is a generalization of L1 and L2 regularization. Other than that, do not feel obliged to artificially group features.
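A small scikit-learn sketch of that suggestion (my own illustration; the feature matrices below are random placeholders): concatenate all feature blocks once and let an L1/L2-penalised model select among them.

import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

# Placeholder blocks: 900 samples, FFT features and (already encoded) categorical features
X_fft = np.random.randn(900, 300)
X_cat = np.random.randint(0, 2, size=(900, 1000)).astype(float)
y = np.random.randn(900)

X = np.hstack([StandardScaler().fit_transform(X_fft), X_cat])   # concatenate once

# l1_ratio mixes L1 (sparsity) and L2 (grouping of correlated features)
model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8, 1.0], cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_ != 0)
print(len(selected), "features kept out of", X.shape[1])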
H: Handling underflow in a Gaussian Naive Bayes classifier I am implementing a Gaussian Naive Bayes classifier (so each feature is continuous and assumed to be coming from a Gaussian distribution). When evaluating the probability of a feature value in the test set, if the value is sufficiently far away from the mean (e.g. the mean and s.d. on the training data is say 0 and 1 but the test value is 10^10) then there is underflow. This is an issue because then the probability will be calculated as 0.0 so the log probability is undefined. Is there a standard way of handling underflow in this case? AI: The standard answer is to work in log space, and manipulate the log of probabilities instead of probabilities, for exactly this reason. This classifier involves products of probabilities which just become sums of log probabilities. You allude to that already, but the problem you suggest isn't a problem. Internally you don't calculate a probability and then take the log again. It stays in log space. So for very small P, log P is a large negative number while P itself may underflow to 0.0. But you may never need to evaluate P internally.
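A short sketch of the log-space idea (my addition, assuming SciPy): scipy.stats.norm.logpdf returns the log density directly, so even an extreme test value gives a large negative number instead of underflowing to 0.0.

import numpy as np
from scipy.stats import norm

mu, sigma = 0.0, 1.0
x = 1e10                                     # extreme test value

p = norm.pdf(x, loc=mu, scale=sigma)         # underflows to 0.0
log_p = norm.logpdf(x, loc=mu, scale=sigma)  # stays finite (a huge negative number)
print(p, log_p)

# Naive Bayes decision: sum the per-feature log densities plus the log prior, never exponentiate
features = np.array([0.3, -1.2, 1e10])
log_likelihood = norm.logpdf(features, loc=mu, scale=sigma).sum()
log_posterior_unnorm = np.log(0.5) + log_likelihood
print(log_posterior_unnorm)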
H: Data scientist light? I'm a 26-year-old guy with an MBA and I work as an ERP system administrator. I have been interested in the field of data science for a while now. I've always liked statistics and various analytical tasks. I would really like to try and work within this field, but it feels a bit overwhelming. People mention that you have to learn Python, R, SQL, Machine Learning, advanced algebra, data modeling, big data (Hadoop etc.), predictive analytics, various business intelligence tools, VBA, Matlab, etc. As of today, I have some SQL knowledge and have a general understanding of BI, big data, good Excel skills etc. I am willing to learn some of the aforementioned areas, but I don't have the money or time to go back to full-time university studies again. So here is my question: is there any recognized "light version" of data scientist on the market? What are they usually called? What skills do they need to master? What should I learn in order to work with big data sets and analytics without having to study full time for another 5 years? I live in Scandinavia so the job market is probably different here, but I thought it would be interesting to hear some answers. AI: Business intelligence is perfect for you; you already have the business background. If you want to become a bona fide data scientist, brush up on your computer science, linear algebra, and statistics. I consider these the bare essentials. I don't know about Scandinavia, but in the U.S., data science covers a broad spectrum of tasks ranging from full-time software development to full-time data analysis, often with domain expertise required in various niches, such as experimental design. You have to decide where your strengths and interests lie to pick a position on this spectrum, and prepare accordingly. Useful activities include participating in Kaggle competitions, and contributing to open source data science libraries.
H: Why doesn't AlphaGo expect its opponent to play the best possible moves? In the game won by Lee Sedol, AlphaGo was apparently surprised by a brilliant and unexpected move from Lee Sedol. After analysing the logs, the Deep Mind CEO said that AlphaGo had evaluated a 1/10000 probability for that specific move to be played by Lee Sedol. What I don't understand here is: whatever the probability for a good move to be played, why take the risk? Why not instead expect the opponent to always play the best moves? Of course it's always possible that you miss the best move your opponent could play when playing Monte Carlo to evaluate his possibilities, but here it seems that the move was found. If AlphaGo knew that its strategy could be countered by such a move, why not choose another strategy, where the worst case scenario would be less bad? AI: It appears that AlphaGo did not rate the move as a best possible move for Lee Sedol, just as one that was within its search space. To put this into context, the board is 19x19, so a 1 in 10000 chance of a move is much lower than the chance of the square being picked at random. That likely makes the move that it "found" not worth exploring much deeper. It is important to note too that the probabilities assigned to moves are equivalent to AlphaGo's rating for the quality of that move - i.e. AlphaGo predicted that this was a bad choice for its opponent. Another way of saying this is "there is a probability p that this move is the best possible one, and therefore worth investigating further". There is no separate quality rating - AlphaGo does not model "opponent's chance of making a move" separately from "opponent's chance of gaining the highest score from this position if he/she makes that move". There is just one probability covering both those meanings [1]. As I understand it, AlphaGo rates the probabilities of all possible moves at each game board state that it considers (starting with the current board), and employs the most search effort for deeper searches on the highest rated ones. I don't know the ratios or how many nodes are visited in a typical search, but expect that a 1 in 10000 rating would not have been explored in much detail if at all. It is not surprising to see the probability calculation in the system logs, as the logs likely contain the ratings for all legal next moves, as well as ratings for things that didn't actually happen in the game but AlphaGo considered in its deeper searches. It is also not surprising that AlphaGo failed to rate the move correctly. The neural network is not expected to be a perfect oracle that rates all moves perfectly (if it was, then there would be no need to search). In fact, the opposite could be said to be the case - it is surprising (and of course an amazing feat of engineering) just how good the predictions are, good enough to beat a world-class champion. This is not the same as solving the game though. Go remains "unsolved"; even if machines can beat humans, there is an unknown amount of additional room for better and better players - and in the immediate future that could be human or machine. [1] There are in fact two networks evaluating two different things - the "policy network" evaluates potential moves, and the output of that affects the Monte Carlo search. There is also a "value network" which assesses board states to score the end point of the search.
It is the policy network that predicted the low probability of the move, which meant that the search had little or no chance of exploring game states past Lee Sedol's move (if it had, maybe the value network would have detected a poor end result from playing that through). In reinforcement learning, a policy is a set of rules, based on the known state, that decides between the actions that an agent can take.
H: Use test data as train: does it make sense? There is a classification problem (two classes). We have train data, for which we know the class labels, and we have test data. Imagine that you have created a model that makes predictions with good accuracy (~95%) and we know that it is not overfitted. If we make predictions on the test data, extract the objects whose class label we are sure of (for example, predict_proba higher than 90%) and add these objects to the train data. Does this tactic make any sense? AI: This idea will most likely increase the bias in the model. Let's assume that the model has non-zero bias. In this case, when it assumes its predictions to be true, without confirmation from an Oracle as in active learning, the bias of the model increases. In common terms, if the model has some amount of bias in its predictions, and it uses its predictions to learn on, the bias in the model can only increase. This issue does not arise when there is 0 bias in the model to begin with, however, in that case, there is no need to learn any further! Note that this is a highly intuitive answer but I cannot think of an argument against the intuition :-) I will appreciate any feedback on this.
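For reference, the tactic being discussed (often called self-training or pseudo-labelling) looks roughly like this in scikit-learn; this is my own sketch of the idea, not a recommendation to use it, and it assumes X_train, y_train and X_test already exist.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

proba = clf.predict_proba(X_test)
confident = proba.max(axis=1) > 0.9                   # keep only the "sure" predictions
pseudo_labels = clf.classes_[proba.argmax(axis=1)]

X_aug = np.vstack([X_train, X_test[confident]])
y_aug = np.concatenate([y_train, pseudo_labels[confident]])

clf_retrained = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_aug, y_aug)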
H: Why is xgboost so much faster than sklearn GradientBoostingClassifier? I'm trying to train a gradient boosting model over 50k examples with 100 numeric features. XGBClassifier handles 500 trees within 43 seconds on my machine, while GradientBoostingClassifier handles only 10 trees(!) in 1 minute and 2 seconds :( I didn't bother trying to grow 500 trees as it will take hours. I'm using the same learning_rate and max_depth settings, see below. What makes XGBoost so much faster? Does it use some novel implementation for gradient boosting that the sklearn guys do not know? Or is it "cutting corners" and growing shallower trees? p.s. I'm aware of this discussion: https://www.kaggle.com/c/higgs-boson/forums/t/10335/xgboost-post-competition-survey but couldn't get the answer there... XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1, gamma=0, learning_rate=0.05, max_delta_step=0, max_depth=10, min_child_weight=1, missing=None, n_estimators=500, nthread=-1, objective='binary:logistic', reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=0, silent=True, subsample=1) GradientBoostingClassifier(init=None, learning_rate=0.05, loss='deviance', max_depth=10, max_features=None, max_leaf_nodes=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=10, presort='auto', random_state=None, subsample=1.0, verbose=0, warm_start=False) AI: Since you mention "numeric" features, I guess your features are not categorical and have a high arity (they can take a lot of different values, and thus there are a lot of possible split points). In such a case, growing trees is difficult since there are [a lot of features $\times$ a lot of split points] to evaluate. My guess is that the biggest effect comes from the fact that XGBoost uses an approximation on the split points. If you have a continuous feature with 10000 possible splits, XGBoost considers only "the best" 300 splits by default (this is a simplification). This behavior is controlled by the sketch_eps parameter, and you can read more about it in the doc. You can try lowering it and check the difference it makes. Since there is no mention of it in the scikit-learn documentation, I guess it is not available. You can learn more about the XGBoost method in their paper (arxiv). XGBoost also uses an approximation on the evaluation of such split points. I do not know by which criterion scikit-learn is evaluating the splits, but it could explain the rest of the time difference. Addressing Comments Regarding the evaluation of split points However, what did you mean by "XGBoost also uses an approximation on the evaluation of such split points"? As far as I understand, for the evaluation they are using the exact reduction in the optimal objective function, as it appears in eq (7) in the paper. In order to evaluate the split point, you would have to compute $L(y,H_{i-1}+h_i)$ where $L$ is the cost function, $y$ the target, $H_{i-1}$ the model built until now, and $h_i$ the current addition. Notice that this is not what XGBoost is doing; they are simplifying the cost function $L$ by a Taylor Expansion, which leads to a very simple function to compute. They have to compute the Gradient and the Hessian of $L$ with respect to $H_{i-1}$, and they can reuse those numbers for all potential splits at stage $i$, making the overall computation fast. You can check Loss function Approximation With Taylor Expansion (CrossValidated Q/A) for more details, or the derivation in their paper.
The point is that they have found a way to approximate $L(y,H_{i-1} + h_i)$ efficiently. If you were to evaluate $L$ fully, without insider knowledge allowing optimisation or avoidance of redundant computation, it would take more time per split. In this regard, it is an approximation. However, other gradient boosting implementations also use proxy cost functions to evaluate the splits, and I do not know whether the XGBoost approximation is quicker in this regard than the others.
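If you want to reproduce the gap yourself, a small benchmark sketch along the lines of the question (my own illustration; timings will vary with machine, data and library versions):

import time
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=50000, n_features=100, random_state=0)

for name, model in [("xgboost", XGBClassifier(n_estimators=100, max_depth=10, learning_rate=0.05)),
                    ("sklearn", GradientBoostingClassifier(n_estimators=10, max_depth=10, learning_rate=0.05))]:
    start = time.time()
    model.fit(X, y)                                    # compare wall-clock fitting time
    print(name, round(time.time() - start, 1), "seconds")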
H: New to ML and NLP. Is topic/subject extraction a good place to start? I'm new to AI and specifically NLP. I always love to study things as part of a random project I decide to build; right now I'm working on a chat bot and I was looking for an easy way (in terms of code) to extract the main topic/subject from a sentence (in English). The problem is that wherever I search, even the simple code works on the full sentence "meaning" (deriving parts of speech). I don't like to implement something I don't understand, so a small step towards extracting only the subject would be really good and could give my bot some direction. Is there simple code (I don't care what language it's written in) to extract only the main topic(s) from a sentence (in English)? AI: You could learn about word embeddings. These will provide you with a natural path to topic models, and many other NLP tasks. Look up word2vec. I think it is better not to learn two new subjects (NLP and ML) at once. I would start with ML on its own; take this class perhaps. A good place to start is linear regression, or binary classification.
H: Text Mining on Large Dataset I have a large data set(460 Mb) which has a column - Log with 386551 rows. I wish to use clustering and N-Gram approach to form word cloud. My code is as follows: library(readr) AMC <- read_csv("All Tickets.csv") Desc <- AMC[,4] #Very large data hence breaking it down for creating corpus #DataframeSource has been used insted of VectorSource is to be able to handle the data library(tm) docs_new <- data.frame(Desc) test1 <- docs_new[1:100000,] test2 <- docs_new[100001:200000,] test3 <- docs_new[200001:300000,] test4 <- docs_new[300001:386551,] test1 <- data.frame(test1) test1 <- Corpus(DataframeSource(test1)) test2 <- data.frame(test2) test2 <- Corpus(DataframeSource(test2)) test3 <- data.frame(test3) test3 <- Corpus(DataframeSource(test3)) test4 <- data.frame(test4) test4 <- Corpus(DataframeSource(test4)) # attach all the corpus docs_new <- c(test1,test2,test3,test4) docs_new <- tm_map(docs_new, tolower) docs_new <- tm_map(docs_new, removePunctuation) docs_new <- tm_map(docs_new, removeNumbers) docs_new <- tm_map(docs_new, removeWords, stopwords(kind = "en")) docs_new <- tm_map(docs_new, stripWhitespace) docs_new <- tm_map(docs_new, stemDocument) docs_new <- tm_map(docs_new, PlainTextDocument) #tokenizer for tdm with ngrams library(RWeka) options(mc.cores=1) BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max =2)) tdm <- TermDocumentMatrix(docs_new, control = list(tokenize = BigramTokenizer)) This is giving me results as follows: TermDocumentMatrix (terms: 1874071, documents: 386551)>> Non-/sparse entries: 17313767/724406705354 Sparsity : 100% Maximal term length: 733 Weighting : term frequency (tf) I then converted it to dgMatrix using following : library("Matrix") mat <- sparseMatrix(i=tdm$i, j=tdm$j, x=tdm$v, dims=c(tdm$nrow, tdm$ncol)) While trying to use the following I am getting memory size error: removeSparseTerms(tdm, 0.2) Please suggest further as I am new to Text Analytics. AI: You are using R, and everything that you are currently working on will be held in memory hence the error. You just don't have enough of it. You might be better off creating the frequencies of the terms from the original splits as opposed to creating one big file. Then after you have the frequencies adding them all together. Personally I use this code to create my wordclouds. 
## Clean code for a single text column of a data frame, in this case alltweets$text
library(tm)
library(wordcloud)
library(RColorBrewer)

x <- alltweets
tweets.text <- x$text

# strip handles, URLs and non-alphanumeric characters
tweets.text.cleaned <- gsub("@\\w+ *#", "", tweets.text)
tweets.text.cleaned <- gsub("(f|ht)tp(s?)://(.*)[.][a-z]+", "", tweets.text.cleaned)
tweets.text.cleaned <- gsub("[^0-9A-Za-z///' ]", "", tweets.text.cleaned)

# build the corpus from the cleaned text and normalise it
tweets.text.corpus <- Corpus(VectorSource(tweets.text.cleaned))
tweets.text.final <- tm_map(tweets.text.corpus, removePunctuation, mc.cores=1)
tweets.text.final2 <- tm_map(tweets.text.final, content_transformer(tolower), mc.cores=1)
tweets.text.final2 <- tm_map(tweets.text.final2, removeNumbers, mc.cores=1)
tweets.text.final2 <- tm_map(tweets.text.final2, removePunctuation, mc.cores=1)
tweets.text.final2 <- tm_map(tweets.text.final2, removeWords, stopwords("english"), mc.cores=1)
tweets.text.final2 <- tm_map(tweets.text.final2, removeWords, c("amp", "&"))

# generate the wordcloud from the cleaned corpus
wordcloud(tweets.text.final2, min.freq = 2, scale=c(7,0.5), colors=brewer.pal(8, "Dark2"), random.color = TRUE, random.order = FALSE, max.words = 500)
H: Designing a ConvNet to facilitate game playing For fun I want to design a convolutional neural net to recognize enemy NPCs in a first person shooter. I have captured 100 jpegs of the NPCs as well as 100 jpegs of not-NPCs. I have successfully trained a really simple ConvNet to identify NPCs. This was really easy because the game actually highlights the NPCs with a red marker to let humans identify them. Makes it SUPER easy for a machine learning algorithm to find them. Great, so now I can classify a screenshot of an NPC. The next step is to identify these in a data stream at 60 frames per second. We all know that the stupid little processors inside most cameras have a face detection algorithm that operates in real time. So my i7 with 2 NVIDIA gpus can do this no sweat. So now I have to grab the screen buffer, capture a screen shot, feed it to my ConvNet, get the location of the NPC, and then move the mouse cursor to the center of that NPC. Are there any easy-to-follow tutorials on running a convolutional neural net on a data stream like this? AI: I've recently started using OpenCV's Python implementation, and I found some good OpenCV tutorials on this website: http://www.pyimagesearch.com/ that I really liked. OpenCV allows you to do Haar Cascades for fast facial recognition (by default it doesn't use a convolutional neural network but an optimized implementation of AdaBoost that evaluates frames in stages for faster processing). OpenCV converts each frame into a multidimensional numpy tensor/matrix that you can then feed into your ML algorithm (e.g., in TensorFlow or some other library), although I think most people just use the built-in OpenCV face classifiers. In any case, I believe OpenCV can process up to 70 frames per second, so it should be fast enough for you. The original paper that invented Haar Cascades: https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf The OpenCV documentation that further explains Haar Cascades: http://docs.opencv.org/3.1.0/d7/d8b/tutorial_py_face_detection.html#gsc.tab=0
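A minimal OpenCV sketch of the real-time loop idea (my own addition, not from the linked tutorials): it uses the webcam and the bundled face cascade as stand-ins for a screen grab and an NPC detector, and assumes a recent opencv-python build that exposes cv2.data.haarcascades.

import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                # 0 = default camera; a screen-capture source would go here

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)   # box each detection
        # the centre (x + w//2, y + h//2) is where you would point the mouse
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()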
H: Building Customers/Patient Profiles I am looking for ideas on how to proceed with a situation. I have historical data of appointments for many users. I can (easily) predict their future behaviour (whether the next appointment will be positive or not). I now need to build profiles, i.e., classify a user as a "good" or "bad" user. So, I won't be predicting future appointments, but just say "this user is a good/bad user", and then compare with, e.g., a monthly behaviour. Any ideas where I could look for information on how to proceed? I apologize for my vague first question. I will try to be more clear. I have ~200k users, with their doctor appointment history. As demographics, I only have their age and gender. The rest of the variables relate to their appointment: time of the appointment, which doctor (service), day of the appointment, and so on. I also have whether they went or not to the appointment (show / no show). The assignment consists of classifying the users as "good" or "bad", i.e., whether they will go or not to the appointment. I do not have to predict if they will go to the next appointment, just to have a list with each user's classification. By doing this, if user A, whom I classified as "good", calls for an appointment, I know he is "good" and he will make it to the appointment. I do not have to take any action on him. Again, I do not have to predict future appointments, or the behavior of a new user, just to classify the existing ones. I hope now it is a bit more clear. If not, let me know. Thank you! I know perhaps the information I am giving is a bit vague, but I didn't know whether to go into full detail or not. If you want me to share more info, just let me know. Thanks! AI: Maybe I'm being a bit too simplistic, but I would build a set of training data that looks like this (Good=1 means patient showed up for appt and is good by your definition, 0 = bad) Recid, PatId, SurgeryId, DrId, DateAppt, TimeAppt, Gender, Age, Good 1, 1, 100, 10, 01jan16, 10:30, M, 31, 1 2, 1, 100, 12, 05jan16, 15:20, M, 31, 1 3, 1, 100, 10, 06mar16, 11:45, M, 31, 0 4, 2, 101, 15, 02Feb16, 12:35, F, 75, 1 .... I would then use one of the machine learning tools in R - there are a variety of them - to train a model on your data. Then with another set of data, I would test the model you have just built to see how correct it is. If you don't have a second set of data, then randomly partition your original training set and only train with half of the data. Some further suggestions to make your model more powerful: create additional variables with the following information: a Flag to indicate if a person has missed a previous appointment with any doctor. a Flag to indicate if a person has missed a previous appointment with the particular doctor they are going to be visiting - I know from personal experience there are some doctors in my surgery that I have a preference to see. The number of days since the last appointment Day of week of appointment Sounds like a nice dataset to be working with!
H: Neural Networks : Can I use both sigmoid and tanh as activation functions? In a neural network architecture can I use the sigmoid function in some layers and the tanh function in the others? Is it a good choice? AI: Yes you can. There are no hard rules against having different activation functions in any layer, and combining these two types should give no numerical difficulties. In fact it can be a good choice to have tanh in hidden layers and sigmoid on the last layer, if your goal is to predict membership of a single class or non-exclusive multiple class probabilities. The sigmoid output lends itself well to predicting an independent probability (using e.g. a logloss (aka cross-entropy) objective function). Whether or not it is better than using sigmoid on all layers will depend on other features of your network, the data and the problem you are trying to solve. Usually the best way to find out which is better - at least in terms of accuracy - is to try out some variations and see which scores best on a cross-validation data set. In my experience, there often is a small difference between using tanh or sigmoid in the hidden layers.
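As a small illustration (my own sketch, assuming Keras via tensorflow.keras): a network with tanh in the hidden layers and a sigmoid output, trained with binary cross-entropy.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(64, activation="tanh", input_shape=(20,)),   # hidden layers use tanh
    Dense(32, activation="tanh"),
    Dense(1, activation="sigmoid"),                    # output is an independent probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10, batch_size=32)   # X_train, y_train assumed to exist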
H: Using Diebold-Mariano test I've got predicted results from two different types of neural networks. Now I would like to run significance testing on both of the results to prove that they do not have equal predictive accuracy. I've learnt that the only tool in the game for this is Diebold-Mariano test. What tool I can use to run this testing (Matlab? R?) AI: So you want to do a Diebold-Mariano test eh? How about the Diebold-Mariano test dm.test function in the forecast package of R? dm.test {forecast} Diebold-Mariano test for predictive accuracy Package: forecast Version: 6.2 Description The Diebold-Mariano test compares the forecast accuracy of two forecast methods. Usage dm.test(e1, e2, alternative=c("two.sided","less","greater"), h=1, power=2) (Took me ten seconds to find)
H: Impulse Response Function - Negative Shocks on R I have two questions on how to produce impulse responses using R: (1) Impulse responses to a negative shock in the independent variable (money supply) (2) Impulse responses at 2 standard deviations The code I used to generate the impulse responses to a positive shock at 1 standard deviation is the following: m1 <- read.csv("m1.csv", header=T) m1 varm1 <- VAR(m1, p=8, type="cons") irfm1 <- irf(varm1, impulse="m1", response= c("gdp"), boot = FALSE) plot(irfm1) irfm1 AI: Here is a simple example that should work: library(vars) data("Canada") var.2c=VAR(Canada,p=2,type="const") # 1 SD impulse response function irf.rw.e=irf(var.2c,impulse="rw",response=c("e")) # gamma is the number of standard deviations for the irf gamma=-0.25 irf.rw.e_gamma = irf.rw.e n=length(irf.rw.e_gamma$irf$rw) for(i in 1:n){ irf.rw.e_gamma$irf$rw[i] = irf.rw.e_gamma$irf$rw[i]*gamma irf.rw.e_gamma$Lower$rw[i] = irf.rw.e_gamma$Lower$rw[i]*gamma irf.rw.e_gamma$Upper$rw[i] = irf.rw.e_gamma$Upper$rw[i]*gamma } plot(irf.rw.e) plot(irf.rw.e_gamma) Source
H: Word analysis in Python I have a list of documents which look like this: ["Display is flickering"] ["Battery charger is broken"] ["Hard disk is making noises"] These text documents are just free text. I have processed the text with tokenization, lemmatization and stop-word removal, and now I want to assign tags based on a list of words. Example: {"#display":["display","screen","lcd","led"]} {"#battery":["battery","power cord","charger","drains"]} {"#hard disk":["hard disk","performance","slow"]} After text normalization I have: ["Display is flickering"] -> ["display","flicker"] What technique is recommended to compare document: ["display","flicker"] with my dictionary of words and return which value matches the best? In this case I would like: ["display","flicker"] = "#display":"display" ["battery","charger","broke"] = "#battery":"charger" Basically it compares Document A in tokens with a list B of other documents and returns which document in list B has the most matches in common. I'm using TF, but want to know if there are other techniques, code samples to use. AI: You can use word embedding in order to compare whole phrases. I am aware of two models: Google's word2vec and Stanford's GloVe. Now, word embedding works best with, well - words. However, you could concatenate every word in your phrase and re-train the models. Afterwards, you could calculate their similarity (say, with cosine similarity) and see how similar your whole phrases are semantically. Hope this helps.
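A simple baseline for the matching step itself (my own sketch, independent of the embedding suggestion): treat each tag's word list as a small document, vectorise everything, and pick the tag with the highest cosine similarity.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tags = {
    "#display": ["display", "screen", "lcd", "led"],
    "#battery": ["battery", "power cord", "charger", "drains"],
    "#hard disk": ["hard disk", "performance", "slow"],
}
tag_names = list(tags)
tag_docs = [" ".join(words) for words in tags.values()]

docs = ["display flicker", "battery charger broke"]     # normalised input documents

vec = CountVectorizer().fit(tag_docs + docs)
tag_matrix = vec.transform(tag_docs)

for doc in docs:
    sims = cosine_similarity(vec.transform([doc]), tag_matrix)[0]
    best = sims.argmax()
    print(doc, "->", tag_names[best], round(float(sims[best]), 2))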
H: Apply function on every four rows Let's say I have a dataset like this: X <- matrix(rnorm(30), nrow=100, ncol=6). I am trying to find a way to apply the sum function to every four rows of column 3. This means I want to sum rows 1-4, rows 5-8 and so on. Are there any functions such as apply or lapply I could use? AI: You could use tapply or aggregate: set.seed(1) X <- matrix(rnorm(30), nrow=100, ncol=6) id <- ceiling(seq_len(nrow(X))/4) tapply(X[, 3], id, sum) # 1 2 3 4 5 6 7 8 9 10 # -0.2137 -1.0629 -0.5030 0.2687 1.4961 -0.9343 2.0076 3.1162 -1.3511 -1.6868 aggregate(X[, 3], by=list(id), FUN=sum) # Group.1 x # 1 1 -0.2137 # 2 2 -1.0629 # 3 3 -0.5030 # 4 4 0.2687 # 5 5 1.4961 # 6 6 -0.9343 # 7 7 2.0076 # 8 8 3.1162 # ... # check results sum(X[1:4, 3]) # [1] -0.2137 sum(X[5:8, 3]) # [1] -1.063
H: What does the activation of a neuron mean? In a neural network, each neuron will have its activation. But what does the activation mean? Is it nothing but a temporary value used to produce the final result, or does it have something to do with our understanding of the problem? For example, considering a neural network to recognize a handwritten character from a picture, I wonder if it's possible that a neuron's activation represents how well the pixels match a specific value in a small part of the picture? AI: The activation of a neuron is mathematically nothing but a function of its input. Consider a neural network with one hidden layer and one input vector $\mathbf{x}$. The input $a_j$ of neuron $j$ can be written as: $a_j = w_j^T \mathbf{x} + b_j$. The activation of neuron $j$ is then a transformation $g: \mathbb{R} \rightarrow \mathbb{R}$ of the input. For example, one can use the sigmoid activation function $\mathrm{sig}(a_j) = \dfrac{1}{1 + \exp(-a_j)}$. The weights $w$ are chosen to minimize a loss function and not for the sake of their interpretation. Nevertheless, one can try to interpret the activation of a neuron as an internal representation of the input. One of the nicest examples I've seen comes from Rumelhart et al. (1986), Figure 4. In that paper, two family trees were fed into a neural network. The family trees represented an Italian and an English speaking family comprising three generations. Among other things, when activating one name of the tree, the neuronal activities represented whether the person was from the English or Italian tree and which generation the person belonged to.
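To make the formula concrete, a tiny NumPy sketch (my addition; the weights and input are arbitrary numbers):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x = np.array([0.5, -1.0, 2.0])    # input vector
w = np.array([0.1, 0.4, -0.3])    # weights of neuron j
b = 0.2                           # bias of neuron j

a_j = w @ x + b                   # input (pre-activation) of neuron j
z_j = sigmoid(a_j)                # activation of neuron j
print(a_j, z_j)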
H: What is the depth of an image in Convolutional Neural Network? I am learning cs231n Convolutional Neural Networks for Visual Recognition. The lecture notes introduce the concepts of width, height, depth. For example, In CIFAR-10, images are only of size 32x32x3 (32 wide, 32 high, 3 color channels) However, in another example, a volume of size [55x55x96] has 96 depth slices, each of size [55x55] What does the 96 mean? Does it mean 96 color channels? Why can we have more than 3 color channels? AI: It means that the number of filters (a.k.a. kernels, or feature detectors) in the previous convolutional layer is 96. You may want to watch the video of the lecture, and in particular this slide, which mentions that a filter is applied to the full depth of your previous layer.
H: Importance of feature selection for boosting methods While it is clear that features can be ranked on the basis of importance, and many machine learning books give examples of how to do so with random forests, it's not very clear on which occasions one should do so. In particular, for boosting methods, is there any reason why one should do feature selection? Wouldn't the boosting methods themselves eliminate the low-importance features? Isn't it just always better to add more features (if one didn't have the practical problem of time limitations)? AI: There is a difference between boosting and feature selection. It is very important to understand that the original boosting and bagging algorithms have been modified and augmented with many feature selection and/or data sampling (over/down/synthetic) techniques to improve accuracy. Let us talk about the difference between bagging and boosting: Both of them are random-subspace-based algorithms. The difference is that in bagging we use a uniform distribution and all the samples have the same weight, while in boosting we use a non-uniform distribution; during training the distribution is modified and difficult samples get higher probability. The second difference is the voting. In bagging it is average voting, in boosting it is a weighted voting. Feature selection algorithms try to find the best set of features that can separate the classes. But there is no explicit consideration for difficult or easy samples, or for which training algorithm is used. In boosting, the algorithm selects the feature that minimizes the error, where the error is the sum of the probability "weights" of the samples that are misclassified; since the difficult samples have higher weights, the selected feature will be the one that best distinguishes between the difficult samples. FE (Features, data) --> feature set Boosting (features, data, base learner type, initial distribution, difficult samples) --> feature set
H: Why aren't languages like C, C++ used for data analytics instead of R, Python? I have started learning data science using R, however I have C++ as a subject this semester, and my project is to predict the outcome of a game using C++. I have not come across many instances (close to none, I did find libraries like Shark though) of implementation in C++. Is it to do with the fact that C++ isn't as simple to use when it comes to manipulating large amount of data? AI: Yes, you're correct -- it's that C and C++ are harder to use and are more burdened with boilerplate code that obfuscates your model building logic. When you build models, you have to iterate rapidly and frequently, often throwing away a lot of your code. Having to write boilerplate code each time substantially slows you down over the long run. Using R's caret package or Python's scikit-learn library, I can train a model in just 5-10 lines of code. Ecosystem also plays a big role. For example, Ruby is easy to use, but the community has never really seen a need for machine learning libraries to the extent that Python's community has. R is more widely used than Python (for stats and machine learning only) because of the strength of its ecosystem and its long history catering to that need. It's worth pointing out that most of these R and Python libraries are written in low-level languages like C or Fortran for their speed. For example, I believe Google's TensorFlow is built with C, but to make things easier for end users, its API is in Python.
H: Cross validation when training neural network? The standard setup when training a neural network seems to be to split the data into train and test sets, and keep running until the scores stop improving on the test set. Now, the problem: there is a certain amount of noise in the test scores, so the single best score may not correspond to the state of the network which is most likely to be best on new data. I've seen a few papers point to a specific epoch or iteration in the training as being "best by cross-validation" but I have no idea how that is determined (and the papers do not provide any details). The "best by cross-validation" point is not the one with the best test score. How would one go about doing this type of cross validation? Would it be by doing k-fold on the test set? Okay, that gives k different test scores instead of one, but then what? AI: I couldn't say what the authors refer to by best by cross-validation but I'll mention a simple and general procedure that's out there: You are correct that analyzing one estimate of the generalization performance using one training and one test set is quite simplistic. Cross-validation can help us understand how this performance varies across datasets, instead of wonder whether we got lucky/unlucky with our choice of train/test datasets. Split the whole dataset into k folds (or partitions), and train/test the model k times using different folds. When you're done, you can compute the mean performance and a variance that will be of utmost importance assessing confidence in the generalization performance estimate.
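A minimal scikit-learn sketch of that procedure (my own illustration; the estimator and data are placeholders): the spread of the fold scores is what tells you how much to trust the single mean number.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("fold scores:", np.round(scores, 3))
print("mean:", scores.mean(), "+/-", scores.std())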
H: Supervised learning vs reinforcement learning for a simple self driving rc car I'm building a remote-controlled self driving car for fun. I'm using a Raspberry Pi as the onboard computer; and I'm using various plug-ins, such as a Raspberry Pi camera and distance sensors, for feedback on the car's surroundings. I'm using OpenCV to turn the video frames into tensors, and I'm using Google's TensorFlow to build a convolutional neural network to learn road boundaries and obstacles. My main question is, should I use supervised learning to teach the car to drive or should I provide objectives and penalties and do reinforcement learning (i.e., get to point B as fast as possible while not hitting anything and staying within the road boundaries)? Below is a list of the pros and cons that I've come up with. Supervised learning pros: The inputs to the learning algorithm are pretty straightforward. The car learns to associate the video frame tensor and sensor distance readings with forward, backward, and angular wheel displacement I can more or less teach the car to drive exactly how I want (without overfitting, of course) I've done tons of supervised learning problems before, and this approach seems to comfortably fit my existing skill set Supervised learning cons: It's not clear how to teach speed, and the correct speed is pretty arbitrary as long as the car doesn't go so fast that it veers off the road. I suppose I could drive fast during training, but this seems like a crude approach. Maybe I could manually add in a constant variable during training that corresponds to the speed for that training session, and then when the learning algorithm is deployed, I set this variable according to the speed I want? Reinforcement learning pros: If I build my car with the specific purpose of racing other people's self driving cars, reinforcement learning seems to be the natural way to tell my car to "get there as fast as possible" I've read that RL is sometimes used for autonomous drones, so in theory it should be easier in cars because I don't have to worry about up and down Reinforcement learning cons: I feel like reinforcement learning would require a lot of additional sensors, and frankly my foot-long car doesn't have that much space inside considering that it also needs to fit a battery, the Raspberry Pi, and a breadboard The car will behave very erratically at first, so much so that maybe it destroys itself. It might also take an unreasonably long time to learn (e.g., months or years) I can't incorporate explicit rules later on, e.g., stop at a toy red-light. With supervised learning, I could incorporate numerous SL algorithms (e.g., a Haar Cascade classifier for identifying stoplights) into a configurable rules engine that gets evaluated between each video frame. The rules engine would thus be able to override the driving SL algorithm if it saw a red stoplight even though the stoplight might not have been part of the training of the driving algorithm. RL seems too continuous to do this (i.e., stop only at the terminal state) I don't have a lot of experience with applied reinforcement learning, although I definitely want to learn it regardless AI: I'd suggest you try a hybrid approach: First, train your car in a supervised fashion by demonstration. Just control it and use your commands as labels. This will let you get all the pros of SL. Then, fine-tune your neural net using reinforcement learning.
You don't need extra sensors for that: the rewards may be obtained from the distance sensors (larger distances = better) and from the speed itself. This will give you the pros of RL and train your NN towards the correct goal of driving fast while avoiding obstacles, instead of the goal of imitating you. Combining both approaches will get you the pros of both SL and RL while avoiding their cons. RL won't start from random behavior, just small gradual deviations from what you taught the NN. A similar approach was applied successfully by Google DeepMind with AlphaGo. You can always include explicit rules on top of this. Implement them with high priority and call the NN only when there is no explicit rule for the current situation. This is reminiscent of the Subsumption Architecture.
H: Logbook: Machine Learning approaches In the past, when trying different machine learning algorithms in order to solve a problem, I used to write down the set of approaches in a notebook, keeping details such as features, feature preprocessing, normalization, algorithms, algorithm parameters... therefore building a hand-written logbook. However, I'm now considering using a 'more professional' tool, so that I can keep more details and even share it with other team members, who are also able to record their approaches. An automated and collaborative tool that keeps track of the work done would be great, considering details like: features, algorithms, algorithm parameters, data pre-processing, data, metrics... beyond a collaborative Google Drive spreadsheet, for instance. How are you solving this? How are you keeping track of the work done? What's your logbook tool? Thank you very much in advance. AI: How are you solving this? How are you keeping track of the work done? What's your logbook tool? This might not be the best approach. But, this is how my team does it. We believe that for pulling off an end-to-end data science experiment, proper communication is very important. So we use Slack for our discussions and meetings. In addition, we have Rmd (R markdown) files for documenting the planning and the analysis parts.
H: How would I map categories between two similar but different sets of categories Having multiple sets of categories from different listings sites (e.g. Yelp, yellowpages.com, Google My Business...). I want to figure out what category X on one site is on another site. We have hundreds of thousands of businesses and the categories on all the sites they are on, so we could see that "Galbi Foo Restaurant" is in category "Restaurants > Korean" on one site and "Restaurants" on the other. Some examples category mappings that will have to happen: Nail Salons = Manicure & Pedicure Eyelash Service = Visagist Tanning = Sunbed Salon Specialty Food = Grocery (Specialty Food child node doesn't exist) Diagnostic Imaging = Radiologist Where would I start to solve this? It seems like a classification (logistic regression) problem. But this ML stuff hasn't clicked with me yet, so I'm likely to drastically over or under complicate these things :). AI: This sounds like a pretty standard supervised learning problem. In this case, your records would be businesses on site X and their actual category on site Z. Your predictors would be tags/categories for a particular business on site X, and your target variable, y (i.e., what you're trying to predict), would be the category on the other website. As far as the code goes, you have a variety of options depending on your preferred language. You could use the caret package in R, the scikit-learn library in Python, or the Weka library (maybe even Spark's ML lib because of its simplicity) in Java/Scala. Side note, in your question I think you meant to say "logistic regression" instead of "logical regression". You don't need to use logistic regression (although it wouldn't hurt). You could also try algorithms like Random Forests or Naive Bayes. Also worth noting: your target variable will have many classes (ie every possible category for the site you're trying to predict), so don't get alarmed if it seems like there are a lot of classes. That's normal for a problem like the one you've described.
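A minimal sketch of that setup (my own illustration, assuming scikit-learn): the predictor is the category string on site X, the target is the category on the other site, and a simple bag-of-characters classifier is a reasonable first baseline. The training pairs below are toy examples taken from the question.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (category path on site X, category on site Z)
site_x = ["Restaurants > Korean", "Nail Salons", "Eyelash Service", "Tanning", "Diagnostic Imaging"]
site_z = ["Restaurants", "Manicure & Pedicure", "Visagist", "Sunbed Salon", "Radiologist"]

# Character n-grams cope with small wording differences between sites
model = make_pipeline(CountVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
                      LogisticRegression(max_iter=1000))
model.fit(site_x, site_z)

print(model.predict(["Korean Restaurant", "Tanning Salon"]))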
H: Understanding ConfusionMatrix for Google Prediction API I'm trying to analyze my training model, Google Prediction API provides analyze method to get insights for the model. Currently I want to improve confidence levels, for my predictions, I haven't found any documentation in how to read this ConfusionMatrix, any insights will be great: { "kind": "prediction#analyze", "id": "modelX", "selfLink": "https://www.googleapis.com/prediction/v1.6/projects/projectX/trainedmodels/modelX/analyze", "dataDescription": { "outputFeature": { "text": [ { "value": "labelA", "count": "681" }, { "value": "labelB", "count": "127" }, { "value": "labelC", "count": "814" }, { "value": "labelD", "count": "427" } ] }, "features": [ { "index": "0", "text": { "count": "2049" } } ] }, "modelDescription": { "modelinfo": { "kind": "prediction#training" }, "confusionMatrix": { "labelA": { "labelA": "14.17", "labelB": "0.17", "labelC": "3.83", "labelD": "0.67" }, "labelB": { "labelA": "0.50", "labelB": "2.00", "labelC": "1.33", "labelD": "0.00" }, "labelC": { "labelA": "1.17", "labelB": "0.00", "labelC": "70.00", "labelD": "3.50" }, "labelD": { "labelA": "1.17", "labelB": "0.17", "labelC": "4.17", "labelD": "12.17" } }, "confusionMatrixRowTotals": { "labelA": "18.83", "labelB": "3.83", "labelC": "74.67", "labelD": "17.67" } } } AI: As explained in the documentation: This shows an estimate for how this model will do in predictions. This is first indexed by the true class label. For each true class label, this provides a pair {predicted_label, count}, where count is the estimated number of times the model will predict the predicted label given the true label. If you are not sure what a confusion matrix, see Wikipedia, where the "actual class" refers to the same thing as the "true class" in the Google documentation.
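If it helps to read the matrix programmatically, here is a small sketch (my own addition) that turns the confusionMatrix block into per-class recall, i.e. the estimated fraction of each true label that the model gets right:

confusion = {
    "labelA": {"labelA": 14.17, "labelB": 0.17, "labelC": 3.83, "labelD": 0.67},
    "labelB": {"labelA": 0.50, "labelB": 2.00, "labelC": 1.33, "labelD": 0.00},
    "labelC": {"labelA": 1.17, "labelB": 0.00, "labelC": 70.00, "labelD": 3.50},
    "labelD": {"labelA": 1.17, "labelB": 0.17, "labelC": 4.17, "labelD": 12.17},
}

for true_label, predicted in confusion.items():
    total = sum(predicted.values())          # matches confusionMatrixRowTotals (up to rounding)
    recall = predicted[true_label] / total   # diagonal entry / row total
    print(true_label, "recall:", round(recall, 2))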
H: What methods exist for distance calculation in clustering? When should we use each of them? What methods exist for distance calculation in clustering? For example Manhattan, Euclidean, etc.? Plus, I don't know when I should use them. I always use Euclidean distance. AI: Well, there is a book called Deza, Michel Marie, and Elena Deza. Encyclopedia of distances. Springer Berlin Heidelberg, 2009. ISBN 978-3-642-00233-5 I guess that book answers your question better than I can... Choose the distance function most appropriate for your data. For example, on latitude and longitude, use a distance like Haversine. If you have enough CPU, you can use better approximations such as Vincenty's. On histograms, use a distribution-based distance: Earth mover's (EMD), divergences, histogram intersection, quadratic form distances, etc. On binary data, for example Jaccard, Dice, or Hamming make a lot of sense. On non-binary sparse data, such as text, various variants of tf-idf weights and cosine are popular. Probably the best tool to experiment with different distance functions and clustering is ELKI. It has many, many distances, and many clustering algorithms that can be used with all of these distances (e.g. OPTICS). For example Canberra distance worked very well for me. That is probably what I would choose as "default".
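If you also want to experiment quickly in code, SciPy exposes most of the common metrics (a small sketch I am adding; the vectors are arbitrary):

import numpy as np
from scipy.spatial import distance

a = np.array([1.0, 0.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 2.0, 1.0])

print(distance.euclidean(a, b))   # straight-line distance
print(distance.cityblock(a, b))   # Manhattan
print(distance.cosine(a, b))      # 1 - cosine similarity, popular for text vectors
print(distance.canberra(a, b))    # weighted Manhattan, sensitive near zero

u = np.array([1, 0, 1, 1], dtype=bool)
v = np.array([1, 1, 0, 1], dtype=bool)
print(distance.jaccard(u, v))     # for binary data
print(distance.hamming(u, v))     # fraction of positions that differ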
H: How to create visualisation on medical data? I want to create a data visualization on medical data, i.e. a patient's medical history, allergies one is having, any chronic patient's name must be highlighted, etc. And there can be a separate visualisation for the medicines, which shows medicines' availability; their expiry date can be highlighted by using mouse-hover functionality. It will be useful for the doctors and the medical staff as well. Problems: bl.ocks.org has a variety of base visualization models to work upon, but the schema of the database I have (given by my mentor) is a bit confusing. Also, he won't provide us with the actual data of our college due to its confidential nature, so can we get dummy data on the net? Experience: I have done work on Weka earlier to understand about data mining, and many different algorithms and classifiers. I have built a generic data visualization model (for hierarchical data) using D3.js I have good working knowledge of Python. So, if anyone could please tell whether there is any hope for doing it (the data visualisation) in Python. AI: If you have good working knowledge of Python then you are good to go. D3.js actually has a Python counterpart called Bokeh. Your question has a lot of ramifications and does not fully explain what you are trying to achieve, so let's go by parts. Some 2D Scientific Plotting Python Libraries: Matplotlib ggplot Bokeh Chaco pyQtGraph (This one has some interesting features in volume slicing if it suits you) Some UI development Python libraries: PyQt5 PyQt4 (there are significant changes to PyQt5, thus the suggestion) PySide (also a Qt port) wxPython Some Python numerical libraries: Pandas Numpy Scipy Scikit-learn (You can do some serious data analysis with this) Using combinations of the above you can build some very powerful software solutions. Any of the UI libraries suggested has spreadsheet widgets. There are libraries to read csv and excel, among others (JSON, HDF5, etc.). The data problem can be resolved by either building your own randomized data or by using resources on the net like this or this.
H: What is most likely the bare minimum knowledge one has to have to become data scientist? I am a Python developer but I want to become a data scientist. My Question: At its core what is the bare minimum I need to have to make this transition? I know it cannot simply be that I need to learn Numpy and Pandas. My Thoughts: I am hoping to frame my question with the following three perspectives in mind and am trying to answer what is essentially needed for each category: Technical: Analytics Technical: Computer Science Non-Technical: Soft Skills Any help would be appreciated. :) AI: From personal experience (so take into consider that I might not be representative although I'm probably not that far away too) the people that approached me with a job offer for Data Scientist did so because: 1) Considerable knowledge in one or more programming language typically used for data analysis. In my case Python. 2) Knowledge in applied mathematics (usually they don't even care about the base field). You just have to know how to interpret data and take valid conclusions from it (as a starting point at least). 3) Past experience with libraries such as numpy, scipy, scikit-learn (very relevant), scikit-image (if you are going to do image analysis also), pandas. 4) Past experience with data visualization libraries such as matplotlib, seaborn, Chaco, ggplot, pyQtGraph, Bokeh, etc. 5) Knowledge about regression techniques, clustering, and classification. 6) Valid extras depending on the field are typical applied mathematics in space estimation, image analysis and processing and computer vision, 3D visualization . 7) If you already have experience in building scientific software solutions using those programming languages, it might be a great advantage. With point 7) in mind you might consider looking at PyQt5 and wxPython. 8) Ideally you are also able to present your results to an assistance that is not necessarily made of scientists only (I advise lots of illustrations..., actually, now that I think about it, lots of illustrations even if it's only scientists). So this takes some skill into building appropriate diagrams and figures (see vector graphics software such as Inkscape, together with plotting libraries it can make wonders). 9) Last but not least quite a bit of flexibility (this is common for scientific and development staff). Sometimes you need to change your technology and this takes some learning. Notice that my experience does not say much in terms of web development per se. Mine is a scientific background with very little of web development so people that approach me, do so with this in mind. Other fields might request for different skills (and by the way, you don't need to be a web developer to deal with web data).
H: Connecting Authors with Published Papers I'm specifically interested in tying doctors to their published papers. The key issue is that using name alone will result many collisions. I'm wondering what set of features I would need to reliably connect a doctor with a given published paper? Aside from weak features such as specialty, is there any database that has a link between doctor NPI and papers published? I've seen this on linking NPIs to authors in PubMed but it seems rather unreliable. AI: It's not an easy question to answer even for groups with a lot of leverage such as Research Gate (RG). RG has it's own (I assume proprietary) author matching algorithm which has caused problems in the past. They use the name of the author (in different combinations) to suggest authorship to RG users (so has you said, it does causes a lot of problems). Every once in a while users that are not the author accept the suggestions and, from the portal point of view, gain the equivalent reputation. It's a serious business that requires quite a bit of R&D before making a decision. That being said I can't answer with certainties only with reasonable possibilities. A few questions I would make (and hope to answer with a bit of data analysis): What is the probability that an author with a confirmed publication in an identified journal will publish in that journal again? - Journal Name What is the probability that an author that has partnered with other authors will repeat the same co-authoring combination? Co-author Names What is the expected time story for publishing for each author? Authors rarely publish articles 20 years apart. Typically they publish more and more, or less and less. Time frame How frequently do authors change the institution they belong to? Institution Name What are the preferred keywords for a given author? Key Names What are the preferred citations made by a given author? Bibliography All of the questions above require quite a bit of Text Mining and String Matching, as well as solid dataset to start your mining. Some publishers have their own API although I can't say much about permissiveness (never tried it myself). RG has been promising one for years but, as far as I know, it still does not exist. An unlikely thing I remember now is the inspiring story of Aaron Swartz. This activist, along with other persons, successfully managed to create large open archives for books and articles. Should that information still exist it might be worth your time to take a look there. Also if you have a list of the journals you are considering (you've mentioned only "doctors" which is a bit vague) you can try and see with the publisher if they have any way of accessing their database.
H: Package that is similar to R's caret? Recently I stumbled upon an R package that works similarly to caret, but I can't remember what its name was and I can't find it. It seemed to be less well known, but at least as extensive. Someone able to help? Edit: I am searching for this other package to widen my horizon. There is no specific other reason. This question is specific, however, since I am not looking for a general recommendation; I am looking for this specific package I have in mind. I will recognize the package once I read its functions and descriptions. I had searched for a couple of hours myself and did not find it by searching for "package similar to caret". That is why I need help. AI: MLR is similar to caret; it offers a high-level interface to various statistical and machine learning packages. According to the package description: Interface to a large number of classification and regression techniques, including machine-readable parameter descriptions. There is also an experimental extension for survival analysis, clustering and general, example-specific cost-sensitive learning. Generic resampling, including cross-validation, bootstrapping and subsampling. Hyperparameter tuning with modern optimization techniques, for single- and multi-objective problems. Filter and wrapper methods for feature selection. Extension of basic learners with additional operations common in machine learning, also allowing for easy nested resampling. Most operations can be parallelized. Please check the Reference and Homepage.
H: Algorithm Suggestion For a Specific Problem I'm working on a problem where in I have some data sets about some power generating units. Each of these units have been activated to run in the past and while activation, some units went into some issues. I now have all these data and I would like to come up with some sort of Ranking for these generating units. The criteria for ranking would be pretty simple to start with. They are: Maximum number of times a particular generating unit was activated How many times did the generating unit ran into problems during activation Later on I would expand on this ranking algorithm by adding more criteria. I will be using Apache Spark MLIB library and I can already see that there are quite a few algorithms already in place. http://spark.apache.org/docs/latest/mllib-guide.html I'm just not sure which algorithm would fit my purpose. Any suggestions? AI: You can use a clustering algorithm such as k-means to divide the generators into groups. You never know what kind of groups you'll get until you try it. Try and assess the character of each group of generators as you increase the number of clusters. At some point you should find a meaningful division of generators. The inputs to your k-means algorithm will be the criteria you mentioned in your post: the number of times it was activated, the number of activation problems, and so forth. When you are finished, the group a generator belongs to is its ranking. This method will not generate a ranking of 1-1000 if you have 1000 generators. Rather it will give you, for example with k=3: a group of 243 outstanding generators, 320 average generators, and 446 terrible generators.
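If you want to stay inside Spark, a minimal PySpark sketch of that k-means idea could look like the following. Everything here is an assumption for illustration: a Spark 2.x SparkSession available as spark, made-up unit IDs, and two criteria columns named activations and failures (in practice you would also scale the features, e.g. with pyspark.ml.feature.StandardScaler, before clustering).

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

# Hypothetical input: one row per generating unit with its activation statistics
units = spark.createDataFrame(
    [("u1", 120, 3), ("u2", 95, 12), ("u3", 40, 1), ("u4", 130, 25)],
    ["unit_id", "activations", "failures"])

# Combine the ranking criteria into a single feature vector
assembler = VectorAssembler(inputCols=["activations", "failures"], outputCol="features")
features = assembler.transform(units)

# Try a few values of k and inspect the character of each resulting group
kmeans = KMeans(k=3, seed=42)
model = kmeans.fit(features)
model.transform(features).select("unit_id", "prediction").show()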
H: Sklearn and PCA. Why is max n_row == max n_components? I posted my question on stack overflow, but there someone suggested that I should try it here. What I'm doing now :) OK, first to my data. I have a word-bi-gram frequency matrix (1100 x 100658, dtype=int), where the first 5 columns contain information about the document. So every row is a document and every column a word-bi-gram like (of-the, on-the, and-that,...). I want to visualize the data, but before I do that, I want to reduce the dimension. So I thought I do that with PCA from sklearn. First I set the column labels with myPandaDataFrame.columns = word-bi-grams then I deleted some doc-columns, because I want to see what kind of information I can get if I only look at the proficiency. del existing_df['SUBSET'] del existing_df['PROMPT'] del existing_df['L1'] del existing_df['ESSAYID'] then I set the proficiency column to be the index with myPandaDataFrame.columns.set_index(['PROFICIENCY'], inplace=True, drop=True) and then I did this from sklearn.decomposition import PCA x = 500 pcax = PCA(n_components=x) pcax.fit(myPandaDataFrame) PCA(copy=True, n_components=x, whiten=False) existing_2dx = pcax.transform(myPandaDataFrame) existing_df_2dx = pandas.DataFrame(existing_2dx) existing_df_2dx.index = myPandaDataFrame.index existing_df_2dx.columns = ['PC{0}'.format(i) for i in range(x)] But with this implementation I can only set 1100 n_components as a maximum. This is the number of documents (rows). This makes me suspicious. I tried a couple of examples / tutorials, but I can't get it right. So I hope someone can help me find out what I'm doing wrong? If would also be very happy about a good example / tutorial that can help me with my problem. Thank you. With best regards. AI: Given m rows of n columns, I think it's natural to think of the data as n-dimensional. However the inherent dimension d of the data may be lower; d <= n. d is the rank of the m x n matrix you could form from the data. The dimensionality of the data can be reduced to d with no loss of information, even. The same actually goes for rows, which is less intuitive but true; d <= m. So, it always makes sense to reduce dimensionality to something <= d since there's no loss; we typically reduce much further. This is why it won't let you reduce to more than the number of rows.
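A small sketch that illustrates why the limit is min(n_samples, n_features); random numbers stand in for the bi-gram counts here, and the shapes mirror the situation in the question (many more columns than rows):

import numpy as np
from sklearn.decomposition import PCA

# 100 "documents" described by 5000 bi-gram counts: the rank is at most 100
X = np.random.rand(100, 5000)

pca = PCA(n_components=100)       # asking for more than 100 components raises an error
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                       # (100, 100)
print(pca.explained_variance_ratio_[:5])     # variance captured by the leading components

For a large, sparse bi-gram count matrix it is often more practical to use sklearn.decomposition.TruncatedSVD, which accepts sparse input directly and follows the same n_components logic.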
H: Aggregating Decision Trees I have a data set with 3 independent variables and 1 dependent variable. The dependent is play_golf, the independents are Humidity, Pending_Chores, Wind. I want to aggregate the probability of playing golf based on those rules of multiple trees. So again, play_golf (this is the dependent value), and there are three independent variables Humidity (High, Medium, Low), Pending_Chores (Taxes, None, Laundry, Car Maintenance) and Wind (High, Low). A rule would be like (IF humidity = "High" AND pending_chores = "None" AND Wind = "High" THEN play_golf = 77%). I was thinking that random forest would weight the rules somehow over the collection of trees and give a probability. But if that doesn't make sense, then can you just tell me how to get the decision rules with one tree and I will work from that. I believe I am talking about a decision tree vector? I'm not sure. AI: Have a look at HMMs (Hidden Markov Models) too. A concrete example of an HMM is available on Wikipedia. A decision tree is better at generalising and applying what it has learned in a new context, while a Markov model is better at recalling the exact learned machine state.
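If the end goal is simply an estimated probability of play_golf for a given combination of the three inputs, note that scikit-learn's tree and forest classifiers expose exactly that through predict_proba, with no need to extract and weight rules by hand. Below is a rough sketch with a few made-up rows; the column names mirror the question and the data is purely illustrative.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical observations
df = pd.DataFrame({
    "Humidity": ["High", "Low", "Medium", "High"],
    "Pending_Chores": ["None", "Taxes", "None", "Laundry"],
    "Wind": ["High", "Low", "Low", "High"],
    "play_golf": [1, 0, 1, 0],
})

X = pd.get_dummies(df[["Humidity", "Pending_Chores", "Wind"]])
y = df["play_golf"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Probability of playing golf for a new situation, encoded with the same columns
new = pd.get_dummies(pd.DataFrame(
    [{"Humidity": "High", "Pending_Chores": "None", "Wind": "High"}]))
new = new.reindex(columns=X.columns, fill_value=0)
print(clf.predict_proba(new)[:, 1])   # estimated P(play_golf = 1)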
H: Does it make sense to apply clustering on aggregation of data? I was wondering if it makes sense to apply clustering techniques on an aggregation of data, like, I have three different sources of data such as S1 S2 and S3 where each of these sources share some common columns but the majority are not shared. Does it make sense to group all the sources with all the columns in a big dataframe and apply clustering techniques whereas some records will only have null values for the columns that are not part of its corresponding service. Thank you. Update: The input is basically logs coming from different services with different columns where some of them are shared between all services. The output should be a cluster of records representing a user having the same behavior. The logs are gathered by second corresponding of a user action. They are aggregated by hours for each user to derive other features (the granularity of seconds it according to me, too much). The goal, is to detect anormal behaviors. And the question is, can I regroup different dataframe having different features while some of them are shared and run a K-mean on it. Because my dataframe would look like this (s* for service, and common_ for common feature): ------------------------------------------------------------- s1_f1 | s1_f2 | s2_f1 | s2_f2 | s3_f1 | common_f1 | common_f2 1 OK NULL NULL NULL midday less 2 OK NULL NULL NULL midnight more NULL NULL 2 5 NULL midday less NULL NULL 8 9 NULL morning less NULL NULL NULL NULL 777 morning more NULL NULL NULL NULL 888 night more AI: Since some features are missing for specific sources, the missing values are not missing-at-random but are systematically missing. In this situation, I'd advise against doing clustering on the combined data set with all available features. If missing values were occurring at random, you could have used some missing value imputation method before performing cluster analysis. However, since the values are systematically missing, imputation would be difficult to tackle. (You could try to predict those missing values, but I am afraid that will add a lot of unnecessary noise in the data.) I'd recommend choosing from one of these two options: Perform clustering on the combined data set, but use only those features that are non-missing across all sources. Perform three different cluster analysis, one for each source. This way, you can ensure that you are using as many features (information) as possible. The determination of "abnormal" behavior can then be determined within each source. This can be an added benefit since it would allow you to be more specific about why a use might be abnormal, as you have more features that can be used to explain this. The results can also be then summarized across all sources to create one consolidated report.
H: Why for logistic regression the error is given by [y ln(sigma(x)) + (1 − y) ln(1 − sigma(x))] Why, for logistic regression with target values 0 or 1, will it not work to take the sum of the squares of the difference between target value and prediction, but rather: $$ error({\bf w}) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y_i \ln (\sigma(x_i)) + (1-y_i) \ln (1 - \sigma(x_i)) \right] $$ AI: Your expression is (up to the sign and the $1/m$ factor) the log-likelihood: $\log P(y \mid X; w) = \log \prod_i P(y_i \mid x_i; w) = \sum_i \log P(y_i \mid x_i; w)$, where $P(y_i \mid x_i; w) \equiv \left\{ \begin{array}{rl}\sigma(x_i), & y_i = 1 \\ 1 - \sigma(x_i), & y_i = 0\end{array} \right.$ Why the log-likelihood? When you have a probabilistic model, such as logistic regression, maximum likelihood is the standard way of finding the parameters that fit best. Recall that in logistic regression we are, contrary to the name, trying to classify rather than regress, and the MSE is a regression loss; it seeks to minimize the distance from a point, while we wish to penalize being on the wrong side of the decision boundary (the regions that don't correspond to the correct class). If you squint a bit, you can see that the negative log-likelihood is exactly the cross entropy between the true labels and the predicted probabilities, so minimizing your $error({\bf w})$ is the same as maximizing the likelihood. A further practical reason: plugging the sigmoid into a sum-of-squares loss yields a non-convex objective with flat regions where gradients vanish, whereas the negative log-likelihood is convex in ${\bf w}$ and much better behaved for gradient-based optimization.
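Written out in code, the loss from the question is only a few lines. Here is a small NumPy sketch, assuming $\sigma$ is applied to the linear score $w \cdot x_i$ (the usual convention, even though the question writes $\sigma(x_i)$ for short).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, X, y):
    """Average negative log-likelihood (cross entropy) for logistic regression."""
    p = sigmoid(X @ w)            # predicted P(y = 1) for each row of X
    eps = 1e-12                   # guard against log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))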
H: Is it ideally correct to benchmark neo4j as a graph processing platform? I would like to know if Neo4j can be considered a graph processing platform, even though I know that: neo4j: is a graph database management system developed by Neo Technology, Inc., described by its developers as an ACID-compliant transactional database with native graph storage and processing. graph processing platform: is a platform used for processing graphs by applying global algorithms to large graphs (used more in an OLAP scenario). Based on this paper http://www.ds.ewi.tudelft.nl/~iosup/perf-eval-graph-proc14ipdps.pdf, Neo4j is benchmarked as a processing platform. So my question is whether it is correct to put Neo4j in the group of processing platforms like Pregel and Giraph. AI: No. As Emre has rightly pointed out, the company's Chief Scientist himself has written a blog post saying the same. However, Neo4j can do a lot of the computations that graph processing tools do. In fact, it does graph traversals much faster than Giraph, due to Giraph's Hadoop overhead and also because Neo4j stores adjacent nodes in a doubly linked list. So it's not rare to confuse Neo4j with a graph processing platform (hence the claim in the paper), due to its overlap with the features of processing platforms like Pregel and Giraph.
H: key parameter in max function in Pyspark In the example given for the max function for PySpark: Pyspark
>>> rdd = sc.parallelize([1.0, 5.0, 43.0, 10.0])
>>> rdd.max()
43.0
>>> rdd.max(key=str)
5.0
Q1. How does it get evaluated to 5.0 when key=str ? Is it based on conversion to character/string type ? Q2. Is there any value the parameter "key" can take ? I also found the function definition of "max" at this location https://github.com/adobe-research/spark-gpu/blob/master/src/rdd.py
def max(self, key=None):
    """
    Find the maximum item in this RDD.

    :param key: A function used to generate key for comparing

    >>> rdd = sc.parallelize([1.0, 5.0, 43.0, 10.0])
    >>> rdd.max()
    43.0
    >>> rdd.max(key=str)
    5.0
    """
    if key is None:
        return self.reduce(max)
    return self.reduce(lambda a, b: max(a, b, key=key))
AI: You pass a function to the key parameter, and Spark effectively maps it over your elements to produce the value used in the comparison when looking for the maximum. In this case you pass the str function, which converts your floats to strings. Strings are compared lexicographically, character by character, so '5.0' is greater than '43.0', '10.0' and '1.0' (because '5' > '4' and '5' > '1'), and that is why 5.0 is returned. As for Q2: key can be any function that maps an element to a comparable value. A more typical use of the key parameter looks like this:
test = sc.parallelize([(1, 2), (4, 3), (2, 4)])
test.max(key=lambda x: -x[1])
(1, 2)
The x[1] means we compare on the second entry of each tuple, and the minus sign flips the ordering, so the tuple with the smallest second entry is returned.
H: Character recognition neural net topology/design I'm building a neural net for classifying characters in pictures. The input can be any character a-z, lowercase and uppercase. I only care about classifying the characters, and not the case, so the neural net has an output vector of length 26; one for each character. It makes sense, intuitively, to have a hidden layer of size 26*2 just upstream of the output layer. It also makes intuitive sense for this layer not to be fully connected to the output layer, but instead having two and two hidden nodes connect to each output node. I have some questions: a) Does this make sense? I'm getting about 75 % success rate on a pretty hard data set with just one hidden layer, but I'm not certain on how to improve from there. b) If so, what activation function should I use from the hidden layer with 26*2 nodes to the output layer? Maybe I should use an OR function for this, since both the lowercase and the uppercase version of a character should output for a single character. c) Would it be wiser to have 26*2 output nodes instead, and just combine lowercase and uppercase outputs after the neural net? AI: Your design makes some sense, but there is no need to limit connections even if you expect to represent probabilities of upper/lower case separately, because they will interact usefully. E.g. if the character could most likely be one of o, O, Q, G, then this might be useful information for choosing the correct one. If you went ahead, you would need to train this network without the final layer (so that it learns the representations you expect, not some other group of 52 features), then add the final layer later, with no need for special connection rules; just use existing ones. Initially you would train the new layer separately on the full output of the 52-class net, i.e. the probability values, not the selected class. Then you would combine it with the existing net and fine-tune the result by running a few more epochs with a low learning rate on the final model. That all seems quite complex, and IMO unlikely to gain you much accuracy (although I am guessing, it could be great - so if you have time to explore ideas, you could still try). Personally I would not take your hidden layer idea further. The full 52-class version with simple logic to combine results is, I think, simpler. Even that is not strictly necessary: the neural net can learn to place two different-looking images in the same class quite easily, provided you supply examples of both in training. However, it may give you useful insights into categorisation failures in training or testing. It is not clear from the question, but if you are not already using a convolutional neural network for the lower layers, then you should do so. This will make the largest impact on your results by far.
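To make the last suggestion concrete, here is a minimal convolutional baseline in Keras. All of the specifics are assumptions: 28x28 grayscale glyph images, labels 0-25 with case folded away (so 'a' and 'A' share a label), and a fairly standard small architecture; treat it as a starting point rather than the design.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),            # assumed input size
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(26, activation="softmax"),     # one output per letter, case ignored
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10, validation_split=0.1)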
H: Hierarchical Clustering customized Linkage function In my clustering project, I need to customize the linkage function, so that after each cluster merging I can update the inter-cluster distance in my own way. Currently I'm using scikit-learn AgglomerativeClustering, which seems not having this customizable feature. After a quick glance in scipy, no luck there either. Does anyone know any python hierarchical clustering toolkit that has customizable linkage? AI: Fork sklearn and implement it yourself! The linkage function is referenced in cluster/hierarchical.py as join_func = linkage_choices[linkage] and coord_col = join_func(A[i], A[j], used_node, n_i, n_j) If you have time, polish your code and submit a pull request when you're done.
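If forking scikit-learn feels too heavy, plain agglomerative clustering is simple enough to write directly, which gives you full control over how the inter-cluster distances are updated after every merge. The sketch below is illustrative only (an O(n^3) toy, not a library-quality implementation); my_linkage is the hook where your own update rule goes, shown here with the average-linkage formula as a placeholder.

import numpy as np
from scipy.spatial.distance import cdist

def my_linkage(d_xi, d_xj, size_i, size_j):
    # Custom update rule: distance from the newly merged cluster (i + j) to another cluster x.
    # Replace this body with your own rule; this placeholder reproduces average linkage.
    return (size_i * d_xi + size_j * d_xj) / (size_i + size_j)

def agglomerate(X, n_clusters):
    D = cdist(X, X)                          # current inter-cluster distance matrix
    np.fill_diagonal(D, np.inf)
    clusters = [[i] for i in range(len(X))]  # start with singleton clusters
    while len(clusters) > n_clusters:
        i, j = np.unravel_index(np.argmin(D), D.shape)
        i, j = min(i, j), max(i, j)
        for x in range(len(clusters)):       # update distances to the merged cluster
            if x not in (i, j):
                D[i, x] = D[x, i] = my_linkage(D[x, i], D[x, j],
                                               len(clusters[i]), len(clusters[j]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
        D = np.delete(np.delete(D, j, axis=0), j, axis=1)
    return clusters                          # each entry is a list of row indices of X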
H: Which observation to use when doing k-fold validation or bootstrap? I have to build a predictive model over the dataset $D$ (with 1000 obs). From $D$, I extract 700 obs for training $(T)$ and 300 obs for validation $(V)$. I need to perform bootstrap or 10-fold cross validation sampling. The question is which of these sets should I use? Divide $D$ in 10 subsets and alternate training and validation between them? Divide $T$ (the training subset) in 10 subsets and perform training/validation on those subsets? $V$ is used only for final validation. AI: I recommend using the second option you presented. I would use $T$ with 10-fold CV to select my modeling technique and optimal tuning parameters. Take a look at what performed the best ("best" being the model that gives us the best error, but also doesn't have the error fluctuate too much from fold to fold). After selecting a model, you can use the model on $V$ to get a realistic error rate. The reason I don't recommend the first option is that there are varying degrees of overfitting that can occur when you go through model selection and model tuning and then use that same data to get an error rate. CV is a great way to limit this overfitting, and it gives us a sense of performance variance, which is great, but a classic hold-out validation set is the gold standard for model performance. In your case the first option might not be wrong (it depends a lot on data/techniques), but if a hold-out validation set is available I would go for that.
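In scikit-learn that workflow looks roughly like the sketch below. Everything here is illustrative: it assumes your features and labels are already loaded as X and y, and it uses a random forest purely as a placeholder for whichever models you are comparing (in older scikit-learn versions these helpers live in sklearn.cross_validation instead of sklearn.model_selection).

from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier

# 700 obs for model selection/tuning (T), 300 held out for the final estimate (V)
X_T, X_V, y_T, y_V = train_test_split(X, y, test_size=0.3, random_state=42)

# 10-fold CV on T only, repeated for each candidate model / parameter setting
model = RandomForestClassifier(n_estimators=200, random_state=42)
scores = cross_val_score(model, X_T, y_T, cv=10)
print(scores.mean(), scores.std())

# Once a model is chosen, refit it on all of T and report its error on V exactly once
model.fit(X_T, y_T)
print(model.score(X_V, y_V))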
H: How to classify support call texts? I have a spreadsheet with thousands of records regarding support requests. Case number, Issue description, etc. Our goal is to classify these records in many categories in order to assign them the right priority. Example: Customer can't use pickup feature. Customer can't dial 911 or Long Distance numbers. For item number 1, I have decided to use a category called Best Effort and for item 2, an Urgent category. Customer can't use pickup feature, BEST_EFFORT Customer can't dial 911 or Long Distance numbers, URGENT I'm planning to setup a dictionary of words. best_effort = ['pickup','record','conference'] urgent = ['system is down','911', 'can't dial emergency','call center is down'] My goal is to use TFIDF and then cosine similarity to find best match and category. Does it makes sense? Any better recommendation to classify this type of information? AI: Rather than use an external dictionary of keywords that are indicative of target class, you may want to take your raw data (or rather a random subset of it), then hand-label your instances (assign each row a label, BEST_EFFORT or URGENT). This becomes your training data - each row of data can be transformed into a bag-of-words vector indicating the presence/absence of the word in that particular text. You can train a classifier on this data, for example a naive bayes classifier, which can then be tested on the held out unseen test data. The advantages of the proposed approach are: (1) automated computation of features vs. hand created dictionary; (2) probabilistic/weighted indicators of class vs. binary dictionary indicators.
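A sketch of that supervised approach with scikit-learn, assuming you have hand-labelled a sample of ticket texts (the four example rows below are made up; in practice you would want at least a few hundred labelled records per class):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts = ["Customer can't use pickup feature",
         "Customer can't dial 911 or Long Distance numbers",
         "Call center is down",
         "User wants help recording a conference"]
labels = ["BEST_EFFORT", "URGENT", "URGENT", "BEST_EFFORT"]

# bag-of-words (here tf-idf weighted uni- and bi-grams) feeding a naive Bayes classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
                    MultinomialNB())
print(cross_val_score(clf, texts, labels, cv=2))    # held-out performance estimate

clf.fit(texts, labels)
print(clf.predict(["Customer cannot dial emergency numbers"]))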
H: Can a Gradient Boosting Regressor be tuned on a subset of the data and achieve the same result? I am working with a large data set (~9M rows with 20+ features). Is it ok to tune via grid search on a fraction of the data (~100k rows) to determine optimal hyperparameters? This is mostly for choosing max_features, min_samples, max_depth. Trees and learning rate come later. Will I get different results tuning the fraction versus the whole data set? AI: You should never train or do grid search on your entire data set, since it will lead to overfitting and reduce the accuracy of your model on new data. What you have described is actually the ideal approach: do the grid search / training on a subset of your data. Yes, you will get somewhat different results than if you had tuned on the entire data set, but your model will be much stronger because of it. For more details on why you would want to split up / sample your data, see this question: https://stats.stackexchange.com/questions/19048/what-is-the-difference-between-test-set-and-validation-set
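A sketch of what tuning on a sample might look like, assuming the data sits in a pandas DataFrame df with a target column named y (both names are placeholders), using scikit-learn's GridSearchCV:

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

sample = df.sample(n=100000, random_state=42)        # tune on a manageable subset
X_s, y_s = sample.drop("y", axis=1), sample["y"]

param_grid = {"max_depth": [3, 5, 7],
              "max_features": ["sqrt", 0.5, None],
              "min_samples_leaf": [1, 20, 100]}

search = GridSearchCV(GradientBoostingRegressor(n_estimators=100),
                      param_grid, cv=3, n_jobs=-1)
search.fit(X_s, y_s)
print(search.best_params_)

# then refit the chosen configuration (with more trees / a lower learning rate) on more data
best = GradientBoostingRegressor(n_estimators=500, **search.best_params_)
# best.fit(X_train, y_train)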
H: Merging multiple data frames row-wise in PySpark I have 10 data frames pyspark.sql.dataframe.DataFrame, obtained from randomSplit as
(td1, td2, td3, td4, td5, td6, td7, td8, td9, td10) = td.randomSplit([.1, .1, .1, .1, .1, .1, .1, .1, .1, .1], seed = 100)
Now I want to join 9 td's into a single data frame, how should I do that? I have already tried with unionAll, but this function accepts only two arguments.
td1_2 = td1.unionAll(td2)  # this is working fine
td1_2_3 = td1.unionAll(td2, td3)  # error TypeError: unionAll() takes exactly 2 arguments (3 given)
Is there any way to combine more than two data frames row-wise? The purpose of doing this is that I am doing 10-fold Cross Validation manually without using the PySpark CrossValidator method, so taking 9 into training and 1 into test data and then I will repeat it for other combinations. AI: Stolen from: https://stackoverflow.com/questions/33743978/spark-union-of-multiple-rdds Outside of chaining unions this is the only way to do it for DataFrames.
from functools import reduce  # For Python 3.x
from pyspark.sql import DataFrame

def unionAll(*dfs):
    return reduce(DataFrame.unionAll, dfs)

unionAll(td2, td3, td4, td5, td6, td7, td8, td9, td10)
What happens is that it takes all the objects that you passed as parameters and reduces them using unionAll (this reduce is from Python, not the Spark reduce, although they work similarly), which eventually reduces it to one DataFrame. If instead of DataFrames they are normal RDDs, you can pass a list of them to the union function of your SparkContext. EDIT: For your purpose I propose a different method. Since you would have to repeat this whole union 10 times for your different folds of cross-validation, I would instead add a label marking which fold each row belongs to, and then just filter the DataFrame for every fold based on that label.
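A sketch of that fold-label idea (the column name "fold" and the use of a random column are illustrative; any deterministic fold assignment works just as well):

from pyspark.sql import functions as F

# assign each row to one of 10 folds instead of materialising 10 separate splits
td = td.withColumn("fold", (F.rand(seed=100) * 10).cast("int"))

for i in range(10):
    train = td.filter(F.col("fold") != i).drop("fold")
    test = td.filter(F.col("fold") == i).drop("fold")
    # fit on train, evaluate on test, and collect the metric for fold i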
H: Prediction model for marketing to prospective customers (using pandas) I'm currently working on a part-time project which involves predicting the likelihood of customers buying a product, using data analytics. The company I'm interning with has given me a customer CSV file with all current customers and their attributes, and needs a prediction model to classify whether prospects are feasible to pursue or not. However, since they have given me a list of all their successful customers or leads, in marketing terms, is it possible to train a model like K-means with PCA (and k-fold cross validation?) and get results? I have to train my model to fit a value, say 10, which I will add to the CSV, and further test it. I am using pandas. Another issue is that there is a lot of demographic data, but I managed to overcome it using get_dummies(). The number of columns escalated from about 10 to 47, though. I'm just entering the world of data analysis, hence I'm a bit clueless as to what path to take or whether what I'm doing is right. The exact analysis is called Predictive Lead Scoring/Analysis, in marketing terminology. EDIT 1 I followed what @HonzaB did and hence did get a decision tree. However, since I had 40 columns, the resulting tree is huge; I had to take a screenshot of it, as the image was over 2 MB. Obviously it's really big, and I have to prune the tree somehow, but I'm not sure how to do so with pandas. Also, is there any way that I can just generate the best characteristics as a text file, or something that can be understood without the help of a data scientist? EDIT 2 I've read up on a question that is quite similar to what I need to do: Predictive modeling based on RFM scoring indicators. In it there is a link to a paper (Data Mining using RFM Analysis) that talks about rule-based classification. Ideally this is what I need to do, and what is most suitable to the company's needs. I want to know if it's possible to do this in Python/pandas. Or is it possible to traverse the decision tree and generate the rules? EDIT 3 I found another website, Decision trees in python again, cross-validation, that uses cross validation and hyperparameter optimisation to get a better solution. They have also included Python code to produce readable output. It's a feasible solution; however, it's quite complicated and I can't understand how it works. Will it work? PS I solved the "really big decision-tree" problem from Edit 1 by reducing max-depth. I simply didn't know about that parameter before. AI: First, I would ask the company if there is more information about the customers. You mentioned you have 10 original columns, which might not be enough to make a good prediction. The same goes for the number of rows. Usually, the more data, the better the model, up to a certain limit. Second, encoding categorical features (demographic data in your case) is a good thing to do. The increased number of columns doesn't have to bother you in your case. For the task itself, yes, it is doable. Start easy: simply check the importance of each feature (I would leave PCA for later), pick a few models and test them. Also consider training a simple decision tree. Its results can be easily visualized in a way business people understand, as opposed to black-box methods such as K-means.
H: How to group identical values and count their frequency in Python? Newbie to analytics with Python so please be gentle :-) I couldn't find the answer to this question - apologies if it is already answered elsewhere in a different format. I have a dataset of transaction data for a retail outlet. Variables along with explanation are: section: the section of the store, a str; prod_name: name of the product, a str; receipt: the number of the invoice, an int; cashier, the number of the cashier, an int; cost: the cost of the item, a float; date, in format MM/DD/YY, a str; time, in format HH:MM:SS, a str; Receipt has the same value for all the products purchased in a single transaction, thus it can be used to determine the average number of purchases made in a single transaction. What is the best way to go about this? I essentially want to use groupby() to group the receipt variable by its own identical occurrences so that I can create a histogram. Working with the data in a pandas DataFrame. EDIT: Here is some sample data with header (prod_name is actually a hex number): section,prod_name,receipt,cashier,cost,date,time electronics,b46f23e7,102856,5,70.50,05/20/15,9:08:20 womenswear,74558d0d,102857,8,20.00,05/20/15,9:12:46 womenswear,031f36b7,102857,8,30.00,05/20/15,9:12:47 menswear,1d52cd9d,102858,3,65.00,05/20/15,9:08:20 From this sample set I would expect a histogram of receipt that shows two occurrences of receipt 102857 (since that person bought two items in one transaction) and one occurrence respectively of receipt 102856 and of receipt 102858. Note: my dataset is not huge, about 1 million rows. AI: From this sample set I would expect a histogram of receipt that shows two occurrences of receipt 102857 (since that person bought two items in one transaction) and one occurrence respectively of receipt 102856 and of receipt 102858. Then you want: df.groupby('receipt').receipt.count() receipt 102856 1 102857 2 102858 1 Name: receipt, dtype: int64
H: change detection I have a question related to change detection. Application domain is robotics/planning. Background/setting: There is a sensor detecting distance from obstacle (ultrasonic / sonar sensor) at a specific position (x, y, theta) in the environment. It returns some reading at regular time intervals. Lets say the reading is R and over a period of time it records R+ or R- (+/- means variation due to sensor inaccuracies). Case 1: I introduce an additional object between the sensor and the obstacle at a distance D (D < R) so that at the next instance D is detected and returned Case 2: I remove the original obstacle and now the next obstacle is D' (D' > R) and at the next instance D' is returned. Question Is there a way to exactly (or with high probability) say that a changed occurred NOW (when I add or remove an obstacle)? Most change analysis algorithms consider a run length before change point and some data after change point and indicate the position change occurred. But none I have read so far say change happened NOW; even the "online" algorithms seem to need some burn in data. EDIT: Ultimate goal I want to implement a method that takes the data vector and return if the latest data point was a change point. A possible Solution/hack Since my work involves streaming data, this is the approach I am currently taking. Read a window of data (for now, my window size is 20 values) from the end of the stream. Run bcp (from R) on this window. Check for the posterior probability of the change at location 18. (for all the runs i just had, the last value is NA, hence ignore that, and the data is zero indexed, (calling R from Python using rpy2), hence, the position turns out 18 for window size of 20. Set a threshold of 70% for the posterior probability (for now in my experimental setting this works fine, I may have to work on getting a proper threshold later) If the posterior probability at location 18 > 70%, I return TRUE indicating the recent data point has a different mean, or "change detected", else return FALSE. This may not be the most efficient way of doing it, but it is doing its job for now. I am using this approach to carry my work forward. I will update the thread if I find a better approach. Thanks you all for the help! AI: Consider how an algorithm might detect a change. You're observing instances of some random variable, $X_1,X_2,\dots,X_{k-1}$. Suddenly (and unknown to you) at $X_k$ something about the distribution of $X$ changes. Now your observations $X_k,\dots,X_n$ are different in some way. You want to know what $k$ is based on your observations alone. In order to detect the change, you have to have some idea of what 'before' might look like so you can have confidence that 'after' is really different. So, yes, all change detection algorithms will use some run length before and after the true change to make a decision (edit: actually, you don't need run length before and after, you could just have an assumption about the data-generating process. Maybe you say its normal mean 0 variance 1 and your first observation is 5000, you don't need run length to know you're wrong somewhere). Anything else would be an even wilder kind of predicting the future. It seems like the real concern might be the latency of signal detection. You'd like the sensor to detect it after just a few instances of the data after the true change point. So my question is, do you really need it to work now? 
It seems reasonable to me that you're not interested in the number of data points, but the time it takes to gather them. If you have a sensor that updates 100,000 times a second, 100 data points isn't a huge deal.
H: Why did Tufte call this a "superbly produced duck"? I think I understand Tufte's concept of a "Duck" -- A graphic that is taken over by decorative forms. But I couldn't understand why he called this a duck (a "superbly produced" one at that). It seemed to me more functional than decorative. Thoughts? AI: If you assume duck to mean "irrelevant decorative elements" there are a few things that strike me as likely: (1) the "squares" only roughly indicate location of the target area; (2) squares represent volume, which is incongruent when overlaid on 2d geography; (3) shading of mountains/geographic features doesn't add detail. Also problematic: comparison of h20 volume applied to crop types is difficult to discern using grids. Quick, which crop color uses more water?? If you want to compare volume to crop type then you should use a plain column chart, where it is easy to distinguish relative heights. This is analogous to why bar/column charts are preferable to pie charts - humans are better at comparing heights/lengths vs area/volume. If, on the other hand, the point of the graph is to compare crop type irrigation by area, you would want to display a column graph with regions next to each other, but grouped by crop type. As far as a superbly produced duck, my guess is that the Applied Irrigation Water is a duck, but a superb duck, at least when compared to the monstrous duck on the preceding page of the Visual Display of Quantitative Information 2nd Ed.
H: Is it a good idea to train with a feature whose value will be fixed in future predictions? I am facing a regression problem and I have one feature that has some relevant correlation with the output. The value of this feature will be fixed in ALL the predictions I will use this model for. Should I keep it in my model or not? Thanks AI: Given that in your training data this feature has different values and some predictive power, I think not keeping this feature would be a mistake (leaving aside overfitting due to having too many features). You cannot just discard the feature from your training set if it does influence the target: your training rows would then effectively come from a different population than the rows you will be predicting on, and the model would try to explain that feature's effect through the other features instead, introducing bias. Extreme example where x_2 will always be 5 in the future:
x_1  x_2  y
2    8    6
3    7    5
2.5  5    1.5
3    5    0.5
Just removing x_2 loses a lot of information and would create a significant bias towards higher targets.
H: What is the difference between Bayesian Networks and Belief Networks? While reading some articles about Bayesian Networks, I came across many occurrences of Belief Networks. Do both of these terms mean the same thing, or is there any difference between Bayesian Networks and Belief Networks? AI: Both are literally the same; "belief network" is simply another name for a Bayesian network. It is a network in which we encode our belief that a certain event A will occur given B. The network assumes the structure of a directed acyclic graph. The term Bayesian comes from the name of Thomas Bayes.
H: How to implement accurate counts or sums when you have different numbers of days of the week? I have a dataset of transaction data for a retail outlet. I am using pandas and want to analyse revenue by day of the week, but there are unequal days in the dataset (i.e. an extra weekend). I have used df.dt.dayofweek to create an integer value for the day of the week, and grouped the data by that integer value using df.groupby(["Day_int",]).sum() So I have a 'total' column at the end that I am interested in, but I want to create a a new column, something like 'adj_total' that applies a division operation of /3 to the days Monday to Friday and /4 to the days Saturday and Sunday. Is the best way to loop through the dataframe or is it better Here is the df that I am working with (most of the column values are nonsensical, only total is of interest). Day_int is the group_by variable and should appear lower than the other columns. Day_int Section Prod_name Cashier Date Time Receipt Total 0 ..................................................91341 1 ..................................................82262 2 ..................................................84145 3 ..................................................90115 4 ..................................................115497 5 ..................................................151971 6 ..................................................109210 AI: I would add a column that is a 3 if it's a weekday and a 4 if it's not using an apply, something like this: df['divide_by'] = df.apply(lambda x: 3 if x['Day_int']<5 else 4, axis=1) Assuming days are Monday 0 to Sunday 6. Then you can add the column as follows: df['adj_total'] = df.apply(lambda x: x['Total']/x['divide_by'], axis=1) Then you can remove the divide_by column and you have the result
H: Preprocessing text before using an RNN I'm going to use (RNN + Logistic Regression) for sentiment analysis. Should I do preprocessing of the text, like removing stop words and punctuation, and extracting keywords by finding nouns? AI: Welcome to the Data Science forum. Yes, data preprocessing is an important aspect of sentiment analysis and leads to better results. What sort of preprocessing should be done largely depends on the quality of your data. You'll have to explore your corpus to understand the types of variables, their functions, permissible values, and so on. Some formats, including HTML and XML, contain tags and other data structures that provide more metadata. At a high level, sentiment analysis (using bag of words) will involve 4 steps: Step 1: Data Assembly; Step 2: Data Processing; Step 3: Data Exploration or Visualization; Step 4: Model Building & Validation (train & test). Let's go through the different possible data preprocessing activities: Convert text to lowercase – this is to avoid distinguishing between words simply by case. Remove numbers – numbers may or may not be relevant to your analysis; usually they do not carry much importance in sentiment analysis. Remove punctuation – punctuation can provide grammatical context which supports understanding, but for bag-of-words based sentiment analysis it does not add value. Remove English stop words – stop words are common words found in a language; words like "for", "of", "are", etc. are common stop words. Remove own stop words (if required) – along with English stop words, we could instead, or in addition, remove our own stop words. The choice of custom stop words might depend on the domain of discourse, and might not become apparent until we've done some analysis. Strip whitespace – eliminate extra white spaces. Stemming – transforms words to their root form. Stemming uses an algorithm that removes common word endings from English words, such as "es", "ed" and "'s"; for example, "computer" & "computers" become "comput". Lemmatisation – transforms words to their dictionary base form, e.g. "produce" & "produced" become "produce". Sparse terms – we are often not interested in infrequent terms in our documents; such "sparse" terms should be removed from the document-term matrix. To give you more insight into the steps involved, here are some example sentiment analysis implementations using logistic regression: https://github.com/srom/sentiment https://github.com/jadianes/data-science-your-way/blob/master/04-sentiment-analysis/README.md Hope this helps.
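A compact sketch of several of those steps in Python with NLTK (the example sentence is made up; swap in your own corpus and tune the steps to your data):

import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords")                       # one-time download
stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(text):
    text = text.lower()                          # lowercase
    text = re.sub(r"[^a-z\s]", " ", text)        # drop punctuation and numbers
    tokens = text.split()                        # simple whitespace tokenisation
    tokens = [t for t in tokens if t not in stop_words]
    return [stemmer.stem(t) for t in tokens]     # stemming

print(preprocess("The computers WERE responding badly, 3 times today!"))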
H: Algorithm or formula to measure happiness? I am working on a project that takes in comments in a given Facebook page and determines the average happiness rating for it based on users of the page. My question is where would I find a formula, rating, or any other literature that would help me find a way to measure happiness using words and collections of words? I'm looking for something similar to cosine similarity I suppose, but not to find similar words, but to find the average positivity or happiness related to a word or collection of words. I'm not entirely sure this is the correct place for this question, but it has to do with data, and big data within Facebook so I am hoping to either get a question or be directed to somewhere that I may be able to find my answer. Thanks in advance for your help. AI: You should be looking towards Natural Language Processing, specifically at Sentiment Analysis. The link I provided is a good starting point for learning about sentiment analysis. If this is what you are looking for, it is available as part of Stanford's Core NLP.
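If you would rather stay in Python than run Stanford CoreNLP, NLTK ships a simple lexicon-based scorer (VADER) that returns a positivity score per comment, which you can then average over all comments on the page. A quick sketch (the comments are made up):

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")            # one-time download of the lexicon
sia = SentimentIntensityAnalyzer()

comments = ["I love this page!", "This is terrible.", "Meh, it's okay."]
scores = [sia.polarity_scores(c)["compound"] for c in comments]   # each in [-1, 1]
print(sum(scores) / len(scores))          # crude "average happiness" for the page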
H: How would I chi-squared test these simple results from A/B experiment? I have results from an A/B experiment where users could do one of three things: Watch, Interact, or Nothing My data is like this: Watch | Nothing | Interact A: 327445 | 271602 | 744702 B: 376455 | 140737 | 818204 I tried to use the chisquared test bundled with scipy. I'm completely new to data science, but I believe this is the evaluation metric I would want. scipy.stats.chisquare([ [327445, 271602, 744702], [376455, 140737, 818204] ]) Power_divergenceResult(statistic=array([ 3412.38826538, 41532.93339946, 3456.72996585]), pvalue=array([ 0., 0., 0.])) This does not look like a valid result... I even tried adding an expected frequencies options, but without success. Maybe I'm missing something, either about evaluating this type of data or just not using scipy correctly. Can anyone help me? AI: If what you're trying to answer is if the action taken by a user (watch, interact, nothing) is influenced by the group they are in (A or B) you can use the chi2 test of independence. For that you can use the scipy.stats.chi2_contingency function: a = [327445, 271602, 744702] b = [376455, 140737, 818204] chi2, pvalue, _, _ = scipy.stats.chi2_contingency([a, b]) In this case it returns a chi2 test statistic of 48376.48 and a p-value of 0.0, so the null hypothesis ("action is independent of group") is rejected. You can also use the scipy.stats.chisquare function to arrive at the same results, but other than with the chi2_contingency function you'll have to calculate the "expected frequencies" yourself. The data you have recorded are your observed frequencies: obs = np.array([a, b]).astype(float) (Note that I converted the numbers to float, as the chisquare function will run into some weird integer overflows otherwise...!?) The expected frequencies are calculated like that: exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum() Finally, calling chi2, pvalue = scipy.stats.chisquare(obs.ravel(), exp.ravel(), ddof=sum(obs.shape)-2) returns the same chi2 test statistic and p-value as before.
H: k-means in R, usage of nstart parameter? I try to use k-means clusters (using SQLserver + R), and it seems that my model is not stable : each time I run the k-means algorithm, it finds different clusters. But if I set nstart (in R k-means function) high enough (10 or more) it becomes stable. The default value for this parameter is 1 but it seems that setting it to a higher value (25) is recommended (I think I saw somewhere in the documentation). So I'm a bit confused... Any advice ? AI: Stability of the clusters is highly dependent on your dataset, for clear cut cases running it multiple times is a waste of resources. I think that is the rationale behind the default value of 1. But I agree that for most smaller cases setting it much higher makes a lot of sense.
H: How do I obtain the weight and variance of a k-means cluster? I am trying to reproduce the results of this paper, but using Python and the hmmlearn library instead of MATLAB. The paper describes a procedure for using an HMM (Hidden Markov Model) in order to predict stock prices. The paper details the use of a 4-state, 5-mixture Gaussian distribution as the model. The transition probabilities and the initial state probabilities are uniform; however, the emission probabilities are determined based on the results of a k-means algorithm run on the data set of existing stock prices. This latter part is where I have gotten stuck: the paper advises using the means, variance and weight of each cluster returned from the k-means algorithm as the mean, variance and weight of each component of the mixture. As I understand it, the mean of the cluster is simply the center of each centroid; however, I'm not sure how you would obtain the variance or the weight. TL;DR Given a 3 dimensional dataset X (in the form [[a, b, c], [d, e, f]...]) and using the k-means algorithm where k = 5 (k = number of mixture components), how would I determine the weight and variance of each cluster? AI: It is valid to use k-means to initialize the EM algorithm for Gaussian mixture modeling. As you said, the mean of each component will be the average of all samples belonging to the same cluster (this depends on the clustering algorithm used; sometimes the centroid is not the average of the cluster but one of the samples). For the weight you can use the following: the weight of cluster x is the number of samples belonging to cluster x divided by the total number of samples. Thus, the cluster with the highest number of samples is the cluster with the highest weight. For the variance: just compute the variance (or, for multivariate data, the covariance matrix) of all samples belonging to the same cluster.
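Concretely, with scikit-learn and NumPy the three quantities can be computed as below. This is a sketch: it assumes X is your (n_samples, 3) array, that every cluster ends up with at least a couple of members, and it uses a full covariance matrix per cluster; adapt that last part to however the paper defines its component variances.

import numpy as np
from sklearn.cluster import KMeans

k = 5
km = KMeans(n_clusters=k, random_state=0).fit(X)    # X: (n_samples, 3) array from the question
labels = km.labels_

means = km.cluster_centers_                                       # shape (k, 3)
weights = np.bincount(labels, minlength=k) / float(len(X))        # fraction of samples per cluster
covariances = np.array([np.cov(X[labels == c], rowvar=False)      # (3, 3) covariance per cluster
                        for c in range(k)])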
H: How to evaluate distance in k-means clusters? I am trying to use k-means clustering (using SQL Server + R), and I was wondering how we could estimate a distance correctly. For instance, if we consider the Euclidean distance from the center of the clusters, what happens if we have, for the same dataset, clusters of different sizes? A "normal" point in a big cluster can have a distance higher than an "outlier" point in a little one. So: Is it relevant to center / scale the Euclidean distance on each cluster? (and then consider outliers as the ones with the highest scaled distance) Are there other kinds of distance to consider? AI: There are several important points to keep in mind in considering your questions: You should always normalize or standardize your data before applying k-means clustering. This is true of most other clustering algorithms also. If you are clustering in more than one dimension, the distance metric is meaningless unless each dimension has the same weight, so normalization is essential. Imagine clustering people by body weight and income. Without normalization the results will depend on whether you think in pounds and dollars, kilograms and pesos, or moles and euros. The lack of normalization introduces this kind of arbitrariness. Strictly speaking, the stability of the k-means algorithm has been shown for Euclidean distance metrics, and there is no assurance of convergence with other distance metrics. More practically, most sensible metrics attain convergence and it's not much of an issue, but it's worth putting that warning out there. k-means isn't a clustering algorithm that readily lends itself to statistical analysis within the cluster. Every point in the space is a member of one of the k clusters regardless of how much of an outlier the point is. There are other clustering methods that are more adept at finding and ignoring outliers. DBSCAN is one such algorithm that is very good at finding clusters and ignoring noise. Now, answering your questions: Is it relevant to center / scale the Euclidean distance on each cluster (and then consider outliers as the ones with the highest scaled distance)? Yes, you can certainly do this. Combining k-means with outlier detection is certainly possible but is probably not the most elegant or efficient algorithm. It kind of sounds like a poor man's DBSCAN. Euclidean distance works fine, but just do a second set of normalizations using the centroids and the standard deviation of the cluster. Are there other kinds of distance to consider? There are lots of other metrics that are useful for many different reasons. As stated, the k-means convergence proofs hold only for Euclidean distance. For outlier detection Euclidean seems best, but there may be cases where cosine similarity metrics could be useful. People may suggest L1 (Manhattan) distance metrics, but I find this is only useful when there is significant linear dependence in your data, which can be remedied with dimensionality reduction. Short answer: Give it a try, as Euclidean should work fine, but also take a look at clustering via DBSCAN, which has outlier detection built into it. Hope this helps!
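A sketch of the per-cluster rescaling idea with scikit-learn (X is assumed to be your numeric feature matrix; the 3-sigma threshold at the end is just an example cut-off):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

X_std = StandardScaler().fit_transform(X)              # normalize before clustering
km = KMeans(n_clusters=5, random_state=0).fit(X_std)

# Euclidean distance of every point to its own centroid
dist = np.linalg.norm(X_std - km.cluster_centers_[km.labels_], axis=1)

# second normalization: rescale each distance by the spread of its own cluster
scaled = np.empty_like(dist)
for c in range(km.n_clusters):
    mask = km.labels_ == c
    scaled[mask] = (dist[mask] - dist[mask].mean()) / dist[mask].std()

outliers = np.where(scaled > 3)[0]                      # e.g. flag points > 3 "cluster sigmas" away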
H: How to determine if character sequence is English word or noise What kind of features would you try to extract from a list of words in order to predict whether a string is an existing word or just a mess of characters? Here is the description of the task that I found. You have to write a program that can answer whether a given word is English. This would be easy — you'd just need to look the word up in the dictionary — but there is an important restriction: your program must not be larger than 64 KiB. So, I thought it would be possible to use logistic regression for solving the problem. I don't have a lot of experience with data mining, but the task is interesting to me. Thanks. AI: During NLP and text analytics, several varieties of features can be extracted from a document of words to use for predictive modeling. These include the following. ngrams Take a random sample of words from words.txt. For each word in the sample, extract every possible bi-gram of letters. For example, the word strength consists of these bi-grams: {st, tr, re, en, ng, gt, th}. Group by bi-gram and compute the frequency of each bi-gram in your corpus. Now do the same thing for tri-grams, ... all the way up to n-grams. At this point you have a rough idea of the frequency distribution of how Roman letters combine to create English words. ngram + word boundaries To do a proper analysis you should probably create tags to indicate n-grams at the start and end of a word (dog -> {^d, do, og, g^}) - this would allow you to capture phonological/orthographic constraints that might otherwise be missed (e.g., the sequence ng can never occur at the beginning of a native English word, thus the sequence ^ng is not permissible - one of the reasons why Vietnamese names like Nguyễn are hard to pronounce for English speakers). Call this collection of grams the word_set. If you reverse sort by frequency, your most frequent grams will be at the top of the list -- these will reflect the most common sequences across English words. Below I show some (ugly) code using package {ngram} to extract the letter ngrams from words and then compute the gram frequencies:
#' Return orthographic n-grams for word
#' @param w character vector of length 1
#' @param n integer type of n-gram
#' @return character vector
#'
getGrams <- function(w, n = 2) {
  require(ngram)
  (w <- gsub("(^[A-Za-z])", "^\\1", w))
  (w <- gsub("([A-Za-z]$)", "\\1^", w))
  # for ngram processing must add spaces between letters
  (ww <- gsub("([A-Za-z^'])", "\\1 ", w))
  w <- gsub("[ ]$", "", ww)
  ng <- ngram(w, n = n)
  grams <- get.ngrams(ng)
  out_grams <- sapply(grams, function(gram){return(gsub(" ", "", gram))}) #remove spaces
  return(out_grams)
}

words <- list("dog", "log", "bog", "frog")
res <- sapply(words, FUN = getGrams)
grams <- unlist(as.vector(res))
table(grams)
## ^b ^d ^f ^l bo do fr g^ lo og ro
##  1  1  1  1  1  1  1  4  1  4  1
Your program will just take an incoming sequence of characters as input, break it into grams as previously discussed and compare them to your list of top grams. Obviously you will have to reduce your top n picks to fit the program size requirement. consonants & vowels Another possible feature or approach would be to look at consonant-vowel sequences. Basically convert all words into consonant-vowel strings (e.g., pancake -> CVCCVCV) and follow the same strategy previously discussed. This program could probably be much smaller, but it would suffer in accuracy because it abstracts phones into high-order units.
nchar Another useful feature will be string length, as the possibility for legitimate English words decreases as the number of characters increases. library(dplyr) library(ggplot2) file_name <- "words.txt" df <- read.csv(file_name, header = FALSE, stringsAsFactors = FALSE) names(df) <- c("word") df$nchar <- sapply(df$word, nchar) grouped <- dplyr::group_by(df, nchar) res <- dplyr::summarize(grouped, count = n()) qplot(res$nchar, res$count, geom="path", xlab = "Number of characters", ylab = "Frequency", main = "Distribution of English word lengths in words.txt", col=I("red")) Error Analysis The type of errors produced by this type of machine should be nonsense words - words that look like they should be English words but which aren't (e.g., ghjrtg would be correctly rejected (true negative) but barkle would incorrectly classified as an English word (false positive)). Interestingly, zyzzyvas would be incorrectly rejected (false negative), because zyzzyvas is a real English word (at least according to words.txt), but its gram sequences are extremely rare and thus not likely to contribute much discriminatory power.
H: Decision Tree generating leaves for only one case I had asked a question regarding predictive analysis for marketing earlier: Prediction model for marketing to prospective customers (using pandas). I still have some doubts about it, but here I have a doubt regarding the decision tree that I generated for the marketing data. My aim is to predict if a lead will be won or lost, depending on how they were made aware of the product etc. I have a bool variable "Won": 0 - sale had failed, 1 - sale was made. Using a decision tree, I was able to generate a model; however, there are no leaves for cases that lead to Not Won. Is this normal? I've seen examples of the iris data set where all 3 classes were represented in the tree, and hence am wondering if the approach I've taken is correct. In the dataset of 38000, there are roughly 1700 who have Won=1. I'm using pandas and my parameters for DecisionTreeClassifier are: min_samples_split=2, min_samples_leaf=1, max_depth=3. I got them using grid search for parameter optimisation. I got a mean val of 0.95. If I use a bigger depth, the tree becomes too big to analyse. Thanks EDIT Since Mark said to post some code, I am doing so:
from sklearn.tree import DecisionTreeClassifier, export_graphviz

dt = DecisionTreeClassifier(min_samples_split=2, min_samples_leaf=1, max_depth=3)
dt.fit(x, y)
features = x.columns
#print(features)
with open("dt.dot", 'w') as f:
    export_graphviz(dt, out_file=f, feature_names=features, class_names=["Won","Lost"])
command = ["dot", "-Tpng", "dt.dot", "-o", "hello.png"]
try:
    subprocess.check_call(command)
except:
    exit("Could not run dot, ie graphviz, to produce visualization")

from sklearn.cross_validation import cross_val_score
scores = cross_val_score(dt, x, y, cv=10)
print(scores.mean())
This is the main training code; all previous lines are just munging. The cross validation score comes to 0.95. Here is the CSV snapshot (screenshot not reproduced here). I use all values from "Won" onwards and am training for "Won". There was a single column X, which had many categorical values (20), "Won" being one of them: 'Known' 'Recycled' 'Engaged' 'Prospect/MQL' 'Intern Transfer' 'MQL' 'Working' 'Opportunity' nan 'Current Customer' 'Vendor' 'Disqualified' 'Converted' 'SAL' 'SQL' 'engaged' 'working' 'Won' 'Web Registration' 'Inactive'. However, they were all uniformly distributed, i.e. out of 37000, all had almost the same number of observations. I used get_dummies to transform them to numerical values, and dropped all the columns except "Won". All the rest of the values are for things like designation, opp (money value scaled from 1-3) and other categories, which are all boolean. AI: You are predicting a binary class: 0 - sale failed, 1 - sale succeeded. In the decision tree plot, the value field of each node lists the sample counts per class in class order, so for example in the first node you have 35011 samples of class 0 and 1785 samples of class 1. But then, in your code you have this: class_names=["Won","Lost"]. So you are telling your decision tree that the name of class 0 is "Won" and the name of class 1 is "Lost"; I assume it should be the other way around. So in fact the model always predicts 0 (sale failed), which seems reasonable, as you said that out of 38000 samples only 1700 are class 1. Further, your dataset is highly unbalanced, so when cross validation gives an accuracy of 95%, it says nothing: a model that predicts 0 for everything already achieves roughly 95% accuracy. For those cases, use models which can put weights on the classes (the SVM, logistic regression and random forest classifiers in scikit-learn have this possibility). Think about undersampling your dataset, or add more of class 1, either by digging in your database or artificially (SMOTE). Plot a confusion matrix and give CV a different scoring measure, precision or recall, depending on your situation. Last thing: don't expect a decision tree with a depth of 3 to perform well. It is nice to show to the business, but it gives poor predictions.
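A couple of those suggestions in code, as a sketch (it reuses the x and y from your snippet; in older scikit-learn versions the helpers live in sklearn.cross_validation rather than sklearn.model_selection):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix, classification_report

# class_weight="balanced" penalises mistakes on the rare class 1 more heavily
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)

# score on something more informative than accuracy for an unbalanced target
print(cross_val_score(clf, x, y, cv=10, scoring="f1").mean())

# confusion matrix and per-class precision/recall from out-of-fold predictions
preds = cross_val_predict(clf, x, y, cv=10)
print(confusion_matrix(y, preds))
print(classification_report(y, preds))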
H: Prepping Data For Usage Clustering Dataset: I'm given the number of minutes individual customers use a product each day and am trying to cluster this data in order to find common usage patterns. My question: How can I format the data so that, for example, a power user with high levels of use for a year looks the same as a different power user who has only been able to use the device for a month before I ended data collection? So far I've turned each customer into an array where each cell is the number of minutes used that day. This array starts when the user first uses the product and ends after the user's first year of use. All entries in the cells must be double values (e.g. 200.0 minutes used) for the clustering model. I've considered setting all cells/days after the last day of data collection to either -1.0 or NULL. Is either of these a valid approach? If not, what would you suggest? AI: I believe your problem boils down to clustering time-series of different lengths. According to your question, you want the longer time-series of a power user to be considered similar to a much shorter time-series with a similar pattern. Therefore you should look into clustering techniques and distance metrics which allow for these properties. I don't know your language of choice, but here are some of the many packages in R that you might find interesting: - Fréchet distance - one of the packages offering this is kmlShape - Dynamic Time Warping - available, for example, in the dtw package - Permutation Distribution Clustering - package pdc This would also solve your data formatting problem, as setting values to -1 or NULL would not be needed anymore. hth.
H: Visualizing N-way frequency table as a Decision Tree in R I have an N-way frequency table generated from a regression model fit. This is a reproducible example of such a table:

library(plyr)
data("CO2")
lm.fit = lm(uptake ~ Type + Treatment, data = CO2)
lm.fit$coefficients
test = count(CO2, c('Type', 'Treatment'))
test$res = predict(lm.fit, newdata = test)
test$freq = NULL

I am trying to visualize test as a decision tree with Type and Treatment as nodes and res as leaves. I would interpret it as the path the regression model takes to reach the final value for a particular segment. I am not able to generate a tree from test. I am also open to other novel ways of visualizing these results. My original problem has many categorical variables, so I am looking for a customizable visualization, something like party::ctree or rattle::fancyRpartPlot. AI: You could try

library(data.tree)
test$pathString <- with(test, paste("lm", Type, Treatment, round(res, 2), sep="/"))
(tree <- as.Node(test))
#                   levelName
# 1  lm
# 2   ¦--Quebec
# 3   ¦   ¦--nonchilled
# 4   ¦   ¦   °--36.97
# 5   ¦   °--chilled
# 6   ¦       °--30.11
# 7   °--Mississippi
# 8       ¦--nonchilled
# 9       ¦   °--24.31
# 10      °--chilled
# 11          °--17.45
plot(tree)

The plot uses the DiagrammeR package.
H: In random forest, what happens if I add features that are correlated? I'm training a random forest, trying to predict market shares of future stores on geographical areas. I have many features for these areas, some of which convey similar but not identical information about the same thing. For example, I know the total number of accommodations in the area, and I also have 5 other columns which are all linked in the following way: $\text{main accommodations} + \text{secondary accommodations} + \text{holiday accommodations} = \text{houses} + \text{flats} = \text{accommodations}$. I have the feeling that including them all in my model would be wrong... but they might all carry important information... Any hint on how I should handle this? Would it be a good idea to include $\text{accommodations}$ as an absolute value and include the other five as percentages (of $\text{accommodations}$) rather than as absolute values? In a similar fashion, I also have the total number of households in the area, the total income of the area, and the average income of households in the area (so that $\text{households} \times \text{average income} = \text{total income}$). I have the feeling that using the average rather than the total income would be a better idea, but how can I be sure I'm right? (I guess I could train three random forests using the average income only, the total income only, and both, and see how they perform on cross-validation, but is there a rule of thumb I should know of that would save me that effort?) (In case it's relevant, I'm using R and the randomForest package.) AI: Random forests don't suffer from correlated variables the way linear regression models do. Random forests randomly pick from a subset of variables at each split (hence the "random" in "random forests"). This means that correlated variables are less likely to show up together when the trees are being trained. But even when correlated variables show up in the same random subset of variables, it's still not much of an issue because the variables aren't assigned coefficients. Correlated variables are mostly an issue for linear models that try to hold all other variables constant when calculating coefficients during training. The variable selection process is much simpler for trees and tree-based algorithms like random forests and gradient boosting. When a random forest is being trained and a tree's split is being evaluated, the algorithm simply picks whichever feature most reduces the error on that particular split. Once a variable is picked, there is no coefficient, just a greater-than/less-than split point, so the problem of "exploding coefficients" doesn't apply.
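The asker is using R's randomForest, but a quick empirical check of this claim is easy to run with scikit-learn's analogue; the data below is synthetic and only meant to mimic the "parts plus a redundant total" situation described in the question:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n = 1000
# Three "accommodation type" counts plus their (perfectly correlated) total.
main_acc = rng.poisson(100, n)
secondary_acc = rng.poisson(20, n)
holiday_acc = rng.poisson(10, n)
total_acc = main_acc + secondary_acc + holiday_acc

# A made-up target that depends on the underlying counts plus noise.
y = 0.5 * main_acc + 2.0 * holiday_acc + rng.normal(0, 5, n)

X_with_total = np.column_stack([main_acc, secondary_acc, holiday_acc, total_acc])
X_without_total = np.column_stack([main_acc, secondary_acc, holiday_acc])

rf = RandomForestRegressor(n_estimators=200, random_state=0)
print("with redundant total:   ", cross_val_score(rf, X_with_total, y, cv=5).mean())
print("without redundant total:", cross_val_score(rf, X_without_total, y, cv=5).mean())

On data like this the two cross-validation scores come out nearly identical; the main side effect of keeping the redundant column is that feature importances get spread across the correlated columns, which matters for interpretation more than for prediction.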
H: How to improve an existing (trained) classifier? I have a classifier which I have trained and tested on a small dataset, obtaining solid results, though I wish to improve them. If I understand correctly, one way of doing so is to add more data in order to obtain a more precise classification rule. When doing this, should I add data to both the training set and the test set? Should I add only to the training set? Or should I create new training and test sets from the 'new dataset' (new = the old data + the new data)? AI: Adding more data does not always help. However, you can estimate whether more data will help you with the following procedure: make a plot. On the x-axis is the number of training examples, starting at one example per class and going up to wherever you are currently. The y-axis shows the error. Now add two curves: training and test error. For low x, the training error should be very low (almost 0) and the test error very high. With enough data, they should be "about the same". By plotting those curves you can make an educated guess about how much improvement more data will give you. When doing this should I add data to both the training set and the test set? That depends on what you want to achieve. If you only want a better classifier, you can add the new data to the training set alone. However, if you're doing this in a scientific setting, it might be more complicated. (I assume that your test set is of reasonable size.) You might also want to have a look at cross-validation.
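A minimal sketch of the learning-curve procedure described above, using scikit-learn's learning_curve; the classifier and dataset (logistic regression on the digits toy set) are placeholders for whatever model and data you actually have:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# Train/test score at increasing training-set sizes, averaged over CV folds.
sizes, train_scores, test_scores = learning_curve(
    clf, X, y, train_sizes=np.linspace(0.1, 1.0, 8), cv=5)

# Convert accuracy to error so the plot matches the description in the answer.
plt.plot(sizes, 1 - train_scores.mean(axis=1), label="training error")
plt.plot(sizes, 1 - test_scores.mean(axis=1), label="test error")
plt.xlabel("number of training examples")
plt.ylabel("error")
plt.legend()
plt.show()

If the two curves have already converged at your current dataset size, collecting more data will probably not help much; if there is still a sizeable gap, more training data is likely to pay off.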