Dataset columns (type and length/value statistics):
- Id: stringlengths, 2-6
- PostTypeId: stringclasses, 1 value
- AcceptedAnswerId: stringlengths, 2-6
- ParentId: stringclasses, 0 values
- Score: stringlengths, 1-3
- ViewCount: stringlengths, 1-6
- Body: stringlengths, 34-27.1k
- Title: stringlengths, 15-150
- ContentLicense: stringclasses, 2 values
- FavoriteCount: stringclasses, 1 value
- CreationDate: stringlengths, 23-23
- LastActivityDate: stringlengths, 23-23
- LastEditDate: stringlengths, 23-23
- LastEditorUserId: stringlengths, 2-6
- OwnerUserId: stringlengths, 2-6
- Tags: sequencelengths, 1-5
- Answer: stringlengths, 32-27.2k
- SimilarQuestion: stringlengths, 15-150
- SimilarQuestionAnswer: stringlengths, 44-22.3k
1102
1
1103
null
4
1302
The CoreNLP part-of-speech tagger and named entity recognition tagger are pretty good out of the box, but I'd like to improve the accuracy further so that the overall program runs better. To explain more about accuracy -- there are situations in which the POS/NER output is wrongly tagged. For instance:

- "Oversaw car manufacturing" gets tagged as NNP-NN-NN rather than VB* or something similar, since it's a verb-like phrase (I'm not a linguist, so take this with a grain of salt).

So what's the best way to improve accuracy?

- Are there better models out there for POS/NER that can be incorporated into CoreNLP?
- Should I switch to other NLP tools?
- Or create training models with exception rules?
Improve CoreNLP POS tagger and NER tagger?
CC BY-SA 3.0
null
2014-09-11T17:09:52.313
2014-09-12T00:40:07.877
null
null
2785
[ "nlp", "language-model" ]
Your best bet is to train your own models on the kind of data you're going to be working with.
What machine learning algorithms to use for unsupervised POS tagging?
There are no unsupervised methods for training a POS tagger that reach performance similar to human annotation or to supervised methods. [The current state-of-the-art supervised methods for training POS taggers are Long Short-Term Memory (LSTM) neural networks](http://aclweb.org/anthology/D17-1076).
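To illustrate the LSTM approach mentioned above, here is a minimal sketch of a BiLSTM sequence tagger in Keras; the vocabulary size, tag-set size and the integer-encoded toy data are hypothetical placeholders rather than anything from the question.

```
# Minimal BiLSTM POS-tagger sketch on made-up integer-encoded data
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, TimeDistributed, Dense

vocab_size, n_tags, max_len = 5000, 17, 40   # assumed sizes

model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=64, mask_zero=True),
    Bidirectional(LSTM(64, return_sequences=True)),      # one output per token
    TimeDistributed(Dense(n_tags, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# X: zero-padded, integer-encoded sentences; y: one tag id per token
X = np.random.randint(1, vocab_size, size=(128, max_len))
y = np.random.randint(0, n_tags, size=(128, max_len))
model.fit(X, y, epochs=1, batch_size=32)
```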
1107
1
1112
null
35
15673
I have a classification problem with approximately 1000 positive and 10000 negative samples in the training set, so this data set is quite unbalanced. A plain random forest just tries to mark all test samples as the majority class. Some good answers about sub-sampling and weighted random forests are given here: [What are the implications for training a Tree Ensemble with highly biased datasets?](https://datascience.stackexchange.com/questions/454/what-are-the-implications-for-training-a-tree-ensemble-with-highly-biased-datase) Which classification methods besides RF can handle the problem best?
Quick guide into training highly imbalanced data sets
CC BY-SA 3.0
null
2014-09-12T15:20:51.767
2016-07-15T22:10:08.333
2017-04-13T12:50:41.230
-1
97
[ "machine-learning", "classification", "dataset", "class-imbalance" ]
- Max Kuhn covers this well in Ch. 16 of Applied Predictive Modeling.
- As mentioned in the linked thread, imbalanced data is essentially a cost-sensitive training problem, so any cost-sensitive approach is applicable to imbalanced data.
- There are a large number of such approaches, not all of them implemented in R: C50 and weighted SVMs are options, as is JOUS-Boost. RUSBoost is, I think, only available as Matlab code.
- I don't use Weka, but I believe it has a large number of cost-sensitive classifiers.
- "Handling imbalanced datasets: A review" by Sotiris Kotsiantis, Dimitris Kanellopoulos and Panayiotis Pintelas
- "On the Class Imbalance Problem" by Xinjian Guo, Yilong Yin, Cailing Dong, Gongping Yang and Guangtong Zhou
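To make the cost-sensitive idea concrete, here is a minimal scikit-learn sketch (my own illustration, not taken from the references above) that reweights classes inversely to their frequencies; the 1000/10000 split mirrors the numbers in the question.

```
# Cost-sensitive training sketch: penalize minority-class errors more heavily
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Imbalanced toy data: roughly 1000 positives vs 10000 negatives
X, y = make_classification(n_samples=11000, weights=[10/11, 1/11], random_state=0)

# 'balanced' weights classes inversely proportional to their frequencies
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)

# Use a metric that stays informative under imbalance (not plain accuracy)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```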
Handling large imbalanced data set
There are multiple options, depending on your problem and the algorithms you want to use.

The most promising (or closest to your original plan) is to use a generator to prepare batches of training data. This is only useful for models that allow partial fits, like neural networks. Your generator can stratify examples, for example by generating a batch that includes exactly one of each target. One epoch would be when you have served all the samples from the biggest class.

Downsampling is not a bad idea, but it depends on the difficulty of your task, because you do end up throwing away information. You could look at learning curves as a function of the number of samples for your model; if performance looks relatively capped, this wouldn't be a big issue.

A lot of models allow for weighting classes in your loss function. If we have 10,000 of class A and 1,000 of class B, we could weight class B 10x, which means mistakes on it count much harder and the model will focus relatively more on samples from class B. You could try this, but I could see it going wrong with extreme imbalances.

You can even combine these methods: downsample your biggest classes, upsample your smaller classes and use weights to balance them perfectly.

EDIT: Example of the batch option. We have 4x A, 2x B and 1x C, so our set is:

A1 A2 A3 A4 B1 B2 C1

Regular upsampling would go to:

A1 A2 A3 A4 B1 B2 B1 B2 C1 C1 C1 C1

But this will not fit in our memory in a big data setting. What we do instead is only store our original data in memory (it could even be on disk) and keep track of where we are for each class (so they are separated by target):

A: A1 A2 A3 A4
B: B1 B2
C: C1

Our first batch takes one of each class: A1 B1 C1. Now our C class is empty, which means we reinitialize and shuffle it (in this case it's only one example):

A: A2 A3 A4
B: B2
C: C1

Next batch: A2 B2 C1. B and C are empty, so reinitialize them and shuffle:

A: A3 A4
B: B2 B1
C: C1

Next batch is A3 B2 C1, and our last one of the epoch would be A4 B1 C1. As you can see, we get the same distribution as the full in-memory option, but we never keep more in memory than our original samples, and the model always gets balanced, stratified batches.
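The stratified-batch scheme above can be sketched in a few lines of plain NumPy; this is my own minimal illustration (one example per class per batch, reshuffling a class whenever it runs out), not the answerer's actual code.

```
import numpy as np

def balanced_batches(X, y, rng=np.random.default_rng(0)):
    # Yield batches with exactly one example per class, refilling a class when exhausted
    classes = np.unique(y)
    pools = {c: list(rng.permutation(np.where(y == c)[0])) for c in classes}
    n_batches = max(len(p) for p in pools.values())  # one epoch = biggest class served once
    for _ in range(n_batches):
        idx = []
        for c in classes:
            if not pools[c]:  # class exhausted: reinitialize and reshuffle its indices
                pools[c] = list(rng.permutation(np.where(y == c)[0]))
            idx.append(pools[c].pop())
        yield X[idx], y[idx]

# Toy usage: 4x class 0, 2x class 1, 1x class 2, as in the A/B/C example above
X = np.arange(7).reshape(-1, 1)
y = np.array([0, 0, 0, 0, 1, 1, 2])
for bx, by in balanced_batches(X, y):
    print(by)
```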
1110
1
1118
null
6
1585
I want to cluster a set of long-tailed / Pareto-like data into several bins (actually the number of bins is not determined yet). Which algorithm or model would anyone recommend?
Binning long-tailed / pareto data before clustering
CC BY-SA 3.0
null
2014-09-13T06:33:17.360
2017-05-30T14:50:23.443
2017-05-30T14:50:23.443
14372
3289
[ "clustering", "k-means" ]
There are several approaches. You can start from the second one.

Equal-width (distance) partitioning:

- It divides the range into N intervals of equal size (a uniform grid).
- If A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B-A)/N.
- The most straightforward option.
- Outliers may dominate the presentation.
- Skewed data is not handled well.

Equal-depth (frequency) partitioning:

- It divides the range into N intervals, each containing approximately the same number of samples.
- Good data scaling.
- Managing categorical attributes can be tricky.

Other methods:

- Rank: The rank of a number is its size relative to other values of a numerical variable. First we sort the list of values, then we assign the position of a value as its rank. Equal values receive the same rank, but the presence of duplicate values affects the ranks of subsequent values (e.g., 1, 2, 3, 3, 5). Rank is a solid binning method with one major drawback: values can have different ranks in different lists.
- Quantiles (median, quartiles, percentiles, ...): Quantiles are also very useful binning methods, but like rank, one value can have a different quantile if the list of values changes.
- Math functions: For example, logarithmic binning is an effective method for numerical variables with a highly skewed distribution (e.g., income).

Entropy-based binning:

[The entropy-based method](http://www.saedsayad.com/supervised_binning.htm) uses a split approach. The entropy (or the information content) is calculated based on the class label. Intuitively, it finds the best split so that the bins are as pure as possible, that is, the majority of the values in a bin have the same class label. Formally, it is characterized by finding the split with the maximal information gain.
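A minimal pandas sketch of three of the schemes above (equal-width, equal-frequency and logarithmic binning); the synthetic Pareto sample is an assumed stand-in for the asker's data.

```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = pd.Series(rng.pareto(a=2.0, size=1000) + 1)   # assumed long-tailed sample

equal_width = pd.cut(x, bins=5)          # equal-width: outliers dominate the top bin
equal_freq  = pd.qcut(x, q=5)            # equal-frequency: same number of points per bin
log_bins    = pd.cut(np.log(x), bins=5)  # logarithmic: better suited to skewed data

print(equal_width.value_counts().sort_index())
print(equal_freq.value_counts().sort_index())
print(log_bins.value_counts().sort_index())
```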
How do I cluster binarized categorical data without knowing the number of clusters?
For categorical data, the robust hierarchical clustering algorithm ROCK will work better: it employs links rather than distances when merging clusters, which improves the quality of clusters of categorical data. Boolean and categorical attributes are the two types best suited to this algorithm. ROCK is a static model that combines nearest neighbor, relocation, and hierarchical agglomerative methods. In this algorithm, cluster similarity is based on the number of points from different clusters that have neighbors in common. You can use the CBA package in R to perform ROCK clustering.

Algorithm steps: Data -----> Draw random sample -----> Cluster with links -----> Label data on disk

- A random sample is drawn from the database.
- A hierarchical clustering algorithm employing links is applied to the samples. This means: iteratively merge clusters Ci, Cj that maximise the goodness function
merge(point1, point2) = total number of crosslinks / expected number of crosslinks
and stop merging once there are no more links between clusters or the required number of clusters has been reached.
- Clusters involving only the sampled points are used to assign the remaining data points on disk to the appropriate clusters.

Hope it helps! For more details with examples, refer to the following links:
[https://www.cis.upenn.edu/~sudipto/mypapers/categorical.pdf](https://www.cis.upenn.edu/~sudipto/mypapers/categorical.pdf)
[https://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/RockCluster](https://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/RockCluster)
1113
1
1146
null
2
158
I have a general methodological question. I have two columns of data: one column is a numeric variable for age, and the other is a short character variable with text responses to a question. My goal is to group the age variable (that is, create cut points for the age variable) based on the text responses. I'm unfamiliar with any general approaches for doing this sort of analysis. What general approaches would you recommend? Ideally I'd like to categorize the age variable based on linguistic similarity of the text responses.
General approaches for grouping a continuous variable based on text data?
CC BY-SA 3.0
null
2014-09-13T17:13:23.373
2015-08-21T07:33:06.983
2015-08-21T07:33:06.983
4647
36
[ "bigdata", "clustering", "text-mining" ]
Since this is a general methodological question, let's assume we have only one text-based variable - the total number of words in a response. First of all, it's worth visualizing your data. I will pretend I have the following data:

![number of words vs. age](https://i.stack.imgur.com/KciVJ.png)

Here we see a slight dependency between age and the number of words in responses. We may assume that young people (approx. between 12 and 25) tend to use 1-4 words, while people aged 25-35 tend to give longer answers. But how do we split these points? I would do it something like this:

![enter image description here](https://i.stack.imgur.com/uect9.png)

In a 2D plot it looks pretty straightforward, and this is how it works most of the time in practice. However, you asked for splitting the data by a single variable - age. That is, something like this:

![enter image description here](https://i.stack.imgur.com/G10GI.png)

Is it a good split? I don't know. In fact, it depends on your actual needs and your interpretation of the "cut points". That's why I asked about the concrete task. Anyway, this interpretation is up to you.

In practice, you will have many more text-based variables. E.g. you can use every word as a feature (don't forget to [stem or lemmatize](http://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html) it first) with values from zero to the number of occurrences in the response. Visualizing high-dimensional data is not an easy task, so you need a way to discover groups in the data without plotting them. [Clustering](http://en.wikipedia.org/wiki/Cluster_analysis) is a general approach for this. Though clustering algorithms work with data of arbitrary dimensionality, we still have only 2D to plot, so let's come back to our example. With an algorithm like [k-means](http://en.wikipedia.org/wiki/K-means_clustering) you can obtain 2 groups like this:

![enter image description here](https://i.stack.imgur.com/Tp4aK.png)

Two dots - red and blue - show the cluster centres calculated by k-means. You can use the coordinates of these points to split your data by any subset of axes, even if you have 10k dimensions. But again, the most important question here is: what linguistic features will provide a reasonable grouping of ages?
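A minimal scikit-learn sketch of the k-means step described above, run on synthetic (age, number-of-words) data invented purely to illustrate the mechanics; the cluster centre coordinates on the age axis are then candidate cut points.

```
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
age = rng.integers(12, 40, size=200)
n_words = np.where(age < 25, rng.integers(1, 5, 200), rng.integers(4, 12, 200))
X = np.column_stack([age, n_words]).astype(float)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)        # centre coordinates (age, n_words)
print(km.cluster_centers_[:, 0])  # candidate cut points on the age axis
```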
Clustering categorical variable values based on continuous target values
Since you are looking for a degree of similarity regarding $y$, and the values of $x_1,...,x_5$ do not matter, you can view this as a clustering problem on $y$: let $y_1,...,y_5$ be the target values with $f(x_i)=y_i$ for $i\in\{1,...,5\}$. Then you need to define a distance measure $d(y_i,y_j)$, which, since your variables $y$ are continuous, could be the squared Euclidean distance: $d(y_i,y_j) = (y_i-y_j)^2$. But you could also choose the absolute difference. (Note that I am assuming your $y_i$ to be one-dimensional here, i.e. $y_i\in \mathbb R$.) That gives you a formula to measure "does not vary that much".

It also provides an answer to your second question: you can choose from a range of unsupervised ML algorithms to do the clustering, the most popular one probably being $K$-means. The idea is very straightforward:

```
1. Select a number of clusters K
2. Initialize the position of the K clusters randomly
3. Assign each y_i to the closest cluster
4. For each cluster k, calculate the new cluster position as the mean of all y_i belonging to that cluster
5. Repeat 3 and 4 until the assignments of y_i do not change anymore
```

Mathematically this gives you a mapping $c(y_i)=k$. Eventually, when you are done with the clustering, you just pick a cluster $k$ and for each $y_i$ in that cluster you look up the corresponding $x_i$, which returns what you asked for.
1123
1
1126
null
3
4861
Suppose I am interested in classifying a set of instances composed of different content types, e.g.:

- a piece of text
- an image

as `relevant` or `non-relevant` for a specific class `C`. In my classification process I perform the following steps:

- Given a sample, I subdivide it into text and image
- A first SVM binary classifier (SVM-text), trained only on text, classifies the text as relevant/non-relevant for the class C
- A second SVM binary classifier (SVM-image), trained only on images, classifies the image as relevant/non-relevant for the class C

Both `SVM-text` and `SVM-image` produce an estimate of the probability that the analyzed content (text or image) is relevant for the class `C`. Given this, I am able to state whether the text is relevant for `C` and whether the image is relevant for `C`. However, these estimates are valid only for segments of the original sample (either the text or the image), and it is not clear how to obtain a general opinion on the whole original sample (text+image). How can I conveniently combine the opinions of the two classifiers, so as to obtain a classification for the whole original sample?
Combine multiple classifiers to build a multi-modal classifier
CC BY-SA 3.0
null
2014-09-16T08:01:35.997
2014-09-16T12:32:10.130
null
null
3321
[ "classification", "svm" ]
Basically, you can do one of two things:

- Combine features from both classifiers. I.e., instead of SVM-text and SVM-image you may train a single SVM that uses both textual and visual features.
- Use ensemble learning. If you already have probabilities from the separate classifiers, you can simply use them as weights and compute a weighted average. For more sophisticated cases there are Bayesian combiners (each classifier has its prior), boosting algorithms (e.g. see AdaBoost) and others.

Note that ensembles were initially created for combining different learners, not different sets of features. In this latter case ensembles have an advantage mostly when the different kinds of features just can't be combined into a single vector efficiently. But in general, combining features is simpler and more straightforward.
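A small sketch of the weighted-average option, assuming the two SVMs from the question are trained with probability estimates enabled; the feature split and the weights are placeholders, not part of the original answer.

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Pretend the two halves of the feature matrix are the text and image features
X, y = make_classification(n_samples=300, n_features=40, n_informative=10, random_state=0)
X_text, X_img = X[:, :20], X[:, 20:]

svm_text = SVC(probability=True, random_state=0).fit(X_text, y)
svm_img  = SVC(probability=True, random_state=0).fit(X_img, y)

w_text, w_img = 0.6, 0.4            # assumed weights, e.g. from validation performance
p_text = svm_text.predict_proba(X_text)[:, 1]
p_img  = svm_img.predict_proba(X_img)[:, 1]

p_combined = w_text * p_text + w_img * p_img   # fused relevance estimate for the whole sample
pred = (p_combined >= 0.5).astype(int)
print(pred[:10])
```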
Weighted Linear Combination of Classifiers
I don't know how to fix your automatic differentiation, but I can show you what I did (and I have seen others do too) when I wanted to achieve the same thing. You can fit a linear meta-classifier on the outputs of the classifiers that you want to ensemble. Here is the implementation from [my scikit toolbox](https://github.com/simon-larsson/extrakit-learn):

```
'''
-------------------------------------------------------
Stack Classifier - extrakit-learn

Author: Simon Larsson <larssonsimon0@gmail.com>

License: MIT
-------------------------------------------------------
'''

from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
import numpy as np

class StackClassifier(BaseEstimator, ClassifierMixin):
    ''' Stack Classifier

    Ensemble classifier that uses one meta classifier and several
    sub-classifiers. The sub-classifiers give their output to the
    meta classifier which will use them as input features.

    Parameters
    ----------
    clfs : Classifiers whose output will assist the meta_clf, list classifier

    meta_clf : Ensemble classifier that makes the final output, classifier

    drop_first : Drop first class probability to avoid multi-collinearity, bool

    keep_features : If original input features should be used by meta_clf, bool

    refit : If sub-classifiers should be refit, bool
    '''

    def __init__(self, clfs, meta_clf, drop_first=True, keep_features=False, refit=True):
        self.clfs = clfs
        self.meta_clf = meta_clf
        self.drop_first = drop_first
        self.keep_features = keep_features
        self.refit = refit

    def fit(self, X, y):
        ''' Fitting of the classifier

        Parameters
        ----------
        X : array-like, shape (n_samples, n_features)
            The training input samples.

        y : array-like, shape (n_samples,)
            The target values. An array of int.

        Returns
        -------
        self : object
            Returns self.
        '''

        X, y = check_X_y(X, y, accept_sparse=True)

        # Refit of classifier ensemble
        if self.refit:
            for clf in self.clfs:
                clf.fit(X, y)

        # Build new tier-2 features
        X_meta = build_meta_X(self.clfs, X, self.drop_first, self.keep_features)

        # Fit meta classifier, stack the ensemble
        self.meta_clf.fit(X_meta, y)

        # set attributes
        self.n_features_ = X.shape[1]
        self.n_meta_features_ = X_meta.shape[1]
        self.n_clfs_ = len(self.clfs)

        return self

    def predict_proba(self, X):
        ''' Probability prediction

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            The prediction input samples.

        Returns
        -------
        y : ndarray, shape (n_samples,)
            Returns an array of probabilities, floats.
        '''

        X = check_array(X, accept_sparse=True)
        check_is_fitted(self, 'n_features_')

        # Build new tier-2 features
        X_meta = build_meta_X(self.clfs, X, self.drop_first, self.keep_features)

        return self.meta_clf.predict_proba(X_meta)

    def predict(self, X):
        ''' Classification

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape (n_samples, n_features)
            The prediction input samples.

        Returns
        -------
        y : ndarray, shape (n_samples,)
            Returns an array of classifications, bools.
        '''

        X = check_array(X, accept_sparse=True)
        check_is_fitted(self, 'n_features_')

        # Build new tier-2 features
        X_meta = build_meta_X(self.clfs, X, self.drop_first, self.keep_features)

        return self.meta_clf.predict(X_meta)

def build_meta_X(clfs, X=None, drop_first=True, keep_features=False):
    ''' Build features that include the outputs of the sub-classifiers

    Parameters
    ----------
    clfs : Classifiers whose output will assist the meta_clf, list classifier

    X : {array-like, sparse matrix}, shape (n_samples, n_features)
        The prediction input samples.

    drop_first : Drop first proba to avoid multi-collinearity, bool

    keep_features : If original input features should be used by meta_clf, bool

    Returns
    -------
    X_meta : {array-like, sparse matrix}, shape (n_samples, n_features + n_clfs*classes)
        The prediction input samples for the meta clf.
    '''

    if keep_features:
        X_meta = X
    else:
        X_meta = None

    for clf in clfs:

        if X_meta is None:
            if drop_first:
                X_meta = clf.predict_proba(X)[:, 1:]
            else:
                X_meta = clf.predict_proba(X)
        else:
            if drop_first:
                y_ = clf.predict_proba(X)[:, 1:]
            else:
                y_ = clf.predict_proba(X)
            X_meta = np.hstack([X_meta, y_])

    return X_meta
```

This would allow you to use any meta-classifier, but with linear models like ridge/lasso/logistic regression it will act as learned linear weights of your ensemble classifiers. Like this:

```
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from xklearn.models import StackClassifier

X, y = make_classification(n_classes=2, n_features=4, n_samples=1000)

meta_clf = LogisticRegression(solver='lbfgs')
ensemble = [DecisionTreeClassifier(max_depth=1),
            DecisionTreeClassifier(max_depth=5),
            DecisionTreeClassifier(max_depth=10)]

stack_clf = StackClassifier(clfs=ensemble, meta_clf=meta_clf)
stack_clf.fit(X, y)

print('Weights:', stack_clf.meta_clf.coef_[0], ' Bias: ', stack_clf.meta_clf.intercept_)
```

output:

```
Weights: [0.50017775 2.2626092 6.30510687] Bias: [-4.82988374]
```
1124
1
1138
null
9
2536
I am working on a text classification problem using Random Forest as the classifier and a bag-of-words approach. I am using the basic implementation of Random Forests (the one present in scikit), which creates a binary condition on a single variable at each split. Given this, is there a difference between using simple tf (term frequency) features, where each word has an associated weight that represents the number of occurrences in the document, and tf-idf (term frequency * inverse document frequency) features, where the term frequency is also multiplied by a value that represents the ratio between the total number of documents and the number of documents containing the word? In my opinion, there should not be any difference between these two approaches, because the only difference is a scaling factor on each feature, and since the split is done at the level of single features this should not make a difference. Am I right in my reasoning?
Difference between tf-idf and tf with Random Forests
CC BY-SA 3.0
null
2014-09-16T08:14:06.307
2014-09-18T13:14:37.940
null
null
3054
[ "classification", "text-mining", "random-forest" ]
Decision trees (and hence Random Forests) are insensitive to monotone transformations of input features. Since multiplying by the same factor is a monotone transformation, I'd assume that for Random Forests there indeed is no difference. However, you may eventually consider using other classifiers that do not have this property, so it may still make sense to use the entire TF * IDF.
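A quick way to check the claim empirically is to cross-validate the same Random Forest on raw counts and on tf-idf. A minimal sketch using the scikit-learn 20 newsgroups loader as an assumed stand-in corpus (it downloads data on first use):

```
from sklearn.datasets import fetch_20newsgroups
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

for vec in (CountVectorizer(), TfidfVectorizer()):
    pipe = make_pipeline(vec, RandomForestClassifier(n_estimators=100, random_state=0))
    score = cross_val_score(pipe, data.data, data.target, cv=3).mean()
    print(type(vec).__name__, round(score, 3))   # scores should be very close
```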
Using TF-IDF with other features in scikit-learn
scikit-learn's [FeatureUnion](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.FeatureUnion.html#sklearn.pipeline.FeatureUnion) concatenates features from different vectorizers. An example of combining heterogeneous data, including text, can be found [here](https://scikit-learn.org/stable/modules/compose.html#feature-union).
1147
1
1175
null
2
413
One of the nice aspects of the procedure that Vowpal Wabbit uses for SGD updates ([pdf](http://lowrank.net/nikos/pubs/liw.pdf)) is its so-called weight invariance, described in the linked paper as: "Among these updates we mainly focus on a novel set of updates that satisfies an additional invariance property: for all importance weights of h, the update is equivalent to two updates with importance weight h/2. We call these updates importance invariant." What does this mean and why is it useful?
Invariance Property of Vowpal Wabbit Updates - Explanation
CC BY-SA 3.0
null
2014-09-20T02:22:07.510
2014-10-01T18:15:01.893
2014-10-01T18:15:01.893
1138
1138
[ "machine-learning", "gradient-descent" ]
Often different data samples have different weightings (e.g. the cost of a misclassification error for one group of data is higher than for other classes). Most error metrics are of the form $\sum_i e_i$, where $e_i$ is the loss (e.g. squared error) on data point $i$. Therefore weightings of the form $\sum_i w_i e_i$ are equivalent to duplicating the data $w_i$ times (e.g. for integer $w_i$). One simple case is if you have repeated data - rather than keeping all the duplicated data points, you just "weight" your one repeated sample by the number of instances.

Now, whilst this is easy to do in a batch setting, it is hard in Vowpal Wabbit's online, big-data setting: given that you have a large data set, you do not want to just present the data $n$ times to deal with the weighting (because it increases your computational load). Similarly, just multiplying the gradient vector by the weighting - which is correct in batch gradient descent - will cause big problems for stochastic/online gradient descent: essentially you shoot off in one direction (think of large integer weights) and then you shoot off in the other, causing significant instability. SGD essentially relies on all the errors being of roughly the same order (so that the learning rate can be set appropriately).

So what they propose is to ensure that the update for training sample $x_i$ with weight $n$ is equivalent to presenting training sample $x_i$ $n$ times consecutively. The idea is that presenting it consecutively reduces the problem, because the error gradient (for that single example $x_i$) shrinks with each consecutive presentation and update (as you get closer and closer to the minimum for that specific example). In other words, the consecutive updates provide a kind of feedback control.

To me it sounds like you would still have instabilities (you get to zero error on $x_i$, then you get to zero error on $x_{i+1}$, ...); the learning rate will need to be adjusted to take into account the size of the weights.
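A small numerical sketch of the stability point (my own toy illustration, not VW's actual importance-invariant update rule): a single squared-loss SGD step scaled by a large importance weight overshoots, while applying the same weight as repeated consecutive updates re-evaluates the gradient each time and stays stable.

```
# Toy 1-D squared loss: l(w) = 0.5*(w*x - y)^2, gradient = (w*x - y)*x, optimum w* = 2
x, y = 1.0, 2.0
lr, h = 0.3, 20          # learning rate and a large importance weight

# (a) naive scaling: one step with the gradient multiplied by h
w = 0.0
w -= lr * h * (w * x - y) * x
print("scaled step:", w)          # 12.0, far past the optimum

# (b) h consecutive unweighted steps: the gradient shrinks as we approach the optimum
w = 0.0
for _ in range(h):
    w -= lr * (w * x - y) * x
print("repeated steps:", w)       # approaches 2 smoothly
```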
What are the criteria for updating bias values in back propagation?
Actually, weight values and bias values are updated simultaneously in each pass of backpropagation. That's because the orientation of the loss gradient vector is determined by the partial derivatives of all weights and biases with respect to the loss function. So if, in each pass, you want to move in the correct direction towards the minimum of the loss function, you must update both weights and biases at the same time and in the correct orientation.
1159
1
1169
null
36
32602
I have a large set of data (about 8GB). I would like to use machine learning to analyze it, so I think I should use SVD then PCA to reduce the data dimensionality for efficiency. However, MATLAB and Octave cannot load such a large dataset. What tools can I use to do SVD with such a large amount of data?
How to do SVD and PCA with big data?
CC BY-SA 4.0
null
2014-09-25T08:40:59.467
2019-06-09T17:14:32.920
2019-06-09T17:14:32.920
29169
3167
[ "bigdata", "data-mining", "dimensionality-reduction" ]
First of all, dimensionality reduction is used when you have many correlated dimensions and want to reduce the problem size by rotating the data points into a new orthogonal basis and taking only the axes with the largest variance. With 8 variables (columns) your space is already low-dimensional; reducing the number of variables further is unlikely to solve technical issues with memory size, but may affect dataset quality a lot. In your concrete case it's more promising to take a look at [online learning](http://en.wikipedia.org/wiki/Online_machine_learning) methods. Roughly speaking, instead of working with the whole dataset, these methods take a little part of it (often referred to as "mini-batches") at a time and build a model incrementally. (I personally like to interpret the word "online" as a reference to some infinitely long source of data from the Internet, like a Twitter feed, where you just can't load the whole dataset at once.)

But what if you really wanted to apply a dimensionality reduction technique like PCA to a dataset that doesn't fit into memory? Normally a dataset is represented as a data matrix X of size n x m, where n is the number of observations (rows) and m is the number of variables (columns). Typically, problems with memory come from only one of these two numbers.

## Too many observations (n >> m)

When you have too many observations, but the number of variables is small to moderate, you can build the covariance matrix incrementally. Indeed, typical PCA consists of constructing a covariance matrix of size m x m and applying singular value decomposition to it. With m=1000 variables of type float64, the covariance matrix has size 1000*1000*8 ~ 8Mb, which easily fits into memory and may be used with SVD. So you only need to build the covariance matrix without loading the entire dataset into memory - a [pretty tractable task](http://rebcabin.github.io/blog/2013/01/22/covariance-matrices/).

Alternatively, you can select a small representative sample from your dataset and approximate the covariance matrix. This matrix will have all the same properties as the normal one, just a little bit less accurate.

## Too many variables (n << m)

On the other hand, sometimes, when you have too many variables, the covariance matrix itself will not fit into memory. E.g. if you work with 640x480 images, every observation has 640*480=307200 variables, which results in a 703GB covariance matrix! That's definitely not what you would like to keep in the memory of your computer, or even in the memory of your cluster. So we need to reduce the dimensions without building a covariance matrix at all.

My favourite method for doing this is [Random Projection](http://web.stanford.edu/~hastie/Papers/Ping/KDD06_rp.pdf). In short, if you have a dataset X of size n x m, you can multiply it by some sparse random matrix R of size m x k (with k << m) and obtain a new matrix X' of a much smaller size n x k with approximately the same properties as the original one. Why does it work? Well, you should know that PCA aims to find a set of orthogonal axes (principal components) and project your data onto the first k of them. It turns out that sparse random vectors are nearly orthogonal and thus may also be used as a new basis.

And, of course, you don't have to multiply the whole dataset X by R - you can translate every observation x into the new basis separately or in mini-batches.

There's also a somewhat similar algorithm called Randomized SVD. I don't have any real experience with it, but you can find example code with explanations [here](https://stats.stackexchange.com/a/11934/3305).
---

As a bottom line, here's a short checklist for dimensionality reduction of big datasets:

- If you have not that many dimensions (variables), simply use online learning algorithms.
- If there are many observations, but a moderate number of variables (the covariance matrix fits into memory), construct the matrix incrementally and use normal SVD.
- If the number of variables is too high, use incremental algorithms.
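Both strategies (incremental fitting and random projection) are available off the shelf in scikit-learn; a minimal sketch, with array sizes as placeholders:

```
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.random_projection import SparseRandomProjection

X = np.random.rand(10000, 500)            # stand-in for data streamed from disk

# Case n >> m: fit PCA incrementally on mini-batches instead of all rows at once
ipca = IncrementalPCA(n_components=50, batch_size=1000)
for start in range(0, X.shape[0], 1000):
    ipca.partial_fit(X[start:start + 1000])
X_reduced = ipca.transform(X)

# Case n << m: skip the covariance matrix entirely with a sparse random projection
srp = SparseRandomProjection(n_components=200, random_state=0)
X_projected = srp.fit_transform(X)

print(X_reduced.shape, X_projected.shape)
```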
Data analysis PCA
PCA is [not recommended](https://www.researchgate.net/post/Should_I_use_PCA_with_categorical_data) for categorical features. There are equivalent algorithms for categorical features like [CATPCA](http://amse-conference.eu/history/amse2015/doc/Sulc_Rezankova.pdf) and MCA. [](https://i.stack.imgur.com/XEaVY.png)
1165
1
5252
null
6
7356
I'm going to start a Computer Science PhD this year, and for that I need a research topic. I am interested in Predictive Analytics in the context of Big Data, particularly in the area of Education (MOOCs, online courses...). In that field, what are the unexplored areas that could help me choose a strong topic? Thanks.
Looking for a strong Phd Topic in Predictive Analytics in the context of Big Data
CC BY-SA 3.0
null
2014-09-25T20:18:46.880
2020-08-17T20:32:32.897
2014-09-27T16:56:14.523
3433
3433
[ "machine-learning", "bigdata", "data-mining", "statistics", "predictive-modeling" ]
As a fellow CS Ph.D. defending my dissertation in a Big Data-esque topic this year (I started in 2012), the best piece of material I can give you is in a [link](http://www.rpajournal.com/dev/wp-content/uploads/2014/10/A3.pdf). This is an article written by two Ph.D.s from MIT who have talked about Big Data and MOOCs. Probably, you will find this a good starting point. BTW, along this note, if you really want to come up with a valid topic (that a committee and your adviser will let you propose, research and defend) you need to read LOTS and LOTS of papers. The majority of Ph.D. students make the fatal error of thinking that some 'idea' they have is new, when it's not and has already been done. You'll have to do something truly original to earn your Ph.D. Rather than actually focus on forming an idea right now, you should do a good literature survey and the ideas will 'suggest themselves'. Good luck! It's an exciting time for you.
Any Master Thesis Topics related to NoSQL and Machine Learning or Business Intelligence?
The question is a bit broad and opinion-based for StackExchange, but I'll have a quick go anyway: I don't see good topics in this area. For academic machine learning, how you store your data is largely irrelevant, either because the research is on small data anyway and will be read into memory, or because the research is pretty theoretical to begin with. There are certainly interesting issues to explore here for distributed ML. However NoSQL stores would not in general be helpful. They specialize in random access and random updates to keyed data. ML generally needs high-throughput sequential access to data without updating it. BI + ML is too broad. Yes there are topics in there somewhere but hard to discuss at that level.
1195
1
1206
null
2
151
I'm coding a program that tests several classifiers on the weather.arff database. I found the rules below and I want to classify test objects. I do not understand how the classification works; it is described as: "In classification, let R be the set of generated rules and T the training data. The basic idea of the proposed method is to choose a set of high confidence rules in R to cover T. In classifying a test object, the first rule in the set of rules that matches the test object condition classifies it. This process ensures that only the highest ranked rules classify test objects." How do I classify test objects?

```
No. outlook  temperature humidity windy play
1   sunny    hot         high     FALSE no
2   sunny    hot         high     TRUE  no
3   overcast hot         high     FALSE yes
4   rainy    mild        high     FALSE yes
5   rainy    cool        normal   FALSE yes
6   rainy    cool        normal   TRUE  no
7   overcast cool        normal   TRUE  yes
8   sunny    mild        high     FALSE no
9   sunny    cool        normal   FALSE yes
10  rainy    mild        normal   FALSE yes
11  sunny    mild        normal   TRUE  yes
12  overcast mild        high     TRUE  yes
13  overcast hot         normal   FALSE yes
14  rainy    mild        high     TRUE  no
```

Rules found:

```
1: (outlook,overcast) -> (play,yes) [Support=0.29 , Confidence=1.00 , Correctly Classify= 3, 7, 12, 13]
2: (humidity,normal), (windy,FALSE) -> (play,yes) [Support=0.29 , Confidence=1.00 , Correctly Classify= 5, 9, 10]
3: (outlook,sunny), (humidity,high) -> (play,no) [Support=0.21 , Confidence=1.00 , Correctly Classify= 1, 2, 8]
4: (outlook,rainy), (windy,FALSE) -> (play,yes) [Support=0.21 , Confidence=1.00 , Correctly Classify= 4]
5: (outlook,sunny), (humidity,normal) -> (play,yes) [Support=0.14 , Confidence=1.00 , Correctly Classify= 11]
6: (outlook,rainy), (windy,TRUE) -> (play,no) [Support=0.14 , Confidence=1.00 , Correctly Classify= 6, 14]
```

Thanks, Dung
How to classify test objects with this ruleset in order of priority?
CC BY-SA 3.0
null
2014-10-02T13:06:22.433
2016-12-13T09:18:45.437
2016-12-13T09:18:45.437
8501
3503
[ "classification", "association-rules" ]
Suppose your test object is `(sunny, hot, normal, TRUE)`. Look through the rules from top to bottom and see if any of the conditions are matched. The first rule, for example, tests the `outlook` feature. The value doesn't match, so the rule isn't matched. Move on to the next rule, and so on. In this case, rule 5 matches the test case and the classification for the play variable is "yes". More generally, for any test case, look at the values its features take and find the first rule that those values satisfy. The implication of that rule will be its classification.
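The "first matching rule wins" procedure is easy to express directly in code; here is a small sketch with the rules from the question hard-coded as (condition, label) pairs:

```
# Rules ordered by rank: ({feature: required_value, ...}, predicted_play)
rules = [
    ({"outlook": "overcast"}, "yes"),
    ({"humidity": "normal", "windy": "FALSE"}, "yes"),
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "rainy", "windy": "FALSE"}, "yes"),
    ({"outlook": "sunny", "humidity": "normal"}, "yes"),
    ({"outlook": "rainy", "windy": "TRUE"}, "no"),
]

def classify(obj, rules, default="yes"):
    for cond, label in rules:
        if all(obj.get(k) == v for k, v in cond.items()):
            return label            # first matching (highest-ranked) rule decides
    return default                  # fall back when no rule fires

test = {"outlook": "sunny", "temperature": "hot", "humidity": "normal", "windy": "TRUE"}
print(classify(test, rules))        # rule 5 fires -> "yes"
```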
Decision tree or rule
JRip implements a propositional rule learner, "Repeated Incremental Pruning to Produce Error Reduction" [(RIPPER)](http://sci2s.ugr.es/keel/pdf/algorithm/congreso/slipper.pdf), as proposed by Cohen (1995), and OneR builds a simple [1-R classifier](http://www.mlpack.org/papers/ds.pdf), proposed by Holte (1993). It's hard to say which algorithm works better. The best approach is to compare the performance of different classification algorithms in terms of precision, recall, accuracy, F1 score, AUC, specificity and sensitivity on your train/test data set and pick the one that gives the best results, or use an ensemble of the top-performing algorithms to build your final model. There is a good white paper doing a similar exercise of comparing different classification algorithms, including OneR and JRip, [here](https://www.researchgate.net/publication/273015702_Applying_Naive_bayes_BayesNet_PART_JRip_and_OneR_Algorithms_on_Hypothyroid_Database_for_Comparative_Analysis). Hope this helps.
1208
1
4895
null
3
73
Okay, here is the background: I am doing text mining, and my basic flow is like this: extract features (n-grams), reduce the feature count, score (tf-idf) and classify. For my own sake I am doing a comparison between SVM and neural network classifiers. Here is the weird part (or am I wrong and this is reasonable?): if I use 2-grams, the classifiers' results (accuracy/precision) are different and the SVM is the better one; but when I use 3-grams the results are exactly the same. What causes this? Is there any explanation? Is it a case of very separable classes?
What circumstances cause two different classifiers to classify data exactly like one another?
CC BY-SA 3.0
null
2014-10-04T19:49:53.543
2015-01-17T04:42:47.530
null
null
3530
[ "text-mining", "neural-network", "svm" ]
Your results are reasonable. Your data brings several ideas to mind:

1) It is quite reasonable that as you change the available features, this will change the relative performance of machine learning methods. This happens quite a lot. Which machine learning method performs best often depends on the features, so as you change the features the best method changes.

2) It is reasonable that in some cases, disparate models will reach the exact same results. This is most likely in the case where the number of data points is low enough or the data is separable enough that both models reach the exact same conclusions for all test points.
Different result of classification with same classifier and same input parameters
From [sklearns random forest documentation](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html): > random_state int, RandomState instance or None, default=None Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features). See Glossary for details. Each time you re-run this with `random_state = None` it runs different models. Set random_state to `0` (or any number) and see consistent results.
1223
1
1224
null
4
1877
I'm curious if anyone else has run into this. I have a data set with about 350k samples, each with 4k sparse features. The sparse fill rate is about 0.5%. The data is stored in a `scipy.sparse.csr.csr_matrix` object, with `dtype='numpy.float64'`. I'm using this as an input to sklearn's Logistic Regression classifier. The [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) indicates that sparse CSR matrices are acceptable inputs to this classifier. However, when I train the classifier, I get extremely bad memory performance; the memory usage of my process explodes from ~150 MB to fill all the available memory, and then everything grinds to a halt as memory swapping to disk takes over.

Does anyone know why this classifier might expand the sparse matrix to a dense matrix? I'm using the default parameters for the classifier at the moment, within an updated Anaconda distribution. Thanks!

```
scipy.__version__ = '0.14.0'
sklearn.__version__ = '0.15.2'
```
Scikit Learn Logistic Regression Memory Leak
CC BY-SA 3.0
null
2014-10-07T17:27:22.063
2014-10-07T18:09:19.433
null
null
3568
[ "efficiency", "performance", "scikit-learn" ]
Ok, this ended up being an RTFM situation, although in this case it was RTF error message. While running this, I kept getting the following error:

```
DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
```

I assumed that, since this had to do with the target vector, and since it was a warning only, it would just silently change my target vector to 1-D. However, when I explicitly converted my target vector to 1-D, my memory problems went away. Apparently, having the target vector in an incorrect form caused it to convert my input vectors from sparse to dense. Lesson learned: follow the recommendations when sklearn 'suggests' you do something.
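A condensed sketch of the fix, with a synthetic sparse matrix standing in for the real data (scaled down for speed): keep the input sparse and pass a 1-D target.

```
import numpy as np
import scipy.sparse as sp
from sklearn.linear_model import LogisticRegression

# Stand-in for the real data: sparse CSR matrix with ~0.5% fill rate
X = sp.random(10_000, 4_000, density=0.005, format="csr", random_state=0)
y_column = np.random.randint(0, 2, size=(X.shape[0], 1))   # shape (n_samples, 1) triggers the warning

clf = LogisticRegression(max_iter=200)
clf.fit(X, y_column.ravel())   # pass a 1-D target of shape (n_samples,) so X stays sparse
```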
Using scikit-learn iterative imputer with extra tree regressor eats a lot of RAM
TL;DR - use the `max_depth` and `max_samples` arguments to `ExtraTreesRegressor` to reduce the maximum tree size. The sizes you pick might depend on the distribution of your data. As a starting point, you could start with `max_depth=5` and `max_samples=0.1*data.shape[0]` (10%), and compare results to what you have already. Tweak as you see fit.

---

Apart from the fairly large input space, the data structure built by the `ExtraTreeRegressor` is the main issue. It will continue to expand the tree until each leaf reaches your criteria, namely `min_samples_leaf=1`. This means every single data point of your input dataset must end up in its own leaf. Apart from probably overfitting, this is going to lead to high memory consumption. See the `Note:` in [the relevant documentation](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesRegressor.html):

> The default values for the parameters controlling the size of the trees (e.g. max_depth, min_samples_leaf, etc.) lead to fully grown and unpruned trees which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values.

Each `ExtraTreesRegressor` that you create looks like it might make a full copy of your dataset, according to the documentation for `max_samples`:

```
max_samples : int or float, default=None
    If bootstrap is True, the number of samples to draw from X to train each base estimator.
    - If None (default), then draw `X.shape[0]` samples.
```

To gain a deeper understanding of how you might tune your memory usage, you could take a look at [the source code of the ExtraTreesRegressor](https://github.com/scikit-learn/scikit-learn/blob/15a949460/sklearn/ensemble/_forest.py#L1807).
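A sketch of those settings wired into the imputer; the depth and the 10% sample fraction are just the starting points suggested above, and the toy data is synthetic.

```
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor

X = np.random.rand(2000, 10)
X[np.random.rand(*X.shape) < 0.1] = np.nan   # assumed missingness pattern

estimator = ExtraTreesRegressor(
    n_estimators=50,
    max_depth=5,        # cap tree depth so leaves hold more than one sample
    bootstrap=True,     # required for max_samples to take effect
    max_samples=0.1,    # each tree sees ~10% of the rows
    n_jobs=-1,
    random_state=0,
)
imputer = IterativeImputer(estimator=estimator, random_state=0)
X_filled = imputer.fit_transform(X)
```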
1225
1
1235
null
2
411
I have product purchase count data which looks like this:

```
user item1 item2
a    2     4
b    1     3
c    5     6
...  ...   ...
```

These data are imported into `python` using `numpy.genfromtxt`. Now I want to process it to get the correlation between the `item1` purchase amount and the `item2` purchase amount -- basically, for each value `x` of `item1` I want to find all the users who bought `item1` in `x` quantity and then average `item2` over the same users. What is the best way to do this? I can do this using `for` loops, but I thought there might be something more efficient than that. Thanks!
data processing, correlation calculation
CC BY-SA 3.0
null
2014-10-08T10:42:41.833
2014-10-09T10:58:02.557
null
null
3580
[ "python", "correlation" ]
Pandas is the best thing since sliced bread (for data science, at least). An example:

```
import pandas as pd

In [22]: df = pd.read_csv('yourexample.csv')

In [23]: df
Out[23]:
  user  item1  item2
0    a      2      4
1    b      1      3
2    c      5      6

In [24]: df.columns
Out[24]: Index([u'user ', u'item1 ', u'item2'], dtype='object')

In [25]: df.corr()
Out[25]:
          item1     item2
item1  1.000000  0.995871
item2  0.995871  1.000000

In [26]: df.cov()
Out[26]:
          item1     item2
item1  4.333333  3.166667
item2  3.166667  2.333333
```

Bingo!
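For the specific aggregation asked about (for each item1 purchase amount, the average item2 over the same users), a groupby avoids any explicit loops; a small sketch on the same toy data:

```
import pandas as pd

df = pd.DataFrame({"user": ["a", "b", "c"], "item1": [2, 1, 5], "item2": [4, 3, 6]})

# Mean item2 purchase amount for each observed item1 purchase amount
avg_item2_by_item1 = df.groupby("item1")["item2"].mean()
print(avg_item2_by_item1)
```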
Problem regarding calculating correlation approach?
Most probably, you are using Pearson's correlation method, which is meant for two continuous features. Here, both the price_drop and the OHE features are binary categorical features. So you can use these methods:

- Phi - Phi is a measure of the degree of association between two binary variables (two categorical variables, each of which can have only one of two values).
- Cramér's V - Cramér's V is an extension of phi for tables larger than 2x2.

Both are extensions of the Chi-square test of independence. Since both features have 2 values, the two methods will output the same result.

```
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# dataset is your DataFrame
s1 = dataset['Status']
s2 = dataset[product_type_OHE_01]

n = len(s1)
r, c = s1.nunique(), s2.nunique()
matrix = pd.crosstab(s1, s2).values
chi_sq = chi2_contingency(matrix)

phi = np.sqrt(chi_sq[0]/n)
cramm_V = np.sqrt(chi_sq[0]/(n*min(r-1, c-1)))
print(phi, cramm_V)
```
1229
1
1231
null
7
8654
Let's say I have a database of users who rate different products on a scale of 1-5. Our recommendation engine recommends products to users based on the preferences of other users who are highly similar. My first approach to finding similar users was to use cosine similarity, and just treat user ratings as vector components. The main problem with this approach is that it just measures vector angles and doesn't take the rating scale or magnitude into consideration.

My question is this: are there any drawbacks to just using the percentage difference between the vector components of two vectors as a measure of similarity? What disadvantages, if any, would I encounter if I used that method instead of cosine similarity or Euclidean distance?

For example, why not just do this:

```
n = 5 stars
a = (1,4,4)
b = (2,3,4)

similarity(a,b) = 1 - ( (|1-2|/5) + (|4-3|/5) + (|4-4|/5) ) / 3 = .86667
```

Instead of cosine similarity:

```
a = (1,4,4)
b = (2,3,4)

CosSimilarity(a,b) =
(1*2)+(4*3)+(4*4) / sqrt( (1^2)+(4^2)+(4^2) ) * sqrt( (2^2)+(3^2)+(4^2) ) = .9697
```
Cosine Similarity for Ratings Recommendations? Why use it?
CC BY-SA 3.0
null
2014-10-09T01:41:11.797
2014-10-10T00:03:00.447
null
null
3587
[ "machine-learning", "recommender-system" ]
Rating bias and scale can easily be accounted for by standardization. The point of using Euclidean similarity metrics in vector space co-embeddings is that it reduces the recommendation problem to one of finding the nearest neighbors, which can be done efficiently both exactly and approximately. What you don't want to do in real-life settings is to have to compare every item/user pair and sort them according to some expensive metric. That just doesn't scale. One trick is to use an approximation to cull the herd to a manageable size of tentative recommendations, then run your expensive ranking on top of that.

edit: Microsoft Research is presenting a paper that covers this very topic at RecSys right now: [Speeding Up the Xbox Recommender System Using a Euclidean Transformation for Inner-Product Spaces](http://www.ulrichpaquet.com/Papers/SpeedUp.pdf)
Is this the correct way to apply a recommender system based on KNN and cosine similarity to predict continuous values?
> The predicted rating is equal to the sum of each neighbors rating times similarity, then divided by 10 (number of neighbors). You want to obtain a weighted-average of ratings, where the weight is the similarity score. Instead of dividing by 10 above, you should divide by the sum of similarities, to get a correct normalized score. Dividing by 10 is the likely reason for the predictions to be smaller than expected. The formula you should use is: > Final score = Sum_over_neighbours(Rating * Similarity) / Sum_over_neighbours(Similarity)
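In code, that weighted average looks like this (toy neighbour ratings and similarity scores, purely illustrative):

```
import numpy as np

ratings = np.array([4, 5, 3, 4, 2])            # the neighbours' ratings of the item
sims    = np.array([0.9, 0.7, 0.6, 0.4, 0.1])  # similarities to the target user

# Divide by the sum of similarities, not by the number of neighbours
predicted = np.dot(sims, ratings) / sims.sum()
print(round(predicted, 2))
```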
1236
1
1249
null
1
48
How can I get the polysemes of a word in WordNet or any other API? I am looking for any API, ideally with Java. Any idea is appreciated.
How to get the polysemes of a word in WordNet or any other API?
CC BY-SA 3.0
null
2014-10-09T12:26:01.643
2014-10-10T16:54:45.047
null
null
3598
[ "nlp" ]
There are several third-party Java APIs for WordNet listed here: [http://wordnet.princeton.edu/wordnet/related-projects/#Java](http://wordnet.princeton.edu/wordnet/related-projects/#Java) In the past, I've used JWNL the most: [http://sourceforge.net/projects/jwordnet/](http://sourceforge.net/projects/jwordnet/) The documentation for JWNL isn't great, but it should provide the functionality you need.
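For completeness, here is the same kind of lookup (all senses of a word) through WordNet's Python interface in NLTK; this is an alternative to the Java APIs above, not one of them, and it assumes the WordNet corpus has been downloaded first.

```
import nltk
nltk.download("wordnet")              # one-time download of the WordNet data
from nltk.corpus import wordnet as wn

# A polysemous word has several synsets, each representing one sense
for synset in wn.synsets("bank"):
    print(synset.name(), "-", synset.definition())
```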
Semantic networks: word2vec?
There are a few models that are trained to analyse a sentence and classify each token (or recognise dependencies between words).

- Part of speech tagging (POS) models assign to each word its function (noun, verb, ...) - have a look at this link
- Dependency parsing (DP) models will recognize which words go together (in this case Angela and Merkel for instance) - check this out
- Named entity recognition (NER) models will for instance say that "Angela Merkel" is a person, "Germany" is a country ... - another link
1244
1
1255
null
12
5203
When ML algorithms, e.g. Vowpal Wabbit or some of the factorization machines winning click-through rate competitions ([Kaggle](https://www.kaggle.com/c/criteo-display-ad-challenge/forums/t/10555/3-idiots-solution/55862#post55862)), mention that features are 'hashed', what does that actually mean for the model? Let's say there is a variable that represents the ID of an internet ad, which takes on values such as '236BG231'. Then I understand that this feature is hashed to a random integer. But my question is:

- Is the integer now used in the model as an integer (numeric), OR
- is the hashed value actually still treated like a categorical variable and one-hot-encoded? Thus, is the hashing trick just a way to save space somehow with large data?
Hashing Trick - what actually happens
CC BY-SA 3.0
null
2014-10-10T03:48:54.660
2014-10-11T19:48:20.583
null
null
1138
[ "machine-learning", "predictive-modeling", "kaggle" ]
The second bullet is the value in feature hashing. Hashing and one-hot encoding to sparse data saves space. Depending on the hash algorithm you can have varying degrees of collisions, which acts as a kind of dimensionality reduction.

Also, in the specific case of Kaggle, feature hashing and one-hot encoding help with feature expansion/engineering by taking all possible tuples (usually just second order but sometimes third) of features that are then hashed, with collisions that explicitly create interactions that are often predictive whereas the individual features are not.

In most cases this technique, combined with feature selection and elastic net regularization in LR, acts very similarly to a one-hidden-layer NN, so it performs quite well in competitions.
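A small scikit-learn sketch of that second bullet in action: categorical tokens are hashed straight into a fixed-width sparse indicator vector, and distinct values can collide into the same column. The ad IDs and the crossed feature are made-up examples, not from the question.

```
from sklearn.feature_extraction import FeatureHasher

hasher = FeatureHasher(n_features=16, input_type="string", alternate_sign=False)

# Each row is a list of categorical tokens, e.g. an ad ID plus a crossed feature
rows = [["ad=236BG231", "ad=236BG231^site=news"],
        ["ad=994XQ002", "ad=994XQ002^site=sports"]]

X = hasher.transform(rows)   # sparse matrix with 16 columns regardless of vocabulary size
print(X.toarray())
```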
How to use hashing trick with field-aware factorization machines
One option is to use [xLearn](https://github.com/aksnzhy/xlearn), a scikit-learn compatible package for FFM, which handles that issue automatically. If you require feature hashing, you can write [a custom feature hashing function](https://github.com/aksnzhy/xlearn/issues/99): ``` import hashlib def hash_str(string: str, n_bins: int) -> int: return int(hashlib.md5(string.encode('utf8')).hexdigest(), 16) % (n_bins-1) + 1 ```
1246
1
2515
null
10
8798
Let's assume that I want to train a stochastic gradient descent regression algorithm using a dataset that has N samples. Since the size of the dataset is fixed, I will reuse the data T times. At each iteration or "epoch", I use each training sample exactly once after randomly reordering the whole training set.

My implementation is based on Python and Numpy. Therefore, using vector operations can remarkably decrease computation time. Coming up with a vectorized implementation of batch gradient descent is quite straightforward. However, in the case of stochastic gradient descent I cannot figure out how to avoid the outer loop that iterates through all the samples at each epoch. Does anybody know any vectorized implementation of stochastic gradient descent?

EDIT: I've been asked why I would like to use online gradient descent if the size of my dataset is fixed. From [1], one can see that online gradient descent converges more slowly than batch gradient descent to the minimum of the empirical cost. However, it converges faster to the minimum of the expected cost, which measures generalization performance. I'd like to test the impact of these theoretical results on my particular problem by means of cross-validation. Without a vectorized implementation, my online gradient descent code is much slower than the batch gradient descent one. That remarkably increases the time it takes for the cross-validation process to be completed.

EDIT: I include here the pseudocode of my online gradient descent implementation, as requested by ffriend. I am solving a regression problem.

```
Method: on-line gradient descent (regression)
Input: X (nxp matrix; each line contains a training sample, represented as a length-p vector),
       Y (length-n vector; output of the training samples)
Output: A (length-p+1 vector of coefficients)

Initialize coefficients (assign value 0 to all coefficients)
Calculate outputs F
prev_error = inf
error = sum((F-Y)^2)/n
it = 0
while abs(error - prev_error)>ERROR_THRESHOLD and it<=MAX_ITERATIONS:
    Randomly shuffle training samples
    for each training sample i:
        Compute error for training sample i
        Update coefficients based on the error above
    prev_error = error
    Calculate outputs F
    error = sum((F-Y)^2)/n
    it = it + 1
```

[1] "Large Scale Online Learning", L. Bottou, Y. Le Cun, NIPS 2003.
Stochastic gradient descent based on vector operations?
CC BY-SA 3.0
null
2014-10-10T13:34:11.543
2014-11-21T11:50:47.717
2014-11-21T10:02:39.520
2576
2576
[ "python", "gradient-descent", "regression" ]
First of all, the word "sample" is normally used to describe a [subset of the population](http://en.wikipedia.org/wiki/Sample_%28statistics%29), so I will refer to the same thing as an "example".

Your SGD implementation is slow because of this line:

```
for each training example i:
```

Here you explicitly use exactly one example for each update of the model parameters. By definition, vectorization is a technique for converting operations on one element into operations on a vector of such elements. Thus, no, you cannot process examples one by one and still use vectorization.

You can, however, approximate true SGD by using mini-batches. A mini-batch is a small subset of the original dataset (say, 100 examples). You calculate the error and parameter updates based on mini-batches, but you still iterate over many of them without global optimization, making the process stochastic. So, to make your implementation much faster, it's enough to change the previous line to:

```
batches = split dataset into mini-batches
for batch in batches:
```

and calculate the error from the batch, not from a single example.

Though pretty obvious, I should also mention vectorization at the per-example level. That is, instead of something like this:

```
theta = np.array([...])  # parameter vector
x = np.array([...])      # example
y = 0                    # predicted response
for i in range(len(x)):
    y += x[i] * theta[i]
error = (true_y - y) ** 2  # true_y - true value of response
```

you should definitely do something like this:

```
error = (true_y - np.dot(x, theta)) ** 2
```

which, again, is easy to generalize for mini-batches:

```
true_y = np.array([...])     # vector of response values
X = np.array([[...], [...]]) # mini-batch
errors = true_y - np.dot(X, theta)
error = sum(e ** 2 for e in errors)
```
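Putting these pieces together, a self-contained vectorized mini-batch SGD for linear regression might look like the following sketch (synthetic data, not the asker's setup):

```
import numpy as np

rng = np.random.default_rng(0)
n, p = 10_000, 20
X = rng.normal(size=(n, p))
true_theta = rng.normal(size=p)
y = X @ true_theta + 0.1 * rng.normal(size=n)

theta = np.zeros(p)
lr, batch_size, n_epochs = 0.01, 100, 20

for epoch in range(n_epochs):
    perm = rng.permutation(n)                       # reshuffle every epoch
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ theta - yb) / len(idx)  # vectorized gradient of the squared error
        theta -= lr * grad

print("mean squared error:", np.mean((X @ theta - y) ** 2))
```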
Gradient descent with vector-valued loss
> I see clearly that this works for $l(w) \in \mathbb{R}$, but am wondering how it generalizes to vector-valued loss functions, i.e. $l(w) \in \mathbb{R}^n$ for $n > 1$. Generally in neural network optimisers it does not*, because it is not possible to define what optimising a multi-value function means whilst keeping the values separate. If you have a multi-valued loss function, you will need to reduce it to a single value in order to optimise. When a neural network has multiple outputs, then typically the loss function that is optimised is a (possibly weighted) sum of the individual loss functions calculated from each prediction/ground truth pair in the output vector. If your loss function is naturally a vector, then you must choose some reduction of it to scalar value e.g. you can minimise the magnitude or maximise some dot-product of a vector, but you cannot "minimise a vector". --- * There is a useful definition of [multi-objective optimisation](https://en.wikipedia.org/wiki/Multi-objective_optimization), which effectively finds multiple sets of parameters that cannot be improved upon (for a very specific definition of optimality called Pareto optimality). I do not think it is commonly used in neural network frameworks such as TensorFlow. Instead I suspect that passing a vector loss function into TensorFlow optimiser will cause it to optimise a simple sum of vector components.
2258
1
2261
null
7
724
First of all, I know the question may not be suitable for the website, but I'd really appreciate it if you just gave me some pointers. I'm a 16-year-old programmer and I've had experience with many different programming languages. A while ago I started a Coursera course titled Introduction to Machine Learning, and since that moment I have been very motivated to learn about AI. I started reading about neural networks and I made a working perceptron using Java, and it was really fun, but when I started to do something a little more challenging (building digit recognition software), I found out that I have to learn a lot of math. I love math, but the schools here don't teach us much. Now, I happen to know someone who is a math teacher. Do you think learning math (specifically calculus) is necessary for me to learn AI, or should I wait until I learn that stuff at school? Also, what other things would be helpful on my path to learning AI and machine learning? Do other techniques (like SVM) also require strong math? Sorry if my question is long; I'd really appreciate it if you could share any experience you have had with learning AI.
Where to start on neural networks
CC BY-SA 3.0
null
2014-10-12T11:23:26.493
2019-11-26T08:41:17.383
null
null
4620
[ "machine-learning", "neural-network", "svm" ]
No, you should go ahead and learn the maths on your own. You will "only" need to learn calculus, statistics, and linear algebra (like the rest of machine learning). The theory of neural networks is pretty primitive at this point -- it is more of an art than a science -- so I think you can understand it if you try. That said, there are a lot of tricks that you need practical experience to learn. There are a lot of complicated extensions, but you can worry about them once you get that far. Once you can understand the Coursera classes on ML and neural networks (Hinton's), I suggest getting some practice. You might like [this](http://karpathy.github.io/neuralnets/) introduction.
How Do I Learn Neural Networks?
I have a Master's in Computer Science and my thesis was about time-series prediction using neural networks. The book [Hands on machine learning with Scikit and Tensorflow](https://rads.stackoverflow.com/amzn/click/1491962291) was extremely helpful from a practical point of view. It lays things out very clearly, without much theory and math. I strongly recommend it. On the other hand, the [book](https://www.deeplearningbook.org/) by Ian Goodfellow is also a must (kind of the bible of DL). There you'll find the theoretical explanations; it will also leave you much, much more knowledgeable about deep learning, from the humble beginnings of the field until now. Another, as others have suggested, is of course [Deep Learning with Python](https://rads.stackoverflow.com/amzn/click/1617294438) by Chollet. I really enjoyed reading this book. Indeed, it is very well written, and again, it teaches you tricks and concepts that you would hardly grasp from tutorials and online courses. Furthermore, I see you are familiar with Matlab, so maybe you have taken some stats/probability classes; otherwise, all of this may overwhelm you a bit.
2273
1
2274
null
5
1083
I have a variable whose value I would like to predict, and I would like to use only one variable as predictor. For instance, predict traffic density based on weather. Initially, I thought about using [Self-Organizing Maps](http://en.wikipedia.org/wiki/Self-organizing_map) (SOM), which performs unsupervised clustering + regression. However, since it has an important component of dimensionality reduction, I see it as more appropriate for a large number of variables. Does it make sense to use it with a single variable as the predictor? Maybe there are more suitable techniques for this simple case: I used "Data Mining" instead of "machine learning" in the title of my question because I think maybe a linear regression could do the job...
What is the best Data Mining algorithm for prediction based on a single variable?
CC BY-SA 3.0
null
2014-10-14T08:50:53.907
2014-10-18T13:25:13.107
2014-10-18T13:25:13.107
84
3159
[ "clustering", "regression", "correlation" ]
A common rule in machine learning is to try simple things first. For predicting continuous variables there's nothing more basic than simple linear regression. "Simple" in the name means that there's only one predictor variable used (+ intercept, of course):

```
y = b0 + x*b1
```

where `b0` is an intercept and `b1` is a slope. For example, you may want to predict lemonade consumption in a park based on temperature:

```
cons = b0 + temp * b1
```

Temperature is a well-defined continuous variable. But if we talk about something more abstract like "weather", then it's harder to understand how we measure and encode it. It's ok if we say that the weather takes values `{terrible, bad, normal, good, excellent}` and assign these values numbers from -2 to +2 (implying that "excellent" weather is twice as good as "good"). But what if the weather is given by words `{shiny, rainy, cool, ...}`? We can't give an order to these variables. We call such variables categorical. Since there's no natural order between different categories, we can't encode them as a single numerical variable (and linear regression expects numbers only), but we can use so-called dummy encoding: instead of a single variable `weather` we use 3 variables - `[weather_shiny, weather_rainy, weather_cool]`, only one of which can take the value 1, while the others take the value 0. In fact, we will have to drop one variable because of [collinearity](http://en.wikipedia.org/wiki/Multicollinearity). So a model for predicting traffic from weather may look like this:

```
traffic = b0 + weather_shiny * b1 + weather_rainy * b2 # weather_cool dropped
```

where either `weather_shiny` or `weather_rainy` is 1, or both are 0.

Note that you can also encounter non-linear dependency between predictor and predicted variables (you can easily check it by plotting `(x,y)` pairs). The simplest way to deal with this without abandoning the linear model is to use polynomial features - simply add polynomials of your feature as new features. E.g. for the temperature example (for dummy variables it doesn't make sense, because `1^n` and `0^n` are still 1 and 0 for any `n`):

```
traffic = b0 + temp * b1 + temp^2 * b2 [+ temp^3 * b3 + ...]
```
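For concreteness, here is a small Python sketch of the dummy-encoding idea with made-up data (in R the same thing is essentially a one-liner, since `lm(traffic ~ weather, data=df)` encodes factors automatically):

```
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical data: traffic density vs. a categorical weather variable
df = pd.DataFrame({
    "weather": ["shiny", "rainy", "cool", "rainy", "shiny", "cool"],
    "traffic": [120, 80, 95, 70, 130, 100],
})

# Dummy encoding; drop_first avoids the collinearity mentioned above
X = pd.get_dummies(df["weather"], drop_first=True, dtype=float)
model = LinearRegression().fit(X, df["traffic"])
print(model.intercept_, dict(zip(X.columns, model.coef_)))
```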
Choosing the right data mining method to find the effect of each parameter over the target
You can try Bayesian belief networks (BBNs). BBNs can easily handle categorical variables and give you the picture of the multivariable interactions. Furthermore, you may use sensitivity analysis to observe how each variable influences your class variable. Once you learn the structure of the BBN, you can identify the Markov blanket of the class variable. The variables in the Markov blanket of the class variable is a subset of all the variables, and you may use optimization techniques to see which combination of values in this Markov blanket maximizes your class prediction.
2287
1
2288
null
2
331
I am kind of a newbie at machine learning and I would like to ask some questions based on a problem I have. Let's say I have x, y, z as variables and I have values of these variables as time progresses, like:

t0 = x0 y0 z0
t1 = x1 y1 z1
...
tn = xn yn zn

Now I want a model such that, when it's given 3 values of x, y, z, it returns a prediction for them, like:

Input: x_test y_test z_test
Output: x_prediction y_prediction z_prediction

These values are floating-point numbers. What is the best model for this kind of problem? Thanks in advance for all the answers.

More details: OK, so let me give some more details about the problem so as to be more specific. I have run certain benchmarks and taken values of performance counters from the cores of a system per interval. The performance counters are the x, y, z in the above example. They are dependent on each other. A simple example is x = IPC, y = cache misses, z = energy at the core. So I have this dataset of all these performance counters per interval. What I want to do is create a model that, after learning from the training dataset, will be given a certain state of the core (the performance counters) and predict the performance counters that the core will have in the next interval.
Regression Model for explained model(Details inside)
CC BY-SA 3.0
null
2014-10-16T12:15:32.017
2014-10-27T16:04:23.527
2014-10-27T16:04:23.527
4668
4668
[ "machine-learning", "logistic-regression", "predictive-modeling", "regression" ]
AFAIK if you want to predict the value of one variable, you need to have one or more variables as predictors; i.e.: you assume the behaviour of one variable can be explained by the behaviour of other variables. In your case you have three independent variables whose value you want to predict, and since you don't mention any other variables, I assume that each variable depends on the others. In that case you could fit three models (for instance, regression models), each of which would predict the value of one variable, based on the others. As an example, to predict x:

```
x_prediction=int+cy*y_test+cz*z_test
```

where int is the intercept and cy, cz are the coefficients of the linear regression. Likewise, in order to predict y and z:

```
y_prediction=int+cx*x_test+cz*z_test
z_prediction=int+cx*x_test+cy*y_test
```
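As a rough scikit-learn sketch of fitting those three models (the column names and counter values below are made up for illustration):

```
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical per-interval performance counters
df = pd.DataFrame({
    "ipc":          [1.2, 1.0, 0.8, 1.1, 0.9, 1.3],
    "cache_misses": [200, 250, 320, 210, 300, 180],
    "energy":       [5.1, 5.4, 6.0, 5.2, 5.8, 4.9],
})

# One model per counter, predicting it from the other two
models = {}
for target in df.columns:
    predictors = df.drop(columns=target)
    models[target] = LinearRegression().fit(predictors, df[target])

# Predict each counter from the other two for a new observation
new = pd.DataFrame({"ipc": [1.05], "cache_misses": [230], "energy": [5.3]})
for target, model in models.items():
    print(target, model.predict(new.drop(columns=target)))
```

For the next-interval prediction described in the question, you would shift the target column by one interval (e.g. `df[target].shift(-1)`) before fitting, so that each model maps the current counters to the value in the following interval.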
how to interpret predictions from model?
Alright so I rewrote some parts of your model such that it makes more sense for a classification problem. The first and most obvious reason your network was not working is due to the number of output nodes you selected. For a classification task the number of output nodes should be the same as the number of classes in your data. In this case we have 5 kinds of flowers, thus 5 labels which I reassigned to $y \in \{0, 1, 2, 3, 4\}$, thus we will have 5 output nodes. So let's go through the code. First we bring the data into the notebook using the code you wrote. ``` from os import listdir import cv2 daisy_path = "flowers/daisy/" dandelion_path = "flowers/dandelion/" rose_path = "flowers/rose/" sunflower_path = "flowers/sunflower/" tulip_path = "flowers/tulip/" def iter_images(images,directory,size,label): try: for i in range(len(images)): img = cv2.imread(directory + images[i]) img = cv2.resize(img,size) img_data.append(img) labels.append(label) except: pass img_data = [] labels = [] size = 64,64 iter_images(listdir(daisy_path),daisy_path,size,0) iter_images(listdir(dandelion_path),dandelion_path,size,1) iter_images(listdir(rose_path),rose_path,size,2) iter_images(listdir(sunflower_path),sunflower_path,size,3) iter_images(listdir(tulip_path),tulip_path,size,4) ``` We can visualize the data to get a better idea of the distribution of the classes. ``` import matplotlib.pyplot as plt %matplotlib inline n_classes = 5 training_counts = [None] * n_classes testing_counts = [None] * n_classes for i in range(n_classes): training_counts[i] = len(y_train[y_train == i])/len(y_train) testing_counts[i] = len(y_test[y_test == i])/len(y_test) # the histogram of the data train_bar = plt.bar(np.arange(n_classes)-0.2, training_counts, align='center', color = 'r', alpha=0.75, width = 0.41, label='Training') test_bar = plt.bar(np.arange(n_classes)+0.2, testing_counts, align='center', color = 'b', alpha=0.75, width = 0.41, label = 'Testing') plt.xlabel('Labels') plt.xticks((0,1,2,3,4)) plt.ylabel('Count (%)') plt.title('Label distribution in the training and test set') plt.legend(bbox_to_anchor=(1.05, 1), handles=[train_bar, test_bar], loc=2) plt.grid(True) plt.show() ``` [](https://i.stack.imgur.com/2yJNJ.png) We will now transform the data and the labels to matrices. ``` import numpy as np data = np.array(img_data) data.shape data = data.astype('float32') / 255.0 labels = np.asarray(labels) ``` Then we will split the data.. Notice that you do not need to shuffle the data yourself since sklearn can do it for you. ``` from sklearn.model_selection import train_test_split # Split the data x_train, x_test, y_train, y_test = train_test_split(data, labels, test_size=0.33, shuffle= True) ``` Let's construct our model. I changed the last layer to use the softmax activation function. This will allow the outputs of the network to sum up to a total probability of 1. This is the usual activation function to use for classification tasks. 
```
from __future__ import print_function
import keras
from keras.models import Sequential
from keras.layers import Dense, Flatten, Convolution2D, MaxPool2D

model = Sequential()
model.add(Convolution2D(32, (3,3), input_shape=(64, 64, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(5, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
```

Then we can train our network. Since the loss is categorical_crossentropy, the integer labels first need to be one-hot encoded. This will result in about 60% accuracy on the test set. This is pretty good considering the baseline for this task is 20%.

```
# One-hot encode the integer labels for categorical_crossentropy
y_train_binary = keras.utils.to_categorical(y_train, 5)
y_test_binary = keras.utils.to_categorical(y_test, 5)

batch_size = 128
epochs = 10

model.fit(x_train, y_train_binary,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test_binary))
```

After the model is trained you can predict instances as follows. Don't forget that the network expects input of the same shape, so we must maintain the dimensionality of the matrix; that's why I use the [0:1].

```
print('Predict the classes: ')
prediction = model.predict_classes(x_test[0:1])
print('Predicted class: ', prediction)
print('Real class: ', y_test[0:1])
```

This gives

> Predict the classes:
> 1/1 [==============================] - 0s 6ms/step
> Predicted class: [4]
> Real class: [4]

# Some suggestions

The model you are currently using is the one that is most common for MNIST. However, that data has only a single channel, so it doesn't need as many layers. You can increase the performance by increasing the complexity of your model, or by reducing the complexity of your data; for example, you can train using the grayscale equivalent of the images, thus reducing the problem to a single channel.
2293
1
2296
null
10
2238
I am trying to set up a big data infrastructure using Hadoop, Hive, Elastic Search (amongst others), and I would like to run some algorithms over certain datasets. I would like the algorithms themselves to be scalable, so this excludes using tools such as Weka, R, or even RHadoop. The [Apache Mahout Library](https://mahout.apache.org) seems to be a good option, and it features [algorithms for regression and clustering tasks](https://mahout.apache.org/users/basics/algorithms.html). What I am struggling to find is a solution for anomaly or outlier detection. Since Mahout features Hidden Markov Models and a variety of clustering techniques (including K-Means), I was wondering if it would be possible to build a model to detect outliers in time series using any of these. I would be grateful if somebody experienced in this could advise me on:

- whether it is possible, and if so, how to do it,
- an estimate of the effort involved, and
- the accuracy/problems of this approach.
Scalable Outlier/Anomaly Detection
CC BY-SA 3.0
null
2014-10-17T10:47:13.197
2017-05-05T11:04:04.157
2017-05-04T19:57:23.533
31513
3159
[ "data-mining", "bigdata", "algorithms", "outlier" ]
I would take a look at the [t-digest algorithm](https://github.com/tdunning/t-digest). It has [been merged into Mahout](https://issues.apache.org/jira/browse/MAHOUT-1361) and is also part of [some other libraries](http://github.com/addthis/stream-lib/blob/master/src/main/java/com/clearspring/analytics/stream/quantile/TDigest.java) for big data streaming. You can learn more about this algorithm in particular, and about big data anomaly detection in general, in the following resources:

- Practical machine learning anomaly detection book.
- Webinar: Anomaly Detection When You Don't Know What You Need to Find
- Anomaly Detection in Elasticsearch.
- Beating Billion Dollar Fraud Using Anomaly Detection: A Signal Processing Approach using Argyle Data on the Hortonworks Data Platform with Accumulo
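To give a feel for the idea: t-digest approximates extreme quantiles over a stream, and points far outside those quantiles are candidate outliers. Below is a rough, non-streaming Python sketch of that idea using exact quantiles; a real deployment would swap `np.percentile` for a t-digest (e.g. the Mahout or stream-lib implementation) so it scales to data that does not fit in memory. The thresholds and toy data are arbitrary.

```
import numpy as np

def quantile_outliers(values, lower_q=0.05, upper_q=99.95):
    """Flag points outside extreme quantiles; a t-digest would estimate
    these quantiles approximately over a stream instead of exactly in memory."""
    lo, hi = np.percentile(values, [lower_q, upper_q])
    return (values < lo) | (values > hi)

rng = np.random.RandomState(42)
series = np.concatenate([rng.normal(0, 1, 10000), [8.0, -9.5, 12.0]])  # 3 injected anomalies
flags = quantile_outliers(series)
print(flags.sum(), series[flags])
```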
Outliers handling
It's not always a good idea to remove data from your dataset. In some circumstances - and income is a good example - your data will be skewed / long-tailed and so will lie outside of the interquartile range. This doesn't imply that there is anything wrong with the data, but rather that there is a disparity between observations. Nevertheless, if you are set on removing observations perhaps you should consider scaling your features prior to determining which observations are outliers. For example, taking the `log` of a feature and then applying your outlier removal based on the `log(variable)`. Don't forget that IQR doesn't carry over well to categorical and ordered features.
2298
1
2363
null
2
355
I've built a toy Random Forest model in `R` (using the `German Credit` dataset from the `caret` package), exported it in `PMML 4.0` and deployed onto Hadoop, using the `Cascading Pattern` library. I've run into an issue where `Cascading Pattern` scores the same data differently (in a binary classification problem) than the same model in `R`. Out of 200 observations, 2 are scored differently. Why is this? Could it be due to a difference in the implementation of Random Forests?
Differences in scoring from PMML model on different platforms
CC BY-SA 3.0
null
2014-10-17T13:58:39.353
2014-10-29T14:37:00.910
null
null
1127
[ "machine-learning", "r", "apache-hadoop", "random-forest", "predictive-modeling" ]
The difference was, it appears, due to the different implementation of Random Forests in `R` and `Cascading Pattern` (as well as `openscoring`, which I tried later) with respect to ties in the tree voting - i.e. when an even number of trees are built (say, 500) and exactly half classify an application as `Good` and the other half as `Bad`, the handling of those situations differs. I solved it by growing an odd number (501) of trees.
Why is gridsearchCV.best_estimator_.score giving me r2_score even if I mentioned MAE as my main scoring metric?
This is the default behavior for any Scikit-learn regressor, and as far as I know, it cannot be modified. For regressors the `score` method will return $R^2$, and for classifiers it will return accuracy. ([check](https://github.com/scikit-learn/scikit-learn/blob/7e1e6d09b/sklearn/base.py#L662)) If you want to evaluate the best estimator with MAE you simply have to do:

```
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_test, model_cv.best_estimator_.predict(x_test))
```

Hope it helps!
2302
1
2307
null
3
5042
I want to analyze the effectiveness and efficiency of kernel methods, for which I would require 3 different data sets in 2-dimensional space, one for each of the following cases:

- BAD_kmeans: The data set for which the kmeans clustering algorithm will not perform well.
- BAD_pca: The data set for which the Principal Component Analysis (PCA) dimension reduction method, upon projection of the original points into 1-dimensional space (i.e., the first eigenvector), will not perform well.
- BAD_svm: The data set for which the linear Support Vector Machine (SVM) supervised classification method using two classes of points (positive and negative) will not perform well.

Which packages can I use in R to generate a random 2D data set for each of the above cases? A sample script in R would help in understanding.
R Script to generate random dataset in 2d space
CC BY-SA 3.0
null
2014-10-18T04:58:45.100
2014-10-19T03:35:17.557
null
null
3577
[ "machine-learning", "classification", "r", "clustering" ]
None of the algorithms you mention are good with data that has a uniform distribution.

```
size <- 20   # length of random number vectors
set.seed(1)
x <- runif(size)  # generate samples from uniform distribution (0.0, 1.0)
y <- runif(size)
df <- data.frame(x, y)

# other distributions: rpois, rmvnorm, rnbinom, rbinom, rbeta, rchisq, rexp,
# rgamma, rlogis, rstab, rt, rgeom, rhyper, rwilcox, rweibull.
```

See [this page](http://statistics.ats.ucla.edu/stat/r/modules/prob_dist.htm) for a tutorial on generating random samples from distributions.

---

For a specific set of randomized data sets that are 'hard' for these methods (e.g. linearly inseparable n-class XOR patterns), see this blog post (incl. R code): [http://tjo-en.hatenablog.com/entry/2014/01/06/234155](http://tjo-en.hatenablog.com/entry/2014/01/06/234155).
How can i get this way to create random data?
Providing multiple values to either the `loc` or `scale` arguments can be used to generate multiple random distributions at once with different parameters. In the code you provided the values for the `loc` argument are the same, meaning that you could also just use the value `-2` instead of `(-2, -2)`. You can see this when fixing the seed and generating new numbers:

```
import numpy as np

np.random.seed(0)
print(np.random.normal((-2, -2), size=(5,2)))
# [[-0.23594765 -1.59984279]
#  [-1.02126202  0.2408932 ]
#  [-0.13244201 -2.97727788]
#  [-1.04991158 -2.15135721]
#  [-2.10321885 -1.5894015 ]]

np.random.seed(0)
print(np.random.normal(-2, size=(5,2)))
# [[-0.23594765 -1.59984279]
#  [-1.02126202  0.2408932 ]
#  [-0.13244201 -2.97727788]
#  [-1.04991158 -2.15135721]
#  [-2.10321885 -1.5894015 ]]
```

The difference between the two lines in your code is that one is generating random noise from a normal (Gaussian) distribution with a mean of -2 and the other with a mean of 2; see also the `loc` keyword in [the documentation](https://numpy.org/doc/stable/reference/random/generated/numpy.random.normal.html).
2323
1
39726
null
11
6074
Can anyone explain how field-aware factorization machines (FFM) compare to standard Factorization Machines (FM)? Standard: [http://www.ismll.uni-hildesheim.de/pub/pdfs/Rendle2010FM.pdf](http://www.ismll.uni-hildesheim.de/pub/pdfs/Rendle2010FM.pdf) "Field Aware": [http://www.csie.ntu.edu.tw/~r01922136/kaggle-2014-criteo.pdf](http://www.csie.ntu.edu.tw/~r01922136/kaggle-2014-criteo.pdf)
Field Aware Factorization Machines
CC BY-SA 3.0
null
2014-10-21T00:09:40.597
2018-10-17T20:53:25.863
2016-10-01T16:31:45.827
20995
1138
[ "machine-learning", "recommender-system" ]
It seems like you're asking for a high-level description. If you refer to the [slides](https://www.csie.ntu.edu.tw/~r01922136/slides/ffm.pdf) linked to within the slides of your original post, there's a comparison of FM (slide 11) vs FFM (slide 12). As a quick example, if you're learning about users and movies, FM might have the following factor: `w_{user_1}*w_{movie_1}*... + w{user_1}*w_{genre_1}*...` FFM would have: `w_{user_1, movies}*w_{movie_1, users}*... + w{user_1, genres}*w_{genre_1, users}*...` The key difference is that in FM, the `w_{user_1}` coefficient is the same in both terms--there's a single notion of the user. In FFM, you learn a separate `w_{user_1}` for each context, e.g. whether it's interacting with movies or genres. Note that it isn't learned separately for each particular movie or genre, but for movies and genres generally. That is, it separately learns the context of the user for each type of interaction. Also note that `w_{movie_1}` went to `w_{movie_1, users}` since that term is interacting with `w_{user_1}`, a user.
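A toy numeric illustration of that difference (the 2-dimensional latent vectors below are made up, not taken from either paper): FM keeps one latent vector per feature value, while FFM keeps one per (feature value, field) pair.

```
import numpy as np

# FM: one latent vector per feature value, reused in every interaction
fm = {
    "user_1":  np.array([0.1, 0.3]),
    "movie_1": np.array([0.2, 0.4]),
    "genre_1": np.array([0.5, 0.1]),
}
fm_score = fm["user_1"] @ fm["movie_1"] + fm["user_1"] @ fm["genre_1"]

# FFM: one latent vector per (feature value, field of the interacting feature)
ffm = {
    ("user_1", "movies"): np.array([0.1, 0.3]),
    ("user_1", "genres"): np.array([0.7, 0.2]),   # user_1 looks different in the genre context
    ("movie_1", "users"): np.array([0.2, 0.4]),
    ("genre_1", "users"): np.array([0.5, 0.1]),
}
ffm_score = (ffm[("user_1", "movies")] @ ffm[("movie_1", "users")]
             + ffm[("user_1", "genres")] @ ffm[("genre_1", "users")])

print(fm_score, ffm_score)
```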
How to use hashing trick with field-aware factorization machines
One option is to use [xLearn](https://github.com/aksnzhy/xlearn), a scikit-learn compatible package for FFM, which handles that issue automatically. If you require feature hashing, you can write [a custom feature hashing function](https://github.com/aksnzhy/xlearn/issues/99): ``` import hashlib def hash_str(string: str, n_bins: int) -> int: return int(hashlib.md5(string.encode('utf8')).hexdigest(), 16) % (n_bins-1) + 1 ```
2334
1
2335
null
4
811
I want to analyze the [MovieLens data set](http://grouplens.org/datasets/movielens/) and have loaded the 1M file on my machine. I actually combine two data files (ratings.dat and movies.dat) and sort the table according to the `'userId'` and `'Time'` columns. The head of my DataFrame looks like this (all column values correspond to the original data sets):

```
In [36]: df.head(10)
Out[36]:
        userId  movieId  Rating       Time                         movieName  \
40034        1      150       5  978301777                  Apollo 13 (1995)
77615        1     1028       5  978301777               Mary Poppins (1964)
550485       1     2018       4  978301777                      Bambi (1942)
400889       1     1962       4  978301753         Driving Miss Daisy (1989)
787274       1     1035       5  978301753        Sound of Music, The (1965)
128308       1      938       4  978301752                       Gigi (1958)
497972       1     3105       5  978301713                 Awakenings (1990)
28417        1     2028       5  978301619        Saving Private Ryan (1998)
6551         1     1961       5  978301590                   Rain Man (1988)
35492        1     2692       4  978301570  Run Lola Run (Lola rennt) (1998)

                            genre
40034                       Drama
77615   Children's|Comedy|Musical
550485       Animation|Children's
400889                      Drama
787274                    Musical
128308                    Musical
497972                      Drama
28417            Action|Drama|War
6551                        Drama
35492        Action|Crime|Romance

[10 rows x 6 columns]
```

I cannot understand how the same user (user Id 1) saw or rated different movies (Apollo 13 (Id:150), Mary Poppins (Id:1028) and Bambi (Id:2018)) at exactly the same time (to the exact second). If somebody has already worked with this data set, please clarify this situation.
MovieLens data set
CC BY-SA 3.0
null
2014-10-22T14:53:42.127
2014-10-22T15:50:43.230
2014-10-22T15:43:44.803
3281
3281
[ "dataset", "pandas" ]
When you enter ratings on movie lens, you get pages with 10 movies or so. You set all the ratings, then submit by clicking "next page" or something. So I guess all the ratings for the same page are received at the same time, when you submit the page.
benchmark Result for MovieLens dataset?
One result for MovieLens 20M using Factorization Machine can be found [here](https://docs.treasuredata.com/articles/hivemall-movielens20m-fm). They got MAE: 0.60 and RMSE: 0.80. Another result for MovieLens 20M using Autoencoders can be found [here](https://arxiv.org/pdf/1606.07659.pdf). They got RMSE: 0.81.
2337
1
2339
null
6
232
I am not sure whether I formulated the question correctly. Basically, what I want to do is: let's suppose I have a list of 1000 strings which look like this:

cvzxcvzxstringcvzcxvz
otortorotrstringgrptprt
vmvmvmeopstring2vmrprp
vccermpqpstring2rowerm
proorororstring3potrprt
mprto2435string3famerpaer
etc.

I'd like to extract the recurring substrings that occur across the list. What solution should I use? Does anyone know of an algorithm that could do this?
Clustering strings inside strings?
CC BY-SA 3.0
null
2014-10-23T14:51:57.160
2014-10-24T18:35:41.350
null
null
4774
[ "nlp", "text-mining", "feature-extraction" ]
Interesting question! I have not encountered it before so here is a solution I just made up, inspired by the approach taken by the word2vec paper:

- Define the pair-wise similarity based on the longest common substring (LCS), or the LCS normalized by the product of the string lengths. Cache this in a matrix for any pair of strings considered since it is expensive to calculate. Also consider approximations.
- Find a Euclidean (hyperspherical, perhaps?) embedding that minimizes the error (Euclidean distance if using the ball, and the dot product if using the sphere). Assume random initialization, and use a gradient-based optimization method by taking the Jacobian of the error.
- Now you have a Hilbert space embedding, so cluster using your algorithm of choice!

Response to deleted comment asking how to cluster multiple substrings:

The bulk of the complexity lies in the first stage, the calculation of the LCS, so it depends on how efficiently you do that. I've had luck with genetic algorithms. Anyway, what you'd do in this case is define a similarity vector rather than a scalar, whose elements are the k-longest pair-wise LCS; see [this](https://cstheory.stackexchange.com/questions/8361/algorithm-find-the-first-k-longest-substrings-between-two-similar-strings) discussion for algorithms. Then I would define the error by the sum of the errors corresponding to each substring.

Something I did not address is how to choose the dimensionality of the embedding. The word2vec paper might provide some heuristics; see [this](https://groups.google.com/forum/#!topic/word2vec-toolkit/HRvNPIqe6mM) discussion. I recall they used pretty big spaces, on the order of 1000 dimensions, but they were optimizing something more complicated, so I suggest you start at R^2 and work your way up.
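A rough Python sketch of this recipe, using the strings from the question, `difflib`'s longest common substring as a cheap LCS stand-in, MDS for the Euclidean embedding, and k-means for the final clustering; the normalisation and the choice of k=3 are arbitrary.

```
import numpy as np
from difflib import SequenceMatcher
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

strings = ["cvzxcvzxstringcvzcxvz", "otortorotrstringgrptprt",
           "vmvmvmeopstring2vmrprp", "vccermpqpstring2rowerm",
           "proorororstring3potrprt", "mprto2435string3famerpaer"]

def lcs_len(a, b):
    # Longest common *substring* length via difflib
    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return m.size

n = len(strings)
sim = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        sim[i, j] = lcs_len(strings[i], strings[j]) / max(len(strings[i]), len(strings[j]))

dist = 1.0 - sim                                   # turn similarity into a dissimilarity
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)   # Euclidean embedding
labels = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(coords)
print(list(zip(strings, labels)))
```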
Clustering 1-gram Strings
You said: > All the algorithms I've checked so far e.g. n-grams variants, bag of words etc. are only used for clustering strings that are part of a context but here is not the case. However, those algorithms can still be used if you don't treat tokens as full words, but as combinations of characters instead. Here is an example of such algorithm in Python using Sklearn. ``` from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.cluster import KMeans data = [ 'alex@examplemail.com', 'bernard@examplemail.com', 'chris@newmail.com', 'dorothy@nemail.com', 'emily@examplemail.com' ] X = TfidfVectorizer(analyzer='char').fit_transform(data) kmeans = KMeans(n_clusters=2).fit(X) for cluster in set(kmeans.labels_): print('\nCluster: {}'.format(cluster)) for i, label in enumerate(kmeans.labels_): if label == cluster: print(data[i]) > Cluster: 0 > alex@examplemail.com > bernard@examplemail.com > emily@examplemail.com > > Cluster: 1 > chris@newmail.com > dorothy@nemail.com ``` The major trick is `TfidfVectorizer(analyzer='char')`, where you don't treat n-grams of words, but of characters instead. P.S. If you at some point want to use word n-grams (if you have generally well written sentences) you can use `TfidfVectorizer(analyzer='word')`.
2349
1
2350
null
5
199
For my Computational Intelligence class, I'm working on classifying short text. One of the papers that I've found makes a lot of use of granular computing, but I'm struggling to find a decent explanation of what exactly it is. From what I can gather from the paper, it sounds to me like granular computing is very similar to fuzzy sets. So, what exactly is the difference? I'm asking about rough sets as well, because I'm curious about them and how they relate to fuzzy sets, if at all. Edit: [Here](http://ijcai.org/papers11/Papers/IJCAI11-298.pdf) is the paper I'm referencing.
Rough vs Fuzzy vs Granular Computing
CC BY-SA 3.0
null
2014-10-26T13:12:23.597
2014-12-13T21:30:55.703
2014-12-13T21:30:55.703
84
4804
[ "machine-learning", "data-mining", "classification" ]
"Granularity" refers to the resolution of the variables under analysis. If you are analyzing height of people, you could use course-grained variables that have only a few possible values -- e.g. "above-average, average, below-average" -- or a fine-grained variable, with many or an infinite number of values -- e.g. integer values or real number values. A measure is "fuzzy" if the distinction between alternative values is not crisp. In the course-grained variable for height, a "crisp" measure would mean that any given individual could only be assigned one value -- e.g. a tall-ish person is either "above-average", or "average". In contrast, a "fuzzy" measure allows for degrees of membership for each value, with "membership" taking values from 0 to 1.0. Thus, a tall-ish person could be a value of "0.5 above-average", "0.5 average", "0.0 below-average". Finally, a measure is "rough" when two values are given: upper and lower bounds as an estimate of the "crisp" measure. In our example of a tall-ish person, the rough measure would be {UPPER = above-average, LOWER = average}. Why use granular, fuzzy, or rough measures at all, you might ask? Why not measure everything in nice, precise real numbers? Because many real-world phenomena don't have a good, reliable intrinsic measure and measurement procedure that results in a real number. If you ask married couples to rate the quality of their marriage on a scale from 1 to 10, or 1.00 to 10.00, they might give you a number (or range of numbers), but how reliable are those reports? Using a course-grained measure (e.g. "happy", "neutral/mixed", "unhappy"), or fuzzy measure, or rough measure can be more reliable and more credible in your analysis. Generally, it's much better to use rough/crude measures well than to use precise/fine-grained measures poorly.
Fuzzy and FuzzyWuzzy: what are the differences in text comparison?
In the source code you can find a simple explanation of what does partial ratio in fuzzywuzzy does: > Return the ratio of the most similar substring as a number between 0 and 100 In this code snippet you can find the differences ``` from fuzzywuzzy import fuzz fuzz.ratio("this is a test", "this is a test!") Out: 97 fuzz.partial_ratio("this is a test", "this is a test!") Out: 100 ``` [1] [https://github.com/seatgeek/fuzzywuzzy](https://github.com/seatgeek/fuzzywuzzy) [2] [https://github.com/seatgeek/fuzzywuzzy/blob/master/fuzzywuzzy/fuzz.py](https://github.com/seatgeek/fuzzywuzzy/blob/master/fuzzywuzzy/fuzz.py)
2355
1
2371
null
3
3061
Sorry if this topic is not connected directly to data science. I want to understand how the [Graphlab tool](http://graphlab.com/learn/gallery/index.html) works. First I want to execute the toy examples from the Gallery site. When I try to execute the example code, everything is OK except one command: I cannot see the graphlab plot after `show()`. The `show()` command returns some kind of object in IPython and nothing in the IPython Notebook. If the example code has a plot that depends directly on the matplotlib module, I can produce the real plots and save them on my machine. Consequently, I suppose the problem lies with graphlab (or an object from its classes). If somebody has already used this tool and rendered the plots, could you tell me how to execute the plot commands?

```
In [8]: import graphlab

In [9]: from IPython.display import display
        from IPython.display import Image
        graphlab.canvas.set_target('ipynb')

In [10]: import urllib
         url = 'https://s3.amazonaws.com/GraphLab-Datasets/americanMovies/freebase_performances.csv'
         urllib.urlretrieve(url, filename='freebase_performances.csv')
         # downloads an 8MB file to the working directory

Out[10]: ('freebase_performances.csv', <httplib.HTTPMessage instance at 0x7f44e153cf38>)

In [11]: data = graphlab.SFrame.read_csv('remote://freebase_performances.csv',
                                         column_type_hints={'year': int})
```
...
...
...
```
In [15]: data.show()
```
No plot after this line
...
...
```
In [19]: print data.show()
<IPython.core.display.Javascript object at 0x7f44e14c0850>
```
The graphlab object (?) after the print command
How do I show plots when using Graphlab?
CC BY-SA 3.0
null
2014-10-27T09:55:36.887
2016-11-29T10:53:36.083
2016-11-29T10:53:36.083
26596
3281
[ "python", "graphs", "ipython" ]
I have found a solution and will post it here, because somebody who works with graphlab may have the same question. We can look at the example here: [Six degrees of Kevin Bacon](http://graphlab.com/learn/gallery/notebooks/graph_analytics_movies.html)

At the beginning of the program execution you need to run the following command:

```
graphlab.canvas.set_target('ipynb')
```

This is exactly the key to the whole problem (at least for me :-). First of all, it is important to know which argument of the `set_target()` command you want to use. You can pass two options as the argument of this command: `'ipynb'` (which renders directly in the IPython Notebook, like in the example) or `'browser'` (which opens a new window with the plots).

On my machine (64-bit, Ubuntu) I cannot use the `'ipynb'` option; only the `'browser'` option gives me the plots. I don't think it is necessary to change https to http, but you can do it anyway. We also have another machine (32-bit, Ubuntu), and it works with the `'ipynb'` option but not with `'browser'` (without changing https to http).
No graph is displaying while plotting value with time interval using python
In python you generally have all the libraries available to you. It is hard to find sometimes but you should rarely need to write out so much code. Try this out. I created some dummy data using the same date formats as you have: ``` import pandas as pd import matplotlib.pyplot as plt data = {'date': ['08/06/2018', '8/6/2018', '8/6/2018', '9/6/2018'], 'time': ['6:15:00', '12:45:00', '18:15:00', '6:15:00'], 'x2': [1, 4, 8, 6]} ``` Now we will make a pandas DataFrame with this dummy data ``` df = pd.DataFrame(data) ``` Now we can get our x-axis datetimes by first concatenating the dates and times together separated by a space. Then we will get pandas to parse these datetimes. ``` datetimes = pd.to_datetime(df['date'] + ' ' + df['time'], format='%d/%m/%Y %H:%M:%S') ``` You can then plot your data using ``` plt.plot(datetimes, df['x2']) ``` [](https://i.stack.imgur.com/mz6vh.png) --- Put your csv file in your workspace. Then you can use this following code ``` import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv(r'temp.csv') datetimes = pd.to_datetime(df['date'] + ' ' + df['time'], format='%d/%m/%Y %H:%M:%S') plt.plot(datetimes, df['x']) plt.show() ``` [](https://i.stack.imgur.com/B4dHB.png) --- ``` import matplotlib.dates as mdates fig, ax = plt.subplots(1) fig.autofmt_xdate() plt.plot(datetimes, df['x']) plt.xticks(rotation=90) xfmt = mdates.DateFormatter('%d-%m-%y %H:%M') ax.xaxis.set_major_formatter(xfmt) plt.show() ``` [](https://i.stack.imgur.com/VrMl0.png)
2368
1
2370
null
63
66443
What are the common/best practices to handle time data for machine learning application? For example, if in data set there is a column with timestamp of event, such as "2014-05-05", how you can extract useful features from this column if any? Thanks in advance!
Machine learning - features engineering from date/time data
CC BY-SA 3.0
null
2014-10-29T05:25:55.603
2022-10-14T09:40:15.590
null
null
88
[ "machine-learning", "time-series", "feature-selection" ]
I would start by graphing the time variable vs other variables and looking for trends. ## For example ![enter image description here](https://i.stack.imgur.com/QGYUC.png) In this case there is a periodic weekly trend and a long term upwards trend. So you would want to encode two time variables: - day_of_week - absolute_time ## In general There are several common time frames that trends occur over: - absolute_time - day_of_year - day_of_week - month_of_year - hour_of_day - minute_of_hour Look for trends in all of these. ## Weird trends Look for weird trends too. For example you may see rare but persistent time based trends: - is_easter - is_superbowl - is_national_emergency - etc. These often require that you cross reference your data against some external source that maps events to time. ## Why graph? There are two reasons that I think graphing is so important. - Weird trends While the general trends can be automated pretty easily (just add them every time), weird trends will often require a human eye and knowledge of the world to find. This is one reason that graphing is so important. - Data errors All too often data has serious errors in it. For example, you may find that the dates were encoded in two formats and only one of them has been correctly loaded into your program. There are a myriad of such problems and they are surprisingly common. This is the other reason I think graphing is important, not just for time series, but for any data.
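A minimal pandas sketch of extracting these time frames from a timestamp column (the "weird trends" such as holidays would need an external calendar joined in; the timestamps below are made up):

```
import pandas as pd

df = pd.DataFrame({"timestamp": ["2014-05-05 14:30:00", "2014-12-25 08:00:00"]})
ts = pd.to_datetime(df["timestamp"])

df["absolute_time"]  = (ts - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")  # Unix seconds
df["day_of_year"]    = ts.dt.dayofyear
df["day_of_week"]    = ts.dt.dayofweek
df["month_of_year"]  = ts.dt.month
df["hour_of_day"]    = ts.dt.hour
df["minute_of_hour"] = ts.dt.minute
print(df)
```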
Building a machine learning model based on a set of timestamped features to predict/classify a label/value?
I don't know much about coffee or pharmaceuticals but I think the widely varying time samples is a problem. If I brewed one batch of coffee for a minute and another for 5 hours, I'm pretty sure the 5 hour batch would come out burnt-tasting in all cases. Can you break the samples up into cohorts by duration and then train on each cohort? You'd end up with a model for the "1 minute batch", a model for the "1 hour batch", etc.
2373
1
2376
null
4
312
Most online tutorials like to use a simple example to introduce machine learning: classifying unknown text as spam or not spam. They say that this is a binary-class problem. But why is this a binary-class problem? I think it is a one-class problem! I only need positive samples from my inbox to learn what is not spam. If I take a bunch of non-spam texts as positive samples and a bunch of spam mails as negative samples, then of course it's possible to train a binary classifier and make predictions on unlabeled data, but where is the difference from the one-class approach? There I would just define a training set of all non-spam examples and train some one-class classifier. What do you think?
Detecting Spam using Machine Learning
CC BY-SA 3.0
null
2014-10-29T21:57:15.603
2014-10-30T14:48:34.907
null
null
4717
[ "machine-learning" ]
Strictly speaking, "one class classification" does not make sense as an idea. If there is only one possible state of a predicted value, then there is no prediction problem. The answer is always the single class. Concretely, if you only have spam examples, you would always achieve 100% accuracy by classifying all email as spam. This is clearly wrong, and the only way to know how it is wrong is to know where the classification is wrong -- where emails are not in the spam class. So-called [one-class classification](http://en.wikipedia.org/wiki/One-class_classification) techniques are really anomaly detection approaches. They have an implicit assumption that things unlike the examples are not part of the single class, but, this is just an assumption about data being probably not within the class. There's a binary classification problem lurking in there. What is wrong with a binary classifier?
How to learn spam email detection?
First of all, check [this](https://archive.ics.uci.edu/ml/datasets/Spambase) carefully. You'll find a simple dataset and some papers to review. BUT since you want to start a simple learning project, I recommend not going through papers (which are obviously not basic) but trying to build your own Bayesian learner, which is not so difficult. I personally suggest [Andrew Moore](http://www.cs.cmu.edu/~guestrin/Class/10708-F08/schedule.html)'s lecture slides on Probabilistic Graphical Models, which are freely available and which you can learn from simply and step by step. If you need more detailed help just comment on this answer and I'll be glad to help :) Enjoy Bayesian learning!
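If you want a taste of what such a Bayesian learner looks like in practice, here is a minimal naive Bayes sketch with scikit-learn on a made-up toy corpus (the Spambase data linked above is already featurized, so there you would skip the vectorizer):

```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up corpus for illustration only
texts  = ["win money now", "cheap meds online", "meeting at noon",
          "lunch tomorrow?", "claim your free prize", "project report attached"]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["free money prize", "report for the meeting"]))
```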
2384
1
2393
null
4
2235
I'm planning to write a classification program that is able to classify unknown text into around 10 different categories, and if none of them fits, it would be nice to know that. It is also possible that more than one category is right. My predefined categories are:

```
c1 = "politics"
c2 = "biology"
c3 = "food"
...
```

I'm thinking about the right way to represent my training data and about which kind of classification is the right one. The first challenge is finding the right features. If I only have text (250 words each), what method would you recommend for finding the right features? My first approach is to remove all stop words and use the POS tagger ([Stanford NLP POS-Tagger](http://nlp.stanford.edu/software/tagger.shtml)) to find nouns, adjectives, etc. I count them and use all frequently appearing words as features. E.g. for politics, I have around 2,000 text entities. With the mentioned POS tagger I found:

```
law: 841
capitalism: 412
president: 397
democracy: 1007
executive: 112
...
```

Would it be right to use only these as features? The training set would then look like:

```
Training set for politics:
feature law numeric
feature capitalism numeric
feature president numeric
feature democracy numeric
feature executive numeric
class politics,all_others

sample data:
politics,5,7,1,9,3
politics,14,4,6,7,9
politics,9,9,9,4,2,1
politics,5,8,0,7,6
...
all_others,0,2,4,1,0
all_others,0,0,1,1,1
all_others,7,4,0,0,0
...
```

Would this be the right approach for binary classification? Or how would I define my sets? Or is multi-class classification the right approach? Then it would look like:

```
Training set for politics:
feature law numeric
feature capitalism numeric
feature president numeric
feature democracy numeric
feature executive numeric
feature genetics numeric
feature muscle numeric
feature blood numeric
feature burger numeric
feature salad numeric
feature cooking numeric
class politics,biology,food

sample data:
politics,5,7,1,9,3,0,0,2,1,0,1
politics,14,4,6,7,9,0,0,0,0,0,1
politics,9,9,9,4,2,1,1,1,1,0,3
politics,5,8,0,7,6,2,2,0,1,0,1
...
biology,0,2,4,1,0,4,19,5,0,2,2
biology,0,0,1,1,1,12,9,9,2,1,1
biology,7,4,0,0,0,10,10,3,0,0,7
...
```

What would you say?
Text-Classification-Problem, what is the right approach?
CC BY-SA 3.0
null
2014-10-31T17:55:25.723
2020-08-18T16:28:32.957
null
null
4717
[ "machine-learning" ]
I think perhaps the first thing to decide that will help clarify some of your other questions is whether you want to perform binary classification or multi-class classification. If you're interested in classifying each instance in your dataset into more than one class, then this brings up a set of new concerns regarding setting up your data set, the experiments you want to run, and how you plan to evaluate your classifier(s). My hunch is that you could formulate your task as a binary one where you train and test one classifier for each class you want to predict, and simply set up the data matrix so that there are two classes to predict - (1) the one you're interested in classifying and (2) everything else. In that case, instead of your training set looking like this (where each row is a document and columns 1-3 contain features for that document, and the class column is the class to be predicted): ``` 1 2 3 class feature1 feature2 feature3 politics feature1 feature2 feature3 law feature1 feature2 feature3 president feature1 feature2 feature3 politics ``` it would look like the following in the case where you're interested in detecting the politics class against everything else: ``` 1 2 3 class feature1 feature2 feature3 politics feature1 feature2 feature3 non-politics feature1 feature2 feature3 non-politics feature1 feature2 feature3 politics ``` You would need to do this process for each class you're interested in predicting, and then train and test one classifier per class and evaluate each classifier according to your chosen metrics (usually accuracy, precision, or recall or some variation thereof). As far as choosing features, this requires quite a bit of thinking. Features can be highly dependent on the type of text you're trying to classify, so be sure to explore your dataset and get a sense for how people are writing in each domain. Qualitative investigation isn't enough to decide once and for all what are good features, but it is a good way to get ideas. Also, look into [TF-IDF](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) weighting of terms instead of just using their frequency within each instance of your dataset. This will help you pick up on (a) terms that are prevalent within a document (and possibly a target class) and (b) terms that distinguish a given document from other documents. I hope this helps a little.
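As a rough sketch of this one-classifier-per-class (one-vs-rest) setup with the TF-IDF weighting mentioned above, using scikit-learn and made-up snippets standing in for the 250-word texts:

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Made-up toy documents and labels
texts  = ["the president signed the law", "genes control muscle growth",
          "a recipe for burgers and salad", "parliament debated the new law"]
labels = ["politics", "biology", "food", "politics"]

clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(texts, labels)
print(clf.predict(["the law was debated", "a salad recipe"]))
```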
Text to Text classification
## I think that is because logistic regression with ~3k labels is not a good choice

You are right, but let me rephrase it a bit better: in general, classification with ~3k labels is not a good choice! You basically have a Search/Recommendation problem. Given your input, you find the best fitting ticket/dashboard and assign it. It is a very interesting ML project actually! I'll give you a confident starting point. If it does not work, please come back with results and I'll update the answer:

## If you want to go Unsupervised

Query-Document Matching

- Use a simple TF-IDF to vectorise your text
- Apply a dimensionality reduction to reduce high-dimensionality sparse vectors to low-dimensionality dense vectors. If you use matrix factorisations for this, you are basically doing the famous classic LSA
- In that vector space, you find the closest label to your query and assign it to the query

Topic Modeling

- Apply a simple LDA to model topics for the corpus
- Given a query, find the best matching topic of that query and assign the query to that topic (cluster)
- Please note that LDA finds intrinsic topics. So if your labels are different from the topics it finds, you need to rely on your labels and ignore this solution

## A little bit more Supervised

- Create a dataset from your corpus (or maybe you already have it) in which sentence pairs (titles, descriptions, etc.) which belong to the same topic/label have label $1$, sentence pairs which belong to different topics/classes/labels have label $-1$, and sentence pairs with a neutral relation have the label $0$. I put an example as a PS at the end.
- Feed this data to S-Bert to fine-tune the pre-trained model
- Read this, learn it and use it for finding the most similar ticket/dashboard to the query

PS: What the data for S-Bert looks like (I just made up some dummy examples! Hope you get the idea)

```
sentence1: He is a man
sentence2: He is male
label: 1

sentence1: programming is hard
sentence2: Maradona was a magician
label: -1

sentence1: don't know what to write here
sentence2: never mind, I think you got what I mean
label: 0
.
.
.
```
2398
1
2401
null
3
431
I'd like to apply some of the more complex supervised machine learning techniques in Python - deep learning, generalized additive models, proper implementation of regularization, other cool stuff I don't even know about, etc. Any recommendations on how I could find expert ML folks who would like to collaborate on projects?
Python Machine Learning Experts
CC BY-SA 3.0
null
2014-11-04T00:58:00.137
2014-11-04T13:41:52.240
null
null
4910
[ "machine-learning", "python" ]
You could try some competitions from [kaggle](http://kaggle.com). Data Science courses from Coursera, edX, etc also provide forums for discussion. Linkedin or freelance sites could be other possibilities.
Clarifying some unclear Areas of model training, python, Machine Learning
Do you have a single class and you're trying to predict whether or not the input is an instance of it? In this case, you're doing binary classification with logistic regression, though you only need 1 output: your model would predict the probability that the input belongs to the class, and will vary between 0 and 1. If the output of the model is >= 0.5, then the input is predicted to belong to the class, otherwise no. If you have two or more classes, then you need two or more outputs (i.e. your y[]s) and you would want to do softmax regression where your model predicts the probability of each class and then you take the predicted class with the highest probability. For visualizing, you want to do dimensionality reduction. There are several ways to do this and it's a large subject in itself, but probably the easiest way to start is to project your 14 dimensions down to 2 and plot them. Scikit-learn has a PCA class which makes this straightforward: ``` pca = PCA(n_components=2) X_projected = pca.fit_transform(X_in) ``` To check for overfitting or underfitting, you generally want to separate out some percentage of your data for testing purposes only, not used for training/fitting the model. If I understand correctly, you have already done this with 30% of your data? Then you check the ability of your model to predict the class of the test data it hasn't seen. Overfitting models will perform much better on training data vs. test data. hth.
2473
1
2479
null
2
1462
I am dealing with a lot of categorical data right now and I would like to use an appropriate data mining method in any tool [preferably R] to find the effect of each parameter [categorical parameters] on my target variable. To give a brief notion of the data I am dealing with: my target variable denotes the product type [say, disposables and non-disposables] and I have parameters like root cause, symptom, customer name, product name, etc. As my target can be considered a binary value, I tried to find the combination of values leading to the desired categories using Apriori, but I have more than 2 categories in that attribute and I want to use all of them and find the effect of the mentioned parameters on each category. I really wanted to try SVM and use hyperplanes to separate the content and get an n-dimensional view, but I do not have enough knowledge to validate the technique and functions I am using for the analysis. Currently I have about 9000 records and each of them represents a complaint from a user. There are a lot of columns available in the dataset, which is what I am trying to use to determine the target variable [ myForumla <- Target~. ]. I tried with just 4 categorical columns too, but I am not getting a proper result. Can just the categorical variables be used to develop an SVM model and get a visualization with n hyperplanes? Is there any appropriate data mining technique for dealing with just categorical data?
Choosing the right data mining method to find the effect of each parameter over the target
CC BY-SA 3.0
null
2014-11-14T19:03:35.603
2015-10-21T01:46:30.197
null
null
5043
[ "data-mining", "classification", "r", "svm", "categorical-data" ]
You can try Bayesian belief networks (BBNs). BBNs can easily handle categorical variables and give you the picture of the multivariable interactions. Furthermore, you may use sensitivity analysis to observe how each variable influences your class variable. Once you learn the structure of the BBN, you can identify the Markov blanket of the class variable. The variables in the Markov blanket of the class variable is a subset of all the variables, and you may use optimization techniques to see which combination of values in this Markov blanket maximizes your class prediction.
What is the best Data Mining algorithm for prediction based on a single variable?
Common rule in machine learning is to try simple things first. For predicting continuous variables there's nothing more basic than simple linear regression. "Simple" in the name means that there's only one predictor variable used (+ intercept, of course): ``` y = b0 + x*b1 ``` where `b0` is an intercept and `b1` is a slope. For example, you may want to predict lemonade consumption in a park based on temperature: ``` cons = b0 + temp * b1 ``` Temperature is in well-defined continuous variable. But if we talk about something more abstract like "weather", then it's harder to understand how we measure and encode it. It's ok if we say that the weather takes values `{terrible, bad, normal, good, excellent}` and assign values numbers from -2 to +2 (implying that "excellent" weather is twice as good as "good"). But what if the weather is given by words `{shiny, rainy, cool, ...}`? We can't give an order to these variables. We call such variables categorical. Since there's no natural order between different categories, we can't encode them as a single numerical variable (and linear regression expects numbers only), but we can use so-called dummy encoding: instead of a single variable `weather` we use 3 variables - `[weather_shiny, weather_rainy, weather_cool]`, only one of which can take value 1, and others should take value 0. In fact, we will have to drop one variable because of [collinearity](http://en.wikipedia.org/wiki/Multicollinearity). So model for predicting traffic from weather may look like this: ``` traffic = b0 + weather_shiny * b1 + weather_rainy * b2 # weather_cool dropped ``` where either `b1` or `b2` is 1, or both are 0. Note that you can also encounter non-linear dependency between predictor and predicted variables (you can easily check it by plotting `(x,y)` pairs). Simplest way to deal with it without refusing linear model is to use polynomial features - simply add polynomials of your feature as new features. E.g. for temperature example (for dummy variables it doesn't make sense, cause `1^n` and `0^n` are still 1 and 0 for any `n`): ``` traffic = b0 + temp * b1 + temp^2 * b2 [+ temp^3 * b3 + ...] ```
2486
1
2496
null
3
6403
I was wondering if anyone was aware of any methods for visualizing an SVM model where there are more than three continuous explanatory variables. In my particular situation, my response variable is binomial, with 6 continuous explanatory variables (predictors), one categorical explanatory variable (predictor). I have already reduced the number of predictors and I am primarily using R for my analysis. (I am unaware if such a task is possible/ worth pursuing.) Thanks for your time.
Visualizing Support Vector Machines (SVM) with Multiple Explanatory Variables
CC BY-SA 3.0
null
2014-11-16T21:03:09.037
2014-11-21T18:55:07.860
2014-11-21T10:39:38.337
847
5023
[ "machine-learning", "classification", "r", "visualization", "svm" ]
Does it matter that the model is created in the form of an SVM? If not, I have seen a clever 6-D visualization. Its varieties are becoming popular in medical presentations. 3 dimensions are shown as usual, in orthographic projection. Dimension 4 is color (0..255). Dimension 5 is the thickness of the symbol. Dimension 6 requires animation: it is the frequency of vibration of a dot on the screen. In static, printed versions, one can replace the frequency of vibration by blur around the point, for a comparable visual perception. If it does matter, and you specifically need to draw separating hyperplanes and make them look like lines/planes, the previous trick will not produce good results. Multiple 3-D images are better.
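A minimal matplotlib sketch of the static part of this idea (random data; dimensions 1-3 as position, 4 as colour, 5 as marker size; the vibration/blur dimension is left out since it needs animation):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 200))          # dimensions 1-3: position
d4 = rng.uniform(0, 255, 200)                # dimension 4: colour
d5 = rng.uniform(10, 200, 200)               # dimension 5: marker size ("thickness")

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(x, y, z, c=d4, s=d5, cmap="viridis", alpha=0.7)
fig.colorbar(sc, label="dimension 4")
plt.show()
```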
Feature selection for Support Vector Machines
Personally, I like to divide feature selection into two: - unsupervised feature selection - supervised feature selection Unsupervised feature selection covers things like clustering or PCA where you select the least redundant range of features (or create features with little redundancy). Supervised feature selection covers things like Lasso where you select the features with the most predictive power. I personally usually prefer what I call supervised feature selection. So, when using a linear regression, I would select features based on Lasso. Similar methods exist to induce sparseness in neural networks. But indeed, I don't see how I would go about doing that in a method using kernels, so you are probably better off using what I call unsupervised feature selection. EDIT: you also asked about regularization. I see regularization as helping mostly because we work with finite samples and so the training and testing distributions will always differ somewhat, and you want your model to not overfit. I am not sure it removes the need for feature selection (if you indeed have too many features). I think that selecting features (or creating a smaller subset of them) helps by making the features you do have more robust and keeps the model from learning spurious correlations. So, regularization does help, but I am not sure that it is a complete alternative. But I haven't thought thoroughly enough about this.
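As a sketch of what I call supervised feature selection, here is one possible scikit-learn version using an L1-penalised linear SVM inside SelectFromModel (synthetic data, and the C value is just a guess you would tune):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=30, n_informative=5, random_state=0)

# The L1 penalty drives uninformative coefficients to (near) zero.
selector = SelectFromModel(LinearSVC(C=0.05, penalty="l1", dual=False, max_iter=5000))
X_reduced = selector.fit_transform(X, y)

print(X.shape, "->", X_reduced.shape)
print("kept feature indices:", np.where(selector.get_support())[0])
```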
2501
1
5626
null
5
204
At the moment I am doing some data experiments with the [Graphlab toolkit](http://graphlab.com/products/create/docs/). To start with, I have an SFrame with three columns: ``` Users Items Rating ``` Each pair of `Users` and `Items` values in the same row forms a unique key, and `Rating` is the corresponding float value. These values are not normalised. First of all, I apply the following normalisation myself: - Division of every rating value of a specific user by the rating maximum from this user (scale between 0 and 1) - Taking the logarithm of every rating value Afterward I create a recommender model and evaluate the basic metrics for it. In this topic I invite everybody to discuss other interesting normalisation methods. If anybody could suggest a good method for data preparation, it would be great. The results can be evaluated with the metrics and I can publish them here. PS My dataset comes from a music site; the users rated some tracks. I have approximately 100 000 users and 300 000 tracks. The total number of ratings is over 3 million (actually the matrix is sparse). This is the most simple data set, which I analyze now. In the future I can (and will) use some additional information about the users and tracks (e.g. duration, year, genre, band etc). At the moment I am just interested in collecting some methods for rating-value normalisation without using additional information (user & item features). My problem is that the data set doesn't have any `Rating` at first. I created the `Rating` column myself, based on the number of events for each unique `User-Item` pair (I have this information). You can of course understand that some users can hear some tracks many times, and other users only one time. Consequently the dispersion is very high and I want to reduce it (normalise the rating values).
Data scheduling for recommender
CC BY-SA 3.0
null
2014-11-19T12:38:05.597
2015-04-27T11:58:50.443
2014-11-20T14:56:14.813
3281
3281
[ "recommender-system", "data-cleaning" ]
Another good normalization is [z-score](http://en.wikipedia.org/wiki/Standard_score) normalization. It is already implemented in Python in the [scipy.stats](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.mstats.zscore.html) module.
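A minimal per-user example (the toy ratings below are invented for illustration):

```python
import pandas as pd
from scipy.stats import zscore

# Toy user/item/rating triples, invented for illustration.
ratings = pd.DataFrame({
    "user":   ["u1", "u1", "u1", "u2", "u2"],
    "item":   ["t1", "t2", "t3", "t1", "t4"],
    "rating": [3.0, 10.0, 25.0, 1.0, 2.0],
})

# z-score each user's ratings so heavy and light listeners become comparable.
ratings["rating_z"] = ratings.groupby("user")["rating"].transform(zscore)
print(ratings)
```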
What algorithms should I choose for a recommender system and why?
Short answer: It depends on your data. What do you want to do? Longer answer: - Use content-based approaches if you have data on your items. Use collaborative approaches if you have data on your users. Use both if you have both. I would say content-based approaches are general machine learning problems (how do I extract meaningful information from data) whereas collaborative filtering is really recommender-system-specific work (how users' behavior can suggest user/item similarity/connections). - Well, you can. Neural nets are just a kind of algorithm; you surely can use them for content-based analysis, and it might be possible to use them to enhance your collaborative algorithm. - NNs use texts and images as numerical data, so I don't understand your question. If you want a good insight into today's recommender systems, take a look at this [article](https://datajobs.com/data-science-repo/Collaborative-Filtering-[Koren-and-Bell].pdf).
2504
1
5152
null
50
41238
I have a big data problem with a large dataset (take for example 50 million rows and 200 columns). The dataset consists of about 100 numerical columns and 100 categorical columns and a response column that represents a binary class problem. The cardinality of each of the categorical columns is less than 50. I want to know a priori whether I should go for deep learning methods or ensemble tree-based methods (for example gradient boosting, AdaBoost, or random forests). Are there some exploratory data analysis techniques or other approaches that can help me decide on one method over the other?
Deep Learning vs gradient boosting: When to use what?
CC BY-SA 3.0
null
2014-11-20T06:49:00.357
2020-08-20T18:33:44.403
null
null
847
[ "machine-learning", "classification", "deep-learning" ]
Why restrict yourself to those two approaches? Because they're cool? I would always start with a simple linear classifier / regressor. So in this case a linear SVM or logistic regression, preferably with an algorithm implementation that can take advantage of sparsity due to the size of the data. It will take a long time to run a DL algorithm on that dataset, and I would normally only try deep learning on specialist problems where there's some hierarchical structure in the data, such as images or text. It's overkill for a lot of simpler learning problems, takes a lot of time and expertise to learn, and DL algorithms are very slow to train. Additionally, just because you have 50M rows doesn't mean you need to use the entire dataset to get good results. Depending on the data, you may get good results with a sample of a few hundred thousand rows or a few million. I would start simple, with a small sample and a linear classifier, and get more complicated from there if the results are not satisfactory. At least that way you'll get a baseline. We've often found simple linear models to outperform more sophisticated models on most tasks, so you should always start there.
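A rough sketch of that baseline with scikit-learn, assuming a sampled subset fits in a pandas DataFrame (all column names and the synthetic data are placeholders):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Synthetic stand-in for a sample of the real table.
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "num1": rng.normal(size=n),
    "num2": rng.normal(size=n),
    "cat1": rng.choice(list("ABCD"), size=n),
    "cat2": rng.choice(list("XYZ"), size=n),
})
df["y"] = (df["num1"] + (df["cat1"] == "A") + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

pre = ColumnTransformer([
    ("num", StandardScaler(), ["num1", "num2"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["cat1", "cat2"]),
])
clf = Pipeline([("prep", pre), ("lr", LogisticRegression(max_iter=1000))])

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="y"), df["y"], random_state=0)
print("baseline accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))
```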
Machine Learning vs Deep Learning
In addition to what Himanshu Rai said, deep learning is a subfield which involves the use of neural networks. These neural networks try to learn the underlying distribution by modifying the weights between the layers. Now, consider the case of image recognition using deep learning: a neural network model is divided among layers, and these layers are connected by links called weights. As the training process begins, these layers adjust the weights such that each layer tries to detect some feature and help the next layer with its processing. The key point to note is that we don't explicitly tell the layer to learn to detect edges, or eyes, noses or faces. The model learns to do that itself, unlike classical machine learning models.
2525
1
2589
null
4
480
I am having some difficulty seeing the connection between PCA on the second-order moment matrix and estimating the parameters of Gaussian mixture models. Can anyone connect the two?
Can someone explain how PCA is relevant in extracting parameters of Gaussian Mixture Models
CC BY-SA 3.0
null
2014-11-23T02:27:10.670
2014-12-03T13:55:16.150
null
null
4686
[ "clustering" ]
I believe the claim that you are referring to is that the maximum-likelihood estimate of the component means in a GMM must lie in the span of the eigenvectors of the second moment matrix. This follows from two steps: - Each component mean in the maximum-likelihood estimate is a linear combination of the data points. (You can show this by setting the gradient of the log-likelihood function to zero.) - Any linear combination of the data points must lie in the span of the eigenvectors of the second moment matrix. (You can show this by first showing that any individual data point must lie in the span, and therefore any linear combination must also be in the span.)
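A quick numerical sanity check of the claim (toy data embedded in a low-dimensional subspace so the span is non-trivial; scikit-learn's GaussianMixture stands in for a generic EM fit):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# 2-D data from two components, embedded linearly into 5-D (so rank <= 2).
low = np.vstack([rng.normal([0, 0], 1, (300, 2)), rng.normal([6, 6], 1, (300, 2))])
A = rng.normal(size=(2, 5))
X = low @ A                                   # n x 5, lives in a 2-D subspace

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)

# Eigenvectors of the second moment matrix with non-negligible eigenvalues.
M = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(M)
span = eigvecs[:, eigvals > 1e-8]             # basis of the data subspace

# Project each fitted mean onto the span; the residual should be ~0.
for mu in gmm.means_:
    residual = mu - span @ (span.T @ mu)
    print(np.linalg.norm(residual))
```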
How is PCA different from subspace clustering, and how do we extract the variables responsible for the first PCA component?
Reducing the dimensionality of a dataset with PCA does not only benefit humans trying to look at the data in a graspable number of dimensions. It is also useful for machine learning algorithms to be trained on a subset of dimensions, both to reduce the complexity of the data and the computational cost of training such a machine learning model.
2527
1
2562
null
11
499
Hi, this is my first question on the Data Science stack. I want to create an algorithm for text classification. Suppose I have a large set of texts and articles. Let's say around 5000 plain texts. I first use a simple function to determine the frequency of all words of four or more characters. I then use this as the feature of each training sample. Now I want my algorithm to be able to cluster the training samples according to their features, which here is the frequency of each word in the article. (Note that in this example, each article would have its own unique feature vector since each article has different features; for example an article has 10 "water" and 23 "pure" and another has 8 "politics" and 14 "leverage"). Can you suggest the best possible clustering algorithm for this example?
Using Clustering in text processing
CC BY-SA 3.0
null
2014-11-23T14:58:34.127
2017-06-08T00:24:37.560
null
null
5138
[ "text-mining", "clustering" ]
I don't know if you have ever read about SenseClusters by Ted Pedersen: [http://senseclusters.sourceforge.net/](http://senseclusters.sourceforge.net/). Very good work on sense clustering. Also, when you analyze words, consider that "computer", "computers", "computering", ... represent one concept, so only one feature. Very important for a correct analysis. As for the clustering algorithm, you could use [hierarchical clustering](http://en.wikipedia.org/wiki/Hierarchical_clustering). At each step of the algorithm, you merge the 2 most similar texts according to their features (using a measure of dissimilarity, euclidean distance for example). With that measure of dissimilarity, you are able to find the best number of clusters and so, the best clustering for your texts and articles. Good luck :)
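Here is one possible sketch with TF-IDF features and scipy's agglomerative clustering (the four toy documents are invented for illustration):

```python
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "pure water quality report", "water purification plant",
    "politics and leverage in congress", "congress politics news",
]

# TF-IDF features; a stemmer can be plugged in via the tokenizer argument
# to collapse "computer"/"computers" into one feature.
X = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()

# Agglomerative clustering on cosine distances between documents.
Z = linkage(pdist(X, metric="cosine"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into 2 clusters
print(labels)                                      # e.g. [1 1 2 2]
```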
Algorithms for text clustering
Check the [Stanford NLP Group](http://www-nlp.stanford.edu/software)'s open source software, in particular, [Stanford Classifier](http://www-nlp.stanford.edu/software/classifier.shtml). The software is written in `Java`, which will likely delight you, but also has bindings for some other languages. Note, the licensing - if you plan to use their code in commercial products, you have to acquire commercial license. Another interesting set of open source libraries, IMHO suitable for this task and much more, is [parallel framework for machine learning GraphLab](http://select.cs.cmu.edu/code/graphlab), which includes [clustering library](http://select.cs.cmu.edu/code/graphlab/clustering.html), implementing various clustering algorithms. It is especially suitable for very large volume of data (like you have), as it implements `MapReduce` model and, thus, supports multicore and multiprocessor parallel processing. You most likely are aware of the following, but I will mention it just in case. [Natural Language Toolkit (NLTK)](http://www.nltk.org) for `Python` contains modules for clustering/classifying/categorizing text. Check the relevant chapter in the [NLTK Book](http://www.nltk.org/book/ch06.html). UPDATE: Speaking of algorithms, it seems that you've tried most of the ones from `scikit-learn`, such as illustrated in [this](http://scikit-learn.org/stable/auto_examples/applications/topics_extraction_with_nmf.html) topic extraction example. However, you may find useful other libraries, which implement a wide variety of clustering algorithms, including Non-Negative Matrix Factorization (NMF). One of such libraries is [Python Matrix Factorization (PyMF)](https://code.google.com/p/pymf) ([source code](https://github.com/nils-werner/pymf)). Another, even more interesting, library, also Python-based, is [NIMFA](http://nimfa.biolab.si), which implements various NMF algorithms. Here's a [research paper](http://jmlr.org/papers/volume13/zitnik12a/zitnik12a.pdf), describing `NIMFA`. [Here's](http://nimfa.biolab.si/nimfa.examples.documents.html) an example from its documentation, which presents the solution for very similar text processing problem of topic clustering.
2558
1
4896
null
3
66
What are some possible techniques for smoothing proportions across very large categories, in order to take into account the sample size? The application of interest here is to use the proportions as input into a predictive model, but I am wary of using the raw proportions in cases where there is little evidence and I don't want to overfit. Here is an example, where the ID denotes a customer and impressions and clicks are the number of ads shown and clicks the customer has made, respectively. ![enter image description here](https://i.stack.imgur.com/3oHzQ.jpg)
Smoothing Proportions :: Massive User Database
CC BY-SA 3.0
null
2014-11-28T03:02:44.323
2015-01-17T05:20:32.763
null
null
1138
[ "machine-learning", "predictive-modeling", "feature-extraction" ]
A simple way would be to consider Laplace smoothing ([http://en.wikipedia.org/wiki/Additive_smoothing](http://en.wikipedia.org/wiki/Additive_smoothing)) or something like it. Basically, instead of calculating your response rate as (Clicks)/(Impressions) you calculate (Clicks + X)/(Impressions + Y), with X and Y chosen, for example, so that X/Y is the global average of clicks/impressions. When Clicks and Impressions are both high, this smoothed response rate is basically equal to the true response rate (the signal dominates the prior). When Clicks and Impressions are both low, this smoothed response rate will be close to the global average response rate - a good guess when you have little data and don't want to put much weight on it! The absolute scale of X and Y will determine how many data points you consider "enough data". It's been argued that the right thing to do is to set X to 1 and choose Y appropriately given that.
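A small pandas sketch of this smoothing (the counts and the choice of Y are made up; in practice you would pick Y to reflect how many impressions you consider "enough data"):

```python
import pandas as pd

# Toy impression/click counts per customer, invented for illustration.
df = pd.DataFrame({
    "id":          ["a", "b", "c"],
    "impressions": [10000, 50, 2],
    "clicks":      [300, 2, 1],
})

global_rate = df["clicks"].sum() / df["impressions"].sum()
Y = 100                          # "pseudo-impressions": how much data counts as "enough"
X = Y * global_rate              # chosen so X / Y equals the global click rate

df["raw_rate"]      = df["clicks"] / df["impressions"]
df["smoothed_rate"] = (df["clicks"] + X) / (df["impressions"] + Y)
print(df)
```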
Large Scale Personalization - Per User vs Global Models
The answer to this question is going to vary pretty wildly depending on the size and nature of your data. At a high level, you could think of it as a special case of multilevel models; you have the option of estimating a model with complete pooling (i.e., a universal model that doesn't distinguish between users), models with no pooling (a separate model for each user), and partially pooled models (a mixture of the two). You should really read Andrew Gelman on this topic if you're interested. You can also think of this as a learning-to-rank problem that either tries to produce point-wise estimates using a single function or instead tries to optimize on some list-wise loss function (e.g., NDCG). As with most machine learning problems, it all depends on what kind of data you have, the quality of it, the sparseness of it, and what kinds of features you are able to extract from it. If you have reason to believe that each and every user is going to be pretty unique in their behavior, you might want to build a per-user model, but that's going to be unwieldy fast -- and what do you do when you are faced with a new user?
2579
1
2594
null
5
2342
Which of the following is best (or most widely used) for calculating an item-item similarity measure in Mahout, and why? ``` Pearson Correlation Spearman Correlation Euclidean Distance Tanimoto Coefficient LogLikelihood Similarity ``` Is there any rule of thumb to choose from this set of algorithms, and how does one differentiate between them?
Mahout Similarity algorithm comparison
CC BY-SA 3.0
null
2014-12-02T11:12:06.103
2014-12-11T06:16:59.327
null
null
5091
[ "machine-learning", "data-mining", "statistics", "algorithms", "recommender-system" ]
For those not familiar, item-item recommenders calculate similarities between items, as opposed to user-user (or user-based) recommenders, which calculate similarities between users. Although some algorithms can be used for both, this question is in regard to item-item algorithms (thanks for being specific in your question). Accuracy or effectiveness of recommenders is evaluated by comparing recommendations to previously collected data. For example, I have shopping cart data from the last six months; I'll use the first 5 months as training data, then run my various algorithms, and compare the quality against what really happened during the 6th month. The reason Mahout ships with so many algorithms is because different algorithms are more or less effective on each data set you may work with. So, ideally, you do some testing as I described with many algorithms and compare the accuracy, then choose the winner. Interestingly, you can also take other factors into account, such as the need to minimize the data set (for performance reasons), and run your tests only with a certain portion of the training data available. In such a case, one algorithm may work better with the smaller data set, but another may work better with the complete set. Then, you get to weigh performance vs. accuracy vs. the challenge of implementation (such as deploying on a Hadoop cluster). Therefore, different algorithms are suited to different projects. However, there are some general rules: - All algorithms always do better with unreduced data sets (more data is better). - More complex algorithms aren't necessarily better. I suggest starting with a simple algorithm and ensuring you have high-quality data. If you have additional time, you can implement more complex algorithms and create a comparison which is unique to your data set. Most of my info comes from [this study](http://ai.arizona.edu/intranet/papers/comparative.ieeeis.pdf). You'll find lots of detail about implementation there.
Choosing a distance metric and measuring similarity
It appears to me that what you're looking for in your use-case is not clustering - it's a distance metric. When you get a new data point, you want to find the 3-5 most similar data points; there's no need for clustering for it. Calculate the distance from the new data point to each of the 'old' data points, and select the top 3-5. Now, which distance metric to pick? There are options. If you're using SKLearn, I'd look over [this page](https://scikit-learn.org/stable/modules/metrics.html) for example of distance(/similarity) metrics. If your features are continuous, you can normalize them and use [cosine similarity](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html); Start with this, and see if it fits.
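A minimal scikit-learn sketch of this retrieve-the-most-similar idea (random stand-in data for the six continuous features):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
old = rng.normal(size=(100, 6))        # stand-in for the 'old' continuous features
new = rng.normal(size=(1, 6))          # a single new data point

scaler = StandardScaler().fit(old)     # normalise with statistics from the old data
sims = cosine_similarity(scaler.transform(new), scaler.transform(old)).ravel()

top5 = np.argsort(sims)[::-1][:5]      # indices of the 5 most similar old points
print(top5, sims[top5])
```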
2582
1
2595
null
11
2129
I am currently using SVM and scaling my training features to the range of [0,1]. I first fit/transform my training set and then apply the same transformation to my testing set. For example: ``` ### Configure transformation and apply to training set min_max_scaler = MinMaxScaler(feature_range=(0, 1)) X_train = min_max_scaler.fit_transform(X_train) ### Perform transformation on testing set X_test = min_max_scaler.transform(X_test) ``` Let's assume that a given feature in the training set has a range of [0,100], and that the same feature in the testing set has a range of [-10,120]. In the training set that feature will be scaled appropriately to [0,1], while in the testing set that feature will be scaled to a range outside of that first specified, something like [-0.1,1.2]. I was wondering what the consequences are of the testing set features being out of the range of those used to train the model? Is this a problem?
Consequence of Feature Scaling
CC BY-SA 3.0
null
2014-12-02T16:19:19.043
2014-12-03T18:57:22.773
null
null
802
[ "machine-learning", "svm", "feature-scaling" ]
Within each class, you'll have distributions of values for the features. That in itself is not a reason for concern. From a slightly theoretical point of view, you can ask yourself why you should scale your features and why you should scale them in exactly the chosen way. One reason may be that your particular training algorithm is known to converge faster (better) with values around 0 - 1 than with features which cover other orders of magnitude. In that case, you're probably fine. My guess is that your SVM is fine: you want to avoid too-large numbers because of the inner product, but a max of 1.2 vs. a max of 1.0 won't make much of a difference. (OTOH, if you e.g. knew your algorithm not to accept negative values you'd obviously be in trouble.) The practical question is whether your model performs well for cases that are slightly out of the range covered by training. This I believe can best and possibly only be answered by testing with such cases / inspecting test results for a performance drop for cases outside the training domain. It is a valid concern and looking into this would be part of the validation of your model. Observing differences of the size you describe is IMHO a reason to have a pretty close look at model stability.
Feature Scaling and Mean Normalization
I agree with the existing answer that feature scaling is a superset into which techniques like mean normalization, residual normalization, etc. fall. So, assuming that by feature scaling you mean the techniques other than mean normalization, I would attempt to answer your questions: > 1) Can I mix and match these two approaches? e.g. Feature Scale x1 and Mean Normalize x2? In most cases no. Generally, only one normalization technique is used and it is usually sufficient. In addition to that argument, it should also be noted that any normalization technique introduces duplication in the data records (not necessarily redundant duplication). So, pretty much a single normalization technique suffices most of the time. > 2) How do you determine which of these options to apply? It seems that either could accomplish the task of increasing your convergence rates... I suppose you just need to know your data set to understand which will reliably reduce your values while leaving as few outliers as possible? Yes, you are right. The selection of the technique depends on the data. And the feature scaling (and normalization) process is part of data cleaning. So, it is done immediately after the selection of the relevant data for the analytics process.
2593
1
2635
null
8
123
I want to write a data-mining service in [Google Go](http://golang.org) which collects data through scraping and APIs. However, as Go lacks good ML support, I would like to do the ML stuff in Python. Having a web background, I would connect both services with something like RPC, but as I believe that this is a common problem in data science, I think that there is some better solution. For example, most (web) protocols fall short at: - buffering between processes - clustering over multiple instances So what (type of libraries) do data scientists use to connect different languages/processes? Bodo
How to connect data-mining with machine learner process
CC BY-SA 3.0
null
2014-12-03T15:56:50.687
2014-12-07T11:29:14.057
null
null
5266
[ "machine-learning", "data-mining" ]
I am not 100% sure if a message queue library is the right tool for this job, but so far it looks that way to me. With a messaging library like: - nsq - zeromq - mqtt (?) you can connect processes running in different environments through a TCP-based protocol. As these systems run distributed, it is possible to connect multiple nodes. For nsq there are even client libraries in both Python and Go!
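A minimal sketch of the Python side with ZeroMQ (the port and the JSON message format are arbitrary choices for illustration; the Go scraper would connect with a matching REQ socket):

```python
import zmq

# Python ML worker: replies to feature vectors sent by the Go scraping service.
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")            # the Go side uses a REQ socket and connects here

while True:
    request = socket.recv_json()       # e.g. {"features": [0.1, 0.7, 0.3]}
    score = sum(request["features"])   # placeholder for a real model prediction
    socket.send_json({"score": score})
```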
References/tutorials about data mining and machine learning
I recommend "[An Introduction to Statistical Learning](https://faculty.marshall.usc.edu/gareth-james/)" (ISL) by Gareth James, Daniela Witten, Trevor Hastie and Rob Tibshirani. The book is [available online](https://www.statlearning.com/). The book covers a number of topics in ML and for each topic there is a "lab" in R (code is also available for [Python](https://github.com/emredjan/ISL-python), in fact there are several pages on Github covering the labs). ISL is not too technical but gives a sound introduction. In case you want to advance on some topics, you can switch to the advanced book "[Elements of Statistical Learning](https://web.stanford.edu/%7Ehastie/Papers/ESLII.pdf)" (ELS).
2596
1
2597
null
5
122
I've been toying with this idea for a while. I think there is probably some method in the text mining literature, but I haven't come across anything just right... What is/are some methods for tackling a problem where the number of variables is itself a variable? This is not a missing data problem, but one where the nature of the problem fundamentally changes. Consider the following example: Suppose I want to predict who will win a race, a simple multinomial classification problem. I have lots of past data on races, plenty to train on. Let's further suppose I have observed each contestant run multiple races. The problem however is that the number of racers is variable. Sometimes there are only 2 racers, sometimes there are as many as 100 racers. One solution might be to train a separate model for each number of racers, resulting in 99 models in this case, using any method I choose. E.g. I could have 100 random forests. Another solution might be to include an additional variable called 'number_of_contestants' and have input fields for 100 racers and simply leave them blank when no racer is present. Intuitively, it seems that this method would have difficulties predicting the outcome of a 100-contestant race if the number of racers follows a Poisson distribution (which I didn't originally specify in the problem, but I am saying it here). Thoughts?
Method for solving problem with variable number of predictors
CC BY-SA 3.0
null
2014-12-03T21:47:34.907
2014-12-04T23:26:15.803
null
null
5247
[ "machine-learning", "data-mining", "classification", "statistics", "nlp" ]
I don't see the problem. All you need is a learner to map a bit string as long as the total number of contestants, representing the subset who are taking part, to another bit string (with only one bit set) representing the winner, or a ranked list, if you want them all (assuming you have the whole list in your training data). In the latter case you would have a learning-to-rank problem. If the contestant landscape can change it would help to find a vector space embedding for them so you can use the previous embeddings as an initial guess and rank anyone, even hypothetical, given their vector representation. As the number of users increases the embedding should stabilize and retraining should become less costly. The question is how to find the embedding, of course. If you have a lot of training data, you could probably find a randomized one along with the ranking function. If you don't, you would have to generate the embedding by some algorithm and estimate only the ranking function. I have not faced your problem before so I can't direct you to a particular paper, but the recent NLP literature should give you some inspiration, e.g. [this](http://jmlr.org/papers/volume13/shalit12a/shalit12a.pdf). I still think it is feasible.
How to build a predictive model with multiple features?
My recommendations is to OneHotEncode this variable, to finally obtain something like this: |ID_Parent |SumDollars |ConcatenatedFruit_Apple |ConcatenatedFruit_Banana |ConcatenatedFruit_Lime |ConcatenatedFruit_Orange |Etc |StartDate |CompletionDate | |---------|----------|-----------------------|------------------------|----------------------|------------------------|---|---------|--------------| |AA |500 |1 |0 |0 |0 |... |1/1/2020 |2/15/2020 | |AB |3000 |1 |1 |0 |0 |... |1/1/2020 |5/15/2020 | |AB |9000 |1 |0 |1 |1 |... |5/1/2020 |3/20/2020 | Moreover, if you OneHotEncoded this way, random forest can deal perfectly with this categorical feature. Here I provide you one code that will do what I commented: ``` import pandas as pd df = pd.DataFrame({'id': [0, 1, 2], 'class': ['2 3', '1 3', '3 5']}) df['class'] = df['class'].apply(lambda x: x.split(' ')) df_long = df.explode('class') df_one_hot_encoded = pd.concat([df, pd.get_dummies(df_long['class'],prefix='class', prefix_sep='_')], axis=1) df_one_hot_encoded_compact = df_one_hot_encoded.groupby('id').max().reset_index() ``` I've extracted it from [here](https://stackoverflow.com/questions/37646473/how-could-i-do-one-hot-encoding-with-multiple-values-in-one-cell) (answered by OmaymaS)
2598
1
2599
null
16
30469
I would like to know how exactly Mahout user-based and item-based recommendation differ from each other. The documentation states that [User-based](https://mahout.apache.org/users/recommender/userbased-5-minutes.html): Recommend items by finding similar users. This is often harder to scale because of the dynamic nature of users. [Item-based](https://mahout.apache.org/users/recommender/intro-itembased-hadoop.html): Calculate similarity between items and make recommendations. Items usually don't change much, so this often can be computed offline. But although there are two kinds of recommendation available, my understanding is that both will take some data model (say 1,2 or 1,2,.5 as item1,item2,value or user1,user2,value where value is not mandatory), perform all calculations using whichever similarity measure and built-in recommender function we choose, and that we can run both user- and item-based recommendation on the same data (is this a correct assumption?). So I would like to know how exactly, and in which aspects, these two types of algorithm differ.
Item based and user based recommendation difference in Mahout
CC BY-SA 3.0
null
2014-12-04T05:18:03.720
2020-08-16T13:02:03.173
2015-11-24T12:20:21.137
5091
5091
[ "machine-learning", "data-mining", "algorithms", "recommender-system" ]
You are correct that both models work on the same data without any problem. Both methods operate on a matrix of user-item ratings. In the user-based approach the algorithm produces a rating for an item `i` by a user `u` by combining the ratings of other users `u'` that are similar to `u`. Similar here means that the two users' ratings have a high Pearson correlation or cosine similarity or something similar. In the item-based approach we produce a rating for `i` by `u` by looking at the set of items `i'` that are similar to `i` (in the same sense as above, except now we'd be looking at the ratings that items have received from users) that `u` has rated, and then combining the ratings by `u` of `i'` into a predicted rating by `u` for `i`. The item-based approach was invented at [Amazon](http://dl.acm.org/citation.cfm?id=642471) to address their scale challenges with user-based filtering. The number of things they sell is much smaller and much less dynamic than the number of users, so the item-item similarities can be computed offline and accessed when needed.
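A small numeric sketch of the item-based approach (toy ratings matrix; cosine similarity is one of several options):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows = users, columns = items; 0 means "not rated" (toy data).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# Item-item similarities from the columns of the rating matrix
# (can be precomputed offline, which is the point of the item-based approach).
item_sim = cosine_similarity(R.T)

def predict(user, item):
    rated = np.nonzero(R[user])[0]                 # items this user has rated
    w = item_sim[item, rated]
    return np.dot(w, R[user, rated]) / (np.abs(w).sum() + 1e-9)

print(predict(user=1, item=2))   # predicted rating of item 2 for user 1
```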
How to create user and item profile in an item to item collaborative filtering? (Non-rating case)
You can use your item and user profiles to generate a prediction function (i.e. a function that will predict how relevant a coupon will be for a user, also known as a representation for your items and users). Typical functions used for this purpose are the dot product and cosine similarity. This step ensures the intelligence in the recommendations will incorporate the category information. Once your scores are predicted, you must compare them to your historical interactions by using a loss function (some examples are root-mean-square error and Kullback-Leibler divergence). This step will produce an error that you can use to inform your algorithm for learning (e.g. adjust the weights in your representation functions for items and users). For more details I strongly suggest you check out the great [slides](https://www.slideshare.net/JamesKirk58/boston-ml-architecting-recommender-systems) compiled by James Kirk, which provide a framework to unify the research that has been done on the topic.
2646
1
2648
null
20
23373
Are there any articles or discussions about extracting the part of a text that holds the most information about the current document? For example, I have a large corpus of documents from the same domain. There are parts of the text that hold the key information about what a single document talks about. I want to extract some of those parts and use them as a kind of summary of the text. Is there any useful documentation about how to achieve something like this? It would be really helpful if someone could point me in the right direction as to what I should search for or read to get some insight into work that might have already been done in this field of natural language processing.
Extract most informative parts of text from documents
CC BY-SA 3.0
null
2014-12-08T14:51:27.613
2019-03-19T15:01:33.430
null
null
2750
[ "nlp", "text-mining" ]
What you're describing is often achieved using a simple combination of [TF-IDF](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) and [extractive summarization](http://en.wikipedia.org/wiki/Automatic_summarization#Extraction-based_summarization). In a nutshell, TF-IDF tells you the relative importance of each word in each document, in comparison to the rest of your corpus. At this point, you have a score for each word in each document approximating its "importance." Then you can use these individual word scores to compute a composite score for each sentence by summing the scores of each word in each sentence. Finally, simply take the top-N scoring sentences from each document as its summary. Earlier this year, I put together an iPython Notebook that culminates with an implementation of this in Python using NLTK and Scikit-learn: [A Smattering of NLP in Python](https://github.com/charlieg/A-Smattering-of-NLP-in-Python).
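A minimal Python version of that TF-IDF-plus-extraction pipeline (toy document, naive sentence splitting; with a real corpus you would fit the vectorizer across all documents):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

document = (
    "The plant produces purified water. "
    "Local politics delayed the water project for years. "
    "Engineers praised the purification technology."
)
sentences = [s.strip() for s in document.split(". ") if s.strip()]

# Score each sentence by summing the TF-IDF weights of its words.
tfidf = TfidfVectorizer(stop_words="english")
weights = tfidf.fit_transform(sentences).toarray()
scores = weights.sum(axis=1)

top_n = 1
summary = [sentences[i] for i in np.argsort(scores)[::-1][:top_n]]
print(summary)
```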
Extract key phrases from a single document
A related keyword to your case can be Single Document Keyword Extraction. A good paper about this is: > We present a new keyword extraction algorithm that applies to a single document without using a corpus. Frequent terms are extracted first, then a set of cooccurrence between each term and the frequent terms, i.e., occurrences in the same sentences, is generated. Co-occurrence distribution shows importance of a term in the document as follows. If probability distribution of co-occurrence between term a and the frequent terms is biased to a particular subset of frequent terms, then term a is likely to be a keyword. The degree of biases of distribution is measured by the $\chi^2$-measure. Our algorithm shows comparable performance to tfidf without using a corpus. You can find the paper [here](https://ocs.aaai.org/Papers/FLAIRS/2003/Flairs03-076.pdf). In sum, this paper gives a rank on keywords based on the defined $\chi^2$-measure.
2654
1
2664
null
1
135
I'm wondering if there is a web framework well suited for placing recommendations on content. In most cases, a data scientist goes through after the fact and builds (or uses) a completely different tool to create recommendations. This involves analyzing traffic logs, a history of shopping cart data, ratings, and so forth. It usually comes from multiple sources (the web server, the application's database, Google Analytics, etc.) and then has to be cleaned up and processed, THEN delivered back to the application in a way it understands. Is there a web framework on the market which handles collecting this data up front, so as to minimize the retrospective data wrangling?
Web Framework Built for Recommendations
CC BY-SA 3.0
null
2014-12-09T02:28:51.430
2014-12-10T02:05:33.430
null
null
3466
[ "predictive-modeling", "data-cleaning" ]
I haven't seen anything like that and very much doubt that such frameworks exist, at least, as complete frameworks. The reason for this is IMHO the fact that data transformation and cleaning is very domain- and project-specific. Having said that, there are multiple tools that can help with these activities in terms of partial automation and integration with and between existing statistical and Web frameworks. For example, for Python, the use of data manipulation library `pandas` as well as machine learning library `scikit-learn` can be easily integrated with Web frameworks (especially Python-based, but not necessarily), as these libraries are also Python-based. These and other Python data science tools that might be of interest can be found here: [http://pydata.org/downloads](http://pydata.org/downloads). Specifically, for cleaning and pre-processing tasks, which you asked about, `pandas` seem to be the first tool to explore. Again, for Python, the following discussion on StackOverflow on methods and approaches might be helpful: [https://stackoverflow.com/q/14262433/2872891](https://stackoverflow.com/q/14262433/2872891). Consider an example of another platform. The use of `pandas` for data transformation and cleaning is rather low-level. The platform that I like very much and currently use as the platform of choice for data science tasks is `R`. Rich ecosystem of R packages especially shines in the area of data transformation and cleaning. This is because, in addition to very flexible low-level methods of performing these tasks, there are some R packages, which take a higher-level approach to the problem, which may potentially improve developer's productivity and decrease the amount of defects. In particular, I'm talking about two packages, which I find very promising: `editrules` and `deducorrect`. You can find more detailed information about these and other R packages for data transformation and cleaning in my another answer here on Data Science StackExchange (paper that I reference in the last link there could be especially useful, as it presents an approach to data transformation and cleaning that is generic enough, so that could be used as a framework for this on any decent platform): [https://datascience.stackexchange.com/a/722/2452](https://datascience.stackexchange.com/a/722/2452). UPDATE: On the topic of recommender systems and their integration with data wrangling tools and Web frameworks, you may find my other answer here on DS SE useful: [https://datascience.stackexchange.com/a/836/2452](https://datascience.stackexchange.com/a/836/2452).
CV (Curriculum Vitae) Recommender System using Machine Learning, Python, Apache Solr(Back-end), AngularJS (Front-end)
I guess, there are many ways, to recommend CVs, but here is what I would do: I would use TF-IDF or LSI, so an unsupervised learning approach and I would use a corpus. I would apply word stemming, create a dictionary containing unique word stems of the CVs, create a corpus containing word stem id from dictionary and word stem count for each word stem in each CV, so the document frequencies (DF) for all word stems, then create [TF-IDF (term frequency inverse document frequency)](https://en.m.wikipedia.org/wiki/Tf%E2%80%93idf) model or an [LSI (latent semantic indexing)](https://en.m.wikipedia.org/wiki/Latent_semantic_analysis) model from dictionary and corpus. Then you have TF-IDF or LSI vectors for all CVs. For matching and recommending, you apply word stemming to text input you want to match CVs with, calculate the LSI or TF-IDF vector for this input and match it with the most similar CV by using [cosine similarity](https://en.m.wikipedia.org/wiki/Cosine_similarity) calculation. [Here](https://github.com/franziska-w/pythonscripts/blob/master/Python-Script%20(2.7)%20for%20LSI%20(Latent%20Semantic%20Indexing)%20Document%20Matching%20(Example).py) and [here](https://github.com/franziska-w/pythonscripts/blob/master/Python-Script%20(2.7)%20for%20TF-IDF%20(Term%20Frequency%20Inverse%20Document%20Frequency)%20Document%20Matching%20(Example).py) you can find short Python code examples for the approach I described using LSI or TF-IDF.
2659
1
2662
null
1
6643
EDIT It was pointed out in the answers section that I am confusing k-means and kNN. Indeed I was thinking about kNN but wrote k-means since I'm still new to this topic and confuse the terms quite often. So here is the changed question. I was looking at kNN today and something struck me as odd or - to be more precise - something that I was unable to find information about, namely the following situation. Imagine that we pick kNN for some dataset. I want to remain as general as possible, thus $k$ will not be specified here. Further we select, at some point, an observation where the number of neighbors that fulfill the requirement to be in the neighbourhood is actually more than the specified $k$. What criterion/criteria should be applied here if we are restricted to use the specific $k$ and thus cannot alter the structure of the neighborhood (number of neighbors)? Which observations will be left out and why? Also, is this a problem that occurs often, or is it something of an anomaly?
kNN - what happens if more than K observations have the same distance to the centroid of the cluster
CC BY-SA 3.0
null
2014-12-09T21:13:10.797
2014-12-12T21:05:30.020
2014-12-12T21:05:30.020
84
5356
[ "machine-learning", "classification" ]
You are mixing up kNN classification and k-means. There is nothing wrong with having more than k observations near a center in k-means. In fact, this is the usual case; you shouldn't choose k too large. If you have 1 million points, a k of 100 may be okay. K-means does not guarantee clusters of a particular size. Worst case, clusters in k-means can have only one element (outliers) or even disappear. What you probably meant to write, but got mixed up, is what to do if a point is at the same distance to two centers. From a statistical point of view, it doesn't matter. Both have the same squared error. From an implementation point of view, choose any deterministic rule, so that your algorithm converges and doesn't go into an infinite loop of reassignment. Update: with respect to kNN classification: There are many ways to resolve this, and they will surprisingly often work just as well as one another, without a clear advantage of one over the others: - randomly choose a winner from the tied objects - take all of them into account with equal weighting - if you have m tied objects at the boundary distance but only room for r more neighbors, then put a weight of r/m on each of them. E.g. k=5. ``` distance label weight 0 A 1 1 B 1 1 A 1 2 A 2/3 2 B 2/3 2 B 2/3 ``` yields A=2.67, B=2.33 The reason that randomly choosing works just as well as the others is that usually, the majority decision in kNN will not be changed by contributions with a weight of less than 1; in particular when k is larger than, say, 10.
In K-means, what happens if a centroid is never the closest to any point?
It usually indicates bad starting centroids. If it happens later in the process, it may indicate that k-means doesn't work well on this data, because a stable clustering just isn't easy to find.
2670
1
2679
null
15
4056
I'm trying to find an equivalent of Hinton diagrams for multilayer networks to plot the weights during training. The trained network is somewhat similar to a deep SRN, i.e. it has a large number of weight matrices, which would make the simultaneous plot of several Hinton diagrams visually confusing. Does anyone know of a good way to visualize the weight update process for recurrent networks with multiple layers? I haven't found many papers on the topic. I was thinking of displaying time-related information on the weights per layer instead if I can't come up with something. E.g. the weight delta over time for each layer (omitting the use of every single connection). PCA is another possibility, though I'd like to avoid many additional computations, since the visualization is done online during training.
Visualizing deep neural network training
CC BY-SA 3.0
null
2014-12-10T10:15:00.940
2020-08-05T14:57:38.660
null
null
5316
[ "machine-learning", "neural-network", "visualization", "deep-learning" ]
The closest thing I know of is [ConvNetJS](http://cs.stanford.edu/people/karpathy/convnetjs/): > ConvNetJS is a Javascript library for training Deep Learning models (mainly Neural Networks) entirely in your browser. Open a tab and you're training. No software requirements, no compilers, no installations, no GPUs, no sweat. Demos on this site plot weights and how they change with time (bear in mind, it's many parameters, as practical networks do have a lot of neurons). Moreover, if you are not satisfied with their plotting, there is access to the network's parameters and you can plot as you wish (since it is JavaScript).
Visualizing convolutional neural networks embedding
Yes, the approach you propose is sound and widely applied. Instead of PCA, I suggest using [U-MAP](https://umap-learn.readthedocs.io/en/latest/), which will probably yield better results (also better than t-SNE). The representation you may use as input to U-MAP is the output of the last layer before the projection to the label-space dimensionality (e.g. with a 5-class classifier, you would take the vector representation before projection to the 5-dimensional space).
2677
1
2678
null
8
1509
I have a document classification project where I am getting site content and then assigning one of numerous labels to the website according to content. I found out that [tf-idf](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) could be very useful for this. However, I was unsure as to when exactly to use it. Assuming a website that is concerned with a specific topic makes repeated mention of it, this was my current process: - Retrieve site content, parse for plain text - Normalize and stem content - Tokenize into unigrams (maybe bigrams too) - Retrieve a count of each unigram for the given document, filtering low length and low occurrence words - Train a classifier such as NaiveBayes on the resulting set My question is the following: Where would tf-idf fit in here? Before normalizing/stemming? After normalizing but before tokenizing? After tokenizing? Any insight would be greatly appreciated. --- Edit: Upon closer inspection, I think I may have run into a misunderstanding as to how TF-IDF operates. At the above step 4 that I describe, would I have to feed the entirety of my data into TF-IDF at once? If, for example, my data is as follows: ``` [({tokenized_content_site1}, category_string_site1), ({tokenized_content_site2}, category_string_site2), ... ({tokenized_content_siten}, category_string_siten)] ``` Here, the outermost structure is a list, containing tuples, containing a dictionary (or hashmap) and a string. Would I have to feed the entirety of that data into the TF-IDF calculator at once to achieve the desired effect? Specifically, I have been looking at the [scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) TfidfVectorizer to do this, but I am a bit unsure as to its use as examples are pretty sparse.
Document classification: tf-idf prior to or after feature filtering?
CC BY-SA 3.0
null
2014-12-10T16:08:03.537
2014-12-11T16:38:53.367
2014-12-11T16:38:53.367
5199
5199
[ "classification", "feature-selection", "feature-extraction" ]
As you've described it, Step 4 is where you want to use TF-IDF. Essentially, TF-IDF will count each term in each document, and assign a score given the relative frequency across the collection of documents. There's one big step missing from your process, however: annotating a training set. Before you train your classifier, you'll need to manually annotate a sample of your data with the labels you want to be able to apply automatically using the classifier. To make all of this easier, you might want to consider using the [Stanford Classifier](http://nlp.stanford.edu/software/classifier.shtml). It will perform the feature extraction and build the classifier model (supporting several different machine learning algorithms), but you'll still need to annotate the training data by hand.
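Regarding the edit about TfidfVectorizer: yes, the vectorizer is fit on the whole (annotated) training corpus at once, typically on raw or pre-stemmed strings rather than per-document count dictionaries. A minimal scikit-learn sketch (toy texts and labels invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus: (normalized/stemmed site text, label) pairs.
train_texts  = ["cheap flight hotel deal", "politics election vote", "flight booking travel"]
train_labels = ["travel", "news", "travel"]

# The vectorizer builds the vocabulary and IDF weights from all training
# documents at once, then the classifier is trained on the TF-IDF matrix.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), MultinomialNB())
clf.fit(train_texts, train_labels)

print(clf.predict(["vote in the election"]))   # expected to favour the 'news' label
```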
TF-IDF Features vs Embedding Layer
It is common for TFIDF to be a strong model. People constantly get high places in Kaggle competitions with TFIDF models. Here is a link to the winning solution that used TFIDF as one of its features ([1st place Otto product classification](https://www.kaggle.com/c/otto-group-product-classification-challenge/discussion/14335)). You will most likely get a stronger model if you combine the TFIDF and RNN into one ensemble. Other results from Kaggle: - 2nd place: https://www.kaggle.com/c/stumbleupon/discussion/6184 - 4th place: https://www.kaggle.com/c/avito-demand-prediction/discussion/59881 - 3rd place: https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/52762 [https://www.kaggle.com/c/avito-demand-prediction/discussion/56897](https://www.kaggle.com/c/avito-demand-prediction/discussion/56897): > A good number of kernels are going the traditional route with CountVectorizer/TF-IDF, and some brave souls (I say brave because training is slower and the results don't seem as spectacular so far) have been experimenting with embeddings, as per the previous competitions.
3690
1
3703
null
0
1424
I'm wondering if it's possible to export a model trained in R to OpenCV's Machine Learning (ML) library format? The latter appears to save/read models in [XML/YAML](http://docs.opencv.org/modules/ml/doc/statistical_models.html#cvstatmodel-load), whereas the former might be exportable via [PMML](http://cran.r-project.org/web/packages/pmml/index.html). Specifically, I'm working with random forests, which are classifiers available both in R and in OpenCV's ML library. Any advice on how I can get the two to share models would be greatly appreciated.
Exporting R model to OpenCV's Machine Learning Library
CC BY-SA 3.0
null
2014-12-11T23:56:03.023
2014-12-14T03:42:25.213
null
null
6390
[ "machine-learning", "r", "open-source" ]
Instead of exporting your models, consider creating an R-based interoperable environment for your modeling needs. Such an environment would consist of the R environment proper as well as integration layers for your third-party libraries. In particular, for the OpenCV project, consider either using the `r-opencv` open source project ([https://code.google.com/p/r-opencv](https://code.google.com/p/r-opencv)), or integration via the OpenCV C++ APIs and the R `Rcpp` package ([http://dirk.eddelbuettel.com/code/rcpp.html](http://dirk.eddelbuettel.com/code/rcpp.html)). Finally, if you want to add PMML support to the mix and create a deployable-to-cloud solution, take a look at the following excellent blog post with relevant examples: [http://things-about-r.tumblr.com/post/37861967022/predictive-modeling-using-r-and-the](http://things-about-r.tumblr.com/post/37861967022/predictive-modeling-using-r-and-the).
How do I use the model generated by the R package poLCA to classify new data as belonging to one of the classes?
As Paolo says, use the `poLCA.posterior()` function. The data comes out in the same format as the `lca_model$posterior` structure returned by the `poLCA` function. ``` library(poLCA) data(election) column_names <- c('MORALG', 'CARESG', 'KNOWG', 'LEADG', 'DISHONG', 'INTELG', 'MORALB', 'CARESB', 'KNOWB', 'LEADB', 'DISHONB', 'INTELB') election_matrix = as.matrix(mapply(as.numeric, election[,column_names])) election_matrix_no_na = election_matrix[apply(election_matrix, 1, function(x) all(is.finite(x))),] preds = poLCA.posterior(lc=lca_model, y=election_matrix_no_na) ```
3693
1
3701
null
8
2905
I'm working on a project which asks fellow students to share their original text data for further analysis using data mining techniques, and I think it would be appropriate to anonymize student names in their submissions. Setting aside the better solution of a URL where students submit their work and a backend script inserts the anonymized ID, what sort of solutions could I direct students to implement on their own to anonymize their own names? I'm still a noob in this area. I don't know what the norms are. I was thinking the solution could be a hashing algorithm. That sounds like a better solution than making up a fake name, as two people could pick the same fake name. What are some of the concerns I should be aware of?
What are the best practices to anonymize user names in data?
CC BY-SA 3.0
null
2014-12-12T03:00:57.507
2014-12-13T21:47:49.987
null
null
2742
[ "machine-learning", "data-cleaning" ]
I suspected you were using the names as identifiers. You shouldn't; they're not unique and they raise this privacy issue. Use instead their student numbers, which you can verify from their IDs, stored in hashed form. Use the student's last name as a salt, for good measure (form the string to be hashed by concatenating the ID number and the last name).
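A minimal sketch students could run themselves (SHA-256 is one reasonable choice; truncating the digest is optional and only for readability):

```python
import hashlib

def anonymize(student_number: str, last_name: str) -> str:
    """Salted hash of the student number, using the last name as the salt."""
    salted = (student_number + last_name.lower()).encode("utf-8")
    return hashlib.sha256(salted).hexdigest()[:12]   # shortened ID for readability

print(anonymize("20231234", "Smith"))
```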
How can I transform names in a confidential data set to make it anonymous, but preserve some of the characteristics of the names?
One of the references I mentioned in the OP led me to a potential solution that seems quite powerful, described in "Privacy-preserving record linkage using Bloom filters" ([doi:10.1186/1472-6947-9-41](http://www.biomedcentral.com/1472-6947/9/41)): > A new protocol for privacy-preserving record linkage with encrypted identifiers allowing for errors in identifiers has been developed. The protocol is based on Bloom filters on q-grams of identifiers. The article goes into detail about the method, which I will summarize here to the best of my ability. A Bloom filter is a fixed-length series of bits storing the results of a fixed set of independent hash functions, each computed on the same input value. The output of each hash function should be an index value from among the possible indexes in the filter; i.e., if you have a 0-indexed series of 10 bits, hash functions should return (or be mapped to) values from 0 to 9. The filter starts with each bit set to 0. After hashing the input value with each function from the set of hash functions, each bit corresponding to an index value returned by any hash function is set to 1. If the same index is returned by more than one hash function, the bit at that index is only set once. You could consider the Bloom filter to be a superposition of the set of hashes onto the fixed range of bits. The protocol described in the above-linked article divides strings into n-grams, which are in this case sets of characters. As an example, `"hello"` might yield the following set of 2-grams: ``` ["_h", "he", "el", "ll", "lo", "o_"] ``` Padding the front and back with spaces seems to be generally optional when constructing n-grams; the examples given in the paper that proposes this method use such padding. Each n-gram can be hashed to produce a Bloom filter, and this set of Bloom filters can be superimposed on itself (bitwise OR operation) to produce the Bloom filter for the string. If the filter contains many more bits than there are hash functions or n-grams, arbitrary strings are relatively unlikely to produce exactly the same filter. However, the more n-grams two strings have in common, the more bits their filters will ultimately share. You can then compare any two filters `A, B` by means of their Dice coefficient: > DA, B = 2h / (a + b) Where `h` is the number of bits that are set to 1 in both filters, `a` is the number of bits set to 1 in only filter A, and `b` is the number of bits set to 1 in only filter B. If the strings are exactly the same, the Dice coefficient will be 1; the more they differ, the closer the coefficient will be to `0`. Because the hash functions are mapping an indeterminate number of unique inputs to a small number of possible bit indexes, different inputs may produce the same filter, so the coefficient indicates only a probability that the strings are the same or similar. The number of different hash functions and the number of bits in the filter are important parameters for determining the likelihood of false positives - pairs of inputs that are much less similar than the Dice coefficient produced by this method predicts. I found [this tutorial](http://billmill.org/bloomfilter-tutorial/) to be very helpful for understanding the Bloom filter. 
There is some flexibility in the implementation of this method; see also [this 2010 paper](https://www.uni-due.de/~hq0215/documents/2010/Bachteler_2010_An_Empirical_Comparison_Of_Approaches_To_Approximate_String_Matching_In_Private_Record_Linkage.pdf) (also linked at the end of the question) for some indications of how performant it is in relation to other methods, and with various parameters.
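To make the q-gram/Bloom-filter idea concrete, here is a minimal Python sketch (my own illustration, not code from the paper; the paper uses keyed HMACs for the hash family, whereas this sketch fakes independent hash functions by salting SHA-256 with an index, and all parameter values are arbitrary):

```
import hashlib

def qgrams(s, q=2):
    """Split a padded, lower-cased string into overlapping q-grams."""
    padded = f"_{s.lower()}_"
    return [padded[i:i + q] for i in range(len(padded) - q + 1)]

def bloom_bits(s, n_bits=100, n_hashes=4):
    """Return the set of bit indexes set by hashing every q-gram with each 'hash function'."""
    bits = set()
    for gram in qgrams(s):
        for k in range(n_hashes):
            digest = hashlib.sha256(f"{k}:{gram}".encode("utf-8")).hexdigest()
            bits.add(int(digest, 16) % n_bits)
    return bits

def dice(a_bits, b_bits):
    """Dice coefficient 2h / (a + b), computed on the numbers of set bits."""
    return 2 * len(a_bits & b_bits) / (len(a_bits) + len(b_bits))

print(dice(bloom_bits("hello"), bloom_bits("helo")))    # close to 1: similar strings
print(dice(bloom_bits("hello"), bloom_bits("zurich")))  # much lower: dissimilar strings
```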
3700
1
3725
null
0
135
In [his thesis](http://web.cse.ohio-state.edu/~mbelkin/papers/PLM_UCTHESIS_03.pdf) (section 2.3.3) Belkin uses the heat equation to derive an approximation for $\mathcal{L}f$: $$\mathcal{L}f(x_i)\approx \frac{1}{t}\Big(f(x_i)-\alpha \sum_{x_j, ||x_i-x_j||<\epsilon}e^{-\frac{||x_i-x_j||^2}{4t}}f(x_j)\Big)$$ where $$\alpha=\Big(\sum_{x_j, ||x_i-x_j||<\epsilon}e^{-\frac{||x_i-x_j||^2}{4t}}\Big)^{-1}.$$ However, I'm not sure how these considerations lead to this choice of weights for the weight matrix (which will be used to construct the Laplacian): $$W_{ij} = \begin{cases} e^{-\frac{||x_i-x_j||^2}{4t}} & \text{if } ||x_i-x_j||<\epsilon \\ 0 & \text{otherwise} \end{cases}$$ A very vague idea of mine was that the factors $\alpha$ and $\frac{1}{t}$ don't change for a given $x_i$, so if one chooses the weights as above, the resulting discrete Laplacian would (leaving aside those two constants) converge to the continuous version. Any ideas or tips on what I'd have to read up on in order to get a better understanding?
Choice of weights for the Laplacian Eigenmaps algorithm
CC BY-SA 3.0
null
2014-12-13T19:27:56.213
2014-12-19T01:20:39.907
null
null
6415
[ "machine-learning" ]
Recall the definition he makes for the graph Laplacian earlier, $L = D -W $. Now consider the map in the RHS parentheses which I'll call $L^*$, $$ L^*f(x_i) := f(x_i) - \alpha \sum_{x_j, ||x_i-x_j||<\epsilon}e^{-\frac{||x_i-x_j||^2}{4t}}f(x_j).$$ The suggested weight matrix definition is natural because it lets us write $$ L^* := I - D^{-1}W. $$ Here's [a reference](http://www.cs.yale.edu/homes/singer/publications/laplacian_ACHA.pdf) to a related paper with some easy to read exposition. Hope this helps!
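For concreteness, here is a small NumPy sketch of the weight matrix from the question and the resulting map $L^* = I - D^{-1}W$ (my own illustration; the data and the $t$ and $\epsilon$ values are arbitrary placeholders):

```
import numpy as np

def random_walk_laplacian(X, t=0.1, eps=0.5):
    """Heat-kernel weights W_ij = exp(-||x_i - x_j||^2 / 4t) on epsilon-neighbours, then L* = I - D^{-1} W."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (4 * t))
    W[sq_dists >= eps ** 2] = 0.0      # keep only pairs with ||x_i - x_j|| < eps
    np.fill_diagonal(W, 0.0)
    D_inv = np.diag(1.0 / np.maximum(W.sum(axis=1), 1e-12))  # guard against isolated points
    return np.eye(len(X)) - D_inv @ W

X = np.random.rand(50, 3)              # placeholder data
L_star = random_walk_laplacian(X)
```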
Spectral clustering with heat kernel weight matrix
$\sigma$ represents a typical distance between points. If all points of one cluster of your graph are separated from all points of another by a distance that is significantly higher than $\sigma$, then the spectral clustering will probably use this as a cut. If you already know the number of clusters (or cuts) that you want to make, $\sigma$ does not need to be tuned very finely. But it becomes really important if you have no idea how many cuts you want to make. A good approach is to make a parametric study over $\sigma$, and make the decision through the eigenvalues. Here's an example from one of my study cases, giving the eigenvalues (in ascending order) of the Laplacian matrix, for 4 different values of $\sigma$: [](https://i.stack.imgur.com/GdeYF.png) What you are looking for is a break in the increase of the eigenvalues: theoretically, a wide gap between two consecutive values ($n$ and $n+1$) should tell you that it is a good idea to make $n$ clusters. As you can see, small sigma values (upper left plot) lead to very low eigenvalues, sometimes even numerically negative. Very small values tend to make very small clusters containing outliers, and a very big cluster with all other points. High sigma values (lower right plot) will make your matrix look like the identity, with high eigenvalues; this usually won't bring anything interesting. What I usually do is try a wide window of sigma values, and shorten the range progressively, until I find a satisfying result. This is all graphical, and depends on the prior knowledge about your problem (how many clusters do you approximately expect? 2, 10, 100?). Eigenvalues represent the quality of the cut. Small values correspond to very distant clusters. For instance, in my upper right case, I could make 4 very well separated clusters (the first 4 eigenvalues are almost 0). The next cut (to reach 5 clusters) would be less effective than the 3 previous ones. 13 clusters could also have been a good try. But I actually selected 9 clusters, because it was closer to what I actually expected.
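A rough Python sketch of that parametric study (my own illustration; it uses the unnormalized Laplacian $L = D - W$ with a Gaussian affinity, and the data and the $\sigma$ grid are placeholders):

```
import numpy as np

def laplacian_eigenvalues(X, sigma):
    """Eigenvalues (ascending) of L = D - W with W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.eigvalsh(L)       # returned in ascending order

X = np.random.rand(100, 2)             # placeholder data
for sigma in [0.01, 0.1, 1.0, 10.0]:   # the parametric study over sigma
    vals = laplacian_eigenvalues(X, sigma)
    print(sigma, np.round(vals[:6], 4))  # look for a gap among the smallest eigenvalues
```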
3702
1
3713
null
4
1339
Is the k-nearest neighbour algorithm a discriminative or a generative classifier? My first thought was that it is generative, because it actually uses Bayes' theorem to compute the posterior. Searching further, it seems like it is a discriminative model, but I couldn't find the explanation. So, first of all, is KNN discriminative? And if it is, is that because it doesn't model the priors or the likelihood?
K nearest neighbour
CC BY-SA 4.0
null
2014-12-13T23:08:53.930
2022-02-28T19:28:12.320
2022-02-28T19:28:12.320
132929
6419
[ "classification" ]
See a similar answer [here](https://stats.stackexchange.com/questions/105979/is-knn-a-discriminative-learning-algorithm). To clarify, k nearest neighbor is a discriminative classifier. The difference between a generative and a discriminative classifier is that the former models the joint probability whereas the latter models the conditional probability (the posterior) starting from the prior. In the case of nearest neighbors, the conditional probability of a class given a data point is modeled. To do this, one starts with the prior probability on the classes.
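As a small illustration (mine, not from the linked answer), scikit-learn's k-NN classifier exposes exactly this conditional class probability estimate, computed as the fraction of the $k$ neighbours belonging to each class:

```
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# Estimated P(class | x): the share of the 5 nearest neighbours in each class
print(clf.predict_proba(X[:3]))
```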
K-nearest neighbors complexity
The complexity in this instance refers to the smoothness of the boundary between the different classes. One way of understanding this smoothness complexity is by asking how likely you are to be classified differently if you were to move slightly. If that likelihood is high then you have a complex decision boundary. For the $k$-NN algorithm the decision boundary is based on the chosen value for $k$, as that is how we will determine the class of a novel instance. As you decrease the value of $k$ you will end up making more granular decisions, and thus the boundary between different classes will become more complex. You should note that this decision boundary is also highly dependent on the distribution of your classes. Let's see how the decision boundaries change when changing the value of $k$ below. We can see that nice boundaries are achieved for $k=20$, whereas $k=1$ has blue and red pockets in the other region; this is a more complex decision boundary than one which is smooth.

---

First let's make some artificial data with 100 instances and 3 classes.

```
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=100, centers=3, n_features=2, cluster_std=5)
```

Let's plot this data to see what we are up against

[](https://i.stack.imgur.com/QQxQ6.png)

Now let's see how the boundary looks for different values of $k$. I'll post the code I used for this below for your reference.

# $k$ = 1

[](https://i.stack.imgur.com/I7hj2.png)

# $k$ = 5

[](https://i.stack.imgur.com/FqIhP.png)

# $k$ = 10

[](https://i.stack.imgur.com/Wj9Pr.png)

# $k$ = 20

[](https://i.stack.imgur.com/SXOQd.png)

---

# The code

The code used for these experiments is adapted from [here](http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html#sphx-glr-auto-examples-neighbors-plot-classification-py); change `k` to reproduce each of the plots above.

```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors

k = 1      # number of neighbours; set to 5, 10, 20 for the other plots
h = 0.02   # step size of the mesh

clf = neighbors.KNeighborsClassifier(k)
clf.fit(X, y)

cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])

# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
```
3711
1
3717
null
14
23253
Naive Bayes apparently handles missing data differently, depending on whether they exist in training or testing/classification instances. When classifying instances, the attribute with the missing value is simply not included in the probability calculation ([reference](http://www.inf.ed.ac.uk/teaching/courses/iaml/slides/naive-2x2.pdf)) In training, the instance [with the missing data] is not included in frequency count for attribute value-class combination. ([reference](http://www.csee.wvu.edu/%7Etimm/cs591o/old/BasicMethods.html)) Does that mean that particular training record simply isn't included in the training phase? Or does it mean something else?
How does the naive Bayes classifier handle missing data in training?
CC BY-SA 4.0
null
2014-12-16T13:07:55.063
2022-12-03T12:19:36.527
2020-08-17T15:55:05.267
98307
6451
[ "machine-learning", "data-mining", "classification", "naive-bayes-classifier" ]
In general, you have a choice when handling missing values when training a naive Bayes classifier. You can choose to either

- Omit records with any missing values,
- Omit only the missing attributes.

I'll use the example linked to above to demonstrate these two approaches. Suppose we add one more training record to that example.

```
Outlook   Temperature  Humidity  Windy  Play
--------  -----------  --------  -----  ----
rainy     cool         normal    TRUE   no
rainy     mild         high      TRUE   no
sunny     hot          high      FALSE  no
sunny     hot          high      TRUE   no
sunny     mild         high      FALSE  no
overcast  cool         normal    TRUE   yes
overcast  hot          high      FALSE  yes
overcast  hot          normal    FALSE  yes
overcast  mild         high      TRUE   yes
rainy     cool         normal    FALSE  yes
rainy     mild         high      FALSE  yes
rainy     mild         normal    FALSE  yes
sunny     cool         normal    FALSE  yes
sunny     mild         normal    TRUE   yes
NA        hot          normal    FALSE  yes
```

- If we decide to omit the last record due to the missing outlook value, we would have the exact same trained model as discussed in the link.
- We could also choose to use all of the information available from this record. We could choose to simply omit the attribute outlook from this record. This would yield the following updated table.

```
Outlook               Temperature        Humidity
===================   ================   ================
          Yes   No             Yes   No            Yes   No
Sunny      2     3    Hot       3     2   High      3     4
Overcast   4     0    Mild      4     2   Normal    7     1
Rainy      3     2    Cool      3     1
          ---------            ----------          ----------
Sunny     2/9   3/5   Hot      3/10  2/5  High     3/10  4/5
Overcast  4/9   0/5   Mild     4/10  2/5  Normal   7/10  1/5
Rainy     3/9   2/5   Cool     3/10  1/5

Windy                 Play
=================     ========
          Yes   No     Yes   No
False      7     2     10     5
True       3     3
          ----------   ----------
False     7/10  2/5    10/15  5/15
True      3/10  3/5
```

Notice there are 15 observations for each attribute except `Outlook`, which has only 14. This is because that value was unavailable for the last record. All further development would continue as discussed in the linked article. For example, in the R package `e1071`, the naiveBayes implementation has the option `na.action`, which can be set to na.omit or na.pass.
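If you prefer Python, here is a tiny pandas sketch of the same two options (a made-up four-row frame, not the full weather table): `crosstab` silently drops rows where the tabulated attribute is missing, which corresponds to option 2, while dropping whole records first corresponds to option 1.

```
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Outlook": ["rainy", "sunny", "overcast", np.nan],
    "Windy":   ["TRUE",  "FALSE", "FALSE",    "FALSE"],
    "Play":    ["no",    "no",    "yes",      "yes"],
})

# Option 1: omit records with any missing values (the last record disappears everywhere)
complete = df.dropna()
option1_windy = pd.crosstab(complete["Windy"], complete["Play"])

# Option 2: omit only the missing attribute; the last record still counts for Windy
option2_windy = pd.crosstab(df["Windy"], df["Play"])

print(option1_windy, option2_windy, sep="\n\n")
```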
How to handle missing data for machine learning
There are three main approaches to handling missing data. - Impute - use some method to fill in the missing values with reasonable guesses. You could interpolate between two time points, take the average value over all time points, or use a variety of other techniques leveraging co-occurrence of other variables to get a reasonable estimate. - Ignore - some methods can just ignore missing data, and not use it in the model at all - Utilize - for cases where data is not missing-at-random, missingness itself can be an informative feature. You could include missing values as another data point to model your output.
3721
1
3722
null
3
1838
With increasingly sophisticated methods that work on large scale datasets, financial applications are obvious. I am aware of machine learning being employed on financial services to detect fraud and flag fraudulent activities but I have a lesser understanding of how it helps to predict the price of the stock the next day and how many stocks of a particular company to buy. Do the hedge funds still employ portfolio optimization techniques that are right out of the mathematical finance literature or have they started to use machine learning to hedge their bets? More importantly, what are the features that are used by these hedge funds and what is a representative problem set up?
Machine Learning for hedging/ portfolio optimization?
CC BY-SA 3.0
null
2014-12-18T04:48:49.820
2014-12-18T08:25:09.517
null
null
847
[ "machine-learning", "feature-selection", "optimization" ]
That is a rather broad question, and there is a great deal of literature about quantitative analysis and stock market prediction using machine learning. The most classical example of predicting the stock market is employing neural networks: you can use whatever features you think might be relevant for your prediction, for example the unemployment rate, the oil price, the gold price, the interest rates, and the time series itself, i.e. the volatility, the change over the last 2, 3, 7, ... days, etc. A more classical approach is input-output analysis in econometrics, or autoregression analysis, but all of it can be modeled using neural networks or any other function approximator / regression in a very natural way. But, as said, there are tons of other possibilities to model the market, to name a few: Ant Colony Optimization (ACO), classical regression analysis, genetic algorithms, decision trees, reinforcement learning, etc. - you name it, almost EVERYTHING has probably been applied to the stock market prediction problem. There are different fund manager types on the markets. There are still the quants, who do quantitative analysis using classical financial maths and maths borrowed from physics to describe the market movements. There are still the most conservative ones, who do a long-term, fundamental analysis of the corporation, that is, looking at how the corporation earns money and where it spends money. There are the tactical analysts, who just look for immediate signals to buy / sell a stock in the short term. And there are the quantitative guys who employ machine learning amongst other methods.
What's a good machine learning algorithm for low frequency trading?
Random forests, GBM or even the newer and fancier xgboost are not the best candidates for binary classification (predicting ups and down) of stocks predictions or forex trading or at least not as the main algorithm. The reason is that, for this particular problem, they require a huge amount of trees (and tree depth in case of GBM or xgboost) to obtain reasonable accuracy (Breiman suggested using at least 5000 trees and to "not be stingy" and in fact his main ML paper on RF he used 50,000 trees per run). However, some quants use random forests as feature selectors while others use it to generate new features. It all depends on the characteristics of the data. I would suggest you read this [question and answers on quant.stackexchange](https://quant.stackexchange.com/questions/9313/machine-learning-vs-regression-and-or-why-still-use-the-latter/9317#9317) where people discuss what methods are the best and when to use them, among them ISOMAP, Laplacian eigenmaps, ANNs, swarm optimization. Check out the [machine-learning tag on the same site](https://quant.stackexchange.com/tags/machine-learning/hot), there you might find information related to your particular dataset.
3728
1
3729
null
1
255
I was wondering if anyone knew which piece of software is being used in this video? It is an image recognition system that makes the training process very simple. [http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn#t-775098](http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn#t-775098) The example is with car images, though the video should start at the right spot.
What software is being used in this image recognition system?
CC BY-SA 3.0
null
2014-12-19T11:42:04.547
2014-12-19T16:02:06.330
null
null
5175
[ "classification" ]
I'm pretty sure that the software you're referring to is some kind of internal research project software, developed by Enlitic ([http://www.enlitic.com](http://www.enlitic.com)), where Jeremy Howard works as a founder and CEO. By "internal research project software" I mean either a proof-of-concept software or a prototype software.
How to Build Mobile Application for Image Recognition?
I would advise you to use [kivy](https://kivy.org/#home) for Python. It has an active community and there is also a book on this topic [Practical Computer Vision Applications Using Deep Learning with CNNs With Detailed Examples in Python Using TensorFlow and Kivy](https://rads.stackoverflow.com/amzn/click/com/1484241665). Kivy is very easy for newbies and you can develop multiplatform applications (Windows, iOs, Android).
3742
1
3753
null
0
547
I've been working in SAS for a few years but as my time as a student with a no-cost-to-me license comes to an end, I want to learn R. Is it possible to transpose a data set so that all the observations for a single ID are on the same line? (I have 2-8 observations per unique individual but they are currently arranged vertically rather than horizontally.) In SAS, I had been using PROC SQL and PROC TRANSPOSE depending on my analysis aims. Example:

```
ID    date        timeframe  fruit_amt  veg_amt
4352  05/23/2013  before     0.25       0.75
5002  05/24/2014  after      0.06       0.25
4352  04/16/2014  after      0          0
4352  05/23/2013  after      0.06       0.25
5002  05/24/2014  before     0.75       0.25
```

Desired:

```
ID    B_fr05/23/2013  B_veg05/23/2013  A_fr05/23/2013  A_veg05/23/2013  B_fr05/24/2014  B_veg05/24/2014  (etc)
4352  0.25            0.75             0.06            0.25             .               .
5002  .               .                .               .                0.75            0.25
```
Data transposition code in R
CC BY-SA 3.0
null
2014-12-22T14:06:45.610
2014-12-23T20:36:06.047
2014-12-22T16:23:45.540
6491
6491
[ "data-mining", "r", "dataset", "beginner" ]
You can use the `reshape2` package for this task. First, transform the data to the long format with `melt`: ``` library(reshape2) dat_m <- melt(dat, measure.vars = c("fruit_amt", "veg_amt")) ``` where `dat` is the name of your data frame. Second, cast to the wide format: ``` dcast(dat_m, ID ~ timeframe + variable + date) ``` The result: ``` ID after_fruit_amt_04/16/2014 after_fruit_amt_05/23/2013 after_fruit_amt_05/24/2014 after_veg_amt_04/16/2014 1 4352 0 0.06 NA 0 2 5002 NA NA 0.06 NA after_veg_amt_05/23/2013 after_veg_amt_05/24/2014 before_fruit_amt_05/23/2013 before_fruit_amt_05/24/2014 1 0.25 NA 0.25 NA 2 NA 0.25 NA 0.75 before_veg_amt_05/23/2013 before_veg_amt_05/24/2014 1 0.75 NA 2 NA 0.25 > ```
How can I vectorize this code in R? Maybe with the apply() function?
First of all it should be noted that the code you posted does not actually replicate the output of the `dist` function, because the line: ``` distancematrix[i, j] <- sum(abs(myMatrix[i,] - myMatrix[j,])) ``` does not calculate the Euclidean distance; it should be: ``` distancematrix[i, j] <- sqrt(sum((myMatrix[i,] - myMatrix[j,]) ^ 2)) ``` Here are two solutions that rely on `apply`. They are simplified, and in particular do not take advantage of the symmetry of the distance matrix (which, if considered, would lead to a 2-fold speedup). First, generate some test data: ``` # Number of data points N <- 2000 # Dimensionality d <- 10 # Generate data myMatrix = matrix(rnorm(N * d), nrow = N) ``` For convenience, define: ``` # Wrapper for the distance function d_fun <- function(x_1, x_2) sqrt(sum((x_1 - x_2) ^ 2)) ``` The first approach is a combination of `apply` and `sapply`: ``` system.time( D_1 <- apply(myMatrix, 1, function(x_i) sapply(1:nrow(myMatrix), function(j) d_fun(x_i, myMatrix[j, ])) ) ) user system elapsed 14.041 0.100 14.001 ``` while the second uses only `apply` (but going over the indices, which are paired using `expand.grid`): ``` system.time( D_2 <- matrix(apply(expand.grid(i = 1:nrow(myMatrix), j = 1:nrow(myMatrix)), 1, function(I) d_fun(myMatrix[I[["i"]], ], myMatrix[I[["j"]], ]) ) ) ) user system elapsed 39.313 0.498 39.561 ``` However, as expected both are much slower than `dist`: ``` system.time( distancematrix <- as.matrix( dist(myMatrix, method = "euclidean", diag = T, upper = T) ) ) user system elapsed 0.337 0.054 0.388 ```
3743
1
3746
null
2
654
I recently read a lot about the n-armed bandit problem and its solution with various algorithms, for example for web-scale content optimization. Some discussions referred to 'contextual bandits', but I couldn't find a clear definition of what the word 'contextual' means here. Does anyone know what is meant by that, in contrast to 'usual' bandits?
What does 'contextual' mean in 'contextual bandits'?
CC BY-SA 3.0
null
2014-12-22T15:52:12.133
2014-12-22T18:44:18.983
null
null
3132
[ "machine-learning" ]
A contextual bandit algorithm not only adapts to the user-click feedback as the algorithm progresses, it also utilizes pre-existing information about the user's (and similar users) browsing patterns to select which content to display. So, rather than starting with no prediction (cold start) with what the user will click (traditional bandit and also traditional A/B testing), it takes other data into account (warm start) to help predict which content to display during the bandit test. See: [http://www.research.rutgers.edu/~lihong/pub/Li10Contextual.pdf](http://www.research.rutgers.edu/~lihong/pub/Li10Contextual.pdf)
Context classification problem
Spam detection can be done with many different methods, the same goes for your task. They do share the similar idea of processing a given text and classifying it to be one of 2 classes (science/not-science or spam/not-spam). What you first need to do is to turn the articles into a vector of constant size (for example with Word2vec which takes as its input a text and produces a vector space). Ones you have a vector representing each article, you can start training your classifier and feature extractor (these days they are trained together). As for determining which machine learning approach to take, you can try first using an SVM, it will probably be good enough. You can follow one of the following tutorials (there are many more), just replace their dataset with yours : [Email Spam Filtering: An Implementation with Python and Scikit-learn](https://www.kdnuggets.com/2017/03/email-spam-filtering-an-implementation-with-python-and-scikit-learn.html) [Spam Classifier in Python from scratch](https://towardsdatascience.com/spam-classifier-in-python-from-scratch-27a98ddd8e73)
3771
1
4872
null
1
179
I am trying to predict clients' behaviour from market rates. The value of the products depends on the current rate, but this is not enough. The behaviour of the clients also depends on their awareness, which depends on the evolution of rates. I've added this to the model by using the past 6 months' rates as features in a polynomial regression. In fact, media coverage of rates mostly depends on rate variations, and I wanted to add that to my model. The idea would be to add a derivative/variation of the rate as a feature. But I anticipated something wrong: for example, with only two months, my variation will be of the form $x_n - x_{n-1}$, which is a simple linear combination of the current and past rates. So for a 1d polynomial regression I will have: $$ x_{n+1} = a * x_{n} + b * x_{n-1} + c * (x_{n} - x_{n-1})$$ instead of: $$ x_{n+1} = a_0 * x_{n} + b_0 * x_{n-1}$$ which is strictly equivalent, with $ a + c = a_0 $ and $b - c = b_0$. A higher polynomial degree gives a more or less equivalent result. I am thinking about a way to include derivative information, but it seems not possible. So I am wondering whether all the information is already included in my curve. Is this a general principle: all the information is somehow directly contained in the data, and modifications of the features will only result in a higher-order objective function?
Time series: variations as a feature
CC BY-SA 3.0
null
2014-12-28T14:42:52.403
2015-01-14T01:08:04.997
2015-01-09T15:28:36.703
303
303
[ "time-series", "feature-selection", "predictive-modeling", "optimization" ]
Using derivatives as features is almost the same as using past values, as both reconstruct the phase or state space of the dynamical system behind the time series. But they differ in some points, like noise amplification and how they carry information. (See Ref: State space reconstruction in the presence of noise; Martin Casdagli; Physica D - 1991 - section 2.) Notice that all the information is embedded in the time series, but using derivatives is going to reinterpret this information, which may be useful or useless. In your case, if you use all parameters and terms, I believe there is no use in it, but when using some algorithms like orthogonal forward regression (OFR) it may be beneficial. (See Ref: Orthogonal least squares methods and their application to non-linear system identification; S. CHEN, S. A. BILLINGS; INT. J. CONTROL, 1989.)
Transforming time series into static features?
So the first that comes to mind for me is to ask, "What is the end goal"? Are you trying to classify them by how active they are and at what times? If you are then I would refer you to this paper [here](https://arxiv.org/pdf/1911.05702.pdf). The relevant section is 2.3 where they explain that there are two main approaches to dealing with this issue in the literature. - "The first approach is to feed time-series features to RNN and then concatenate with static features." - "The second approach for combining the two types of features is to include the time invariant features as part of the temporal features and feed them together to RNN units." In short, you can either train it all in one model or first use the time series model and feed to another model for the time invariant features. Where the paper mentions these solutions, there are further citations to other sources that discuss this further.
3811
1
3818
null
1
96
I'm building a neural network to analyze a business' sales. I'm normalizing all input values to the range `{0,1}`. I'm struggling with the day-of-the-week column. Business days are identified by a number in the range `{1-5}` (1=Monday). Normalizing these values to the range `{0,1}` is straightforward, but results in a major bias in the final output. The reason is that the full range of normalized values for the business day column is explored with every week's worth of data, whereas the other, price-related columns explore their full range of normalized values infrequently. The business day column ends up being the largest contributor to the final output. How can I normalize it to make its contribution more in tune with the rest of the inputs?
normalize identification values properly
CC BY-SA 3.0
null
2015-01-04T21:52:38.977
2015-01-05T23:09:25.800
null
null
6669
[ "machine-learning", "neural-network" ]
It is possible that the other variables you're feeding into the NN are simply bad at predicting sales. Sales prediction is a notoriously hard problem. Specifically, on mapping a multi-state categorical variable to the NN's {0,1} input range: another idea is to change that one, 5-state variable into five boolean variables. Rather than {0,0.25,0.5,0.75,1.0} on your one variable, make each of the five boolean variables represent a single day and make [1,0,0,0,0] equal Monday, [0,1,0,0,0] equal Tuesday, etc. I've personally had more success both with training good networks and introspecting the network itself when spreading out states of classes like that. Other hacks you can try:

* Take out the 'day' column altogether and see if any of the other variables get used.
* Plot the distribution of spend as a function of day. Even if nothing else comes of this current model, it sounds like you've found one interesting insight already.
* Consider also trying different models.
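As a quick, hedged sketch of the five-boolean-variables idea (the column name and values are invented), pandas can do the spreading in one call:

```
import pandas as pd

df = pd.DataFrame({"business_day": [1, 2, 3, 4, 5, 1, 2]})  # 1 = Monday ... 5 = Friday

# One indicator column per weekday instead of a single scaled 1-5 input
day_dummies = pd.get_dummies(df["business_day"], prefix="day")
print(day_dummies)
```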
How to normalize complex-valued data?
First off, it's always helpful to [think geometrically](https://brilliant.org/wiki/complex-numbers-in-geometry/) about what complex numbers are, and what arithmetic operations achieve.

---

In your function, you are using the mean and standard deviation of the absolute value of these complex numbers. That means that if you perform your operation on the absolute value of your data:

```
(tmp - tmp.mean()) / tmp.std()
```

you will end up with normalized data of mean 0 and standard deviation 1. Going back to thinking geometrically, when you perform your original operation:

```
(x_source - tmp.mean()) / tmp.std()
```

you are essentially moving your data's mean `tmp.mean()` units to the left, then scaling horizontally by `1/tmp.std()`. Notice none of this is a vertical shift or scaling, so something smells funny.

---

What I would do: I would normalize each coordinate independently. Finding the mean is fine -- the mean of complex data points is the same as the means of the components: $$ \bar z = (\bar x , \bar y) $$ So you can subtract the mean of the $x$ values from each input value's $x$-coordinate. Ditto for $y$. Then you divide the real component by the standard deviation of the real component, and ditto for the imaginary component. It could also be appropriate to divide by the standard deviation of the (new) norms. This would ensure good properties involving your data lying within a circle of a certain radius.

---

Code (using NumPy):

```
import numpy as np

real_data = np.real(x_source)
imag_data = np.imag(x_source)

real_data = (real_data - real_data.mean()) / real_data.std()
imag_data = (imag_data - imag_data.mean()) / imag_data.std()

x_source_norm = real_data + 1j * imag_data
```
3826
1
3827
null
2
211
I'm trying to do a correlation analysis between inputs and outputs by inspecting the data, in order to understand which input variables to include. What could be a threshold on the correlation value for considering a variable eligible as an input to my neural network?
Correlation threshold for Neural Network features selection
CC BY-SA 3.0
null
2015-01-06T20:43:35.480
2015-01-07T01:26:17.560
null
null
6559
[ "machine-learning" ]
Given non-linearity of neural networks, I believe correlation analysis isn't a good way to estimate importance of variables. For example, imagine that you have 2 input variables - `x1` and `x2` - and following conditions hold: - cor(x2, y) = 1 if x1 = 1 - cor(x2, y) = 0 otherwise - x1 = 1 in 10% of cases That is, `x2` is a very good predictor for `y`, but only given that `x1 = 1`, which is the case only in 10% of data. Taking into account correlations of `x1` and `x2` separately won't expose this dependency, and you will most likely drop out both variables. There are other ways to perform feature selection, however. Simplest one is to train your model with all possible sets of variables and check the best subset. This is pretty inefficient with many variables, though, so many ways to improve it exist. For a good introduction in best subset selection see chapter 6.1 of [Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/).
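A tiny simulation of that scenario (values made up) shows how the marginal correlations hide the dependency while the conditional one exposes it:

```
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = (rng.random(n) < 0.10).astype(float)      # x1 = 1 in about 10% of cases
x2 = rng.normal(size=n)
noise = rng.normal(size=n)
y = np.where(x1 == 1, x2, noise)               # y follows x2 only when x1 = 1

print(np.corrcoef(x1, y)[0, 1])                        # close to 0
print(np.corrcoef(x2, y)[0, 1])                        # small (~0.1) overall
print(np.corrcoef(x2[x1 == 1], y[x1 == 1])[0, 1])      # ~1 within the x1 = 1 subset
```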
Correlation feature selection followed by regression
I think this is a case for linear regression with a lasso/ridge penalty. The lasso/ridge does „shrink“ features/variables, so that it is easy to see which features are important. Since you have 100 variables, you could opt for lasso, since lasso can also „automatically“ exclude features. Here is a lasso example in Python: [https://datascience.stackexchange.com/a/53639/71442](https://datascience.stackexchange.com/a/53639/71442).
4834
1
4835
null
1
35
I am working on a project where we would like to take the ratio of two measurements A/B and subject these ratios to a ranking algorithm. The ratio is normalized prior to ranking (though the ranking/normalization are not that import to my question). In most cases measurement A (the starting measurement) is a count with values greater than 1000. We expect an increase for measurement B for positive effects and a decrease in measurement B for negative effects. Here is the issue, some of our starting counts are nearly zero which we believe is an artifact of experimental preparation. This of course leads to some really high ratios/scaling issues for these data points. What is the best way to adjust these values in order to better understand the real role in our experiment? One suggestion we received was to add 1000 to all counts (from measurement A and B) to scale the values and remove the bias of such a low starting count, is this a viable option? Thank you in advance for your assistance, let me know if I am not being clear enough.
Correcting Datasets with artificially low starting values
CC BY-SA 3.0
null
2015-01-08T13:44:20.160
2015-01-08T23:25:55.630
2015-01-08T23:25:55.630
1367
7713
[ "statistics", "data-cleaning" ]
Yes, the general idea is to add a baseline small count to every category. The technical term for this is [Laplace smoothing](http://en.wikipedia.org/wiki/Additive_smoothing). Really it's not so much of a hack, as encoding the idea that you think there is some (uniform?) prior distribution of the events occurring.
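Sketched in Python for the ratio in the question (the counts and the pseudo-count of 1000 are placeholders; the size of the baseline count is a judgment call, and smaller values such as 1 are the textbook default):

```
import numpy as np

counts_a = np.array([3, 0, 950, 47])     # measurement A (made-up values, some near zero)
counts_b = np.array([12, 5, 1100, 30])   # measurement B (made-up values)

alpha = 1000                              # baseline pseudo-count added to every category
smoothed_ratio = (counts_b + alpha) / (counts_a + alpha)
print(smoothed_ratio)
```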
Low scale ML/statistical techniques for data poor settings
Generally, I'd pick a very simple, transparent/explainable model and use the results in a semi-automated way. That is, do not just derive a prediction but rather insights. You could, for example, use a (or multiple) decision tree(s) which you pre- or post-prune. The result could be a tree with, let's say, just 1-3 features to find simple rules like "if a customer is married and at least X years old, they have a high chance of making a purchase". With logistic regression you may use coefficients to identify features which influence the dependent variable the most. These (qualitative or semi-quantitative) rules should then be validated with domain experts. Moreover, you need to be transparent about the accuracy and precision of your estimates. In the above example, leaf purity would provide some intuition. If you report any quantitative measures (which I'd be careful with), you may want to consider confidence intervals (see [here](https://machinelearningmastery.com/confidence-intervals-for-machine-learning/) or chapter 5 in Tom Mitchell's "Machine Learning", for example). (With only 10 samples, typical assumptions about the normal distribution will not hold here though.) Regarding the time series, I would start even simpler. Depending on the number of customers, I'd start by visualizing some or all historical data in a line plot (sales per customer over time) and check the min, max and mean per customer. This gives some intuition regarding potential trends. For example, whether all observations remain constant over time for a given customer, whether there is an upward/downward trend, or whether the data has high variance with no clear trend. Also, there may be clusters of customers which show similar patterns. Obviously, this is neither machine learning nor a rigorous statistical analysis, but rather a pragmatic approach supported by some basic data analytics. What you need to be very careful with is the time horizon of any quantitative prediction: based on 10 observations at $t \in \{1,21,...,201\}$ you may derive some conclusions for, let's say, $t \le 301$ (to make up a totally out-of-the-blue ballpark figure), but $t=621$ is very far in the future. Also, you need to keep seasonality in mind. For example, if your observations are all from October to April of a given year and you have a winter/summer seasonal pattern, then you cannot infer a lot for the months May to September. To understand the limitations and forecast potential of your time series better, I'd speak to subject matter experts, e.g. in sales & marketing. It could also be helpful to understand their forecasting approach and cross-check any insights you derive with their predictions. But, [as Erwan pointed out](https://datascience.stackexchange.com/a/94194/84891), be very careful when deriving conclusions. Applying some "ML magic" will not find a useful pattern if there is insufficient data to find any signal. And of course additional data collection would be reasonable if that is an option.
4873
1
4874
null
3
15055
How can I get information about an entity from DBpedia using Python? E.g., I need to get all DBpedia information about the [USA](http://dbpedia.org/page/United_States). So I need to write the query from Python (SPARQL) and get all attributes of the USA as the result. I tried:

```
PREFIX db: <http://dbpedia.org/resource/>
SELECT ?p ?o
WHERE { db:United_States ?p ?o }
```

But this does not display all of the DBpedia information. How can I do this, and which plugins/APIs are available for Python to connect to DBpedia? Also, what would the SPARQL query be for producing the desired result?
Querying DBpedia from Python
CC BY-SA 4.0
null
2015-01-14T09:08:00.483
2019-03-28T16:07:29.957
2019-03-28T16:07:29.957
24968
5091
[ "python" ]
You do not need a wrapper for DBpedia; you need a library that can issue a SPARQL query to its SPARQL endpoint. Here is [an option for the library](https://pypi.python.org/pypi/SPARQLWrapper/1.6.4) and here is the URL to point it to: [http://dbpedia.org/sparql](http://dbpedia.org/sparql) You need to issue a DESCRIBE query on the United_States resource page:

```
PREFIX dbres: <http://dbpedia.org/resource/>
DESCRIBE dbres:United_States
```

Please note this is a huge download of resulting triples. Here is how you would issue the query:

```
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
    PREFIX dbres: <http://dbpedia.org/resource/>
    DESCRIBE dbres:United_States
"""

def get_country_description():
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery(query)  # the DESCRIBE query defined above as a literal string
    return sparql.query().convert()
```
Correlating activity between entities using Python
You may compute the $id$ co-occurrence frequencies in a given time window. Suppose (without loss of generality) your criterion for co-occurrence is that both $id$s must occur in the same second `t`; then the maximum likelihood estimate of $P(id_{i}|id_{j})$ is: $P(id_{i}|id_{j}) = \frac{count(id_{i}, id_{j})}{count(id_{j})}$ and the maximum likelihood estimate for the joint probability is: $P(id_{i}, id_{j}) = \frac{count(id_{i}, id_{j})}{\sum_{id_k \in IDs} count(id_{k})}$ where $id_j, id_i \in IDs$ and $IDs$ is the set containing all $id$s (`AAAA,BBBB,CCCC,...`). You can then calculate the [pointwise mutual information](https://en.wikipedia.org/wiki/Pointwise_mutual_information) between each $id$ pair, that is, how often two $id$s co-occur compared with what we would expect if they were independent: $I(id_i, id_j) = \log_{2}{\frac{P(id_i, id_j)}{P(id_i)P(id_j)}}$ This gives you an estimate of how strong the association between $id_i$ and $id_j$ is. The same strategy may be used to find similarities: you may think of each $id_i$ as a $|IDs|$-dimensional vector with the co-occurrence frequencies as values. You can then apply [cosine similarity or Pearson correlation](https://brenocon.com/blog/2012/03/cosine-similarity-pearson-correlation-and-ols-coefficients/) to find the most similar vectors ($id$s).

---

EDIT

Complementing my answer, the following Python code demonstrates the ideas above for the sample dataset given in the question. First we create our dataframe from the data in the question

```
import pandas as pd
import numpy as np
import collections

d = {'ID': ['AAAA', 'AAAA', 'AAAA', 'AAAA', 'AAAA', 'BBBB', 'BBBB', 'BBBB', 'BBBB',
            'CCCC', 'CCCC', 'CCCC', 'CCCC', 'DDDD', 'DDDD', 'DDDD', 'DDDD'],
     'Time': [1, 6, 5, 2, 4, 2, 4, 6, 3, 3, 4, 1, 6, 7, 4, 5, 3]}
df = pd.DataFrame(d)
```

Compute the co-occurrences

```
dfm = df.merge(df, on='Time')
dfm = dfm[dfm.ID_x != dfm.ID_y]  # ID_x and ID_y are created by the merge
df_M = pd.get_dummies(dfm.ID_x).groupby(dfm.ID_y).apply(sum)
print(df_M)
```

The dataframe `df_M` represents the co-occurrence matrix

```
      AAAA  BBBB  CCCC  DDDD
ID_y
AAAA     0     3     3     2
BBBB     3     0     3     2
CCCC     3     3     0     2
DDDD     2     2     2     0
```

- Pairwise Mutual Information

Let `N` be the total amount of co-occurrences; then I can compute every joint probability by simply dividing the co-occurrences in the `df_M` matrix by `N`.

```
N = df_M.sum().sum()
df_joint_P = df_M/N  # Computes every joint probability P(id_i, id_j)
```

The probability of each $id$ is the sum of all its joint probabilities

```
df_ID_P = df_joint_P.sum(axis=0)  # Marginalizes to produce P(id_i)
```

Now that we have all we need to compute a PMI dataframe

```
idx = [(r, c) for r in list(df_M) for c in list(df_M)]
pmi_dict = collections.defaultdict(dict)
for r, c in idx:
    pmi_dict[r][c] = np.log2(2*df_joint_P[r][c]/(df_ID_P[r] * df_ID_P[c])) if df_joint_P[r][c] > 0 else 0
pmi_df = pd.DataFrame(pmi_dict)
print(pmi_df)
```

The pairwise mutual information between every pair of different $id$s is non-zero and looks like below

```
          AAAA      BBBB      CCCC      DDDD
AAAA  0.000000  1.491853  1.491853  1.321928
BBBB  1.491853  0.000000  1.491853  1.321928
CCCC  1.491853  1.491853  0.000000  1.321928
DDDD  1.321928  1.321928  1.321928  0.000000
```

We can see that every $id$ seems to be almost equally associated with each other.
- Cosine Similarity Now we can compute the cosine similarity, remembering that it is the $l_2$-normalized dot-product ``` df_NM = df_M.div(df_M.pow(2).sum(axis=1).pow(0.5), axis=0) df_cos = df_NM.dot(df_NM.T) print(df_cos) ``` The cosines are ``` AAAA BBBB CCCC DDDD AAAA 1.000000 0.590909 0.590909 0.738549 BBBB 0.590909 1.000000 0.590909 0.738549 CCCC 0.590909 0.590909 1.000000 0.738549 DDDD 0.738549 0.738549 0.738549 1.000000 ``` Obviously every $id$ is (trivially) most similar to itself but from the data above we see that every $id$ is similar to all the others (confirming the result we found using PMI) but we may notice that $id$ `DDDD` is more similar to all the other $id$s (at least for this tiny example).
4884
1
4887
null
0
205
I was wondering whether we could list winning machine learning methods to apply in many fields of interest: NLP, image, vision, medical, deep packet inspection, etc. I mean, if someone is getting started on a new ML project, which ML methods should not be overlooked?
What are the current killing machine learning methods?
CC BY-SA 3.0
null
2015-01-15T18:32:36.883
2015-01-16T01:00:45.890
null
null
6560
[ "machine-learning" ]
The question is very general. However, there are some studies being conducted to test which algorithms perform relatively well on a broad range of problems (I'll add links to papers later), concerning regression and classification. Lately, Random Decision Forests, Support Vector Machines and certain variations of Neural Networks are said to achieve the best results for a very broad variety of problems. This does not mean that these are "the best algorithms" for any problem; such a thing does not exist, and it is actually not very realistic to pursue. It must also be observed that both RDF and SVM are rather easy methods for initially grasping a problem and obtaining good results, so they are becoming really popular. NNs have been used intensively for a couple of decades (after they were revived), so they appear often in implementations. If you are interested in learning further, you should look for a specific area and deal with a problem that can be solved nicely by machine learning to understand the main idea (and why it is impossible to find "the" method). You will find in common the task of trying to predict the expected behavior of something given some known or observable characteristics (to learn the function that models the problem given input data), the issues related to dealing with data in high-dimensional spaces, the need for good-quality data, the notable improvements that good data pre-processing can give, and many others.
which machine learning technique can be used?
I think these are the methods that you can try out (Please feel free to add more to this list): - Highly precise with a little low recall is to use a dictionary with almost all possibilities (manual effort, but must be worth it.). - Using Word2Vec. Mikolov has already trained text data and created word vectors. Using this vector space, you can figure out which words are similar. You can try out and find a threshold above which you can say which words are similar (for example, yoga and exercise would have decent similarity.) - Train custom W2V, if you have enough data(This is an unsupervised model, so you don't need to worry about tagging the data but finding huge amounts of data relevant to the working domain.) - You can use an RNN to find the most similar words in a corpus and use it for queries. This gives a bit more flexibility than W2V.
4901
1
4912
null
1
216
I have a data set of video watching records in a 3G network. In this data set, 2 different kinds of features are included:

- user-side information, e.g., age, gender, data plan, etc.;
- video watching records of these users, each of which is associated with a download ratio and some detailed network-condition metrics, say, download speed, RTT, and the like.

Under the scenario of internet streaming, a video is divided into several chunks which are downloaded to the end device one by one, so we have download ratio = downloaded bytes / file size in bytes. Now, given this data set, I want to predict the download ratio of each video. Since it is a regression problem, I use a gradient boosting regression tree as the model and run 10-fold cross validation. However, I have tried different model parameter configurations and even different models (linear regression, decision regression tree), and the best root-mean-square error I can get is 0.3790, which is quite high, because if I don't use any complex models and just use the mean value of the known labels as the prediction, then I still get an RMSE of 0.3890. There is no obvious difference. For this problem, I have some questions:

- Does this high error rate imply that the label in the data set is unpredictable?
- Apart from the feature problem, are there any other possibilities? If yes, how can I validate them?
Does high error rate in regression imply the data set is unpredictable?
CC BY-SA 3.0
null
2015-01-19T09:37:56.340
2015-01-27T11:58:33.430
2015-01-27T11:58:33.430
7867
7867
[ "feature-selection", "regression" ]
It's a little hasty to make too many conclusions about your data based on what you presented here. At the end of the day, all the information you have right now is that "GBT did not work well for this prediction problem and this metric", summed up by a single RMSE comparison. This isn't very much information - it could be that this is a bad dataset for GBT and some other model would work, it could be that the label can't be predicted from these features with any model, or there could be some error in model setup/validation. I'd recommend checking the following hypotheses: 1) Maybe, with your dataset size and the features you have, GBT isn't a very high-performance model. Try something completely different - maybe just a simple linear regression! Or a random forest. Or GBDT with very different parameter settings. Or something else. This will help you diagnose whether it's an issue with choice of models or with something else; if a few very different approaches give you roughly similar results, you'll know that it's not the model choice that is causing these results, and if one of those models behaves differently, then that gives you additional information to help diagnose the issue. 2) Maybe there's some issue with model setup and validation? I would recommend doing some exploration to get some intuition as to whether the RMSE you're getting is reasonable or whether you should expect better. Your post contained very little detail about what the data actually represents, what you know about the features and labels, etc. Perhaps you know those things but didn't include them here, but if not, you should go back and try to get additional understanding of the data before continuing. Look at some random data points, plot the columns against the target, look at the histograms of your features and labels, that sort of thing. There's no substitute for looking at the data. 3) Maybe there just aren't enough data points to justify complex models. When you have low numbers of data points (< 100), a simpler parametric model built with domain expertise and knowledge of what the features are may very well outperform a nonparametric model.
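For point 1, a quick way to run that comparison is to benchmark a few very different models against a mean-prediction baseline with cross-validated RMSE; here is a rough scikit-learn sketch (synthetic data standing in for your own features and download ratios):

```
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

models = [("mean baseline", DummyRegressor(strategy="mean")),
          ("linear regression", LinearRegression()),
          ("gradient boosting", GradientBoostingRegressor(random_state=0))]

for name, model in models:
    scores = cross_val_score(model, X, y, cv=10, scoring="neg_root_mean_squared_error")
    print(name, round(-scores.mean(), 4))
```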
What could be a dataset in which the presence of an outlier dramatically affects the performance of Ordinary Least Squares (OLS) regression?
If you are looking for a real-world data set here is one on [Harvard's Dataverse that examines state social politics research](https://dataverse.harvard.edu/dataset.xhtml?persistentId=hdl:1902.1/16431) for outliers for the same purpose you are looking for. If you are looking for one for more illustrative purposes one data set worth knowing is [Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet) for demonstrating how misleading some descriptive statistics can be. For your own investigations, many data sets with and without outliers can be found on [Google's beta dataset search](https://toolbox.google.com/datasetsearch) and are worth exploring if you are curious!
4903
1
4904
null
16
18086
Can anybody tell me what the purpose of feature generation is? And why feature space enrichment is needed before classifying an image? Is it a necessary step? Is there any method to enrich feature space?
What is the difference between feature generation and feature extraction?
CC BY-SA 4.0
null
2015-01-19T14:26:57.117
2021-03-11T20:19:17.107
2021-03-11T20:19:17.107
29169
7873
[ "machine-learning", "classification" ]
Feature Generation -- This is the process of taking raw, unstructured data and defining features (i.e. variables) for potential use in your statistical analysis. For instance, in the case of text mining you may begin with a raw log of thousands of text messages (e.g. SMS, email, social network messages, etc) and generate features by removing low-value words (i.e. stopwords), using certain size blocks of words (i.e. n-grams) or applying other rules. Feature Extraction -- After generating features, it is often necessary to test transformations of the original features and select a subset of this pool of potential original and derived features for use in your model (i.e. feature extraction and selection). Testing derived values is a common step because the data may contain important information which has a non-linear pattern or relationship with your outcome, thus the importance of the data element may only be apparent in its transformed state (e.g. higher order derivatives). Using too many features can result in multicollinearity or otherwise confound statistical models, whereas extracting the minimum number of features to suit the purpose of your analysis follows the principle of parsimony. Enhancing your feature space in this way is often a necessary step in classification of images or other data objects because the raw feature space is typically filled with an overwhelming amount of unstructured and irrelevant data that comprises what's often referred to as "noise" in the paradigm of a "signal" and "noise" (which is to say that some data has predictive value and other data does not). By enhancing the feature space you can better identify the important data which has predictive or other value in your analysis (i.e. the "signal") while removing confounding information (i.e. "noise").
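To illustrate the two steps in the text-mining setting mentioned above, here is a small, hedged scikit-learn sketch (the corpus and labels are invented): n-gram counting is the feature generation step, and keeping only the most informative columns is a simple form of feature extraction/selection.

```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["free offer win money", "meeting agenda for monday",
        "win a free prize now", "project status meeting"]
labels = [1, 0, 1, 0]   # 1 = spam-like, 0 = not (made-up)

# Feature generation: raw text -> unigram/bigram count features, dropping stopwords
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
X = vectorizer.fit_transform(docs)

# Feature extraction/selection: keep the k most informative columns for the labels
selector = SelectKBest(chi2, k=5)
X_reduced = selector.fit_transform(X, labels)
print(X.shape, "->", X_reduced.shape)
```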
Is there any difference between feature extraction and feature learning?
Yes I think so. Just by looking at [Feature Learning](https://en.wikipedia.org/wiki/Feature_learning) and [Feature extraction](https://en.wikipedia.org/wiki/Feature_extraction) you can see it's a different problem. Feature extraction is just transforming your raw data into a sequence of feature vectors (e.g. a dataframe) that you can work on. In feature learning, you don't know what feature you can extract from your data. In fact, you will probably apply machine learning techniques just to discover what are good features to extract from your dataset. Then you can extract them them apply machine learning to the extracted features. Deep learning techniques are one example of this. In the word2vec toolkit, for instance, you extract vectors from documents which can't be easily interpreted by a human, you can't look at it and tell what features have been extracted at all. It's just a mass of vectors which, for some reason, give good empirical results.
4921
1
4922
null
2
157
I am COMPLETELY new to the field of Data Science, mainly because every employer I have worked for, simply COULDN'T sell any customers anything that would use techniques learned in this field. Of particular interest to me is machine learning/Predictive Analysis. I have attempted many "test projects" myself, but I seem to NEED some sort of outside "catalyst" to tell me a specific goal, and a specific set of guidelines, when I am trying to learn something. Otherwise, I tend to lose focus, and jump from one interesting topic to the next, without ever gaining any experience. Thank you!!
Could someone please offer me some guidance on some kind of particular, SPECIFIC project that I could attempt, to "get my feet wet, so to speak"
CC BY-SA 3.0
null
2015-01-22T00:52:51.710
2015-01-22T17:25:40.690
null
null
7909
[ "machine-learning", "data-mining", "bigdata", "predictive-modeling" ]
I would suggest Kaggle learning projects - [http://www.kaggle.com/competitions](http://www.kaggle.com/competitions) Look for the ones in the 101 section that offer knowledge. There are many pre-made solutions ready, which you can ingest and try variations of. Also, I have bookmarked a [Comprehensive learning path – Data Science in Python](http://www.analyticsvidhya.com/blog/learning-path-data-science-python/), which among other things gives a few answers to your specific question.
Book suggestions
For Python, [Python: The Complete Reference](https://www.amazon.in/Python-Complete-Reference-Martin-Brown/dp/9387572943/ref=sr_1_3?dchild=1&keywords=python%20book&qid=1599758714&sr=8-3) and [Head First Python](https://www.amazon.in/Head-First-Python-Brain-Friendly-Guide/dp/9352134826/ref=sr_1_1?crid=1E0OSYEZSHQY3&dchild=1&keywords=head%20first%20python&qid=1599758926&sprefix=Head%20first%20%2Caps%2C353&sr=8-1).These two should be good enough. For Machine learning, it really depends whether you want to get into all the math or not. If not, I think there are enough resources online. But if you do, I would recommend [Pattern Recognition and Machine Learning](https://www.amazon.in/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1599759042&sr=8-1) by Christopher Bishop. It really dives deep into the math for ML.
4957
1
4966
null
5
19439
Do you know of any machine learning add-ins that I could use within Excel? For example I would like to be able to select a range of data and use that for training purposes and then use another sheet for getting the results of different learning algorithms.
Machine learning toolkit for Excel
CC BY-SA 3.0
null
2015-01-27T15:13:09.157
2020-02-19T03:55:45.710
null
null
7982
[ "machine-learning", "neural-network" ]
As far as I know, currently there are not that many projects and products that allow you to perform serious machine learning (ML) work from within Excel. However, the situation seems to be changing rapidly due to active Microsoft's efforts in popularizing its ML cloud platform Azure ML (along with ML Studio). The [recent acquisition](http://blogs.microsoft.com/blog/2015/01/23/microsoft-acquire-revolution-analytics-help-customers-find-big-data-value-advanced-statistical-analysis) of R-focused company Revolution Analytics by Microsoft (which appears to me as more of acqui-hiring to a large extent) is an example of the company's aggressive data science market strategy. In regard to ML toolkits for Excel, as a confirmation that we should expect most Excel-enabled ML projects and products to be Azure ML-focused, consider the following two projects (the latter is an open source): - Excel DataScope (Microsoft Research): https://www.microsoft.com/en-us/research/video/excel-datascope-overview/ - Azure ML Excel Add-In (seems to be Microsoft sponsored): https://azuremlexcel.codeplex.com
What are some easy to learn machine-learning applications?
I would recommend to start with some MOOC on machine learning. For example Andrew Ng's [course](https://www.coursera.org/course/ml) at coursera. You should also take a look at [Orange](http://orange.biolab.si/) application. It has a graphical interface and probably it is easier to understand some ML techniques using it.
4978
1
4981
null
0
1066
When considering a Support Vector Machine, it can take in multiple inputs. Can each of these inputs be a vector? What I am trying to say is: can the input be a 2-dimensional vector?
Can 2 dimensional input be applied to SVM?
CC BY-SA 3.0
null
2015-01-29T09:04:56.190
2015-01-29T12:42:05.757
null
null
8013
[ "machine-learning", "svm" ]
If I understand your question correctly: yes, an SVM can take multiple inputs. My suggestion for handling a vector as a feature would be to expand it out. For example,

```
x0 = (1,2)            x0 = 1
x1 = .4     ----->    x1 = 2
x2 = 0                x2 = .4
                      x3 = 0
```

If this does not capture all of the characteristics of the vector that are important, then you may want to add other features (like the magnitude of the vector) as well.
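As a minimal sketch of this idea (scikit-learn is an assumption here, and the feature values and labels below are made up purely for illustration):

```python
import numpy as np
from sklearn.svm import SVC

# Each sample originally has a 2-dimensional vector feature plus two scalar features.
# We simply flatten the vector into separate columns before fitting the SVM.
raw = [
    {"vec": (1.0, 2.0), "a": 0.4, "b": 0.0, "label": 1},
    {"vec": (0.5, 1.5), "a": 0.1, "b": 0.3, "label": 0},
    {"vec": (2.0, 0.5), "a": 0.9, "b": 0.2, "label": 1},
    {"vec": (0.2, 0.1), "a": 0.3, "b": 0.8, "label": 0},
]

X = np.array([[s["vec"][0], s["vec"][1], s["a"], s["b"]] for s in raw])  # expanded features
y = np.array([s["label"] for s in raw])

clf = SVC(kernel="rbf").fit(X, y)           # the SVM happily accepts the expanded columns
print(clf.predict([[1.0, 2.0, 0.4, 0.0]]))  # a new sample, expanded the same way
```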
How to apply two input and one output with LR and SVM
The simplest option in order to represent the two sentences independently of each other is to represent each of the two sentences with its own TFIDF vector of features and concatenate the two vectors. In other words you obtain 2 * N features where N is the size of the vocabulary. But at first sight it looks like the wrong approach for the problem that you're trying to solve: LR or SVM are unlikely to capture the high-level nature of paraphrasing, especially if fed with only basic vocabulary features like this. A slightly more advanced approach would be to provide the model with features which represent the relationship between the two sentences: length, words in common, readability measure, etc.
4985
1
5011
null
3
1197
I am currently working on a multi-class classification problem with a large training set. However, it has some specific characteristics, which induced me to experiment with it, resulting in a few versions of the training set (as a result of re-sampling, removing observations, etc.). I want to perform pre-processing of the data, that is, to scale, center and impute (not much imputation though) values. This is the point where I've started to get confused. I've been taught that you should always pre-process the test set in the same way you've pre-processed the training set, that is (for scaling and centering) to measure the mean and standard deviation on the training set and apply those values to the test set. This seems reasonable to me. But what should one do when the training set has been shrunk/resampled? Should one focus on the characteristics of the data that is actually feeding the model (that is what the 'train' function in R's caret package would suggest, as you can put the pre-processing object in there directly) and apply these to the test set, or should one capture the real characteristics of the data (from the whole untouched training set) and apply these? If the second option is better, maybe it would be worth it to capture the characteristics of the data by merging the training and test data together just for the pre-processing step, to get as accurate estimates as possible (I've actually never heard of anyone doing that though)? I know I can simply test some of the approaches specified here, and I surely will, but are there any suggestions based on theory or your intuition/experience on how to tackle this problem? I also have one additional and optional question. Does it make sense to center but NOT scale the data (or the other way around) in any case? Can anyone present an example where that approach would be reasonable? Thank you very much in advance.
Pre-processing (center, scale, impute) among training sets (different forms) and the test set - what is a good approach?
CC BY-SA 3.0
null
2015-01-29T13:54:24.940
2015-05-02T03:55:45.303
null
null
8017
[ "machine-learning", "data-mining", "dataset", "processing", "feature-scaling" ]
I thought about it this way: the training and test sets are both a sample of the unknown population. We assume that the training set is representative of the population we're studying. That is, whatever transformations we make to the training set are what we would make to the overall population. In addition, whatever subset of the training data we use, we assume that this subset represents the training set, which represents the population. So in response to your first question, it's fine to use that shrunk/resampled training set as long as you feel it's still representative of that population. That's assuming your untouched training set captures the "real characteristics" in the first place :) As for your second question, don't merge the training and testing set. The testing set is there to act as future unknown observations. If you build these into the model then you won't know if the model is wrong or not, because you used up the data you were going to test it with.
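As a concrete illustration of "fit the pre-processing on the training data only, then apply it to the test data", here is a minimal sketch (scikit-learn is an assumption, and the array values are invented only to show the pattern):

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 200.0], [2.0, np.nan], [3.0, 240.0], [4.0, 260.0]])
X_test  = np.array([[2.5, 230.0], [np.nan, 250.0]])

# Fit imputation and scaling on the training set only...
imputer = SimpleImputer(strategy="mean").fit(X_train)
scaler  = StandardScaler().fit(imputer.transform(X_train))

# ...then apply the *same* fitted transformations to the test set.
X_train_prep = scaler.transform(imputer.transform(X_train))
X_test_prep  = scaler.transform(imputer.transform(X_test))

print(X_train_prep.mean(axis=0))  # ~0 by construction
print(X_test_prep)                # scaled with the training mean/std, not its own
```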
Training and Test set
In general, generating the training and test sets independently is a legitimate option. The crucial aspect is that both generating processes are equal. You can check this by looking at [this example](http://appliedpredictivemodeling.com/blog/2013/4/11/a-classification-simulation-system) from the author of the caret R package and the Applied Predictive Modeling book. However, it is something that can easily be proven with simulations. In what follows, both generating training and testing data independently and splitting the same data into training and testing subsets give the same results. The glm has a median accuracy of 92%.

```
# simulations with training and test data generating at the same time
n <- 100
accuracy <- vector("numeric")
for (i in 1:1000){
  #create data
  x <- rnorm(n) # generate X
  z <- 1 + 4*x + rnorm(n) # linear combination with error
  pr <- 1/(1+exp(-z)) # inv-logit function
  y <- pr > 0.5 # 1 (True) if probability > 0.5
  df <- data.frame(y = y, x = x)
  train <- sample(x = 1:n, size=n%/%2, replace = F) # sampling training data units
  glm.fit <- glm(y ~ x, data = df[train,]) # fit on the training data
  predicted <- predict.glm(glm.fit, newdata = df[-train,]) # predict on the other data units
  accuracy=c(accuracy, sum(diag(table(predicted>0.5, df[-train,]$y)))/(n%/%2)) # collect accuracy
}
quantile(accuracy, probs = c(0.025, 0.5, 0.975)) # glm accuracy

# simulations with training and test data generating independently
n <- 100 # dataset size
accuracy <- vector("numeric")
for (i in 1:1000){
  #create data
  x <- rnorm(n%/%2) # generate X
  z <- 1 + 4*x + rnorm(n%/%2) # linear combination with error
  pr <- 1/(1+exp(-z)) # inv-logit function
  y <- pr > 0.5 # 1 (True) if probability > 0.5
  df.train <- data.frame(y = y, x = x)
  glm.fit <- glm(y ~ x, data = df.train) # fit on the training data
  # generating independent test data
  x <- rnorm(n%/%2)
  z <- 1 + 4*x + rnorm(n%/%2) # linear combination with error
  pr <- 1/(1+exp(-z)) # inv-logit function
  y <- pr > 0.5 # 1 (True) if probability > 0.5
  df.test <- data.frame(y = y, x = x)
  predicted <- predict.glm(glm.fit, newdata = df.test) # predict on the test data
  accuracy=c(accuracy, sum(diag(table(predicted>0.5, df.test$y)))/(n%/%2)) # collect accuracy
}
quantile(accuracy, probs = c(0.025, 0.5, 0.975)) # glm accuracy
```
4992
1
5060
null
11
402
I'm trying to build a cosine locality sensitive hash so I can find candidate similar pairs of items without having to compare every possible pair. I have it basically working, but most of the pairs in my data seem to have cosine similarity in the -0.2 to +0.2 range so I'm trying to dice it quite finely and pick things with cosine similarity 0.1 and above. I've been reading Mining Massive Datasets chapter 3. This talks about increasing the accuracy of candidate pair selection by Amplifying a Locality-Sensitive Family. I think I just about understand the mathematical explanation, but I'm struggling to see how I implement this practically. What I have so far is as follows - I have say 1000 movies each with ratings from some selection of 1M users. Each movie is represented by a sparse vector of user scores (row number = user ID, value = user's score) - I build N random vectors. The vector length matches the length of the movie vectors (i.e. the number of users). The vector values are +1 or -1. I actually encode these vectors as binary to save space, with +1 mapped to 1 and -1 mapped to 0 - I build sketch vectors for each movie by taking the dot product of the movie and each of the N random vectors (or rather, if I create a matrix R by laying the N random vectors horizontally and layering them on top of each other, then the sketch for movie m is R*m), then taking the sign of each element in the resulting vector, so I end up with a sketch vector for each movie of +1s and -1s, which again I encode as binary. Each vector is length N bits. - Next I look for similar sketches by doing the following: I split the sketch vector into b bands of r bits. Each band of r bits is a number. I combine that number with the band number and add the movie to a hash bucket under that number. Each movie can be added to more than one bucket. I then look in each bucket. Any movies that are in the same bucket are candidate pairs. Comparing this to 3.6.3 of mmds, my AND step is when I look at bands of r bits - a pair of movies passes the AND step if the r bits have the same value. My OR step happens in the buckets: movies are candidate pairs if they are both in any of the buckets. The book suggests I can "amplify" my results by adding more AND and OR steps, but I'm at a loss for how to do this practically as the explanation of the construction process for further layers is in terms of checking pairwise equality rather than coming up with bucket numbers. Can anyone help me understand how to do this?
Amplifying a Locality Sensitive Hash
CC BY-SA 3.0
null
2015-01-30T11:08:37.280
2016-09-12T03:51:16.957
null
null
8030
[ "machine-learning" ]
I think I've worked something out. Basically I'm looking for an approach that works in a map/reduce type environment and I think this approach does it. So, - suppose I have b bands of r rows and I want to add another AND stage, say another c ANDs. - so instead of b * r bits I need hashes of b * r * c bits - and I run my previous procedure c times, each time on b * r bits - If x and y are found to be a candidate pair by any of these procedures it emits a key value pair ((x, y), 1), with the tuple of IDs (x,y) as the key and the value 1 - At the end of the c procedures I group these pairs by key and sum - Any pair (x,y) with a sum equal to c was a candidate pair in each of the c rounds, and so is a candidate pair of the entire procedure. So now I have a workable solution, and all I need to do is work out whether using 3 steps like this will actually help me get a better result with fewer overall hash bits or better overall performance...
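A small sketch of the counting step described above, assuming each per-round banding pass already produces candidate pairs (the `candidates_for_round` function below is a hypothetical stand-in for one banded LSH pass, not a real implementation):

```python
from collections import Counter
import random

def candidates_for_round(movies, seed):
    """Hypothetical stand-in for one banded LSH pass: returns a set of candidate (x, y) pairs."""
    random.seed(seed)
    ids = list(movies)
    return {tuple(sorted(random.sample(ids, 2))) for _ in range(10)}

movies = range(6)
c = 3  # number of extra AND rounds

counts = Counter()
for round_id in range(c):
    for pair in candidates_for_round(movies, seed=round_id):
        counts[pair] += 1  # emit ((x, y), 1) and sum, map/reduce style

# A pair survives the extra AND stage only if it was a candidate in all c rounds.
# With this toy generator the final list may well be empty; the point is the counting logic.
final_candidates = [pair for pair, n in counts.items() if n == c]
print(final_candidates)
```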
Clustering by using Locality sensitive hashing *after* Random projection
It makes sense to reduce the dimensionality with Random Projection (RP) and then cluster with Locality Sensitive Hashing (LSH). One of the primary ways of improving LSH is running it multiple times and taking the consensus clusters. That process would be much faster on fewer dimensions. As far as redundancy - both methods rely on randomness. There is a small chance that the sequential randomness could yield non-robust results. If possible, run the process multiple times to find consistent results.
5000
1
5019
null
9
8621
Maybe this is a bit of a general question. I am trying to solve various regression tasks and I try various algorithms for them, for example multivariate linear regression or an SVR. I know that the output can't be negative and I never have negative output values in my training set, though I could have 0's in it (for example, I predict 'amount of cars on the road' - it can't be negative but can be 0). Rather often I face a problem where I am able to train a relatively good algorithm (maybe fit a good regression line to my data) and I have a relatively small average squared error on the training set. But when I try to run my regression algorithm against new data I sometimes get a negative output. Obviously, I can't accept a negative output since it is not a valid value. The question is - what is the proper way of dealing with such output? Should I treat a negative output as 0? Is there any general advice for such cases?
Proper way of fighting negative outputs of a regression algorithms where output must be positive all the way
CC BY-SA 3.0
null
2015-01-30T21:30:18.077
2015-02-02T20:20:08.503
null
null
7969
[ "machine-learning", "regression" ]
The problem is your model choice, as you seem to recognize. In the case of linear regression, there is no restriction on your outputs. Often this is fine when predictions need to be non-negative so long as they are far enough away from zero. However, since many of your training examples are zero-valued, this isn't the case. If your data is non-negative and discrete (as in the case with number of cars on the road), you could model using a generalized linear model (GLM) with a log link function. This is known as Poisson regression and is helpful for modeling discrete non-negative counts such as the problem you described. The Poisson distribution is parameterized by a single value $\lambda$, which describes both the expected value and the variance of the distribution. This results in an approach similar to the one described by Emre in that you are attempting to fit a linear model to the log of your observations.
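For instance, a minimal Poisson regression sketch (statsmodels is just one convenient choice here, and the simulated counts are invented purely to show the shape of the model):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # two explanatory variables
lam = np.exp(0.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1])  # log link: log(E[y]) is linear in X
y = rng.poisson(lam)                               # non-negative integer counts

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(model.params)                            # estimated coefficients
print(model.predict(sm.add_constant(X))[:5])   # predictions are always >= 0
```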
How do I force specified coefficients in a Linear Regression model to be positive?
Sorry, but on the surface, this sounds like a terrible idea to me: if linear regression gives you negative coefficients for some explanatory variables that you think should be positive, then it means that either your data is "wrong" (typically noisy or too small) or your intuition is misguided. I can't see any good reason why one would use a data-driven approach if the goal is to manually force the model in a particular way. This is the equivalent of breaking the thermometer to hide the fever. I'd suggest the following instead: - In general an unexpected outcome is arguably a good thing, in the sense that it tells us something we didn't know about the data. That's a cue to investigate what happens in the data. Linear regression is simple enough to analyze: one can look at the correlation, plot the relation between the variables etc. - If there's really something suspicious going on with some variables, maybe some errors in the data which make them behave in a way they shouldn't, then it's much better to discard them altogether from the model rather than fixing their coefficient, because this way the model won't rely on them at all.
5013
1
5016
null
0
2152
Our main use case is object detection in 3D lidar point clouds, i.e. the data is not in RGB-D format. We are planning to use a CNN for this purpose using Theano. Hardware limitations are CPU: 32 GB RAM Intel 47XX 4th Gen Core i7 and GPU: Nvidia Quadro K1100M 2GB. Kindly help me with a recommendation for the architecture. I am thinking along the lines of 27000 input neurons on the basis of a 30x30x30 voxel grid but can't tell in advance if this is a good option. Additional note: the dataset has 4500 points on average per view per point cloud.
Machine learning for Point Clouds Lidar data
CC BY-SA 3.0
null
2015-02-01T09:35:16.113
2017-05-19T16:12:46.690
2017-05-19T16:12:46.690
21
8051
[ "machine-learning", "dataset" ]
First, CNNs are great for image recognition, where you usually take subsampled windows of about 80 by 80 pixels; 27,000 input neurons is too large and it will take you forever to train a CNN on that. Furthermore, why did you choose a CNN? Why don't you try some more down-to-earth algorithms first, like SVMs or logistic regression? 4500 data points and 27,000 features seems unrealistic to me, and very prone to overfitting. Check this first: [http://scikit-learn.org/stable/tutorial/machine_learning_map/](http://scikit-learn.org/stable/tutorial/machine_learning_map/)
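A hedged sketch of such a baseline, assuming each point cloud has already been voxelised into a flat feature vector (the shapes are scaled down and the data is random, purely to show the pattern):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_clouds, n_voxels = 450, 2700          # scaled-down stand-ins for 4500 clouds x 27000 voxels
X = rng.random((n_clouds, n_voxels)).astype(np.float32)  # voxel occupancy features
y = rng.integers(0, 2, size=n_clouds)   # dummy object / no-object labels

# A strongly regularised linear baseline; if even this overfits badly,
# a CNN with 27,000 inputs trained on 4,500 samples is unlikely to do better.
clf = LogisticRegression(C=0.01, max_iter=1000)
print(cross_val_score(clf, X, y, cv=3))
```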
How do linear learning systems classify datapoints that fall on the hyperplane
Linear, binary classifiers can choose either class (but consistently) when the datapoint which is to classify is on the hyperplane. It just depends on how you programmed it. Also, it doesn't really matter. This is very unlikely to happen. In fact, if we had arbitrary precision computing and normal distributed features, there would be a probability of 0 (exactly, not rounded) that this would happen. We have IEEE 754 floats, so the probability is not 0, but still so small that there are much more important factors to worry about.
5014
1
5015
null
3
351
Recently I read about path ranking algorithm in a paper (source: [Knowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion](https://www.cs.cmu.edu/~nlao/publication/2014.kdd.pdf)). In this paper was a table (Table 3) with facts and I tried to understand how they were calculated. F1 (harmonic mean of precision and recall) = 0.04 P (precision) = 0.03 R (recall) = 0.33 W (weight given to this feature by logistic regression) I found a formula for F1 via Google which is $F1 = 2 * \frac{precision * recall}{precision + recall}$ The problem is that I get the result of 0.055 with this formula, but not the expected result of 0.04. Can someone help me to get this part? Also, does someone know how 'W' can be calculated? Thanks.
How to compute F1 score?
CC BY-SA 3.0
null
2015-02-02T14:53:51.810
2018-06-21T18:24:00.583
2015-02-02T14:59:43.687
8063
8063
[ "machine-learning" ]
First you need to learn about logistic regression; it is an algorithm that will assign weights to different features given some training data. Read the wiki intro - it is quite helpful; basically, the betas there are the same as the Ws in the paper. The formula you have is correct, and those values do seem off. It also depends on the number of significant figures: perhaps they are making their calculations with more precision than the figures they are reporting. But honestly, you can't understand much of the paper unless you understand LR.
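For reference, the arithmetic itself is a one-liner; with the rounded precision and recall from the table it indeed gives roughly 0.055, not 0.04:

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(f1(0.03, 0.33))  # ~0.055, which suggests the table was computed with more precise inputs
```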
Selecting threshold for F1 Score
Ideally, the threshold should be selected on your training set. Your holdout set is just there to double confirm that whatever has worked on your training set will generalize to images outside of the training set. This is the reason why hyperparameters tuning like GridSearch and RandomizedSearch in python has a cv parameter to cross-validate between different folds of your training set instead of allowing to choose the best parameters based on metric measured using the holdout set.
5023
1
5032
null
4
8697
With respect to [ROC](http://en.wikipedia.org/wiki/Receiver_operating_characteristic) can anyone please tell me what the phrase "discrimination threshold of binary classifier system" means? I know what a binary classifier is.
What is a discrimination threshold of binary classifier?
CC BY-SA 3.0
null
2015-02-03T08:52:42.133
2021-02-10T17:49:46.840
2021-02-10T17:49:46.840
85045
8013
[ "classification", "graphs", "classifier", "roc" ]
Just to add a bit. As was mentioned before, if you have a probabilistic classifier your output is a probability (a number between 0 and 1). Ideally you want to say that everything larger than 0.5 is part of one class and anything less than 0.5 is the other class. But if you are classifying cancer cases, you are deeply concerned with false negatives (telling someone he does not have cancer, when he does) while a false positive (telling someone he does have cancer when he doesn't) is not as critical (although being told you have cancer could be psychologically very costly). So you might artificially move that threshold from 0.5 to higher or lower values, to change the sensitivity of the model in general. By doing this, you can generate the ROC plot for different thresholds.
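A small sketch of moving the threshold and reading off the trade-off (scikit-learn is an assumption, and the toy labels and scores are invented):

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true  = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.45, 0.9, 0.3])  # predicted probabilities

# Each threshold gives a different (false positive rate, true positive rate) point;
# sweeping the threshold traces out the ROC curve.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold={thr:.2f}  FPR={f:.2f}  TPR={t:.2f}")

# Lowering the threshold below 0.5 catches more positives (fewer false negatives),
# at the price of more false positives.
y_pred_sensitive = (y_score >= 0.3).astype(int)
print(y_pred_sensitive)
```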
Classifier Threshold
The threshold you choose depends on the specifics of the problem you are trying to solve. More specifically, it should be based on how you weigh false positives vs. false negatives, i.e. how bad each of these are relative to each other. You mention that you are trying to maximize recall on the positive class, but if that were true you would should just classify everything as a positive class, and get a recall of 1.0. Based on the domain you are working in, you should decide how much a false positive 'costs' vs. how much a false negative 'costs'. Once you decide this, you can find the threshold that minimizes the function total cost = false negative count x FN cost + false positive count x FP cost
5038
1
5071
null
0
579
I'm trying to use a particular cost function (based on doubling rate of wealth) for a classification problem, and the solution works well in MATLAB. See [https://github.com/acmyers/compareCostFXs](https://github.com/acmyers/compareCostFXs) When I try to do this in Python 2.7.6 I don't get any errors, but it only returns zeros for the theta values. Here is the cost function and optimization method I've used in Python:

```
def costFunctionDRW(theta, X, y):
    # Initialize useful values
    m = len(y)

    # Marginal probability of acceptance
    marg_pA = sum(y)/m
    # Marginal probability of rejection
    marg_pR = 1 - marg_pA

    # =============================================================

    pred = sigmoid(np.dot(X,theta))
    final_wealth_individual = (pred/marg_pA)*y + ((1-pred)/marg_pR)*(1-y)
    final_wealth = np.prod(final_wealth_individual)
    final_wealth = -final_wealth

    return final_wealth

result = scipy.optimize.fmin(costFunctionDRW, x0=initial_theta, \
    args=(X_array, y_array), maxiter=1000, disp=False, full_output=True )
```

Any advice would be much appreciated!
minimization with a negative cost function: works in MATLAB, not in Python
CC BY-SA 3.0
null
2015-02-04T04:51:28.063
2015-02-06T09:41:54.033
null
null
985
[ "classification", "python" ]
The problem is with division on python 2.x. In python 2.x, division involving two integers produces an integer result. So `1/2==0`. Python 3.x does not have this problem, `1/2==.5`. There are two ways to avoid this. First, you can always convert one value you a float. So `1./2==0.5` and `1/2.==0.5`. However, you have to remember to do this everywhere, and if you forget it can lead to hard-to-find errors. The more reliable method is to always put this at the top of your code: `from __future__ import division`. This will switch python 2.x to the python 3 behavior, so `1/2==.5`. In python 3.x it does nothing, so it also makes your code python 3.x compatible in this regard.
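Applied to the marginal-probability line from the question, a quick sketch of both fixes (the toy list only illustrates the behavior):

```python
from __future__ import division  # put this at the very top of the script

y = [0, 1, 1, 0, 1]
m = len(y)

marg_pA = sum(y) / m             # 0.6 on Python 2 and 3 thanks to the __future__ import
marg_pA_alt = float(sum(y)) / m  # equivalent explicit fix that works without the import

print(marg_pA, marg_pA_alt)
```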
Should the minimum value of a cost (loss) function be equal to zero?
Saying that the well-known loss functions, like MSE or categorical cross-entropy, have a global minimum value equal to zero is flawed. The idea behind a loss function is to measure how near the model predictions are to the actuals (in the case of a regression). Now ideally, you would want your model to predict exactly the actual values. Only in that case do we get a loss equal to zero. Otherwise, the loss is non-zero almost all the time. If you recall the loss function for a linear regression setting [figure in the original post: the linear regression cost function], we need to minimise $J(\theta)$.
5040
1
5043
null
1
1340
I'm using a set of features, say $X_1, X_2, ..., X_m$, to predict a target value $Y$, which is a continuous value from zero to one. At first, I tried to use a linear regression model to do the prediction, but it does not perform well. The root-mean-squared error is about 0.35, which is quite high for prediction of a value from 0 to 1. Then, I tried different models, e.g., decision-tree-based regression, random-forest-based regression, gradient boosting tree regression, etc. However, all of these models also do not perform well (RMSE $\approx$ 0.35, not significantly different from linear regression). I understand there are many possible reasons for this problem, such as feature selection or choice of model, but maybe more fundamentally, the quality of the data set is not good. My question is: how can I examine whether it is caused by bad data quality? BTW, regarding the size of the data set, there are more than 10K data points, each of which is associated with 105 features. I have also tried to investigate the importance of each feature by using decision-tree-based regression. It turns out that only one feature (which, to my knowledge of this problem, should not be the most outstanding feature) has an importance of 0.2, while the rest of them have an importance of less than 0.1.
How to determine whether a bad performance is caused by data quality?
CC BY-SA 3.0
null
2015-02-04T06:10:34.893
2015-02-04T07:37:14.677
2015-02-04T07:37:14.677
7867
7867
[ "machine-learning" ]
First, it sounds like your choice of model selection is a problem here. Your outputs are binary-valued, not continuous. Specifically you may have a classification problem on your hands rather than a traditional regression problem. My first recommendation would be to try a simple classification approach such as logistic regression or linear discriminant analysis. Regarding your suspicions of bad data, what would bad data look like in this situation? Do you have reason to suspect that your $X$ values are noisy or that your $y$ values are mislabeled? It is also possible that there is not a strong relationship between any of your features and your targets. Since your targets are binary, you should look at histograms of each of your features to get a rough sense of the class conditional distributions, i.e. $p(X_1|y=1)$ vs $p(X_1|y=0)$. In general though, you will need to be more specific about what "bad data" means to you.
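A quick way to eyeball those class-conditional distributions, assuming the data sits in a pandas DataFrame with a binary `y` column (both the column names and the random data below are placeholders):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "X1": np.concatenate([rng.normal(0, 1, 500), rng.normal(1.5, 1, 500)]),
    "y":  np.repeat([0, 1], 500),
})

# Overlay p(X1 | y=0) and p(X1 | y=1); if the histograms sit on top of each other,
# this feature carries little information about the target.
for label, group in df.groupby("y"):
    plt.hist(group["X1"], bins=30, alpha=0.5, density=True, label=f"y={label}")
plt.legend()
plt.xlabel("X1")
plt.show()
```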
What metrics determine the quality of the model?
> Unscaled and scaled r2's are not highly correlated (0.31 AAMOF). Which one would best describe the accuracy of the model on unseen data? I don't think this is a matter of which will describe the generalization error better, because both of them are describing the same thing, just on different scales. So, the advice would be to use the accuracy metric consistent with the metric that will be used for predictions on unseen data. > Why isn't the unscaled r2 the same as the scaled r2? This is because [MSE is scale dependent.](https://stats.stackexchange.com/questions/11636/the-difference-between-mse-and-mape) > The model r2 is not the same as any of the validation r2's during training (val_r2_keras). Shouldn't the trained model r2 be the same as the one reported during the training? Why do you think so? They are different because the datasets for training and for validation are different.
5093
1
5765
null
2
1286
I'm very new to this community, so please overlook my noobness. I have a data set with 2948 instances and I tried to remove outliers using the InterquartileRange filter in Weka. The issue is that the number of 'YES' instances in ExtremeValues and Outliers comes to 2947 and 2946 respectively. In other words, all my data are considered outliers. What does this say about my data set? Or am I not meant to perform IQR on this data? If so, are there algorithms other than IQR to identify outliers? And how would one perform regression on such a data set? Thank you.
InterquartileRange takes up most instances in data set
CC BY-SA 3.0
null
2015-02-10T01:04:15.963
2015-05-12T10:59:50.547
null
null
4803
[ "dataset", "data-cleaning" ]
The InterQuartileRangeFilter from the Weka library uses an IQR formula to designate some values as outliers/extreme values. Any value outside the range $[Q_1 - k(Q_3-Q_1), Q_3 + k(Q_3-Q_1)]$ is considered some sort of an outlier, where $k$ is some constant and $IQR = Q_3 - Q_1$. By default Weka uses $k=3$ to define something as an outlier, and $k=3*2$ to define something as an extreme value (extreme outlier). The formula guarantees that at least 50% of the values are considered non-outliers. With a single variable (a univariate sample of values), it's practically impossible to reproduce your result. Note however that this filter can be applied to a data frame. When applied like this, it will consider as an outlier any instance of the data frame which has at least one value considered an outlier for its variable. Now, suppose that you have a data frame with 2 variables which are totally uncorrelated (independent). Considering again that only 10% of the values from each variable are considered outliers, due to independence one can expect that only $0.9^2$ of the instances will contain no outlying value at all. If you have $p$ variables like that in your data frame, you might expect to have only $0.9^p$ normal instances, and it is not very hard to arrive at your situation that way. There are two things which you will have to consider. One is to increase the factors for outliers if in general too many values are considered outliers (ideally you would like to take a look at each variable graphically and, if possible, get some idea about the distribution beneath). The second one is to check if you have many variables which are totally independent. The second hint does not solve your problem but might give you a reason why it happens.
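A small numeric illustration of that multiplicative effect (pure NumPy, independent of Weka; the sample sizes and the $k$ factor are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3000, 40                       # instances and (independent) attributes
X = rng.standard_normal((n, p))

q1, q3 = np.percentile(X, [25, 75], axis=0)
iqr = q3 - q1
k = 1.5                               # the classic boxplot rule; Weka's default factor is 3

outlier_mask = (X < q1 - k * iqr) | (X > q3 + k * iqr)
per_column_rate = outlier_mask.mean()             # fraction of values flagged per attribute
row_flagged = outlier_mask.any(axis=1).mean()     # fraction of instances flagged on *some* attribute

print(per_column_rate)                # small (~0.7% for normal data with k=1.5)
print(row_flagged)                    # already much larger with 40 attributes
print(1 - (1 - per_column_rate)**p)   # approximately matches, by independence
```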
Found input variables with inconsistent numbers of samples
It seems that I missed the word "scoring". In fact, the extra 3 was related to the number of characters of 'mae'.

```
def Ridgecv(alpha):
    return cross_val_score(Ridge(alpha=float(alpha), random_state=2),
                           X_train, y_train, scoring='mae', cv=5).mean()
```
5097
1
5098
null
0
253
What are some data analytics packages and features in Python that help with data analytics?
Python for data analytics
CC BY-SA 3.0
null
2015-02-10T12:36:48.250
2015-02-11T06:46:22.097
2015-02-11T06:46:22.097
8195
8195
[ "data-mining", "python" ]
You're looking for this answer: [https://www.quora.com/Why-is-Python-a-language-of-choice-for-data-scientists](https://www.quora.com/Why-is-Python-a-language-of-choice-for-data-scientists)
Is Python a viable language to do statistical analysis in?
Python is more "general purpose" while R has a clear(er) focus on statistics. However, most (if not all) things you can do in R can be done in Python as well. The difference is that you need to use additional packages in Python for some things you can do in base R. Some examples: - Data frames are base R while you need to use Pandas in Python. - Linear models (lm) are base R while you need to use statsmodels or scikit in Python. There are important conceptional differences to be considered. - For some rather basic mathematical operations you would need to use numpy. Overall this leads to some additional effort (and knowledge) needed to work fluently in Python. I personally often feel more comfortable working with base R since I feel like being "closer to the data" in (base) R. However, in other cases, e.g. when I use boosting or neural nets, Python seems to have an advantage over R. Many algorithms are developed in `C++` (e.g. [Keras](https://github.com/jjallaire/deep-learning-with-r-notebooks), [LightGBM](https://lightgbm.readthedocs.io/en/latest/)) and adapted to Python and (often later to) R. At least when you work with Windows, this often works better with Python. You can use things like Tensorflow/Keras, LightGBM, Catboost in R, but it sometimes can be daunting to get the additional package running in R (especially with GPU support). Many packages/methods are available for R and Python, such as GLMnet ([for R](https://web.stanford.edu/%7Ehastie/glmnet/glmnet_alpha.html) / [for Python](https://web.stanford.edu/%7Ehastie/glmnet_python/)). You can also see based on the Labs of "[Introduction to Statistical Learning](http://faculty.marshall.usc.edu/gareth-james/ISL/index.html)" - which are available [for R](http://faculty.marshall.usc.edu/gareth-james/ISL/code.html) and [for Python](https://github.com/JWarmenhoven/ISLR-python) as well - that there is not so much of a difference between the two languages in terms of what you can do. The difference is more like how things are done. Finally, since Python is more "general purpose" than R (at least in my view), there are [interesting and funny things](https://realpython.com/what-can-i-do-with-python/) you can do with Python (beyond statistics) which you cannot do with R (at least it is harder).
5109
1
5111
null
4
3443
Disclaimer: although I know some things about big data and am currently learning some other things about machine learning, the specific area that I wish to study is vague, or at least appears vague to me now. I'll do my best to describe it, but this question could still be categorised as too vague or not really a question. Hopefully, I'll be able to reword it more precisely once I get a reaction. So, I have some experience with Hadoop and the Hadoop stack (gained via using CDH), and I'm reading a book about Mahout, which is a collection of machine learning libraries. I also think I know enough statistics to be able to comprehend the math behind the machine learning algorithms, and I have some experience with R. My ultimate goal is making a setup that would make trading predictions and deal with financial data in real time. I wonder if there're any materials that I can further read to help me understand ways of managing that problem; books, video tutorials and exercises with example datasets are all welcome.
Machine Learning on financial big data
CC BY-SA 3.0
null
2015-02-11T10:48:51.903
2015-02-16T03:03:43.883
2015-02-11T16:12:47.457
97
8214
[ "machine-learning", "bigdata", "finance" ]
There are tons of materials on financial (big) data analysis that you can read and peruse. I'm not an expert in finance, but am curious about the field, especially in the context of data science and R. Therefore, the following are selected relevant resource suggestions that I have for you. I hope that they will be useful. Books: Financial analysis (general / non-R) - Statistics and Finance: An Introduction; - Statistical Models and Methods for Financial Markets. Books: Machine Learning in Finance - Machine Learning for Financial Engineering (!) - seems to be an edited collection of papers; - Neural Networks in Finance: Gaining Predictive Edge in the Market. Books: Financial analysis with R - Statistical Analysis of Financial Data in R; - Statistics and Data Analysis for Financial Engineering; - Financial Risk Modelling and Portfolio Optimization with R - Statistics of Financial Markets: An Introduction (code in R and MATLAB). Academic Journals - Algorithmic Finance (open access) Web sites - RMetrics - Quantitative Finance on StackExchange R Packages - the above-mentioned RMetrics site (see this page for general description); - CRAN Task Views, including Finance, Econometrics and several other Task Views. Competitions - MODELOFF (The Financial Modeling World Championships) Educational Programs - MS in Financial Engineering - Columbia University; - Computational Finance - Hong Kong University. Blogs (Finance/R) - Timely Portfolio; - Systematic Investor; - Money-making Mankind.
Structure the dataset for financial machine learning
You are going to have to consider three different factors: 1 - What data are you going to have available when you run your predictions? Are you going to have to pre-process that data? Are you going to have the time to do it? You should be setting your focus on this and work backwards from what runtime predictions look like 2 - When it comes to time series, you have to think of it in terms of (1) lookback windows, or how many periods prior are you considering and (2) time shifts, or how many periods forward are you predicting? This can result in an amazing number of combinations for you to model. You should end up with data sets where you have n features for X time periods and your target variable (your labeled data) is the result Y time periods from X where you are creating a prediction. 3 - The cardinal sin of time series is that you should never model on data that was not available at the specific runtime date. It's a common beginner mistake where you mix up your time periods and end up taking in data that was not actually available when it was released; in other words, it will not be available to you (or at least not correct) when you go to make a prediction. This can be common in financial data where entities can come back and "re-state" numbers at a future date. You have to make sure you are modeling on data as it stood on the target date you are modeling. With these thoughts in mind, you should be on your way to prepping your data in a way that makes sense.
5113
1
5114
null
1
2441
A book I'm now reading, "Apache Mahout Cookbook" by Pierro Giacomelli, states that > To avoid [this], you need to divide the vector files into two sets called the 80-20 split <...> A good dividing percentage is shown to be 80% and 20%. Is there a strict statistical proof of this being the best percentage, or is it a heuristic result?
Dividing percentage
CC BY-SA 3.0
null
2015-02-11T13:26:27.753
2015-02-11T13:56:30.847
null
null
8214
[ "machine-learning", "statistics" ]
If this is about splitting your data into training and testing data, then 80/20 is a common rule of thumb. An "optimal" split (which would need to be operationalized) would likely depend on your sample size, distributions and relationships between your variables. It is also common to split your data three ways (e.g., 60/20/20 - again rules of thumb), into a training set that you train your models on and a test set which you test your model on. You will iterate training and testing until you like the result. Then, and only then you apply the final model (trained on both the training and test set) on the third validation set. This avoids "overfitting on the test set". However, [cross-validation](http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29) is much better than a simple data split. Your textbook should also cover cross-validation. If it doesn't, get a better textbook.
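A short sketch of both ideas, the 80/20 split and cross-validation (scikit-learn is an assumption and the data is synthetic, purely to illustrate the pattern):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# 80/20 rule-of-thumb split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))

# 5-fold cross-validation on the same data, usually more informative than a single split
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5))
```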
Get the percentage of each class in classification
So many model classes in Sci-Kit learn have a method called "predict_proba(X)". This is a method that can predict the probability of the predicted classification. Below is a link from the sci-kit learn documentation that shows an example of how this would work in practice. [https://scikit-learn.org/stable/auto_examples/calibration/plot_calibration.html?highlight=classification](https://scikit-learn.org/stable/auto_examples/calibration/plot_calibration.html?highlight=classification)
5119
1
5135
null
1
1720
Is the survival table classification method on the Kaggle Titanic dataset an example of an implementation of Naive Bayes? I am asking because I am reading up on Naive Bayes and the basic idea is as follows: "Find out the probability of the previously unseen instance belonging to each class, then simply pick the most probable class." The survival table ([http://www.markhneedham.com/blog/tag/kaggle/](http://www.markhneedham.com/blog/tag/kaggle/)) seems like an evaluation of the probabilities of survival given possible combinations of values of the chosen features, and I'm wondering if it could be an example of Naive Bayes under another name. Can someone shed light on this?
Kaggle Titanic Survival Table an example of Naive Bayes?
CC BY-SA 3.0
null
2015-02-12T07:00:38.160
2015-02-13T09:06:33.070
null
null
8234
[ "machine-learning", "classification" ]
Naive Bayes is just one of several approaches that you may apply in order to solve the Titanic problem. The aim of Kaggle's Titanic problem is to build a classification system that is able to predict one outcome (whether one person survived or not) given some input data. The survival table is a training dataset, that is, a table containing a set of examples to train your system with. As I mentioned before, you could apply Naive Bayes to build your classification system to solve the Titanic problem. Naive Bayes is one of the simplest classification algorithms out there. It assumes that the data in your dataset has a very specific structure. Sometimes Naive Bayes can provide you with results that are good enough. Even if that is not the case, Naive Bayes may be useful as a first step; the information you obtain by analyzing Naive Bayes' results, and by further data analysis, will help you to choose which classification algorithm you could try next. Other examples of classification methods are k-nearest neighbours, neural networks, and logistic regression, but this is just a short list. If you are new to Machine Learning, I recommend you take a look at this course from Stanford: [https://www.coursera.org/course/ml](https://www.coursera.org/course/ml)
What Shape Does Naive Bayes make?
Speaking specifically about [Gaussian Naive Bayes](https://www.cs.cornell.edu/courses/cs4780/2018fa/lectures/lecturenote05.html), the decision boundaries are ellipsoids characterized by the means and standard deviations of the Gaussian distributions. [](https://i.stack.imgur.com/yT52J.jpg) Image: [https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#sphx-glr-auto-examples-classification-plot-classifier-comparison-py](https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#sphx-glr-auto-examples-classification-plot-classifier-comparison-py)
5122
1
5125
null
5
2660
I'm curious if anyone has Python library suggestions for inferential statistics. I'm currently reading [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), which uses R for the example code, but ideally I'd like to use Python as well. Most of my data experience is with Pandas, Matplotlib, and Sklearn doing predictive modeling. So far I've found [statsmodels](https://pypi.python.org/pypi/statsmodels). Is this what is recommended or is there something else? Thanks!
Best Python library for statistical inference
CC BY-SA 3.0
null
2015-02-12T10:46:41.280
2015-02-12T11:23:23.487
null
null
8236
[ "python", "statistics" ]
[statsmodels](http://statsmodels.sourceforge.net/devel/) is a good, and fairly standard, package for statistics. For Bayesian inference you can go with [PyMC](http://pymc-devs.github.io/pymc/) - see, for example, [Cam Davidson-Pilon, Probabilistic Programming & Bayesian Methods for Hackers](http://camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/).
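A minimal statsmodels example of the kind of inferential output (coefficients, standard errors, p-values, confidence intervals) that the ISL labs produce in R; the data below is simulated just to show the API:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=100)

X = sm.add_constant(x)          # adds the intercept column
fit = sm.OLS(y, X).fit()

print(fit.summary())            # coefficients, std errors, t-stats, p-values, R^2
print(fit.conf_int())           # 95% confidence intervals
```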
Is Python a viable language to do statistical analysis in?
Python is more "general purpose" while R has a clear(er) focus on statistics. However, most (if not all) things you can do in R can be done in Python as well. The difference is that you need to use additional packages in Python for some things you can do in base R. Some examples: - Data frames are base R while you need to use Pandas in Python. - Linear models (lm) are base R while you need to use statsmodels or scikit in Python. There are important conceptional differences to be considered. - For some rather basic mathematical operations you would need to use numpy. Overall this leads to some additional effort (and knowledge) needed to work fluently in Python. I personally often feel more comfortable working with base R since I feel like being "closer to the data" in (base) R. However, in other cases, e.g. when I use boosting or neural nets, Python seems to have an advantage over R. Many algorithms are developed in `C++` (e.g. [Keras](https://github.com/jjallaire/deep-learning-with-r-notebooks), [LightGBM](https://lightgbm.readthedocs.io/en/latest/)) and adapted to Python and (often later to) R. At least when you work with Windows, this often works better with Python. You can use things like Tensorflow/Keras, LightGBM, Catboost in R, but it sometimes can be daunting to get the additional package running in R (especially with GPU support). Many packages/methods are available for R and Python, such as GLMnet ([for R](https://web.stanford.edu/%7Ehastie/glmnet/glmnet_alpha.html) / [for Python](https://web.stanford.edu/%7Ehastie/glmnet_python/)). You can also see based on the Labs of "[Introduction to Statistical Learning](http://faculty.marshall.usc.edu/gareth-james/ISL/index.html)" - which are available [for R](http://faculty.marshall.usc.edu/gareth-james/ISL/code.html) and [for Python](https://github.com/JWarmenhoven/ISLR-python) as well - that there is not so much of a difference between the two languages in terms of what you can do. The difference is more like how things are done. Finally, since Python is more "general purpose" than R (at least in my view), there are [interesting and funny things](https://realpython.com/what-can-i-do-with-python/) you can do with Python (beyond statistics) which you cannot do with R (at least it is harder).
5166
1
5223
null
1
953
I have the following data: ($x^1_i$, $y^1_i$) for $i=1,2,...,N_1$; ($x^2_i$, $y^2_i$) for $i=1,2,...,N_2$; ... ; ($x^m_i$, $y^m_i$) for $i=1,2,...,N_m$. Is it possible to train a neural net to produce some $y_k$, where $k \le \min(N)$, given an input $\{x_1, x_2, ..., x_{k-1}\}$? If so, any suggestion of documentation/library I can look at (preferably Python)?
training neural net with multiple sets of time-series data
CC BY-SA 3.0
null
2015-02-17T21:50:10.210
2020-12-17T17:26:07.017
null
null
8302
[ "machine-learning", "dataset", "neural-network", "time-series", "regression" ]
Yes, this is a straightforward application for neural networks. In this case $y_k$ are the outputs of the last layer (the "classifier"); $x_k$ is a feature vector and $y_k$ is what it gets classified into. For simplicity, prepare your data so that $N$ is the same for all series. The problem you have is perhaps that in the case of time series you won't have enough data: you need (ideally) many thousands of examples to train a network, which in this case means time series, not points. Look at the specialized literature on neural networks for time series prediction for ideas on network architecture. Library: try Pylearn2 at [http://deeplearning.net/software/pylearn2/](http://deeplearning.net/software/pylearn2/). It's not the only good option but it should serve you well.
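A sketch of how the training pairs could be assembled from several series of different lengths, truncating each to a common window as suggested (pure NumPy; the window length and the toy series are arbitrary):

```python
import numpy as np

# Three toy series of different lengths N_1, N_2, N_3
series = [np.sin(np.linspace(0, 6, n)) for n in (40, 55, 70)]

k = 10  # use the previous k-1 values to predict the k-th
X, y = [], []
for s in series:
    for start in range(len(s) - k + 1):
        window = s[start:start + k]
        X.append(window[:-1])   # x_1 ... x_{k-1}
        y.append(window[-1])    # y_k

X, y = np.array(X), np.array(y)
print(X.shape, y.shape)  # every row is one training example for a feed-forward net
```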
Time series forecasting using multiple time series as training data
First cluster the events that have the most similarities. Then use a comparable one (or more than one of them) to forecast the sales of the new events for which you do not have historical data. Use all the other information you have as regressors. Here is code to do the forecast in R. You will be able to combine different forecasting models with this code:

```
choose_model<-function(x,h,reg,new_reg,end_train,start_test){
  library(forecast)
  library(forecastHybrid)
  library(tidyverse)
  #train data
  x_train <- window(x, end = end_train )
  x_test <- window(x, start = start_test)
  #train and test for regressors
  reg_train <- window(reg, end = end_train )
  reg_test <- window(reg, start = start_test)
  h1=length(x_test)
  #model1
  stlf(x_train , method="arima",s.window= nrow(x_train),xreg = reg_train, newxreg = reg_test, h=h1)-> fc_stlf_xreg
  #model2
  auto.arima(x_train, stepwise = FALSE, approximation = FALSE,xreg=reg_train)%>%forecast(h=h1,xreg=reg_test) -> fc_arima_xreg
  #model3
  set.seed(12345)#for nnetar model
  nnetar(x_train, MaxNWts=nrow(x), xreg=reg_train)%>%forecast(h=h1, xreg=reg_test) -> fc_nnetar_xreg
  #model4
  stlf(x_train , method= "ets",s.window= 12, h=h1)-> fc_stlf_ets
  #Combination
  mod1 <- lm(x_test ~ 0 + fc_stlf_xreg$mean + fc_arima_xreg$mean + fc_nnetar_xreg$mean + fc_stlf_ets$mean)
  mod2 <- lm(x_test/I(sum(coef(mod1))) ~ 0 + fc_stlf_xreg$mean + fc_arima_xreg$mean + fc_nnetar_xreg$mean + fc_stlf_ets$mean)
  #model1
  stlf(x, method="arima",s.window= 12,xreg=reg, newxreg=new_reg, h=h)-> fc_stlf
  #model2
  auto.arima(x, stepwise = FALSE, approximation = FALSE,xreg=reg)%>%forecast(h=h,xreg=new_reg) -> fc_arima
  #model3
  set.seed(12345)#for nnetar model
  nnetar(x, MaxNWts=nrow(x), xreg=reg)%>%forecast(h=h, xreg=new_reg) -> fc_nnetar
  #model4
  stlf(x , method= "ets",s.window= 12, h=h)-> fc_stlf_e
  #Combination
  Combi <- (mod2$coefficients[[1]]*fc_stlf$mean + mod2$coefficients[[2]]*fc_arima$mean + mod2$coefficients[[3]]*fc_nnetar$mean + mod2$coefficients[[4]]*fc_stlf_e$mean)
  return(Combi)
}
```
5198
1
5210
null
3
572
Are there commonly accepted ways to visualize the results of a multivariate regression for a non-quantitative audience? In particular, I'm asking how one should present data on coefficients and T statistics (or p-values) for a regression with around 5 independent variables.
How to visualize multivariate regression results
CC BY-SA 3.0
null
2015-02-20T23:16:18.767
2015-02-23T08:17:30.120
null
null
6403
[ "visualization", "regression", "linear-regression" ]
I personally like dotcharts of standardized regression coefficients, possibly with standard error bars to denote uncertainty. Make sure to standardize coefficients (and SEs!) appropriately so they "mean" something to your non-quantitative audience: "As you see, an increase of 1 unit in Z is associated with an increase of 0.3 units in X." In R (without standardization):

```
set.seed(1)
foo <- data.frame(X=rnorm(30),Y=rnorm(30),Z=rnorm(30))
model <- lm(X~Y+Z,foo)
coefs <- coefficients(model)
std.errs <- summary(model)$coefficients[,2]
dotchart(coefs,pch=19,xlim=range(c(coefs+std.errs,coefs-std.errs)))
lines(rbind(coefs+std.errs,coefs-std.errs,NA),rbind(1:3,1:3,NA))
abline(v=0,lty=2)
```

![regression visualization](https://i.stack.imgur.com/0AwtB.png)
How to visualize (make plot) of regression output against categorical input variable?
One possible first step is to convert the data back to the original coding. This is called in SQL unpivot, in R melt. Here an R example ``` > my.df <- read.table( + text = "DistrictA DistrictB DistrictC DistrictD DistrictE Price + 1 0 0 0 0 10000 + 0 1 0 0 0 20000 + 0 0 1 0 0 30000 + 0 0 0 1 0 40000 + 0 0 0 0 1 50000" + , header = TRUE) > my.df DistrictA DistrictB DistrictC DistrictD DistrictE Price 1 1 0 0 0 0 10000 2 0 1 0 0 0 20000 3 0 0 1 0 0 30000 4 0 0 0 1 0 40000 5 0 0 0 0 1 50000 > library(reshape) > subset(melt(my.df, id="Price", variable = "District"),value == 1)[,c(1,2)] Price District 1 10000 DistrictA 7 20000 DistrictB 13 30000 DistrictC 19 40000 DistrictD 25 50000 DistrictE ``` After that you plot the Price dependent on a factor variable. You may additionally consider to order the factor based on the predicted price. I provide no details, as you don't tagged your tool, but I would recommend additional to a scatter plot to consider a box plot and/or density plot - always combined with the prediction value from the model for each factor level.
5199
1
5201
null
1
1030
This question is likely somewhat naive. I know I (and my colleagues) can install and use Python on local machines. But is that really a best practice? I have no idea. Is there value in setting up a Python "server"? A box on the network where we develop our data science related Python code. If so, what are the hardware requirements for such a box? Do I need to be concerned about any specific packages or conflicts between projects?
What is a good hardware setup for using Python across multiple users
CC BY-SA 3.0
null
2015-02-21T01:16:34.387
2015-02-21T11:59:31.860
null
null
8368
[ "python" ]
Is installing Python locally a good practice? Yes, if you are going to develop in Python, it is always a good idea to have a local environment where you can break things safely. Is there value in setting up a Python "server"? Yes, but before doing so, be sure to be able to share your code with your colleagues using a [version control system](http://en.wikipedia.org/wiki/Revision_control). My reasoning would be that, before you move things to a server, you can move a great deal forward by being able to test several different versions in the local environment mentioned above. Examples of VCS are [git](http://git-scm.com), [svn](https://subversion.apache.org), and for the deep nerds, [darcs](http://darcs.net). Furthermore, a "Python server" where you can deploy your software once it is integrated into a releasable version is something usually called "[staging server](http://en.wikipedia.org/wiki/Staging_site)". There is a whole philosophy in software engineering — [Continuous Integration](http://en.wikipedia.org/wiki/Continuous_integration) — that advocates staging whatever you have in VCS daily or even on each change. In the end, this means that some automated program, running on the staging server, checks out your code, sees that it compiles, runs all defined tests and maybe outputs a package with a version number. Examples of such programs are [Jenkins](http://jenkins-ci.org), [Buildbot](http://buildbot.net) (this one is Python-specific), and [Travis](https://travis-ci.org/recent) (for cloud-hosted projects). What are the hardware requirements for such a box? None, as far as I can tell. Whenever it runs out of disk space, you will have to clean up. Having more CPU speed and memory will make concurrent builds easier, but there is no real minimum. Do I need to be concerned about any specific packages or conflicts between projects? Yes, this has been identified as a problem, not only in Python, but in many other systems (see [Dependency hell](http://en.wikipedia.org/wiki/Dependency_hell)). The established practice is to keep projects isolated from each other as far as their dependencies are concerned. This means, avoid installing dependencies on the system Python interpreter, even locally; always define a [virtual environment](http://virtualenv.readthedocs.org/en/latest/) and install dependencies there. Many of the aforementioned CI servers will do that for you anyway.
Is Python suitable for big data
To clarify, I feel like the original question references by OP probably isn't be best for a SO-type format, but I will certainly represent `python` in this particular case. Let me just start by saying that regardless of your data size, `python` shouldn't be your limiting factor. In fact, there are just a couple main issues that you're going to run into dealing with large datasets: - Reading data into memory - This is by far the most common issue faced in the world of big data. Basically, you can't read in more data than you have memory (RAM) for. The best way to fix this is by making atomic operations on your data instead of trying to read everything in at once. - Storing data - This is actually just another form of the earlier issue, by the time to get up to about 1TB, you start having to look elsewhere for storage. AWS S3 is the most common resource, and python has the fantastic boto library to facilitate leading with large pieces of data. - Network latency - Moving data around between different services is going to be your bottleneck. There's not a huge amount you can do to fix this, other than trying to pick co-located resources and plugging into the wall.
5204
1
5237
null
1
362
Most literature focus on either explicit rating data or implicit (like/unknown) data. Are there any good publications to handle like/dislike/unknown data? That is, in the data matrix there are three values, and I'd like to recommend from unknown entries. And are there any good open source implementations on this? Thanks.
Matrix factorization for like/dislike/unknown data
CC BY-SA 3.0
null
2015-02-21T14:02:39.170
2015-04-27T12:48:00.957
null
null
1376
[ "machine-learning", "recommender-system" ]
This is very similar to the Netflix problem: most matrix factorization methods can be adapted so that the error function is only evaluated at known points. For instance, you can take the gradient descent approach to SVD (minimizing the Frobenius norm) but only evaluate the error and calculate the gradient at known points. I believe you can easily find code for this. Another option would be exploiting the binary nature of your matrix and adapting binary matrix factorization tools in order to enforce binary factors (if you require them). I'm sure you can adapt one of the methods described [here](http://www.hongliangjie.com/2011/03/15/reviews-on-binary-matrix-decomposition/) to work with unknown data using a trick similar to the one above.
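A small NumPy sketch of that idea: factorise a ±1 matrix by gradient descent, evaluating the squared error only on the known entries (all sizes, learning rates and regularisation values here are arbitrary choices, and the observed matrix is random):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank = 30, 20, 5

# Observed matrix: +1 = like, -1 = dislike, np.nan = unknown
R = rng.choice([1.0, -1.0, np.nan], size=(n_users, n_items), p=[0.2, 0.2, 0.6])
known = ~np.isnan(R)

U = 0.1 * rng.standard_normal((n_users, rank))
V = 0.1 * rng.standard_normal((n_items, rank))
lr, reg = 0.05, 0.01

for epoch in range(200):
    E = np.where(known, (U @ V.T) - R, 0.0)  # error only at known entries
    U -= lr * (E @ V + reg * U)              # gradient step for user factors
    V -= lr * (E.T @ U + reg * V)            # gradient step for item factors

E = np.where(known, (U @ V.T) - R, 0.0)
print(np.sqrt((E[known] ** 2).mean()))       # RMSE on the known entries
pred = U @ V.T                               # scores for unknown entries can be ranked for recommendation
```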
What is a good explanation of Non Negative Matrix Factorization?
Non-Negative Matrix Factorization (NMF) is described well in the paper by [Lee and Seung, 1999](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjQvKC6zc_QAhXLMSYKHY9pDVwQFggcMAA&url=http%3A%2F%2Fwww.columbia.edu%2F~jwp2128%2FTeaching%2FW4721%2Fpapers%2Fnmf_nature.pdf&usg=AFQjCNHOf7BKOMfBKKs1wJ2SxSwfj7bgaA). Simply Put NMF takes as an input a [term-document matrix](https://en.wikipedia.org/wiki/Document-term_matrix) and generates a set of topics that represent weighted sets of co-occurring terms. The discovered topics form a basis that provides an efficient representation of the original documents. About NMF NMF is used for [feature extraction](https://en.wikipedia.org/wiki/Feature_extraction) and is generally seen to be useful when there are many attributes, particularly when the attributes are ambiguous or are not strong predictors. By combining attributes NMF can display patterns, topics, or themes which have importance. In practice, one encounters NMF typically where text is involved. Consider an example, where the same word (love) in a document could different meanings: - I love lettuce wraps. - I love the way I feel when I'm on vacation in Mexico. - I love my dog, Euclid. - I love being a Data Scientist. In all 4 cases, the word 'love' is used, but it has a different meaning to the reader. By combining attributes, NMF introduces context which creates additional predictive power. $"love" + "lettuce \ wraps" \ \Rightarrow \ "pleasure \ by \ food"$ $"love" + "vacation \ in \ Mexico" \ \Rightarrow \ "pleasure \ by \ relaxation"$ $"love" + "dog" \ \Rightarrow \ "pleasure \ by \ companionship"$ $"love" + "Data \ Scientist" \ \Rightarrow \ "pleasure \ by \ occupation"$ How Does It Happen NMF breaks down the multivariate data by creating a user-defined number of features. Each one of these features is a combination of the original attribute set. It is also key to remember these coefficients of these linear combinations are non-negative. Another way to think about it is that NMF breaks your original data features (let's call it V) into the product of two lower ranked matrices (let's call it W and H). NMF uses an iterative approach to modify the initial values of W and H so that the product approaches V. When the approximation error converges or the user-defined number of iterations is reached, NMF terminates. NMF data preparation - Numeric attributes are normalized. - Missing numerical values are replaced with the mean. - Missing categorical values are replaced with the mode. It is important to note that outliers can impact NMF significantly. In practice, most Data Scientist use a clipping transformation before binning or normalizing. In addition, NMF in many cases will benefit from normalization. As in many other algorithmic cases, to improve matrix factorization, one needs to decrease the error tolerance (which will increase compute time).
5208
1
5211
null
1
193
As an example, suppose you are trying to classify humans versus dogs. Is it possible to approach this problem by classifying different kinds of animals (birds, fish, reptiles, mammals, ...) or even smaller subsets (dogs, cats, whales, lions, ...)? Then, when you try to classify a new data set, anything that does not fall into one of those classes can be considered a human. If this is possible, are there any benefits to breaking a binary class problem into several classes (or perhaps labels)? The benefits I am looking into are: accuracy/precision of the classifier, and parallel learning.
Splitting binary classification into smaller subsets
CC BY-SA 3.0
null
2015-02-22T20:30:30.630
2015-02-23T11:43:11.833
2015-02-23T00:19:21.277
8381
8381
[ "machine-learning", "classification" ]
If you are trying to get the best accuracy, etc. for a given question, you should always learn on a training set that is labeled exactly according to that question. You shouldn't expect to get better results if you are using more granular class labels. The classifier would then try to pick up the differences between those classes and separate them. Since, in practice, the variables in your training set will not perfectly explain the more granular classification question, you shouldn't expect to get a better answer for your less granular classification problem. If you are not happy with the accuracy of your model, try the following instead:

- Review the explanatory variables. Think about what might influence the classification problem. Maybe there is a clever way to construct new variables (from your existing ones) that helps. It's not possible to give general advice on that, since you have to consider the properties of your classifier.
- If your class distribution is very skewed, you might consider over-/undersampling.
- You might run several different classifiers and then classify based on the majority vote. Note that you will most likely sacrifice the explainability of your model.

Also, you seem to have a misunderstanding when you write 'you would assign it to human if it doesn't fall into any of the granular classes'. Note that you should always pick class labels covering the whole universe (all possible classes). One class can always be defined as the complement of the other classes. Also, you will have to have instances of each class in your training set.
Binary Classification
As your data is highly imbalanced, your task is a case of anomaly detection. Anomaly detection applies when your data has one kind of example in very low numbers and the other in very high numbers, like your membership split here. Another example is detecting flaws in car engines: out of 10,000 engines, only 40 are flawed. Similarly, members are far fewer than non-members, so treat the people who are members as anomalies. [https://www.allerin.com/blog/machine-learning-for-anomaly-detection](https://www.allerin.com/blog/machine-learning-for-anomaly-detection) As you can see in the link above, there are both supervised and unsupervised methods available for this kind of task. I suggest you try those methods. You can also check this link for some more explanation: [https://www.datascience.com/blog/python-anomaly-detection](https://www.datascience.com/blog/python-anomaly-detection)
5209
1
5998
null
4
3472
I am performing Named Entity Recognition using Stanford NER. I have successfully trained and tested my model. Now I want to know: 1) What is the general way of measuring the accuracy of an NER model? For example, what techniques or approaches are used? 2) Is there any built-in method in Stanford NER for evaluating the accuracy?
Accuracy of Stanford NER
CC BY-SA 3.0
null
2015-02-23T08:00:03.360
2015-10-05T20:39:30.453
null
null
8016
[ "nlp", "performance" ]
[http://en.wikipedia.org/wiki/Named-entity_recognition#Formal_evaluation](http://en.wikipedia.org/wiki/Named-entity_recognition#Formal_evaluation) : > To evaluate the quality of a NER system's output, several measures have been defined. While accuracy on the token level is one possibility, it suffers from two problems: the vast majority of tokens in real-world text are not part of entity names as usually defined, so the baseline accuracy (always predict "not an entity") is extravagantly high, typically >90%; and mispredicting the full span of an entity name is not properly penalized (finding only a person's first name when their last name follows is scored as ½ accuracy). In academic conferences such as CoNLL, a variant of the F1 score has been defined as follows: Precision is the number of predicted entity name spans that line up exactly with spans in the gold standard evaluation data. I.e. when [Person Hans] [Person Blick] is predicted but [Person Hans Blick] was required, precision for the predicted name is zero. Precision is then averaged over all predicted entity names. Recall is similarly the number of names in the gold standard that appear at exactly the same location in the predictions. F1 score is the harmonic mean of these two. It follows from the above definition that any prediction that misses a single token, includes a spurious token, or has the wrong class, "scores no points", i.e. does not contribute to either precision or recall.
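As far as I remember, Stanford NER's CRFClassifier prints a similar per-entity precision/recall/F1 table when you run it on a labelled test file, but if you want to reproduce the span-level scoring described above yourself, a minimal sketch (assuming you already have gold and predicted entities as (doc, start, end, type) tuples) could be:

```python
def span_prf(gold, pred):
    """Span-level precision/recall/F1.

    gold, pred: sets of (doc_id, start, end, entity_type) tuples --
    an entity counts as correct only if the whole span and type match.
    """
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: "Hans Blick" split into two predicted PERSON spans.
gold = {("d1", 0, 2, "PERSON")}
pred = {("d1", 0, 1, "PERSON"), ("d1", 1, 2, "PERSON")}
print(span_prf(gold, pred))   # (0.0, 0.0, 0.0) -- partial matches score nothing
```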
What dataset was Stanford NER trained on?
The [original paper](https://nlp.stanford.edu/~manning/papers/gibbscrf3.pdf) mentions two corpora: CoNLL 2003 ([apparently here now](https://www.clips.uantwerpen.be/conll2003/ner/)) and the "CMU Seminar Announcements Task". However according to the page linked in the question the actual NER was trained on a larger combination of corpora: > Our big English NER models were trained on a mixture of CoNLL, MUC-6, MUC-7 and ACE named entity corpora, and as a result the models are fairly robust across domains. So it might be difficult to obtain the exact original training data. However most of these corpora were compiled for some shared tasks and should be available online. There are probably more recent ones as well: a quick search "named entity recognition shared task" returns many hits.
5212
1
5220
null
1
515
We are currently developing customer relationship management software for SMEs. What I'd like to structure for our future CRM is a social-based approach (Social CRM). Therefore, we will allow our users (SMEs) to integrate their CRM with their social network accounts. The CRM will also enhance internal communication within the owning company. All the processes I've just indicated will certainly generate lots of unstructured data. I am wondering how we can integrate big data and data-mining concepts into our project, especially for the data generated by social networks. I am not an expert in these topics, but I really want to start somewhere.

### Basic capabilities of CRM (Modules)

- Contacts: People with whom you have a business relationship.
- Accounts: Clients you have done business with before.
- Leads: Accounts who are your potential customers.
- Opportunities: Any business opportunity for an account or a lead.
- Sales Orders
- Calendar
- Tasks

What kinds of unstructured data, or what approaches (ideas), could be useful for the modules I've just described above? If you need more specific information, please write in the comments.
Big data and data mining for CRM?
CC BY-SA 3.0
null
2015-02-23T12:03:47.773
2015-07-31T08:33:51.563
2020-06-16T11:08:43.077
-1
8386
[ "data-mining", "bigdata", "software-development" ]
The two modules where you can really harness data mining and big data techniques are probably Leads and Opportunities. The reason is that, as you've written yourself, both contain 'potential' information that you can harness (through predictive algorithms) to get more customers. Taking Leads as an example, you can use a variety of machine learning algorithms to assign a probability to each account, based on that account's potential for becoming your customer in the near future. Since you already have an Accounts module which gives you information about your current customers, you can use this information to train your machine learning algorithms. This is all at a very high level but hopefully, you're getting the gist of what I'm saying.
Machine Learning Best Practices for Big Dataset
I'll list some practices I've found useful, hope this helps: - Irrespective of whether the data is huge or not, cross validation is a must when building any model. If this takes more time than an end consumer is willing to wait, you may need to reset their expectations, or get faster hardware/software to build the model; but do not skip cross validation. Plotting learning curves and cross-validation are effective steps to help guide us so we recognize and correct mistakes earlier in the process. I've experienced instances when a simple train-test set does not reveal any problems until I run cross-fold validations and find a large variance in the performance of the algorithm on different folds. - Before sizing up a dataset, eliminate the records with missing values of key variables and outliers, columns of highly correlated variables, and near zero variance variables. This will give you a much better estimate of the real usable dataset. Sometimes you may end up with only a fraction of the available dataset that can actually be used to build a model. - When sizing up a dataset for building a model, it is easier to estimate the computing resources if you enumerate the dataset in rows and columns and memory size of the final numeric matrix. Since every machine learning algorithm is ultimately going to convert the dataset into a numeric matrix, enumerating the dataset size in terms of GBs/TBs of raw input data (which may be mostly strings/textual nominal variables/etc.) is often misleading and the dataset may appear to be more daunting and gigantic to work with than it is. - Once you know (or estimate) the final usable size of your dataset, check if you have a suitable machine to be able to load that into memory and train the model. If your dataset size is smaller than memory available/usable by the software, then you need not worry about the size any longer. - If the dataset size is larger than the memory available to train a model, then you could try these approaches (starting from the simplest ones first): Use a machine with more memory: If you're using a cloud service provider then the simplest approach could be just to provision more memory and continue building the model as usual. For physical machines, try to procure additional RAM, its price continues to reduce and if your dataset is going to remain this big or grow bigger over time, then it is a good investment. Add nodes to the cluster: For Hadoop and Spark based cluster computing deployments, training on a larger data-set is as easy as adding more machines to the cluster. Quite often classification tasks require training on data with highly imbalanced classes, the ratio of positive to negative classes could sometimes be as large as 1:1000 or more. A straightforward method to improve accuracy in these cases is to either over-sample the minority class or under-sample the majority class, or do both together. If you have a large dataset, under-sampling the majority class is a very good option which will improve your algorithm's accuracy as well as reduce training time. Build an ensemble: Split the dataset randomly and train several base learners on each part, then combine these to get the final prediction. This would most effectively make use of the large dataset and produce a more accurate model. But you need to spend more time to carefully build the ensemble and keep clear of the usual pitfalls of ensemble building. If you're using an ensemble, train many single-thread models in parallel. 
Almost all ML software provides features to train multiple models on different cores or on separate nodes altogether. Evaluate several different algorithms on the time taken to train them for your specific dataset vs. their accuracy. While there is no universal answer, I've found that when using noisy data, SVMs take much longer to train than a carefully built ensemble of regularized regression models but may be only slightly more accurate, and a well-built neural network may take a very long time to train compared to a CART tree but perform significantly more accurately than the tree. To reduce the time taken to build the model, try to automate as much of the process as you can. A few hours spent automating a complex, error-prone manual task may save your team a hundred hours later in the project. If available, use algorithm implementations which use parallel processing, sparse matrices and cache-aware computing; these reduce processing time significantly. For example, use xgboost instead of a single-core implementation of GBM. If nothing else works, train the model on a smaller dataset; as Emre has suggested in his answer, use learning curves to fix the smallest sample size required for training the model, since adding more training records than this does not improve model accuracy noticeably. Here is a good article which explores this situation - largetrain.pdf
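To make the under-sampling suggestion above concrete, here is a minimal sketch (the class sizes and data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical imbalanced data: 10,000 negatives, 1,000 positives.
X = rng.normal(size=(11_000, 5))
y = np.array([0] * 10_000 + [1] * 1_000)

pos_idx = np.where(y == 1)[0]
neg_idx = np.where(y == 0)[0]

# Randomly keep only as many majority examples as there are minority ones.
neg_keep = rng.choice(neg_idx, size=len(pos_idx), replace=False)
balanced = np.concatenate([pos_idx, neg_keep])
rng.shuffle(balanced)

X_bal, y_bal = X[balanced], y[balanced]   # now roughly 50/50
```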
5224
1
9227
null
41
41402
I would like to use a neural network for image classification. I'll start with pre-trained CaffeNet and train it for my application. # How should I prepare the input images? In this case, all the images are of the same object but with variations (think: quality control). They are at somewhat different scales/resolutions/distances/lighting conditions (and in many cases I don't know the scale). Also, in each image there is an area (known) around the object of interest that should be ignored by the network. I could (for example) crop the center of each image, which is guaranteed to contain a portion of the object of interest and none of the ignored area; but that seems like it would throw away information, and also the results wouldn't be really the same scale (maybe 1.5x variation). # Dataset augmentation I've heard of creating more training data by random crop/mirror/etc, is there a standard method for this? Any results on how much improvement it produces to classifier accuracy?
How to prepare/augment images for neural network?
CC BY-SA 3.0
null
2015-02-24T11:59:36.033
2022-06-16T19:08:44.980
2022-06-16T19:08:44.980
29169
26
[ "neural-network", "image-classification", "convolutional-neural-network", "preprocessing" ]
The idea with neural networks is that they need little pre-processing, since the heavy lifting is done by the algorithm, which is in charge of learning the features. The winners of the Data Science Bowl 2015 have a great write-up regarding their approach, so most of this answer's content was taken from: [Classifying plankton with deep neural networks](https://benanne.github.io/2015/03/17/plankton.html). I suggest you read it, especially the part about pre-processing and data augmentation.

- Resize Images

As for different sizes, resolutions or distances, you can do the following. You can simply rescale the largest side of each image to a fixed length. Another option is to use OpenCV or SciPy. This will resize the image to have 100 cols (width) and 50 rows (height):

```
import cv2
resized_image = cv2.resize(image, (100, 50))
```

Yet another option is to use the scipy module:

```
import scipy.misc
small = scipy.misc.imresize(image, 0.5)
```

- Data Augmentation

Data augmentation always improves performance, though the amount depends on the dataset. If you want to augment the data to artificially increase the size of the dataset, you can do the following where it applies (it wouldn't apply if, for example, these were images of houses or people, where rotating them 180 degrees would lose all information, although flipping them like a mirror would not):

- rotation: random with angle between 0° and 360° (uniform)
- translation: random with shift between -10 and 10 pixels (uniform)
- rescaling: random with scale factor between 1/1.6 and 1.6 (log-uniform)
- flipping: yes or no (bernoulli)
- shearing: random with angle between -20° and 20° (uniform)
- stretching: random with stretch factor between 1/1.3 and 1.3 (log-uniform)

You can see the results on the Data Science Bowl images.

Pre-processed images

[](https://i.stack.imgur.com/0S0Y0.png)

augmented versions of the same images

[](https://i.stack.imgur.com/KJXZK.png)

- Other techniques

These deal with other image properties, like lighting, and are more like a simple pre-processing step related to the main algorithm. Check the full list on: [UFLDL Tutorial](http://ufldl.stanford.edu/tutorial/unsupervised/PCAWhitening/)
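If you prefer to script the augmentations listed above yourself rather than rely on a framework, a rough sketch with NumPy/SciPy is below (the parameter ranges simply mirror the ones quoted above, and the helper function is hypothetical, not part of any library):

```python
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def augment(image, rng=None):
    """Return a randomly transformed copy of a 2-D (grayscale) image."""
    rng = rng or np.random.default_rng()
    out = rotate(image, angle=rng.uniform(0, 360), reshape=False, mode="nearest")
    out = shift(out, shift=rng.uniform(-10, 10, size=2), mode="nearest")
    if rng.random() < 0.5:                       # mirror flip with probability 0.5
        out = np.fliplr(out)
    scale = np.exp(rng.uniform(np.log(1 / 1.6), np.log(1.6)))  # log-uniform rescale
    out = zoom(out, scale, mode="nearest")
    # Note: zoom changes the shape, so crop/pad back to the original size
    # before feeding the result to the network.
    return out
```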
How to prepare colored images for neural networks?
Your R,G, and B pixel values can be broken into 3 separate channels (and in most cases this is done for you). These channels are treated no differently than feature maps in higher levels of the network. Convolution extends naturally to more than 2 dimensions. Imagine the greyscale, single-channel example. Say you have N feature maps to learn in the first layer. Then the output of this layer (and therefore the input to the second layer) will be comprised of N channels, each of which is the result of convolving a feature map with each window in your image. Having 3 channels in your first layer is no different. This tutorial does a nice job on convolution in general. [http://deeplearning.net/tutorial/lenet.html](http://deeplearning.net/tutorial/lenet.html)
5226
1
5229
null
85
142811
I am doing some problems on an application of decision tree/random forest. I am trying to fit a problem which has numbers as well as strings (such as country name) as features. Now the library, [scikit-learn](http://scikit-learn.org) takes only numbers as parameters, but I want to inject the strings as well as they carry a significant amount of knowledge. How do I handle such a scenario? I can convert a string to numbers by some mechanism such as hashing in Python. But I would like to know the best practice on how strings are handled in decision tree problems.
strings as features in decision tree/random forest
CC BY-SA 4.0
null
2015-02-25T01:07:14.717
2020-10-29T06:16:43.570
2019-10-02T14:32:28.693
26686
8409
[ "machine-learning", "python", "scikit-learn", "random-forest", "decision-trees" ]
In most well-established machine learning systems, categorical variables are handled naturally. For example, in R you would use factors, and in WEKA you would use nominal variables. This is not the case in scikit-learn. The decision trees implemented in scikit-learn use only numerical features, and these features are always interpreted as continuous numeric variables. Thus, simply replacing the strings with a hash code should be avoided, because, being considered a continuous numerical feature, any coding you use will induce an order which simply does not exist in your data. One example is coding ['red','green','blue'] as [1,2,3], which would produce weird things like 'red' being lower than 'blue', and averaging a 'red' and a 'blue' giving a 'green'. Another, more subtle example might happen when you code ['low', 'medium', 'high'] as [1,2,3]. In the latter case the ordering might make sense; however, subtle inconsistencies can arise when 'medium' is not in the middle of 'low' and 'high'. Finally, the answer to your question lies in coding the categorical feature into multiple binary features. For example, you might code ['red','green','blue'] with 3 columns, one for each category, having 1 when the category matches and 0 otherwise. This is called one-hot encoding, binary encoding, one-of-k encoding, and so on. You can check the documentation here for [encoding categorical features](http://scikit-learn.org/stable/modules/preprocessing.html) and [feature extraction - hashing and dicts](http://scikit-learn.org/stable/modules/feature_extraction.html#dict-feature-extraction). Obviously, one-hot encoding will expand your space requirements, and sometimes it hurts performance as well.
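As a small illustration (the column names and data below are made up), the one-hot encoding can be produced with pandas before fitting a scikit-learn tree or forest:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({"country": ["US", "FR", "FR", "DE"],
                   "age":     [34, 28, 41, 53],
                   "label":   [0, 1, 1, 0]})

# Expand the string column into one binary column per category.
X = pd.get_dummies(df[["country", "age"]], columns=["country"])
y = df["label"]

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
```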
Performance difference between decision trees and logistic regression when one of the features is a string
String data can be either categorical (where you have e.g. more than 10 examples of each string) or free text. If it's the former, a decision tree can deal with it no problem. You don't have to convert it into a numeric. For regression you cannot directly use categorical variables. If you want to use them in a regression, you will need to create dummy variables to encode the values. e.g. if your categories are "Red", "Yellow", "Blue" for the colour variable, you create variables "Red" (which will take a 1 or a 0) and "Yellow" (which takes a 1 or 0). If both are 0, the colour must be "Blue". There are functions in sklearn to do this automatically. If your string is just free text then you will need a better way of grabbing information out of it. You can use text mining such as tokenizing, TF-IDF etc. to convert it into numerical and categorical information that can be fed into a classifier.
5227
1
5235
null
1
128
I am struggling to choose the right data prediction method for the following problem. Essentially, I am trying to model a scheduler's operation, trying to predict its scheduling without knowing the scheduling mechanism and with incomplete data.

(1) There are M available resource blocks that can carry data and N data channels that must be scheduled at every time instant i.

(2) Inputs into the scheduler:

- Matrix $X_i$ of size M by N, consisting of N column vectors, one from each data source. Each of the M elements is an index from 1 to 32 carrying information about the quality of the data channel for a particular resource block: 1 - really bad quality, 32 - excellent quality.
- Data which contains the type of data to be carried (voice/internet etc.)

The scheduler prioritizes the number of resource blocks occupied by each channel at every time instant i. Given that

- I CAN see the resource allocation map at every time instant
- I DO have access to matrix $X_i$
- I DON'T know the algorithm of the scheduler and
- I don't have access to the type of data to be scheduled,

I want to have a best guess (prediction) of how the data will be scheduled based on this incomplete information, i.e., which resource block will be occupied by which data channel. What is the best choice of prediction/modelling algorithm? Any help appreciated!
problem of choosing right statistical method for scheduler prediction
CC BY-SA 3.0
null
2015-02-25T03:24:20.450
2015-02-26T05:08:31.993
null
null
8410
[ "predictive-modeling" ]
Do you know if the scheduler has a memory? Let us assume for a moment that the scheduler has no memory. This is a straightforward classification (supervised learning) problem: the inputs are X, the outputs are the schedules (N->M maps). Actually, if every N gets scheduled and the only question is which M it gets, the outputs are lists which channel (or none) is scheduled to each block, and there is only a certain possible number of those, so you can model them as discrete outputs (classes) with their own probabilities. Use whatever you like (AdaBoost, Naive Bayes, RBF SVM, Random Forest...) as a classifier. I think you will quickly learn about the general behavior of the scheduler. If the scheduler has a memory, then things get complicated. I think you might approach that as a hidden Markov model: but the number of individual states may be quite large, and so it may be essentially impossible to build a complete map of transition probabilities.
which forecasting models could be chosen?
> Should the data be considered as a multidimensional time series? This depends on whether the target variable (the one that you want to predict) depends on the others. If not, there is no point in doing it. A fast way of checking if the variables are linearly dependent and, therefore, multidimensional forecasting is meaningful is by checking the linear [correlation](https://en.wikipedia.org/wiki/Correlation_coefficient) of the variables. Then select only the variables that have high correlation (>0.5) with the target variable to include them in your prediction model. > I am still thinking which methods, techniques can be chosen. The model that I recommend for time series forecasting is a Recurrent Neural Network. This is because of its inherent ability to store previous timesteps in its memory and to incorporate them into future predictions. This is very important, because it is among the few approaches that exploit the temporal dependencies between samples. > I'm currently looking into the direction of some kind of neural network and implementation in python (but I still don’t know which package). The most convenient way of implementing a recurrent neural network in Python is by utilizing the [Keras](https://keras.io/) framework. Please go carefully through this [tutorial](https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/), as it will definitely be a very good first step to attack your problem.
5231
1
5239
null
1
566
I produced association rules by using the arules package (apriori). I'm left with +/- 250 rules. I would like to test/validate the rules that I have, like answering the question: How do I know that these association rules are true? How can I validate them? What are common practice to test it? I thought about cross validation (with training data and test data) as I read that it's not impossible to use it on unsupervised learning methods..but I'm not sure if it makes sense since I don't use labeled data. If someone has a clue, even if it's not specifically about association rules (but testing other unsupervised learning methods), that would also be helpful to me. I uploaded an example of the data that I use here in case it's relevant: [https://www.mediafire.com/?4b1zqpkbjf15iuy](https://www.mediafire.com/?4b1zqpkbjf15iuy)
How to test/validate unlabeled data in association rules in R?
CC BY-SA 3.0
null
2015-02-25T14:10:04.777
2015-02-27T00:45:39.740
null
null
8422
[ "machine-learning", "r", "cross-validation" ]
You may want to consider using your own `APparameter` object to put "significance" constraints on the rules learned by Apriori. See page 13 of the [arules documentation](http://cran.r-project.org/web/packages/arules/arules.pdf). This could reduce the number of uninteresting rules returned in your run. In lieu of gold standard data for your domain, consider bootstrap resampling as a form of validation, as described [in this article](http://eprints.pascal-network.org/archive/00003198/01/lal.pdf).
Association rules in R / Subset rules based on feature
- I do not recall lhs or rhs having intervals. Can you share an example? - You can use the "appearance" parameter when calling apriori. You can find an example in the documentation of APappearance-class of arules or at http://r-statistics.co/Association-Mining-With-R.html under section "How to Find Rules Related To Given Item/s"
5243
1
5286
null
0
283
I have been working with statistics for the last few years and have gone pretty deep into programming with R. However, I have always felt that I wasn't completely grasping what I was doing, even though I understood all the steps and procedures conceptually. I wanted to get a bit deeper into the math behind it all. I've been looking online for texts and tips, but all the texts start at a very high level. Any suggestions on where to start? To be more precise, I'm not looking for an exhaustive list of statistical models and how they work; I kind of get those. I was looking for something like "Basics of statistical modelling".
Interested in Mathematical Statistics... where to start from?
CC BY-SA 3.0
null
2015-02-27T10:22:59.577
2015-03-06T03:46:41.693
null
null
null
[ "statistics", "predictive-modeling" ]
When looking for texts to learn advanced topics, I start with a web search for relevant grad courses and textbooks, or background tech/math books like those from Dover. To wit, Theoretical Statistics by Keener looks relevant: [http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-93838-7](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-93838-7) And this: "Looking for a good Mathematical Statistics self-study book (I'm a physics student and my class & current book are useless to me)" [http://www.reddit.com/r/statistics/comments/1n6o19/looking_for_a_good_mathematical_statistics/](http://www.reddit.com/r/statistics/comments/1n6o19/looking_for_a_good_mathematical_statistics/)
(Basic) statistics
This is not necessarily unexpected (or broken). Imagine that users tend to use none or all of the applications. For a specific example, suppose 90 users use no apps at all, and 10 use all (say) 11 of them. Then the average apps used by a user is $(90\cdot0+10\cdot11)/100=1.1$, but for each app, the average app-usage of a user who uses that app is $(10\cdot11)/10=11$. (In your case, for a sanity check maybe compute the number of users who use no apps.)
5244
1
5281
null
3
1203
Many of us are very familiar with using R in reproducible, but very much targeted, ad-hoc analysis. Given that R is currently the best collection of cutting-edge scientific methods from world-class experts in each particular field, and given that plenty of libraries exist for data I/O in R, it seems very natural to extend its applications into production environments for live decision making. Therefore my questions are:

- Did any of you go into production with pure R? (I know of Shiny, yhat, etc., but it would be very interesting to hear of pure R.)
- Is there a good book/guide/article on the topic of building R into serious live decision-making pipelines (such as, e.g., credit scoring)?
- I would also like to hear whether you think it's not a good idea at all.
R in production
CC BY-SA 3.0
null
2015-02-27T10:42:35.407
2022-03-16T17:41:50.737
null
null
8310
[ "r", "predictive-modeling", "scoring" ]
Speed of code execution is rarely an issue. The important speed in business is almost always the speed of designing, deploying, and maintaining the application. An experienced programmer can optimize where necessary to get code execution fast enough. In these cases, R can make a lot of sense in production. In cases where speed of execution IS an issue, you are already going to find an optimized C++ or some such real-time decision engine. So your choices are integrate an R process, or add the bits you need to the engine. The latter is probably the only option, not because of the speed of R, but because you don't have the time to incorporate any external process. If the company has nothing to start with, I can't imagine everyone saying "let's build our time critical real-time engine in R because of the great statistical libraries". I'll give a few examples from my corporate experiences, where I use R in production: - Delivering Shiny applications dealing with data that is not/ not yet institutionalized. I will generally load already-processed data frames and use Shiny to display different graphs and charts. Computation is minimal. - Decision making analysis that requires heavy use of advanced libraries (mcclust, machine learning) but done on a daily or longer time-scale. In this case there is no reason to use any other language. I've already done the prototyping in R, so my fastest and best option is to keep things there. I did not use R for production when integrating with a real-time C++ decision engine. Issues: - An additional layer of complication to spawn R processes and integrate the results - A suitable machine-learning library (Waffles) was available in C++ The caveat in the latter case: I still use R to generate the training files.
Using R and Python together
Several clarifications: - you can program with object-oriented (OOP) concepts in R, even though OOP in R has slightly different syntax from other languages. Methods do not bind to objects. In R, different method versions will be involved based on the input argument classes (and types). (Ref: Advanced R) - you can also replace nans with mean / any stat. / value in R using a mask if you store the data in a dataframe (See SO post) - There is no problem using them interchangeably. I use R from Python using the package RPy2. I assume it is equally easy to do it the other way round. At the end of the day, any language is only as good as how much the users know about it. Use one that you are more familiar with and try to learn it properly using the vast online resources online.
5249
1
5259
null
1
633
I start with a data.frame (or a data_frame) containing my dependent Y variable for analysis, my independent X variables, and some "Z" variables -- extra columns that I don't need for my modeling exercise. What I would like to do is: - Create an analysis data set without the Z variables; - Break this data set into random training and test sets; - Find my best model; - Predict on both the training and test sets using this model; - Recombine the training and test sets by rows; and finally - Recombine these data with the Z variables, by column. It's the last step, of course, that presents the problem -- how do I make sure that the rows in the recombined training and test sets match the rows in the original data set? We might try to use the row.names variable from the original set, but I agree with Hadley that this is an error-prone kludge (my words, not his) -- why have a special column that's treated differently from all other data columns? One alternative is to create an ID column that uniquely identifies each row, and then keep this column around when dividing into the train and test sets (but excluding it from all modeling formulas, of course). This seems clumsy as well, and would make all my formulas harder to read. This must be a solved problem -- could people tell me how they deal with this? Especially using the plyr/dplyr/tidyr package framework?
Combining data sets without using row.name
CC BY-SA 3.0
null
2015-02-28T00:44:04.353
2015-03-01T07:16:51.440
null
null
3510
[ "machine-learning", "r", "predictive-modeling" ]
You need neither to use the row names or to create an additonal ID column. Here is an approach based on the indices of the training set. An example data set: ``` set.seed(1) dat <- data.frame(Y = rnorm(10), X1 = rnorm(10), X2 = rnorm(10), Z1 = rnorm(10), Z2 = rnorm(10)) ``` Now, your steps: - Create an analysis data set without the Z variables dat2 <- dat[grep("Z", names(dat), invert = TRUE)] dat2 # Y X1 X2 # 1 -0.6264538 1.51178117 0.91897737 # 2 0.1836433 0.38984324 0.78213630 # 3 -0.8356286 -0.62124058 0.07456498 # 4 1.5952808 -2.21469989 -1.98935170 # 5 0.3295078 1.12493092 0.61982575 # 6 -0.8204684 -0.04493361 -0.05612874 # 7 0.4874291 -0.01619026 -0.15579551 # 8 0.7383247 0.94383621 -1.47075238 # 9 0.5757814 0.82122120 -0.47815006 # 10 -0.3053884 0.59390132 0.41794156 - Break this data set into random training and test sets train_idx <- sample(nrow(dat2), 0.8 * nrow(dat2)) train_idx # [1] 7 4 3 10 9 2 1 5 train <- dat2[train_idx, ] train # Y X1 X2 # 7 0.4874291 -0.01619026 -0.15579551 # 4 1.5952808 -2.21469989 -1.98935170 # 3 -0.8356286 -0.62124058 0.07456498 # 10 -0.3053884 0.59390132 0.41794156 # 9 0.5757814 0.82122120 -0.47815006 # 2 0.1836433 0.38984324 0.78213630 # 1 -0.6264538 1.51178117 0.91897737 # 5 0.3295078 1.12493092 0.61982575 test_idx <- setdiff(seq(nrow(dat2)), train_idx) test_idx # [1] 6 8 test <- dat2[test_idx, ] test # Y X1 X2 # 6 -0.8204684 -0.04493361 -0.05612874 # 8 0.7383247 0.94383621 -1.47075238 - Find my best model ... - Predict on both the training and test sets using this model ... - Recombine the training and test sets by rows idx <- order(c(train_idx, test_idx)) dat3 <- rbind(train, test)[idx, ] identical(dat3, dat2) # [1] TRUE - Recombine these data with the Z variables, by column dat4 <- cbind(dat3, dat[grep("Z", names(dat))]) identical(dat, dat4) # [1] TRUE In summary, we can use the indices of the training and test data to combine the data in the rows in the original order.
How to merge all the data to have a final dataset
A slightly hacky way to get there, maybe, but you can do this to get what you want from the second table:

```
df2['count'] = 1
pivot = df2.pivot_table(index='userid', columns='productid', values='count').reset_index()
pivot = pivot.fillna(0)
```

You would then want to merge this with the first dataset like this:

```
finaldf = pd.merge(df1, pivot, left_on='userid', right_on='userid')
```

Another great thing to use for generating the dummies for categorical variables is

```
pd.get_dummies()
```

The approach seems OK to me, and making some more features would also not be a bad idea.
5264
1
5269
null
1
815
Does reinforcement learning always need a grid world problem to be applied to? Can anyone give me any other example of how reinforcement learning can be applied to something which does not have a grid world scenario?
Does reinforcement learning only work on grid world?
CC BY-SA 4.0
null
2015-03-02T05:10:01.897
2019-12-03T06:39:57.647
2019-12-03T06:39:57.647
52298
8013
[ "machine-learning", "reinforcement-learning" ]
The short answer is no! Reinforcement Learning is not limited to discrete spaces, but most of the introductory literature does deal with discrete spaces. As you might know by now, there are three important components in any Reinforcement Learning problem: rewards, states and actions. The first is a scalar quantity, and theoretically the latter two can be either discrete or continuous. The convergence proofs and analyses of the various algorithms are easier to understand for the discrete case, and the corresponding algorithms are also easier to code. That is one of the reasons most introductory material focuses on them. Having said that, it is interesting to note that the early research on Reinforcement Learning actually focused on continuous state representations. It was only in the 90s that the literature started presenting all the standard algorithms for discrete spaces, as we had a lot of proofs for them. Finally, if you noticed carefully, I said continuous states only. Mapping continuous states to continuous actions is hard. Nevertheless, we do have some solutions for now, but it is an active area of research in RL. This [paper by Sutton](http://webdocs.cs.ualberta.ca/~sutton/papers/SSR-98.pdf) from '98 should be a good start for your exploration!
When to stop calculating values of each cell in the grid in Reinforcement Learning(dynamic programming) applied on gridworld
1 - You should set a threshold (a hyper-parameter) that will allow you to quit the loop. Let $V$ be the values for all states $s$ and $V'$ the new values after a value-iteration sweep. If $\sum_s|V(s) - V'(s)| \le threshold$, quit.

2 - Yes, V is a function over every cell in the grid, because you need to update every cell.

Hope it helps.
5268
1
5674
null
7
804
The project I am working on allows users to create stock screeners based on both technical and fundamental criteria. Stock screeners are then "backtested" by simulating the results of applying them over the last 10 years using point-in-time data. I get back the list of trades and an overall graph of performance. (If that is unclear, I have an overview [here](https://www.equitieslab.com/features/stock-screener/) and [there](https://www.equitieslab.com/wiki/QuickStart/StockScreener) with more details.) Now, a common problem is that users create overfitted stock screeners. I would love to give them a warning when the screen is likely to be overfitted.

Fields I have to work with:

- All trades made by the stock screener: Stock, Start Date, Start Price, End Date, End Price
- S&P 500 performance for the same time frame
- Market Cap, Sector, and Industry of each stock
How to detect overfitting of a stock screener
CC BY-SA 3.0
null
2015-03-02T23:02:45.583
2015-05-03T00:30:18.570
null
null
8344
[ "machine-learning", "data-mining", "classification", "bigdata", "statistics" ]
Learning curves, or bias-variance decomposition, are the gold standard for detecting high variance, a.k.a. overfitting. Separate your data (in your case the "back data") into 60% training data and 40% testing data. Fit the model on the training data as you usually would and see how well it works on the test data. Finally, when you think you have the model that you want, split each of the training and test sets into 10-100 subsets and retrain and test with incrementally larger sets. Apply your favorite performance metric and plot performance vs. the number of cases used for testing and training. The curves will never come together if the model is overfit (high variance). The curves will come together, but at lower performance than desired, if the model is underfit (high bias). The lines will come together at acceptable performance for a well-performing model that is not overfit. Here is an example of overfitting and underfitting with root mean square error as the performance metric: ![Bias-Variance decomposition via learning curves](https://i.stack.imgur.com/m3a90.png) [Here is a pretty good link](https://followthedata.wordpress.com/2012/06/02/practical-advice-for-machine-learning-bias-variance/) on the process and [here is another one](http://www.astroml.org/sklearn_tutorial/practical.html). Hope this helps!
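If you are doing this in Python, scikit-learn's learning_curve utility automates the incremental retraining described above; a minimal sketch (with a placeholder classifier and made-up data) might be:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, random_state=0)

sizes, train_scores, test_scores = learning_curve(
    RandomForestClassifier(random_state=0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 10), cv=5)

# Overfitting shows up as a persistent gap between the two curves.
print(sizes)
print(train_scores.mean(axis=1))
print(test_scores.mean(axis=1))
```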
How many ways are there to check model overfitting?
- The direct way to check your model for overfitting is to compare its performance on a training set with its performance on a testing set; overfitting is when your train score is significantly above your CV score. According to your comments, your r2 score is 0.97 on the training set and 0.86 on your testing set (or similarly, a 0.88 CV score, mean across 10 folds). That's somewhat overfitting, but not extremely so; consider whether 0.88 is "good enough" for your requirements.
- The r2 score is 1 - MSE of errors / variance of true values. In the example you showed, all three true values were the same, i.e. their variance is zero. The r2 score should have been negative infinity, but apparently sklearn corrects this to 0; you can verify that changing y_true to [0.9, 0.9, 0.90001] changes your r2 score to a very large negative number (around -2*10**9). This is why checking r2 against a small sample is not a good idea; the mean of the small sample contains too much important information.
- You added that you want to know which parameters to tune in order to prevent over-fitting. In your edit to your question, you said you're using grid search over n_estimators (3 options), min_samples_split (2 options) and min_samples_leaf (2 options). There are other parameters you can try, and in my experience max_depth is important to tune. This question on Stack Overflow and this question on Cross Validated deal with overfitting, and there are good options there. I'd add that if you're trying many options, then maybe you'd be better off using Bayesian Optimization (there's a package that works well with sklearn: https://scikit-optimize.github.io/stable/auto_examples/sklearn-gridsearchcv-replacement.html).
5291
1
5763
null
1
185
I am using the twitteR package to retrieve timeline data. My request looks as follows: `tweets <- try(userTimeline(user, n=50), silent=TRUE)` and this worked quite well for a time, but now I receive this error message: ``` Error in function (type, msg, asError = TRUE) : easy handle already used in multi handle ``` In a related question on Stack Overflow, one answer is to use RCurl directly, but this does not seem to work with the twitteR package. Anybody got an idea on this?
Error using twitteR package's userTimeline
CC BY-SA 3.0
null
2015-03-06T10:11:02.133
2015-05-12T09:30:09.273
null
null
8549
[ "data-mining", "r" ]
It seems to be working well on my configuration: Ubuntu Vivid and R: ``` > sessionInfo() R version 3.1.2 (2014-10-31) Platform: x86_64-pc-linux-gnu (64-bit) locale: [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 [7] LC_PAPER=en_US.UTF-8 LC_NAME=C [9] LC_ADDRESS=C LC_TELEPHONE=C [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] twitteR_1.1.8 loaded via a namespace (and not attached): [1] bit_1.1-12 bit64_0.9-4 bitops_1.0-6 DBI_0.3.1 httr_0.6.1 [6] magrittr_1.5 RCurl_1.95-4.6 rjson_0.2.15 stringi_0.4-1 stringr_1.0.0 [11] tools_3.1.2 ``` Maybe you should update packages versions?
Dataset of Tweets by Same User
You'll find great datasets for this task called author verification from the PAN workshop series. Afaik the last one specifically on this task was in 2015: [https://pan.webis.de/clef15/pan15-web/author-identification.html](https://pan.webis.de/clef15/pan15-web/author-identification.html). I recommend exploring the website, there are many other datasets for related tasks.