| Column | Type | Range / distinct values |
| --- | --- | --- |
| Id | stringlengths | 2–6 |
| PostTypeId | stringclasses | 1 value |
| AcceptedAnswerId | stringlengths | 2–6 |
| ParentId | stringclasses | 0 values |
| Score | stringlengths | 1–3 |
| ViewCount | stringlengths | 1–6 |
| Body | stringlengths | 34–27.1k |
| Title | stringlengths | 15–150 |
| ContentLicense | stringclasses | 2 values |
| FavoriteCount | stringclasses | 1 value |
| CreationDate | stringlengths | 23–23 |
| LastActivityDate | stringlengths | 23–23 |
| LastEditDate | stringlengths | 23–23 |
| LastEditorUserId | stringlengths | 2–6 |
| OwnerUserId | stringlengths | 2–6 |
| Tags | sequencelengths | 1–5 |
| Answer | stringlengths | 32–27.2k |
| SimilarQuestion | stringlengths | 15–150 |
| SimilarQuestionAnswer | stringlengths | 44–22.3k |
6384
1
6390
null
4
6590
I'm trying to understand PCA, but I don't have a machine learning background. I come from software engineering, but the literature I've tried to read so far is hard for me to digest. As far as I understand PCA, it will take a set of datapoints from an N dimensional space and translate them to an M dimensional space, where N > M. I don't yet understand what the actual output of PCA is. For example, take this 5 dimensional input data with values in the range [0,10): ``` // dimensions: // a b c d e [[ 4, 1, 2, 8, 8], // component 1 [ 3, 0, 2, 9, 8], [ 4, 0, 0, 9, 1], ... [ 7, 9, 1, 2, 3], // component 2 [ 9, 9, 0, 2, 7], [ 7, 8, 1, 0, 0]] ``` My assumption is that PCA could be used to reduce the data from 5 dimensions to, say, 1 dimension. ### Data details: There are two "components" in the data. - One component has mid a levels, low b and c levels, high d, and nondeterministic e levels. - The other component has high a and b levels, low c and d levels, and nondeterministic e levels. This means that the two components are most differentiated by `b` and `d`, somewhat differentiated by `a`, and negligibly differentiated by `c` and `e`. ### Outputs? I'm making this up, but say the (non-normalized) linear combination with the highest differentiating power is something like ``` 5*a + 10*b + 0*c + 10*d + 0*e ``` The above input data translated along that single axis is: ``` [[110], [105], [110], ...etc ``` Is that linear combination (or a vector describing it) the output of PCA? Or is the output the actual reduced dataset? Or something else entirely?
What is the actual output of Principal Component Analysis?
CC BY-SA 3.0
null
2015-07-07T23:15:34.657
2015-07-08T16:22:11.793
2020-06-16T11:08:43.077
-1
10531
[ "machine-learning", "classification" ]
I agree with dpmcmlxxvi's answer that the common "output" of PCA is computing and finding the eigenvectors for the principal components and the eigenvalues for the variances, but I can't add comments yet and would still like to contribute. Once you hit this step of calculating the eigenvectors and eigenvalues of the principal components, you can do many types of analyses depending on your needs. I believe the "output" you are specifically asking about in your question is the resultant data set of applying a transformation or projection of the original data set into the desired linear subspace (of n dimensions). This is taking the output of PCA and applying it to your original data set. This [PCA step by step example](http://sebastianraschka.com/Articles/2014_pca_step_by_step.html) may help. The ultimate output of this 6-step analysis was the projection of a 3-dimensional data set into 2 dimensions. Here are the high-level steps: > 1) Take the whole dataset, ignoring the class labels; 2) compute the d-dimensional mean vector; 3) compute the scatter matrix (alternatively, the covariance matrix); 4) compute the eigenvectors and corresponding eigenvalues; 5) rank and choose k eigenvectors; 6) transform the samples onto the new subspace. Ultimately, step 4 is the "output" since that is where the common requirements for performing PCA are fulfilled. We can make different decisions at steps 5 and 6 and produce alternative output there. A few more possibilities: - You could decide to project the observations with outliers removed - Another possible outcome here would be to calculate the proportion of variance explained by one or any combination of principal components. For example, the proportion of variance explained by the first two principal components of K components is (λ1+λ2)/(λ1+λ2+...+λK). - After plotting the projected observations onto the first two principal components (as in the given example), you can superimpose a plot of the loadings of each of the original dimensions into the subspace (scaled by the standard deviation of the principal components). This way, we can see the contribution of the original dimensions (in your case a–e) to principal components 1 and 2. The biplot is another common product of PCA.
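To make those steps concrete, here is a minimal NumPy sketch of the whole pipeline (centre, covariance, eigendecomposition, rank, project) on the question's toy 5-dimensional rows; scikit-learn's `PCA` class wraps the same computation. Only the data values come from the question; everything else is illustrative.

```python
import numpy as np

# Toy data: 6 samples, 5 features (the question's a-e dimensions)
X = np.array([
    [4, 1, 2, 8, 8],
    [3, 0, 2, 9, 8],
    [4, 0, 0, 9, 1],
    [7, 9, 1, 2, 3],
    [9, 9, 0, 2, 7],
    [7, 8, 1, 0, 0],
], dtype=float)

# Steps 1-2: centre the data around the mean vector
X_centered = X - X.mean(axis=0)

# Step 3: covariance matrix (features are in columns -> rowvar=False)
cov = np.cov(X_centered, rowvar=False)

# Step 4: eigenvectors (principal axes) and eigenvalues (variances)
eigvals, eigvecs = np.linalg.eigh(cov)

# Step 5: rank by eigenvalue and keep the top k = 1 component
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order[:1]]      # the "linear combination" axis

# Step 6: project the samples onto the new 1-D subspace
X_projected = X_centered @ components

print("principal axis (weights for a..e):", components.ravel())
print("projected data:", X_projected.ravel())
```

The printed axis is the vector describing the linear combination; the projected data is the reduced dataset, which is exactly the distinction the question asks about.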
how to derive the steps of principal component analysis?
\begin{align} \frac1N\sum_{i=1}^N Y_{1i}^2 &= \frac1N\sum_{i=1}^N (v_1'z_i)(z_i'v_1)\\ &=v_1'\left(\frac1N \sum_{i=1}^N z_iz_i' \right) v_1 \\ &= v_1'Rv_1 \end{align} where $R=\frac1N \sum_{i=1}^N z_iz_i'.$
6391
1
9993
null
9
8793
I know there are similar questions on stats.SE, but I didn't find one that fulfills my request; please, before marking the question as a duplicate, ping me in the comments. I run a neural network based on `neuralnet` to forecast the SP500 index time series and I want to understand how I can interpret the plot posted below: ![enter image description here](https://i.stack.imgur.com/EFQwW.png) In particular, I'm interested in understanding how to interpret the hidden layer weights and the input weights; could someone explain to me how to interpret those numbers, please? Any hint will be appreciated.
R - Interpreting neural networks plot
CC BY-SA 3.0
null
2015-07-08T12:05:49.663
2018-10-31T19:29:49.427
2015-07-08T18:28:23.360
8953
9225
[ "machine-learning", "r", "neural-network", "predictive-modeling", "forecast" ]
As David states in the comments if you want to interpret a model you likely want to explore something besides neural nets. That said it you want to intuitively understand the network plot it is best to think of it with respect to images (something neural networks are very good at). - The left-most nodes (i.e. input nodes) are your raw data variables. - The arrows in black (and associated numbers) are the weights which you can think of as how much that variable contributes to the next node. The blue lines are the bias weights. You can find the purpose of these weights in the excellent answer here. - The middle nodes (i.e. anything between the input and output nodes) are your hidden nodes. This is where the image analogy helps. Each of these nodes constitute a component that the network is learning to recognize. For example a nose, mouth, or eye. This is not easily determined and is far more abstract when you are dealing with non-image data. - The far-right (output node(s)) node is the final output of your neural network. Note that this all is omitting the activation function that would be applied at each layer of the network as well.
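To see what one of the plotted numbers actually does, here is a tiny NumPy sketch of how a single hidden node's value is computed from the input values, the black-arrow weights, and the blue-line bias weight. The numbers are hypothetical (read off an imaginary plot), and `neuralnet` applies a logistic activation by default:

```python
import numpy as np

# Hypothetical values read off such a plot: 3 inputs feeding 1 hidden node.
inputs = np.array([0.5, -1.2, 0.3])    # current values of the input variables
weights = np.array([1.8, -0.4, 2.1])   # black-arrow weights into the hidden node
bias = 0.7                             # blue-line bias weight

# The hidden node's value is the weighted sum passed through the activation.
z = bias + np.dot(weights, inputs)
hidden_activation = 1.0 / (1.0 + np.exp(-z))
print(hidden_activation)
```

So each plotted weight is simply a coefficient in this weighted sum; larger magnitude means that input (or bias) moves the node's value more strongly.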
What is a good interpretation of this 'learning curve' plot?
- The X axis is the number of instances in the training set, so this plot is a data ablation study: it shows what happens for different amounts of training data. - The Y axis is an error score, so a lower value means better performance. - In the leftmost part of the graph, the fact that the error is zero on the training set until around 6000 instances points to overfitting, and the very large difference in error between training and validation confirms this. - In the right half of the graph the difference in performance starts to decrease and the performance on the validation set seems to become stable. The fact that the training error becomes higher than zero is good: it means that the model starts generalizing instead of just recording every detail of the data. Yet the difference is still substantial, so there is still a high amount of overfitting.
6395
1
13456
null
10
454
It seems standard in many neural network packages to pair up the objective function to be minimised with the activation function in the output layer. For instance, for a linear output layer used for regression it is standard (and often only choice) to have a squared error objective function. Another usual pairing is logistic output and log loss (or cross-entropy). And yet another is softmax and multi log loss. Using notation, $z$ for pre-activation value (sum of weights times activations from previous layer), $a$ for activation, $y$ for ground truth used for training, $i$ for index of output neuron. - Linear activation $a_i=z_i$ goes with squared error $\frac{1}{2} \sum\limits_{\forall i} (y_i-a_i)^2$ - Sigmoid activation $a_i = \frac{1}{1+e^{-z_i}}$ goes with logloss/cross-entropy objective $-\sum\limits_{\forall i} (y_i*log(a_i) + (1-y_i)*log(1-a_i))$ - Softmax activation $a_i = \frac{e^{z_i}}{\sum_{\forall j} e^{z_j}}$ goes with multiclass logloss objective $-\sum\limits_{\forall i} (y_i*log(a_i))$ Those are the ones I know, and I expect there are many that I still haven't heard of. It seems that log loss would only work and be numerically stable when the output and targets are in range [0,1]. So it may not make sense to try linear output layer with a logloss objective function. Unless there is a more general logloss function that can cope with values of $y$ that are outside of the range? However, it doesn't seem quite so bad to try sigmoid output with a squared error objective. It should be stable and converge at least. I understand that some of the design behind these pairings is that it makes the formula for $\frac{\delta E}{\delta z}$ - where $E$ is the value of the objective function - easy for back propagation. But it should still be possible to find that derivative using other pairings. Also, there are many other activation functions that are not commonly seen in output layers, but feasibly could be, such as `tanh`, and where it is not clear what objective function could be applied. Are there any situations when designing the architecture of a neural network, that you would or should use "non-standard" pairings of output activation and objective functions?
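For reference, here is the short derivation (using the notation above) of why the standard sigmoid/cross-entropy pairing gives such a convenient gradient, which is exactly the design motivation mentioned. With $a_i = \frac{1}{1+e^{-z_i}}$ so that $\frac{\partial a_i}{\partial z_i} = a_i(1-a_i)$:

$$\frac{\partial E}{\partial z_i} = \frac{\partial E}{\partial a_i}\frac{\partial a_i}{\partial z_i} = \left(-\frac{y_i}{a_i} + \frac{1-y_i}{1-a_i}\right) a_i(1-a_i) = a_i - y_i$$

which is the same simple form that the linear/squared-error pairing produces.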
How flexible is the link between objective function and output layer activation function?
CC BY-SA 3.0
null
2015-07-08T20:04:16.703
2016-08-16T13:05:06.970
2015-07-10T18:03:47.263
836
836
[ "neural-network", "gradient-descent" ]
It is not so much the activation function you use that determines which loss function you should use, but rather what interpretation you have of the output. If the output is supposed to be a probability, then log loss is the way to go. If the output is a generic value, then mean squared error is the default way to go. So, for example, if your output is a grey-scale pixel labelled by a number from 0 to 1, it might make sense to use a sigmoid activation function with a mean squared error objective function.
Lack of activation function in output layer at regression?
Activation "linear" is identical to "no activation function". The term "linear output layer" also means precisely "the last layer has no activation function". Whether you use one or the other term might be down to how your NN library implements it. You may also see it described either way around in documents, but it is exactly the same thing mathematically: $$a^{out}_j = b^{out}_j + \sum_{i=1}^{N^{hidden}} W_{ij}a^{hidden}_i$$ Where $a$ values are activation, $b$ are biases, $W$ is weight matrix. For a regression problem with a mean squared error objective, this is the most common approach. There is nothing stopping you using other activation functions. They might help you if they match the target variable distribution. About the only rule is that your network output should be able to cover possible values of the target variable. So if the target variable is always between -1.0 and 1.0, with higher density around 0.0, perhaps tanh could also work for you.
6410
1
6413
null
2
130
In Matlab, if you build a simple network and train it: ``` OP = feedforwardnet(5, 'traingdm'); inputsVals = [0,1,2,3,4]; targetVals = [3,2,5,1,9]; OP = train(OP,inputsVals,targetVals); ``` and then you train it again with another `OP = train(OP,inputsVals,targetVals);`, what happens to the network? Does it train again based on what it learned the first time you did `OP = train(OP,inputsVals,targetVals);`, or does it train as if it were the first time training the network?
Does the network learn based on previous training or does it restart? Matlab, neuralnetworks
CC BY-SA 3.0
null
2015-07-09T16:12:00.940
2016-04-27T10:25:52.970
null
null
10584
[ "neural-network", "matlab" ]
It trains again based on what it learned the first time you did `OP = train(OP,inputsVals,targetVals)`. More generally, `train` uses your network's weights, i.e. it does not initialize the weights. The weight initialization happens in `feedforwardnet`. Example: ``` % To generate reproducible results % http://stackoverflow.com/a/7797635/395857 rng(1234,'twister') % Prepare input and target vectors [x,t] = simplefit_dataset; % Create ANN net = feedforwardnet(10); % Loop to see where train() initializes the weights for i = 1:10 % Learn net.trainParam.epochs = 1; net = train(net,x,t); % Score y = net(x); perf = perform(net,y,t) end ``` yields ``` perf = 0.4825 perf = 0.0093 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0034 perf = 0.0028 ```
How to retrain the neural network when new data comes in?
It's extremely simple, and there are a lot of ways of doing it. I am assuming you are familiar with stochastic gradient descent (SGD); I am going to describe one naive way of doing it. - Reload the model into RAM. - Write an SGD function like SGD(X,y). It will take the new sample and label, run one step of SGD on it, and save the updated model. - As you can see this will be highly inefficient; a better way is to save up a number of samples and then run a step of stochastic batch gradient descent on them, so that you don't have to reload the updated model every time you receive a new sample. I hope this gives you a rough idea of how the implementation can be done. You can easily find much more efficient and scalable ways of doing this. If you are not familiar with algorithms like SGD, I would recommend getting familiar with them, because online learning is just a one-sample mini-batch gradient descent algorithm.
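A minimal scikit-learn sketch of that idea (the data and batch sizes here are made up): `SGDClassifier.partial_fit` is the incremental update step described above, running further SGD steps on each new batch without re-initialising the weights.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

classes = np.array([0, 1])
model = SGDClassifier()

# First batch: the full set of classes must be declared on the first call.
X_first = np.random.rand(1000, 10)
y_first = np.random.randint(0, 2, size=1000)
model.partial_fit(X_first, y_first, classes=classes)

# Later, as new samples arrive, run further SGD steps on each small batch
# instead of retraining from scratch.
X_new = np.random.rand(32, 10)
y_new = np.random.randint(0, 2, size=32)
model.partial_fit(X_new, y_new)
```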
6414
1
6421
null
2
433
I believe scikit-learn is written in Python; however, it is not scalable. Spark MLlib (or ML) is scalable but written in Scala. I am looking for an ongoing effort where a machine learning library is being built in Python (available on GitHub or similar) so that I can contribute to it. Is anyone aware of such an effort?
Scalable open source machine learning library written in python
CC BY-SA 3.0
null
2015-07-09T20:38:22.933
2015-07-09T23:46:29.367
null
null
10327
[ "machine-learning", "scalability", "scikit-learn", "apache-spark" ]
Is there a specific reason besides the fact that you would like to contribute? I am asking because there is always [pyspark](https://github.com/apache/spark/tree/master/python/pyspark/mllib) that you can use, the Spark Python API that exposes the Spark programming model to Python. For deep learning specifically, there are a lot of frameworks built on top of [Theano](https://github.com/Theano/Theano) (a Python library for mathematical expressions involving multi-dimensional arrays), such as Lasagne, so they are able to use the GPU for intensive training. Getting an EC2 instance with a GPU on AWS is always an option.
Machine Learning library in Python, list or numpy or pandas
Pandas normally does a decent job of allowing dataframes to behave as numpy arrays. My recommendation is to use numpy types; the reason is that, for consistency with pretty much what the industry is doing, you are much safer with numpy. I love pandas, and I love dataframes, but they provide extra functionality that the model does NOT need, the same way that in general programming you would not use a String to represent a boolean (even though you could do it with a String), simply because you should use whatever data type provides the functionality you need... and nothing else. So, numpy is the way to go. As for Python lists, you do not get the mathematical operations that you get with numpy, so do not consider them.
6417
1
6432
null
4
482
I am reading Applied Predictive Modeling by Max Kuhn. In chapter 16 he discusses using alternate cutoffs as a remedy for class imbalance. Suppose our model predicts the most likely outcome of 2 events, e1 and e2. We have e1 occurring with a predicted probability of 0.52 and e2 with a predicted probability of 0.48. Using the standard 0.5 cutoff for e1 we would predict e1, but using an alternative cutoff of 0.56 for e1 we would predict e2, because we only predict e1 when p(e1) > 0.56. My question is, does it make sense to also readjust the probabilities when using alternate cutoffs? For example, in my previous example using the 0.56 cutoff for e1: p(e1) = 0.52; p(e2) = 0.48. Then we apply an adjustment of 0.56 - 0.5 = 0.06, so p_adj(e1) = 0.52 - 0.06 = 0.46; p_adj(e2) = 0.48 + 0.06 = 0.54. Basically we shift the probabilities so that they predict e1 when p_adj(e1) > 0.5. I apologize if there is something obviously flawed with my logic, but it feels intuitively wrong to me to predict e2 when p(e1) > p(e2). Which probabilities would be more in line with the real-world probabilities?
Adjusting Probabilities When Using Alternative Cutoffs For Classification
CC BY-SA 3.0
null
2015-07-09T21:09:15.107
2015-07-10T22:42:22.630
null
null
2817
[ "machine-learning", "classification" ]
First of all, you cannot always consider what a machine learning algorithm outputs as a "probability". Logistic regression outputs a sigmoid activation on a `(0, 1)` scale, but that doesn't magically make it so! We simply often scale things to a `(0, 1)` scale in ML as a measure of confidence. Also in your example, if the events are mutually exclusive (like classification), just think of them as "event 1" and "NOT event 1". Something like `p(e1) + p(~e1) = 1`. So when your book tells you to lower the threshold, it is simply saying that you require a smaller level of confidence to choose e1 over e2. This doesn't mean you are choosing one with smaller likelihood, you are simply making a conscious choice to adjust your [precision-recall curve](https://en.wikipedia.org/wiki/Precision_and_recall). There are other ways to combat class imbalance, but changing the threshold to be more sensitive to any indication of confidence of one class over another is certainly a way to do that.
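A tiny sketch of that point in Python (the probabilities are made up): the alternative cutoff changes only the decision rule, while the reported probabilities are left exactly as the model produced them.

```python
import numpy as np

# Predicted probability of e1 for a few observations (illustrative numbers).
p_e1 = np.array([0.52, 0.48, 0.61, 0.55])

# Standard rule: predict e1 whenever p(e1) > 0.5.
pred_default = np.where(p_e1 > 0.5, "e1", "e2")

# Alternative cutoff from the book's remedy: require more confidence for e1.
cutoff = 0.56
pred_adjusted = np.where(p_e1 > cutoff, "e1", "e2")

print(pred_default)   # ['e1' 'e2' 'e1' 'e1']
print(pred_adjusted)  # ['e2' 'e2' 'e1' 'e2']
```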
How to select 'cutoff' of classifier probability
What you're looking for is something along the lines of an [ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#:%7E:text=A%20receiver%20operating%20characteristic%20curve,its%20discrimination%20threshold%20is%20varied.): using the threshold as a decision parameter, you can observe the trade-off between FPR (False Positive Rate: what fraction of the articles not by the author get wrongly classified as the author's) and TPR (True Positive Rate, aka recall: how many of the articles which are really by the author will be classified as such). When the parameter is at one end, you'll classify all documents as belonging to the author (100% recall, but pretty bad precision), and at the other end, you'll have 100% precision but pretty bad recall. The plot will allow you to decide on a value that satisfies your requirements (i.e. how much will your precision suffer when you want 95% recall). You can select it based on your desired value of one metric (e.g. 95% recall), but really I'd just plot it and have a look. You can do it in SKLearn with [plot_roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_roc_curve.html).
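If you prefer the raw numbers behind such a plot, `sklearn.metrics.roc_curve` returns the FPR/TPR pair for every candidate threshold; the labels and scores below are made up purely for illustration.

```python
from sklearn.metrics import roc_curve

# y_true: 1 = article really by the author, 0 = not.
# y_score: the classifier's probability/confidence for "by the author".
y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.55]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"threshold={thr:.2f}  TPR(recall)={t:.2f}  FPR={f:.2f}")
```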
6423
1
6424
null
1
70
Given a sample of hexadecimal data, I would like to identify UNKNOWN sequences of bytes that are repeated throughout the sample. (Not searching for a known string or value) I am attempting to reverse engineer a network protocol, and I am working on determining data structures within the packet. As an example of what I'm trying to do (albeit on a smaller scale): ``` (af:b6:ea:3d:83:02:00:00):{21:03:00:00}:[b3:49:96:23:01] {21:03:00:00}:(af:b6:ea:3d:83:02:00:00):01:42:00:00:00:00:01:57 ``` And ``` (38:64:88:6e:83:02:00:00):{26:03:00:00}:[b3:49:96:23:01] {26:03:00:00}:(38:64:88:6e:83:02:00:00):01:42:00:00:00:00:00:01 ``` Obviously, these are easy to spot by eye, but patterns that are hundreds of chars into the data are not. I'm not expecting a magic bullet for the solution, just a nudge in the right direction, or even better, a premade tool. I'm currently needing this for a C# project, but I am open to any and all tools.
Identifying repeating sequences of data in byte array
CC BY-SA 3.0
null
2015-07-10T03:47:16.397
2015-07-10T05:34:10.930
null
null
10596
[ "data-mining" ]
I believe the problem that you are referring to is that of "Motif Discovery in Time Series Data". An appreciable amount of research literature already exists in this domain, so you can look through that. If the data that you handle is not very large, you can find some relatively easy-to-implement algorithms. If the data is large, you can look at more recent publications in this domain. As a starting point I would recommend taking a look at how motif discovery is done in SAX. SAX takes continuous signals as inputs and discretizes them. These discrete levels are then stored as an alphabet. The resulting data looks very much like yours in my opinion. Take a look at what they do in "Mining Motifs in Massive Time Series Databases".
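Before reaching for the motif-discovery literature, a brute-force sliding-window count of repeated byte n-grams already surfaces candidates like the IDs in your example. This is not SAX, just a simple first pass; sketched in Python, though the same idea ports directly to C#. The packet bytes below are taken from the question.

```python
from collections import Counter

def repeated_ngrams(data: bytes, length: int, min_count: int = 2):
    """Count every substring of `length` bytes and keep the ones that repeat."""
    counts = Counter(data[i:i + length] for i in range(len(data) - length + 1))
    return {gram: n for gram, n in counts.items() if n >= min_count}

packet = bytes.fromhex(
    "afb6ea3d83020000" "21030000" "b349962301"
    "21030000" "afb6ea3d83020000" "0142000000000157"
)

for gram, n in sorted(repeated_ngrams(packet, 4).items(), key=lambda kv: -kv[1]):
    print(gram.hex(), n)
```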
How to Identify Repeating Data Entries when the Repeated Entries are Spelled or Constructed Differently
For R: have a look at the [stringr package](https://stringr.tidyverse.org/). I would use, for example, the str_detect() function as follows: str_detect(column_of_different_names,"DOE|company_name"). This will return TRUE for each string that includes "DOE" or the company name in "company_name".
6433
1
6437
null
2
787
If I have 3 separate feedforward neural networks in Matlab, is it possible to connect them so that, given input data and target data the 3 work in parallel to produce output? If so, how do I do this?
Is it possible to connect three neural networks in Matlab?
CC BY-SA 3.0
null
2015-07-11T01:55:26.303
2015-07-18T17:26:04.860
null
null
10584
[ "neural-network", "matlab" ]
If you want to combine the results from three different neural networks to "boost" the performance :), you might want to look at the different ensemble learning methods, as I mentioned earlier. Which method you should use depends on how you share or divide the training data between the three NNs. For example, if the NNs are trained on the same data but have different parameters, you can look at simple voting (if you are doing a classification task) or averaging (if you are using them for regression). More advanced methods like AdaBoost divide the training data between the classifiers. You can read about this in [Boosting Neural Networks](http://www.iro.umontreal.ca/~lisa/pointeurs/ada-nc.pdf)
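For the simple voting/averaging case, the combination step is just arithmetic on the three networks' outputs. A small illustrative sketch with made-up numbers, written in Python/NumPy, although the same arithmetic applies to the outputs of Matlab networks:

```python
import numpy as np

# Hypothetical class probabilities from three separately trained networks
# for the same 4 inputs (columns = classes).
p1 = np.array([[0.7, 0.3], [0.2, 0.8], [0.6, 0.4], [0.9, 0.1]])
p2 = np.array([[0.6, 0.4], [0.4, 0.6], [0.3, 0.7], [0.8, 0.2]])
p3 = np.array([[0.8, 0.2], [0.1, 0.9], [0.5, 0.5], [0.7, 0.3]])

# Averaging ensemble: mean the probabilities, then take the arg max.
avg = (p1 + p2 + p3) / 3.0
ensemble_class = avg.argmax(axis=1)

# Majority voting ensemble: each network votes with its own arg max.
votes = np.stack([p.argmax(axis=1) for p in (p1, p2, p3)])
majority_class = np.round(votes.mean(axis=0)).astype(int)  # works for 2 classes

print(ensemble_class, majority_class)
```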
How do I feed three-dimensional input layer into a Neural Network?
The most common options are: - Change the input shape. Use numpy.reshape to transform the original three-dimensional shape into a two-dimensional one. - Change the model architecture. Each layer that receives a three-dimensional representation will have to accept a three-dimensional input size, or you can add a layer that learns a two-dimensional representation.
6438
1
6458
null
2
120
If I provide: - A list of possible transforms, and, - A list of input states, and, - A corresponding list of output states for each input state, and, - A fitness function to score each output state Which subset of machine learning can direct me towards an optimization algorithm that can map each input state to a dictionary of input states, and, failing to find a match, apply the necessary transforms to get me to the closest-related output state? An example involving polygon legalization: - Any given "window" can contain N different polygons, where each polygon has lower-left and upper-right co-ordinates, as well as a polygon "type". - The input state of the polygons may or may not be "illegal". - A list of transforms includes: move, copy, rotate, resize - If the input state maps directly to any output state, the input state is decided to be legal. Nothing more to be done; move on to the next window. - If the input state matches any previously seen input state, transform to the matching (known-legal) output state. Nothing more to be done; move on to the next window. - Attempt transforms in different sequences until a state is reached that satisfies a fitness function. Store this input:output state combination. Move on to the next window. Would this imply some combination of neural networks (for classification) and genetic/evolutionary algorithms? Or, does the presence of a fitness function negate the need to store combinations of input:output states?
Machine learning for state-based transforms?
CC BY-SA 3.0
null
2015-07-11T23:08:14.300
2015-07-14T21:30:00.377
null
null
10627
[ "machine-learning", "optimization" ]
If I understand correctly: - You have an input polygon - As a first step you want to "match" that against a list of previously seen templates. If this is successful, you pick its corresponding output and move on. - If not, you wish to find some optimal transformation, in order for it to satisfy some constraints that you have (your "objective function"). Then add the original+transformed shape to the templates list and move on. Is this correct? I'll risk an answer anyways: For the first part, I believe that there is a [slew of literature out there](https://www.google.com.tr/search?rls=en&q=shape%20matching%20algorithm&ie=UTF-8&oe=UTF-8). It's not my expertise, but the first thing that comes to mind is measuring the distance in feature space between your shape and each template, and picking the closest one, if the distance is below a threshold that you set. "Feature" here would be either some low-level polygon property, e.g. x and y coordinates of vertices, or an abstraction, e.g. perimeter, area, no. of vertices, mean side length/side length variance, etc. For the second part, it really depends on the nature of your constraints/objective functions. Are they convex? Uni- or multi-modal? Single or multi-objective? Do you want to incorporate some domain knowledge (i.e. knowledge about what "good" transformations would be)? One really cannot tell without further details. Evolutionary algorithms are quite versatile but expensive methods (although some argue about that). If you can spare the possibly large number of function evaluations, you could try EAs as a first step, and then refine your approach. Finally, while not exactly related to what you describe in your process, I believe you may benefit by taking a look into auto-associative networks (and models in general); these are models that are able to perform constraint satisfaction on their input, effectively enforcing learned relationships on input values. I could see this being used in your case by inputting a shape, and having a transformed shape as an output, which would be "legal", i.e. satisfying the constraints learned by the auto-associative model. Thus, you would eliminate the need for template matching + optimization altogether.
Machine learning for object states
To me this problem looks similar to [language modeling](https://en.wikipedia.org/wiki/Language_model): a model is trained on a large amount of sequences, and then it can predict the probability of any input sequence. In your case a low probability would indicate an abnormal sequence. My background is in NLP that's why I think language modeling, but I guess the same techniques are used for other problems as well. The fact that you have transitions and states suggests Markov Models, for which there are known methods for inference and estimation. So maybe you could design a more specific kind of model for your case and use something like the [Baum–Welch algorithm](https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm).
6459
1
6485
null
4
2754
I am planning on making an AI song composer that would take in a bunch of songs of one instrument, extract musical notes (like ABCDEFG) and certain features from the sound wave, perform machine learning (most likely through recurrent neural networks), and output a sequence of ABCDEFG notes (aka generate its own songs / music). I think that this would be an unsupervised learning problem, but I am not really sure. I figured that I would use recurrent neural networks, but I have a few questions on how to approach this: - What features from the sound wave should I extract so that the output music is melodious? - Is it possible, with recurrent neural networks, to output a vector of sequenced musical notes (ABCDEF)? - Is there any smart way I can feed in the features of the sound waves as well as the sequence of musical notes?
What features from sound waves to use for an AI song composer?
CC BY-SA 3.0
null
2015-07-14T22:39:37.943
2017-07-20T13:31:20.510
null
null
10523
[ "machine-learning", "neural-network", "feature-selection", "feature-extraction" ]
First off, ignore the haters. I started working on ML in Music a long time ago and got several degrees using that work. When I started I was asking people the same kind of questions you are. It is a fascinating field and there is always room for someone new. We all have to start somewhere. The areas of study you are inquiring about are Music Information Retrieval ([Wiki Link](https://en.wikipedia.org/wiki/Music_information_retrieval)) and Computer Music ([Wiki Link](https://en.wikipedia.org/wiki/Computer_music)) . You have made a good choice in narrowing your problem to a single instrument (monophonic music) as polyphonic music increases the difficulty greatly. You're trying to solve two problems really: 1) Automatic Transcription of Monophonic Music ([More Readings](https://scholar.google.com/scholar?q=Automatic+Transcription+of+Monophonic+Music)) which is the problem of extracting the notes from a single instrument musical piece. 2) Algorithmic Composition ([More Readings](https://scholar.google.com/scholar?q=Algorithmic%20Composition)) which is the problem of generating new music using a corpus of transcribed music. To answer your questions directly: > I think that this would be an unsupervised learning problem, but I am not really sure. Since there are two learning problems here there are two answers. For the Automatic Transcription you will probably want to follow a supervised learning approach, where your classification are the notes you are trying to extract. For the Algorithmic Composition problem it can actually go either way. Some reading in both areas will clear this up a lot. > What features from the sound wave I should extract so that the output music is melodious? There are a lot of features used commonly in MIR. @abhnj listed MFCC's in his answer but there are a lot more. Feature analysis in MIR takes place in several domains and there are features for each. Some Domains are: - The Frequency Domain (these are the values we hear played through a speaker) - The Spectral Domain (This domain is calculated via the Fourier function (Read about the Fast Fourier Transform) and can be transformed using several functions (Magnitude, Power, Log Magnitude, Log Power) - The Peak Domain (A domain of amplitude and spectral peaks over the spectral domain) - The Harmonic Domain One of the first problems you will face is how to segment or "cut up" your music signal so that you can extract features. This is the problem of Segmentation ([Some Readings](https://scholar.google.com/scholar?hl=en&q=music%20segmentation)) which is complex in itself. Once you have cut your sound source up you can apply various functions to your segments before extracting features from them. Some of these functions (called window functions) are the: Rectangular, Hamming, Hann, Bartlett, Triangular, Bartlett_hann, Blackman, and Blackman_harris. Once you have your segments cut from your domain you can then extract features to represent those segments. Some of these will depend on the domain you selected. A few example of features are: Your normal statistical features (Mean, Variance, Skewness, etc.), ZCR, RMS, Spectral Centroid, Spectral Irregularity, Spectral Flatness, Spectral Tonality, Spectral Crest, Spectral Slope, Spectral Rolloff, Spectral Loudness, Spectral Pitch, Harmonic Odd Even Ratio, MFCC's and Bark Scale. There are many more but these are some good basics. > Is it possible, with recurrent neural networks, to output a vector of sequenced musical notes (ABCDEF)? Yes it is. 
There have been several works to do this already. ([Here are several readings](https://scholar.google.com/scholar?q=Algorithmic%20Composition%20with%20Neural%20Networks)) > Any smart way I can feed in the features of the soundwaves as well as sequence of musical notes? The standard method is to use the explanation I made above (Domain, Segment, Feature Extract) etc. To save yourself some work I highly recommend starting with a MIR framework such as MARSYAS ([Marsyas](http://marsyas.info/)). They will provide you with all the basics of feature extraction. There are many frameworks so just find one that uses a language you are comfortable in.
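As a concrete illustration of the segment → window → feature pipeline described above, here is a minimal NumPy sketch computing one of the listed features (the spectral centroid) on a synthetic tone. A real project would lean on a framework such as MARSYAS or librosa rather than hand-rolling this; the sample rate, frame length, and test tone are arbitrary choices.

```python
import numpy as np

def spectral_centroid(frame: np.ndarray, sample_rate: float) -> float:
    """Brightness-like feature: magnitude-weighted mean frequency of the spectrum."""
    windowed = frame * np.hamming(len(frame))
    magnitude = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float(np.sum(freqs * magnitude) / (np.sum(magnitude) + 1e-12))

# Synthetic 1-second "instrument" signal: a 440 Hz tone.
sr = 22050
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)

# Segment into 1024-sample frames and extract one feature value per frame.
frame_len = 1024
centroids = [
    spectral_centroid(signal[i:i + frame_len], sr)
    for i in range(0, len(signal) - frame_len, frame_len)
]
print(np.mean(centroids))  # close to 440 Hz for a pure tone
```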
Selecting ML algorithm for music composition
Let us formulate this problem in such a way that it can be understood from a machine learning perspective. You have a set of instances $X$ where each instance $x_i \in \mathbb{R}^m$, where $m$ is the dimensionality of the instance. In other words, $m$ is the number of features that describe the instance. Your problem intends to go from a set of features to a class label, good or bad. Thus, this is a mapping from $\mathbb{R}^m$ to $y \in \{0, 1\}$. # How to achieve this mapping? This is where we will use the machine learning algorithms. We will train a model to effectively approximate the function which gives the output label from a set of inputs. It is evident that sparse features (low information entropy) will complicate the mapping function and will thus produce worse results. This is why feature engineering is of utmost importance for machine learning. It is probably the hardest part of the machine learning pipeline; however, it is the leading factor in dictating your results. You can use some feature reduction techniques in order to remove features which are uninformative with respect to the output label. Some techniques that I use frequently are principal component analysis (PCA) and linear discriminant analysis (LDA). Alternatively, you can use some projection methods to reduce the dimensionality of the data whilst maintaining separation between the classes. Such techniques are Isomap, MDS, Spectral Embeddings and t-SNE. You can check to see which is best suited for your type of data. # How to choose a model? Firstly, your problem is a supervised classification problem. This already narrows the types of models you can use. Furthermore, model selection is based on some key factors such as: the number of instances you have, the number of features per instance and the number of output nodes. You should also keep in consideration that the separability of the probability distributions between the output classes will impact the performance of the model directly. For example, discriminating between cars and oranges is much easier than oranges and clementines. In your case, you have 1,000 instances and around 13 features. This means that deep learning based techniques are possible but discouraged. You do not have enough data. You can then attempt the following popular classification models - Support Vector Classifier - Naive Bayes - K-Nearest Neighbors - Decision Trees - Random Forests To evaluate which model performs the best you will use the accuracy attained with a trained model on a test set. This set should be drawn independently from the training set so as to catch overfitting. This is when the model cannot generalize to new data. --- # In code Assuming matrix $X$ contains the data where rows are the instances and columns are the features, and matrix $Y$ contains the labels.
First we split our data into a training and testing set, then fit and score each candidate model:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33)

from sklearn.svm import SVC
clf = SVC()
clf.fit(X_train, y_train)
print('Score: ', clf.score(X_test, y_test))

from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X_train, y_train)
print('Score: ', neigh.score(X_test, y_test))

from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf.fit(X_train, y_train)
print('Score: ', clf.score(X_test, y_test))

from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators = 100)
forest.fit(X_train, y_train)
print('Score: ', forest.score(X_test, y_test))
```
--- This should be a starting point. Let us know if you fall into any problems, and let us know what accuracy you are getting; we can then look deeper into these models and better suit them to your data source.
6467
1
6471
null
4
142
I run into this problem from time to time and have always felt like there should be an obvious answer. I have probabilities for potential classes (from some classifier). I will offer the prediction of the class with the highest probability, however, I would also like to attach a confidence for that prediction. Example: If I have Classes `[C1, C2, C3, C4, C5]` and my Probabilities are `{C1: 50, C2: 12, C3: 13, C4: 12, C5:13}` my confidence in predicting C1 should be higher than if I had Probabilities `{C1: 50, C2: 45, C3: 2, C4: 1, C5: 2}`. Reporting that I predict class C1 with 60% probability isn't the whole story. I should be able to derive a confidence from the distribution of probabilities as well. I am certain there is a known method for solving this but I do not know what it is. EDIT: Taking this to the extreme for clarification: If I had a class C1 with 100% probability (and assuming the classifier had an accurate representation of each class) then I would be extremely confident that C1 was the correct classification. On the other hand if all 5 classes had almost equal probability (Say they are all roughly 20%) than I would be very uncertain claiming that any one was the correct classification. These two extreme cases are more obvious, the challenge is derive a confidence for intermediate examples like the one above. Any suggestions or references would be of great help. Thanks in advance.
Deriving Confidences from Distribution of Class Probabilities for a Prediction
CC BY-SA 3.0
0
2015-07-15T17:59:31.063
2015-07-15T19:58:35.510
2015-07-15T19:01:17.677
10701
10701
[ "machine-learning", "classification" ]
As @David says, in your initial example, your confidence about C1 is the same in both cases. In your second example, you most certainly are less confident about the most-probable class in the second case, since the most-probable class is far less probable! You may have to unpack what you're getting at when you say 'confidence' then, since here you're not using it as a term of art but an English word. I suspect you may be looking for the idea of [entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)), or uncertainty present in the distribution of all class probabilities. In your first example, it is indeed lower in the second case than the first. I don't think what you're getting at is just a function of the most-probable class, that is.
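A quick sketch of that idea, computing the entropy of the distributions taken from the question (plus the near-uniform extreme from the edit); lower entropy means less overall uncertainty across the whole distribution:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a class-probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

first_case = [0.50, 0.12, 0.13, 0.12, 0.13]
second_case = [0.50, 0.45, 0.02, 0.01, 0.02]
near_uniform = [0.20] * 5   # the "all classes roughly equal" extreme

for name, dist in [("first", first_case), ("second", second_case), ("uniform", near_uniform)]:
    print(name, round(entropy(dist), 3))
```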
predict_proba to print specific class probablity
Once you fit your sklearn classifier, it will generally have a `classes_` attribute. This attribute contains your class labels (as strings). Note that `predict_proba` returns one row of probabilities per sample, so take the row you are interested in. So you could do something as follows:
```
probas = model.predict_proba(dataframe)[0]  # probabilities for the first (or only) row
classes = model.classes_
for class_name, proba in zip(classes, probas):
    print(f"{class_name}: {proba}")
```
And to find a specific index, you can use numpy's `where` function:
```
import numpy as np
class_label = "XYZ"
class_index = np.where(model.classes_ == class_label)[0][0]
proba = model.predict_proba(dataframe)[0][class_index]
```
6492
1
6493
null
3
4829
I have the following CSV data: ``` shot_id,round_id,hole,shotType,clubType,desiredShape,lineDirection,shotQuality,note 48,2,1,tee,driver,straight,straight,good, 49,2,1,approach,iron,straight,right,bad, 50,2,1,approach,wedge,straight,straight,bad, 51,2,1,approach,wedge,straight,straight,bad, 52,2,1,putt,putter,straight,straight,good, 53,2,1,putt,putter,straight,straight,good, 54,2,2,tee,driver,draw,straight,good, 55,2,2,approach,iron,draw,straight,good, 56,2,2,putt,putter,straight,straight,good, 57,2,2,putt,putter,straight,straight,good, 58,2,3,tee,driver,draw,straight,good, 59,2,3,approach,iron,straight,right,good, 60,2,3,chip,wedge,straight,straight,good, 61,2,3,putt,putter,straight,straight,good, 62,2,4,tee,iron,straight,straight,good, 63,2,4,putt,putter,straight,straight,good, 64,2,4,putt,putter,straight,straight,good, 65,2,5,tee,driver,straight,left,good, 66,2,5,approach,wedge,straight,straight,good, 67,2,5,putt,putter,straight,straight,bad, 68,2,5,putt,putter,straight,straight,good, 69,2,6,tee,driver,draw,straight,bad, 70,2,6,approach,hybrid,draw,straight,good, 71,2,6,putt,putter,straight,straight,good, 72,2,6,putt,putter,straight,straight,good, 73,2,7,tee,driver,straight,straight,good, 74,2,7,approach,wood,fade,straight,good, 75,2,7,approach,wedge,straight,straight,bad,long 76,2,7,putt,putter,straight,straight,good, 77,2,7,putt,putter,straight,straight,good, 78,2,8,tee,iron,straight,right,bad, 79,2,8,approach,wedge,straight,straight,good, 80,2,8,putt,putter,straight,straight,bad, 81,2,9,tee,driver,straight,straight,good, 82,2,9,approach,iron,straight,straight,good, 83,2,9,approach,wedge,straight,straight,bad, 84,2,9,putt,putter,straight,straight,good, 85,2,9,putt,putter,straight,straight,good, 86,2,10,tee,driver,straight,left,good, 87,2,10,approach,iron,straight,left,good, 88,2,10,chip,wedge,straight,straight,good, 89,2,10,putt,putter,straight,straight,good, 90,2,10,putt,putter,straight,straight,good, 91,2,11,tee,driver,draw,straight,good, 92,2,11,approach,iron,draw,straight,good, 93,2,11,putt,putter,straight,straight,good, 94,2,11,putt,putter,straight,straight,good, 95,2,12,tee,iron,draw,straight,good, 96,2,12,putt,putter,straight,straight,good, 97,2,12,putt,putter,straight,straight,good, 98,2,13,tee,driver,draw,straight,good, 99,2,13,approach,wood,straight,straight,bad,topped 100,2,13,putt,putter,straight,straight,good, 101,2,13,putt,putter,straight,straight,good, 102,2,14,tee,driver,draw,straight,good, 103,2,14,approach,wood,straight,straight,bad, 104,2,14,approach,iron,draw,straight,good, 105,2,14,approach,wedge,straight,straight,bad, 106,2,14,putt,putter,straight,straight,bad, 107,2,14,putt,putter,straight,straight,good, 108,2,15,tee,iron,draw,right,bad, 109,2,15,approach,wedge,straight,straight,good, 110,2,15,putt,putter,straight,straight,good, 111,2,15,putt,putter,straight,straight,good, 112,2,16,tee,driver,draw,right,good, 113,2,16,approach,iron,straight,left,bad, 114,2,16,approach,wedge,straight,left,bad, 115,2,16,putt,putter,straight,straight,good, 116,2,17,tee,driver,straight,straight,good, 117,2,17,approach,wood,straight,right,bad, 118,2,17,approach,wedge,straight,straight,good, 119,2,17,putt,putter,straight,straight,good, 120,2,17,putt,putter,straight,straight,good, 121,2,18,tee,driver,fade,right,bad, 122,2,18,approach,wedge,straight,straight,good, 123,2,18,approach,wedge,straight,straight,good, 124,2,18,putt,putter,straight,straight,good, 125,2,18,putt,putter,straight,straight,good, ``` And I would like to be able to identify which combinations of values are the most 
frequently occurring. - club types: driver, wood, iron, wedge, putter - Shot types: tee, approach, chip, putt - line directions: left, center, right - shot qualities: good, bad, neutral Where ideally I'd be able to identify a sweet spot (no pun intended) combination: "driver" + "tee" + "straight" + "good" I intend only to measure this for a static dataset, not for any future values or prediction. So, my thought is that this is probably a clustering / k-means problem. Is that correct? If so, how would I begin doing a K-Mean analysis with these types of values in R? If it isn't a kmeans problem, then what is it?
How to calculate most frequent value combinations
CC BY-SA 3.0
null
2015-07-17T18:50:39.330
2015-07-17T19:57:02.793
null
null
10761
[ "r", "clustering", "k-means" ]
If I understand your question, you want to know which combination is most frequent, or how frequent a combination is relative to others. This is a static method that will determine the unique combinations in total (i.e., combinations of all five columns). The `plyr` package has a nifty utility for grouping unique combinations of columns in a `data.frame`. We can specify the names of the columns we want to group by, and then specify a function to perform for each of those combinations. In this case, we specify the columns associated with your golf shot qualities and use the function `nrow`, which will count the number of rows in every subset of the large data.frame for which those columns are identical.
```
# You need this library for the ddply() function
require(plyr)

# These are the columns that determine a unique situation (change this if you need to)
qualities <- c("shotType","clubType","desiredShape","lineDirection","shotQuality")

# The call to ddply() actually gives us what we want, which is the number
# of times that combination is present in the dataset
countedCombos <- ddply(golf,qualities,nrow)

# To be nice, let's give that newly added column a meaningful name
names(countedCombos) <- c(qualities,"count")

# Finally, you probably want to order it (decreasing, in this case)
countedCombos <- countedCombos[with(countedCombos, order(-count)),]
```
Now check out your product. The final column has the count associated with each unique combination of columns you provided to `ddply`:
```
head(countedCombos)
   shotType clubType desiredShape lineDirection shotQuality count
16     putt   putter     straight      straight        good    30
10 approach    wedge     straight      straight        good     6
9  approach    wedge     straight      straight         bad     5
19      tee   driver         draw      straight        good     5
22      tee   driver     straight      straight        good     4
2  approach     iron         draw      straight        good     3
```
To see the results for a particular cross-section (say, for example, the driver `clubType`):
```
countedCombos[which(countedCombos$clubType=="driver"),]
   shotType clubType desiredShape lineDirection shotQuality count
19      tee   driver         draw      straight        good     5
22      tee   driver     straight      straight        good     4
21      tee   driver     straight          left        good     2
17      tee   driver         draw         right        good     1
18      tee   driver         draw      straight         bad     1
20      tee   driver         fade         right         bad     1
```
As a bonus, you can dig into these results with `ddply` again. For example, if you wanted to look at the ratio of "good" to "bad" shotQuality based on `shotType` and `clubType`:
```
shotPerformance <- ddply(countedCombos,c("shotType","clubType"),
  function(x){
    total <- length(x$shotQuality)
    good  <- length(which(x$shotQuality=="good"))
    bad   <- length(which(x$shotQuality=="bad"))
    c(total,good,bad,good/(good+bad))
  }
)
names(shotPerformance) <- c("shotType","clubType","total","good","bad","goodPct")
```
This gives you a new breakdown of some math performed on the counts of a character field (`shotQuality`) and shows you how you can build custom functions for `ddply`. Of course, you can still order these whichever way you want, too.
```
head(shotPerformance)
  shotType clubType total good bad   goodPct
1 approach   hybrid     1    1   0 1.0000000
2 approach     iron     6    4   2 0.6666667
3 approach    wedge     3    1   2 0.3333333
4 approach     wood     3    1   2 0.3333333
5     chip    wedge     1    1   0 1.0000000
6     putt   putter     2    1   1 0.5000000
```
Calculating possible number of configuration
The total number is: $$5 \times 5 \times 6 \times 4 \times 4 \times 8$$ which is equal to $19200$. Here, we just count the number of possible values for each parameter.
6494
1
6496
null
4
202
I have just learned about Markov chains, which I am using to model a real-world problem. The model comprises 3 states `[a b c]`. For now I am collecting data and calculating transition probabilities: ``` T[a][b] = #transitions from a to b / #total transitions from a ``` However, I am stuck at determining the correct transition matrix. As I am getting more data, the matrix is changing drastically. So when do I finalize the transition matrix? Does that mean that my data is too random and cannot be modelled, or am I making some mistake here?
Markov Chains: How much steps to conclude a Transition Matrix
CC BY-SA 3.0
null
2015-07-17T21:04:15.433
2015-10-16T07:40:49.880
null
null
8338
[ "machine-learning", "markov-process" ]
I expect you have, or can make, a matrix of transition counts. Consider the data in each row to be draws from a multinomial distribution. Then you should be able to use [sample size calculations for the multinomial](https://stats.stackexchange.com/questions/19120/sample-size-for-a-variable-number-of-answers) to get off the ground. It is also possible that your data is not well described by a simple Markov chain. There are some available techniques for this, e.g. [multistate modelling](https://cran.r-project.org/web/packages/msm/vignettes/msm-manual.pdf), but which may or may not fit your particular problem.
Markov Chains for sequential data
If you know what the state history is, you don't need a 'hidden' Markov model, you just need a Markov model (or some other mechanism). The 'hidden' part implies a distinction between some sequence of unobservable states, and some observations that are related to them. In your case, you say you have observed the past states for each customer, so you don't necessarily need to infer anything 'hidden'. The simplest way to proceed in your case would be to calculate a transition matrix, i.e. probability of state given previous state. That's a very simple model but it might do what you want. To do this, just look at all state pairs, and count to get p(s2 | s1) = p(s1 & s2)/p(s1). This is equivalent to a 1-gram model that you've probably read about. Each state is akin to a word. You could also make a more complex model, like a 2-gram model or even an RNN. Honestly, since you have a fixed amount of history, you can just throw your data into an scikit-learn model or xgboost or something, where each customer's history is the vector of predictors and the next state is the outcome. It won't know the sequential dependencies, but you are essentially indexing the past states by time, so it may work pretty well. If you need more clarification about part of this, just ask.
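For either of these questions, the counting step itself is mechanical. A small Python sketch (the state sequence below is made up) that estimates the transition matrix p(next state | current state) exactly as described:

```python
import numpy as np

states = ["a", "b", "c"]
index = {s: i for i, s in enumerate(states)}

# One observed sequence of states (illustrative).
sequence = list("aabacbbcaacbabcc")

# Count transitions: counts[i, j] = number of times state i was followed by j.
counts = np.zeros((len(states), len(states)))
for prev, curr in zip(sequence[:-1], sequence[1:]):
    counts[index[prev], index[curr]] += 1

# Row-normalise so each row gives p(next state | current state).
T = counts / counts.sum(axis=1, keepdims=True)
print(T.round(2))
```

As more data arrives, the counts matrix is simply updated and re-normalised; the question of when the estimates have stabilised is what the sample-size reasoning above addresses.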
6506
1
6521
null
12
9346
I want to compute the semantic similarity of two words using their vector representations (obtained using e.g. word2vec, GloVe, etc.). Shall I use the Euclidean Distance or the Cosine Similarity? The [GloVe website](http://nlp.stanford.edu/projects/glove/) mentions both measures without telling the pros and cons of each: > The Euclidean distance (or cosine similarity) between two word vectors provides an effective method for measuring the linguistic or semantic similarity of the corresponding words.
Shall I use the Euclidean Distance or the Cosine Similarity to compute the semantic similarity of two words?
CC BY-SA 3.0
null
2015-07-20T04:48:17.547
2021-04-25T14:47:10.563
2015-07-28T17:55:16.293
843
843
[ "nlp", "word-embeddings" ]
First of all, if GloVe gives you normalized unit vectors, then the two calculations are equivalent. In general, I would use the cosine similarity since it removes the effect of document length. For example, a postcard and a full-length book may be about the same topic, but will likely be quite far apart in pure "term frequency" space using the Euclidean distance. They will be right on top of each other in cosine similarity.
Euclidean vs. cosine similarity
On L2 normalized data it is an easy and good exercise to prove that they are equivalent. So you should try to solve the math yourself. Hint: use squared Euclidean. Note that it is common with tfidf to not have normalized data because of various technical reasons, e.g., when using inverted indexes in text search. Furthermore, cosine is faster on very sparse data.
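Spelling out the hint: for unit-normalised vectors $u$ and $v$ (so $\|u\| = \|v\| = 1$),

$$\|u - v\|^2 = \|u\|^2 + \|v\|^2 - 2\,u^\top v = 2 - 2\cos(u, v),$$

so squared Euclidean distance is a decreasing function of cosine similarity, and the two measures rank word neighbours identically on normalised embeddings.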
6519
1
6522
null
2
1566
I have multiple datasets, with slightly differing features. What tools can I use to make this a homogeneous dataset? Dataset1: ``` featureA,featureB,featureC 1,7,3 4,8,4 ``` Dataset2: ``` featureA,featureC,featureD,featureE 3,4,5,6 9,8,4,6 ``` Homogeneous Dataset ``` featureA,featureB,featureC,featureD,featureE 1,7,3,, 4,8,4,, 3,,4,5,6 9,,8,4,6 ```
Combining Datasets with Different Features
CC BY-SA 3.0
null
2015-07-20T20:13:53.770
2015-07-24T15:55:45.900
null
null
10799
[ "machine-learning", "dataset" ]
You can use [R](http://www.r-project.org/) to do that. [The smartbind function](http://www.inside-r.org/packages/cran/gtools/docs/smartbind) is the perfect way to combine datasets in the way you are asking for: ``` library(gtools) d1<-as.data.frame(rbind(c(1,7,3),c(4,8,4))) names(d1)<-c("featureA","featureB","featureC") d2<-as.data.frame(rbind(c(3,4,5,6),c(9,8,4,6))) names(d2)<-c("featureA","featureC","featureD","featureE") d3<-smartbind(d1,d2) ```
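If you happen to be working in Python instead of R, pandas gives the same result: `concat` stacks the rows and fills columns missing from either frame with NaN. A small sketch using the question's data:

```python
import pandas as pd

d1 = pd.DataFrame({"featureA": [1, 4], "featureB": [7, 8], "featureC": [3, 4]})
d2 = pd.DataFrame({"featureA": [3, 9], "featureC": [4, 8],
                   "featureD": [5, 4], "featureE": [6, 6]})

# Rows are stacked; columns absent from one frame are filled with NaN.
combined = pd.concat([d1, d2], ignore_index=True, sort=False)
print(combined)
```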
Machine learning methods on 1 feature dataset
# Can I use any machine learning methods having only one feature? Yes! In fact, many NLP classification tasks are in this format. Given 1 piece of text, classify something. For example: - Given 1 review, classify the sentiment - Given 1 news article, classify the topic - Given 1 chat message, classify the intent And now you have: - Given 1 name, classify the Fullname # Can a better method be used? Like you mentioned, you could just find the most common `Fullname` for a given `name`, and every time you get a `name` you have a lookup table for the `Fullname`. However, what happens when a `name` you have never seen before appears? How do you classify it? Are you also assuming that you already have the full list of `Fullname`s? ## Assumption: you know all Names and Fullnames In this case, do as you suggested. Create a dictionary mapping `Name`-`Fullname` by finding the most common `Fullname` for every `Name`. ## Assumption: you know all Fullnames but not all Names Let's say you have the mappings: ``` Peter -> Johnson John -> Smith ``` Then, there is a name you have never seen before, `Pete` for example, which does not appear in your mapping table. You could try two approaches: - The simple way - find which name in the mapping is closest to Pete using some word distance measure, like Levenshtein. - The more robust way - forget the notion of a mapping table and use a machine learning model. You will need the following things: a text vectorizer to transform your text into a numerical vector (I would suggest a character-level n-gram TF-IDF), and a classifier (if you use the vectorizer I suggested, then you will need a linear classifier, like an SVM). If you go with approach two, when you encounter the name `Pete`, it will be split into n-grams (e.g. `[pe, et, te, pet, ete]`) and vectorized. ## Assumption: you don't know all Fullnames and you don't know all Names This gets more interesting because you could be working with `Fullname` generation. It could be used when you move to names from other countries as well. For example, you already have the mapping: ``` Peter -> Johnson John -> Smith ``` Then you start dealing with Dutch names and encounter `Pieter` and `Jan`. Then you might want to get the following results where even the `Fullname`s are different: ``` Pieter -> Janssen Jan -> Smeets ``` For this, you could use a seq-to-seq Recurrent Neural Network. The architecture can be similar to ones used for neural language translation. However, all embeddings you create have to be character level. Instead of learning an embedding for every word, you learn one for every character. You also feed your network one character at a time. This way, you will be less likely to find "out of vocabulary" tokens (except for when you find characters from another alphabet).
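A minimal scikit-learn sketch of "approach two" above, a character-level n-gram TF-IDF feeding a linear SVM; the names and surnames below are invented purely for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: first name -> surname (the label you want to predict).
names = ["Peter", "Pete", "Petra", "John", "Johnny", "Jon"]
fullnames = ["Johnson", "Johnson", "Johnson", "Smith", "Smith", "Smith"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3), lowercase=True),
    LinearSVC(),
)
model.fit(names, fullnames)

# An unseen spelling is classified via the character n-grams it shares
# with the training names.
print(model.predict(["Pieter", "Jonny"]))
```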
6528
1
6529
null
1
454
Is there any implementation of Newton-Raphson or EM Algorithm? Can I get the source code of it? I tried googling, but didn't come across any. So asking here. Thanks!
Newton-Raphson or EM Algorithm in Python
CC BY-SA 3.0
null
2015-07-21T09:39:32.693
2015-07-21T11:50:48.820
null
null
10810
[ "python", "algorithms" ]
scikit-learn has the EM algorithm [here](http://scikit-learn.org/stable/modules/mixture.html), in its Gaussian mixture models; the source code is available. And if you are an R fan, the `mclust` package is available [here](http://www.stat.washington.edu/mclust/).
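For example, fitting a two-component Gaussian mixture (which scikit-learn estimates with EM) looks like this in current versions, where the class is called `GaussianMixture`; the data below are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two synthetic 1-D clusters; EM recovers their means and weights.
rng = np.random.RandomState(0)
X = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.means_.ravel())   # roughly [0, 5] (order may vary)
print(gmm.weights_)         # roughly [0.5, 0.5]
```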
Newton's method optimization for Deep Learning
If you take a look at section 2, it says > The central idea motivating Newton’s method is that $f$ can be locally approximated around each $\theta$, up to 2nd-order, by the quadratic: $$ f(\theta + p) \approx q_\theta(p) \equiv f(\theta) + \nabla f(\theta)^Tp + \frac{1}{2} p^TBp \, \, (1) $$ where $B = H(\theta)$ is the Hessian matrix of $f$ at $\theta$. Finding a good search direction then reduces to minimizing this quadratic with respect to $p$. To minimize, you need to take the derivative of (1) with respect to $p$ and set it to zero: $$\Rightarrow \nabla f(\theta) + Bp = 0$$ which is equivalent to $Bp = -\nabla f(\theta)$.
6550
1
6991
null
2
312
What would be a good non-cryptographic hash function to use for converting string features to a numerical representation for feeding into machine learning algorithms? To explain the scenario: my feature set has both categorical data (e.g. `Country`) and non-categorical data (e.g. `IP Address`, `Email address`). I have used the MurMur3 hash function so far; is there a better algorithm?
What is a good non cryptographic Hash for string feature translation?
CC BY-SA 3.0
null
2015-07-22T18:58:53.040
2015-09-05T13:24:59.433
null
null
10836
[ "machine-learning", "data-mining" ]
See also: [Neural Network parse string data?](https://datascience.stackexchange.com/questions/869/neural-network-parse-string-data) I do not see a problem with using MurMur3 per se. For the categorical labels, you can use one-hot encoding / one-of-k encoding. For the strings, it's an application-specific question. Presumably if you use exactly those strings as features, it will be very sparse. The effect of this will depend on the algorithm that you are using, and how the training data compare to the data you see in practice. You are running the risk that you will effectively either only create a traditional IP/email whitelist/blacklist OR throw out the feature altogether. You must decide what you want (e.g. should a certain email address always get a certain output label?) and have some intuition about the application so as to generate more features from the IP address and email address. For example, from an email address you can extract the local part (e.g. "john1972") and the domain, and from each of those you can extract:

- length
- character tri-grams
- count/proportion of digits to alphabetic characters
- number of hyphens
- dictionary validity ...

(From the domain you can also extract the TLD and possibly subdomains.) You can try to tokenise these parts further. You can even hit external services to get information like number of Google hits, detected language, spam score etc.
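A minimal sketch of such hand-crafted features (my own illustration; the feature names and the example address are made up) could look like this:

```
# Minimal sketch: derive simple numeric/categorical features from an email address.
def email_features(address):
    local, _, domain = address.partition("@")
    return {
        "local_length": len(local),
        "domain_length": len(domain),
        "digit_ratio": sum(c.isdigit() for c in local) / max(len(local), 1),
        "hyphen_count": local.count("-") + domain.count("-"),
        "tld": domain.rsplit(".", 1)[-1],
        "subdomain_count": max(domain.count(".") - 1, 0),
    }

print(email_features("john1972@mail.example.co.uk"))
```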
Word Embedding or Hash?
It depends… The general rule of thumb is that there should be at least 40 occurrences of an item to train an embedding model to find a robust representation. If most follower IDs repeat then an embedding model can learn which ones co-occur. If follower IDs are sparse then hashing (which randomly assigns numbers) is a better choice. Which method is better is an empirical question. You can create both models, benchmark, and then choose the data processing pipeline that is best for your task.
6570
1
6572
null
2
431
I have a healthcare dataset. I have been told to look at a non-parametric approach to solve certain questions related to the dataset. I am a little bit confused about the non-parametric approach. Do they mean a density-plot-based approach (such as looking at a histogram)? I know this is a vague question to ask here. However, I don't have access to anybody else whom I can ask, and hence I am asking for some input from others in this forum. Any response/thought would be appreciated. Thanks and regards.
Non-parametric approach to healthcare dataset?
CC BY-SA 3.0
null
2015-07-24T13:21:32.583
2015-07-24T14:56:44.693
null
null
3314
[ "data-mining" ]
They are not specifically referring to a plot based approach. They are referring to a class of methods that must be employed when the data is not normal enough or not well-powered enough to use regular statistics. Parametric and nonparametric are two broad classifications of statistical procedures with loose definitions separating them: - Parametric tests usually assume that the data are approximately normally distributed. - Nonparametric tests do not rely on a normally distributed data assumption. - Using parametric statistics on non-normal data could lead to incorrect results. - If you are not sure that your data is normal enough or that your sample size is big enough (n < 30), use nonparametric procedures rather than parametric procedures. - Nonparametric procedures generally have less power for the same sample size than the corresponding parametric procedure if the data truly are normal. Take a look at some examples of parametric and analogous nonparametric tests from [Tanya Hoskin's Demystifying Summary](http://www.mayo.edu/mayo-edu-docs/center-for-translational-science-activities-documents/berd-5-6.pdf): [](https://i.stack.imgur.com/Q5JE7.png) Here are some summary references: - Another general table with some different information - Nonparametric Statistics - All of Nonparametric Statistics, by Larry Wasserman - R tutorial - Nonparametric Econometrics with Python
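As a small illustration of the parametric/nonparametric analogues in the table (my addition, not part of the original answer), SciPy exposes both kinds of two-sample tests; the data here are synthetic and deliberately skewed:

```
# Minimal sketch: a parametric test (t-test) and its nonparametric analogue
# (Mann-Whitney U / Wilcoxon rank-sum test) on the same two samples.
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
a = rng.exponential(scale=1.0, size=25)   # skewed, non-normal sample
b = rng.exponential(scale=1.5, size=25)

print(stats.ttest_ind(a, b))       # parametric: assumes approximate normality
print(stats.mannwhitneyu(a, b))    # nonparametric: rank-based, no normality assumption
```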
How to approach a new data set with no dependent variable
Your assignment is basically the process we call EDA - Exploratory Data Analysis. So what should you do? Simply explore!

- What is the shape of your dataset?
- How do the variables behave; do they have a factor structure, correlate, etc.?
- What are the main descriptives of your dataset; do they tell an interesting story, etc.?

And once you start doing this you will find something that might be interesting to explore a bit deeper, depending on your dataset. Do not just use summary functions like mean, median, etc., but also try to build simple graphs and comment everything in a neat notebook! My tip: look at some EDA notebooks on Kaggle for inspiration or watch this superior video by a master at work: [https://www.youtube.com/watch?v=go5Au01Jrvs](https://www.youtube.com/watch?v=go5Au01Jrvs) Here is a beginner guide as well: [https://towardsdatascience.com/exploratory-data-analysis-eda-a-practical-guide-and-template-for-structured-data-abfbf3ee3bd9](https://towardsdatascience.com/exploratory-data-analysis-eda-a-practical-guide-and-template-for-structured-data-abfbf3ee3bd9)
6590
1
12016
null
9
1033
My question is three-fold. In the context of "kernelized" support vector machines:

- Is variable/feature selection desirable, especially since we tune the regularization parameter C to prevent overfitting, and the main motive behind introducing kernels to an SVM is to increase the dimensionality of the problem? In such a case, reducing the dimensions by feature reduction seems counter-intuitive.
- If the answer to the 1st question is "NO", then under what conditions would the answer change, such that one should keep them in mind?
- Are there any good methods that have been tried to bring about feature reduction for SVMs in the scikit-learn library for Python? I have tried the SelectFpr method and am looking for people with experience with different methods.
Feature selection for Support Vector Machines
CC BY-SA 3.0
null
2015-07-26T12:17:09.947
2016-06-16T15:21:02.050
2015-07-26T14:10:04.970
9061
9061
[ "svm", "feature-selection", "scikit-learn" ]
Personally, I like to divide feature selection in two:

- unsupervised feature selection
- supervised feature selection

Unsupervised feature selection covers things like clustering or PCA, where you select the least redundant range of features (or create features with little redundancy). Supervised feature selection covers things like the Lasso, where you select the features with the most predictive power. I personally usually prefer what I call supervised feature selection. So, when using a linear regression, I would select features based on the Lasso. Similar methods exist to induce sparseness in neural networks. But indeed, I don't see how I would go about doing that in a method using kernels, so you are probably better off using what I call unsupervised feature selection. EDIT: you also asked about regularization. I see regularization as helping mostly because we work with finite samples, so the training and testing distributions will always differ somewhat, and you want your model not to overfit. I am not sure it removes the need to avoid selecting features (if you indeed have too many). I think that selecting features (or creating a smaller subset of them) helps by making the features you do have more robust and by preventing the model from learning spurious correlations. So, regularization does help, but I am not sure that it is a complete alternative. But I haven't thought thoroughly enough about this.
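A minimal sketch of the "supervised feature selection" idea (my addition, on synthetic data; an L1-penalised linear SVM stands in for the Lasso here, since the question is about classification):

```
# Minimal sketch: keep only features with non-zero coefficients in an L1-penalised linear model.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=0)

l1_svc = LinearSVC(C=0.1, penalty="l1", dual=False).fit(X, y)   # sparse linear model
selector = SelectFromModel(l1_svc, prefit=True)
X_reduced = selector.transform(X)
print(X.shape, "->", X_reduced.shape)   # number of retained features
```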
How to select the best features for Support Vector Classification
Please read about feature selection. Here you have a bunch of methods:

- Univariate Selection
- Feature Importance
- Correlation Matrix with Heatmap

Check them out and choose the best. You can find a sample implementation at the link: [https://towardsdatascience.com/feature-selection-techniques-in-machine-learning-with-python-f24e7da3f36e](https://towardsdatascience.com/feature-selection-techniques-in-machine-learning-with-python-f24e7da3f36e)
6595
1
6609
null
0
1153
I have a dataset with 261 predictors scraped from a larger set of survey questions. 224 have values which are in a range of scale (some 1-10, some 1-4, some simply binary, all using 0 where no value is given), and the rest are unordered categories. I'm trying to perform classification using these predictors and identify the top n predictors. I am thinking of the following approach:

- Convert the 224 ordered predictors into numeric, centered, and scaled form.
- Run separate modeling (I use caret from R): one using the numeric predictors, another using the remaining 37 categorical predictors (both cross-validated within each modeling exercise).
- Choose the respective best-fitting models modelN and modelC for the numeric and categorical predictors.
- Choose the top n (say 10) predictors from modelN and modelC.
- Combine them in an ensemble model that can handle both numeric and categorical data (say, random forest).
- Choose the top n predictors in the ensemble model.

I am going through this in a roundabout way rather than directly fitting all predictors into an ensemble model, to try and reduce the complexity of the problem first (and because in R, I'm having a problem with too many levels from the predictors). Would this be a valid approach to identifying the n most salient predictors? Any possible issues to mitigate?
Identifying top predictors from a mix of categorical and ordinal data
CC BY-SA 3.0
null
2015-07-27T05:55:57.130
2016-11-30T23:07:45.650
2016-11-30T23:07:45.650
26596
1133
[ "r", "classification", "feature-selection", "categorical-data" ]
Ricky, Loose thoughts: - Depending on the algorithm you intend to use, centering might not be a good idea (e.g. if you go for SVM, centering will destroy sparsity) - I would suggest not to handle ordered / unordered separately, as you are likely to miss interactions that way. If the categorical ones don't have too many possible values, randomForest in R can handle factors. - if that is an issue (as you seem to hint), I think you have two possibilities: binary indicators or response rates - if it's feasible in terms of computational cost, i would convert all factors to binaries (use sparse matrices if necessary) and then try a greedy feature selection. caret, if memory serves, has rfe or somesuch. - if that's too much trouble, try calculating response rates / average values per factor level (I don't see any info whether your problem is classification or regression): you split your set into folds, and then for each fold fit a mixed effects model (e.g. via lme4) on the remainder, using the factor of interest as the main variable. It's a bit of a pain to setup all the cv correctly, but it's the only way to avoid leaking information. Hope this helps, K
Categorical and ordinal feature data representation in regression analysis?
The distinction between ordinal and categorical does matter. If in truth the difference between white and red was drastically different from red and black, your (10,20,30) ordinal model would not have performed well. One hot encoding can learn the relationship between the ordinal values more finely, but throws out the information that the variables are related. Similarly, with insufficient data it is more likely to overfit. Ordinal variables lessen those problems but at the cost of forcing you to define the interval. There are a number of methods for defining the values of your ordinal variables, like rologit.
6604
1
6606
null
0
144
From [http://scikit-learn.org/stable/modules/linear_model.html#bayesian-ridge-regression](http://scikit-learn.org/stable/modules/linear_model.html#bayesian-ridge-regression), they give the Bayesian ridge prior as: $p(w|\lambda) = \mathcal{N}(w|0,\lambda^{-1}{I_{p}})$ There is a variable $I_p$, but it is not explained what $I_p$ refers to. The symbol $\mathcal{N}$ is also unexplained; I'm not sure whether I've guessed correctly, but is that the Gaussian prior as described in the Bayesian regression section above the Bayesian Ridge one?
What does the Ip mean in the Bayesian Ridge Regression formula?
CC BY-SA 3.0
null
2015-07-28T08:53:18.033
2015-07-28T11:13:17.270
2015-07-28T11:07:07.237
21
122
[ "regression" ]
$\mathcal{N}$ does indeed denote a (multivariate) normal / Gaussian distribution. $I_p$ is just an identity matrix of dimension $p$. So this is a matrix with $\lambda^{-1}$ along the diagonal. Read this as the covariance matrix, so this is a spherical Gaussian (0 covariance between different dimensions) where each variable has variance $\lambda^{-1}$.
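For illustration (my addition, with arbitrary values of $p$ and $\lambda$), the prior is just a zero-mean Gaussian whose covariance is the $p \times p$ identity scaled by $1/\lambda$:

```
# Minimal sketch: the prior N(w | 0, lambda^-1 * I_p) as a spherical Gaussian.
import numpy as np

p, lam = 4, 2.0
cov = (1.0 / lam) * np.eye(p)          # lambda^-1 * I_p
w = np.random.multivariate_normal(np.zeros(p), cov)
print(cov)   # lambda^-1 along the diagonal, zeros elsewhere
print(w)     # one sample of the weight vector drawn from the prior
```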
What does a negative coefficient of determination mean for evaluating ridge regression?
A negative value means you're getting a terrible fit - which makes sense if you create a test set that doesn't have the same distribution as the training set. From the [sklearn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html#sklearn.linear_model.Ridge.score): > The coefficient $R^2$ is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a $R^2$ score of 0.0.
6610
1
6615
null
3
364
I worked at a startup/medium sized company and I am concerned that we may be over-engineering one of our products. In essence, we will be consuming real-time coordinates from vehicles and users and performing analytics and machine learning on this incoming data. This processing can be rather intensive as we try predict the ETAs of this entities matched to historical data and static paths. The approach they want to take is using the latest and most powerful technology stack, that being Hadoop, Storm etc to process these coordinates. Problem is that no-one in the team has implemented such a system and only has had the last month or so to skill up on it. My belief is that a safer approach would be to use NoSQL storage such as "Azure Table Storage" in an event based system to achieve the same result in less time. To me it's the agile approach, as this is a system that we are familiar with. Then if the demand warrants it, we can look at implementing Hadoop in the future. I haven't done a significant amount of research in this field, so would appreciate your input. Questions: - How many tracking entities (sending coordinates every 10 seconds) would warrant Hadoop? - Would it be easy to initially start off with a simpler approach such as "Azure Table Storage" then onto Hadoop at a later point? - If you had to estimate, how long would you say a team of 3 developers would take to implement a basic Hadoop/Storm system? - Is Hadoop necessary to invest from the get go as we will quickly incur major costs? I know these are vague questions, but I want to make sure we aren't going to invest unnecessary resources with a deadline coming up.
Is our data "Big Data" (Startup)
CC BY-SA 3.0
null
2015-07-28T14:55:13.340
2015-07-31T06:44:49.997
null
null
10940
[ "machine-learning", "data-mining", "bigdata", "statistics", "apache-hadoop" ]
Yes, this is a how-long-is-a-piece-of-string question. I think it's good to beware of over-engineering, while also making sure you engineer for where you think you'll be in a year. First I'd suggest you distinguish between processing and storage. Storm is a (stream) processing framework; NoSQL databases are a storage paradigm. These are not alternatives. The Hadoop ecosystem has HBase for NoSQL; I suspect Azure has some kind of stream processing story. The bigger difference in your two alternatives is consuming a cloud provider's ecosystem vs Hadoop. The upside to Azure, or AWS, or GCE, is that these services optimize for integrating with each other, with billing, machine management, etc. The downside is being locked in to the cloud provider; you can't run Azure stuff anywhere but Azure. Hadoop takes more work to integrate since it's really a confederation of sometimes loosely-related projects. You're investing in both a distribution, and a place to run that distribution. But, you get a lot less lock-in, and probably more easy access to talent, and a broader choice of tools. The Azure road is also a "big data" solution in that it has a lot of the scalability properties you want for big data, and the complexity as well. It does not strike me as an easier route. Do you need to invest in distributed/cloud anything at this scale? given your IoT-themed use case, I believe you will need to soon, if not now, so yes. You're not talking about gigabytes, but many terabytes in just the first year. I'd give a fresh team 6-12 months to fully productionize something based on either of these platforms. That can certainly be staged as a POC, followed by more elaborate engineering.
What is Big Data?
If I want to quote from [Wikipedia](https://en.wikipedia.org/wiki/Big_data), Big data is data sets that are so voluminous and complex that traditional data-processing application software are inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy and data source. There are five concepts associated with big data: volume, variety, velocity and, the recently added, veracity and value. Big data can be described by the following characteristics: - Volume The quantity of generated and stored data. The size of the data determines the value and potential insight, and whether it can be considered big data or not. - Variety The type and nature of the data. This helps people who analyze it to effectively use the resulting insight. Big data draws from text, images, audio, video; plus it completes missing pieces through data fusion. - Velocity In this context, the speed at which the data is generated and processed to meet the demands and challenges that lie in the path of growth and development. Big data is often available in real-time. - Variability Inconsistency of the data set can hamper processes to handle and manage it. - Veracity The data quality of captured data can vary greatly, affecting the accurate analysis. --- To me, big data is highly connected to the deep-learning era. The reason is that during past decades, people could make good descriptions and models of data using machine-learning and data-mining but because everyday new data is coming out, social networks increase rapidly and digital gadgets' popularity is increasing among different nations, the demand for processing data and converting them to information and knowledge is increasing. If we want to use previous techniques to gather information from raw data, it will take too much time, if possible, to reach to appropriate results. In big data and deep-learning era, we need more complicated algorithms and more powerful hardware to deal with difficulties. You can also take a look at [here](https://www.sas.com/en_us/insights/big-data/what-is-big-data.html) and [here](http://searchcloudcomputing.techtarget.com/definition/big-data-Big-Data) which have relatively different perspective. Big data is a term that describes the large volume of data – both structured and unstructured – that inundates a business on a day-to-day basis. But it’s not the amount of data that’s important. It’s what organizations do with the data that matters. Big data can be analyzed for insights that lead to better decisions and strategic business moves.
6631
1
6638
null
4
46
Another post where I don't know enough terminology to describe things efficiently. For the comments, please suggest some tags and keywords I can add to this post to make it better. Say I have a 2D data structure where 'orientation' doesn't matter. The examples I ran into:

- The state of a 2048 game. In terms of symmetry groups this would be D4 / D8, except that an operation doesn't yield an identical state, it just yields another state that has the same solution.
- Images of plankton or galaxies (without background). Somewhat similar to the above, except that any rotation (not just 90°) yields an equally valid image (and one might take scale into account, but let's forget about that).

In both cases I've wanted to transform all these equivalent states/images to remove all but one of the equivalent images. To illustrate with two that worked:

- I can use image moments M10 and M01 to transform horizontally and vertically mirrored equivalent data. E.g. apply horizontal mirroring iff it makes M10 bigger. This would transform a 2048 state and its horizontal mirror image to the same state.
- I can use the eigenvector of the covariance matrix which has the largest eigenvalue as the orientation. Then I can rotate the image to align this eigenvector with some predetermined axis (e.g. horizontally).

That still leaves a lot of operations though (diagonal mirroring, rotations around the center, inversion). And these operations do not commute (D8 is non-Abelian). Is there any comprehensive approach? The reason I want to do this is to help machine learning methods by removing variance that isn't actually meaningful. Hopefully that makes sure they don't have to learn these equivalences, so they possibly need less training data (and time).
Alignment of square nonorientable images/data
CC BY-SA 3.0
null
2015-07-30T14:07:31.163
2015-07-31T17:45:31.420
null
null
10907
[ "data-cleaning", "processing" ]
Fun with Group Theory! There are only 8 unique rotation-inversion operations for a square matrix. The four rotation operators are `(0,90,180,270)`. Further rotation or rotation in the reverse direction is the same as these four. Two successive rotations just yields one of the rotation operators, so we will only consider these four rotations applied one time. The five inversion operators are `(0,/,\,|,-)`. Two successive inversions just yields a rotation, so we will only allow for a single inversion. We can thus derive all operators by combining these two vectors, which yields `4*5=20` possible states. ``` (0,90,180,270,0/,90/,180/,270/,0\,90\,180\,270\,0|,90|,180|,270|,0-,90-,180-,270-) ``` But there is still symmetry to be exploited in the inversion operators. You can probably intuit that the 4x4 matrix only has 8 final states: inverted or not and rotated by (0,90,180,270). It turns out that you can arrive at any possible state involving an inversion using any of the other inversion operators followed by one of the rotations. So we only need to retain a single inversion operator! So the final set of 8 orthogonal operations are: ``` (0,90,180,270,0|,90|,180|,270|) ``` If there is any symmetry in the matrix's members then some of the resulting states may be degenerate. In terms of mapping possible states into a ground state, it makes sense to apply a set of successive deterministic rules to determine the ground state orientation. I suggest finding the largest corner square and locating it in the lower right corner. If there are multiple candidates with equally large values in the corner square then use the next closest square as a tie breaker. There are 16 squares, so you can eventually break all ties or declare degeneracy. There is one remaining `\` operation that you can decide to apply in order to locate the larger of the two squares adjacent to the lower right corner at the bottom. Again, you can use squares adjacent to these as tie breakers.
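A minimal sketch of the enumerate-and-canonicalise idea (my own addition; it picks the lexicographically smallest orientation as the ground state instead of the corner-square rule described above):

```
# Minimal sketch: enumerate the 8 rotation/flip states of a square array and
# pick a deterministic canonical form, so all equivalent states map to one state.
import numpy as np

def canonical_form(m):
    candidates = []
    for k in range(4):                     # rotations 0, 90, 180, 270 degrees
        r = np.rot90(m, k)
        candidates.append(r)               # rotation only
        candidates.append(np.fliplr(r))    # rotation followed by one inversion
    return min(candidates, key=lambda a: tuple(a.flatten()))

m = np.arange(16).reshape(4, 4)
# Any symmetry of m canonicalises to the same ground state:
print(np.array_equal(canonical_form(m), canonical_form(np.rot90(np.fliplr(m), 3))))  # True
```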
Normalising Image Data
The method is the same as it is for traditional ML problems, i.e. you need to apply the same mean and standard deviation to the test data as you do for the training data. The mean and standard deviation used are derived from the training data, but depending on the type of problem and data used you can also use the values derived from the ImageNet dataset.
6639
1
6644
null
26
10937
I'm working through a question from the [online book](http://neuralnetworksanddeeplearning.com/chap1.html). I can understand that if the additional output layer is of 5 output neurons, I could probably set bias at 0.5 and weight of 0.5 each for the previous layer. But the question now ask for a new layer of four output neurons - which is more than enough to represent 10 possible outputs at $2^{4}$. Can someone walk me through the steps involved in understanding and solving this problem? The exercise question: There is a way of determining the bitwise representation of a digit by adding an extra layer to the three-layer network above. The extra layer converts the output from the previous layer into a binary representation, as illustrated in the figure below. Find a set of weights and biases for the new output layer. Assume that the first 3 layers of neurons are such that the correct output in the third layer (i.e., the old output layer) has activation at least 0.99, and incorrect outputs have activation less than 0.01. [](https://i.stack.imgur.com/OqQ6N.png)
Extra output layer in a neural network (Decimal to binary)
CC BY-SA 4.0
null
2015-07-31T00:25:36.347
2020-11-07T00:19:45.413
2020-08-05T11:06:35.280
98307
10990
[ "neural-network" ]
The question is asking you to make the following mapping between old representation and new representation: ``` Represent Old New 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 1 2 0 0 1 0 0 0 0 0 0 0 0 0 1 0 3 0 0 0 1 0 0 0 0 0 0 0 0 1 1 4 0 0 0 0 1 0 0 0 0 0 0 1 0 0 5 0 0 0 0 0 1 0 0 0 0 0 1 0 1 6 0 0 0 0 0 0 1 0 0 0 0 1 1 0 7 0 0 0 0 0 0 0 1 0 0 0 1 1 1 8 0 0 0 0 0 0 0 0 1 0 1 0 0 0 9 0 0 0 0 0 0 0 0 0 1 1 0 0 1 ``` Because the old output layer has a simple form, this is quite easy to achieve. Each output neuron should have a positive weight between itself and output neurons which should be on to represent it, and a negative weight between itself and output neurons that should be off. The values should combine to be large enough to cleanly switch on or off, so I would use largish weights, such as +10 and -10. If you have sigmoid activations here, the bias is not that relevant. You just want to simply saturate each neuron towards on or off. The question has allowed you to assume very clear signals in the old output layer. So taking example of representing a 3 and using zero-indexing for the neurons in the order I am showing them (these options are not set in the question), I might have weights going from activation of old output $i=3$, $A_3^{Old}$ to logit of new outputs $Z_j^{New}$, where $Z_j^{New} = \Sigma_{i=0}^{i=9} W_{ij} * A_i^{Old}$ as follows: $$W_{3,0} = -10$$ $$W_{3,1} = -10$$ $$W_{3,2} = +10$$ $$W_{3,3} = +10$$ This should clearly produce close to `0 0 1 1` output when only the old output layer's neuron representing a "3" is active. In the question, you can assume 0.99 activation of one neuron and <0.01 for competing ones in the old layer. So, if you use the same magnitude of weights throughout, then relatively small values coming from +-0.1 (0.01 * 10) from the other old layer activation values will not seriously affect the +-9.9 value, and the outputs in the new layer will be saturated at very close to either 0 or 1.
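As a numerical check (my addition, not part of the original answer), the construction can be verified directly; the ±10 weights and the bit ordering follow the scheme above, and biases are set to 0:

```
# Minimal sketch: an extra layer with +-10 weights converts a one-hot digit into its binary code.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# W[i, j]: +10 if bit j (most significant first) of digit i is 1, else -10
W = np.array([[10 if (i >> (3 - j)) & 1 else -10 for j in range(4)] for i in range(10)])

old = np.full(10, 0.01)
old[3] = 0.99                      # old output layer confidently signals a "3"
new = sigmoid(old @ W)
print(np.round(new))               # -> [0. 0. 1. 1.]
```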
Neural network for Multiple integer output
welcome to the site! I think the key word you need to know that defines your task is: multi-target classification or regression. You can find an explanation and some possible techniques at this [link](https://towardsdatascience.com/regression-models-with-multiple-target-variables-8baa75aacd). For neural networks: the key is to remember that the last layer should have linear activations (i.e. no activation at all). As per your requirements, the shape of the input layer would be a vector (135,) and the output (132,). The usual loss function used for regression problems is mean squared error (MSE). Here's an example of multidimensional regression using Keras (this snippet happens to use mean absolute error, which also works for regression):
```
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout

model = Sequential()
model.add(Dense(200, input_shape=(135,)))   # input is a vector of 135 features
model.add(Activation('relu'))
model.add(Dense(200))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(132))                       # linear output layer for the 132 targets
model.compile(loss='mean_absolute_error', optimizer='Adam')
```
6642
1
9257
null
3
1254
I have a predictive model which I trained on a training set. I have written it in R. Now I want to deploy it as a web service so anyone can just input the data into it and get the output from the predictive model. I wanted to use Azure ML for deploying. I wanted to know whether I can drag and drop my already created/custom trained model to Azure ML studio instead of re-training it there ? I know we can train and save models in AML Studio but I am not sure about adding already created models and using them in AML solution. Help regarding this will be appreciated.
Custom trained model in Azure ML
CC BY-SA 3.0
null
2015-07-31T07:08:07.880
2016-05-27T16:21:48.643
2015-12-07T18:34:20.597
843
8016
[ "machine-learning", "r", "predictive-modeling", "azure-ml" ]
No, you cannot. I had a discussion with someone in their development/support team on the MSDN forums, and currently they don't support 'drag and drop' type functionality. However, you CAN serialize the model output and then deserialize it in Azure. [](https://i.stack.imgur.com/QIILN.png) Note that the answer in the image is a bit outdated and there is the 'Create R Script' module to replace the serialization-deserialization steps within Azure. However, I believe you can still serialize outside Azure (in your desktop version of R) and deserialize it in Azure. Link to the conversation in the image: [https://social.msdn.microsoft.com/Forums/azure/en-US/5944c342-79ac-4ada-8006-8edf40f36ee1/r-script-as-a-trained-model?forum=MachineLearning](https://social.msdn.microsoft.com/Forums/azure/en-US/5944c342-79ac-4ada-8006-8edf40f36ee1/r-script-as-a-trained-model?forum=MachineLearning)
How to bypass ID column without being used in the training model but have it as output - Azure ML
You have two options to do this, one less elegant than the other: - You can drop whatever column you don't want to use and add them together after getting the scores. Though you might need to reorder columns etc. [](https://i.stack.imgur.com/4OOCc.png) Or... - There is a block called Edit Metadata which can be used to do what you need. Select whatever columns from the column selector you don't want to use and use the option Fields > Clear Feature from the block properties. These columns won't be used in the calculations but will be present when you visualise the Score Model output together with the rest of the columns and the scores. Moreover, columns in that output statistics will be labelled as Feature type if they were used in the calculations. So you straightaway know which columns were actually used in the calculations or not. Block help is quite useful here. [](https://i.stack.imgur.com/YXsZ7.png) [](https://i.stack.imgur.com/UykMc.png)
6643
1
6650
null
5
6866
I am new to deep learning. I am running a MacBook Pro with Yosemite (upgraded from Snow Leopard). I don't have a CUDA-enabled GPU, and running the code on the CPU is extremely slow. I heard that I can buy some instances on AWS, but it seems that they don't support macOS. My question is: to continue with deep learning, do I need to purchase a graphics card? Or is there another solution? I don't want to spend too much on this...
Do I need to buy a NVIDIA graphic card to run deep learning algorithm?
CC BY-SA 3.0
null
2015-07-31T08:21:49.083
2017-05-29T04:59:32.770
2017-05-29T04:59:32.770
8432
10994
[ "deep-learning" ]
I would recommend familiarizing yourself with [AWS spot instances](https://www.youtube.com/watch?v=Py0VInjRSBE). It's the most practical solution I can think of for your problem, and [it works your computer too](https://www.youtube.com/watch?v=NdR03RpCpac). So, no you don't have to buy an Nvidia card, but as of today you will want to use one since almost all the solutions rely on them.
Is deep learning a must in a Data Science MSc programme?
No, it's not problematic. Most data scientists do not need or use deep learning. Deep learning is very popular right now, but that does not mean it's widely used. Deep learning can lead to substantial overfitting on small to medium datasets (I'm arbitrarily going to say that means less than 2 GB), which are the sizes that most people have. Deep learning is primarily used for object recognition in images, or text/speech models. If you're not doing either of these two things, you probably don't need to use DL.
6648
1
6651
null
5
2648
I have some features and I am using Weka to classify my instances. For example I have: `Number of adj number of adverb number of punctuation` in my feature set. However, I would like to know the contribution of each feature in the feature set. So what metrics or parameters are helpful to get the contribution of features?
Contributions of each feature in classification?
CC BY-SA 4.0
null
2015-07-27T12:59:05.890
2019-06-07T21:52:01.317
2019-06-07T21:52:01.317
29169
11072
[ "machine-learning", "nlp" ]
This is called feature ranking, which is closely related to [feature selection](https://en.wikipedia.org/wiki/Feature_selection). - feature ranking = determining the importance of any individual feature - feature selection = selecting a subset of relevant features for use in model construction. So if you are able to ranked features, you can use it to select features, and if you can select a subset of useful features, you've done at least a partial ranking by removing the useless ones. This [Wikipedia page](https://en.wikipedia.org/wiki/Feature_selection) and this [Quora post](https://www.quora.com/How-do-I-perform-feature-selection?share=1) should give some ideas. The distinction filter methods vs. wrapper based methods vs. embedded methods is the most common one. --- One straightforward approximate way is to use [feature importance with forests of trees](http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html): [](https://i.stack.imgur.com/r7Io5.png) Other common ways: - recursive feature elimination. - stepwise regression (or LARS Lasso). If you use scikit-learn, check out [module-sklearn.feature_selection](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.feature_selection). I'd guess Weka has some similar functions.
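A minimal sketch of the tree-based ranking approach mentioned above (my addition, on synthetic data):

```
# Minimal sketch: rank features by impurity-based importances from a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=3, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(enumerate(forest.feature_importances_), key=lambda t: t[1], reverse=True)
for idx, score in ranking:
    print(f"feature {idx}: {score:.3f}")
```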
feature importance after classification
You may use permutation importance:

- Get your baseline score
- Permute a feature's values (you may replace them with random values)
- Calculate the score again
- The dip is the feature importance for that feature
- Repeat for all the features

> ....Breiman and Cutler also described permutation importance, which measures the importance of a feature as follows. Record a baseline accuracy (classifier) or R^2 score (regressor) by passing a validation set or the out-of-bag (OOB) samples through the Random Forest. Permute the column values of a single predictor feature and then pass all test samples back through the Random Forest and recompute the accuracy or R^2.

To check the importance for an individual class (i.e. 0/1), extrapolate the same approach to check whether the increase is larger for false positives or false negatives. Read [Beware Default Random Forest Importances](https://explained.ai/rf-importance/) for more explanation. A few other quotes from the page:

> Any machine learning model can use the strategy of permuting columns to compute feature importances. This fact is under-appreciated in academia and industry.

> The permutation mechanism is much more computationally expensive than the mean decrease in impurity mechanism, but the results are more reliable. The permutation importance strategy does not require retraining the model after permuting each column; we just have to re-run the perturbed test samples through the already-trained model.
6676
1
9794
null
28
45619
A way to train a Logistic Regression is by using stochastic gradient descent, which scikit-learn offers an interface to. What I would like to do is take a scikit-learn's [SGDClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html) and have it score the same as a Logistic Regression [here](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html). However, I must be missing some machine learning enhancements, since my scores are not equivalent. This is my current code. What am I missing on the SGDClassifier which would have it produce the same results as a Logistic Regression? ``` from sklearn import datasets from sklearn.linear_model import LogisticRegression from sklearn.linear_model import SGDClassifier import numpy as np import pandas as pd from sklearn.cross_validation import KFold from sklearn.metrics import accuracy_score # Note that the iris dataset is available in sklearn by default. # This data is also conveniently preprocessed. iris = datasets.load_iris() X = iris["data"] Y = iris["target"] numFolds = 10 kf = KFold(len(X), numFolds, shuffle=True) # These are "Class objects". For each Class, find the AUC through # 10 fold cross validation. Models = [LogisticRegression, SGDClassifier] params = [{}, {"loss": "log", "penalty": "l2"}] for param, Model in zip(params, Models): total = 0 for train_indices, test_indices in kf: train_X = X[train_indices, :]; train_Y = Y[train_indices] test_X = X[test_indices, :]; test_Y = Y[test_indices] reg = Model(**param) reg.fit(train_X, train_Y) predictions = reg.predict(test_X) total += accuracy_score(test_Y, predictions) accuracy = total / numFolds print "Accuracy score of {0}: {1}".format(Model.__name__, accuracy) ``` My output: ``` Accuracy score of LogisticRegression: 0.946666666667 Accuracy score of SGDClassifier: 0.76 ```
Scikit-learn: Getting SGDClassifier to predict as well as a Logistic Regression
CC BY-SA 3.0
null
2015-08-04T08:11:30.990
2018-02-16T07:01:59.707
null
null
8774
[ "python", "logistic-regression", "scikit-learn", "gradient-descent" ]
The comments about iteration number are spot on. The default `SGDClassifier` `n_iter` is `5` meaning you do `5 * num_rows` steps in weight space. The [sklearn rule of thumb](http://scikit-learn.org/stable/modules/sgd.html#tips-on-practical-use) is ~ 1 million steps for typical data. For your example, just set it to 1000 and it might reach tolerance first. Your accuracy is lower with `SGDClassifier` because it's hitting iteration limit before tolerance so you are "early stopping" Modifying your code quick and dirty I get: ``` # Added n_iter here params = [{}, {"loss": "log", "penalty": "l2", 'n_iter':1000}] for param, Model in zip(params, Models): total = 0 for train_indices, test_indices in kf: train_X = X[train_indices, :]; train_Y = Y[train_indices] test_X = X[test_indices, :]; test_Y = Y[test_indices] reg = Model(**param) reg.fit(train_X, train_Y) predictions = reg.predict(test_X) total += accuracy_score(test_Y, predictions) accuracy = total / numFolds print "Accuracy score of {0}: {1}".format(Model.__name__, accuracy) Accuracy score of LogisticRegression: 0.96 Accuracy score of SGDClassifier: 0.96 ```
How to change Linear model in SGDClassifier scikit learn?
The specific linear classifier can be defined with the loss function argument. The options are { ‘hinge’, ‘log’, ‘modified_huber’, ‘squared_hinge’, ‘perceptron’}. For example, [hinge loss is equivalent to a linear SVM](https://scikit-learn.org/stable/modules/sgd.html#classification) and [log loss is equivalent to Logistic Regression](https://scikit-learn.org/stable/modules/sgd.html#mathematical-formulation).
6679
1
6704
null
1
336
I'm working on comparing bacterial metabolic models. Each model has a set of metabolites (around 2000) and their concentrations for 200 time points. I'm in the process of comparing the models to cluster them based on their similarity. One method I followed is a pairwise comparison of each of the metabolite pairs in two models using Euclidean distance. Below is what my data looks like. This is a [sample data file](http://1drv.ms/1JD8PHX). [](https://i.stack.imgur.com/yvoPj.jpg) I computed the pairwise Euclidean distance between Met1 from Model A and Met1 from Model B. I likewise computed the distances for all the common metabolites between the 2 models (Met4 in Model A and Met4 in Model B) and summed up the distances to get a distance (dissimilarity) between the two models. Similarly, I computed the dissimilarity matrix for all the models and used hierarchical clustering to cluster them. As mentioned above, I now want to compute the dissimilarity of the models using the Discrete Wavelet Transformation as my distance measure. I would like to know how to use the Discrete Wavelet Transformation to compute a dissimilarity distance between 2 time series, and hence for my models. Previously I used DWT as a distance measure with DBSCAN for clustering metabolites in one model according to their behavior. It worked fine.
R: Comparing dissimilarity between metabolic models with discrete wavelet transformation
CC BY-SA 3.0
null
2015-08-04T10:48:20.727
2015-08-05T14:35:57.897
null
null
11063
[ "r", "clustering", "time-series", "bioinformatics" ]
Take a look at the `TSclust` package. Here is how you would apply it to your sample data.
```
require(TSclust)

#read in the data
model_a <- read.csv("~/Desktop/Model A.csv", header = TRUE, stringsAsFactors = FALSE)
model_b <- read.csv("~/Desktop/Model B.csv", header = TRUE, stringsAsFactors = FALSE)

#data must be in rows rather than columns
model_a <- as.data.frame(t(model_a))
model_b <- as.data.frame(t(model_b))

#calculate dissimilarities between metabolites in models A and B
met1_DWT.diss <- as.numeric(diss.DWT(rbind(model_a['Met1', ], model_b['Met1', ])))
met1_DWT.diss
[1] 90.80332

met2_DWT.diss <- as.numeric(diss.DWT(rbind(model_a['Met2', ], model_b['Met2', ])))
met2_DWT.diss
[1] 1.499241
```
difference between scaling/normalizing data at a specific step
Case I - With a single scaling step, you might leak test information into the training set. In this approach you have a common min/max; otherwise there would have been two separate pairs. See the plot for one of the features of the Iris dataset. Also, we don't scale the target, but I see this in your code. [](https://i.stack.imgur.com/yJOBc.png) Case II - This is fine, but you should also consider the online case, where you will not have a test set at hand when new data arrive to be scaled. Case III - This is a better and more agnostic approach. Your code implementation is incorrect, as suggested in the comment.
6680
1
6685
null
1
343
Is it possible to use a sequence of numbers as one feature? For example, using libsvm data format: ``` <label> <index1>:<value1> <index2>:<value2> +1 1:123.02 2:1.23 3:5.45,2.22,6.76 +1 1:120.12 2:2.23 3:4.98,2.55,4.45 -1 1:199.99 2:2.13 3:4.98,2.22,6.98 ... ``` Is there any special machine learning algorithm for this kind of data?
Sequence of numbers as single feature
CC BY-SA 3.0
null
2015-08-04T12:36:20.967
2017-03-06T00:01:14.520
2017-03-05T22:39:52.887
28347
11064
[ "machine-learning", "dataset", "algorithms" ]
2 solutions:

- You aggregate each sequence of numbers into a single number, which you then use as a feature. There exist plenty of aggregation functions, such as those derived from descriptive statistics (root-mean-square, kurtosis, skewness, max, min, duration, standard deviation, crest factor, mean), or more specific aggregations such as Fourier transforms or wavelet transforms. A small sketch of this idea follows below.
- You use some model that accepts sequences as input. Sequences may be of variable length. Examples of such models: recurrent neural networks, dynamic Bayesian networks.
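Here is the promised sketch of solution 1 (my addition; the sequences and the chosen aggregates are only illustrative):

```
# Minimal sketch: collapse each variable-length sequence into a fixed set of
# aggregate features that any standard classifier can consume.
import numpy as np

def aggregate(seq):
    seq = np.asarray(seq, dtype=float)
    return {
        "mean": seq.mean(),
        "std": seq.std(),
        "min": seq.min(),
        "max": seq.max(),
        "rms": np.sqrt(np.mean(seq ** 2)),
        "skew": float(((seq - seq.mean()) ** 3).mean() / (seq.std() ** 3 + 1e-12)),
    }

print(aggregate([5.45, 2.22, 6.76]))
print(aggregate([4.98, 2.55, 4.45, 3.10]))   # a different length works too
```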
Predicting Sequence based on Tabular Features
I think you have 2 different ways to approach this problem, but the variable number of features and targets makes it challenging. I am not aware of any well-established ways to treat such issues, but I find the problem interesting and would like to share my opinion. The first way is to define this as a classification problem. The order of cars matters, in the sense that it is less of a problem if you classify a car in class 1 as class 2 than if you identify it as, say, class 15 in a 20-car case. This is an "Ordinal Classification" problem and, as far as I know, there is no established loss function for this. But someone implemented their own and [shared it](https://github.com/JHart96/keras_ordinal_categorical_crossentropy); I never used it, so I don't know how it works. Of course, this is not a must, as you can use a regular classifier loss function. Now you have to deal with a variable number of cars. If you have a maximum possible number of cars present at each time, you can use that to define the number of features in your neural network. For the cases where fewer cars are present, you can set the features associated with the non-existent cars to a number (for example -1), and put a constraint on the loss function such that cars with such features are ordered after the present cars only, and drop them at the end. This might sound complicated; it certainly is to type it here, so I hope I was able to explain it. The second possible way is to define a clustering problem. In order to deal with variable car numbers in each case, you can define the total number of cars as an additional feature. In the example you give above they all will have a car number feature = 3. Then use any clustering algorithm you can think of, such as KNN with K = max number of cars. But this also has drawbacks: when 3 cars are present, cars can be clustered as 1,2,3; but they can just as easily be clustered into 4,6,9; or even 1,1,1. And I have no idea how you would handle such an issue. The classification approach defined above is less likely to cause such problems, but it is much more difficult to implement. I know these options are far from optimal, so I really hope someone else comes up with a better answer. Good luck.
6694
1
10408
null
11
3545
I have been trying to understand reinforcement learning for quite some time, but somehow I am not able to visualize how to write a program for reinforcement learning to solve a grid-world problem. Can you suggest some textbooks that would help me build a clear understanding of reinforcement learning?
Books on Reinforcement Learning
CC BY-SA 3.0
null
2015-08-05T05:58:44.543
2021-02-13T08:11:34.423
2016-01-18T14:36:36.497
8820
8013
[ "machine-learning", "books", "reinforcement-learning" ]
Here you have some good references on Reinforcement Learning: Classic Sutton RS, Barto AG. Reinforcement Learning: An Introduction. Cambridge, Mass: A Bradford Book; 1998. 322 p. The draft for the second edition is available for free: [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/the-book-2nd.html) Russell/Norvig Chapter 21: Russell SJ, Norvig P, Davis E. Artificial intelligence: a modern approach. Upper Saddle River, NJ: Prentice Hall; 2010. More technical Szepesvári C. Algorithms for reinforcement learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. 2010;4(1):1–103. [Algorithms of Reinforcement Learning | Csaba Szepesvári](http://www.ualberta.ca/%7Eszepesva/RLBook.html) Bertsekas DP. Dynamic Programming and Optimal Control. 3rd edition. Belmont, Mass.: Athena Scientific; 2007. 1270 p. Chapter 6, vol 2 is available for free: [Dynamic Programming and Optimal Control 3rd Edition, Volume II | Massachusetts Institute of Technology](http://web.mit.edu/dimitrib/www/dpchapter.pdf) For more recent developments Wiering M, van Otterlo M, editors. Reinforcement Learning. Berlin, Heidelberg: Springer Berlin Heidelberg; 2012 Available from: [Reinforcement Learning | SpringerLink](http://link.springer.com/10.1007/978-3-642-27645-3) Kochenderfer MJ, Amato C, Chowdhary G, How JP, Reynolds HJD, Thornton JR, et al. Decision Making Under Uncertainty: Theory and Application. 1 edition. Cambridge, Massachusetts: The MIT Press; 2015. 352 p. Multi-agent reinforcement learning Buşoniu L, Babuška R, Schutter BD. Multi-agent Reinforcement Learning: An Overview. In: Srinivasan D, Jain LC, editors. Innovations in Multi-Agent Systems and Applications - 1 . Springer Berlin Heidelberg; 2010 p. 183–221. Available from: [Multi-agent Reinforcement Learning: An Overview](http://link.springer.com/chapter/10.1007/978-3-642-14435-6_7) Schwartz HM. Multi-agent machine learning : a reinforcement approach. Hoboken, New Jersey: Wiley; 2014. Videos / Courses I would also suggest David Silver course in YouTube: [RL Course by David Silver](https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZBiG_XpjnPrSNw-1XQaM_gB)
Reinforcement learning algorithms
As your question was focused on reinforcement learning with RStudio I.e., in R language BOOKS - Hands on Reinforcement learning with R You Tube - Reinforcement Learn Techniques with R, packtpub tutorial series Reinforcement Learn Techniques with R : What Reinforcement Learning Can Do for You | packtpub.com Your First Reinforcement Learning Program Programming the Environment | packtpub.com - Discover Algorithms for Reward-Based Learning in R | packtpub.com The Course Overview First model based program: Policy Evaluation and Iteration Programming model free environment using Monte Carlo & Q- learning Building Actions, Rewards, Punishments using Simulated Annealing Alt to Q-Learning - Hands on Reinforcement learning with R | code in action (packt) Markov decision process in action Multi-Armed bandit models Dynamic programming for optimal policies Monte Carlo methods for prediction Temporal difference learning Reinforcement learning in Game applications MAB for financial engineering TD learning in healthcare Exploring deep reinforcement learning methods Deep Q learning using keras PDF - Reinforcement Learning in R - Reinforcement Learning in R by Nicolas Pröllochs, Stefan Feuerriegel Tutorial links - HOW TO PERFORM REINFORCEMENT LEARNING WITH R - Reinforcement Learning (Q-learning) An Introduction (Part 1) Implementation using R (Part 2) COURSES - Reinforcement learning with R:Algorithms-Agents-Environment-Udemy enter link description here - Reinforcement learning specialisation Coursera Lecture NOTES - Reinforcement learning R slides - Algorithms for Reinforcement Learning OTHER GENERAL RESOURCES (not specific to R) - The chapter by Bertsekas - Mastering Reinforcement Learning with Python: Build next-generation, self-learning models using reinforcement learning techniques and best practices - Reinforcement Learning Algorithms with Python: Learn, understand, and develop smart algorithms for addressing AI challenges - Python Reinforcement Learning Projects: Eight hands-on projects exploring reinforcement learning algorithms using TensorFlow - Reinforcement Learning: Industrial Applications of Intelligent Agents - Handbook of Reinforcement Learning and Control: 325 (Studies in Systems, Decision, and Control) - Algorithms for Reinforcement Learning: Csaba Szepesvari. Nice compendium of ready to be implemented algorithms. - Reinforcement Learning and Dynamic Programming using Function Approximators. Busoniu, Lucian; Robert Babuska ; Bart De Schutter ; Damien Ernst (2010). This is a very practical book that explains some state-of-the-art algorithms (i.e., useful for real world problems) like fitted-Q-iteration and its variations. - Reinforcement Learning: State-of-the-Art. Vol. 12 of Adaptation, Learning and Optimization. Wiering, M., van Otterlo, M. (Eds.), 2012. Springer, Berlin. In Sutton's words "This book is a valuable resource for students wanting to
go beyond the older textbooks and for researchers wanting to easily catch up with
recent developments". - Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles : Draguna Vrabie, Kyriakos G. Vamvoudakis , Frank L. Lewis. I am not familiar with this one, but I have seen it recommended. - Markov Decision Processes in Artificial Intelligence, Sigaud O. & Buffet O. editors, ISTE Ld., Wiley and Sons Inc, 2010. I definitely suggest the books by Sutton and Barto as an excellent intro, the chapter by Bertsekas for getting a solid theoretical background and the book by Busoniu et al. for practical algorithms that can solve some non-toy problems. I also find useful the book by Szepesvari as a quick reference for understanding an comparing algorithms. There are also several good specialized monographs and surveys on the topic, some of these are: - "From Bandits to Monte-Carlo Tree Search: The Optimistic Principle Applied to Optimization and Planning" by Remi Munos (New trends on Machine Learning). This monograph covers important nonconvex optimistic optimization methods that can be applied for policy search. - "Reinforcement Learning in Robotics: A Survey" by J. Kober, J. A. Bagnell and J. Peters. - "A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning" by A. Geramifard, T. J. Walsh, S. Tllex, G. Chowdhary, N. Roy and J. P. How (Foundations and Trends in Machine Learning). - "A Survey on Policy Search for Robotic" by Newmann and Peters (Foundations and Trends in Machine Learning). - markov decision process - Algorithms for Reinforcement Learning (Synthesis Lectures on Artificial Intelligence and Machine Learning) - Neuro-Dynamic Programming (Optimization and Neural Computation Series, 3)
6700
1
6707
null
3
176
Considering the application of reinforcement learning (a dynamic programming method performing value iteration) on a grid world: in each iteration, I go through each cell of the grid and update its value depending on its present value and the present value of the state reached by taking an action from that state. Now - How long do I keep updating the value of each cell? Should I keep updating until the change between the previous and the present value function is minimal? I am not able to understand how to implement the stopping mechanism in the grid-world scenario (discount not considered). - Is the value function the set of values of all the cells in the grid world?
When to stop calculating values of each cell in the grid in Reinforcement Learning(dynamic programming) applied on gridworld
CC BY-SA 3.0
null
2015-08-05T10:27:51.370
2016-01-18T14:37:01.720
2016-01-18T14:37:01.720
8820
8013
[ "machine-learning", "markov-process", "reinforcement-learning" ]
1 - You should set a threshold (a hyper-parameter) that will allow you to quit the loop. Let V be the values for all states s and V' the new values after a value-iteration sweep. If $\sum_s|V(s) - V’(s)| \le \text{threshold}$, quit. 2 - Yes, V is a function defined over every cell in the grid, because you need to update every cell. Hope it helps.
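A minimal sketch of this stopping rule on a toy 1-D grid (my own addition; the rewards, discount and threshold are arbitrary):

```
# Minimal sketch: sweep all states, track the total change in V, and quit once it
# drops below a threshold. Toy 1-D chain with a reward of 1 for reaching the goal.
import numpy as np

n_states, gamma, theta = 5, 0.9, 1e-6
V = np.zeros(n_states)   # last state is the terminal/goal state

while True:
    V_new = np.copy(V)
    for s in range(n_states - 1):
        stay = 0 + gamma * V[s]                                        # action: stay put
        right = (1 if s + 1 == n_states - 1 else 0) + gamma * V[s + 1]  # action: move right
        V_new[s] = max(stay, right)
    if np.sum(np.abs(V - V_new)) <= theta:   # the threshold test from point 1
        break
    V = V_new

print(np.round(V, 3))
```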
Does reinforcement learning only work on grid world?
The short answer is no! Reinforcement Learning is not limited to discrete spaces. But most of the introductory literature does deal with discrete spaces. As you might know by now, there are three important components in any Reinforcement Learning problem: rewards, states and actions. The first is a scalar quantity, and theoretically the latter two can be either discrete or continuous. The convergence proofs and analyses of the various algorithms are easier to understand for the discrete case, and the corresponding algorithms are also easier to code. That is one of the reasons most introductory material focuses on them. Having said that, it should be interesting to note that the early research on Reinforcement Learning actually focussed on continuous state representations. It was only in the 90s that the literature started presenting all the standard algorithms for discrete spaces, as we had a lot of proofs for them. Finally, if you noticed carefully, I said continuous states only. Mapping continuous states to continuous actions is hard. Nevertheless, we do have some solutions for now. But it is an active area of research in RL. This [paper by Sutton](http://webdocs.cs.ualberta.ca/~sutton/papers/SSR-98.pdf) from '98 should be a good start for your exploration!
6709
1
6711
null
0
686
What I understood for value iteration while coding is that we need to have a policy fixed. According to that policy the value function of each state will be calculated. Right? But in policy iteration the policy will change from time to time. Am I right?
Confusion in Policy Iteration and Value iteration in Reinforcement learning in Dynamic Programming
CC BY-SA 3.0
null
2015-08-06T06:03:20.613
2017-03-31T17:14:15.470
2016-01-18T14:37:14.050
8820
8013
[ "machine-learning", "reinforcement-learning" ]
In policy iteration, you define a starting policy and iterate towards the best one, by estimating the state value associated with the policy, and making changes to action choices. So the policy is explicitly stored and tracked on each major step. After each iteration of the policy, you re-calculate the value function for that policy to within a certain precision. That means you also work with value functions that measure actual policies. If you halted the iteration just after the value estimate, you would have a non-optimal policy and the value function for that policy. In value iteration, you implicitly solve for the state values under an ideal policy. There is no need to define an actual policy during the iterations, you can derive it at the end from the values that you calculate. You could if you wish, after any iteration, use the state values to determine what "current" policy is predicted. The values will likely not approximate the value function for that predicted policy, although towards the end they will probably be close.
Value Updation Dynamic Programming Reinforcement learning
The probabilities you describe refer only to the go-north action. It means that if you want to go north, you have 80% chance of actually going north and 20% of going left or right, making the problem more difficult (non-deterministic). This rule applies to every direction. Also, the formula does not tell which action to chose, just how to update the values. In order to select an action, assuming a greedy-policy, you'd select the one with the highest expected value ($V(s')$). The formula says to sum the values for all possible outcomes from the best action. So, supposing go-north is indeed the best action, you have: $$.8 * (-.1 + 0) + .1 * (-.1 + 0) + .1 * (-.1 + 0) = -.1$$ But let us suppose that you still don't know which is the best action and want to select one greedily. Then you must compute the sum for each possible action (north, south, east, west). Your example has all values set to 0 and the same reward and so is not very interesting. Let's say you have a +1 reward to east (-0.1 for the remaining directions) and that south already has V(s) = 0.5 (0 for the remaining states). Then you compute the value for each action (let $\gamma = 1$, since it is a user-adjusted parameter): - North: $.8 * (-.1 + 0) + .1 * (-.1 + 0) + .1 * (1 + 0) = -.08 - .01 + .1 = .01$ - South: $.8 * (-.1 + .5) + .1 * (-.1 + 0) + .1 * (1 + 0) = 0.32 - .01 + .1 = .41$ - East: $.8 * (1 + 0) + .1 * (-.1 + 0) + .1 * (-.1 + .5) = .8 - .01 + .04 = .83$ - West: $.8 * (-.1 + 0) + .1 * (-.1 + 0) + .1 * (-.1 + .5) = -.08 - .01 + .04 = -.05$ So you would update your policy to go East from the current state, and update the current state value to 0.83.
6715
1
6718
null
35
61381
Is it necessary to standardize your data before clustering? In the example from `scikit-learn` about DBSCAN, [here](http://scikit-learn.org/stable/auto_examples/cluster/plot_dbscan.html#example-cluster-plot-dbscan-py) they do this in the line: ``` X = StandardScaler().fit_transform(X) ``` But I do not understand why it is necessary. After all, clustering does not assume any particular distribution of data - it is an unsupervised learning method, so its objective is to explore the data. Why would it be necessary to transform the data?
Is it necessary to standardize your data before clustering?
CC BY-SA 3.0
null
2015-08-06T20:58:57.380
2021-04-17T05:28:30.623
null
null
10512
[ "python", "clustering", "anomaly-detection" ]
Normalization is not always required, but it rarely hurts. Some examples: [K-means](https://stats.stackexchange.com/a/21226/12359): > K-means clustering is "isotropic" in all directions of space and therefore tends to produce more or less round (rather than elongated) clusters. In this situation leaving variances unequal is equivalent to putting more weight on variables with smaller variance. Example in Matlab: ``` X = [randn(100,2)+ones(100,2);... randn(100,2)-ones(100,2)]; % Introduce denormalization % X(:, 2) = X(:, 2) * 1000 + 500; opts = statset('Display','final'); [idx,ctrs] = kmeans(X,2,... 'Distance','city',... 'Replicates',5,... 'Options',opts); plot(X(idx==1,1),X(idx==1,2),'r.','MarkerSize',12) hold on plot(X(idx==2,1),X(idx==2,2),'b.','MarkerSize',12) plot(ctrs(:,1),ctrs(:,2),'kx',... 'MarkerSize',12,'LineWidth',2) plot(ctrs(:,1),ctrs(:,2),'ko',... 'MarkerSize',12,'LineWidth',2) legend('Cluster 1','Cluster 2','Centroids',... 'Location','NW') title('K-means with normalization') ``` ![enter image description here](https://i.stack.imgur.com/N2unM.png) ![enter image description here](https://i.stack.imgur.com/e5G2M.png) (FYI: [How can I detect if my dataset is clustered or unclustered (i.e. forming one single cluster](https://www.quora.com/Machine-Learning/How-can-I-detect-if-my-dataset-is-clustered-or-unclustered-i-e-forming-one-single-cluster/answer/Franck-Dernoncourt)) [Distributed clustering](http://www.medwelljournals.com/fulltext/?doi=ijscomp.2009.168.172): > The comparative analysis shows that the distributed clustering results depend on the type of normalization procedure. [Artificial neural network (inputs)](https://stackoverflow.com/a/4674770/395857): > If the input variables are combined linearly, as in an MLP, then it is rarely strictly necessary to standardize the inputs, at least in theory. The reason is that any rescaling of an input vector can be effectively undone by changing the corresponding weights and biases, leaving you with the exact same outputs as you had before. However, there are a variety of practical reasons why standardizing the inputs can make training faster and reduce the chances of getting stuck in local optima. Also, weight decay and Bayesian estimation can be done more conveniently with standardized inputs. [Artificial neural network (inputs/outputs)](http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-16.html) > Should you do any of these things to your data? The answer is, it depends. Standardizing either input or target variables tends to make the training process better behaved by improving the numerical condition (see ftp://ftp.sas.com/pub/neural/illcond/illcond.html) of the optimization problem and ensuring that various default values involved in initialization and termination are appropriate. Standardizing targets can also affect the objective function. Standardization of cases should be approached with caution because it discards information. If that information is irrelevant, then standardizing cases can be quite helpful. If that information is important, then standardizing cases can be disastrous. --- Interestingly, changing the measurement units may even lead one to see a very different clustering structure: [Kaufman, Leonard, and Peter J. Rousseeuw.. "Finding groups in data: An introduction to cluster analysis." (2005).](http://rads.stackoverflow.com/amzn/click/0471735787) > In some applications, changing the measurement units may even lead one to see a very different clustering structure. 
For example, the age (in years) and height (in centimeters) of four imaginary people are given in Table 3 and plotted in Figure 3. It appears that {A, B ) and { C, 0) are two well-separated clusters. On the other hand, when height is expressed in feet one obtains Table 4 and Figure 4, where the obvious clusters are now {A, C} and { B, D}. This partition is completely different from the first because each subject has received another companion. (Figure 4 would have been flattened even more if age had been measured in days.) To avoid this dependence on the choice of measurement units, one has the option of standardizing the data. This converts the original measurements to unitless variables. ![enter image description here](https://i.stack.imgur.com/Ppu5Y.png) ![enter image description here](https://i.stack.imgur.com/zW6rI.png) [Kaufman et al.](http://rads.stackoverflow.com/amzn/click/0471735787) continues with some interesting considerations (page 11): > From a philosophical point of view, standardization does not really solve the problem. Indeed, the choice of measurement units gives rise to relative weights of the variables. Expressing a variable in smaller units will lead to a larger range for that variable, which will then have a large effect on the resulting structure. On the other hand, by standardizing one attempts to give all variables an equal weight, in the hope of achieving objectivity. As such, it may be used by a practitioner who possesses no prior knowledge. However, it may well be that some variables are intrinsically more important than others in a particular application, and then the assignment of weights should be based on subject-matter knowledge (see, e.g., Abrahamowicz, 1985). On the other hand, there have been attempts to devise clustering techniques that are independent of the scale of the variables (Friedman and Rubin, 1967). The proposal of Hardy and Rasson (1982) is to search for a partition that minimizes the total volume of the convex hulls of the clusters. In principle such a method is invariant with respect to linear transformations of the data, but unfortunately no algorithm exists for its implementation (except for an approximation that is restricted to two dimensions). Therefore, the dilemma of standardization appears unavoidable at present and the programs described in this book leave the choice up to the user.
When should I normalize data?
As @Daniel Chepenko pointed out, there are models that are robust w.r.t. feature transformations (like Random Forest). But for models which perform operations on the features (like Neural Networks), you usually need to normalize data for three reasons: 1) Numerical stability: computers cannot represent every number, because the electronics that make them exist deal with binaries (zeros and ones), so they use a representation based on floating-point arithmetic. In practice, this means that the numerical behavior in the range [0.0, 1.0] is not the same as in the range [1'000'000.0, 1'000'001.0]. Having two features with very different scales can therefore lead to numerical instability, and finally to a model unable to learn anything. 2) Control of the gradient: imagine that you have a feature that spans the range [-1, 1], and another one that spans the range [-1'000'000, 1'000'000]: the weights associated with the first feature are much more sensitive to small variations, and so their gradient will become much more variable in the direction described by that feature. This can lead to other instabilities: some values of the learning rate (LR) can be too small for one feature (so convergence will be slow) but too big for the second feature (so you jump over the optimal values). At the end of the training process you will then have a sub-optimal model. 3) Control of the variance of the data: if you have skewed features and you don't transform them, you risk that the model will simply ignore the elements in the tails of the distributions. And in some cases, the tails are much more informative than the bulk of the distributions.
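A short illustrative sketch of putting features on a common scale before a scale-sensitive model (the synthetic data, the SVC choice and the pipeline are made up for illustration and are not part of the original answers):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Two informative features on wildly different scales
X = np.c_[rng.normal(0, 1, 500), rng.normal(0, 1_000_000, 500)]
y = (X[:, 0] + X[:, 1] / 1_000_000 > 0).astype(int)

raw = SVC()                                      # distance-based, scale-sensitive
scaled = make_pipeline(StandardScaler(), SVC())  # same model, standardized inputs

print("unscaled:", cross_val_score(raw, X, y, cv=5).mean())
print("scaled:  ", cross_val_score(scaled, X, y, cv=5).mean())
```

The scaled pipeline should typically score noticeably better, since the huge-scale feature no longer dominates the distance computations.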
6719
1
6720
null
4
130
Regarding value iteration of dynamic programming (reinforcement learning) in a grid world, the value update of each state is given by: [](https://i.stack.imgur.com/Y3Sar.png) Now suppose I am in, say, box (3,2). I can go to (4,2) (up), (3,3) (right) and (1,3) (left), and none of these are my final state, so I get a reward of -0.1 for going into each of the states. The present value of all states is 0. The probability of going north is 0.8, and of going left/right is 0.1 each. So since going left/right gives me more reward (as reward*probability will be negative) I go left or right. Is this the mechanism? Am I correct? But in the formula there is a summation term given, so I basically cannot understand this formula. Can anyone explain it to me with an example?
Value Updation Dynamic Programming Reinforcement learning
CC BY-SA 3.0
null
2015-08-07T04:31:06.123
2016-03-05T20:28:23.367
2016-01-18T14:36:57.447
8820
8013
[ "machine-learning", "reinforcement-learning" ]
The probabilities you describe refer only to the go-north action. It means that if you want to go north, you have 80% chance of actually going north and 20% of going left or right, making the problem more difficult (non-deterministic). This rule applies to every direction. Also, the formula does not tell which action to chose, just how to update the values. In order to select an action, assuming a greedy-policy, you'd select the one with the highest expected value ($V(s')$). The formula says to sum the values for all possible outcomes from the best action. So, supposing go-north is indeed the best action, you have: $$.8 * (-.1 + 0) + .1 * (-.1 + 0) + .1 * (-.1 + 0) = -.1$$ But let us suppose that you still don't know which is the best action and want to select one greedily. Then you must compute the sum for each possible action (north, south, east, west). Your example has all values set to 0 and the same reward and so is not very interesting. Let's say you have a +1 reward to east (-0.1 for the remaining directions) and that south already has V(s) = 0.5 (0 for the remaining states). Then you compute the value for each action (let $\gamma = 1$, since it is a user-adjusted parameter): - North: $.8 * (-.1 + 0) + .1 * (-.1 + 0) + .1 * (1 + 0) = -.08 - .01 + .1 = .01$ - South: $.8 * (-.1 + .5) + .1 * (-.1 + 0) + .1 * (1 + 0) = 0.32 - .01 + .1 = .41$ - East: $.8 * (1 + 0) + .1 * (-.1 + 0) + .1 * (-.1 + .5) = .8 - .01 + .04 = .83$ - West: $.8 * (-.1 + 0) + .1 * (-.1 + 0) + .1 * (-.1 + .5) = -.08 - .01 + .04 = -.05$ So you would update your policy to go East from the current state, and update the current state value to 0.83.
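A tiny sketch reproducing the arithmetic above (the probabilities, rewards and state values are the hypothetical numbers from the worked example, with $\gamma = 1$):

```python
# Each action maps to a list of (probability, reward, next_state_value) outcomes
outcomes = {
    "north": [(0.8, -0.1, 0.0), (0.1, -0.1, 0.0), (0.1, 1.0, 0.0)],
    "south": [(0.8, -0.1, 0.5), (0.1, -0.1, 0.0), (0.1, 1.0, 0.0)],
    "east":  [(0.8, 1.0, 0.0), (0.1, -0.1, 0.0), (0.1, -0.1, 0.5)],
    "west":  [(0.8, -0.1, 0.0), (0.1, -0.1, 0.0), (0.1, -0.1, 0.5)],
}

# Expected value of each action: sum over outcomes of p * (reward + gamma * V(s'))
action_values = {a: sum(p * (r + v) for p, r, v in outs) for a, outs in outcomes.items()}
best = max(action_values, key=action_values.get)
print(action_values)  # approx: north 0.01, south 0.41, east 0.83, west -0.05
print(best)           # 'east' -> the new value for the current state is 0.83
```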
Confusion in Policy Iteration and Value iteration in Reinforcement learning in Dynamic Programming
In policy iteration, you define a starting policy and iterate towards the best one, by estimating the state value associated with the policy, and making changes to action choices. So the policy is explicitly stored and tracked on each major step. After each iteration of the policy, you re-calculate the value function for that policy to within a certain precision. That means you also work with value functions that measure actual policies. If you halted the iteration just after the value estimate, you would have a non-optimal policy and the value function for that policy. In value iteration, you implicitly solve for the state values under an ideal policy. There is no need to define an actual policy during the iterations, you can derive it at the end from the values that you calculate. You could if you wish, after any iteration, use the state values to determine what "current" policy is predicted. The values will likely not approximate the value function for that predicted policy, although towards the end they will probably be close.
6721
1
6725
null
7
12973
I want to use some Decision Tree learning, such as the Random Forest classifier. I have data of different types: continuous, discrete and categorical. How do I have to preprocess data in order to have consistent results?
How to preprocess different kinds of data (continuous, discrete, categorical) before Decision Tree learning
CC BY-SA 3.0
null
2015-08-07T10:43:50.747
2015-08-08T04:33:31.700
null
null
133
[ "data-mining", "random-forest", "data", "decision-trees", "preprocessing" ]
One of the benefits of decision trees is that ordinal (continuous or discrete) input data does not require any significant preprocessing. In fact, the results should be consistent regardless of any scaling or translational normalization, since the trees can choose equivalent splitting points. The best preprocessing for decision trees is typically whatever is easiest or whatever is best for visualization, as long as it doesn't change the relative order of values within each data dimension. Categorical inputs, which have no sensible order, are a special case. If your random forest implementation doesn't have a built-in way to deal with categorical input, you should probably use a 1-hot encoding: - If a categorical value has $n$ categories, you encode the value using $n$ dimensions, one corresponding to each category. - For each data point, if it is in category $k$, the corresponding $k$th dimension is set to 1, while the rest are set to 0. This 1-hot encoding allows decision trees to perform category equality tests in one split since inequality splits on non-ordinal data doesn't make much sense.
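As an illustrative sketch of the 1-hot encoding step before a random forest (the column names and tiny dataset are made up; pandas/sklearn are assumed here, not mandated by the answer):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Made-up mixed-type data: 'size' is numeric, 'colour' is categorical with no order
df = pd.DataFrame({
    "size":   [1.2, 3.4, 2.2, 0.5, 4.1],
    "colour": ["red", "blue", "red", "green", "blue"],
    "label":  [0, 1, 0, 0, 1],
})

# 1-hot encode only the categorical column; ordinal columns pass through untouched
X = pd.get_dummies(df[["size", "colour"]], columns=["colour"])
y = df["label"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(X.columns.tolist())  # ['size', 'colour_blue', 'colour_green', 'colour_red']
print(clf.predict(X[:2]))
```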
How to make a decision tree with both continuous and categorical variables in the dataset?
Decision trees can handle both categorical and numerical variables at the same time as features; there is no problem in doing that. ### Theory Every split in a decision tree is based on a feature. If the feature is categorical, the split is done with the elements belonging to a particular class. If the feature is continuous, the split is done with the elements higher than a threshold. At every split, the decision tree will take the best variable at that moment. This will be done according to an impurity measure on the split branches. And the fact that the variable used to do the split is categorical or continuous is irrelevant (in fact, decision trees categorize continuous variables by creating binary regions with the threshold). ### Implementation Although, at a theoretical level, it is very natural for a decision tree to handle categorical variables, most of the implementations don't do it and only accept continuous variables: - This answer reflects on decision trees in scikit-learn not handling categorical variables. However, one of the scikit-learn developers argues that > At the moment it cannot. However RF tends to be very robust to categorical features abusively encoded as integer features in practice. - This other post comments about xgboost not handling categorical variables. - rpart in R can handle categories passed as factors, as explained in here - Lightgbm and catboost can handle categories. Catboost does an "on the fly" target encoding, while lightgbm needs you to encode the categorical variable using ordinal encoding. Here's an example of how lightgbm handles categories: ``` import pandas as pd from sklearn.datasets import load_iris from lightgbm import LGBMRegressor from category_encoders import OrdinalEncoder X = load_iris()['data'] y = load_iris()['target'] X = OrdinalEncoder(cols=[3]).fit_transform(X) dt = LGBMRegressor() dt.fit(X, y, categorical_feature=[3]) ```
6727
1
6747
null
3
5750
I have a CSV file with around 1 million rows. Let's say it has details like ``` Name | Age | Salary name 1 52 10000 name 2 55 10043 name 3 50 100054 name 2 55 10023 name 1 52 100322... ``` and so on. But I need to merge the redundant details and need an output like ``` Name | Age | Salary name 1 52 110322* name 2 55 20066 * name 3 50 100054 ``` You might notice that the repeating name 1 and name 2 details are merged and the Salary values are added. So I'm looking for a way to apply this change to my original data set; I need a Python script to fix my problem.
Merging repeating data cells in csv
CC BY-SA 3.0
null
2015-08-08T11:14:35.630
2015-08-14T17:21:38.663
null
null
9035
[ "python", "csv" ]
[Pandas](http://pandas.pydata.org/) is a python library that you will find very useful for these types of tasks. [Here is a stack overflow post](https://stackoverflow.com/questions/29583312/pandas-sum-of-duplicate-attributes) that tells you how to do what you want to accomplish. It boils down to three very pythonic lines with a [groupby and transformation](http://pandas.pydata.org/pandas-docs/stable/groupby.html) followed by a [drop_duplicates](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html): ``` import pandas df = pandas.read_csv('csvfile.csv', header=0) # Sum the salaries within each (Name, Age) group df['Salary'] = df.groupby(['Name', 'Age'])['Salary'].transform('sum') # Keep a single row per (Name, Age) pair df = df.drop_duplicates(subset=['Name', 'Age'], keep='last') ```
How to merge all the data to have a final dataset
A slightly hacky way to get there maybe, but you can do this to get what you want from the second table: ``` df2['count'] = 1 pivot = df2.pivot_table(index='userid', columns='productid', values='count').reset_index() pivot = pivot.fillna(0) ``` You would then want to merge this to the first dataset like this: ``` finaldf = pd.merge(df1, pivot, left_on='userid', right_on='userid') ``` Another great thing to use for generating the dummies for categorical variables is ``` pd.get_dummies() ``` The approach seems ok to me and making some more features would also not be a bad idea.
6765
1
6767
null
3
107
I am a CS graduate but am very new to data science. I could use some expert advice/insight on a problem I am trying to solve. I've been through the titanic tutorial on kaggle.com which I think was helpful, but my problem is a bit different. I am trying to predict diabetes risk based upon Age, Sex...and other factors given this data: [http://www.healthindicators.gov/Indicators/Diabetes-new-cases-per-1000_555/Profile/ClassicData](http://www.healthindicators.gov/Indicators/Diabetes-new-cases-per-1000_555/Profile/ClassicData) The data gives new cases per 1,000 people for each dimension (Age, Sex...etc). What I would like to do is devise a way to predict, given a list of dimensions (Age, Sex...etc), a probability factor for a new diagnosis. So far my strategy is to load this data into R and use some package to create a decision tree, similar to what I saw in the titanic example on kaggle.com, then feed in a dimension list. However, I am a bit overwhelmed. Any direction on what I should be studying, packages/methods/examples would be helpful.
Advice on making predictions given a collection of dimensions and corresponding probabilities
CC BY-SA 3.0
null
2015-08-11T19:28:29.793
2015-08-11T23:13:22.700
null
null
12203
[ "machine-learning", "data-mining", "r", "predictive-modeling", "data" ]
## Aggregate Data Since you're only given aggregate data, and not individual examples, machine learning techniques like decision trees won't really help you much. Those algorithms gain a lot of traction by looking at correlations within a single example. For instance, the increase in risk from being both obese and over 40 might be much higher than the sum of the individual risks of being obese or over 40 (i.e. the effect is greater than the sum of its parts). Aggregate data loses this information. ## The Bayesian Approach On the bright side, though, using aggregate data like this is fairly straightforward, but requires some probability theory. If $D$ is whether the person has diabetes and $F_1,\ldots,F_n$ are the factors from that link you provided, and if I'm doing my math correctly, we can use the formula: $$ \text{Prob}(D\ |\ F_1,\ldots,F_n) \propto \frac{\prod_{k=1}^n \text{Prob}(D\ |\ F_k)}{\text{Prob}(D)^{n-1}} $$ (The proof for this is an extension of the one found [here](https://stats.stackexchange.com/a/112361)). This assumes that the factors $F_1,\ldots,F_n$ are conditionally independent given $D$, though that's usually reasonable. To calculate the probabilities, compute the outputs for $D=\text{Diabetes}$ and $\neg D=\text{No diabetes}$ and divide them both by their sum so that they add to 1. ## Example Suppose we had a married, 48-year-old male. Looking at the 2010-2012 data, 0.73% of all people get diabetes ($\text{Prob}(D) = 0.73\%$), 0.77% of married people get diabetes ($\text{Prob}(D\ |\ F_1)$$= 0.77\%$), 1.02% of people age 45-54 get diabetes ($\text{Prob}(D\ |\ F_2) = 1.02\%$), and 0.70% of males get diabetes ($\text{Prob}(D\ |\ F_3) = 0.70\%$). This gives us the unnormalized probabilities: $$ \begin{align*} P(D\ |\ F_1,F_2,F_3) &= \frac{(0.77\%)(1.02\%)(0.70\%)}{(0.73\%)^2} &= 0.0103 \\ P(\neg D\ |\ F_1,F_2,F_3) &= \frac{(99.23\%)(98.98\%)(99.30\%)}{(99.27\%)^2} &= 0.9897 \end{align*}$$ After normalizing these to add to one (which they already do in this case), we get a 1.03% chance of this person getting diabetes, and a 98.97% chance for them not getting diabetes.
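A small sketch of the combination-and-normalization step described above, plugging in the aggregate rates from the worked example (the helper function itself is just an illustration of the formula):

```python
def combined_risk(prior, factor_rates):
    """Combine per-factor rates P(D|F_k) with the prior P(D), assuming the
    factors are conditionally independent given D, and normalize to a probability."""
    n = len(factor_rates)
    num_d, num_not = 1.0, 1.0
    for rate in factor_rates:
        num_d *= rate            # product of P(D | F_k)
        num_not *= (1 - rate)    # product of P(not D | F_k)
    unnorm_d = num_d / prior ** (n - 1)
    unnorm_not = num_not / (1 - prior) ** (n - 1)
    return unnorm_d / (unnorm_d + unnorm_not)

# Married, 48-year-old male from the example:
# P(D) = 0.73%, P(D|married) = 0.77%, P(D|age 45-54) = 1.02%, P(D|male) = 0.70%
print(combined_risk(0.0073, [0.0077, 0.0102, 0.0070]))  # ~0.0103, i.e. about 1.03%
```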
Using Neural Networks To Predict Sets
The `75^6` option is not only bad for speed, but it is a very difficult representation to train, because the NN doesn't "understand" that any of the output categories are related. You would need an immense amount of data to train such a network, because ideally you need at least a few examples in any category that you expect the network to predict. Unless you had literally billions of examples to train from, the chances are certain combinations will never occur in your training set, thus could never be predicted with any confidence. Therefore I would probably use 75 outputs, one for each object representing the probability that it would be chosen. This is easy to create training data for, if you have training examples with the 6 favoured objects - just a 1 for the objects chosen and 0 for all others as a 75-wide label. For prediction, select the 6 objects with the highest probabilities. If these choices are part of a recommender system (i.e. may be presented to same person as being predicted for), then you can select items randomly using the outputs as weights. You may even find that this weighted Monte Carlo selection works well for predicting bulk user behaviour as well (e.g. for predictions fed into stock purchases). In addition, this stochastic approach can be made to predict duplicates (but not accurately, except perhaps averaged over many predictions). A sigmoid transfer function on the output layer is good for representing non-exclusive probability. The logloss objective function can be used to generate the error values and train the network. If you want to accurately predict duplicate choices out of the 6 items chosen, then you will need plenty of examples where duplicates happened and have some way to represent that in the output layer. For example, you could have double the number of output neurons, with two assigned to each object. The first probability would then be probability of selecting the item once, and the second probability would be for selecting it twice. --- The question has since been updated, and it appears there are strong relationships between items making the choice of a set of items potentially very recipe-like. That may reduce the effectiveness of the ideas outlined above in this answer. However, using 75 outputs may still work better than other approaches, and is maybe the simplest setup, so I suggest still giving it a try, even if just to establish a benchmark for other ideas. This will work best when decisions are driven heavily by the feature data available, and when in practice there are lots of valid choices for combining items so there is a strong element of player preference. It will work less well if there is a large element of game mastery and logic in player decisions in order to combine items.
6809
1
6811
null
5
14412
I have done some clustering and I would like to visualize the results. Here is the function I have written to plot my clusters: ``` import sklearn from sklearn.cluster import DBSCAN from sklearn import metrics from sklearn.preprocessing import StandardScaler from sklearn.cluster import DBSCAN from sklearn import metrics from sklearn.datasets.samples_generator import make_blobs from sklearn.preprocessing import StandardScaler def plot_cluster(cluster, sample_matrix): '''Input: "cluster", which is an object from DBSCAN, e.g. dbscan_object = DBSCAN(3.0,4) "sample_matrix" which is a data matrix: X = [ [0,5,1,2], [0,4,1,3], [0,5,1,3], [0,5,0,2], [5,5,5,5], ] Output: Plots the clusters nicely. ''' import matplotlib.pyplot as plt import numpy as np f = lambda row: [float(x) for x in row] sample_matrix = map(f,sample_matrix) print sample_matrix sample_matrix = StandardScaler().fit_transform(sample_matrix) core_samples_mask = np.zeros_like(cluster.labels_, dtype=bool) core_samples_mask[cluster.core_sample_indices_] = True labels = cluster.labels_ # Black removed and is used for noise instead. unique_labels = set(labels) colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels))) for k, col in zip(unique_labels, colors): if k == -1: # Black used for noise. col = 'k' class_member_mask = (labels == k) # generator comprehension # X is your data matrix X = np.array(sample_matrix) xy = X[class_member_mask & core_samples_mask] plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=14) xy = X[class_member_mask & ~core_samples_mask] plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=6) plt.ylim([0,10]) plt.xlim([0,10]) # plt.title('Estimated number of clusters: %d' % n_clusters_) plt.savefig('cluster.png') ``` The function above is copied almost verbatim from the scikit-learn demo [here](http://scikit-learn.org/stable/auto_examples/cluster/plot_dbscan.html#example-cluster-plot-dbscan-py). Yet, when I try it on the following: ``` dbscan_object = DBSCAN(3.0,4) X = [ [0,5,1,2], [0,4,1,3], [0,5,1,3], [0,5,0,2], [5,5,5,5], ] result = dbscan_object.fit(X) print result.labels_ print 'plotting ' plot_cluster(result, X) ``` ...It produces a single point. What is the best way to plot clusters in python?
How to plot/visualize clusters in scikit-learn (sklearn)?
CC BY-SA 3.0
null
2015-08-17T08:07:58.280
2016-03-02T15:51:37.527
2016-03-02T15:51:37.527
13727
10512
[ "python", "scikit-learn", "clustering", "dbscan" ]
When I run the code you posted, I get three points on my plot: [](https://i.stack.imgur.com/KpbTX.png) The "point" at (0, 4) corresponds to `X[1]` and the "point" at (0, 5) is actually three points, corresponding to `X[0]`, `X[2]`, and `X[3]`. The point at (5, 5) is the last point in your `X` array. The data at (0, 4) and (0, 5) belong to one cluster, and the point at (5, 5) is considered noise (plotted in black). The issue here seems to be that you're trying to run the `DBSCAN` algorithm on a dataset containing 5 points, with at least 4 points required per cluster (the second argument to the `DBSCAN` constructor). In the `sklearn` example, the clustering algorithm is run on a dataset containing 750 points with three distinct centers. Try creating a larger `X` dataset and running this code again. You might also want to remove the `plt.ylim([0,10])` and `plt.xlim([0,10])` lines from the code; they're making it a bit difficult to see the points on the edge of the plot! If you omit the `ylim` and `xlim` then `matplotlib` will automatically determine the plot limits.
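A small sketch of what "a larger `X`" might look like, reusing the `make_blobs` generator from the sklearn demo (the cluster centres, `eps` and `min_samples` values below are illustrative choices, not requirements):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

# 750 points around three centres, similar to the sklearn DBSCAN example
centers = [[1, 1], [-1, -1], [1, -1]]
X, _ = make_blobs(n_samples=750, centers=centers, cluster_std=0.4, random_state=0)
X = StandardScaler().fit_transform(X)

db = DBSCAN(eps=0.3, min_samples=10).fit(X)
labels = db.labels_
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters found:", n_clusters, "| noise points:", np.sum(labels == -1))
```

With a dataset of this size the plotting function from the question should show several coloured clusters rather than a handful of overlapping points.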
What is the most straightforward way to visualize color-coded clusters along with the cluster centers?
There can be multiple ways, one can be - - Plot the points with hue=cluster_number - Plot the centroids with different markers Code for 3 Clusters on 2 Iris Features - ``` import matplotlib.pyplot as plt import seaborn as sns from sklearn import datasets iris = datasets.load_iris() X = iris.data y = iris.target X=X[:,:-2] X = (X - X.mean())/X.std() def create_cluster(k=3): from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=k) kmeans.fit(X) return kmeans kmeans = create_cluster() y_pred = kmeans.predict(X) centroid = kmeans.cluster_centers_ _, ax = plt.subplots(1,1,figsize=(10,6)) color = ["#e74c3c", "#34495e", "#2ecc71"] # Points coloured by predicted cluster, centroids marked with '^' sns.scatterplot(X[:,0],X[:,1], hue=y_pred, palette=sns.color_palette(color),ax=ax) sns.pointplot(centroid[:,0], centroid[:,1],markers='^',join=False,ax=ax) ``` $\hspace{2cm}$Output - Ignore the convergence quality $\hspace{2cm}$ [](https://i.stack.imgur.com/SLVc0.png)
6817
1
6822
null
3
357
I have the following curve as the result of a azure machine learning classification experiment. [](https://i.stack.imgur.com/XcyS6.png) This obviously shows a lot of false negatives, but I'm struggling to interpret this, as my data has a high number of negatives and a low number of positives (I'm doing a binary classification). I am interpreting this as saying my data is almost entirely made up of positive values, with the majority being identified as negative. I believe that the false negatives on the chart are actually true negatives. Am I reading the chart correctly?
Classification - ROC Curve with very high number of false negatives
CC BY-SA 3.0
null
2015-08-18T09:34:51.143
2015-08-19T11:24:48.323
2020-06-16T11:08:43.077
-1
12314
[ "classification" ]
Notice how the `precision` is very high and all of the other metrics are very low. Now look at the class balance of your problem: $$TP+FN=Actual Positive=31,245$$ $$TN+FP=Actual Negative=508$$ So your data is heavily skewed toward positives. To have gotten a model that is producing this poorly, I think you may have provided the model with the `precision` as the cross validation metric. The `precision` is a very bad cross validation metric in this case since it will result in poor `accuracy` and poor `recall`. `accuracy` is also not a good metric as your model could classify everything as positive and get an accuracy of: $$AC=\frac{31,245}{31,763}=.984$$ For cases like this where the classes are grossly weighted toward one value, I suggest using the `F1-score` as your cross validation metric. The `F1-score` is the `harmonic mean` of `precision` and `recall` and hence balances these two factors nicely. Wikipedia actually has a [very nice explanation of classification metrics here](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_.28classification_context.29) and [this paper is top notch](http://rali.iro.umontreal.ca/rali/sites/default/files/publis/SokolovaLapalme-JIPM09.pdf) if you even need to understand multi-class metrics and confusion matrices. Hope this helps!
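A quick sketch of scoring with F1 instead of accuracy on an imbalanced problem (the synthetic dataset and the logistic regression model are arbitrary placeholders; only the choice of scoring metric comes from the answer):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic, heavily imbalanced binary problem (~2% positives)
X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=0)

model = LogisticRegression(max_iter=1000)
print("accuracy:", cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
print("f1      :", cross_val_score(model, X, y, cv=5, scoring="f1").mean())
```

Accuracy looks deceptively high because always predicting the majority class already scores well; the F1-score exposes how poorly the minority class is actually handled.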
ROC curve and optimal threshold
> I see that this threshold always matches the percentage of observations equal to 1 in my original data. Is there any conceptual explanation for this? Yes, although the fact it always matches exactly is probably a coincidence or maybe due to a small sample. The training data contains a proportion $p$ of instances labelled 1. From the ROC plot you can see all the possible values for setting the threshold at a certain level and the resulting performance; for every possible level you can calculate the corresponding proportion $q$ of instances predicted as 1: - if $q$ is much lower than $p$, then the system predicts many 0s, so there are many false negative errors and that makes the recall lower. Precision is high in this case. - if $q$ is much higher than $p$, then the system predicts many 1s, so there are many false positive errors and that makes the precision lower. Recall is high in this case. I assume that you optimize on the F1-score right? The fact that the F1-score is based on the product of the precision and recall means that both values need to be reasonably high, otherwise the F1-score drops. As seen above, having very different values for $p$ and $q$ will cause either the precision or recall to be low. Therefore the optimal F1-score is achieved when $q$ is close to $p$.
6827
1
8020
null
1
251
I am currently collecting mobile device data about user location and the time at which an app is being used for a cohort of users and apps. I am trying to predict which apps are likely to be used at a given time by a given user. Which model should I use in order to predict the apps that are likely to be used by a particular user? For example, should I collect data for 5 days a week, then use 3 days for training and the other 2 days for testing? Given location, time of day, and day, what model should I use to predict which app is likely to be used?
Machine learning to predict apps (recommendation)
CC BY-SA 3.0
null
2015-08-19T11:09:40.287
2015-10-06T11:15:40.863
2015-08-19T18:39:07.883
9420
12333
[ "machine-learning" ]
Much depends on the data available to you. Perhaps you can be more specific about the scale and scope. Modelling time is the straightforward bit. To understand how to conceptualise time as useful features, see [this excellent answer](https://datascience.stackexchange.com/a/2370/12363) on [Machine learning - features engineering from date/time data](https://datascience.stackexchange.com/questions/2368/machine-learning-features-engineering-from-date-time-data) Modelling the user is more complicated. You will likely not have enough data on each user, but you can build some user models. (Too few, then the system will make similar predictions for all users, without nuance. Too many and there will be sparsity, overfitting, and generally the same problems as having no profile models at all, ie one model per actual user.) This can be done supervised or unsupervised, finding representative clusters. (Search for user profile categorisation, user models, user model clustering)
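One common way to conceptualise time-of-day and day-of-week as features is sketched below; the cyclical sine/cosine encoding is a standard trick rather than something prescribed by the linked answer, and the usage log is made up:

```python
import numpy as np
import pandas as pd

# Hypothetical app-usage log
log = pd.DataFrame({"timestamp": pd.to_datetime(
    ["2015-08-19 08:15", "2015-08-19 21:40", "2015-08-22 13:05"])})

# Encode hour-of-day and day-of-week on a circle so 23:00 and 00:00 end up close together
hour = log["timestamp"].dt.hour + log["timestamp"].dt.minute / 60
dow = log["timestamp"].dt.dayofweek
log["hour_sin"], log["hour_cos"] = np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24)
log["dow_sin"], log["dow_cos"] = np.sin(2 * np.pi * dow / 7), np.cos(2 * np.pi * dow / 7)
print(log)
```

These columns, together with location features and a user-profile identifier, can then be fed to whatever classifier is chosen.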
Predicting which apps users may be interested in
Broadly speaking, what you want is a [recommender system](https://en.wikipedia.org/wiki/Recommender_system). If you consider only the user-app association data (app installs, likes, etc.), and no user or app metadata, then you should look at [collaborative filtering](https://en.wikipedia.org/wiki/Collaborative_filtering). More specifically, a simpler technique you can consider is [item-similarity](https://en.wikipedia.org/wiki/Item-item_collaborative_filtering) based recommendation (users who install app A, also install app B). A slightly more complex method involves factorization of the partial user-app association matrix and then inferring missing points in the matrix. This [paper](https://endymecy.gitbooks.io/spark-ml-source-analysis/content/%E6%8E%A8%E8%8D%90/papers/Large-scale%20Parallel%20Collaborative%20Filtering%20the%20Netflix%20Prize.pdf) describes one such factorization technique used in the Netflix Prize movie recommendation challenge.
6830
1
6833
null
4
4218
I have a set of user objects that I want to group using a $k$-means function from their quiz answers. Each quiz question had predefined answers with letter values "a", "b", "c", "d". If a user answers the question #1 with letter "b", I put this answer into vector $(0, 1, 0, 0)$. The $k$-means function I have to use takes a two-dimensional array of numbers as an input vector (in this case array[user][question]), and I can't figure out how to use it, because, instead of a number value representing a user's answer to question, I have a vector input. How can I convert my vector values to numbers so that I can use my $k$-means function?
How to convert vector values to fit k-means algorithm function?
CC BY-SA 3.0
null
2015-08-19T13:06:42.750
2015-11-24T01:52:00.027
2015-11-24T01:52:00.027
13413
12335
[ "clustering", "k-means" ]
You are 95% there, you just have one hangup... The vectorization that you are doing is alternatively known as binarization or [one-hot encoding](https://en.wikipedia.org/wiki/One-hot). The only thing you need to do now is break apart all of those vectors and think of them as individual features. So instead of thinking of the question one vector as $(0,0,1,0)$ and the question two vector as $(0,1,0,0)$, you can now think of them as individual features. So this: ``` - q1, q2 - (a,b,c,d), (a,b,c,d) user1 (0,0,1,0), (0,1,0,0) user2 (1,0,0,0), (0,0,0,1) ``` Becomes this: ``` - q1a,q1b,q1c,q1d,q2a,q2b,q2c,q2d user1 0 0 1 0 0 1 0 0 user2 1 0 0 0 0 0 0 1 ``` And you can think of each one of those binary features as an orthogonal dimension in your data that lies in a 8-dimensional space. Hope this helps!
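A compact sketch of turning the per-question answer letters into that 8-column binary matrix and feeding it to k-means (the column layout mirrors the table above; the third user and the choice of k=2 are made up so the clustering has something to group):

```python
import numpy as np
from sklearn.cluster import KMeans

# Answers per user for questions q1 and q2, letters a-d
answers = [["c", "b"],   # user1
           ["a", "d"],   # user2
           ["c", "b"]]   # user3 (hypothetical)

letters = "abcd"

def one_hot(row):
    # Concatenate one 4-wide indicator block per question: [q1a..q1d, q2a..q2d]
    return [1 if letter == ans else 0 for ans in row for letter in letters]

X = np.array([one_hot(row) for row in answers])
print(X)
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X))
```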
K-means Clustering algorithm problems
Wikipedia says: "Assign each observation to the cluster whose mean yields the least within-cluster sum of squares (WCSS)" I think in your case, this is translatable to: $c_i$ is assigned to the closest centroid by Euclidean distance. For your second question, the centroid $\mu$ should have the same number of dimensions as each training point $x_i$. They are both points in the same coordinate system. You can use a high number of features with K-means; for example, text analytics might reduce a corpus of news articles to 10,000+ dimensions. Depending on the package you use, these might be represented as a sparse matrix.
6838
1
6855
null
39
96789
When would one use `Random Forest` over `SVM` and vice versa? I understand that `cross-validation` and model comparison is an important aspect of choosing a model, but here I would like to learn more about rules of thumb and heuristics of the two methods. Can someone please explain the subtleties, strengths, and weaknesses of the classifiers as well as problems, which are best suited to each of them?
When to use Random Forest over SVM and vice versa?
CC BY-SA 3.0
null
2015-08-20T04:16:43.303
2022-06-11T01:09:55.867
2017-04-26T16:24:32.367
26686
12350
[ "machine-learning", "classification", "random-forest", "svm" ]
I would say, the choice depends very much on what data you have and what is your purpose. A few "rules of thumb". Random Forest is intrinsically suited for multiclass problems, while SVM is intrinsically two-class. For multiclass problem you will need to reduce it into multiple binary classification problems. Random Forest works well with a mixture of numerical and categorical features. When features are on the various scales, it is also fine. Roughly speaking, with Random Forest you can use data as they are. SVM maximizes the "margin" and thus relies on the concept of "distance" between different points. It is up to you to decide if "distance" is meaningful. As a consequence, one-hot encoding for categorical features is a must-do. Further, min-max or other scaling is highly recommended at preprocessing step. If you have data with $n$ points and $m$ features, an intermediate step in SVM is constructing an $n\times n$ matrix (think about memory requirements for storage) by calculating $n^2$ dot products (computational complexity). Therefore, as a rule of thumb, SVM is hardly scalable beyond 10^5 points. Large number of features (homogeneous features with meaningful distance, pixel of image would be a perfect example) is generally not a problem. For a classification problem Random Forest gives you probability of belonging to class. SVM gives you distance to the boundary, you still need to convert it to probability somehow if you need probability. For those problems, where SVM applies, it generally performs better than Random Forest. SVM gives you "support vectors", that is points in each class closest to the boundary between classes. They may be of interest by themselves for interpretation.
Algorithm selection rationale (Random Forest vs Logistic Regression vs SVM)
I suppose I will suggest, as a starting point, expanding on what you suggested by just adding the following: - Knowing the type of data you are working with and its characteristics (categorical, supervised/unsupervised, data size etc.). - Knowing what accuracy requirements you need, the timeframe and computational power you have at your disposal vs accuracy, and really answering "why am I trying to solve this problem?" After answering these questions you can at least narrow down slightly what you may use (and eliminate those you clearly don't believe fit). After that, I suppose it's trial and error, experience and comparing to others who dealt with similar datasets and problems. I have this crude flow chart I found in my favourites from the scikit-learn website. Not sure where I found it to be honest. Take it for what you will, hopefully it helps somewhat: [https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html)
6841
1
6881
null
1
665
I have a data frame of the following format:

```
Symbol     Date       Time      Profit
BANKNIFTY  4/1/2010   9:55:00   -1.18%
BANKNIFTY  4/1/2010   12:30:00  -2.84%
BANKNIFTY  4/1/2010   12:45:00  7.17%
BANKNIFTY  5/1/2010   11:40:00  -7.11%
ZEEL       26/6/2012  13:50:00  24.75%
ZEEL       27/6/2012  15:15:00  -1.90%
ZEEL       28/6/2012  9:45:00   37.58%
ZEEL       28/6/2012  14:55:00  23.95%
ZEEL       29/6/2012  14:20:00  -4.65%
ZEEL       29/6/2012  14:30:00  -6.01%
ZEEL       29/6/2012  14:55:00  -12.23%
ZEEL       29/6/2012  15:15:00  35.13%
```

What I'd like to achieve is to convert that data frame into a data frame which has dates for row names, symbol names for columns and the sum of percentage profit for each day, like the following:

```
Date       BankNifty  ZEEL
4/1/2010   3.15%      0
5/1/2010   -7.11%     0
26/6/2012  0          24.75%
27/6/2012  0          -1.90%
28/6/2012  0          61.53%
29/6/2012  0          12.24%
```

How can I achieve that in R? A `dplyr` mutation or some apply function? I'm a beginner in R programming. Thanks in advance. The data in R is ``` structure(list(Symbol = structure(c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("BANKNIFTY", "ZEEL"), class = "factor"), Date = structure(c(5L, 5L, 5L, 6L, 1L, 2L, 3L, 3L, 4L, 4L, 4L, 4L), .Label = c("26/6/2012", "27/6/2012", "28/6/2012", "29/6/2012", "4/1/2010", "5/1/2010"), class = "factor"), Time = structure(c(10L, 2L, 3L, 1L, 4L, 8L, 9L, 7L, 5L, 6L, 7L, 8L), .Label = c("11:40:00", "12:30:00", "12:45:00", "13:50:00", "14:20:00", "14:30:00", "14:55:00", "15:15:00", "9:45:00", "9:55:00"), class = "factor"), Profit = structure(c(1L, 4L, 12L, 7L, 9L, 2L, 11L, 8L, 5L, 6L, 3L, 10L), .Label = c("-1.18%", "-1.90%", "-12.23%", "-2.84%", "-4.65%", "-6.01%", "-7.11%", "23.95%", "24.75%", "35.13%", "37.58%", "7.17%"), class = "factor")), .Names = c("Symbol", "Date", "Time", "Profit"), class = "data.frame", row.names = c(NA, -12L)) ```
Data frame mutation in R
CC BY-SA 3.0
null
2015-08-20T09:49:51.230
2015-08-25T04:49:19.930
2015-08-25T04:49:19.930
12384
12351
[ "r", "data-cleaning", "data" ]
The fastest way would be: ``` require(data.table) data <- data.table(data) # Remove the percentage sign and convert the field to numeric. data[, Profit := as.numeric(gsub("%", "", Profit))] data ## Symbol Date Time Profit ## 1: BANKNIFTY 4/1/2010 9:55:00 -1.18 ## 2: BANKNIFTY 4/1/2010 12:30:00 -2.84 ## 3: BANKNIFTY 4/1/2010 12:45:00 7.17 ## 4: BANKNIFTY 5/1/2010 11:40:00 -7.11 ## 5: ZEEL 26/6/2012 13:50:00 24.75 ## 6: ZEEL 27/6/2012 15:15:00 -1.90 ## 7: ZEEL 28/6/2012 9:45:00 37.58 ## 8: ZEEL 28/6/2012 14:55:00 23.95 ## 9: ZEEL 29/6/2012 14:20:00 -4.65 ## 10: ZEEL 29/6/2012 14:30:00 -6.01 ## 11: ZEEL 29/6/2012 14:55:00 -12.23 ## 12: ZEEL 29/6/2012 15:15:00 35.13 # Melt the data so that we can easily dcast afterwards. molten_data <- melt(data[, list(Symbol, Date, Profit)], id = c("Symbol", "Date")) # Create a summary by Date and Symbol. dcast(molten_data, Date ~ variable + Symbol, fun.aggregate = sum) ## Date Profit_BANKNIFTY Profit_ZEEL ## 1: 26/6/2012 0.00 24.75 ## 2: 27/6/2012 0.00 -1.90 ## 3: 28/6/2012 0.00 61.53 ## 4: 29/6/2012 0.00 12.24 ## 5: 4/1/2010 3.15 0.00 ## 6: 5/1/2010 -7.11 0.00 ```
Replacing values in multiple columns of a data frame in R
Given: ``` > dam name re1 re2 re3 1 a yes yes yes 2 b no no no 3 c yes yes yes 4 d no no no 5 e yes yes yes ``` do this: ``` dam2 = reshape2::dcast( dplyr::mutate( reshape2::melt(dam,id.var="name"), value=plyr::mapvalues( value, c("yes","no"),c("OK","notOK")) ),name~variable) ``` get that: ``` > dam2 name re1 re2 re3 1 a OK OK OK 2 b notOK notOK notOK 3 c OK OK OK 4 d notOK notOK notOK 5 e OK OK OK ``` I've recoded it to "OK" and "notOK" because your remapping doesn't make sense. The "from values" should be unique, not have repeated "yes" and "no" in them. Note how this is done. Make a tidy data set by melting. Mutate it. Cast it back into untidy format. Yes you could use pipes.
6847
1
6849
null
2
73
I need some advice. I can summarise my problem like this: I have some travels in a database, for example: ``` Person1 travelled from CityA to CityB on Date1 Person1 travelled from CityB to CityC on Date2 Person2 travelled from CityB to CityD on Date3 ... ``` We can consider that these cities form a complete graph. Now, according to all the travels in the database, I would like to know where a PersonX is likely to go. I may know where he is coming from (or not). I don't know if I should use machine learning, data mining or graph theory.
Estimating destination according to previous data
CC BY-SA 3.0
null
2015-08-20T14:56:40.847
2015-08-20T17:52:15.133
null
null
12362
[ "machine-learning", "data-mining", "graphs" ]
This is a spatio-temporal clustering problem that is likely best solved with a Markov model. You could reasonably group this into machine learning or data mining. Develop your model using machine learning and then (the data mining part) leverage those pattern recognition techniques (that have been developed in machine learning). I think there are at least one or two threads on this over at Cross-Validated that go into more detail. Here are a couple of papers to look at if you are just getting started. [Using GPS to learn significant locations and predict movement across multiple users](http://link.springer.com/article/10.1007/s00779-003-0240-0) [Predicting Future Locations with Hidden Markov Models](http://lbsn2012.cmuchimps.org/papers/Paper16_Mathew.pdf)
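A very small sketch of the Markov-model idea: estimate transition probabilities between cities from the travel history and read off the most likely next destination (the travel list and city names are made up for illustration, and a real model would also condition on time and person):

```python
from collections import Counter, defaultdict

# Hypothetical travel history: (person, origin, destination)
travels = [("P1", "CityA", "CityB"), ("P1", "CityB", "CityC"),
           ("P2", "CityB", "CityD"), ("P3", "CityB", "CityC")]

# First-order Markov model: count transitions out of each city, pooled over persons
counts = defaultdict(Counter)
for _, origin, dest in travels:
    counts[origin][dest] += 1

def predict_next(city):
    total = sum(counts[city].values())
    return {dest: n / total for dest, n in counts[city].items()}

print(predict_next("CityB"))  # e.g. {'CityC': 0.67, 'CityD': 0.33}
```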
How to predict next visit date based on this data
Assume a person X in the df table has a visit_date and a scheduled visit. Initially calculate the difference and use that as your target variable. Thereafter you can create a model and predict the number of days; once you predict the number of days, add them to your visit date to get the scheduled date. Below is the process in (pseudo)code: ``` # Calculate the difference between visit and scheduled visit df['no_of_days'] = df['scheduled_visit'] - df['visit_date'] # The table now looks like: visit_date | serial_no | no_of_days # Create a regression or ARIMA model on serial_no and no_of_days # Predict the values for the next 30 events to get the no_of_days values # Finally, add those values to your visit_date to get the scheduled visit date ``` Start with a small model like ARIMA and check the results before trying deep learning models (RNN); choose your model based on the data size.
6866
1
10150
null
3
2280
I'm working on a problem with data from a continuous real-valued signal. The goal is to use ML to smooth the signal based off of past data. To accomplish this, the signal is windowed into a period that's meaningful within the domain. The problem is that this period is highly variable in length. I've reviewed [this question](https://datascience.stackexchange.com/questions/2673/training-neural-networks-with-unknown-length-of-input?rq=1) and [this question](https://datascience.stackexchange.com/questions/595/how-to-use-neural-networks-with-large-and-variable-number-of-inputs?rq=1) and neither solve the problem, they are more about how to deal with missing values. Seeing as denoising autoencoders are based off of matrix multiplication, this presents a serious problem. What is the standard approach in such a situation? Should I define an arbitrary (large) window size, and expand windows that are too small (and vice versa)? Or is there a better approach for variable length inputs?
Denoising Autoenoders with Variable Length Input
CC BY-SA 3.0
null
2015-08-21T14:26:10.297
2016-02-09T09:06:10.080
2015-08-21T14:37:46.083
12381
12381
[ "neural-network", "preprocessing" ]
Recurrent Neural Networks can deal with variable length data. You might want to have a look at: - Andrej Karpathy: The Unreasonable Effectiveness of Recurrent Neural Networks. - Christopher Olah: Understanding LSTM Networks. - Hochreiter, Schmidhuber: Long short-term memory. Another idea (which I have not tested so far and just came to my mind) is using a histogram approach: You could probably make fixed-size windows, get the data from those windows and make it discrete (e.g. vector quantization, k-means). After that, you can make a histogram of how often those vectors appeared. You could also use [HMMs](https://en.wikipedia.org/wiki/Hidden_Markov_model) for recognition of variable length data. Transformations (e.g. [Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform) from the time domain into the [frequency domain](https://en.wikipedia.org/wiki/Frequency_domain)) might also come in handy.
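As a small illustration of feeding variable-length windows to a recurrent network, here is a padding-plus-masking sketch (assuming a current Keras install; the window lengths, feature dimension and layer sizes are arbitrary placeholders, and this only shows the input handling, not a full denoising model):

```python
import numpy as np
import tensorflow as tf

# Three windows of different lengths, each step a single real-valued sample
windows = [np.random.randn(50, 1), np.random.randn(80, 1), np.random.randn(65, 1)]

# Pad to a common length; the Masking layer tells the LSTM to ignore padded steps
X = tf.keras.preprocessing.sequence.pad_sequences(
    windows, padding="post", dtype="float32", value=0.0)

model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(X.shape[1], 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
print(model(X).shape)  # (3, 1)
```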
Adapting the Keras variational autoencoder for denoising images
Since I asked this question here as well, I am pasting my answer to it here. I used a different way to define the VAE loss, as demonstrated in: [https://github.com/keras-team/keras/blob/keras-2/examples/variational_autoencoder.py](https://github.com/keras-team/keras/blob/keras-2/examples/variational_autoencoder.py) I changed it to allow for denoising of the data. It works now, but I'll have to play around with the hyperparameters to allow it to correctly reconstruct the original images. ``` import numpy as np import time import sys import os from scipy.stats import norm from keras.layers import Input, Dense, Lambda from keras.models import Model from keras import backend as K from keras import metrics from keras.datasets import mnist from keras.callbacks import ModelCheckpoint filepath_for_w='denoise_by_VAE_weights_1.h5' ########### ########## experiment_dir= 'exp_'+str(int(time.time())) os.mkdir(experiment_dir) this_script=sys.argv[0] from shutil import copyfile copyfile(this_script, experiment_dir+'/'+this_script) ########## ########### batch_size = 100 original_dim = 784 latent_dim = 2 intermediate_dim = 256 epochs = 10 epsilon_std = 1.0 x = Input(batch_shape=(batch_size, original_dim)) h = Dense(intermediate_dim, activation='relu')(x) z_mean = Dense(latent_dim)(h) z_log_var = Dense(latent_dim)(h) def sampling(args): z_mean, z_log_var = args epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0., stddev=epsilon_std) return z_mean + K.exp(z_log_var / 2) * epsilon # note that "output_shape" isn't necessary with the TensorFlow backend z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var]) # we instantiate these layers separately so as to reuse them later decoder_h = Dense(intermediate_dim, activation='relu') decoder_mean = Dense(original_dim, activation='sigmoid') h_decoded = decoder_h(z) x_decoded_mean = decoder_mean(h_decoded) def vae_loss(x, x_decoded_mean): xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean) kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1) return xent_loss + kl_loss vae = Model(x, x_decoded_mean) vae.compile(optimizer='rmsprop', loss=vae_loss) # train the VAE on MNIST digits (x_train, y_train), (x_test, y_test) = mnist.load_data() #after loading the data, change to the new experiment dir os.chdir(experiment_dir) # ########################## x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) noise_factor = 0.5 x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape) x_test_noisy = np.clip(x_test_noisy, 0., 1.) for i in range (10): x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape) x_train_noisy = np.clip(x_train_noisy, 0., 1.) checkpointer=ModelCheckpoint(filepath_for_w, monitor='val_loss', verbose=0, save_best_only=True, save_weights_only=True, mode='auto', period=1) vae.fit(x_train_noisy, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, validation_data=(x_test_noisy, x_test), callbacks=[checkpointer]) vae.load_weights(filepath_for_w) #print (x_train.shape) #print (x_test.shape) decoded_imgs = vae.predict(x_test,batch_size=batch_size) np.save('decoded'+str(i)+'.npy',decoded_imgs) np.save('tested.npy',x_test_noisy) #np.save ('true_catagories.npy',y_test) np.save('original.npy',x_test) ```
6884
1
6896
null
3
2602
I am trying to understand the abstract details that explain how h2o is faster than R and SAS for data science computations.
How is H2O faster than R or SAS?
CC BY-SA 3.0
null
2015-08-24T07:38:03.147
2016-12-01T15:46:37.783
2016-12-01T15:46:37.783
8501
12411
[ "machine-learning", "r", "bigdata", "performance", "sas" ]
I have used R, SAS Base and H2O. First, I do not think that H2O seeks to be either R or SAS. H2O provides data mining algorithms that are highly efficient. You can interface with H2O using several APIs such as their R API. The benefit of combining R and H2O is that H2O is very good at exploiting multi-cores or clusters with minimal effort from the user. It is much harder to achieve the same efficiency in R alone. The reason why H2O is much faster is that they have very good indexing of their data and their algorithms are written such that they exploit parallelism to the fullest. See [http://h2o.ai/blog/2014/03/h2o-architecture/](http://h2o.ai/blog/2014/03/h2o-architecture/) R with the default matrix dynamic libraries can only use one CPU core. Revolution R community edition ships with the Intel Math Kernel Library. This allows for some matrix computations in parallel but is definitely not as efficient as H2O. For SAS it is a bit harder to say anything considering it's closed source, but based on my CPU utilization I would assume that they have a similar approach to Revolution R. Their matrix algebra exploits parallelism but their algorithms are not as efficient as H2O's. Their data storage is also not as efficient as H2O's. Lastly, H2O with R comes at a very different price tag than SAS. Hope this clarifies a bit.
How does SQL Server Analysis Services compare to R?
In my opinion, it seems that SSAS makes more sense for someone who: - has significantly invested in Microsoft's technology stack and platform; - prefer point-and-click interface (GUI) to command line; - focus on data warehousing (OLAP cubes, etc.); - has limited needs in terms of statistical methods and algorithms variety; - has limited needs in cross-language integration; - doesn't care much about openness, cross-platform integration and vendor lock-in. You can find useful [this blog post](http://blog.samibadawi.com/2010/04/r-rapidminer-statistica-ssas-or-weka.html) by [Sami Badawi](http://twitter.com/Sami_Badawi). However, note that the post is not recent, so some information might be outdated. Plus, the post contains an initial review, which might be not very accurate or comprehensive. If you're thinking about data science, while considering staying within Microsoft ecosystem, I suggest you to take a look at Microsoft's own machine learning platform Azure ML. [This blog post](https://dbolab.com/microsoft-azure-ml-vs-ssas-mining-structures) presents a brief comparison of (early) Azure ML and SSAS.
6904
1
6906
null
2
200
We are working with a complex application i.e. a physical measurement in a lab, that has approximately 230 different input parameters, many of which are ranges or multiple-value. The application produces a single output, which is then verified in an external (physical) process. At the end of the process the individual tests are marked as "success" or "fail". That is, despite the many input parameters, the output is assessed in a boolean manner. When tests fail, the parameters are 'loosened' slightly and re-tested. We have about 20,000 entries in our database, with both "success" and "fail", and we are considering a machine learning application to help in two areas: 1) Initial selection of optimum parameters 2) Suggestions for how to tune the parameters after a "fail" Many of the input parameters are strongly related to each other. I studied computer science in the mid-90s, when the focus was mostly expert systems and neural networks. We also have access to some free CPU hours of Microsoft Azure Machine Learning. What type of machine learning would fit these use-cases?
Which type of machine learning to use
CC BY-SA 3.0
null
2015-08-26T12:22:06.380
2015-08-27T14:09:17.757
2015-08-27T14:09:17.757
9420
12454
[ "machine-learning", "neural-network" ]
Using R, you could look at trees / random forests. Since you have correlated variables, you could look into projection pursuit classification trees (R package pptree). A ppforest package will also be available soon, but it is still under development. You could also combine a random forest with the forestFloor package to see the curvature of the random forest and work from there; a rough sketch of the idea is shown below.
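As a hedged illustration of that idea (using scikit-learn's random forest in Python rather than the pptree/forestFloor packages named above), the sketch below uses randomly generated placeholder data standing in for your ~230 measured parameters and the success/fail outcome:

```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: one row per test run, columns are the input parameters,
# y is 1 for "success" and 0 for "fail"
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 230))
y = rng.integers(0, 2, size=2000)

forest = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
print("CV accuracy:", cross_val_score(forest, X, y, cv=5).mean())

# Feature importances hint at which parameters matter most when a test fails
forest.fit(X, y)
print("Most influential parameters:", np.argsort(forest.feature_importances_)[::-1][:10])
```

With real data, the importances would point at the (possibly correlated) parameters worth loosening first after a "fail".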
which machine learning technique can be used?
I think these are the methods you can try out (please feel free to add more to this list): - Highly precise, with slightly lower recall: use a dictionary with almost all possibilities (manual effort, but it should be worth it). - Use Word2Vec. Mikolov has already trained word vectors on text data. Using this vector space, you can figure out which words are similar. You can experiment to find a threshold above which you call words similar (for example, yoga and exercise would have decent similarity). - Train a custom W2V model if you have enough data (this is an unsupervised model, so you don't need to worry about tagging the data, only about finding huge amounts of data relevant to the working domain). - You can use an RNN to find the most similar words in a corpus and use it for queries. This gives a bit more flexibility than W2V.
6910
1
6916
null
5
1427
Is it correct to say that any statistical learning algorithm (linear/logistic regression, SVM, neural network, random forest) can be implemented inside a Map Reduce framework? Or are there restrictions? I guess there may be some algorithms that cannot be parallelized?
Can all statistical algorithms be parallelized using a Map Reduce framework
CC BY-SA 3.0
null
2015-08-26T20:44:06.540
2016-11-14T17:51:33.400
null
null
10522
[ "machine-learning", "apache-hadoop", "map-reduce" ]
Indeed there are: - Gradient Boosting is by construction sequential, so parallelization is not really possible. - Generalized Linear Models need all the data at the same time, although technically you can parallelize some of the inner linear algebra nuts and bolts. - Support Vector Machines.
Parallel Data preprocessing
Yes - there are a lot of approaches, depending on the language and packages you are using. Assuming Python: - Multiprocessing: Dask, pool.map, modin, pandarallel, Spark - GPU: cuDF from RAPIDS - Multi-GPU: cuDF with Dask If you have an Nvidia GPU, I would highly recommend the RAPIDS framework; it has plotting, machine learning, dataframes, etc.
6921
1
6923
null
9
5884
I am looking at how to implement dropout in deep neural networks and found something counter-intuitive. In the forward phase, dropout masks the activations with a random tensor of 1s and 0s to force the net to learn the average of the weights. This helps the net generalize better. But during the update phase of gradient descent, the activations are not masked. This seems counter-intuitive to me. If I mask the activations of connections with dropout, why should I not also mask them during the gradient descent phase?
Understanding dropout and gradient descent
CC BY-SA 4.0
null
2015-08-27T19:36:53.297
2021-02-18T20:51:03.967
2021-02-18T20:51:03.967
29169
10938
[ "neural-network", "deep-learning", "gradient-descent" ]
In dropout as described [here](http://www.cs.toronto.edu/%7Ehinton/absps/JMLRdropout.pdf), weights are not masked. Instead, the neuron activations are masked, per example as it is presented for training (i.e. the mask is randomised for each forward run and gradient backprop, never repeated). The activations are masked during the forward pass, and the gradient calculations use the same mask during back-propagation of that example. This can be implemented as a modifier within a layer description, or as a separate dropout layer. During the weight update phase, typically applied per mini-batch (where each example would have had a different mask applied), there is no further use of dropout masks. The gradient values used for the update have already been affected by the masks applied during back-propagation. A useful reference for learning how dropout works, in case you want to implement it yourself, is the [Deep Learn Toolbox](https://github.com/rasmusbergpalm/DeepLearnToolbox) for Matlab/Octave.
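As a minimal sketch of that mechanism (plain NumPy, one layer, and "inverted" dropout where the rescaling happens at training time rather than the test-time scaling described in the paper), the same mask that multiplies the activations in the forward pass is reused when back-propagating that example:

```
import numpy as np

rng = np.random.RandomState(0)
p_keep = 0.5

def forward(x, W, training=True):
    a = np.maximum(0, W @ x)                            # ReLU activations
    if not training:
        return a, None
    mask = (rng.rand(*a.shape) < p_keep) / p_keep       # random 0/1 mask, rescaled
    return a * mask, mask                               # masked activations + the mask itself

def backward(grad_out, mask):
    # The *same* mask is applied to the gradient flowing back through this layer:
    # dropped units contributed nothing forward, so they receive no gradient.
    return grad_out * mask
```

The weight update that follows simply uses these already-masked gradients, which is why no extra masking happens at update time.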
Dropout in Deep Neural Networks
I guess you have not figured out the concept of dropout very well. First, the reason we apply it is that we add some noise to the architecture in order not to be dependent on any particular node. The reason is that it was observed that while training a network, after overfitting, the weights for some neurons increase and cause the network to be dependent on them. By exploiting dropout, we are no longer dependent on any node, since it is possible to drop it during training. Now, answers to your question. First, you have to bear in mind that the probability shows the chance of dropping a node in a layer. Consequently, a chance of 0.5 does not mean you will, for instance, have those two nodes. It just means that after employing dropout, the chance of dropping each node is one half. Dropout is used for layers. It is customary to use it in fully connected layers. You set the hyper-parameter, and it is the chance of keeping the nodes in the layer. While testing, you don't drop any node. We don't multiply neurons by the probability. The probability specifies the chance of existence of that node. --- Okey-doke! I have updated the answer. As you can read in the paper, > At test time, it is not feasible to explicitly average the predictions from exponentially many thinned models. However, a very simple approximate averaging method works well in practice. The idea is to use a single neural net at test time without dropout. The weights of this network are scaled-down versions of the trained weights. If a unit is retained with probability p during training, the outgoing weights of that unit are multiplied by p at test time as shown in Figure 2. This ensures that for any hidden unit the expected output (under the distribution used to drop units at training time) is the same as the actual output at test time. By doing this scaling, 2^n networks with shared weights can be combined into a single neural network to be used at test time. We found that training a network with dropout and using this approximate averaging method at test time leads to significantly lower generalization error on a wide variety of classification problems compared to training with other regularization methods. I guess the easiest way to understand it is to watch [this](https://www.coursera.org/lecture/deep-neural-network/dropout-regularization-eM33A) video. As you can see there are different implementations for that, but the reason it is multiplied is that for any hidden unit the expected output (under the distribution used to drop units at training time) is the same as the actual output at test time. To be concise, it is done in order not to change the distribution of the outputs of the layer.
6929
1
6933
null
4
170
Hello I am a layman trying to analyze game data from League of Legends, specifically looking at predicting the win rate for a given champion given an item build. ### Outline A player can own up to 6 items at the end of a game. They could have purchased these items in different orders or adjusted their inventory position during the course of the game. In this fashion the dataset may contain the following rows with: ``` champion id | items ids | win(1)/loss(0) ---------------------------------------------------------------------------- 45 | [3089, 3135, 3151, 3157, 3165, 3285] | 1 45 | [3151, 3285, 3135, 3089, 3157, 3165] | 1 45 | [3165, 3285, 3089, 3135, 3157, 3151] | 0 ``` While the items are in a different order the build is the same, my initial thought would be to simply multiply the item ids as this would give me an integer value representing that combination of 6 items. While there are hundreds of items, in reality a champion draws off a small subset (~20) of those to form the core (3 items) of their build. A game may also finish before players have had time to purchase 6 items: ``` items ids ------------------------------------------ [3089, XXXX, 3151, 3285, 3165, 0000] [XXXX, 3285, XXXX, 3165, 3151, 0000] [3165, 3285, 3089, XXXX, 0000, 0000] XXXX item from outside core subset 0000 empty inventory slot ``` As item 3089 compliments champion 45 core builds that have item 3089 have a higher win rate than core builds which are missing item 3089. The size of the data set available for each champion varies between 10000 and 100000. The mean is probably around 35000. ### Questions - Is this a suitable problem for supervised classification? - How should I approach finding groups of core items and their win rates?
Classification problem where one attribute is a vector
CC BY-SA 3.0
null
2015-08-28T12:56:52.233
2015-08-28T18:07:22.253
2020-06-16T11:08:43.077
-1
12497
[ "machine-learning", "classification", "clustering" ]
1) If you want to build a model with: ``` Input: Items bought Output: Win/Loss ``` then you will probably want to learn a non-linear combination of the inputs to represent a build. For example `item_X` may have very different purpose when paired with `item_Y` than with `item_Z`. For the input format, you may consider creating a binary vector from the item list. For example if there were only ten items, a game in which the champion purchased `items 1,4,5,9` (in any order) would look like row 1; a game where he also purchased `item 2` and `7` would look like row 2: ``` item_ID | 0 1 2 3 4 5 6 7 8 9 ________________________________________ champion_1| 0 1 0 0 1 1 0 0 0 1 champion_1| 0 1 1 0 1 1 0 1 0 1 ``` There are a variety of models that might suit this task. You might use [decision trees](https://en.wikipedia.org/wiki/Decision_tree_learning) for interpretability. A simple [neural net](https://en.wikipedia.org/wiki/Artificial_neural_network) or [SVM](https://en.wikipedia.org/wiki/Support_vector_machine) would likely also do a good job. These should all be found in most basic ML packages. 2) The win rates of various items are directly computable. Simply count the number of times a champion used the items in question and won and divide by the total number of times a champion used that item combination. You can do this for any given group size (1 to 6)
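As a rough sketch of both points (binary indicator vectors for a model, and win rates counted directly per item combination), the item ids below are taken from the question while everything else is made-up placeholder data:

```
import numpy as np
from collections import defaultdict

# Placeholder records: (champion id, set of item ids, win flag)
games = [
    (45, {3089, 3135, 3151, 3157, 3165, 3285}, 1),
    (45, {3151, 3285, 3135, 3089, 3157, 3165}, 1),
    (45, {3165, 3285, 3089, 3135, 3157, 3151}, 0),
]

# 1) Binary indicator vectors: purchase order no longer matters
all_items = sorted({i for _, items, _ in games for i in items})
col = {item: j for j, item in enumerate(all_items)}
X = np.zeros((len(games), len(all_items)), dtype=int)
y = np.array([win for _, _, win in games])
for row, (_, items, _) in enumerate(games):
    for item in items:
        X[row, col[item]] = 1

# 2) Win rate per exact item combination, counted directly
counts = defaultdict(lambda: [0, 0])          # frozenset of items -> [wins, games]
for _, items, win in games:
    counts[frozenset(items)][0] += win
    counts[frozenset(items)][1] += 1
for items, (wins, total) in counts.items():
    print(sorted(items), wins / total)
```

The matrix X and labels y can then be fed to a decision tree, neural net or SVM as mentioned above.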
How can I implement classification for this problem?
If I understand the problem correctly, the input dataset consists of 2 columns: Column A - previous case summary, Column B - the range/bin of damage awarded. You want to map a new, unseen case summary to one of the existing Column B ranges/bins, based on the similarity of the new case summary to the most similar Column A case summary text. I recently worked on a similar problem where, instead of case summaries, I had fields/labels mapped to their descriptions and I wanted to map a new/unseen field to one of the given descriptions: [Mapping of an unseen Field/word to an existing description (in the input data), given Field and their respective descriptions as input/training data](https://datascience.stackexchange.com/questions/112095/mapping-of-an-unseen-field-word-to-an-existing-description-in-the-input-data) My approach was to compute BERT embeddings, take the cosine similarity between fields/labels, and, based on the similarity value to one of the existing fields, take its description. This could be one of the approaches. Let me know if you need the sample code. Happy to help.
6931
1
6941
null
0
1052
I tried SOMpy, though it is very crude now and works only with the oldest versions of matplotlib. Is there any fancy lib that can build a SOM from an array and visualize it in Python?
Any usable libs to build and visualise SOM in python?
CC BY-SA 3.0
null
2015-08-28T14:53:34.843
2020-08-03T08:57:42.937
null
null
12503
[ "python", "visualization" ]
You could try [this package](http://www.pymvpa.org/examples/som.html); there is a working example on that page. If what you are interested in is manifold learning, you could also use [many packages](http://scikit-learn.org/stable/modules/manifold.html) from sklearn.
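Another small library worth a look is MiniSom; I believe the sketch below is roughly how its API works, but double-check the current documentation, as signatures may differ between versions, and the data here is a random placeholder:

```
import numpy as np
from minisom import MiniSom          # pip install minisom

data = np.random.rand(500, 4)        # placeholder data with 4 features

som = MiniSom(10, 10, input_len=4, sigma=1.0, learning_rate=0.5)
som.random_weights_init(data)
som.train_random(data, 1000)         # 1000 training iterations

u_matrix = som.distance_map()        # average distance to neighbouring nodes
best_node = som.winner(data[0])      # winning node coordinates for one sample
```

The U-matrix can then be plotted with matplotlib (for example with plt.pcolor) for a quick visualisation.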
Are there any python based data visualization toolkits?
There is a Tableau API and you can use Python with it, but maybe not in the sense that you think. There is a Data Extract API that you could use to import your data into Python and do your visualizations there, so I do not know if this is going to answer your question entirely. As noted in the first comment, you can use Matplotlib from the [Matplotlib website](http://www.matplotlib.org), or you could install Canopy from Enthought, which has it available; there is also Pandas, which you could also use for data analysis and some visualizations. There is also a package called `ggplot`, which is used in `R` a lot but is also made for Python, and which you can find here: [ggplot for python](https://pypi.python.org/pypi/ggplot). The Tableau Data Extract API and some information about it can be found [at this link](http://www.tableausoftware.com/new-features/data-engine-api-0). There are a few web sources that I found concerning it using DuckDuckGo [at this link](https://duckduckgo.com/?q=tableau%20PYTHON%20API&kp=1&kd=-1). Here are some samples: [Link 1](https://www.interworks.com/blogs/bbickell/2012/12/06/introducing-python-tableau-data-extract-api-csv-extract-example) [Link 2](http://ryrobes.com/python/building-tableau-data-extract-files-with-python-in-tableau-8-sample-usage/) [Link 3](http://nbviewer.ipython.org/github/Btibert3/tableau-r/blob/master/Python-R-Tableau-Predictive-Modeling.ipynb) As far as an API like Matplotlib, I cannot say for certain that one exists. Hopefully this gives some sort of reference to help answer your question. Also, to help avoid closure flags and downvotes, you should try to show some of what you have tried to do or find; this makes for a better question and helps to elicit responses.
6939
1
8224
null
4
11700
Suppose I have 100 positive samples. How many negative samples do I need in order to make the classifier work best? In many papers, I have noticed that they take 4 or 5 times the number of positive samples as negative samples. Will such a data set be useful?
Ratio of positive to negative sample in data set for best classification
CC BY-SA 3.0
null
2015-08-29T10:27:02.597
2021-02-12T12:15:39.243
2015-09-28T21:17:48.647
97
8013
[ "classification", "dataset", "class-imbalance" ]
I guess you are not limited to these 100 samples. Generate more, and let every 5th be negative. Then reduce the number of positives by randomly removing 4/5 of them. And check this out: [Training imbalanced data set](https://datascience.stackexchange.com/questions/1107/quick-guide-into-training-highly-imbalanced-data-sets) This is a small quantity; you'd better have 50:50 negative vs positive.
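If you instead end up with many more negatives than positives and want to move towards that 50:50 ratio, a simple random-undersampling helper might look like this sketch (plain NumPy; X and y are placeholders for your feature matrix and 0/1 labels):

```
import numpy as np

rng = np.random.default_rng(0)

def undersample(X, y, ratio=1.0):
    # Keep every positive and a random subset of negatives of size ratio * n_positives
    pos = np.where(y == 1)[0]
    neg = np.where(y == 0)[0]
    keep = rng.choice(neg, size=min(len(neg), int(ratio * len(pos))), replace=False)
    idx = np.concatenate([pos, keep])
    rng.shuffle(idx)
    return X[idx], y[idx]
```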
What do you call the ratio of positive to negative samples?
You can use class ratio / sample class ratio, which will make it more intuitive for any reader going through the details. Since it's not used for model performance analysis, I don't think we have a metric name for this.
6950
1
6951
null
0
1241
I'm performing a logistic regression on my training data. I used the glm function to get the model m. Then, using the code below from this [link](https://cran.r-project.org/doc/contrib/Sharma-CreditScoring.pdf), I calculated the AUC: > `test$score <- predict(m, type = 'response', test)` `pred <- prediction(test$score, test$good_bad)` `perf <- performance(pred, "tpr", "fpr")` where score is the dependent variable (0 or 1). To compute the tpr (true positive rate) and fpr (false positive rate), you have to classify the predicted probabilities into 1 or 0. What is the cutoff used for that? How can we change it? I could not find anything useful in the [main documentation](http://www.hpl.hp.com/techreports/2003/HPL-2003-4.pdf) either.
How does Performance function classify predictions as positive or negative? Package:ROCR
CC BY-SA 3.0
null
2015-08-31T10:13:43.080
2016-10-08T04:22:42.267
null
null
10050
[ "r", "classification", "logistic-regression" ]
I can't access an R console at the moment to check, but I'm quite certain the cutoff is 0.5: if your glm model does prediction, it first produces real values and then applies the link function on top. To the best of my knowledge, you can't change it inside the glm function, so your best bet is probably to check the ROC curve, find what the optimal threshold is, and use that as the cutoff.
Clasification - ROC Curve with very high number of false negatives
Notice how the `precision` is very high and all of the other metrics are very low. Now look at the class balance of your problem: $$TP+FN=\text{Actual Positive}=31,245$$ $$TN+FP=\text{Actual Negative}=508$$ So your data is heavily skewed toward positives. To have gotten a model that is producing this poorly, I think you may have provided the model with the `precision` as the cross validation metric. The `precision` is a very bad cross validation metric in this case since it will result in poor `accuracy` and poor `recall`. `accuracy` is also not a good metric as your model could classify everything as positive and get an accuracy of: $$AC=\frac{31,245}{31,763}=0.984$$ For cases like this where the classes are grossly weighted toward one value, I suggest using the `F1-score` as your cross validation metric. The `F1-score` is the `harmonic mean` of `precision` and `recall` and hence balances these two factors nicely. Wikipedia actually has a [very nice explanation of classification metrics here](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_.28classification_context.29) and [this paper is top notch](http://rali.iro.umontreal.ca/rali/sites/default/files/publis/SokolovaLapalme-JIPM09.pdf) if you ever need to understand multi-class metrics and confusion matrices. Hope this helps!
6960
1
7000
null
2
5628
I am trying to predict tags for Stack Overflow questions and I am not able to decide which machine learning algorithm would be the correct approach for this. Input: As a dataset I have mined Stack Overflow questions; I have tokenized the data set and removed stopwords and punctuation from this data. Things I have tried: - TF-IDF - Trained Naive Bayes on the dataset and then gave user-defined input to predict tags, but it's not working correctly - Linear SVM Which ML approach should I use, supervised or unsupervised? If possible, please suggest a correct ML approach from scratch. PS: I have the list of all tags present on Stack Overflow, so will this help in any way? Thanks
StackOverflow Tags Predictor...Suggest an Machine Learning Approach please?
CC BY-SA 3.0
null
2015-08-31T18:39:36.183
2015-09-03T22:30:58.170
null
null
12557
[ "machine-learning", "classification" ]
This exact problem was a Kaggle competition sponsored by Facebook. The particular forum thread of interest for you is the one where many of the top competitors explained their methodology; it should provide you with more information than you were probably looking for: [https://www.kaggle.com/c/facebook-recruiting-iii-keyword-extraction/forums/t/6650/share-your-approach](https://www.kaggle.com/c/facebook-recruiting-iii-keyword-extraction/forums/t/6650/share-your-approach) In general, it appears that most people treated the problem as a supervised one. Their primary feature was a tf-idf, or unweighted BOW, representation of the text, and they ensembled thousands of single-tag models. Owen, the winner of the competition, noted that the title text was a more powerful feature than the content of the body of the post.
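In that spirit, here is a hedged scikit-learn sketch of the tf-idf plus one-model-per-tag idea; the toy questions and tags are invented, and the competitors linked above used far larger models and ensembles:

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

texts = ["How do I join two tables in SQL?",
         "Segfault when freeing a pointer in C"]
tags = [["sql", "database"], ["c", "pointers"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)                       # one binary column per tag

clf = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),  # one model per tag
)
clf.fit(texts, Y)
pred = clf.predict(["Why does my SQL query return duplicate rows?"])
print(mlb.inverse_transform(pred))
```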
What machine learning algorithms to use for unsupervised POS tagging?
There are no unsupervised methods for training a POS tagger that perform comparably to human annotation or supervised methods. [The current state-of-the-art supervised methods for training POS taggers are Long Short-Term Memory (LSTM) neural networks](http://aclweb.org/anthology/D17-1076).
6970
1
6979
null
5
10213
I am new to machine learning and I am confused by the terminology. Thus far, I have viewed a hypothesis class as the set of different instances of a hypothesis function... Example: if we are talking about linear classification, then the different lines characterized by different weights would together form the hypothesis class. Is my understanding correct, or can a hypothesis class represent anything which could approximate the target function? For instance, can a linear and a quadratic function that approximate the target function together form a single hypothesis class, or are they from different hypothesis classes?
Newbie: What is the difference between hypothesis class and models?
CC BY-SA 3.0
null
2015-09-01T09:57:10.733
2015-09-01T21:54:01.573
2015-09-01T18:32:16.947
9420
12569
[ "machine-learning" ]
Your hypothesis class consists of all possible hypotheses that you are searching over, regardless of their form. For convenience's sake, the hypothesis class is usually constrained to be only one type of function or model at a time, since learning methods typically only work on one type at a time. This doesn't have to be the case, though: - Hypothesis classes don't have to consist of only one type of function. If you're searching over all linear, quadratic, and exponential functions, then those are what your combined hypothesis class contains. - Hypothesis classes also don't have to consist of only simple functions. If you manage to search over all piecewise-$\tanh^2$ functions, then those functions are what your hypothesis class includes. The big tradeoff is that the larger your hypothesis class, the better the best hypothesis models the underlying true function, but the harder it is to find that best hypothesis. This is related to the [bias–variance tradeoff](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff).
What is the difference between classification and regression?
For the sake of illustration, let's imagine that you're trying to predict the amount of gas in the tank of your car. A classification problem statement of this question would be whether you have gas in your car (yes or no). A regression statement of this problem would predict the level of gas in your car (anywhere between completely full or completely empty) and could take any value. The output of a classification model can be one of n options, where n is the number of classes (and/or the probability associated with each class). The output of a regression model is a (possibly bounded) continuous value.
6988
1
8001
null
5
1768
I would like to understand regularization/shrinkage in the light of MLE/gradient descent. I know both concepts, but I do not know/understand whether both are used to determine the coefficients of a linear model. If so, what are the steps followed? To elaborate further: regularization is used to reduce variance, which is accomplished by penalizing the coefficients of a linear model. The tuning parameter, lambda, is determined through cross-validation. Once lambda is determined, the coefficients are automatically determined, right? Hence, why do we need to minimize (RSS + regularization term) to find the coefficients? Are the steps the following: - Find lambda through cross-validation - Minimize (RSS + regularization) through MLE or GD - Find coefficients - Penalize coefficients to decrease variance - We are left with a small subset of coefficients
Connection between Regularization and Gradient Descent
CC BY-SA 3.0
null
2015-09-02T15:21:23.047
2015-09-04T04:48:33.520
null
null
12350
[ "machine-learning", "data-mining" ]
The fitting procedure is the one that actually finds the coefficients of the model. The regularization term is used to indirectly find the coefficients by penalizing big coefficients during the fitting procedure. A simple (albeit somewhat biased/naive) example might help illustrate this difference between regularization and gradient descent: ``` X, y <- read input data for different values of lambda L for each fold of cross-validation using X,y,L theta <- minimize (RSS + regularization using L) via MLE/GD score <- calculate performance of model using theta on the validation set if average score across folds for L is better than the current best average score L_best <- L ``` As you can see, the fitting procedure (MLE or GD in our case) finds the best coefficients given the specific value of lambda. As a side note, I would look at this answer [here](https://stats.stackexchange.com/questions/137481/how-bad-is-hyperparameter-tuning-outside-cross-validation) about tuning the regularization parameter, because it tends to get a little bit murky in terms of bias.
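The same loop can be sketched with scikit-learn, where GridSearchCV plays the role of the outer cross-validation over lambda (called alpha there) and Ridge's fit is the inner minimization of RSS plus the L2 penalty; the data below is a random placeholder:

```
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=200)

# For each candidate alpha, the inner fit minimizes RSS + alpha * ||theta||^2;
# cross-validation then scores that fitted model on held-out folds.
search = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X, y)
print("best lambda:", search.best_params_["alpha"])
print("coefficients:", search.best_estimator_.coef_)
```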
Why would we add regularization loss to the gradient itself in an SVM?
The l2 regularization term is being added to the loss itself. But then you need to find the gradient of this new loss; since gradients are additive, this is the same as the gradient of the unpenalized loss plus the gradient of the l2 term, the latter of which is the quantity specified in the last line of code. Note that it makes sense: when updating the weights, you will subtract some multiple of the gradient, so are moving the weights opposite their current location, i.e. toward the origin, as you expect regularization to accomplish.
6989
1
8006
null
3
1030
For the first time, I am playing around with a [Cascade Classifier](http://docs.opencv.org/doc/user_guide/ug_traincascade.html) with the OpenCV package (also new to the latter). I realized that it would probably be faster to write my own GUI/script to generate the needed positive and negative images from the set of images I have than to open each file in Photoshop or Paint, but I also suspect this has been done many times before. In particular, I am looking for a GUI that lets users page through files in a directory and then use mouse clicks to draw rectangles on a particular image and have the coordinates of the rectangle recorded for later purposes. Any suggestions? If not, I'll be sure to post a link when/if I finish this. It seems like something of general enough utility that I am surprised I can't find it in the OpenCV package itself.
Python script/GUI to generate positive/negative images for CascadeClassifier?
CC BY-SA 3.0
null
2015-09-02T18:20:20.493
2018-12-11T15:38:25.610
2015-09-04T15:26:46.787
12586
12586
[ "classification", "image-classification" ]
So I wrote the script. It gave me an excuse to learn Tkinter. It's pasted below. Note this is a one-off, not a model of good programming practice! If anyone uses this and has bugs or suggestions, let me know. Here's the [git link](https://github.com/sunnysideprodcorp/CascadeImagesorter/) and model code is pasted below: ``` import Tkinter import Image, ImageTk from Tkinter import Tk, BOTH from ttk import Frame, Button, Style import cv2 import os import time import itertools IMAGE_DIRECTORY = # Directory of existing files POSITIVE_DIRECTORY = # Where to store 'positive' cropped images NEGATIVE_DIRECTORY = # Where to store 'negative' cropped images (generated automatically based on 'positive' image cropping IMAGE_RESIZE_FACTOR = # How much to scale images for display purposes. Images are not scaled when saved. # Everything stuffed into one class, not exactly model programming but it works for now class Example(Frame): def __init__(self, parent, list_of_files, write_file): Frame.__init__(self, parent) self.parent = parent self.list_of_files = list_of_files self.write_file = write_file self.image = None self.canvas = None self.corners = [] self.index = -1 self.loadImage() self.initUI() self.resetCanvas() def loadImage(self): self.index += 1 img = cv2.imread(self.list_of_files[self.index]) print(self.list_of_files[self.index]) while not img.shape[0]: self.index += 1 img = cv2.imread(self.list_of_files[self.index]) self.cv_img = img img_small = cv2.resize(img, (0,0), fx = IMAGE_RESIZE_FACTOR, fy = IMAGE_RESIZE_FACTOR) b, g, r = cv2.split(img_small) img_small = cv2.merge((r,g,b)) im = Image.fromarray(img_small) self.image = ImageTk.PhotoImage(image=im) def resetCanvas(self): self.canvas.create_image(0, 0, image=self.image, anchor="nw") self.canvas.configure(height = self.image.height(), width = self.image.width()) self.canvas.place(x = 0, y = 0, height = self.image.height(), width = self.image.width()) def initUI(self): self.style = Style() self.style.theme_use("default") self.pack(fill=BOTH, expand=1) print "width and height of image should be ", self.image.width(), self.image.height() self.canvas = Tkinter.Canvas(self, width = self.image.width(), height = self.image.height()) self.canvas.bind("<Button-1>", self.OnMouseDown) self.canvas.pack() nextButton = Button(self, text="Next", command=self.nextButton) nextButton.place(x=0, y=0) restartButton = Button(self, text="Restart", command=self.restart) restartButton.place(x=0, y=22) def nextButton(self): new_img = self.cv_img[self.corners[0][1]/IMAGE_RESIZE_FACTOR:self.corners[1][1]/IMAGE_RESIZE_FACTOR, self.corners[0][0]/IMAGE_RESIZE_FACTOR:self.corners[1][0]/IMAGE_RESIZE_FACTOR] files = self.list_of_files[self.index].split("/") try: os.stat(POSITIVE_DIRECTORY+files[-2]) except: os.mkdir(POSITIVE_DIRECTORY+files[-2]) print("saving to ", "{}{}/{}".format(POSITIVE_DIRECTORY, files[-2], files[-1])) cv2.imwrite("{}{}/{}".format(POSITIVE_DIRECTORY, files[-2], files[-1]), new_img) self.saveNegatives(files) self.restart() self.loadImage() self.resetCanvas() def saveNegatives(self, files): low_x = min(self.corners[0][0], self.corners[1][0])/IMAGE_RESIZE_FACTOR high_x = max(self.corners[0][0], self.corners[1][0])/IMAGE_RESIZE_FACTOR low_y = min(self.corners[0][1], self.corners[1][1])/IMAGE_RESIZE_FACTOR high_y = max(self.corners[0][1], self.corners[1][1])/IMAGE_RESIZE_FACTOR try: os.stat(NEGATIVE_DIRECTORY+files[-2]) except: os.mkdir(NEGATIVE_DIRECTORY+files[-2]) new_img = self.cv_img[ :low_y, :] cv2.imwrite("{}{}/{}{}".format(NEGATIVE_DIRECTORY, files[-2], 
"LY", files[-1]), new_img) new_img = self.cv_img[ high_y: , :] cv2.imwrite("{}{}/{}{}".format(NEGATIVE_DIRECTORY, files[-2], "HY", files[-1]), new_img) new_img = self.cv_img[ :, :low_x ] cv2.imwrite("{}{}/{}{}".format(NEGATIVE_DIRECTORY, files[-2], "LX", files[-1]), new_img) new_img = self.cv_img[:, high_x: ] cv2.imwrite("{}{}/{}{}".format(NEGATIVE_DIRECTORY, files[-2], "HX", files[-1]), new_img) def restart(self): self.corners = [] self.index -=1 self.canvas.delete("all") self.loadImage() self.resetCanvas() def OnMouseDown(self, event): print(event.x, event.y) self.corners.append([event.x, event.y]) if len(self.corners) == 2: self.canvas.create_rectangle(self.corners[0][0], self.corners[0][1], self.corners[1][0], self.corners[1][1], outline ='cyan', width = 2) def main(): root = Tk() root.geometry("250x150+300+300") list_of_files = [] file_names = [] walker = iter(os.walk(IMAGE_DIRECTORY)) next(walker) for dir, _, _ in walker: files = [dir + "/" + file for file in os.listdir(dir)] list_of_files.extend(files) file_names.extend(os.listdir(dir)) list_of_processed_files = [] processed_file_names = [] walker = iter(os.walk(POSITIVE_DIRECTORY)) next(walker) for dir, _, _ in walker: files = [dir + "/" + file for file in os.listdir(dir)] list_of_processed_files.extend(files) processed_file_names.extend(os.listdir(dir)) good_names = set(file_names) - set(processed_file_names) list_of_files = [f for i, f in enumerate(list_of_files) if file_names[i] in good_names] app = Example(root, list_of_files, IMAGE_DIRECTORY+"positives") root.mainloop() if __name__ == '__main__': main() ```
What's the best strategy to train a CNN with images that only have labels for positive characteristics?
I think your best bet would be transfer learning. Start with a model that has already been trained with a wider dataset such as the ones presented [here](https://www.tensorflow.org/tutorials/image_recognition). From there you can train the model with your specific dataset. You can then use output nodes for the labels which you have available to you, and you can get the predictions for the other images from the pre-trained model which are usually trained for thousands of different classes. Alternatively, you can train a model with all the output classes you have in your label set and another output for "other". Then when an output node is selected you can pass that same input to the pre-trained model which was trained with your data as well.
8009
1
8163
null
5
2914
I am researching to implement RMSProp in a neural network project I am writing. I have not found any published paper to refer for a canonical version - I first stumbled across the idea from a [Coursera class presented by Geoffrey Hinton](https://www.coursera.org/course/neuralnets) (lecture 6 I think). I don't think the approach has ever been formally published, despite many gradient-descent optimisation libraries having an option called "RMSProp". In addition, my searches are showing up a few variations of the original idea, and it is not clear why they differ, or whether there is a clear reason to use one version over another. The general idea behind RMSProp is to scale learning rates by a moving average of current gradient magnitude. On each update step, the existing squared gradients are averaged into a running average (which is "decayed" by a factor) and when the network weight params are updated, the updates are divided by the square roots of these averaged squared gradients. This seems to work by stochastically "feeling out" the second order derivatives of the cost function. Naively, I would implement this as follows: Params: - $\gamma$ geometric rate for averaging in [0,1] - $\iota$ numerical stability/smoothing term to prevent divide-by-zero, usually small e.g 1e-6 - $\epsilon$ learning rate Terms: - $W$ network weights - $\Delta$ gradients of weights i.e. $\frac{\partial E}{\partial W}$ for a specific mini-batch - $R$ RMSProp matrix of running average squared weights Initialise: - $R \leftarrow 1$ (i.e. all matrix cells set to 1) For each mini-batch: - $R \leftarrow (1-\gamma)R + \gamma \Delta^2$ (element-wise square, not matrix multiply) - $W = W - \epsilon \frac{\Delta}{\sqrt{R + \iota}}$ (all element-wise) I have implemented and used a version similar to this before, but that time around I did something different. Instead of updating $R$ with a single $\Delta^2$ from the mini-batch (i.e. gradients summed across the mini-batch, then squared), I summed up each individual example gradient squared from the mini-batch. Reading up on this again, I'm guessing that's wrong. But it worked reasonably well, better than simple momentum. Probably not a good idea though, because of all those extra element-wise squares and sums needed, it will be less efficient if not required. So now I am discovering further variations that seem to work. They call themselves RMSProp, and none seems to come with much rationale beyond "this works". For example, the Python `climin` library [seems to implement what I suggest above](http://climin.readthedocs.org/en/latest/rmsprop.html), but then suggests a further combination with momentum with the teaser "In some cases, adding a momentum term β is beneficial", with a partial explanation about adaptable step rates - I guess I'd need to get more involved in that library before fully understanding what they are. In another example the [downhill library's RMSProp implementation](http://downhill.readthedocs.org/en/stable/generated/downhill.adaptive.RMSProp.html) combines two moving averages - one is the same as above, but then another, the average of gradients without squaring is also tracked (it is squared and taken away from the average of squared weights). I'd really like to understand more about these alternative RMSProp versions. Where have they come from, where is the theory or intuition that suggests the alternative formulations, and why do these libraries use them? Is there any evidence of better performance?
Implementing RMSProp, but finding differences between reference versions
CC BY-SA 3.0
null
2015-09-04T21:24:25.313
2019-01-03T11:58:50.247
null
null
836
[ "neural-network", "gradient-descent" ]
RMSProp is indeed an unpublished method, and in the lecture Geoffrey Hinton gives just the general idea behind RMSProp - to divide the gradient by a moving average of the gradient magnitude. The lecture has disappeared from YouTube, but you can find the slides at the end of this PDF: [https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) When this principle is applied to Stochastic Gradient Descent, the update rule you showed is obtained. Since Hinton did not propose an exact algorithm, this principle has been applied to different optimization methods. I agree it's confusing that all these methods call themselves RMSProp. `climin` implements RMSProp with Nesterov momentum. Momentum methods try to avoid the oscillation that often happens with SGD by slowly changing the current direction of updates. The algorithm given in the `climin` documentation introduces the $\beta$ parameter that controls how much of the previous update direction is retained. Nesterov momentum is implemented by first taking a step towards the previous update direction $v_t$, calculating the gradient at that position, using the gradient to obtain the new update direction $v_{t+1}$, and finally updating the parameters. The `climin` implementation also includes the smoothing term $\iota$ inside the square root for stability (1e-8), even though it's not mentioned in the documentation. The implementation in the Downhill library is based on the algorithm described in the [paper by A. Graves](https://arxiv.org/pdf/1308.0850.pdf). In the article (equations 38–40) the square of the average gradient is subtracted from the average square gradient. Apparently the idea is to approximate the variance of the gradient instead of its magnitude (recall that the variance of a random variable is equal to the mean of the square minus the square of the mean).
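For concreteness, here is a small NumPy sketch of the two update rules being discussed; note that it follows the more common convention where the decay factor multiplies the old average (so decay corresponds to 1 - gamma in the question's notation), and the learning rate, decay and smoothing values are arbitrary placeholders:

```
import numpy as np

def rmsprop_step(w, grad, R, lr=1e-3, decay=0.9, eps=1e-6):
    # Plain RMSProp: divide by a running average of the gradient magnitude
    R = decay * R + (1 - decay) * grad ** 2
    w = w - lr * grad / np.sqrt(R + eps)
    return w, R

def graves_rmsprop_step(w, grad, R, G, lr=1e-3, decay=0.9, eps=1e-6):
    # Graves' variant: also track the running mean of the gradient and divide by
    # an estimate of its standard deviation (mean of square minus square of mean)
    R = decay * R + (1 - decay) * grad ** 2
    G = decay * G + (1 - decay) * grad
    w = w - lr * grad / np.sqrt(R - G ** 2 + eps)
    return w, R, G
```

Momentum, as in the climin version, would then be layered on top of either of these updates.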
Layman's comparison of RMSE
Typically you want a smaller RMSE, and without getting into detail it should be sufficient to just take the smaller one. However, I am concerned because you state that the models were run on the same dataset but at different timeframes. Since the RMSE scale depends on the scale of the dependent variable, it's entirely possible that these two timeframes are scaled differently. A somewhat contrived example would be energy consumption: I would expect a model trained on daytime consumption to have a higher RMSE than one trained between 1am and 3am. In that case, comparing the RMSEs may be meaningless. You can try to normalize your data and RMSE to help with this, but I'm unsure if AWS provides this ability. As for your second question, you really won't get a 75%-accurate number for regression. You can look at the deviations of the residuals or do cross-validation and see how well the model performs. Again, this may not be possible in AWS. Edit: I just realized that the histograms were residual plots. Do three things: increase the bin size, check whether the residuals are centered around 0, and then check whether there is skewness in the data. If the data is centered around 0 and symmetric, then you can say the model error is basically random and does not favor over- or under-predicting. If the data is not centered around 0 and there is skewness, then the errors can be systematic, and in that case consider adding more variables.
8024
1
8348
null
8
1634
Can't reinforcement learning be used without the help of other learning algorithms like SVMs and MLP backpropagation? I consulted two papers: - Paper 1 - Paper 2 Both have used other machine learning methods in the inner loop.
Does reinforcement learning require the help of other learning algorithms?
CC BY-SA 3.0
null
2015-09-07T08:29:25.507
2021-02-18T08:55:51.597
2021-02-18T08:55:51.597
85045
8013
[ "machine-learning", "reinforcement-learning", "algorithms" ]
You do not need additional learning algorithms to perform reinforcement learning in simple systems where you can explore all states. For those, simple iterative [Q-learning](https://en.wikipedia.org/wiki/Q-learning) can do very well - as well as a variety of similar techniques, such as Temporal Difference, SARSA. All these can be used without neural networks, provided your problem is not too big (typically under a few million state/action pairs). The simplest form of Q-learning just stores and updates a table of `<state, action> => <estimated reward>` pairs. There is no deeper statistical model inside that. Q-learning relies on estimates of reward from this table in order to take an action and then updates it with a more refined estimate after each action. Q-learning and related techniques such as Temporal Difference are sometimes called model free. However, this does not refer to the absence of a statistical model such as a neural net. Instead, it means that you do not need to have a model of the system you are learning to optimise available, such as knowing all the probabilities of results and consequences of actions in a game. In model free RL, all learning can be done simply by experiencing the system as an agent (if you do have a model then it may still be used for simulation or planning). When considering whether or not you need a neural network, then the term tabular is used for systems that work with explicit value estimates for every possible state or state/action pair. And the term function approximation is used to describe how a neural network is used in the context of RL. For large, complex problems, which may even have infinite possible states, it is not feasible to use tabular methods, and you need good generalised value estimates based on some function of the state. In those cases, you can use a neural network to create a function approximator, that can estimate the rewards from similar states to those already seen. The neural network replaces the function of the simple table in tabular Q-Learning. However, the neural network (or other supervised ML algorithm) does not perform the learning process by itself, you still need an "outer" RL method that explores states and actions in order to provide data for the NN to learn.
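To make the tabular case concrete, here is a minimal sketch of the Q-learning update with an epsilon-greedy policy; the environment, states and action names are placeholders:

```
import random
from collections import defaultdict

Q = defaultdict(float)                    # (state, action) -> estimated return
alpha, gamma, epsilon = 0.1, 0.99, 0.1
actions = ["left", "right"]               # placeholder action set

def choose_action(state):
    # Epsilon-greedy exploration straight from the table - no neural network involved
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

With function approximation, the table Q would be replaced by a model whose predictions are trained towards the same target, reward + gamma * best_next.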
Reinforcement learning algorithms
As your question was focused on reinforcement learning with RStudio I.e., in R language BOOKS - Hands on Reinforcement learning with R You Tube - Reinforcement Learn Techniques with R, packtpub tutorial series Reinforcement Learn Techniques with R : What Reinforcement Learning Can Do for You | packtpub.com Your First Reinforcement Learning Program Programming the Environment | packtpub.com - Discover Algorithms for Reward-Based Learning in R | packtpub.com The Course Overview First model based program: Policy Evaluation and Iteration Programming model free environment using Monte Carlo & Q- learning Building Actions, Rewards, Punishments using Simulated Annealing Alt to Q-Learning - Hands on Reinforcement learning with R | code in action (packt) Markov decision process in action Multi-Armed bandit models Dynamic programming for optimal policies Monte Carlo methods for prediction Temporal difference learning Reinforcement learning in Game applications MAB for financial engineering TD learning in healthcare Exploring deep reinforcement learning methods Deep Q learning using keras PDF - Reinforcement Learning in R - Reinforcement Learning in R by Nicolas Pröllochs, Stefan Feuerriegel Tutorial links - HOW TO PERFORM REINFORCEMENT LEARNING WITH R - Reinforcement Learning (Q-learning) An Introduction (Part 1) Implementation using R (Part 2) COURSES - Reinforcement learning with R:Algorithms-Agents-Environment-Udemy enter link description here - Reinforcement learning specialisation Coursera Lecture NOTES - Reinforcement learning R slides - Algorithms for Reinforcement Learning OTHER GENERAL RESOURCES (not specific to R) - The chapter by Bertsekas - Mastering Reinforcement Learning with Python: Build next-generation, self-learning models using reinforcement learning techniques and best practices - Reinforcement Learning Algorithms with Python: Learn, understand, and develop smart algorithms for addressing AI challenges - Python Reinforcement Learning Projects: Eight hands-on projects exploring reinforcement learning algorithms using TensorFlow - Reinforcement Learning: Industrial Applications of Intelligent Agents - Handbook of Reinforcement Learning and Control: 325 (Studies in Systems, Decision, and Control) - Algorithms for Reinforcement Learning: Csaba Szepesvari. Nice compendium of ready to be implemented algorithms. - Reinforcement Learning and Dynamic Programming using Function Approximators. Busoniu, Lucian; Robert Babuska ; Bart De Schutter ; Damien Ernst (2010). This is a very practical book that explains some state-of-the-art algorithms (i.e., useful for real world problems) like fitted-Q-iteration and its variations. - Reinforcement Learning: State-of-the-Art. Vol. 12 of Adaptation, Learning and Optimization. Wiering, M., van Otterlo, M. (Eds.), 2012. Springer, Berlin. In Sutton's words "This book is a valuable resource for students wanting to
go beyond the older textbooks and for researchers wanting to easily catch up with
recent developments". - Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles : Draguna Vrabie, Kyriakos G. Vamvoudakis , Frank L. Lewis. I am not familiar with this one, but I have seen it recommended. - Markov Decision Processes in Artificial Intelligence, Sigaud O. & Buffet O. editors, ISTE Ld., Wiley and Sons Inc, 2010. I definitely suggest the books by Sutton and Barto as an excellent intro, the chapter by Bertsekas for getting a solid theoretical background and the book by Busoniu et al. for practical algorithms that can solve some non-toy problems. I also find useful the book by Szepesvari as a quick reference for understanding an comparing algorithms. There are also several good specialized monographs and surveys on the topic, some of these are: - "From Bandits to Monte-Carlo Tree Search: The Optimistic Principle Applied to Optimization and Planning" by Remi Munos (New trends on Machine Learning). This monograph covers important nonconvex optimistic optimization methods that can be applied for policy search. - "Reinforcement Learning in Robotics: A Survey" by J. Kober, J. A. Bagnell and J. Peters. - "A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning" by A. Geramifard, T. J. Walsh, S. Tllex, G. Chowdhary, N. Roy and J. P. How (Foundations and Trends in Machine Learning). - "A Survey on Policy Search for Robotic" by Newmann and Peters (Foundations and Trends in Machine Learning). - markov decision process - Algorithms for Reinforcement Learning (Synthesis Lectures on Artificial Intelligence and Machine Learning) - Neuro-Dynamic Programming (Optimization and Neural Computation Series, 3)
8029
1
8059
null
3
477
Consider the application: - We have a set of users and items. - Users can perform different action types (think browsing, clicking, upvoting etc.) on different items. - Users and items accumulate a "profile" for each action type. - For users such profile is a list of items on which they had performed a given action type - For items such profile is a list of users who performed a given action type on them - We assume that accumulated profiles define future actions. - We want to predict the action a user will take using supervised learning (classification with probability estimation) Consider the following problem: - These profiles can be very sparse (millions of items and 100 million users) and it is not feasible to use them directly as features - We would like to compute "compressed" profiles (eigenprofiles?:)) with dimensionality < 300 that can then be efficiently stored and fed to different classification algorithms - Before you say "Use TruncatedSVD/Collapsed Gibbs Sampling/Random Projections on historical data" bare with me for a second. - Enters concept drift. - New items and users are being introduced all the time to the system. - Old users churn. - Old items churn. - At some point there are items with most of the users never seen in historical data and users with only fresh items. - Before you say "retrain periodically", remember that we have a classifier in the pipeline that was taught on the "historic" decomposition and the new decomposition could assign entirely different "meaning" to cells of output vectors (abs(decompose_v1(sample)[0] - decompose_v2(sample)[0]) >> epsilon) rendering this classifier unusable. Some requirements: - The prediction service has to be available 24/7. - The prediction cannot take more than 15ms and should use a maximum of 4 cpu cores (preferably only one) Some ideas I had so far: - We could retrain the classifier on the new decomposition but this would mean that we have to re-run the decomposition on the whole training dataset (with snapshot of profiles at the time of the event we want to predict) and the whole database (all current profiles) plus store it. To make this work we would have to have a second database for storing the decomposed profiles that would be hot-swapped once the new retrained model is ready and all profiles have been decomposed. This approach is quite inefficient in both computational resources and storage resources (this is expensive storage because the retrieval has to be super-fast) - We could retrain the classifier as in solution 1. But do the decomposition ad_hoc. This puts a lot of constraints on the speed of the decomposition (has to have sub-millisecond computation times for a single sample). This does a lot of redundant computation (especially for item profiles) unless we add an extra caching layer. This avoids redundant storage and redundant computation of churned users/items at the cost of extra prediction latency and extra caching layer complexity. - <---- Please help me here We could use one of online learning algorithms such as VFT or Mondrian Forests for the classifier - so no more retraining + nice handling of concept drift. We would need an online algorithm for decomposition that satisfies strict requirements: a) at least a part of output vectors should be stable between increments (batches). 
b) it can introduce new features to account for new variance in the data but should do so at a controllable rate c) should not break if it encounters new users/items Questions/points of action: - Please evaluate my proposed solutions and propose alternatives - Please provide algorithms suitable for online learning and online decomposition (if they exist) as described in alternative 3. Preferably with efficient python/scala/java implementations with a sufficient layer of abstraction to use them in a web service (python scripts that take in a text file as dataset would be much less valuable than scikit modules) - Please provide links to relevant literature that dealt with similar problems/describes algorithms that could be suitable - Please share experiences/caveats/tips that you learned while dealing with similar problems Some background reading that you may find useful: - quora question on ad click prediction - google "view from the trenches" - criteo paper on click prediction - facebook on predicting ad clicks Disclaimer: Our application is not strictly ad conversion prediction and some problems such as rarity do not apply. The event we would like to predict has 8 classes and occurs c.a. 0.3%-3% of times a user browses an item.
Online/incremental unsupervised dimensionality reduction for use with classification for event prediction
CC BY-SA 3.0
null
2015-09-07T16:14:16.143
2015-09-10T15:07:14.920
null
null
12663
[ "classification", "performance", "dimensionality-reduction", "online-learning" ]
My take: - I agree with the issues raised in 1., so not much to add here - retraining and storage is indeed inefficient - Vowpal Wabbit http://hunch.net/~vw/ would be my first choice - stability of output between increments is really more of a data than algorithm feature - if you have plenty of variation on input, you won't have that much stability on output (at least not by default) - hashing can take care of the variation - you can control it by a combination of three parameters: the size of the hashing table and the l1/l2 regularization - same for the new features / users (I think - most of the applications I used it in had a record representing a user clicking or not, so new users / ads were sort of treated "the same") - normally I use VW from the command line, but an example approach (not too elegant) for controlling it from Python is given here: http://fastml.com/how-to-run-external-programs-from-python-and-capture-their-output/ - if you prefer something purely Python, then a version (without decomposition) of an online learner in the Criteo spirit can be found here: https://www.kaggle.com/c/tradeshift-text-classification/forums/t/10537/beat-the-benchmark-with-less-than-400mb-of-memory - I am not sure how to handle the concept drift - haven't paid that much attention to it so far beyond rolling statistics: for the relevant variables of interest, keep track of mean / count over recent N periods. It is a crude approach, but it does seem to get the job done in terms of capturing lack of "stationarity" - helpful trick 1: a single pass over the data before the first run to create a per-feature dictionary and flag certain values as rare (lump them into a single value) - helpful trick 2: ensembling predictions from more than one model (varying interaction order, learning rate)
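To illustrate the hashing-plus-online-learning idea in pure Python, here is a sketch using scikit-learn rather than VW; the profile tokens, class list and hyperparameters are placeholders:

```
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier

hasher = FeatureHasher(n_features=2 ** 18, input_type="string")
clf = SGDClassifier(loss="log_loss", penalty="l2")   # loss="log" in older scikit-learn
classes = list(range(8))                             # the 8 event classes

def partial_train(profiles, labels):
    # Each profile is a list of string tokens such as "clicked_item_12345";
    # hashing keeps the dimensionality fixed even as new users/items appear.
    X = hasher.transform(profiles)
    clf.partial_fit(X, labels, classes=classes)
```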
Efficient dimensionality reduction for large dataset
Have you heard of Uniform Manifold Approximation and Projection (UMAP)? > UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for non-linear dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP as described has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning. Check their [code](https://github.com/lmcinnes/umap) and [original paper](https://arxiv.org/abs/1802.03426) for a list of pros and cons; it is super easy to use. Quick facts: UMAP can handle large datasets and is faster than t-SNE; it also supports fitting to sparse matrix data and, contrary to t-SNE, is a general-purpose dimension reduction technique, meaning that it can be used not only for visualisation but also for reducing the feature space to feed into other machine learning models. Concrete examples: I have benchmarked the method and compared it against some other dimensionality reduction techniques in a [benchmark notebook](https://github.com/mmortazavi/UMAP_Nonlinear-Dimensionality-Reduction_Benchmark), if you are interested in a quick look and a jump start.
8038
1
8060
null
1
1205
``` import numpy as np from sklearn import linear_model X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]]) Y = np.array(['C++', 'C#', 'java','python']) clf = linear_model.SGDClassifier() clf.fit(X, Y) print (clf.predict([[1.7, 0.7]])) #python ``` I am trying to predict the values in the array Y by giving a test case after training on the training data X. Now my problem is that I want to change the training set X to TF-IDF feature vectors, so how can that be done? Vaguely, I want to do something like this: ``` import numpy as np from sklearn import linear_model X = np.array_str([['abcd', 'efgh'], ['qwert', 'yuiop'], ['xyz','abc'], ['opi', 'iop']]) Y = np.array(['C++', 'C#', 'java','python']) clf = linear_model.SGDClassifier() clf.fit(X, Y) ```
Passing TFIDF Feature Vector to a SGDClassifier from sklearn
CC BY-SA 3.0
null
2015-09-08T12:11:47.187
2020-08-03T09:46:14.697
2015-09-08T16:06:06.113
2750
12557
[ "machine-learning", "classification", "python", "scikit-learn" ]
It's useful to do this with a `Pipeline`: ``` import numpy as np from sklearn import linear_model, pipeline, feature_extraction X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]]) Y = np.array(['C++', 'C#', 'java','python']) clf = pipeline.make_pipeline( feature_extraction.text.TfidfTransformer(use_idf=True), linear_model.SGDClassifier()) clf.fit(X, Y) print(clf.predict([[1.7, 0.7]])) ```
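If your inputs are raw text strings rather than an already-computed count matrix, the more usual route is to put a `TfidfVectorizer` in the pipeline; a rough sketch (the toy documents and labels below are invented):

```python
from sklearn import linear_model, pipeline
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["printf cout templates", "linq garbage collector",
        "jvm classpath", "pip virtualenv"]          # invented example documents
labels = ["C++", "C#", "java", "python"]

text_clf = pipeline.make_pipeline(
    TfidfVectorizer(),              # raw strings -> tf-idf feature vectors
    linear_model.SGDClassifier())
text_clf.fit(docs, labels)
print(text_clf.predict(["garbage collector and linq queries"]))
```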
why does transform from tfidf vectorizer (sklearn) not work
As far as I can tell it's interpreting your new word_set `word_set = ['dog', 'cat', 'foo']` as three separate documents containing one word each, whereas if you did `word_set = ['dog cat foo']` it would interpret this as a single new document containing those words. What behavior are you expecting from this function? Is `corpus = words` a list of document strings, or a list of single words? If it's the latter, this is likely not doing what you think it is doing, and you should instead make `corpus` a list of document strings.
8084
1
8088
null
2
2338
[This answer](https://stackoverflow.com/a/8739526/2623899) to [this question](https://stackoverflow.com/q/8739227/2623899) works only for situations in which the desired solution to the coupled functions is not restricted to a certain range. But what if, for example, we wanted a solution such that `0 < x < 10` and `0 < y < 10`? There are functions within [scipy.optimize](http://docs.scipy.org/doc/scipy/reference/optimize.html) that find roots to a function within a given interval (e.g., [brentq](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html#scipy.optimize.brentq)), but these work only for functions of one variable. Why does `scipy` fall short of providing a root solver that works for multi-variable functions within specific ranges? How might such a solver be implemented?
Solve a pair of coupled nonlinear equations within certain limits
CC BY-SA 3.0
null
2015-09-13T21:50:39.237
2015-09-14T08:05:32.043
2017-05-23T12:38:53.587
-1
12777
[ "python", "optimization" ]
As a workaround, you could minimize another function that includes both the objective and the constraints, then check if sol.fun is (numerically) equal to zero. ``` from scipy.optimize import minimize import numpy as np f = lambda x: np.sin(x).sum() #the function to find roots of L = np.array([-1,-1]) #lower bound for each coordinate U = np.array([1, 1]) #upper bound for each coordinate g = lambda x: f(x) **2 + max(0, (L - x).max(), (x - U).max()) sol = minimize(g, [0.5,0.5]) ``` Also, scipy.optimize seems to have some optimisers that support rectangular bounds, i.e. [differential_evolution](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.differential_evolution) (since version 0.15.0).
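To illustrate the second point, a short sketch of `differential_evolution` used the same way - minimising the squared residual directly over the box, and again treating a (numerically) zero minimum as a root:

```python
import numpy as np
from scipy.optimize import differential_evolution

f = lambda x: np.sin(x).sum()                        # same toy system as above
result = differential_evolution(lambda x: f(x)**2,   # squared residual
                                bounds=[(-1, 1), (-1, 1)])
print(result.x, result.fun)   # a root inside the box if result.fun is ~0
```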
Solving a system of equations with sparse data
If I understand you correctly, this is the case of multiple linear regression with sparse data (sparse regression). Assuming that, I hope you will find the following resources useful. 1) NCSU lecture slides on sparse regression with overview of algorithms, notes, formulas, graphics and references to literature: [http://www.stat.ncsu.edu/people/zhou/courses/st810/notes/lect23sparse.pdf](http://www.stat.ncsu.edu/people/zhou/courses/st810/notes/lect23sparse.pdf) 2) `R` ecosystem offers many packages, useful for sparse regression analysis, including: - Matrix (http://cran.r-project.org/web/packages/Matrix) - SparseM (http://cran.r-project.org/web/packages/SparseM) - MatrixModels (http://cran.r-project.org/web/packages/MatrixModels) - glmnet (http://cran.r-project.org/web/packages/glmnet) - flare (http://cran.r-project.org/web/packages/flare) 3) A blog post with an example of sparse regression solution, based on `SparseM`: [http://aleph-nought.blogspot.com/2012/03/multiple-linear-regression-with-sparse.html](http://aleph-nought.blogspot.com/2012/03/multiple-linear-regression-with-sparse.html) 4) A blog post on using sparse matrices in R, which includes a primer on using `glmnet`: [http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r](http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r) 5) More examples and some discussion on the topic can be found on StackOverflow: [https://stackoverflow.com/questions/3169371/large-scale-regression-in-r-with-a-sparse-feature-matrix](https://stackoverflow.com/questions/3169371/large-scale-regression-in-r-with-a-sparse-feature-matrix) UPDATE (based on your comment): If you're trying to solve an LP problem with constraints, you may find this theoretical paper useful: [http://web.stanford.edu/group/SOL/papers/gmsw84.pdf](http://web.stanford.edu/group/SOL/papers/gmsw84.pdf). Also, check R package limSolve: [http://cran.r-project.org/web/packages/limSolve](http://cran.r-project.org/web/packages/limSolve). And, in general, check packages in CRAN Task View "Optimization and Mathematical Programming": [http://cran.r-project.org/web/views/Optimization.html](http://cran.r-project.org/web/views/Optimization.html). Finally, check the book "Using R for Numerical Analysis in Science and Engineering" (by Victor A. Bloomfield). It has a section on solving systems of equations, represented by sparse matrices (section 5.7, pages 99-104), which includes examples, based on some of the above-mentioned packages: [http://books.google.com/books?id=9ph_AwAAQBAJ&pg=PA99&lpg=PA99&dq=r+limsolve+sparse+matrix&source=bl&ots=PHDE8nXljQ&sig=sPi4n5Wk0M02ywkubq7R7KD_b04&hl=en&sa=X&ei=FZjiU-ioIcjmsATGkYDAAg&ved=0CDUQ6AEwAw#v=onepage&q=r%20limsolve%20sparse%20matrix&f=false](http://books.google.com/books?id=9ph_AwAAQBAJ&pg=PA99&lpg=PA99&dq=r+limsolve+sparse+matrix&source=bl&ots=PHDE8nXljQ&sig=sPi4n5Wk0M02ywkubq7R7KD_b04&hl=en&sa=X&ei=FZjiU-ioIcjmsATGkYDAAg&ved=0CDUQ6AEwAw#v=onepage&q=r%20limsolve%20sparse%20matrix&f=false).
8099
1
8100
null
6
1188
I have a big data set of fake transactions for a company. Each row contains the username, credit card number, time, device used, and amount of money in the transaction. I need to classify each transaction as either malicious or not malicious and I am lost for ideas on where to start. Doing it by hand would be silly. I was thinking possibly checking for how often a credit card is used, if it is consistently used at a certain time, or if it is used from lots of different devices (iOS AND Android, as an example) would be possible starting places. I'm still fairly new to all this and ML. Would there be some ML algorithm optimal for this problem? Also, side question: what would be a good place to host the 600 or so GB of data for cheaps? Thanks
Classifying transactions as malicious
CC BY-SA 3.0
null
2015-09-15T03:15:28.163
2022-05-02T17:04:52.833
null
null
12801
[ "classification", "bigdata" ]
This problem is popularly called "[Credit Card Fraud Detection](https://www.azleg.gov/ars/13/02105.htm)". There are several classification algorithms which aim to tackle this problem. With the knowledge of the dataset you possess, the Decision Trees algorithm can be employed for separating malicious transactions from the non-malicious ones. This [paper](http://www.iaeng.org/publication/IMECS2011/IMECS2011_pp442-447.pdf) is a nice resource to learn and develop intuition about fraud detection and the usage of basic classification algorithms like Decision Trees and SVMs for solving the problem. There are [several other papers](https://scholar.google.co.in/scholar?hl=en&q=Fraud%20Detection&btnG=) which solve this problem employing algorithms like [Neural Networks](https://ieeexplore.ieee.org/document/4280163), Logistic Regression, Genetic Algorithms, etc. However, the paper which uses the decision trees algorithm is a nice place to start learning. > what would be a good place to host the 600 or so GB of data for cheaps? AWS S3 would be a nice, cheap way to do that. It also integrates nicely with Redshift, in case you want to do complex analytics on the data.
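As a hedged illustration of the decision-tree route (the file and column names below are invented stand-ins for the fields mentioned in the question, and the engineered features - per-card usage count and number of distinct devices - are just examples, not a prescribed feature set):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("transactions.csv")   # assumed file with a 'malicious' label column

# Illustrative engineered features: how often each card is used, on how many devices
df["card_tx_count"] = df.groupby("credit_card")["credit_card"].transform("count")
df["card_device_count"] = df.groupby("credit_card")["device"].transform("nunique")

X = df[["amount", "card_tx_count", "card_device_count"]]
y = df["malicious"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```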
Receipt fraud detection
This is an anomaly detection problem. In your case, we would refer to it as supervised anomaly detection problem as you have the labels of categories. This typically involves taking a large "normal" dataset, in this case, this would be receipts which are valid. And then using a machine learning method to learn features from this dataset (e.g. the typical words used, normal amounts entered, etc.) to generate a model of the "normal" data. You could either go down the `Computer Vision` direction (e.g. for physical features in the photos) or down the `Natural Language` direction (e.g. for textual features in the receipts), or perhaps any other features available! Once you have this model of "normal" behaviour, you can then test it against fraudulent receipts by predicting whether the receipt is normal. But remember, the model will only be as good as your catalogue of "normal" and "abnormal" data
8113
1
8121
null
0
5721
I am trying to learn `scikit-learn` `neuralnetwork` and am coming up against the same problem in regression where, no matter the dataset, I am getting a horizontal straight line for my fit. Here is an example using the linear regression example from `scikit-learn` and then the `SKNN` regressor, with simple example code from the docs.

```
# -*- coding: utf-8 -*-
# Code source: Jaques Grobler
# http://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html
# License: BSD 3 clause

import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model

# Load the diabetes dataset
diabetes = datasets.load_diabetes()

# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis]
diabetes_X_temp = diabetes_X[:, :, 2]

# Split the data into training/testing sets
diabetes_X_train = diabetes_X_temp[:-20]
diabetes_X_test = diabetes_X_temp[-20:]

# Split the targets into training/testing sets
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]

# Create linear regression object
regr = linear_model.LinearRegression()

# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)

print "Results of Linear Regression...."
print "================================\n"

# The coefficients
print('Coefficients: ', regr.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
      % np.mean((regr.predict(diabetes_X_test) - diabetes_y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(diabetes_X_test, diabetes_y_test))

# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, regr.predict(diabetes_X_test), color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()

# Now using the sknn regressor
# http://scikit-neuralnetwork.readthedocs.org/en/latest/guide_beginners.html
from sknn.mlp import Regressor, Layer

nn = Regressor(
    layers=[
        Layer("Rectifier", units=200),
        Layer("Linear")],
    learning_rate=0.02,
    n_iter=10)
nn.fit(diabetes_X_train, diabetes_y_train)

print "Results of SKNN Regression...."
print "==============================\n"

# The coefficients
print('Coefficients: ', regr.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
      % np.mean((nn.predict(diabetes_X_test) - diabetes_y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % nn.score(diabetes_X_test, diabetes_y_test))

# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, nn.predict(diabetes_X_test), color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
```

Results of Linear Regression:

```
('Coefficients: ', array([ 938.23786125]))
Residual sum of squares: 2548.07
Variance score: 0.47
```

Results of SKNN Regression:

```
('Coefficients: ', array([ 938.23786125]))
Residual sum of squares: 5737.52
Variance score: -0.19
```

Changing the number of iterations to 1000 results in a score of -0.15.
SKNN regression problem
CC BY-SA 3.0
null
2015-09-17T14:13:54.293
2020-09-14T13:08:57.510
2020-09-14T13:08:57.510
104237
12294
[ "python", "neural-network", "scikit-learn", "regression" ]
My best guess here is that your learning rate is way too high for the problem. You also probably have far more neurons in your hidden network than you need, seeing as you're using just one feature. Recall that learning rate is controlling the "step size" in gradient descent and that for your dataset, it is likely far too high. I made some minor changes to your code and got better results than linear regression. Notice the use of 2 hidden neurons, a 0.001 learning rate, and 20 iterations. ``` # Now using the sknn regressor # http://scikit-neuralnetwork.readthedocs.org/en/latest/guide_beginners.html from sknn.mlp import Regressor, Layer nn = Regressor( layers=[ Layer("Rectifier", units=2), Layer("Linear")], learning_rate=0.001, n_iter=20) nn.fit(diabetes_X_train, diabetes_y_train) print("Results of SKNN Regression....") # The coefficients print('Coefficients: ', regr.coef_) # The mean square error print("Residual sum of squares: %.2f" % np.mean((nn.predict(diabetes_X_test) - diabetes_y_test) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % nn.score(diabetes_X_test, diabetes_y_test)) # Plot outputs plt.scatter(diabetes_X_test, diabetes_y_test, color='black') plt.plot(diabetes_X_test, nn.predict(diabetes_X_test), color='blue', linewidth=3) plt.xticks(()) plt.yticks(()) plt.show() ``` SKNN regression: ``` Results of SKNN Regression.... Coefficients: [ 938.23786125] Residual sum of squares: 6123.67 Variance score: 0.50 ```
Sklearn Linear Regression examples
There is an application of tf-idf on the [sklearn website](http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html). sklearn handles sparse matrices for you, so I wouldn't worry about it too much: > Fortunately, most values in X will be zeros since for a given document less than a couple thousands of distinct words will be used. For this reason we say that bags of words are typically high-dimensional sparse datasets. We can save a lot of memory by only storing the non-zero parts of the feature vectors in memory. scipy.sparse matrices are data structures that do exactly this, and scikit-learn has built-in support for these structures. Regarding your point about inserting the weight, I guess you have already performed tf-idf on your training corpus, but you don't know how to apply it to your test corpus? If so you could do as follows (taken from the above link) ``` from sklearn.feature_extraction.text import TfidfTransformer from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer() X_train_counts = count_vect.fit_transform(data) # Perform tf-idf tfidf_transformer = TfidfTransformer() X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts) docs_new = ['God is love', 'OpenGL on the GPU is fast'] # New test documents X_test_counts = count_vect.transform(docs_new) # Count vectorise the new documents X_test_tfidf = tfidf_transformer.transform(X_test_counts) # transform the test counts ```
8114
1
8115
null
0
219
I'm attempting to classify text documents using a few different dimensions. I'm trying to create arbitrary topics to classify such as size and relevance, which are linear or gradual in nature. For example: size: tiny, small, medium, large, huge. relevance: bad, ok, good, excellent, awesome I am training the classifier by hand. For example, this document represents a 'small' thing, this other document is discussing a 'large' thing. When I try multi-label or multi-class SVM for this it does not work well and it also logically doesn't make sense. Which model should I use that would help me predict this linear type of data? I use scikit-learn presently with a tfidf vector of the words.
Classifying text documents using linear/incremental topics
CC BY-SA 3.0
null
2015-09-17T18:11:38.857
2015-09-17T18:41:18.710
null
null
9373
[ "classification", "scikit-learn" ]
If you want these output dimensions to be continuous, simply convert your size and relevance metrics to real-valued targets. Then you can perform [regression](https://en.wikipedia.org/wiki/Regression_analysis) instead of classification, using any of a variety of models. You could even attempt to train a multi target neural net to predict all of these outputs at once. Additionally, you might consider first using a [topic model](https://en.wikipedia.org/wiki/Topic_model) such as [LDA](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) as your feature space. Based on the values, it sounds like the "relevance" might be a variable best captured by techniques from [sentiment analysis](https://en.wikipedia.org/wiki/Sentiment_analysis).
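A rough sketch of the regression idea on top of tf-idf features (the label-to-number mapping and the toy documents are arbitrary choices for illustration):

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# One possible encoding of the ordinal "size" labels as real-valued targets
size_scale = {"tiny": 1, "small": 2, "medium": 3, "large": 4, "huge": 5}

docs = ["a compact little gadget", "an enormous warehouse of parts"]  # invented examples
y = [size_scale["small"], size_scale["huge"]]

reg = make_pipeline(TfidfVectorizer(), Ridge())
reg.fit(docs, y)
print(reg.predict(["a fairly big machine"]))   # a continuous 'size' score
```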
Topic classification on text data with no/few labels
A feasible approach would be to take a pre-trained model, like BERT, and fine-tune it on a small labeled dataset. For that, you may use Huggingface's Transformers, which makes all the steps in the process relatively easy (see their tutorial on doing exactly that: [https://huggingface.co/docs/transformers/training](https://huggingface.co/docs/transformers/training))
8117
1
9927
null
8
1504
If word2vec encounters the same word multiple times in the same window, what occurs? Obviously it is meaningless to decrease the distance between the vectors for the input word and the target word. But will the repetition strengthen the relationship between the repeated word and the context words?
How does word2vec handle the input word being in the context?
CC BY-SA 3.0
null
2015-09-17T21:02:33.367
2016-10-20T12:59:46.543
2015-09-21T08:12:57.207
843
12473
[ "machine-learning", "nlp", "word-embeddings" ]
We can look at the source for guidance. > How does word2vec handle the input word being in the context? It is skipped; for both the [skip-gram](https://github.com/piskvorky/gensim/blob/f267abf94e84484047fb7569ebacba5626bc8391/gensim/models/word2vec.py#L129) and [CBOW](https://github.com/piskvorky/gensim/blob/f267abf94e84484047fb7569ebacba5626bc8391/gensim/models/word2vec.py#L154) models. > If word2vec encounters the same word multiple times in the same window, what occurs? [The relationship is strengthened](https://github.com/piskvorky/gensim/blob/f267abf94e84484047fb7569ebacba5626bc8391/gensim/models/word2vec.py#L131).
Word2Vec Implementation
> "probabilities of finding a neighboring word given a word" here you refer to the Skip-Gram architecture, where given the center word you predict the surrounding words. This extract from these [notes](https://cs224d.stanford.edu/lecture_notes/notes1.pdf) might clarify your question. Note that by assuming the conditional independence the total probability factors into a product. > "As in CBOW, we need to generate an objective function for us to evaluate the model. A key difference here is that we invoke a Naive Bayes assumption to break out the probabilities. If you have not seen this before, then simply put, it is a strong (naive) conditional independence assumption. In other words, given the center word, all output words are completely independent." Maybe this [article](https://arxiv.org/pdf/1402.3722.pdf) can also help, though it is about negative sampling it is a very clear exposition.
8120
1
9338
null
4
4518
I am using the xgboost library. My system runs a cronjob each night, where it pulls the data from the database and trains the model. However, I would like to avoid re-training the model from scratch again and again, and instead just fine-tune it with any new data that arrived in the database. In sklearn's implementation ([http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html)) one could use the warm_start option; what about xgboost?
Can I fine tune the xgboost model instead of re-training it?
CC BY-SA 3.0
null
2015-09-17T23:06:50.597
2021-05-13T06:17:43.140
2020-03-14T08:49:25.577
303
12858
[ "machine-learning", "scikit-learn", "xgboost" ]
I see that in the current version of the Python wrapper for xgboost you can specify a file name or an existing xgboost model (class `Booster`) in the `train` function.
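A minimal sketch of what that looks like with the native Python API (the parameter values and data are placeholders); in recent versions of the package, `xgb.train` accepts an `xgb_model` argument that takes an existing `Booster` (or a saved model file), so new boosting rounds are added on top of the old model instead of retraining from scratch:

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X_old, y_old = rng.rand(500, 10), rng.randint(0, 2, 500)   # stand-in for yesterday's data
X_new, y_new = rng.rand(100, 10), rng.randint(0, 2, 100)   # stand-in for today's new rows

params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1}

# Initial nightly training
booster = xgb.train(params, xgb.DMatrix(X_old, label=y_old), num_boost_round=200)

# Later: continue boosting from the existing model instead of retraining from scratch
booster = xgb.train(params, xgb.DMatrix(X_new, label=y_new), num_boost_round=50,
                    xgb_model=booster)   # a path to a saved model file also works
```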
Hyperparameter tuning XGBoost
Another way is to use `mean_squared_log_error` from the same metrics module. First clip the negative values in the predictions to 1 and then compute the mean squared log error:

```
pred = np.clip(pred, 1, None)
err = mean_squared_log_error(yval, pred)
```
8124
1
8127
null
-2
204
I have string representations of text written by users in the form of parts of speech tags like so: ``` $NNDN,OVDANPN,PNVRV,^^V,^^!$^OV ``` and ``` ^,G,#,!,N,R,$ ``` etc. They are separated into two classes (0 or 1). I want to be able to cluster these such that I will be able to predict (or try to) what class the user in from their tags using the damerau levenshtein distance. The problem is that even a few hundred strings is a huge calculation for any basic clustering that I am aware of (but I am very new to this). I've tried using the counts of each tag to form a vector but applying SVM, knn classifier and Naive Bayes yielded poor results, even when using a KS test to get the best features. My gut feeling is that this seems like a problem that could be solved in the same way that scientists would compare and cluster genes. - Should I be looking at different machine learning methods? - Is there another way of representing the strings that would be more appropriate? - Is there another way of looking at the problem? I'm using the scikit-learn library for Python.
Clustering large number of strings based on tags
CC BY-SA 3.0
null
2015-09-18T09:50:09.740
2015-09-18T10:09:53.847
null
null
9025
[ "machine-learning", "classification", "python", "clustering" ]
Using the Levenshtein distance does not make a lot of sense in this context, as it is made for comparing distances between words. A commonly used representation for texts is the bag-of-words representation, where a text is converted to a vector where every element in the vector represents the count of the corresponding word. In your case you could represent a text as a bag-of-tags. The vector representation makes calculating distances a lot easier. However, I believe this is not necessary as you can classify the bags of words with Naive Bayes. Once you have tried bag-of-words you can try more complicated representations like LDA, word2vec, and the like.
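A small sketch of the bag-of-tags idea with scikit-learn (splitting on commas is an assumption based on the tag strings shown in the question; the class labels are invented):

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

tag_strings = ["$NNDN,OVDANPN,PNVRV,^^V,^^!$^OV", "^,G,#,!,N,R,$"]   # from the question
labels = [0, 1]                                                      # invented class labels

clf = make_pipeline(
    CountVectorizer(analyzer=lambda s: s.split(",")),  # one feature per comma-separated tag
    MultinomialNB())
clf.fit(tag_strings, labels)
print(clf.predict(["^,G,N,R"]))
```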
Clustering strings inside strings?
Interesting question! I have not encountered it before so here is a solution I just made up, inspired by the approach taken by the word2vec paper: - Define the pair-wise similarity based on the longest common substring (LCS), or the LCS normalized by the products of the string lengths. Cache this in a matrix for any pair of strings considered since it is expensive to calculate. Also consider approximations. - Find a Euclidean (hyperspherical, perhaps?) embedding that minimizes the error (Euclidean distance if using the ball, and the dot product if using the sphere). Assume random initialization, and use a gradient-based optimization method by taking the Jacobian of the error. - Now you have a Hilbert space embedding, so cluster using your algorithm of choice! Response to deleted comment asking how to cluster multiple substrings: The bulk of the complexity lies in the first stage; the calculation of the LCS, so it depends on efficiently you do that. I've had luck with genetic algorithms. Anyway, what you'd do in this case is define a similarity vector rather than a scalar, whose elements are the k-longest pair-wise LCS; see [this](https://cstheory.stackexchange.com/questions/8361/algorithm-find-the-first-k-longest-substrings-between-two-similar-strings) discussion for algorithms. Then I would define the error by the sum of the errors corresponding to each substring. Something I did not address is how to choose the dimensionality of the embedding. The word2vec paper might provide some heuristics; see [this](https://groups.google.com/forum/#!topic/word2vec-toolkit/HRvNPIqe6mM) discussion. I recall they used pretty big spaces, on the order of a 1000 dimensions, but they were optimizing something more complicated, so I suggest you start at R^2 and work your way up. Of course, you will want to use a higher dimensionality for the multiple LCS case.
8131
1
8152
null
1
267
I have collected data for a PhD thesis, and need help understanding how to build a road map to do analytical and statistical analysis. The PhD is not itself in statistics or machine learning, but I would like to understand what are the steps and type of analysis that I have to follow for analysing data for an advanced degree? In general, how should I approach such a problem? In the data I have collected, there are 623 observations including one continuous dependent variable and 13 independent variables (continuous, categorical, and ordinal) that are defined based on the researcher experience and literature review. I considered planning to do several regression analysis to predict the dependent variable and study the effective factors (if they are positive, negative, and their magnitude) on it. I've tried multiple linear regression including different transformation on independent variables. On the other hand, I'm not sure if I should study each independent variables through the time and forecast their values in the time horizon? Here are the steps in my mind so far: - Plotting the scatter plots of different independent variables vs dependent variable to define outliers and check if the model is linear also with respect to coefficients - Removing the potential outliers - Splitting the data into two data sets to build the model and validate it after that. If the model is linear then: - Performing the multiple linear regression - Performing the multiple linear regression including different transformations to enhance the model - Validating the model - Doing the quantile regression - Doing supervised learning machine etc. If the model is not linear, I may instead need to use non-linear statistical techniques. Any feedback would be highly appreciated. My goal is to build a clear and robust road map for this part of the work.
How to start analysing and modelling data for an academic project, when not a statistician or data scientist
CC BY-SA 3.0
null
2015-09-19T04:02:11.133
2015-09-22T07:42:59.590
2020-06-16T11:08:43.077
-1
12867
[ "dataset", "predictive-modeling", "data-cleaning", "linear-regression" ]
Typically, quantitative analysis is planned and performed, based on research study's goals. Focusing on research goals and corresponding research questions, researcher would propose a model (or several models) and a set of hypotheses, associated with the model(s). Model(s) and its/their elements' types usually dictate (suggest) quantitative approaches that would make sense in a particular situation. For example, if your model includes latent variables, you would have to use appropriate methods to perform data analysis (i.e., structural equation modeling). Otherwise, you can apply a variety of other methods, such as time series analysis or, as you mentioned, multiple regression and machine learning. For more details on research workflow with latent variables, also see section #3 in [my relevant answer](https://datascience.stackexchange.com/a/1006/2452). One last note: whatever methods you use, pay enough attention to the following two very important aspects - performing full-scale exploratory data analysis (EDA) (see [my relevant answer](https://datascience.stackexchange.com/a/5095/2452)) and trying to design and perform your analysis in the reproducible research fashion (see [my relevant answer](https://datascience.stackexchange.com/a/759/2452)).
Differences between a Statistician and a Data Analyst in industry
A major difference is the job market: you'll find a lot of job ads for data analysts/scientists, very few for theoretical statisticians. In most sectors (there are some exceptions, in banking for example), companies are interested in applying existing models to their data because this is what can increase their profits. Devising new theoretical models is more on the research side of innovation. Most "pure research" job opportunities are in academia, although some big companies have research departments as well.
8135
1
8136
null
1
534
I am trying to organize a cheat sheet of sorts for data science, and I am working with the basic distinction between description, inference, and prediction. As examples of the first I see unsupervised methods described, and for the last I see supervised methods. So my question is simply, do these two sets of categories align? Is unsupervised to supervised as description is to prediction?
Is supervised machine learning by definition predictive?
CC BY-SA 3.0
null
2015-09-19T15:40:54.523
2017-12-18T01:12:38.893
null
null
12881
[ "machine-learning" ]
A description is any statistic drawn from your sample data, say the sample mean, quantiles, etc. Inference is a conclusion drawn from your sample data about the population, e.g., rejecting or accepting some hypothesis or stating that a model is suitable or not for describing your data. Prediction is simply a guess about future observations, which hopefully uses your data and some function/model of the data to formulate that guess. Both unsupervised and supervised learning methods aim to learn a function of the data that predicts another variable (typically called y) so both are drawing an inference (i.e., a model is well suited to describe your data, see the first sentence [here](https://en.wikipedia.org/wiki/Supervised_learning)). However, these two methods differ in what data is available. In supervised learning you are able to use an observed sample of y for training your model and in unsupervised learning, y is unobserved. Hope that helps!
Using machine learning specifically for feature analysis, not predictions
You don't need the linear regression to understand the effect of features in your random forest; you're better off looking at the partial dependence plots directly. This is what you get when you hold all the other variables fixed and vary one at a time. You can plot these using `sklearn.ensemble.partial_dependence.plot_partial_dependence`. Take a look at the [documentation](http://scikit-learn.org/stable/modules/ensemble.html#partial-dependence) for an example of how to use it. Another type of model that can be useful for exploratory data analysis is a `DecisionTreeClassifier`; you can produce a graphical representation of it using `export_graphviz`.
8138
1
8148
null
7
318
I have a data set with text fragments having a fixed structure that can contain parameters. Examples are: ``` Temperature today is 20 centigrades Temperature today is 28 centigrades ``` or ``` Her eyes are blue and hair black. Her eyes are green and hair brown. ``` The first example shows a template with one numerical parameter. The second one is a template with two factor parameters. The number of templates and the number of parameters are not known. The problem is to identify the templates and assign each text fragment to the corresponding template. The obvious first idea is to use clustering. The distance measure is defined as the number of non-matching words, i.e. the records in example one have distance 1, in example two the distance is 2. The distance between a record in example one and a record in example two is 7. This approach works fine, provided the number of clusters is known, which is not the case, so it is not useful. I can imagine a programmatic approach scanning the distance matrix searching for records with a lot of neighbors at distance 1 (or 2, 3, ...), but I'm curious if I can apply some unsupervised machine learning algorithm to solve the problem. R is preferred, but not required.
Identifying templates with parameters in text fragments
CC BY-SA 3.0
null
2015-09-20T12:40:10.170
2015-09-27T09:51:41.237
2015-09-21T06:57:59.920
843
10620
[ "machine-learning", "r", "nlp" ]
The basic rationale behind the following suggestion is to associate "eigenvectors" and "templates". In particular one could use LSA on the whole corpus based on a bag-of-words representation. The resulting eigenvectors would serve as surrogate templates; these eigenvectors should not be directly affected by the number of words in each template. Subsequently the scores could be used to cluster the documents together following a standard procedure (e.g. $k$-means in conjunction with AIC). As an alternative to LSA one could use NNMF. Let me point out that the LSA (or NNMF) would probably need to be done on the TF-IDF-transformed matrix rather than the raw word-count matrix.
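A rough sketch of that pipeline with scikit-learn (the numbers of components and clusters are placeholders that one would choose, e.g. via AIC or a scree plot):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

texts = ["Temperature today is 20 centigrades",
         "Temperature today is 28 centigrades",
         "Her eyes are blue and hair black.",
         "Her eyes are green and hair brown."]        # fragments from the question

tfidf = TfidfVectorizer().fit_transform(texts)
scores = TruncatedSVD(n_components=2).fit_transform(tfidf)   # LSA scores ~ template space
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(labels)   # fragments sharing a template should share a cluster id
```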
Detecting boilerplate in text samples
This might get you started. Phrase length is determined by the range() function. Basically this tokenizes and creates n-grams. Then it counts each token. Tokens with a high mean over all documents (occurs often across documents) is printed out in the last line. ``` from sklearn.feature_extraction.text import CountVectorizer import numpy as np import nltk text = """DESCRIPTION PROVIDED BY AUTHOR: The goal of my a... Author provided: The goal of my b... The goal of my c... END OF TRANSCRIPT The goal of my d... END SPONSORED BY COMPANY XYZ The goal of my e... SPONSORED: COMPANY XYZ All rights reserved date: 10/21 """ def todocuments(lines): for line in lines: words = line.lower().split(' ') doc = "" for n in range(3, 6): ts = nltk.ngrams(words, n) for t in ts: doc = doc + " " + str.join('_', t) yield doc cv = CountVectorizer(min_df=.5) fit = cv.fit_transform(todocuments(text.splitlines())) vocab_idx = {b: a for a, b in cv.vocabulary_.items()} means = fit.mean(axis=0) arr = np.squeeze(np.asarray(means)) [vocab_idx[idx] for idx in np.where(arr > .95)[0]] # ['goal_of_my', 'the_goal_of', 'the_goal_of_my'] ```
8142
1
8144
null
1
1044
I am working on a text classification task. The purpose of this work is to classify whether a particular document belongs to class A or class B. I used the KNN algorithm and I am able to get some decent results. However, I want to know two things. - Why has a particular document been classified as class A or class B? What keywords or information made a document be classified as such? - How to perform misclassification analysis? Kindly help.
Error Analysis for misclassification of text documents
CC BY-SA 3.0
null
2015-09-21T11:15:45.813
2015-09-21T15:21:13.133
null
null
9793
[ "data-mining", "classification", "text-mining" ]
It seems to me that both of your questions could be answered by storing the retrieved neighbours on your test set and giving them a thorough analysis. Assuming you are using a unigram + tf-idf text representation and a cosine similarity distance metric for your K-NN retrieval, it would be trivial once you have a classified document to display the K neighbours and analyze their common unigrams and their respective tf-idf weights in order to see what influenced the classification. Moreover, doing it on your misclassified documents could help you understand which features caused the error. I'd be interested to know if there is a more systematized approach to those issues.
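A hedged sketch of that inspection step, assuming a tf-idf representation, cosine distance and a recent scikit-learn (the documents and variable names are invented):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

train_docs = ["invoice overdue payment", "meeting agenda minutes", "payment reminder invoice"]
test_doc = ["second payment reminder"]          # e.g. a misclassified test document

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_docs)
X_test = vec.transform(test_doc)

nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X_train)
dist, idx = nn.kneighbors(X_test)

terms = np.array(vec.get_feature_names_out())
for i in idx[0]:
    shared = X_train[i].multiply(X_test[0])     # non-zero only for terms shared with the test doc
    top = shared.toarray().ravel().argsort()[::-1][:3]
    print(train_docs[i], "->", terms[top])      # the unigrams that drove the retrieval
```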
Improving misclassification for one class in a multi-class classification task
The main problem is that there are too many `Escas` in your dataset. If you look at the confusion matrix, the `Esca` column gets predicted (wrongly and correctly) much more that the others. This is clearly a symptom of a skewed data set. Try augmenting your images to generate a larger 'super-dataset'. Then sample a 'sub-dataset' from that 'super-dataset', such that it has an equal distribution across all classes. Train on the 'sub-dataset'. If you can't augment your data to have a better distribution across the 4 classes, here are some ideas: - Modify the loss function to more aggressively penalize Black Rots classified as Escas. - Split into two networks; the first differentiates between Leaf Blight-Healthy-Black Rot/Esca, and the second differentiates between Black Rot-Esca.
8150
1
8151
null
4
241
I am asking this question because the [previous](https://datascience.stackexchange.com/questions/8018/are-there-any-machine-learning-techniques-to-identify-points-on-plots-images) one wasn't very helpful and I asked about a different solution for the same problem. # The Problem I have lateral positions, `xcoord`, of vehicles over time which were recorded as the distances from the right edge of the road. This can be seen for one vehicle in the following plot: [](https://i.stack.imgur.com/s8BNW.png) Each point on the plot represents the position of the front center of the vehicle. When the vehicle changes the lane (lane numbers not shown) there is a drastic change in the position as seen after the 'Start of Lane Change' on the plot. The data behind this plot are like below: ``` Vehicle.ID Frame.ID xcoord Lane 1 2 13 16.46700 2 2 2 14 16.44669 2 3 2 15 16.42600 2 4 2 16 16.40540 2 5 2 17 16.38486 2 6 2 18 16.36433 2 ``` I want to identify the start and end data points of a lane change by clustering the data as shown in the plot. The data points in the plot circled in red are more similar to each other because the variation between them is smaller compared to the data points in the middle which see large variation in position (`xcoord`). My questions are: Is it possible to apply any clustering technique to segment these data so that I could identify the start and end point of a lane change? If yes, which technique would be most suitable? I use R. I have tried Hierarchical clustering before but don't know how to apply it in this context. Please help.
How to create clusters of position data?
CC BY-SA 3.0
null
2015-09-22T01:35:33.447
2015-09-22T06:55:02.183
2017-04-13T12:50:41.230
-1
4933
[ "r", "clustering" ]
I doubt any of the clustering algorithms will work well. Instead, you should look into: - segmentation (yes, this is something different), specifically time series segmentation - change detection (as you said, there is a rather constant distribution first, then a change, then a rather constant distribution again) - segment-wise regression may also work: try to find the best fit that is constant, linearly changing, and constant again. It's essentially four parameters to optimize in this restricted model: the average before and after, plus the beginning and end of the transition.
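To illustrate the segment-wise regression option: the four-parameter model (level before, level after, start and end of the transition) can be fitted directly. A rough sketch with SciPy - the synthetic trace below only mimics the shape of a lane change, and the same idea ports straightforwardly to R's `nls` or `optim`:

```python
import numpy as np
from scipy.optimize import curve_fit

def lane_change(t, a, b, t1, t2):
    """Constant at a, linear ramp between t1 and t2, constant at b afterwards."""
    return a + (b - a) * np.clip((t - t1) / (t2 - t1), 0.0, 1.0)

# Synthetic xcoord trace shaped like the plot in the question
rng = np.random.default_rng(0)
frames = np.arange(0, 200)
xcoord = lane_change(frames, 16.4, 20.1, 80, 120) + rng.normal(0, 0.05, frames.size)

p0 = [xcoord[:20].mean(), xcoord[-20:].mean(), 50, 150]   # crude initial guesses
(a, b, t1, t2), _ = curve_fit(lane_change, frames, xcoord, p0=p0)
print("lane change starts at frame %.1f and ends at frame %.1f" % (t1, t2))
```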
What is the best way to cluster this kind of data?
In hierarchical clustering, both agglomerative and divisive, you do not have to pre-specify the number of clusters. You can create all possible clusters and then select the number cluster to use at the end.
8164
1
8174
null
0
1838
I am getting `attr(, "nn.index")` as part of my KNN output in R. What is meant by that and how is this value getting calculated? ``` knn.pred <- knn(tdm.stack.nl_train, tdm.stack.nl_Test, tdm.cand_train) print(knn.pred) > knn.pred [1] Silent Silent Silent Silent Silent Silent Silent [8] Silent Silent Silent attr(,"nn.index") [,1] [1,] 292 [2,] 292 [3,] 343 [4,] 444 [5,] 250 [6,] 445 [7,] 270 [8,] 228 [9,] 302 [10,] 355 ```
what does nn.index mean in KNN output
CC BY-SA 3.0
null
2015-09-23T08:16:45.387
2015-09-24T08:52:01.403
2015-09-24T08:52:01.403
11097
9793
[ "r", "classification" ]
I guess you are using the `FNN` package. `attr` is a list of attributes which can be used for both `nn.index` and `nn.dist`. In this case, you are using `index`. So, index returns an n x k matrix for the nearest neighbor indices. And the definition of the nearest neighbor index is: > The nearest neighbor index is expressed as the ratio of the observed distance divided by the expected distance. [Definition reference](http://edndoc.esri.com/arcobjects/9.2/net/shared/geoprocessing/spatial_statistics_tools/average_nearest_neighbor_spatial_statistics_.htm)
How Does Weighted KNN Work?
We can view nearest neighbor as a voting process where we consult our $k$ nearest neighbors. We give the $i$-th data point a voting weight $w_i$. In your example, each data point in class $A$ has weight $\frac1{0.95}$ and each data point in class $B$ has weight $\frac1{0.05}$. There are $4$ votes from class $A$ and $3$ votes from class $B$. We give class $A$ a score of $\frac{4}{0.95}\approx 4.21$ and class $B$ a score of $\frac{3}{0.05}=60$. Class $B$ has a higher score, hence we assign it to class $B$.
8168
1
8171
null
0
250
I am using the Matlab Neural Network toolbox for a classification problem. Considering a single set of data, if the built-in neural network is trained and tested with the same data multiple times, a different accuracy and a different confusion matrix are obtained each time. Which result should I take? Should I take all the values obtained across the training instances and average them to fix on one particular result?
Multiple confusion matrix for multiple training instances. Which one to take?
CC BY-SA 3.0
null
2015-09-23T09:45:30.677
2015-09-23T14:02:10.540
null
null
8013
[ "neural-network", "matlab", "confusion-matrix" ]
I can't check at the moment (no Matlab at hand), but I suppose the differences come from the different random seeds used to initialize the neural networks (at least this is the only part which I can think of that has a random component). I would suggest predicting class probabilities, averaging those and then viewing the resulting confusion matrix of the "averaged" prediction. This way you are - to a degree - mitigating the effect of randomness resulting from different initializations of the weights.
Confusion regarding confusion matrix
> Question 1: Is my understanding and construction of the confusion matrix correct? Yes, you are correct in your definitions and the way you construct the confusion matrix. The links you have provided also agree with each other. They just switch rows and columns, since there is no hard rule regarding the presentation, as long as the correct relations are maintained. Link 1 shows this matrix: ``` | Pos Class | Neg Class Pos Pred | TP | FP Neg Pred | FN | TN ``` Link 2 shows the same matrix, but transposed: ``` | Pos Pred | Neg Pred Pos Class | TP | FN Neg Class | FP | TN ``` > Question 2: What is the intuitive difference between Precision and recall? Precision is the rate at which you are correct when you predict a positive class. It takes into account all of your positive predictions and figures out which proportion of those is actually correct. When your precision is high, this means that once you make a positive prediction, you are likely to be correct about it. This says nothing about how correct your negative predictions are -- you might make 1 positive and 99 negative predictions on 100 actual positives and still get 100% precision, since your only positive prediction just happened to be correct. Recall is the rate at which you are able to predict the positive class correctly. It takes into account all of the actual positive classes and figures out which proportion of those you have predicted correctly. When your recall is high, this means that very few actual positives slip by your model without being detected as such. This says nothing about how good you are at being actually correct with your positive predictions -- a model that always predicts a positive class easily achieves 100% recall. One usually strives to optimize both precision and recall by finding the most acceptable balance between the two. You might want to read this [article about the Precision-Recall curve](https://towardsdatascience.com/on-roc-and-precision-recall-curves-c23e9b63820c) to get a fuller understanding of the relationship between these metrics. > What happens if precision < recall? As you have highlighted in your post, the two formulas differ only in the denominator. It follows that when precision is smaller than recall, then the number of false positives in your predictions is larger than the number of false negatives.
8172
1
12213
null
4
2965
I built a predictive model using logistic regression for direct marketing creatives. I built a “children’s” model which predicts buying behavior for those that purchase boys, girls and baby apparel. But one may have a high propensity right now to buy baby clothes, while in 12 months, when one's kid is a toddler, their propensity drops for baby and increases for Girls/Boys. I’ve been trying to brainstorm ways to take that scenario into account. My question is, can anyone suggest options for adjustments to the model based on that additional criterion?
Propensity Modeling for Retail Marketing: Model Adjustments Based on Consumer Life Changes.
CC BY-SA 3.0
null
2015-09-23T15:02:20.537
2017-05-11T22:03:52.360
null
null
12947
[ "predictive-modeling", "logistic-regression" ]
If you have the purchase data for all these customers, this could be one way to approach the problem: You could either cluster the customers into natural groups based on their most recent purchases or find association rules (the likelihood of the customer purchasing in a particular category based on their most recent purchases). I'm not sure how well this approach is going to work for you, but it worked for me in my scenario of propensity scoring. So, if it makes sense, try to relate your data to mine. I had a list of customers and their purchase behaviors. From those purchase behaviors I deduced association rules to determine which customer is likely to purchase in what product category (based on their previous purchases- associated with the previous purchases of the group). You can also include recency and frequency of the product purchases into the model to decide whether or not to recommend a particular product/offer to the customer for a specific time. Based on the confidence, support and lift metrics of those rules for each customer, I mapped them to highly likely to purchase in this category with confidence being their propensity score. Let me know if this works with your data.
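For the association-rules part, a hedged sketch using the `mlxtend` implementation of Apriori (the one-hot purchase table and the thresholds are invented for illustration):

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One row per customer, one column per category purchased (invented data)
purchases = pd.DataFrame({
    "baby":  [1, 1, 0, 0, 1],
    "girls": [0, 1, 1, 0, 0],
    "boys":  [0, 0, 1, 1, 0],
}, dtype=bool)

frequent = apriori(purchases, min_support=0.2, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.5)
# The 'confidence' of a rule like {baby} -> {girls} can serve as a propensity-style score
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```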
Propensity Modeling, still use Test/Train Split?
You are right; you shouldn't predict on the same data you have used for training your model. If your goal is to only output probabilities and you don't mind achieving this by having not a single classifier but a number of them, you can potentially use nested cross-validation to achieve that. In the outer cross-validation steps, you break your data in $N_{out}$ folds, where one fold is kept for testing the model and the remaining $N_{out} - 1$ folds are used to train. You then feed this training dataset (which is similar, in nature, to that 70% you have mentioned in OP) to the inner cross-validation step with $N_{in}$ folds. Here, you will compare different models and choose the best performing one. This chosen model is then passed to the outer fold (see above) and can be used to obtain probabilities for the 1 fold that was left out. You then repeat the same procedure for other folds in the outer fold. At the end of this nested cross-validation process, you will have probabilities for all your rows (but they will have come from $N_{out}$ different classifiers, each corresponding to one of the outer folds). Note that the ultimate purpose of nested cross-validation is not to do this but it will give you what you want as a by-product.
8194
1
8197
null
2
141
I am a first year PhD student in statistics. During this year I have analyzed my scientific advisor's areas of interest and found them unpromising. His specialty is mixture models with varying concentrations, for which I have not found any references to authoritative sources. Now I want to change my PhD topic; however, there are no other scientific advisors at my university who specialize in statistics. Therefore, I have 2 questions: - Is it possible to write at least 5 articles together with a PhD thesis without a scientific adviser? If yes, what is a proper way to do this? Here I mean how to choose a topic, where to ask for help, and so on. - Is it possible to find a remote adviser to consult with? If yes, how and where can I find one? Also, I do not have much time for the search. I am interested in statistics, especially in machine learning. I would like my PhD thesis to be of practical value, not a pure research one, which is what is popular in my department. Also, I have commercial experience in programming (C/C++, R, Python) if that can help. Thanks in advance for any help!
PhD program in statistics
CC BY-SA 3.0
null
2015-09-24T23:46:59.960
2015-09-25T06:11:52.020
null
null
10694
[ "machine-learning", "statistics", "education" ]
Certain ingredients are needed to give you the best chance of a successful PhD. One of the important ones is that you and your supervisor have mutual interests. A second important ingredient, in my opinion, is that you immerse yourself in that environment. It's important to develop a network of colleagues. It helps to spread ideas, start collaborations, get help when needed, and to explore unthought-of opportunities. From what you have said, I think you will be missing out on these two important ingredients if you continue in the same place or if you work remotely. What is also important is what you do after the PhD. A PhD is required for an academic position. But I think you will be in a weak position to get to the next step (fellowships, faculty positions, etc.) if you do what you proposed. In certain industrial positions it can be looked on favourably, not necessarily for the topic you pursued, but because it says something about you personally. Basically that you can get things done, rise to a challenge, work independently, work as a team, communicate difficult topics and bring creativity to solving problems. My advice would be to find a machine learning research group and apply for PhDs. If this is not possible, why not consider following the topic of your supervisor and keeping machine learning as a hobby? You will become an expert in statistics and so you will find many concepts will translate between the various statistical disciplines. But only do this if you get along with her/him, and you can see yourself studying this topic. Finally, you could try a compromise: are there applications for "mixing statistics" in machine learning? Can you find one? Is there an unexplored opportunity to do something new? --- As a side note I find it ridiculous that PhD supervisors ask the student for topics. This always leads to problems because the student doesn't really have a clue about the research field. There is room for flexibility but often this hides supervisor laziness.
Statistics + Computer Science = Data Science?
I think that you're on the right track toward becoming an expert data scientist. Recently I have answered related question [here on Data Science StackExchange](https://datascience.stackexchange.com/a/742/2452) (pay attention to the definition I mention there, as it essentially answers your question by itself, as well as to aspects of practicing software engineering and applying knowledge to solving real-world problems). I hope that you will find all that useful. Good luck in your career!
8203
1
8204
null
-1
146
I just want to know which books, courses, videos, links, etc. you recommend to start in machine learning, neural networks, and the most commonly used languages. I want to start from zero, right at the beginning, because I have no experience with this kind of algorithms, but it's something that has caught my attention. Thank you!
Beginning in machine learning
CC BY-SA 3.0
null
2015-09-25T15:28:09.677
2015-10-16T17:54:23.007
2015-10-16T17:54:23.007
843
12996
[ "machine-learning", "beginner" ]
Coursera is currently offering a course on machine learning with collaboration from MIT. Many say it is strongly recommended. [https://www.coursera.org/learn/machine-learning](https://www.coursera.org/learn/machine-learning) But I found the course below from edX more interesting. [https://www.coursera.org/learn/machine-learning](https://www.coursera.org/learn/machine-learning) It also provides hands-on experience with Microsoft's machine learning platform on Azure.
Machine Learning Steps
I found both of your options slightly faulty. So, this is generally (very broadly) what a predictive modelling workflow looks like: - Data Cleaning: Takes the most time, but every second spent here is worth it. The cleaner your data gets through this step, the less your total time spent will be. - Splitting the data set: The data set would be split into training and testing sets, which would be used for modelling and prediction purposes respectively. In addition, a further split into a cross-validation set would also need to be done. - Transformation and Reduction: Involves processes like transformations, mean and median scaling, etc. - Feature Selection: This can be done in a lot of ways like threshold selection, subset selection, etc. - Designing predictive model: Design the predictive model on the training data depending on the features you have at hand. - Cross Validation: - Final Prediction, Validation
8225
1
8228
null
4
1811
I have a system that manages equipments. When these equipments are faulty, they will be serviced. Imagine my dataset looks like this: ``` ID Type # of times serviced ``` Example Data: ``` |ID| Type | #serviced | |1 | iphone | 1 | |2 | iphone | 0 | |3 | android | 1 | |4 | android | 0 | |5 | blackberry | 0 | ``` What I want to do is I want to predict "of all the equipments that have not been serviced, which ones are likely to be serviced" ? (ie) identify "at risk" equipments. The problem is my training data will be #serviced > 0. Any #serviced=0 will not be frozen and dont seem to be valid candidates to include in training. (ie) When it fails, it will be serviced hence the count will go up. - Is this a supervised or unsupervised problem ? (supervised because I have serviced and not-serviced labels, unsupervised because I want to cluster not-serviced with serviced and there by identify at-risk equipments) - What data should I include in training ? Note: The example is obviously simplified. In reality I have more features that describe the equipment.
Equipment failure prediction
CC BY-SA 3.0
null
2015-09-28T21:17:21.150
2015-09-30T02:06:37.910
2015-09-28T21:53:40.140
97
13056
[ "machine-learning", "classification", "predictive-modeling", "supervised-learning" ]
You should include data when the phone was serviced to create a survival model. These models are commonly used in reliability engineering as well as treatment efficacy. For reliability engineering it is very common to fit your data to a Weibull distribution. Even aircraft manufacturers consider the model to be reliable after calibrating with three to five data points. I can highly recommend the R package 'flexsurv' package. You cannot use typical linear or logistic regressions since some phones in your population will leave your observation period without ever being serviced. Survival models allow for this sort of missing information (this is called censoring). Typically you would have the following data ``` |ID| Type | serviced | # months_since_purchase |1 | iphone | 1 | 12 |2 | iphone | 0 | 15 |3 | android | 1 | 2 |4 | android | 0 | 10 |5 | blackberry | 0 | 5.5 ``` With that data you could create the following model in R ``` require(survival) model <- survfit(Surv(months_since_purchase, serviced) ~ strata(Type) + cluster(ID), data = phone_repairs) ``` The `survfit.formula` `Surv(months_since_purchase, serviced) ~ strata(Type) + cluster(ID)` indicates that `months_since_purchase` is the time at which an observation was made, `serviced` is 1 if the phone was serviced and 0 otherwise, `strata(Type)` will make sure that you create a different survival model for each phone, `cluster(ID)` will make sure that events relating to the same ID are considered as a cluster. You could extend this model with Joint Models such as `JM`.
Predicting car failures with machine learning
I'm also just a beginner in ML (who is however not familiar with survival analysis w/ R), but has tackled a couple of ML projects. Based on my knowledge, you could use supervised learning. Store data, preferably in CSV format, (one column about the duration between buying the car and the car's mechanical breakdown), and the rest about the car's data/characteristics. Next, you can run a neural network through your data, and use your NN's library's predict() method to predict the duration before breakdown based on your data. You could then theoretically (assuming that there is a logical correlation between the data) see which characteristics are most prone to make a car break down. As for implementing your program, I use Python with the Keras library, which is simple enough for any programmer to use, but there exist many other great ML libraries, notably TensorFlow. Do note that I am also just a beginner, and that my approach might be erroneous, yet I do wish you good luck on your future ML projects!
8231
1
8232
null
1
63
Suppose I am using a neural network for a 2-class classification problem. After training the network with the training set, I want to predict the class labels of a dataset with no class labels. Now, with retraining, the same dataset gives different results. For example, in one training session a sample was predicted to belong to class 1, while in the other session it was predicted to belong to class 2. Which value should be taken as the correct one?
Which value of output should be taken in multiple sessions of training Neural Network
CC BY-SA 3.0
null
2015-09-29T08:58:04.893
2015-09-29T17:01:59.077
null
null
8013
[ "classification", "neural-network", "supervised-learning" ]
This is normal behaviour of most classifiers. You are not guaranteed 100% accuracy in machine learning, and a direct consequence is that classifiers make mistakes. Different classifiers, even if trained on the same data, can make different mistakes. Neural networks with different starting weights will often converge to slightly different results each time. Also, perhaps in your problem the classification is an artificial construct over some spectrum (e.g. "car" vs "van" or "safe" vs "dangerous") in which case the mistake in one case is entirely reasonable and expected? You should use the value from the classifier that you trust the most. To establish which one that is, use cross-validation on a hold-out set (where you know the true labels), and use the classifier with the best accuracy, or other metric, such as logloss or area under ROC. Which metric you should prefer depends on the nature of your problem, and the consequences of making a mistake. Alternatively, you could look at averaging the class probabilities to determine the best prediction - perhaps one classifier is really confident in the class assignment, and the other is not, so an average will go with the first classifier. Some kind of model aggregation will often boost accuracy, and is common in e.g. Kaggle competitions when you want the highest possible score and don't mind the extra effort and cost. However, if you want to use aggregation to solve your problem, again you should test your assumptions using validation and a suitable metric so you know whether or not it is really an improvement.
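As a hedged illustration of the two suggestions above (comparing classifiers on held-out data, and averaging class probabilities), here is a small scikit-learn sketch; the synthetic `X`, `y` stand in for your own labelled training data, and the two networks only differ in their random seed, mirroring two training sessions.

```
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)   # placeholder data

nets = {name: MLPClassifier(random_state=seed, max_iter=1000)
        for name, seed in [("net_a", 1), ("net_b", 2)]}

# pick the classifier you trust most via cross-validation
for name, clf in nets.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())

# or average the class probabilities with soft voting
ensemble = VotingClassifier(list(nets.items()), voting="soft").fit(X, y)
print(ensemble.predict(X[:5]))
```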
One Neural network with multiple outputs or multiple neural networks with a single output?
Given the information you provided, the most honest answer is: you have to test it yourself, there is no general answer for it. Still, it has been shown empirically in research that a neural network may benefit from having multiple outputs.

So let's say we have a neural network that has multiple outputs. Further, let us group them into specific tasks. For example:

- The output neurons of group 1 tell if the image contains a dog or a cat.
- The output neurons of group 2 tell the size of the animal (width and height).
- The output neurons of group 3 tell the color of the animal's hair (in some encoding), and so on...

A common example would be Faster-RCNN vs Mask RCNN. Assume that $g$ denotes the number of different groups of output neurons. Now if you take a feed-forward neural network, you will have common layers that eventually branch to the different output groups. Let us call $\pi$ the function that maps an input image to this particular last common layer $L$ and let $\phi_{j}$ be the function that takes the information from layer $L$ to output the result of group $j$. Thus, given an input image $\mathbf{I}$, the neural network maps it to $\begin{pmatrix} \phi_{1}(\pi(\mathbf{I})) \\ \vdots \\ \phi_{g}(\pi(\mathbf{I})) \end{pmatrix}$. The output of the last common layer $\pi(\mathbf{I})=:\mathbf{f}$ can be understood as an image descriptor $\mathbf{f}$ of the input image $\mathbf{I}$. In particular, all predicted outputs rely on the information contained in $\mathbf{f}$.

$\textbf{Therefore}$: Merging multiple outputs into a single neural network can be understood as a regularization technique. The image descriptor $\mathbf{f}$ must contain not only the information about whether the image shows a dog or a cat, but also all the other information. It must therefore be a more comprehensive (or "more realistic") description of the input, which makes it more difficult for the network to overfit. The network cannot solve a specific task using a non-plausible explanation, as the corresponding image descriptor would lead to bad results on the other tasks. As a consequence, adding additional (auxiliary) tasks to the neural network can improve the accuracy on the initial task, even if you are not interested in predicting these additional tasks.

So essentially, if there is a common description of your data that can be used to solve your required tasks, the system may benefit from using one model with multiple outputs. You may have a look into the literature, e.g. [collaborative learning](https://papers.nips.cc/paper/7454-collaborative-learning-for-deep-neural-networks.pdf), [multi-task learning](https://papers.nips.cc/paper/7334-multi-task-learning-as-multi-objective-optimization.pdf), and [auxiliary tasks](https://papers.nips.cc/paper/7406-revisiting-multi-task-learning-with-rock-a-deep-residual-auxiliary-block-for-visual-detection.pdf). I hope this answers your question.
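A minimal sketch of such a branched network with the Keras functional API is shown below; the input shape, layer sizes and the three output groups (class, size, colour) are invented for illustration and only mirror the structure described above.

```
from tensorflow.keras import Input, Model, layers

inputs = Input(shape=(64, 64, 3))                      # example image size
x = layers.Conv2D(16, 3, activation="relu")(inputs)    # shared ("common") layers
x = layers.Flatten()(x)
f = layers.Dense(64, activation="relu", name="descriptor")(x)   # pi(I) = f

animal = layers.Dense(2, activation="softmax", name="animal")(f)   # dog vs cat
size = layers.Dense(2, name="size")(f)                              # width, height
colour = layers.Dense(8, activation="softmax", name="colour")(f)    # colour encoding

model = Model(inputs, [animal, size, colour])
model.compile(optimizer="adam",
              loss={"animal": "sparse_categorical_crossentropy",
                    "size": "mse",
                    "colour": "sparse_categorical_crossentropy"})
```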
8240
1
8659
null
1
165
I have many different strings of text. These strings of text are labels for particular things, but these labels are sloppy; sometimes one label is used for many different things. For example:

"Brown foxes edition 1999 series 1-6 EDI"
"Light [old] seasons 1,2,3,4 other gibberish"

I would like to answer the question: "If the label contains a series, does that series contain the value N?" For the examples above, 6 would be included in the first one, but not the second. Initially I thought of using regexes but that quickly grew out of hand. Digits appear everywhere, people can get very creative with separators, and the location of the series in the label is not fixed. There are many different ways the labels denote series. What I can do, however, is generate labels with series: I'll just grab a bunch of separators, a start and an end digit, and iterate. This gives me a nice labeled training set. Naive Bayes comes to mind for this problem but I'm not sure what good features would be.

Update: Let me try to clarify. Given a label and given a number, determine if that number is contained within that label. For example: given that I'm looking for season 2 (the number), does "Pioneer One 2011 seasons 1-3" contain season 2?
Detecting if a sentence contains a numeric series
CC BY-SA 3.0
null
2015-09-30T07:13:18.937
2015-10-30T12:35:16.940
2015-09-30T11:02:28.990
12321
12321
[ "nlp" ]
So there are many ways to denote a series. How are you going to parse the series down to determine the values if you don't know the format? Determining that the label has a series does not get you to the specific numbers in the series.

- "2,3,5,7" parses out to 4 numbers.
- Is 6 in 1996? I assume that is one number, and 1996 != 6.
- "55,56,57" is a series containing the digit 6, but not the number 6.
- Does "7-9" parse out to 2 numbers or 3 numbers? Is 6 in 7-9? Identifying "7-9" as a series does not, by itself, answer that question.

How many ways can there be to represent a series, if regex already got out of hand? For each format of series you also need to parse the values; you need to know the format of the series to parse out the numbers. I would have a set of regexes mapped to a set of parsers. Maybe use machine learning to identify new series formats, but you are still going to need to parse out the series.
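A minimal sketch of the "regexes mapped to parsers" idea in Python is below; the two patterns (dash ranges and comma-separated lists) are only examples, and real labels would need more formats.

```
import re

def numbers_in_label(label):
    """Expand series found in a label into the set of numbers they cover."""
    numbers = set()
    # dash ranges such as "1-6" -> 1, 2, ..., 6
    for lo, hi in re.findall(r"\b(\d+)\s*-\s*(\d+)\b", label):
        numbers.update(range(int(lo), int(hi) + 1))
    # comma-separated lists such as "1,2,3,4"
    for group in re.findall(r"\b\d+(?:\s*,\s*\d+)+\b", label):
        numbers.update(int(n) for n in re.findall(r"\d+", group))
    return numbers

print(2 in numbers_in_label("Pioneer One 2011 seasons 1-3"))   # True
print(6 in numbers_in_label("Light [old] seasons 1,2,3,4"))    # False
```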
Python - Check if text is sentences?
I would try a semi-supervised learning technique where it passes you scraps and asks you to label them. What you're looking for will likely be kind of domain specific depending on the type of site. In the end you'll probably have a bunch of heuristics like: - If length < 50 and contains "LOGOUT", "REGISTER","SIGN IN","LOGIN" - If count of "|" > 1 - If count of all upper case words > 1
8245
1
8351
null
1
125
I have devices on which I have time series data of one continuous variable. I have to evaluate the relation between the profile of that variable on those devices and "events". Those events are given in terms of occurrences over a time period. My first intention is to make clusters of similar behavior of that variable and compare those clusters with the low/middle/high event rates. I was thinking about doing a K-means with the min, max, quartiles, mean, normal q-q p-value, kurtosis, etc. as dimensions, but I don't think it's a good idea because:

- Those dimensions are not independent
- It's "losing" data and so potentially losing classification potential

Do you have some suggestions to group similar devices together? Also, do you have other ideas to establish that relationship?

Context:

- python3 with the scipy stack
- ~3000 devices and hundreds of thousands of data points per day; 5 months to consider
Devices behavior in one continuous variable vs events rate
CC BY-SA 3.0
null
2015-09-30T12:30:30.290
2017-12-18T16:15:42.357
2017-12-18T16:15:42.357
29575
3024
[ "machine-learning", "clustering", "correlation" ]
Done with K-means clustering with descriptive statistics as features. In short, I've tried the idea described in the question, even though I was thinking it wouldn't work. Let the experience talk...

I initially had a list of device data. Each element of the list was a 2-column, R-row matrix, where R was different for each device. So, per device:

```
[
 [measureValue, timestamp],
 ...,
 [measureValue, timestamp],
]
```

Since I'm only interested in the measureValue distribution, I've transformed the initial data into an 8-column, N-row matrix, where N = number of devices. The columns, computed on the corresponding device's measure values, are:

- Arithmetic mean
- Median
- First quartile
- Third quartile
- Minimum
- Maximum
- Range
- Standard deviation

With this matrix, I've applied K-means clustering using scikit-learn (Python). I made the link between the matrix row and the physical device by using pandas DataFrames (Python) whose row index is in fact the serial number of the device. I've tried with 5 clusters, and it works. Just in case I need improvements in the future, I'm planning to add other statistics to the columns, especially for deviation vs. normality, so for example kurtosis and the normal q-q plot p-value. Best regards.
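A rough sketch of this pipeline with pandas and scikit-learn is below; the raw per-device series are simulated here, and the choice of 5 clusters simply mirrors the description above.

```
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# simulated raw data: one measurement series per device serial number
rng = np.random.RandomState(0)
series = {serial: rng.normal(loc=serial % 5, size=1000) for serial in range(3000)}

# one row of descriptive statistics per device
features = pd.DataFrame({
    serial: {
        "mean": s.mean(), "median": np.median(s),
        "q1": np.percentile(s, 25), "q3": np.percentile(s, 75),
        "min": s.min(), "max": s.max(),
        "range": s.max() - s.min(), "std": s.std(),
    }
    for serial, s in series.items()
}).T   # index = device serial number

features["cluster"] = KMeans(n_clusters=5, random_state=0).fit_predict(features)
```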
Modeling events at irregular intervals
There are multiple ways to tackle this. I'll suggest two here.

[1] The one that probably requires the least amount of preprocessing work is to throw the event data into a recurrent neural net that can handle variable sequence lengths. Map the event categories to a small embedding space and run them through the RNN. You can then concatenate the remaining static features and put a feed-forward classifier on top. This would take care of 1) and 2). In order to account for 3) there are several tricks:

- You can append the time to event to the embedding vectors.
- You can multiply the embedding vectors with a "time mask" (i.e. feed the sequence of event lag times into an RNN with sigmoid activation and multiply element-wise. The idea is that the sigmoid activation will put a weight on the event based on the time passed; e.g. an event many months ago will be unimportant and therefore receive an activation close to 0). See section 3.1 in this paper.
- That paper also suggests another solution where you can learn a joint embedding of time and event (section 3.2).

Note that both methods report only minor (but consistent) gains over standard RNNs.

[2] Given that your number of event categories is low, you could also just one-hot encode the event or use a tree-based method like gradient boosting and feed the events, the time between events and your remaining features as separate inputs into the model. Unless your dataset is huge, this will perform equally well if not better, because you don't have to learn embeddings and have a lot fewer model parameters. The only difficulty is that you will now have to fix the sequence length. But you can pad shorter sequences with N/A or 0 values and I would not expect any performance decrease because of it!
8246
1
8260
null
5
1392
What is conjugate gradient descent for neural networks? How is it different from the gradient descent technique? I came across a [resource](http://matlab.izmiran.ru/help/toolbox/nnet/backpr59.html), but was unable to understand the difference between the two methods. It is mentioned in the procedure that:

> the next search direction is determined so that it is conjugate to previous search directions.

What does this mean? Also, what is the line search mentioned on the web page? Can anyone please explain it with the help of a diagram?
What is conjugate gradient descent?
CC BY-SA 4.0
null
2015-09-30T13:04:37.247
2019-06-08T03:09:04.587
2019-06-08T03:09:04.587
29169
8013
[ "neural-network", "gradient-descent", "supervised-learning" ]
> What does this sentence mean? It means that the next vector should be perpendicular to all the previous ones with respect to a matrix. It's like how the [natural basis](https://en.wikipedia.org/wiki/Standard_basis) vectors are [perpendicular](https://en.wikipedia.org/wiki/Orthogonality) to each other, with the added twist of a matrix: $\mathrm {x^T A y} = 0$ instead of $\mathrm{x^T y} = 0$ > And what is line search mentioned in the webpage? [Line search](https://en.wikipedia.org/wiki/Line_search) is an optimization method that involves guessing how far along a given direction (i.e., along a line) one should move to best reach the local minimum.
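To make the conjugacy condition concrete, here is a small illustrative sketch (not tied to the linked page) that runs two steps of conjugate gradient on a toy quadratic and checks that the successive search directions satisfy $d_1^T A\, d_2 \approx 0$.

```
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # symmetric positive-definite matrix
b = np.array([1.0, 2.0])

x = np.zeros(2)
r = b - A @ x                        # residual = negative gradient
d = r.copy()                         # first search direction
directions = []

for _ in range(2):
    alpha = (r @ r) / (d @ A @ d)    # exact line search along d for a quadratic
    x = x + alpha * d
    r_new = r - alpha * (A @ d)
    beta = (r_new @ r_new) / (r @ r) # Fletcher-Reeves update
    directions.append(d.copy())
    d = r_new + beta * d
    r = r_new

print(directions[0] @ A @ directions[1])   # ~0: directions are A-conjugate
```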
Conjugated gradient method. What is an A-matrix in case of neural networks
The traditional conjugate gradient descent is an incremental improvement on gradient descent that just takes a direction that is fully orthogonal to the previous descent direction. There is no $A$ matrix in that case. There are different rules (you can check some in my old optimization toolbox at [https://github.com/mbrucher/scikit-optimization/blob/master/scikits/optimization/step/conjugate_gradient_step.py](https://github.com/mbrucher/scikit-optimization/blob/master/scikits/optimization/step/conjugate_gradient_step.py)). If I remember properly, FR combined with the strong Wolfe-Powell line search rule gives one of the best answers. The issue is that it requires more computation, which is why line search is never used in neural network optimization.
8253
1
8254
null
10
8290
Suppose we have the following dataframe with multiple values for a certain column:

```
     categories
0 -  ["A", "B"]
1 -  ["B", "C", "D"]
2 -  ["B", "D"]
```

How can we get a table like this?

```
     "A"  "B"  "C"  "D"
0 -   1    1    0    0
1 -   0    1    1    1
2 -   0    1    0    1
```

Note: I don't necessarily need a new dataframe, I'm wondering how to transform such DataFrames to a format more suitable for machine learning.
How to binary encode multi-valued categorical variable from Pandas dataframe?
CC BY-SA 3.0
null
2015-09-30T17:41:39.737
2020-08-01T15:44:28.253
null
null
13027
[ "python", "pandas" ]
If `[0, 1, 2]` are numerical labels and not the index, then `pandas.DataFrame.pivot_table` works:

```
In []:
import collections
import pandas as pd

data = pd.DataFrame.from_records(
    [[0, 'A'], [0, 'B'], [1, 'B'], [1, 'C'], [1, 'D'], [2, 'B'], [2, 'D']],
    columns=['number_label', 'category'])
data.pivot_table(index=['number_label'], columns=['category'],
                 aggfunc=[len], fill_value=0)

Out[]:
             len
category       A  B  C  D
number_label
0              1  1  0  0
1              0  1  1  1
2              0  1  0  1
```

[This blog](http://pbpython.com/pandas-pivot-table-explained.html) post was helpful.

---

If `[0, 1, 2]` is the index, then `collections.Counter` is useful:

```
In []:
data2 = pd.DataFrame.from_dict(
    {'categories': {0: ['A', 'B'], 1: ['B', 'C', 'D'], 2: ['B', 'D']}})
data3 = data2['categories'].apply(collections.Counter)
pd.DataFrame.from_records(data3).fillna(value=0)

Out[]:
   A  B  C  D
0  1  1  0  0
1  0  1  1  1
2  0  1  0  1
```
Mass convert categorical columns in Pandas (not one-hot encoding)
If your categorical columns are currently character/object you can use something like this to do each one:

```
char_cols = df.dtypes.pipe(lambda x: x[x == 'object']).index

for c in char_cols:
    df[c] = pd.factorize(df[c])[0]
```

If you need to be able to get back to the categories I'd create a dictionary to save the encoding; something like:

```
char_cols = df.dtypes.pipe(lambda x: x[x == 'object']).index
label_mapping = {}

for c in char_cols:
    df[c], label_mapping[c] = pd.factorize(df[c])
```

Using Julien's mcve will output:

```
In [3]: print(df)
Out[3]:
   a  b  c         d
0  0  0  0  0.155463
1  1  1  1  0.496427
2  0  0  2  0.168625
3  2  0  1  0.209681
4  0  2  1  0.661857

In [4]: print(label_mapping)
Out[4]:
{'a': Index(['Var2', 'Var3', 'Var1'], dtype='object'),
 'b': Index(['Var2', 'Var1', 'Var3'], dtype='object'),
 'c': Index(['Var3', 'Var2', 'Var1'], dtype='object')}
```
8255
1
8259
null
2
190
I was posting on stats.stackexchange but perhaps I should be posting here.

Context: a subscription business that charges users a monthly fee for access to the service. Management would like to predict "churn" - subscribers who are likely to cancel. Management would like to create an email sequence in an attempt to prevent high-risk accounts from churning, perhaps with a discount code of some sort. So I need to identify those accounts at risk of leaving us.

I have a dataset with, say, 50k records. Each line item is an account number along with some variables. One of the variables is "Churned" with a value of "Yes" (they cancelled) or "No" (they are active). The dataset I have is all data since the beginning of time for the business. About 20k records are active paying customers and about 30k are those who used to be paying customers but who have since cancelled. My task is to build a model to predict which of the 20k active customers are currently likely to churn.

Here is where I have tied my brain in a knot. I need to run the model (predict) on the 20k records of active customers. How do I split my data between training, test and predict? Does the predict data have to be exclusive of the train and test data? Can I split the entire dataset of 50k into 0.8 train and 0.2 test, build a model and then predict on the 20k active accounts? That would imply I'm training and testing on data that I'm also going to predict on. Seems "wrong". Is it?
Dividing data between test, learn and predict
CC BY-SA 3.0
null
2015-09-30T19:21:44.187
2016-01-27T10:17:14.433
2016-01-27T10:17:14.433
13727
13106
[ "machine-learning", "predictive-modeling", "churn" ]
**Supervised learning:** Do you have a saved time history of the data? For a supervised learning set you need some churned="No" cases and some churned="Yes" cases, but it sounds like you only have churned="Yes" and the unknown cases, e.g. current customers who may or may not churn. With some time history you can go back in time and definitively label the current customers as churn="No". Then it is very easy to split up the data. And no, you probably don't want to predict on any data that you trained on, since you can only train on it if you already know the solution; it would be a waste of time and would throw off any metrics you might use to assess accuracy (precision/recall/F1) in the future.

**Unsupervised learning:** If you don't have a saved time history of the data, then this is an unsupervised learning set for which you have churned="yes" and churned="maybe". You could then employ anomaly or outlier detection on this set.

- Novelty detection: the training data is not polluted by outliers, and we are interested in detecting anomalies in new observations.
- Outlier detection: the training data contains outliers, and we need to fit the central mode of the training data, ignoring the deviant observations.

You can do either one, but novelty detection is more powerful. This is kind of a flip around, as the novelty here is Churned="No", since all of your data is the confirmed Churn="Yes" cases. Hope this helps!
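As a hedged sketch of the novelty-detection route (assuming you have a numeric feature matrix per account; the placeholder matrices below are invented), one option in scikit-learn is a one-class model trained on the confirmed churners and applied to the active accounts: active accounts that the model still labels as inliers look like churners and form the "at risk" group.

```
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)
X_churned = rng.normal(size=(3000, 5))   # features of cancelled accounts (placeholder)
X_active = rng.normal(size=(2000, 5))    # features of current accounts (placeholder)

detector = OneClassSVM(nu=0.1, gamma="scale").fit(X_churned)
labels = detector.predict(X_active)       # +1 = looks like the churned population
at_risk = np.where(labels == 1)[0]        # indices of accounts to target
```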
how to split the original data in training, validation and testing?
I think you should start with some tutorials to understand the cycle of a data project; there are normally several stages, like preparing and cleaning the data, etc. There are many free resources and courses, for example on Coursera, that you can find by searching for "data science" or "machine learning". Regarding your specific question, I think a good place to start might be here: [https://www.kaggle.com/learn/intro-to-machine-learning](https://www.kaggle.com/learn/intro-to-machine-learning). An example of splitting and validating can be found in section 4.
8266
1
8273
null
10
6189
There is a package named segmented in R. Is there a similar package in python?
Is there a library that would perform segmented linear regression in python?
CC BY-SA 3.0
null
2015-10-01T18:40:47.937
2021-12-06T11:22:08.087
null
null
12613
[ "python", "regression", "linear-regression" ]
No, currently there isn't a package in Python that does segmented linear regression as thoroughly as those in R (e.g. [R packages listed in this blog post](http://www.r-bloggers.com/r-for-ecologists-putting-together-a-piecewise-regression/)). Alternatively, you can use a Bayesian Markov Chain Monte Carlo algorithm in Python to create your segmented model. Segmented linear regression, as implemented by all the R packages in the above link, doesn't permit extra parameter constraints (i.e. priors), and because these packages take a frequentist approach, the resulting model doesn't give you probability distributions for the model parameters (i.e. breakpoints, slopes, etc). Defining a segmented model in [statsmodels](http://statsmodels.sourceforge.net/), which is frequentist, is even more restrictive because the model requires a fixed x-coordinate breakpoint. You can design a segmented model in Python using the Bayesian Markov Chain Monte Carlo algorithm [emcee](http://dan.iel.fm/emcee/current/). Jake Vanderplas wrote a useful [blog post](http://jakevdp.github.io/blog/2014/06/14/frequentism-and-bayesianism-4-bayesian-in-python/) and [paper](http://arxiv.org/pdf/1411.5018.pdf) for how to implement emcee with comparisons to PyMC and PyStan.

Example:

- Segmented model with data: [](https://i.stack.imgur.com/QoGKH.png)
- Probability distributions of fit parameters: [](https://i.stack.imgur.com/iJ5br.png)
- Link to code for segmented model.
- Link to (large) ipython notebook.
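If you only need a quick frequentist fit rather than the Bayesian approach described above, a hedged sketch using plain `scipy.optimize.curve_fit` with a two-segment model is below (synthetic data, single breakpoint).

```
import numpy as np
from scipy.optimize import curve_fit

def piecewise_linear(x, x0, y0, k1, k2):
    # two line segments that meet at the breakpoint (x0, y0)
    return np.where(x < x0, y0 + k1 * (x - x0), y0 + k2 * (x - x0))

rng = np.random.RandomState(0)
x = np.linspace(0, 10, 100)
y = piecewise_linear(x, 4.0, 9.0, 2.0, 0.5) + rng.normal(0, 0.3, x.size)

params, _ = curve_fit(piecewise_linear, x, y, p0=[5.0, 8.0, 1.0, 1.0])
print(params)   # estimated breakpoint, level at the breakpoint and the two slopes
```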
Linear Regression in Python
Yes, you will have to convert everything to numeric. That requires thinking about what these attributes represent. There are three options:

- One-hot encoding for categorical data
- Arbitrary numbers for ordinal data
- Something like group means for categorical data (e.g. mean prices for city districts)

You have to be careful not to infuse information you do not have in the application case. I'm expanding on options 1 and 3; if you want to know about option 2 you can go through the links attached at the end.

# One hot encoding

If you have categorical data, you can create dummy variables with 0/1 values for each possible value. Similarly, you could implement this for attributes like children or smoker. E.g.

```
id  Sex
0   Male
1   Female
```

to

```
id  Male  Female
0   1     0
1   0     1
```

This can easily be done with pandas:

```
import pandas as pd

data = pd.DataFrame({'Sex': ['Male', 'Female']})
print(pd.get_dummies(data))
```

will result in:

```
   Sex_Female  Sex_Male
0           0         1
1           1         0
```

# Using categorical data for groupby operations

This is an additional use case; in your case it is not necessary, but if you feel like it, you can try implementing this as well. You could use the mean for each category over past (known) events. Say you have a DataFrame with the last known mean prices for cities:

```
prices = pd.DataFrame({
    'city': ['A', 'A', 'A', 'B', 'B', 'C'],
    'price': [1, 1, 1, 2, 2, 3],
})
mean_price = prices.groupby('city').mean()
data = pd.DataFrame({'city': ['A', 'B', 'C', 'A', 'B', 'A']})
print(data.merge(mean_price, on='city', how='left'))
```

Result:

```
  city  price
0    A      1
1    B      2
2    C      3
3    A      1
4    B      2
5    A      1
```

For better understanding you can go through this [Link-1](https://stackoverflow.com/questions/34007308/linear-regression-analysis-with-string-categorical-features-variables/34008270#34008270), [Link-2](https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.dtypes.html)
8277
1
8292
null
3
3915
I am trying to figure out how the amount of money that a customer wants to withdraw at an ATM can tell us whether the transaction is fraudulent or not. There are other attributes, of course, but for now I would want to hear your views on the amount of money that the customer wants to withdraw.

Data may be of this form: let us assume that a customer, for ten consecutive transactions, withdrew the following amounts: `100.33, 384, 458, 77.90, 456, 213.55, 500, 500, 300, 304.`

Questions:

- How can we use this data to tell if the next transaction done on this account is fraudulent or not?
- Are there specific algorithms that can be used for this classification?

What I was thinking: I was thinking of calculating the average amount of money, say for the last ten transactions, and checking how far the next transaction amount is from the average. Too much deviation would signal an anomaly. But this does not sound like much, does it?
Credit card fraud detection - anomaly detection based on amount of money to be withdrawn?
CC BY-SA 3.0
null
2015-10-02T08:13:03.780
2015-10-03T20:17:46.670
null
null
13132
[ "machine-learning", "algorithms" ]
> I was thinking to calculate the average amount of money, say for the last ten transactions, and check how far is the next transaction amount from the average. Too much deviation would signal an anomaly. But this does not sound much, does it? A typical outlier detection approach. This would work in most cases. But, as the problem statement deals with credit card fraud detection, the detection technique/algorithm/implementation should be more robust. You might want to have a look at the [Mahalanobis Distance](https://en.wikipedia.org/wiki/Mahalanobis_distance) metric for this type of outlier detection. Coming to the algorithms for fraud detection, I would point out to the standards used in the industry (as I have no experience in this, but felt these resources would be useful to you). Check [my answer](https://datascience.stackexchange.com/questions/8099/classifying-transactions-as-malicious/8100#8100) for this question. It contains the popular approaches and algorithms used in the domain of fraud detection. The [Genetic Algorithm](https://en.wikipedia.org/wiki/Genetic_algorithm) is the most popular amongst them.
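A small sketch of that outlier-score idea on the amounts from the question is below; with a single feature the Mahalanobis distance reduces to a z-score, and the multivariate form (using the inverse covariance matrix) is shown for when you add more attributes. The second feature and the threshold of 3 are arbitrary examples.

```
import numpy as np

amounts = np.array([100.33, 384, 458, 77.90, 456, 213.55, 500, 500, 300, 304])
new_amount = 2000.0

# one feature: Mahalanobis distance reduces to |x - mean| / std
z = abs(new_amount - amounts.mean()) / amounts.std(ddof=1)
print(z, z > 3)   # flag as suspicious above an arbitrary threshold

# several features: use the inverse covariance matrix
X = np.column_stack([amounts, np.arange(10)])     # e.g. amount + hour of day
VI = np.linalg.inv(np.cov(X, rowvar=False))
x, mu = np.array([2000.0, 3.0]), X.mean(axis=0)
print(np.sqrt((x - mu) @ VI @ (x - mu)))          # multivariate Mahalanobis distance
```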
Receipt fraud detection
This is an anomaly detection problem. In your case, we would refer to it as supervised anomaly detection problem as you have the labels of categories. This typically involves taking a large "normal" dataset, in this case, this would be receipts which are valid. And then using a machine learning method to learn features from this dataset (e.g. the typical words used, normal amounts entered, etc.) to generate a model of the "normal" data. You could either go down the `Computer Vision` direction (e.g. for physical features in the photos) or down the `Natural Language` direction (e.g. for textual features in the receipts), or perhaps any other features available! Once you have this model of "normal" behaviour, you can then test it against fraudulent receipts by predicting whether the receipt is normal. But remember, the model will only be as good as your catalogue of "normal" and "abnormal" data
8298
1
8314
null
2
990
As an example, let's say I have a very simple data set. I am given a csv with three columns, user_id, book_id, rating. The rating can be any number 0-5, where 0 means the user has NOT rated the book. Let's say I randomly pick three users, and I get these feature/rating vectors.

Martin: $<3,3,5,1,2,3,2,2,5>$
Jacob: $<3,3,5,0,0,0,0,0,0>$
Grant: $<1,1,1,2,2,2,2,2,2>$

The similarity calculations:

```
+--------------+---------+---------+----------+
|              |  M & J  |  M & G  |  J & G   |
+--------------+---------+---------+----------+
|  Euclidean   |  6.85   |  5.91   |  6.92    |
+--------------+---------+---------+----------+
|  Cosine      |  .69    |  .83    |  .32     |
+--------------+---------+---------+----------+
```

Now, my expectation of similarity is that Martin and Jacob would be the most similar. I would expect this because they have EXACTLY the same ratings for the books that both of them have rated. But we end up finding that Martin and Grant are the most similar. I understand mathematically how we get to this conclusion, but I don't understand how I can rely on Cosine Angular distance or Euclidean distance as a means of calculating similarity, if this type of thing occurs. For what interpretation are Martin and Grant more similar than Martin and Jacob?

One thought I had was to just calculate Euclidean distance, but ignore all books for which one user hasn't rated the book. I then end up with this:

```
+--------------+---------+---------+----------+
|              |  M & J  |  M & G  |  J & G   |
+--------------+---------+---------+----------+
|  Euclidean   |  0      |  5.91   |  6.92    |
+--------------+---------+---------+----------+
|  Cosine      |  .69    |  .83    |  .32     |
+--------------+---------+---------+----------+
```

Of course now I have a Euclidean distance of 0, which fits what I would expect of the recommender system. I see many tutorials and lectures use Cosine Angular distance to ignore the unrated books, rather than use Euclidean and ignore them, so I believe this must not work in general.

EDIT: Just to experiment a little, I adjusted Jacob's feature vector to be much more similar:

Jacob: $<3,3,5,1,2,3,2,0,0>$

When I calculate Cosine Angular distance with Martin, I still only get .82! Still less similar than Martin and Grant, yet by inspection I would expect these two to be very similar. Could somebody help explain where my thinking is wrong, and possibly suggest another similarity measure?
Correctly interpreting Cosine Angular Distance Similarity & Euclidean Distance Similarity
CC BY-SA 3.0
null
2015-10-04T08:02:57.263
2015-10-05T13:27:27.030
2015-10-04T08:45:58.567
13168
13168
[ "machine-learning", "recommender-system", "beginner" ]
If you look at the definitions of the two measures, cosine similarity is the normalized dot product of the two vectors and Euclidean distance is the square root of the sum of the squared elements of the difference vector. The cosine similarity between M and J is smaller than between M and G because the normalization factor of M's vector still includes the numbers for which J didn't have any ratings. Even if you make J's vector more similar, like you did, the remaining numbers of M (2 and 5) get you the number you get. The number for M and G is this high because they both have non-zeroes for all the books. Even though they seem quite different, the normalization factors in the cosine are more "neutralized" by the non-zeroes for corresponding entries in the dot product. Maths don't lie. The books J didn't rate will be ignored if you make their numbers zero in the computation of the normalization factor for M. Maybe the fault in your thinking is that the books J didn't rate are treated as 0 while they shouldn't be any number at all. Finally, for recommendation systems, I would like to refer to matrix factorization.
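A small numeric check of the point above (and of the "ignore unrated books" idea from the question), using the vectors from the question:

```
import numpy as np

M = np.array([3, 3, 5, 1, 2, 3, 2, 2, 5], dtype=float)
J = np.array([3, 3, 5, 0, 0, 0, 0, 0, 0], dtype=float)
G = np.array([1, 1, 1, 2, 2, 2, 2, 2, 2], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(M, J), cosine(M, G))   # ~0.69 and ~0.83, as in the question's table

rated_by_both = J > 0                # drop the books J never rated
print(cosine(M[rated_by_both], J[rated_by_both]))   # 1.0: identical where comparable
```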
Euclidean vs. cosine similarity
On L2 normalized data it is an easy and good exercise to prove that they are equivalent. So you should try to solve the math yourself. Hint: use squared Euclidean. Note that it is common with tfidf to not have normalized data because of various technical reasons, e.g., when using inverted indexes in text search. Furthermore, cosine is faster on very sparse data.
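For reference, the identity behind that exercise: if $\Vert x \Vert = \Vert y \Vert = 1$, then
$$\Vert x - y \Vert^2 = \Vert x \Vert^2 + \Vert y \Vert^2 - 2\,x^T y = 2 - 2\cos(x, y),$$
so squared Euclidean distance is a strictly decreasing function of cosine similarity, and both produce the same rankings on L2-normalized data.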
8313
1
8325
null
2
714
I'm working with the [Enron email data set](https://www.cs.cmu.edu/~./enron/), and am trying to infer some psychological intentions related to email marketing. I know that we can derive [sentiment analysis signals like emotions and mood](https://en.wikipedia.org/wiki/Sentiment_analysis) and opinion mining like positive or negative from text data. What else can I do regarding human psychological intentions?
How can I infer psychological intentions from email corpus using text mining?
CC BY-SA 3.0
null
2015-10-05T10:35:09.597
2015-10-06T12:45:01.473
2015-10-05T17:05:01.810
609
9035
[ "data-mining", "nlp", "text-mining" ]
If you have a broad meaning of "intentions" in mind, you might be interested in research showing that a person's personality (in the sense of the "big five" psychological personality theory) can be inferred with remarkable accuracy from facebook likes. Original research showing this can be found [here](http://www.pnas.org/content/110/15/5802.full.pdf) and [here](http://www.pnas.org/content/112/4/1036.full.pdf). Obviously, facebook likes are not the same as the textual information you work with, but it may be possible to infer certain likes and dislikes from the text.
prepare email text for nlp (sentiment analysis)
I have done something similar in the past. I'll sketch an outline for you. First you break the text into paragraphs and tokenize. Then write some regexp rules to capture the data you want to remove. For instance, if an email signature commonly contains a phone number, paragraph, and a website, you can count those features and flag it based on some threshold you decide. Next, do likewise with the other features you mentioned. My experience is that it's highly domain dependent, so you really need to look at the data and use your best judgement. The result of this process should be a dataset consisting of tokenized paragraphs where each paragraph has been labeled 'noise' or 'clean' based on the feature count. From there, convert your token representation using tf-idf or another type of embedding. You should be able to use this as input to your favorite classifier, and I have had success using SVMs to that end. The result is going to be biased towards your rules, but you are also leveraging features that are in the labeled examples but not explicitly in the rules, particularly so for longer paragraphs. It might seem a bit janky but believe it or not it works.
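A hedged sketch of that pipeline with scikit-learn is below; the example paragraphs and labels are made up, and in practice you would use the much larger set of 'noise'/'clean' paragraphs produced by your rules.

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

paragraphs = [
    "Thanks for your help, see you at the meeting tomorrow.",
    "John Doe | Senior Engineer | 555-0100 | www.example.com",
    "Please find the updated report attached.",
    "Jane Roe, Sales Lead, 555-0199, jane@example.com",
]
labels = ["clean", "noise", "clean", "noise"]   # from the rule-based labelling step

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(paragraphs, labels)
print(clf.predict(["Best regards, Bob Smith | 555-0123"]))
```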
8322
1
8324
null
8
3285
What are all the options available for filling in missing data? One obvious choice is the mean, but if the percentage of missing data is large, it will decrease the accuracy. So how do we deal with missing values if there are a lot of them?
Filling missing data with other than mean values
CC BY-SA 4.0
null
2015-10-06T10:51:52.883
2019-06-08T03:14:47.970
2019-06-08T03:14:47.970
29169
13155
[ "data-mining", "missing-data" ]
There are of course other choices to fill in for missing data. The median was already mentioned, and it may work better in certain cases. There may even be much better alternatives, which may be very specific to your problem. To find out whether this is the case, you must find out more about the nature of your missing data. When you understand in detail why data is missing, the probability of coming up with a good solution will be much higher. You might want to start your investigation of missing data by finding out whether you have informative or non-informative missings. The first category is produced by random data loss; in this case, the observations with missing values are no different from the ones with complete data. As for informative missing data, this one tells you something about your observation. A simple example is a customer record with a missing contract cancellation date meaning that this customer's contract has not been cancelled so far. You usually don't want to fill in informative missings with a mean or a median, but you may want to generate a separate feature from them. You may also find out that there are several kinds of missing data, being produced by different mechanisms. In this case, you might want to produce default values in different ways.
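As a small sketch of the "separate feature" idea for informative missings (the column names below are hypothetical), you can keep a missing-indicator flag instead of, or in addition to, imputing a value:

```
import numpy as np
import pandas as pd

df = pd.DataFrame({"cancellation_date": [np.nan, "2015-06-01", np.nan],
                   "monthly_fee": [20.0, np.nan, 35.0]})

# informative missing: the absence of a cancellation date means the contract is active
df["contract_active"] = df["cancellation_date"].isnull().astype(int)

# non-informative missing: fall back to a simple imputation such as the median
df["monthly_fee"] = df["monthly_fee"].fillna(df["monthly_fee"].median())
```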
How to fill in missing value of the mean of the other columns?
Using a transform as follows could work: ``` df["budget"] = df.groupby("genres")["budget"].transform(lambda x: x.fillna(x.mean())) ``` The mean calculation uses only the non-null values in its calculation. So the mean of each group's non-null values are imputed to that same group's null values. See also [this question on Stack Overflow](https://stackoverflow.com/questions/19966018/pandas-filling-missing-values-by-mean-in-each-group).
8331
1
8335
null
3
156
Suppose we collect data for 100,000 tosses of a fair coin and record "Heads" or "Tails" as the value for the attribute outcome, and also record the time, temperature and other irrelevant attributes. We know that the outcome of each toss is random, so there should be no way of predicting future unlabeled data instances. My question is: how do learning algorithms (support vector machines, for example) behave when we apply them to random data such as this?
Behaviour of Learning Algorithms on Random Data
CC BY-SA 3.0
null
2015-10-06T18:45:53.400
2015-10-06T21:02:32.477
null
null
11044
[ "machine-learning", "data-mining", "classification", "predictive-modeling", "svm" ]
They will of course still learn some best decision boundary. We know it will be meaningless, but there will still be better and best coefficients for the algorithm to learn when fitting to this particular instance of data from this random process. It may produce better than 50% accuracy on the data set, but of course this is purely due to overfitting whatever the data happens to be. It will not predict future outcomes with more than 50% accuracy.
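A quick sketch (with made-up random features standing in for time, temperature, etc.) shows this behaviour: the model can fit the training tosses almost perfectly, yet stays near 50% on unseen tosses.

```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X = rng.rand(10000, 3)              # time, temperature, other irrelevant attributes
y = rng.randint(0, 2, 10000)        # fair coin: heads/tails

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier().fit(Xtr, ytr)

print(clf.score(Xtr, ytr))          # close to 1.0: memorising noise
print(clf.score(Xte, yte))          # close to 0.5: no real predictive power
```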
Choosing the correct learning algorithm
It seems you have a data set for one component where the component suffered a fixed number of failure modes. You want to find out which data (let's assume continuous in time, so, what time) correspond to what failure mode. In other words, you are doing "pattern recognition" in your failure data. Have you thought of using Self-Organizing Maps (SOM)? They are a sub-branch of artificial neural networks and have great capability in such problems. You should also consider that not all failure modes appear in the shape of a "peak" value, so only looking at peaks is not a very smart approach. It will most probably cover most of the failures, though there will be moments that you miss. SOM could take care of this too. Data pre-processing is done before the analysis. Be careful with normalization. You could easily miss the peak or valley points if you don't pay enough attention to normalization. Don't just use any code or normalization method you find online; test and check it with your data. For instance, some normalization could make all negative values positive, while a negative value may have an important meaning in your work. I assume you have another variable called "failure". Another approach I suggest is building a Neural Network (NN) model, which is very common. Use the three input variables you mentioned, and consider the "failure" variable as your target variable. Build the neural network and apply it to your data again. If the number of failures is small, the NN will be able to rebuild the normal behavior of your data (the NN here is called a normal behavior model). When you apply it to your input data, the NN model will detect any deviation that is not expected. MATLAB has very good support for both of these approaches.
8334
1
8366
null
2
1615
How do we find the maximum depth of a random forest if we know the number of features? This is needed for regularizing a random forest classifier.
finding maximum depth of random forest given the number of features
CC BY-SA 3.0
null
2015-10-06T21:01:20.730
2015-10-09T13:01:23.863
null
null
13155
[ "machine-learning", "random-forest" ]
The maximum depth of a forest is a parameter which you set yourself. If you're asking how to find the optimal depth of a tree given a set of features, then this is done through cross-validation. For example, create 5 random forests with 5 different tree depths and see which one performs best on the validation set.
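A hedged sketch of that cross-validation search with scikit-learn is below; the synthetic `X`, `y` stand in for your own features and labels, and the candidate depths are arbitrary.

```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)  # placeholder

search = GridSearchCV(
    RandomForestClassifier(n_estimators=100, random_state=0),
    param_grid={"max_depth": [3, 5, 10, 20, None]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)   # depth that performs best across the validation folds
```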
Why `max_features=n_features` does not make the Random Forest independent of number of trees?
Interesting puzzle indeed. First things first. The [DecisionTreeClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) has some stochastic behavior. For instance, the [splitter code](https://github.com/scikit-learn/scikit-learn/blob/14031f65d144e3966113d3daec836e443c6d7a5b/sklearn/tree/_splitter.pyx) iterates through the features at random:

```
f_j = rand_int(n_drawn_constants, f_i - n_found_constants, random_state)
```

Your data is small and comes from the same distribution. What this means is that you'll have a lot of identical purity scores depending on how iteration is done. If you (a) increase your data, or (b) make it more separable, you'll see the problem should ameliorate. To clarify: whether the algorithm computes the score for feature A first and then feature B and gets score N, or computes feature B first and then feature A and gets the same score N, you can see how each decision tree will be different, and have different scores during test, even if the training set is the same (100% if max_depth=None, of course). (You can confirm this.)

During my exploration of your question, I have produced the following code with my own implementation of a random forest. Since it took me some time, I figured I might as well paste it here. :) Seriously, it can be useful. You can try to disable `random_state` in my implementation to see what I mean.

```
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
import numpy as np

class MyRandomForestClassifier:
    def __init__(self, n_estimators):
        self.n_estimators = n_estimators

    def fit(self, X, y):
        self.trees = [DecisionTreeClassifier(random_state=1).fit(X, y)
                      for _ in range(self.n_estimators)]
        return self

    def predict(self, X):
        yp = [tree.predict(X) for tree in self.trees]
        return ((np.sum(yp, 0) / len(self.trees)) > 0.5).astype(int)

    def score(self, X, y):
        return accuracy_score(y, self.predict(X))

for alpha in (1, 0.1, 0.01):
    np.random.seed(1)
    print('# alpha: %s' % str(alpha))
    N = 1000
    X = np.random.random((N, 10))
    y = np.r_[np.zeros(N//2, int), np.ones(N//2, int)]
    X[y == 1] = X[y == 1]*alpha
    Xtr, Xts, ytr, yts = train_test_split(X, y)

    print('## sklearn forest')
    for n_estimators in (1, 10, 100, 200, 500):
        m = RandomForestClassifier(
            n_estimators, max_features=None, bootstrap=False)
        m.fit(Xtr, ytr)
        print('%3d: %.4f' % (n_estimators, m.score(Xts, yts)))

    print('## my forest')
    for n_estimators in (1, 10, 100, 200, 500):
        m = MyRandomForestClassifier(n_estimators)
        m.fit(Xtr, ytr)
        print('%3d: %.4f' % (n_estimators, m.score(Xts, yts)))
    print()
```

Summary: each `DecisionTreeClassifier` is stochastic, and data such as yours, which is small and comes from the same distribution, is bound to produce slightly different trees, even if the random forest itself is deterministic. You can fix this by passing the same seed to each `DecisionTreeClassifier`, which you can do using `random_state=something`. `RandomForestClassifier` also has a `random_state` parameter which it passes along to each `DecisionTreeClassifier`. (This is slightly incorrect, see the edit.)

EDIT2: While this removes the stochasticity component of the training, the decision trees would still be different. The thing is that sklearn ensembles generate a new random seed for each child based on the random state they are given. They do not pass along the same `random_state`.
You can see this is the case by checking the `_set_random_states` method from the ensemble base module, in particular [this line](https://github.com/scikit-learn/scikit-learn/blob/a24c8b464d094d2c468a16ea9f8bf8d42d949f84/sklearn/ensemble/base.py#L549), which propagates the `random_state` across the ensembles' children.
8339
1
8346
null
6
2324
I used following classifiers along with their accuracies: - Random forest - 85 % - SVM - 78 % - Adaboost - 82% - Logistic regression - 80% When I used voting from above classifiers for final classification, I got lesser accuracy than the case when I used Random forest alone. How is this possible? All classifiers are giving more or less same accuracies when used individually, then how does Random Forest outperform their combined result ?
Voting combined results from different classifiers gave bad accuracy
CC BY-SA 3.0
null
2015-10-06T23:08:51.327
2015-10-29T09:11:50.207
2015-10-29T08:10:54.977
5177
13155
[ "machine-learning", "classification", "logistic-regression", "random-forest" ]
The approach you are considering is similar to a multi-class SVM or a one-vs-the-rest approach. And here is how I describe the problem.

The support vector machine, for example, is fundamentally a two-class classifier. In practice, however, we often have to tackle problems involving K > 2 classes. Various methods have therefore been proposed for combining multiple two-class SVMs in order to build a multi-class classifier.

One commonly used approach (Vapnik, 1998) is to construct K separate SVMs, in which the kth model y_k(x) is trained using the data from class C_k as the positive examples and the data from the remaining K − 1 classes as the negative examples. This is known as the one-versus-the-rest approach, where:

```
y(x) = max_k y_k(x)
```

Unfortunately, this heuristic approach suffers from the problem that the different classifiers were trained on different tasks, and there is no guarantee that the real-valued quantities y_k(x) for different classifiers will have appropriate scales.

Another problem with the one-versus-the-rest approach is that the training sets are imbalanced. For instance, if we have ten classes each with equal numbers of training data points, then the individual classifiers are trained on data sets comprising 90% negative examples and only 10% positive examples, and the symmetry of the original problem is lost. Therefore, you got your bad accuracy.

PS: Accuracy, in most cases, is not a good measure for evaluating a classifier model.

References:

- Vapnik, V. - Statistical Learning Theory. Wiley-Interscience, New York.
- Christopher M. Bishop - Pattern Recognition and Machine Learning.
Different result of classification with same classifier and same input parameters
From [sklearns random forest documentation](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html): > random_state int, RandomState instance or None, default=None Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features). See Glossary for details. Each time you re-run this with `random_state = None` it runs different models. Set random_state to `0` (or any number) and see consistent results.
8357
1
8368
null
10
1243
I understand how a Hidden Markov Model is used in genomic sequences, such as finding a gene. But I don't understand how to come up with a particular Markov model. I mean, how many states should the model have? How many possible transitions? Should the model have a loop? How would they know that their model is optimal? Do they imagine, say 10 different models, benchmark those 10 models and publish the best one?
How do scientists come up with the correct Hidden Markov Model parameters and topology to use?
CC BY-SA 4.0
null
2015-10-09T00:02:34.463
2019-09-28T07:36:19.890
2019-09-28T07:36:19.890
14713
9123
[ "machine-learning", "model-selection", "hyperparameter", "markov" ]
I'm familiar with three main approaches:

- A priori. You might know that there are four base pairs to pick from, and so allow the HMM to have four states. Or you might know that English has 44 phonemes, and so have 44 states for the hidden phoneme layer in a voice recognition model.
- Estimation. The number of states can often be estimated beforehand, perhaps by simple clustering on the observed features of the HMM. If the HMM transition matrix is triangular (which is often the case in failure prediction), the number of states determines the shape of the distribution of total time from the start state to the end state.
- Optimization. Like you suggest, many models can be created and fit and the best model selected. One could also adapt the methodology that learns the HMM to allow the model to add or discard states as needed.
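For the optimization route, a hedged sketch using the `hmmlearn` library is below; the observations are simulated, the candidate state counts are arbitrary, and in practice you would penalise the log-likelihood (e.g. with BIC or AIC) rather than compare it directly.

```
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 1))             # stand-in for your observed sequence

scores = {}
for n_states in range(2, 7):              # candidate model sizes
    model = GaussianHMM(n_components=n_states, n_iter=100, random_state=0).fit(X)
    scores[n_states] = model.score(X)     # log-likelihood of the data under the model

print(max(scores, key=scores.get))        # best-scoring number of hidden states
```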
How do I represent a hidden markov model in data structure?
With 29 states and 841 possible transitions to track whilst reading a file with 2000 entries (word, tag), then you should not be experiencing a speed problem when using a dictionary of dictionaries. Assuming your data structure as described called `transition_counts`, and receiving data in pairs, `(this_pos, next_pos)` then running 2000 times: ``` transition_counts[this_pos][next_pos] += 1 ``` takes only a fraction of a second. This is similar for code that calculates $p(POS_{t+1}|POS_t)$: ``` total_from_pos_t = sum(transition_counts[pos_t].values()) prob_pos_tplus_one = transition_counts[pos_t][pos_tplus_one] / total_from_pos_t ``` This is very fast. Your problem is not with the representation.
8361
1
8362
null
3
1706
What is the difference between support vector machine and Gaussian mixture model classifiers?
What is the difference between SVM and GMM classifier
CC BY-SA 3.0
null
2015-10-09T06:11:18.740
2015-10-09T10:54:53.450
2015-10-09T10:54:53.450
11097
13236
[ "machine-learning" ]
A Gaussian mixture model is a special case of a mixture distribution, which is a simple way of combining probability distributions. The SVM, however, does not make any distributional assumptions. It is just a function which depends on the distance of a data point from another point/plane, so there are absolutely no probabilistic assumptions.
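As a hedged, minimal illustration of that difference (synthetic data, default settings): a Gaussian mixture gives you probabilities under its fitted distributions, while an SVM only gives you signed distances to its decision boundary.

```
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.predict_proba(X[:3]))      # probabilistic: responsibilities per Gaussian

svm = SVC(kernel="linear").fit(X, y)
print(svm.decision_function(X[:3]))  # geometric: signed distance to the hyperplane
```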
What is the difference between SVM and logistic regression?
Both logistic regression and SVM are linear models under the hood, and both implement a linear classification rule: $$f_{\mathbf{w},b}(\mathbf{x}) = \mathrm{sign}(\mathbf{w}^T \mathbf{x} + b)$$ Note that I am regarding the "primal", linear form of the SVM here. In both cases the parameters $\mathbf{w}$ and $b$ are estimated by minimizing a certain function, and, as you correctly noted, the core difference between the models boils down to the use of different optimization objectives. For logistic regression: $$(\mathbf{w}, b) = \mathrm{argmin}_{\mathbf{w},b} \sum_i \log(1+e^{-z_i}),$$ where $z_i = y_if_{\mathbf{w},b}(\mathbf{x}_i)$. For SVM: $$(\mathbf{w}, b) = \mathrm{argmin}_{\mathbf{w},b} \sum_i (1-z_i)_+ + \frac{1}{2C}\Vert \mathbf{w} \Vert^2$$ Note that the regularization term $\Vert \mathbf{w} \Vert^2$ may just as well be added to the logistic regression objective - this will result in regularized logistic regression. You do not have to limit yourself to $\ell_2$-norm as the regularization term. Replace it with $\Vert \mathbf{w} \Vert_1$ in the SVM objective, and you will get $\ell_1$-SVM. Add both $\ell_1$ and $\ell_2$ regularizers to get the "[elastic net regularization](https://en.wikipedia.org/wiki/Elastic_net_regularization)". In fact, feel free to pick your favourite loss, add your favourite regularizer, and voila - help yourself to a freshly baked machine learning algorithm. This is not a coincidence. Any machine learning modeling problem can be phrased as the task of finding a probabilistic model $M$ which describes a given dataset $D$ sufficiently well. One general method for solving such a task is the technique of maximum a-posteriori (MAP) estimation, which suggests you should always choose the most probable model given the data: $$M^* = \mathrm{argmax}_M P(M|D).$$ Using the Bayes rule and remembering that $P(D)$ is constant when the data is fixed: \begin{align*} \mathrm{argmax}_M P(M|D) &= \mathrm{argmax}_M \frac{P(D|M)P(M)}{P(D)} \\ &= \mathrm{argmax}_M P(D|M)P(M) \\ &= \mathrm{argmax}_M \log P(D|M)P(M) \\ &= \mathrm{argmax}_M \log P(D|M) + \log P(M) \\ &= \mathrm{argmin}_M (-\log P(D|M)) + (-\log P(M)) \end{align*} Observe how the loss turns out to be just another name for the (minus) log-likelihood of the data (under the chosen model) and the regularization penalty is the log-prior of the model. For example, the familiar $\ell_2$-penalty is just the minus logarithm of the Gaussian prior on the parameters: $$ -\log\left((2\pi)^{-m/2}e^{-\frac{1}{2\sigma^2}\Vert \mathbf{w} \Vert^2}\right) = \mathrm{const} + \frac{1}{2\sigma^2}\Vert \mathbf{w} \Vert^2$$ Hence, another way to describe the difference between SVM and logistic regression (or any other model), is that these two postulate different probabilitic models for the data. In logistic regression the data likelihood is given via the Bernoulli distribution (with $p$=sigmoid), while the model prior is uniform (or simply ignored). In SVM the data likelihood is modeled via some $\mathrm{exp}(-\mathrm{hinge})$ distribution (not sure it even has a name, but I hope you get the idea that undoing the minus-logarithm would always bring you back to $P(D|M)$, up to a constant), and the model prior is the Gaussian. In practice, the two models have different properties, of course. For example, SVM has sparse dual representations, which makes it possible to kernelize it efficiently. 
Logistic regression, on the other hand, is usually [well-calibrated](https://stats.stackexchange.com/questions/208867/why-does-logistic-regression-produce-well-calibrated-models) (which is not the case with SVM). Hence, you choose the model based on your needs (or, if you are unsure, on whatever cross-validation tells you).
8369
1
8372
null
1
67
In the data, there are 355 observations including one continuous dependent variable (Y: ranges from 15-55) and 12 independent variables (continuous, categorical, and ordinal). The X1 (2 levels) and X6 (3 levels) are considered as categorical variables. Here are some questions that I have: - Can I assume that all the coefficients (except X1 and X6 which are categorical) are linear with respect to Y? - Can I consider X5 as continuous variable; however, it is ordinal and ranges from (1-7)? - Can I get the X7 (year) as continuous variable; however, it’s ordinal and rages from 2002-2006 (In fact, year of data per se does not improve the response; it is the other factors occurring in the same time period which result in improvements and we don’t know those factors), does this approach seem logical?. - In general if I use different transformations on independent variables such as log, squared, square root, and inverse, do I need to standardize the data also? Here is the scatter plot: [](https://i.stack.imgur.com/GYRpa.png) Any feedback and insights would be highly appreciated. Thank you
Can I consider this pattern of data as a linear and use parametric multiple linear regression?
CC BY-SA 3.0
null
2015-10-09T16:18:05.393
2015-10-11T00:17:27.447
2015-10-09T20:16:04.263
12867
12867
[ "data-mining" ]
I don't think "can" is the right question to ask; it's not going to give you a syntax error. The right question is "what could go wrong?". Any modeling technique will have assumptions that may be broken, and knowing how those assumptions impact the results will help you know what to look for (and how much to care when those assumptions are broken). - The best test of whether or not linearity is appropriate is whether the residuals are white or structured. For example, it looks like X9 might have a nonlinear relationship with Y. But that might be an artifact of the interaction between X9 and other variables, especially categorical variables. Fit your full model, then plot the residuals against X9 and see what it looks like. - Treating it as continuous won't cause serious problems, but you might want to think about what this implies. Is the relationship between 1 and 2 in the same direction and half the strength as the relationship between 2 and 4? If not, you might want to transform this to a scale where you do think the differences are linear. - Same as 2, except it's even more reasonable to see time as linear. - Standardization is not necessary for most linear regression techniques, as they contain their own standardization. The primary exception is techniques that use regularization, where the scale of the parameters is relevant. It's also worth pointing out that multivariate linear relationships, while they can capture general trends well, are very poor at capturing logical trends. For example, looking at X3 and X4, it could very well be that there are rules like Y>X3 and Y>X4 in place, which is hinted at but not captured by linear regression.
Is linear regression suitable for these data?
You can employ the linear regression algorithm even for categorical data. The point is whether your data is learnable or not. For instance, take a look at your data and see whether an expert can really find the output by looking at the input vector. If that is possible, your task can be learnt using a linear regression method. About linearity: the point is that linear regression can also learn nonlinear mappings. You just have to provide enough higher-order polynomials of the current feature space you have, which is not an easy task. For instance, you can expand your current feature space by adding the square of each feature. You will observe that it may perform better than the plain case if your mapping is not linear, but you may still have error. Consequently, you have to supply more polynomial features, but you do not know which ones to use. An alternative to linear regression which does not need extra features is the multi-layer perceptron (MLP). You can simply use one; it can learn nonlinear mappings. You can take a look at the official [SKlearn](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html) page for applying them. Furthermore, you can take a look [here](https://scikit-learn.org/stable/modules/neural_networks_supervised.html) for applying them.
8390
1
8394
null
9
6072
I am trying to understand sentiment analysis and how to apply it in any language (R, Python, etc.). Is there a good place on the internet with a tutorial I can follow? I googled, but the results I found were more theory than tutorial. I want both theory and practical examples.
Sentiment Analysis Tutorial
CC BY-SA 3.0
null
2015-10-12T03:31:10.867
2017-05-10T03:57:59.487
2017-05-10T03:57:59.487
31513
3550
[ "machine-learning", "reference-request", "sentiment-analysis" ]
The [Stanford NLP course on Coursera](https://www.coursera.org/course/nlp) covers Sentiment Analysis in [week 3](https://class.coursera.org/nlp/lecture/preview): - [What is Sentiment Analysis?](https://class.coursera.org/nlp/lecture/31) - [Sentiment Analysis: A baseline algorithm](https://class.coursera.org/nlp/lecture/145) - [Sentiment Lexicons](https://class.coursera.org/nlp/lecture/35) - [Learning Sentiment Lexicons](https://class.coursera.org/nlp/lecture/144) - [Other Sentiment Tasks](https://class.coursera.org/nlp/lecture/33) For coding tutorials see: - Stream Hacker's NLP tutorials - Basic Sentiment Analysis with Python - Andy Bromberg's Sentiment Analysis tutorials - Laurent Luce's Sentiment Analysis tutorials These are really basic, so their performance will not be great in all cases.
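To complement those resources, here is a minimal bag-of-words baseline in the spirit of the linked coding tutorials; the tiny inline dataset is purely illustrative and would be replaced by a real labelled corpus.

```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labelled examples standing in for a real sentiment corpus.
texts = ["I loved this movie", "What a waste of time",
         "Absolutely brilliant acting", "Terrible plot and boring dialogue"]
labels = ["pos", "neg", "pos", "neg"]

# Bag-of-words (unigrams + bigrams) fed into a Naive Bayes classifier.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(texts, labels)

print(clf.predict(["boring waste of a brilliant cast"]))  # prediction on an unseen sentence
```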
Build a sentiment model from scratch
What you're describing is indeed the traditional approach for building a sentiment analysis system, so I'd say it looks like a reasonable approach to me. I'm not up to date with the sentiment analysis task at all, but I think it would be worth studying the state of the art for several reasons: - There might be more recent, better approaches - There might be datasets in the languages you're interested in, and if there is that could save you a lot of time. Check if there are any shared tasks about this, they often provide annotated datasets.
8395
1
8396
null
3
430
I have historical transaction data from an e-shop. I want to build a prediction model that tells me whether a specific user will buy with or without a discount, so I can make targeted offers. The idea is: - If a user will buy at the regular price, do not make an offer. - If a user will not buy at the regular price, check whether he/she would buy with an offer. This way I avoid making an offer to someone who would have bought at the regular price anyway. I am still brainstorming how to implement steps 1 and 2. Should I create two separate models, one for 1) and one for 2), or should I combine both into a single prediction model?
Predict which user will buy with an offer - discount
CC BY-SA 3.0
null
2015-10-12T06:36:53.497
2015-11-20T13:16:08.807
null
null
201
[ "predictive-modeling" ]
You can use decision trees as a single model that covers both sets of users. A good start would be to first [read up](https://en.wikipedia.org/wiki/Decision_tree) on decision trees and their applications. You can include the offer as a decision node (a boolean in this case), so that `buying with offer` and `buying without offer` become the decision criteria. You can in fact go further and encode the offer value itself, for example splitting on `offer > 10%` versus `offer < 10%`. A minimal sketch follows.
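A minimal sketch of that idea with scikit-learn; the feature names and the toy data are hypothetical placeholders for real transaction history.

```
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Each row is one historical purchase opportunity; offer_pct = 0 means regular price.
df = pd.DataFrame({
    "past_purchases":   [5, 0, 2, 7, 1, 3],
    "avg_basket_value": [80, 15, 40, 120, 20, 55],
    "offer_pct":        [0, 10, 0, 0, 15, 10],
    "bought":           [1, 1, 0, 1, 0, 1],   # target: did the user buy?
})

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(df[["past_purchases", "avg_basket_value", "offer_pct"]], df["bought"])

# Score the same user under "no offer" vs. "10% offer" to decide whether an offer is needed.
user = pd.DataFrame({"past_purchases": [2, 2],
                     "avg_basket_value": [40, 40],
                     "offer_pct": [0, 10]})
print(tree.predict_proba(user)[:, 1])  # purchase probability in each scenario
```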
How to predict user next purchase items
First of all, you have to realize that these kinds of problems have a lot of noise relative to signal, because predicting what someone will buy from a very small window of information is difficult. That said, you are throwing away a lot of information with your current approach. The temporal aspects carry a ton of information, for example the sequence in which items were bought. While this is more complicated than what you are describing now, you could look into recurrent neural networks: feed the purchase history up to the point of prediction as a sequence and predict the next item as a softmax classification (a minimal sketch is below). Whether this is feasible depends on how many products you offer. Another advantage is that so-called 'out-of-core' training is relatively easy with neural networks, since they are trained iteratively on batches. Multi-label output is also clean: you can simply add more output labels at the end of your graph if necessary.
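A rough sketch of the sequence-to-next-item idea in Keras; the vocabulary size, sequence length, and layer sizes are illustrative assumptions, and the random arrays only show the expected input shapes.

```
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_items, seq_len = 1000, 20  # assumed catalogue size and history window

model = keras.Sequential([
    layers.Embedding(input_dim=n_items, output_dim=64),  # item id -> dense vector
    layers.LSTM(128),                                     # summarise the purchase history
    layers.Dense(n_items, activation="softmax"),          # distribution over the next item
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Placeholder data: each row is a sequence of item ids, the target is the next item bought.
X = np.random.randint(0, n_items, size=(256, seq_len))
y = np.random.randint(0, n_items, size=(256,))
model.fit(X, y, epochs=1, batch_size=32)
```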
8426
1
8429
null
8
10116
I am working on a recommendation engine, and I have chosen to use SciPy's cosine distance as a way of comparing items. I have two vectors: ``` a = [2.7654870801855078, 0.35995355443076027, 0.016221679989074141, -0.012664358453398751, 0.0036888812311235068] ``` and ``` b = [-6.2588482809118942, -0.88952297609194686, 0.017336984676103874, -0.0054928004763216964, 0.011122959185936367] ``` Running the following code will produce an output of ~1.999: ``` from scipy.spatial import distance print(distance.cosine(a,b)) ``` Is there something wrong with my input values? Anyone know why I am getting a result of >1?
Cosine Distance > 1 in scipy
CC BY-SA 3.0
null
2015-10-13T22:23:39.020
2015-10-15T01:10:16.887
2015-10-15T01:10:16.887
11097
13385
[ "python", "distance", "cosine-distance" ]
The cosine similarity between two vectors u and v is [](https://i.stack.imgur.com/UmTFw.png), i.e. cos(u, v) = (u · v) / (||u|| ||v||). The `scipy.spatial.distance.cosine` function, however, returns the cosine *distance* [](https://i.stack.imgur.com/KLk4n.png), i.e. 1 - cos(u, v), which lies in the range [0, 2]. For your vectors the cosine similarity is about -0.9998, so the distance is 1 - (-0.9998) ≈ 1.9998. Nothing is wrong with your inputs: the two vectors simply point in nearly opposite directions, which signifies complete dissimilarity.
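A quick numerical check of this, using the vectors from the question:

```
import numpy as np
from scipy.spatial import distance

a = np.array([2.7654870801855078, 0.35995355443076027, 0.016221679989074141,
              -0.012664358453398751, 0.0036888812311235068])
b = np.array([-6.2588482809118942, -0.88952297609194686, 0.017336984676103874,
              -0.0054928004763216964, 0.011122959185936367])

cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos_sim)                 # ~ -0.9998 (nearly opposite directions)
print(1 - cos_sim)             # ~ 1.9998, matching ...
print(distance.cosine(a, b))   # ... what scipy returns
```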
Cosine similarity versus dot product as distance metrics
Think geometrically. Cosine similarity only cares about angle difference, while dot product cares about angle and magnitude. If you normalize your data to have the same magnitude, the two are indistinguishable. Sometimes it is desirable to ignore the magnitude, hence cosine similarity is nice, but if magnitude plays a role, dot product would be better as a similarity measure. Note that neither of them is a "distance metric".
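A small illustration of that equivalence, using two arbitrary example vectors: after L2-normalisation, the dot product and the cosine similarity coincide.

```
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, 2.0])

cos_sim = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
u_hat, v_hat = u / np.linalg.norm(u), v / np.linalg.norm(v)

print(cos_sim)        # cosine similarity of the raw vectors
print(u_hat @ v_hat)  # dot product of the normalised vectors -- identical
```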